\section{Introduction}
\IEEEPARstart{D}{eep} learning has dramatically advanced talker-independent speaker separation in the past decade~\cite{WDLreview}, especially since deep clustering~\cite{Hershey2016} and permutation invariant training (PIT)~\cite{Kolbak2017} successfully addressed the label permutation problem.
Early studies train DNNs for magnitude estimation, with or without estimating phase~\cite{Isik2016, Wang2018AlternativeObejectives, WZQe2eMISI2018, Wang2019Trigonometric}.
Subsequent studies carry out separation in the complex T-F domain via complex ratio masking~\cite{Liu2019DeepCASA} or in the time domain via TasNets~\cite{Luo2017TasNet, Luo2018TasNetRealTime, Luo2019}.
Since 2019, Conv-TasNet and its variants~\cite{Luo2019, Lam2020MBT, Shi2019FurcaNeXt, Tzinis2020, Luo2020, Nachmani2020, Chen2020DPTnet, Zhu2021, Subakan2021, Lam2021, Zeghidour2020, Qian2022, Rixen2022, Rixen2022QDPN}, featuring advanced DNN architectures with learned encoder-decoder modules operating on very short windows of signals for end-to-end masking-based separation, have gradually become the most popular and dominant approach for speaker separation in anechoic conditions.
Their performance on the standard WSJ0-2mix benchmark~\cite{Hershey2016} has reached an impressive SI-SDR improvement (SI-SDRi) of 22.1 dB~\cite{Rixen2022QDPN}.
In the meantime, T-F domain models, which usually use larger window sizes and hop sizes, have been largely under-explored and under-represented in speaker separation studies in anechoic conditions.
Recently, TFPSNet~\cite{Yang2022TFPSNet}, claimed to operate in the complex T-F domain, reported a strong SI-SDRi of 21.1 dB on WSJ0-2mix, which is comparable to the top results achievable by modern time-domain models.
It leverages a modern dual-path architecture, following DPRNN~\cite{Luo2020} and DPTNet~\cite{Chen2020DPTnet}, but applies the architecture on the complex T-F spectrogram~\cite{Le2021DPCRN, Dang2022DPTFSNet} by using the transformer module proposed in DPTNet~\cite{Chen2020DPTnet} to model spectro-temporal information.
Although TFPSNet is claimed to operate in the T-F domain~\cite{Yang2022TFPSNet}, it closely follows the encoder-separator-decoder scheme~\cite{Luo2019} widely used in TasNets, and its performance, even with a modern DNN architecture, still falls behind that of contemporary time-domain models~\cite{Qian2022, Rixen2022, Rixen2022QDPN}.
In this context, for anechoic speaker separation this paper (and our preliminary study~\cite{Wang2022GridNet}) makes the following contributions to improve complex T-F domain approaches:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt]
\item We propose to use complex spectral mapping for speaker separation in anechoic conditions.
Complex spectral mapping~\cite{Williamson2016, Fu2017, Tan2020, Wang2020CSMDereverbJournal, Wang2020chime, Wang2020dMCCSMconference, Wang2020css, Wang2021LowDistortion, Wang2021FCPjournal, Tan2022NSF, Wang2021FCPwaspaa}, which predicts target RI components based on the RI components of input signals, has shown strong potential on noisy-reverberant speech separation when combined with modern DNN architectures and loss functions, exhibiting strong robustness to noise and reverberation in both single- and multi-microphone conditions.
Its potential on anechoic speaker separation, however, has not been studied, especially in an era when time-domain models, which perform masking in a learned filterbank domain, have become so popular and dominant on this task.
This paper is the first study to explore this direction for monaural, anechoic speaker separation.
\item We propose a novel DNN architecture named TF-GridNet for speech separation.
It operates in the complex T-F domain to model speech spectrograms in a grid-like manner.
Based on an improved TFPSNet~\cite{Yang2022TFPSNet}, we add a cross-frame self-attention path to dual-path models to leverage global information across frames, resulting in a multi-path model.
\item Building upon the SI-SDR loss~\cite{Luo2019, LeRoux2019}, we propose to add a novel loss term to encourage estimated sources to add up to the mixture. We also combine this loss term with loss functions other than SI-SDR.
\end{itemize}
Without using any data augmentation and dynamic mixing, on WSJ0-2mix~\cite{Hershey2016} our best model obtains 23.5 dB SI-SDRi, surpassing the previous best (at 22.1 dB)~\cite{Rixen2022QDPN} by a large margin and breaking a theoretical upper bound (at 23.1 dB)~\cite{Lutati2022}.
However, our preliminary study~\cite{Wang2022GridNet} does not show the potential of TF-GridNet for speech separation in noisy-reverberant conditions and it lacks an extension to multi-channel conditions.
To address the first problem, we evaluate TF-GridNet on monaural reverberant speaker separation using the SMS-WSJ dataset~\cite{Drude2019} and monaural noisy-reverberant speaker separation using WHAMR!~\cite{Maciejewski2020}.
To address the second problem, we integrate TF-GridNet with a MISO-BF-MISO approach~\cite{Wang2020dMCCSMconference, Wang2020css, Wang2021LowDistortion}, which sandwiches a beamformer with two multi-channel-input single-channel-output (MISO) DNNs, with the beamformer computed based on the output of the first DNN and the second DNN performing post-filtering.
In our recent work~\cite{Lu2022}, we follow this MISO-BF-MISO approach and stack two TCN-DenseUNets with a novel multi-channel multi-frame Wiener filter (MFWF) in between.
The TCN-DenseUNet~\cite{Wang2020dMCCSMconference, Wang2020CSMDereverbJournal, Wang2020chime, Wang2020css, Wang2021LowDistortion, Wang2021FCPjournal, Tan2022NSF, Wang2021FCPwaspaa} is a strong, representative model adopted in many previous complex spectral mapping studies, and the MFWF is computed based on both DNN-estimated target magnitude and phase, and leverages both future and past frames for sub-band linear filtering.
This solution won the recent L3DAS22 3D speech enhancement challenge~\cite{Guizzo2022L3DAS}, which attracted 17 submissions.
In this paper, a major difference from~\cite{Lu2022} is that we replace the TCN-DenseUNet with the newly-proposed TF-GridNet by modifying TF-GridNet for multi-microphone complex spectral mapping~\cite{Wang2020dMCCSMconference, Wang2020css, Wang2021LowDistortion}, and we observe large improvement over~\cite{Lu2022} and many other strong multi-channel systems.
Both TF-GridNet and MISO-BF-MISO can be understood from the perspective of integrated full- and sub-band modeling, either inside TF-GridNet or outside through beamforming and post-filtering.
State-of-the-art performance is achieved on four major speech separation tasks, including reverberant speaker separation, noisy-reverberant speaker separation, speech dereverberation and noisy-reverberant speech enhancement, showing the effectiveness of the proposed algorithms at single- and multi-channel separation.
In our experiments, for each task we strive to use public datasets with strong results published by previous studies.
A sound demo\footnote{See \url{https://zqwang7.github.io/demos/TF-GridNet-demo/index.html}.} is available online.
\section{System Overview}\label{systemoverview}
\subsection{Physical Model and Objective}
For an $N$-sample, $C$-speaker mixture signal recorded by a $P$-microphone array in a noisy-reverberant setting, at sample $n$ the physical model describing the relationship between the mixture $\mathbf{y}[n] \in {\mathbb R}^P$, reverberant non-target signals $\mathbf{v}[n]\in {\mathbb R}^P$, and the dry source signal $(o(c))[n]\in {\mathbb R}$, direct-path signal $(\mathbf{s}(c))[n]\in {\mathbb R}^P$ and reverberation $(\mathbf{h}(c))[n]\in {\mathbb R}^P$ of speaker $c$ can be formulated in the time domain as
\begin{align}
\mathbf{y}&[n] = \sum\nolimits_{c=1}^C \Big(o(c) * \mathbf{r}(c)\Big)[n] + \mathbf{v}[n] \nonumber \\
&= \sum\nolimits_{c=1}^C \Big(\left(o(c) * \mathbf{r}^d(c)\right)[n] + \left(o(c) * \mathbf{r}^{e+l}(c)\right)[n]\Big) + \mathbf{v}[n] \nonumber \\
&= \sum\nolimits_{c=1}^C \Big(\left(\mathbf{s}(c)\right)[n] + \left(\mathbf{h}(c)\right)[n]\Big) + \mathbf{v}[n], \label{eq:phymodel_time}
\end{align}
where $*$ is the linear convolution operator, and the $P$-channel room impulse response (RIR) of speaker $c$, $\mathbf{r}(c)$, can be decomposed into the RIR of the direct-path signal, $\mathbf{r}^d(c)$, and that of early reflections and late reverberation combined, $\mathbf{r}^{e+l}(c)$.
In the short-time Fourier transform (STFT) domain, the physical model is formulated as
\begin{align}
\mathbf{Y}(t,f) &= \sum\nolimits_{c=1}^C \Big(\mathbf{S}(c, t,f)+\mathbf{H}(c,t,f)\Big)+\mathbf{V}(t,f), \label{eq:phymodel_freq}
\end{align}
where $t$ indexes $T$ frames, $f$ indexes $F$ frequencies, and $\mathbf{Y}(t,f)$, $\mathbf{V}(t,f)$, and $\mathbf{S}(c,t,f)$ and $\mathbf{H}(c,t,f)\in {\mathbb C}^P$ respectively denote the STFT vectors of the mixture, non-target signals, and the direct-path signal and reverberation of speaker $c$.
The corresponding spectrograms are denoted by $\mathbf{Y}$, $\mathbf{V}$, $\mathbf{S}(c)$, and $\mathbf{H}(c)$.
This formulation covers all the tasks we consider:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt]
\item For monaural, anechoic speaker separation, $C > 1$, $P=1$, and $\mathbf{H}$ and $\mathbf{V}$ are absent;
\item For reverberant speaker separation, $C > 1$ and $\mathbf{V}$ is a weak stationary noise (e.g., microphone sensor noise);
\item For noisy-reverberant speaker separation, $C > 1$ and $\mathbf{V}$ consists of challenging non-stationary noises;
\item For speech dereverberation, $C=1$ and $\mathbf{V}$ is a weak stationary noise;
\item For noisy-reverberant speech enhancement, $C=1$ and $\mathbf{V}$ contains challenging non-stationary noises.
\end{itemize}
Given a single- or multi-channel mixture, we aim at reconstructing the direct-path signal of each speaker at a reference microphone $q$ (i.e., $s_q(c)$).
This requires us to not only remove noise and reverberation but also separate the speakers if there are more than one.
For all the tasks, we assume that the maximum number of speakers in each mixture is known, and that the array geometry is fixed between training and testing.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{system_overview.pdf}
\vspace{-0.1cm}
\caption{System overview.}
\label{system_overview}
\vspace{-0.6cm}
\end{figure}
\begin{figure*}[h!]
\centering
\includegraphics[width=18cm]{GridNet_overview_combined.pdf}
\vspace{-0.1cm}
\caption{Proposed TF-GridNet based DNN$_2$.}
\label{GridNet_overview}
\vspace{-0.5cm}
\end{figure*}
\subsection{Approach Outline}
Our system (see Fig.~\ref{system_overview}) operates in the complex T-F domain.
It follows a two-DNN approach named MISO-BF-MISO~\cite{Wang2021LowDistortion, Wang2020css, Wang2021FCPjournal}, where DNN$_1$ first produces an initial estimate for each target source, the initial estimate is then used to compute a sub-band linear filter (in this paper a multi-frame Wiener filter) for each source, and DNN$_2$ takes in the mixture, the outputs of DNN$_1$, and the linear-filtering results for post-filtering.
In our experiments, DNN$_1$ and DNN$_2$ are trained sequentially rather than jointly.
After DNN$_1$ is trained, we use it to generate an initial estimate $\hat{S}_q^{(1)}(c)$ and compute a sub-band linear filtering result $\hat{S}_q^{\text{MFWF}}(c)$ for each speaker $c$, and feed them and $\mathbf{Y}$ to DNN$_2$ to further predict target speech (denoted as $\hat{S}_q^{(2)}(c)$).
The superscripts in $\hat{S}_q^{(1)}(c)$ and $\hat{S}_q^{(2)}(c)$ denote which of the two DNNs produces the estimate.
Following~\cite{Wang2020css, Wang2021FCPjournal, Wang2021FCPwaspaa}, for speaker separation DNN$_1$ is trained with utterance-wise PIT~\cite{Kolbak2017} but DNN$_2$ is trained in an enhancement way (i.e., predicting all the speakers but not using PIT), since the label-permutation problem has been addressed by DNN$_1$.
For monaural, anechoic speaker separation, we only train DNN$_1$, without using linear filtering and DNN$_2$.
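To make the data flow concrete, the following Python sketch summarizes the pipeline; all names are ours and purely illustrative, with \texttt{dnn1}, \texttt{dnn2} and \texttt{mfwf} standing for the trained modules and filter described in the following sections:
\begin{verbatim}
def separate(Y, dnn1, dnn2, mfwf):
    # Y: multi-channel mixture STFT. DNN_1 produces an initial
    # estimate per speaker; an MFWF is computed per source from
    # that estimate; DNN_2 post-filters given the mixture, the
    # DNN_1 outputs, and the MFWF outputs (see Fig. 1).
    S1 = dnn1(Y)
    S_mfwf = [mfwf(Y, s) for s in S1]
    return dnn2(Y, S1, S_mfwf)
\end{verbatim}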
\section{TF-GridNet}\label{proposedalgorithm}
Fig.~\ref{GridNet_overview} illustrates the proposed TF-GridNet for DNN$_2$.
DNN$_1$ has the same architecture but uses only $\mathbf{Y}$ as input.
Both DNNs are trained to perform complex spectral mapping~\cite{Williamson2016, Fu2017, Tan2020, Wang2020CSMDereverbJournal, Wang2020chime, Wang2020dMCCSMconference, Wang2021LowDistortion, Wang2020css, Wang2021FCPjournal, Tan2022NSF, Wang2021FCPwaspaa}, where the RI components of input signals are stacked as input features to predict the RI components of each speaker at the reference microphone $q$, i.e., $S_q(c)$.
Our system is non-causal.
We normalize the sample variance of each input
mixture to $1.0$ and use the same scaling factor to scale each target source before using them for training. %
This amounts to adjusting the volume of each input mixture to a similar level.
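As an illustration, this normalization can be sketched in PyTorch as follows (variable names are ours, not from any released code):
\begin{verbatim}
import torch

def normalize(y, sources):
    # y: (P, N) input mixture; sources: list of (N,) targets.
    # Scale the mixture to unit sample variance and apply the
    # same scaling factor to every target source.
    scale = 1.0 / y.std()
    return y * scale, [s * scale for s in sources]
\end{verbatim}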
In Fig.~\ref{GridNet_overview}, for each of the three real-valued input features (i.e., the mixture $\mathbf{Y}$ with shape $2P\times T\times F$, DNN$_1$'s outputs $\hat{S}_q^{(1)}(1),\dots,\hat{S}_q^{(1)}(C)$ with shape $2C\times T\times F$, and MFWF's outputs $\hat{S}_q^{\text{MFWF}}(1),\dots,\hat{S}_q^{\text{MFWF}}(C)$ with shape $2C\times T\times F$), we first use a two-dimensional (2D) convolution (Conv2D) with a $3\times 3$ kernel followed by global layer normalization (gLN)~\cite{Luo2019} to compute a $D$-dimensional embedding for each T-F unit, and then sum the T-F embeddings generated for the three input features, obtaining a tensor with shape $D\times T\times F$.
Next, we feed the tensor to $B$ TF-GridNet blocks, each consisting of an intra-frame full-band module, a sub-band temporal module, and a cross-frame self-attention module, to leverage spectral, spatial and temporal information to gradually make the T-F embeddings more discriminative for separation.
After that, a 2D deconvolution (Deconv2D) with $2C$ output channels and a $3\times 3$ kernel followed by linear units is used to obtain the predicted RI components for all the $C$ speakers, and inverse STFT (iSTFT) is applied for signal re-synthesis.
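A schematic PyTorch sketch of this encoder-blocks-decoder pipeline is given below; the blocks are \texttt{nn.Identity} placeholders to be filled in by the module sketches later in this section, and gLN is approximated with \texttt{nn.GroupNorm} over a single group (an assumption of ours, not the released code):
\begin{verbatim}
import torch.nn as nn

class TFGridNetSkeleton(nn.Module):
    def __init__(self, P, C, D=48, B=6):
        super().__init__()
        # one 3x3 Conv2D + gLN encoder per input feature
        # (mixture: 2P channels; DNN_1 and MFWF outputs: 2C each)
        self.enc = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, D, 3, padding=1),
                          nn.GroupNorm(1, D))
            for ch in (2 * P, 2 * C, 2 * C))
        self.blocks = nn.Sequential(*[nn.Identity()  # placeholders
                                      for _ in range(B)])
        # Deconv2D decoder: 2C channels = RI parts of C speakers
        self.dec = nn.ConvTranspose2d(D, 2 * C, 3, padding=1)

    def forward(self, feats):  # list of three (batch, ch, T, F)
        x = sum(e(f) for e, f in zip(self.enc, feats))
        return self.dec(self.blocks(x))
\end{verbatim}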
The rest of this section describes the three modules in each TF-GridNet block, and the loss functions.
To avoid confusion, in Table~\ref{summary_hyperparam} we list the hyper-parameters we will use to describe TF-GridNet.
\begin{table}[t]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{List of Hyper-Parameters of TF-GridNet.}}
\vspace{-0.2cm}
\label{summary_hyperparam}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{
cc
}
\toprule
Symbols & Description \\
\midrule
$D$ & Embedding dimension for each T-F unit \\
$B$ & Number of TF-GridNet blocks \\
\midrule
$I$ & Kernel size for Unfold and Deconv1D \\
$J$ & Stride size for Unfold and Deconv1D \\
$H$ & Number of hidden units of BLSTMs in each direction \\
\midrule
$L$ & Number of heads in self-attention \\
$E$ & \begin{tabular}{@{}c@{}}Number of output channels in point-wise Conv2D to obtain key and query\\tensors in self-attention\end{tabular} \\
\bottomrule
\end{tabular}
\vspace{-0.6cm}
\end{table}
\subsection{Intra-Frame Full-Band Module}
For the intra-frame module, we view the input tensor $R_b \in {\mathbb R}^{D\times T \times F}$ to the $b^{\text{th}}$ block as $T$ separate sequences, each with length $F$, and use a sequence model to capture the full-band spectral and spatial information within each frame.
In detail, we first use the \textit{torch.unfold} function~\cite{Paszke2019} with kernel size $I$ and stride $J$ to stack nearby embeddings at each step along frequency, after zero-padding the frequency dimension to $F'= \lceil \frac{F-I}{J} \rceil \times J + I$, and then apply layer normalization (LN) along the first dimension, i.e.,
\begin{align}\label{intra-frame-unfold-ln}
\dot{R}_b = \text{LN}\Big(\Big[&\text{Unfold}(R_b[:,t,:]),\nonumber \\
&\text{for}\,\,t=1,\dots,T\Big]\Big) \in {\mathbb R}^{(I\times D)\times T \times (\frac{F'-I}{J}+1)}.
\end{align}
We denote this order of operations as \textbf{Unfold-LN}.
An alternative is to first perform LN on $R_b$ and then zero-pad and stack nearby embeddings, i.e.,
\begin{align}\label{intra-frame-ln-unfold}
\dot{R}_b = \Big[\text{Un}&\text{fold}\Big(\text{LN}(R_b)[:,t,:]\Big),\nonumber \\
&\text{for}\,\,t=1,\dots,T\Big] \in {\mathbb R}^{(I\times D)\times T \times (\frac{F'-I}{J}+1)}.
\end{align}
We denote this order as \textbf{LN-Unfold}.
We point out that LN-Unfold uses fewer parameters than Unfold-LN, and, since the \textit{torch.unfold} function creates a view of the input tensor without allocating new memory, LN-Unfold consumes less memory when $I/J > 1$.
Note that our preliminary paper~\cite{Wang2022GridNet} uses Unfold-LN, and this paper proposes LN-Unfold, which leads to slightly better separation.
We then use a bi-directional long short-term memory (BLSTM) with $H$ units in each direction to model inter-frequency information within each frame: %
\begin{align}\label{intra-frame-blstm}
\ddot{R}_b = \Big[\text{BLSTM}\big(&\text{LN}(\dot{R}_b)[:,t,:]\big), \nonumber \\
&\text{for}\,\,t=1,\dots,T\Big]
\in {\mathbb R}^{2H\times T \times (\frac{F'-I}{J}+1)}.
\end{align}
Note that $J$ can be larger than one so that the sequence length and thus the amount of computation can be reduced.
Next, a one-dimensional deconvolution (Deconv1D) layer with kernel size $I$, stride $J$, input channel $2H$ and output channel $D$ (and without subsequent normalization and non-linearity) is applied to the hidden embeddings of the BLSTM: %
\begin{align}\label{intra-frame-deconv}
\dddot{R}_b = \Big[\text{Deconv1D}(&\ddot{R}_b[:,t,:]), \nonumber \\ &\text{for}\,\,t=1,\dots,T\Big] \in {\mathbb R}^{D\times T \times F'}.
\end{align}
After removing the zero padding, this tensor is added to the input tensor via a residual connection to produce the output tensor:
\begin{align}\label{intra-frame-residual}
U_b = \dddot{R}_b[:,:,:F] + R_b \in {\mathbb R}^{D\times T \times F}.
\end{align}
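For reference, an illustrative PyTorch sketch of this module in the LN-Unfold order of Eq.~(\ref{intra-frame-ln-unfold}), followed by Eqs.~(\ref{intra-frame-blstm})-(\ref{intra-frame-residual}), is given below; it is our own re-implementation sketch, not the released code:
\begin{verbatim}
import math
import torch
import torch.nn as nn
import torch.nn.functional as fn

class IntraFrameFullBand(nn.Module):
    # Illustrative sketch (LN-Unfold order): LN -> zero-pad and
    # unfold along frequency -> per-frame BLSTM -> Deconv1D ->
    # residual connection.
    def __init__(self, D, I=4, J=1, H=192):
        super().__init__()
        self.I, self.J = I, J
        self.norm = nn.LayerNorm(D)  # LN over the D channels
        self.blstm = nn.LSTM(I * D, H, batch_first=True,
                             bidirectional=True)
        self.deconv = nn.ConvTranspose1d(2 * H, D, I, stride=J)

    def forward(self, R):            # R: (batch, D, T, F)
        b, D, T, F = R.shape
        I, J = self.I, self.J
        Fp = math.ceil((F - I) / J) * J + I      # padded length F'
        x = self.norm(R.permute(0, 2, 3, 1))     # (b, T, F, D)
        x = fn.pad(x, (0, 0, 0, Fp - F))         # zero-pad frequency
        x = x.unfold(2, I, J)                    # (b, T, F'', D, I)
        x = x.reshape(b * T, x.shape[2], I * D)  # frames as batch
        x, _ = self.blstm(x)                     # (b*T, F'', 2H)
        x = self.deconv(x.transpose(1, 2))       # (b*T, D, F')
        x = x.reshape(b, T, D, Fp).permute(0, 2, 1, 3)
        return x[..., :F] + R                    # residual
\end{verbatim}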
\subsection{Sub-Band Temporal Module}\label{subband_description}
In the sub-band temporal module, the procedure is almost the same as that in the intra-frame full-band module.
The only difference is that the input tensor $U_b \in {\mathbb R}^{D\times T\times F}$ is viewed as $F$ separate sequences, each with length $T$, and a BLSTM is used to model the temporal information within each frequency.
The output tensor is denoted as $Z_b \in {\mathbb R}^{D\times T\times F}$.
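Since the two modules differ only in which axis is treated as the sequence dimension, one illustrative realization simply transposes the time and frequency axes and reuses the intra-frame sketch above:
\begin{verbatim}
class SubBandTemporal(nn.Module):
    # Swap T and F, run the same LN-Unfold/BLSTM/Deconv1D stack
    # (reusing the IntraFrameFullBand sketch), then swap back.
    def __init__(self, D, I=4, J=1, H=192):
        super().__init__()
        self.inner = IntraFrameFullBand(D, I, J, H)

    def forward(self, U):            # U: (batch, D, T, F)
        return self.inner(U.transpose(2, 3)).transpose(2, 3)
\end{verbatim}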
\subsection{Discussion on Full- and Sub-Band Modeling }\label{fullsubband_discussion}
In multi-channel conditions, performing sub-band modeling is a reasonable strategy to leverage spatial information afforded by multiple microphones.
The idea is that inter-microphone spatial patterns such as the inter-channel phase differences (IPD) do not change along time for sources that do not move within each utterance, while they usually change with frequency due to the linear phase structure of phase differences
and the effects of phase wrapping (see an example plot of IPD vs. frequency in anechoic conditions in Fig. 3 of~\cite{Wang2018combiningspectralspatial}).
This is partly the reason why many conventional beamforming~\cite{Gannot2017}, dereverberation~\cite{Nakatani2010} and spatial clustering~\cite{Haeb-Umbach2020} algorithms are performed separately within each frequency.
In light of this physical phenomenon, we believe that it intuitively makes sense to perform such a DNN-based sub-band modeling, as the inter-channel phase patterns important for supervised learning are stable and salient within each frequency for each source.
In addition, using a shared DNN block to separately model each sub-band is easier than using a DNN block to simultaneously model all the frequencies, as there are fewer variations to model.
This echoes the idea of weight sharing, a core concept in convolutional neural networks~\cite{Courville2016}. %
Similarly, in multi-microphone conditions the intra-frame full-band module described in the previous subsection could not only model full-band spectral patterns such as the harmonic structure along frequency but also model the gradual changes of inter-microphone phase patterns along frequency (see the helix structure of IPD along frequency in Fig. 3(c) of~\cite{Wang2018combiningspectralspatial}).
We emphasize that the pattern of such gradual changes along frequency exists at every frame where the target source (assumed non-moving) is active.
It is therefore reasonable to run the same BLSTM based full-band module at each frame to model such patterns.
Such a sub-band modeling approach can also better deal with reverberation.
Since reverberation time (T60) and reverberation patterns vary with frequency~\cite{A.P.Habets2018DereverbBook}, it is reasonable to use sub-band modules in TF-GridNet to separately model each frequency.
In a broader perspective, weighted prediction error (WPE)~\cite{Nakatani2010}, the most popular conventional algorithm for dereverberation, is also performed per frequency by computing a linear, inverse filter at each frequency (preferably with different numbers of filter taps at different frequencies~\cite{Nakatani2019ConvBeamformer}) to estimate late reverberation.
There are studies~\cite{Zhao2018cLSTMLateReverb} using a non-linear LSTM to mimic the linear, inverse filtering of WPE, but the LSTM is trained to model all the frequencies simultaneously rather than separately.
We believe that using sub-band DNN modules to mimic sub-band inverse filtering is likely better, because reverberation, at each frequency, can be approximated as a linear convolution of a sub-band filter and the anechoic signal, according to the narrow-band approximation property~\cite{Gannot2017, Wang2021FCPjournal} in the STFT domain.
There are earlier studies using DNNs to perform full-band and sub-band modeling~\cite{Zhou2022ShorteningTarget, Quan2022NarrowbandConformer, Hao2021FullSubNet, Chen2022FullSubBandplus}.
Some differences include: (1) they only perform sub-band modeling without full-band modeling~\cite{Zhou2022ShorteningTarget, Quan2022NarrowbandConformer}; and (2) they perform sub-band modeling followed by full-band modeling~\cite{Hao2021FullSubNet, Chen2022FullSubBandplus} but without iterative information flow from sub- to full-band modules and from full- to sub-band, while we stack multiple TF-GridNet blocks to enable such an information flow so that full- and sub-band modeling can be integrated.
\ZQHL{There are earlier studies~\cite{Yang2022TFPSNet, Xu2018GridLSTM} using LSTMs to model spectrograms along time and frequency in monaural anechoic speaker separation.
However, they do not reach very strong performance.
}
\subsection{Cross-Frame Self-Attention Module}
In the cross-frame self-attention module (shown in Fig.~\ref{GridNet_overview}), we first compute frame embeddings at each frame using the T-F embeddings within that frame, and then use full-utterance self-attention on these frame embeddings to model long-range context information.
The motivation is that the information flow between two T-F units needs to go through many steps in the intra-frame full-band and sub-band temporal BLSTMs, and the self-attention module enables each frame to directly attend to any frames of interest to allow for more direct information flow.
We follow the self-attention mechanism proposed in~\cite{Liu2020Attn, Pandey2021}, which is designed for U-Net based monaural music source separation and speech denoising.
Differently, we use multi-head attention instead of single-head and we use the self-attention mechanism with dual-path models rather than U-Net for single- and multi-microphone speech separation.
The self-attention module has $L$ heads.
In each head $l$, we apply point-wise Conv2D, PReLU, LN along the channel and frequency dimensions (denoted as cfLN), and reshape layers to respectively obtain 2D query $Q_l \in {\mathbb R}^{T\times (F\times E)}$, key $K_l \in {\mathbb R}^{T\times (F\times E)}$ and value $V_l \in {\mathbb R}^{T\times (F\times D/L)}$ tensors. %
The point-wise Conv2D layers for computing the query and key tensors have $E$ output channels, leading to $F \times E$-dimensional query and key vectors at each frame. %
Similarly, the point-wise Conv2D layer for computing the value tensor has $D/L$ output channels, leading to an $F \times D/L$-dimensional value vector at each frame.
All three point-wise Conv2D layers have $D$ input channels.
Following~\cite{Vaswani2017}, we compute the attention output $A_l \in {\mathbb R}^{T\times (F\times D/L)}$ by:
\begin{align}%
A_l = \text{softmax}(\frac{Q_l K_l^{{\mathsf T}}}{\sqrt{F\times E}})V_l.
\end{align}
We then concatenate the attention outputs of all the $L$ heads along the second dimension, reshape the result back to $D\times T\times F$, and apply a point-wise Conv2D with $D$ input and $D$ output channels, followed by a PReLU and a cfLN, to aggregate cross-head information.
Next, we add it to the input tensor $Z_b$ via a residual connection to obtain the output tensor $R_{b+1}$, which is fed to the next TF-GridNet block.
This self-attention mechanism only adds a negligible number of parameters by using point-wise Conv2D layers.
It operates at the frame level and the memory cost on attention matrices is $\mathcal{O}(B\times L \times T^2)$.
In comparison, TFPSNet~\cite{Yang2022TFPSNet} uses multi-head self-attention in each path-scanning module, and the memory cost on attention matrices is $\mathcal{O}\big(B\times L \times F \times T^2\big) + \mathcal{O}\big(B\times L \times T \times F^2\big)$, which is much higher.
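An illustrative PyTorch sketch of this module is given below; cfLN is realized by moving the channel and frequency dimensions to the trailing positions before \texttt{nn.LayerNorm}, an implementation choice of ours rather than the released code:
\begin{verbatim}
import torch
import torch.nn as nn

class CFLayerNorm(nn.Module):
    # LN over the channel and frequency dims of (b, C, T, F).
    def __init__(self, C, F):
        super().__init__()
        self.ln = nn.LayerNorm((C, F))

    def forward(self, x):
        return self.ln(x.transpose(1, 2)).transpose(1, 2)

class CrossFrameSelfAttention(nn.Module):
    # Per-head point-wise Conv2D + PReLU + cfLN give frame-level
    # queries/keys (F*E-dim) and values (F*D/L-dim); scaled
    # dot-product attention then runs across the T frames.
    def __init__(self, D, F, L=4, E=4):
        super().__init__()
        assert D % L == 0
        self.L, self.scale = L, (F * E) ** -0.5
        def head(out):
            return nn.Sequential(nn.Conv2d(D, out, 1), nn.PReLU(),
                                 CFLayerNorm(out, F))
        self.q = nn.ModuleList(head(E) for _ in range(L))
        self.k = nn.ModuleList(head(E) for _ in range(L))
        self.v = nn.ModuleList(head(D // L) for _ in range(L))
        self.proj = nn.Sequential(nn.Conv2d(D, D, 1), nn.PReLU(),
                                  CFLayerNorm(D, F))

    def forward(self, Z):                         # Z: (b, D, T, F)
        b, D, T, F = Z.shape
        outs = []
        for q, k, v in zip(self.q, self.k, self.v):
            Q = q(Z).transpose(1, 2).reshape(b, T, -1)
            K = k(Z).transpose(1, 2).reshape(b, T, -1)
            V = v(Z).transpose(1, 2).reshape(b, T, -1)
            A = torch.softmax(Q @ K.transpose(1, 2) * self.scale, -1)
            outs.append(A @ V)                    # (b, T, F*D/L)
        x = torch.cat(outs, -1).reshape(b, T, D, F)
        return Z + self.proj(x.permute(0, 2, 1, 3))  # residual
\end{verbatim}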
\subsection{Loss Functions}\label{loss}
Since evaluation metrics usually change with datasets, we use different loss functions for different datasets, considering that different loss functions have their strengths and weaknesses~\cite{Wang2021compensation}. %
This section describes two loss functions, SI-SDR and Wav+Mag, both defined based on the re-synthesized signals of predicted RI components.
They have been proposed in earlier studies.
Our novelty is a mixture-constraint loss term to be used with SI-SDR and Wav+Mag.
\subsubsection{SI-SDR Loss with Mixture Constraint}\label{si_sdr_loss_description}
For anechoic speaker separation, there is only DNN$_1$, without the linear-filtering module and DNN$_2$.
The model in this case is trained with utterance-level PIT~\cite{Kolbak2017}.
The loss function follows the SI-SDR loss~\cite{LeRoux2019, Luo2019}, but with two differences.
First, in the original SI-SDR metric paper~\cite{LeRoux2019}, there are two definitions for SI-SDR.
One scales the \textit{source} to equalize its gain with that of the estimate, while the other instead scales the \textit{estimate}.
The SI-SDR loss proposed in the seminal DANet~\cite{Chen2017a} and Conv-TasNet~\cite{Luo2019} studies (and almost all the follow-up studies) uses the former, while our study uses the latter:
\begin{align}\label{sinrloss}
\mathcal{L}_{\text{SI-SDR-SE}} = - \sum\nolimits_{c=1}^{C} 10\,\text{log}_{10} \frac{\| s_q(c) \|_2^2}{\| \hat{\alpha}_q(c)\hat{s}_q(c) - s_q(c) \|_2^2},
\end{align}
where $\| \cdot \|_2$ computes the $L_2$ norm, $\hat{s}_q(c)$ is the re-synthesized signal based on the predicted RI components for speaker $c$, $\hat{\alpha}_q(c)={{\text{argmin}}}_{\alpha}\,\| \alpha \hat{s}_q(c) - s_q(c) \|_2^2=\hat{s}_q(c)^{{\mathsf T}}s_q(c)/\big(\hat{s}_q(c)^{{\mathsf T}}\hat{s}_q(c)\big)$, and the ``SE'' in $\mathcal{L}_{\text{SI-SDR-SE}}$ means ``scaling estimate''.
We observe that this loss leads to similar performance and faster convergence, compared with the former.
Second, we add a loss term between the summation of target sources and that of scaled estimated sources:
\begin{align}\label{sinrloss+mc}
\mathcal{L}_{\text{SI-SDR-SE+MC}} = &\mathcal{L}_{\text{SI-SDR-SE}}\,\,+ \nonumber \\
&\frac{1}{N} \Big\| \sum\nolimits_{c=1}^{C} \hat{\alpha}_q(c)\hat{s}_q(c) - \sum\nolimits_{c=1}^{C} s_q(c) \Big\|_1,
\end{align}
where $\| \cdot \|_1$ computes the $L_1$ norm. Since $y_q=\sum_{c=1}^{C} s_q(c)$ in our considered task of monaural, anechoic speaker separation, we name this loss term the mixture-constraint (MC) loss.
It is motivated by a trigonometric perspective~\cite{Wang2019Trigonometric} in source separation, which suggested that constraining the separated sources to sum up to the mixture yields better phase estimation.
We point out that $\sum_{c=1}^{C} \hat{\alpha}_q(c)\hat{s}_q(c)$ would not equal $y_q$ at run time.
This distinguishes our loss from mixture consistency~\cite{Wisdom2018MixtureConsistency}, which enforces the separated sources to sum up to the mixture.
Our loss is also different from another mixture consistency loss proposed in~\cite{Zmolikova2021}, where the DNN is trained for real-valued phase-sensitive masking without phase estimation and the task is target speaker extraction based meeting transcription.
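A minimal PyTorch sketch of Eqs.~(\ref{sinrloss}) and (\ref{sinrloss+mc}) follows; PIT over speaker permutations is assumed to be handled outside this function, and the snippet is illustrative rather than the released code:
\begin{verbatim}
import torch

def si_sdr_se_mc_loss(est, ref):
    # est, ref: (C, N) estimated / target time-domain sources
    # for one permutation hypothesis.
    alpha = (est * ref).sum(-1) / (est * est).sum(-1)  # scale estimate
    scaled = alpha.unsqueeze(-1) * est
    si_sdr = 10 * torch.log10((ref ** 2).sum(-1)
                              / ((scaled - ref) ** 2).sum(-1))
    mc = (scaled.sum(0) - ref.sum(0)).abs().mean()     # (1/N) L1 term
    return -si_sdr.sum() + mc
\end{verbatim}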
\subsubsection{Wav+Mag Loss}\label{wav_mag_loss}
Following~\cite{Wang2021compensation}, we define the loss on the re-synthesized signal and its magnitude:
\begin{align}\label{wav+mag}
&\mathcal{L}_{\text{Wav+Mag}} =
\sum\nolimits_{c=1}^C \Big( \frac{1}{N} \| \hat{s}_q(c) - s_q(c) \|_1 + \nonumber \\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\frac{1}{T\times F} \Big\| \Big|\text{STFT}(\hat{s}_q(c))\Big| - \Big|\text{STFT}(s_q(c))\Big| \Big\|_1 \Big),
\end{align}
where $|\cdot|$ computes magnitude and $\text{STFT}(\cdot)$ extracts a complex spectrogram.
It has been demonstrated in~\cite{Wang2021compensation} that the magnitude loss can improve metrics that favor signals with a good magnitude estimate, such as perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI)~\cite{H.Taal2011}, and word error rate (WER), at the cost of a degradation on time-domain metrics such as SI-SDR.
When $C>1$, we can also add a mixture-constraint loss, similarly to Eq.~(\ref{sinrloss+mc}):
\begin{align}\label{wav+mag+MC}
&\mathcal{L}_{\text{Wav+Mag+MC}} =
\sum\nolimits_{c=1}^C \Big( \frac{1}{N} \| \hat{s}_q(c) - s_q(c) \|_1 + \nonumber \\
&\,\,\,\,\,\,\,\frac{1}{T\times F} \Big\| \Big|\text{STFT}(\hat{s}_q(c))\Big| - \Big|\text{STFT}(s_q(c))\Big| \Big\|_1\Big) + \nonumber \\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\frac{1}{N}\Big\| \sum\nolimits_{c=1}^C \hat{s}_q(c) - \sum\nolimits_{c=1}^C s_q(c) \Big\|_1 + \nonumber \\
&\,\frac{1}{T\times F} \Big\| \Big|\text{STFT}\Big(\sum_{c=1}^{C} \hat{s}_q(c)\Big)\Big| - \Big|\text{STFT}\Big(\sum_{c=1}^{C} s_q(c)\Big)\Big| \Big\|_1.
\end{align}
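Eq.~(\ref{wav+mag}) can be sketched as follows; the STFT settings are placeholders matching the 8 kHz configuration described later, and the MC extension in Eq.~(\ref{wav+mag+MC}) simply adds the analogous terms on the summed sources (illustrative only):
\begin{verbatim}
import torch

def wav_mag_loss(est, ref, n_fft=256, hop=64):
    # est, ref: (C, N) time-domain signals. L1 waveform loss plus
    # L1 magnitude loss, each averaged per element and summed over
    # the C sources.
    win = torch.hann_window(n_fft).sqrt()
    mag = lambda x: torch.stft(x, n_fft, hop, window=win,
                               return_complex=True).abs()
    wav = (est - ref).abs().mean(-1).sum()
    spec = (mag(est) - mag(ref)).abs().mean((-2, -1)).sum()
    return wav + spec
\end{verbatim}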
\section{Beamforming and Sub-Band Modeling}\label{sec:MCMFWF}
This section proposes a novel DNN-supported beamformer and connects it with integrated sub- and full-band modeling.
\subsection{Multi-Frame Wiener Filter}\label{beamforming_discussion}
Assuming that target speakers are non-moving within each utterance and based on the estimated target speech $\hat{S}_q^{(1)}(c)$ by DNN$_1$, we compute a time-invariant MFWF per frequency by solving the minimization problem below:
\begin{align}\label{MCMFWF}
\underset{\mathbf{w}_q(c,f)}{{\text{argmin}}}
\sum\nolimits_{t=1}^T \big|
\hat{S}_q^{(1)}(c,t,f) - \mathbf{w}_q(c,f)^{{\mathsf H}} \widetilde{\mathbf{Y}}(t,f)
\big|^2,
\end{align}
where $\widetilde{\mathbf{Y}}(t,f)=[\mathbf{Y}(t-\Delta_l,f)^{\mathsf T},\dots,\mathbf{Y}(t,f)^{\mathsf T},\dots,\mathbf{Y}(t+\Delta_r,f)^{\mathsf T}]^{\mathsf T}$ stacks the mixtures at nearby T-F units, $\mathbf{w}_q(c,f)\in {\mathbb C}^{(\Delta_l+1+\Delta_r)\times P}$ is a time-invariant linear filter, and $(\cdot)^{{\mathsf H}}$ computes complex Hermitian.
$\Delta_l$ ($\geq 0$) and $\Delta_r$ ($\geq 0$) control the context of frames for filtering, resulting in a single-frame Wiener filter when $\Delta_l$ and $\Delta_r$ are both zero and an MFWF otherwise.
A closed-form solution is available:
\begin{align}\label{mcwfcov}
&\hat{\mathbf{w}}_q(c,f) \nonumber \\
&= \Big( \sum_{t=1}^T \widetilde{\mathbf{Y}}(t,f) \widetilde{\mathbf{Y}}(t,f)^{{\mathsf H}} \Big)^{-1} \sum_{t=1}^T \widetilde{\mathbf{Y}}(t,f) \Big(\hat{S}_q^{(1)}(c,t,f)\Big)^{*},
\end{align}
where $(\cdot)^{*}$ computes complex conjugate.
The filtering result $\hat{S}_q^{\text{MFWF}}(c)$ is computed as
\begin{equation}\label{mfwf_result}
\hat{S}_q^{\text{MFWF}}(c,t,f) = \hat{\mathbf{w}}_q(c,f)^{{\mathsf H}} \widetilde{\mathbf{Y}}(t,f).
\end{equation}
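For illustration, Eqs.~(\ref{mcwfcov}) and (\ref{mfwf_result}) amount to a per-frequency complex least-squares solve, sketched below; the circular edge handling via \texttt{torch.roll} and the diagonal loading \texttt{eps} are simplifications of ours, not part of the paper's formulation:
\begin{verbatim}
import torch

def mfwf(Y, S1, dl=4, dr=3, eps=1e-6):
    # Y: (P, T, F) complex mixture STFT; S1: (T, F) complex DNN_1
    # estimate for one speaker. Returns the MFWF output (T, F).
    P, T, F = Y.shape
    taps = dl + 1 + dr
    # stack frames t-dl..t+dr along the channel axis
    Yt = torch.cat([torch.roll(Y, d, dims=1)
                    for d in range(dl, -dr - 1, -1)], dim=0)
    out = torch.empty_like(S1)
    for f in range(F):
        A = Yt[:, :, f]                                # (taps*P, T)
        R = A @ A.conj().t() + eps * torch.eye(taps * P)
        p = A @ S1[:, f].conj()                        # normal eqs.
        w = torch.linalg.solve(R, p)                   # closed form
        out[:, f] = w.conj() @ A                       # filter mixture
    return out
\end{verbatim}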
We refer to the MFWF as an MCMFWF when $P>1$ and as a single-channel MFWF (SCMFWF) when $P=1$.
The idea of MCMFWF was proposed in~\cite{Wang2021seq}.
Differently, we use multi-microphone complex spectral mapping to obtain $\hat{S}_q^{(1)}(c)$, which consists of DNN-estimated magnitude and phase, while the system in~\cite{Wang2021seq}, even in multi-microphone cases, performs monaural, real-valued magnitude masking to obtain $\hat{S}_q^{(1)}(c)$, which consists of DNN-estimated magnitude and the mixture phase.
It should be noted that in our recent studies~\cite{Wang2021LowDistortion, Wang2022LowLatency}, we proposed to project the mixture to DNN-estimated target speech using Eq.~(\ref{MCMFWF}), but the beamformer is single-frame (i.e., $\Delta_l=0$ and $\Delta_r=0$).
We will show in our experiments that single-frame filtering leads to worse performance than multi-frame filtering, likely due to its insufficient degrees of freedom for suppressing non-target signals.
In monaural conditions, Eq.~(\ref{mcwfcov}) becomes an SCMFWF, which can reduce reverberation by exploiting the correlations among nearby frames due to reverberation.
It is similar to the inverse convolutive prediction filter proposed in~\cite{Wang2021FCPjournal}.
The key difference is that, in~\cite{Wang2021FCPjournal}, only past frames are filtered (i.e., $\Delta_l>0$ and $\Delta_r=0$).
However, future frames are also correlated with the current frame, and they can also be linearly filtered to reduce the reverberation at the current frame.
\ZQHL{In the literature, convolutional beamformer~\cite{Nakatani2019ConvBeamformer} and WPE~\cite{Nakatani2010} are the most popular multi-frame linear filters.
In their DNN-supported versions, DNN-estimated target magnitude is used in a maximum-likelihood objective for filter computation~\cite{Kinoshita2017}.
We will show in our experiments that, compared with these filters, the output of the proposed MFWF improves the performance of DNN$_2$ by a larger margin.
}
\subsection{Discussion on Beamforming and Sub-Band Modeling}\label{beamforming_subband_discussion}
When beamforming results are used as extra features for DNN training (e.g., in the way shown in Fig.~\ref{system_overview}), large improvement has been observed in earlier studies~\cite{Wang2020css, Wang2021LowDistortion} (see for example the last two rows of Table~\ref{dereverb_8ch_results}).
One interesting observation is that the DNNs in these studies usually perform full-band modeling, where one typical approach is to use an encoder to encode each frame into an embedding, perform sequence modeling to refine the frame embeddings, and use a decoder to reconstruct target speech from the refined embeddings.
The encoder, for example, can be just a linear fully-connected layer followed by a non-linear activation~\cite{Luo2019} or contain a stack of non-linear layers in the form of a UNet-style encoder~\cite{Wang2020css}.
Our insight is that the large improvement is likely because the beamformers are computed based on signals only within each sub-band and the beamforming results could hence be complementary to full-band modeling, which simultaneously models all the frequencies but may not be good at sufficiently modeling each frequency since different frequencies exhibit diverse spectral, temporal and spatial patterns (see also our discussions in Section~\ref{fullsubband_discussion}).
Each sub-band temporal module in TF-GridNet models each frequency using a BLSTM shared across all the frequencies.
This could be a better way of \textit{neural beamforming} than earlier approaches where DNNs are mainly used for full-band modeling.
In our best-performing system, we still compute an MCMFWF result based on the output of a first TF-GridNet and use a second TF-GridNet for post-filtering (i.e., Fig.~\ref{system_overview}).
This can be viewed as another way of integrated full- and sub-band modeling, and is found to improve over using just a single TF-GridNet, but the improvement brought by the beamformer is much less impressive than that achieved when the DNNs are trained to perform full-band modeling.
See also our discussion later in Section~\ref{result_dereverb}.
We point out that the sub-band (\textit{a.k.a.} narrow-band) property for per-frequency modeling is afforded by STFT.
This property underpins an important advantage of STFT-domain approaches: we can exploit intra- and cross-frequency information to potentially achieve better separation.
In comparison, the bases learned by time-domain models are usually not narrow-band~\cite{Luo2019, Cornell2022FB}, and many current time-domain models do not have a concept of sub-band or narrow-band frequency to exploit, which could lead to sub-optimal performance.
\section{Experimental Setup}\label{experiments}
We evaluate the proposed algorithms on five tasks, including speaker separation in anechoic, reverberant and noisy-reverberant conditions, speech dereverberation, and noisy-reverberant speech enhancement.
This section describes the setup for each task, baselines, and miscellaneous configurations.
Our experiments cover major speech separation tasks and we use public datasets with existing published results to highlight that the improvements obtained in our study are relative to very strong baselines.
\subsection{Setup for Monaural, Anechoic Speaker Separation}\label{wsj02mix_setup}
We use \textbf{WSJ0-2mix}~\cite{Hershey2016}, the most popular dataset to benchmark monaural talker-independent speaker separation algorithms in anechoic conditions.
It has 20,000 ($\sim$30.4 h), 5,000 ($\sim$7.7 h) and 3,000 ($\sim$4.8 h) two-speaker mixtures respectively in its training, validation and test sets.
The clean source signals are sampled from the WSJ0 corpus.
The speakers in the training and validation sets are different from the speakers for testing.
The two utterances in each mixture are fully-overlapped, with their relative energy level randomly sampled from the range $[-5, 5]$ dB.
The sampling rate is 8 kHz.
\subsection{Setup for Reverberant Speaker Separation}\label{smswsj_setup}
We use \textbf{SMS-WSJ}~\cite{Drude2019}, a popular corpus for comparing two-speaker separation algorithms in reverberant conditions.
The clean speech is sampled from the WSJ0 and WSJ1 datasets.
The corpus contains 33,561 ($\sim$87.4 h), 982 ($\sim$2.5 h) and 1,332 ($\sim$3.4 h) two-speaker mixtures for training, validation and testing, respectively.
The simulated microphone array has six microphones arranged uniformly on a circle with a diameter of 20 cm.
For each mixture, the speaker-to-array distance is drawn from the range $[1.0, 2.0]$ m, and T60 from $[0.2, 0.5]$ s.
A weak white noise is added to simulate microphone sensor noises, and the energy level between the sum of the reverberant speech signals and the noise is sampled from the range $[20, 30]$ dB.
The sampling rate is 8 kHz.
For ASR evaluation, the default Kaldi-based ASR backend provided with SMS-WSJ~\cite{Drude2019} is used.
It is trained using single-speaker noisy-reverberant speech as input and the state alignments of the corresponding direct-path signals as labels.
A standard tri-gram language model is used for decoding.
We perform joint denoising, dereverberation and separation.
We consider one-, two- and six-channel tasks, and use the direct-path signals as the training target.
For two-channel processing, we take the signals at microphones 1 and 4 as input, and for monaural separation, we use the signal at microphone 1.
The first microphone is always used as the reference.
\subsection{Setup for Noisy-Reverberant Speaker Separation}\label{whamr_setup}
We use \textbf{WHAMR!}~\cite{Maciejewski2020} to validate our algorithms for noisy-reverberant speaker separation.
It re-uses the two-speaker mixtures in WSJ0-2mix~\cite{Hershey2016} but reverberates each clean source and adds non-stationary noises. %
In each mixture, the T60 is sampled from the range $[0.2, 1.0]$ s, signal-to-noise ratio (SNR) between the louder speaker and noise from $[-6, 3]$ dB, relative energy level between the two speakers from $[-5, 5]$ dB, and speaker-to-array distance from $[0.66, 2.0]$ m.
There are 20,000 ($\sim$30.4 h), 5,000 ($\sim$7.7 h) and 3,000 ($\sim$4.8 h) binaural mixtures respectively for training, validation and testing.
We use its \textit{min} and 8 kHz version.
We aim at joint dereverberation, denoising and speaker separation.
The direct-path signal of each speaker at the first microphone is used as the target for training and as the reference for metric computation.
\subsection{Setup for Speech Dereverberation}\label{reverbwsj0cam_setup}
We use a simulated reverberant dataset with weak air-conditioning noises, since a well-designed, widely-used dataset for speech dereverberation is lacking\footnote{We considered the REVERB corpus~\cite{Kinoshita2016}, but its training set is simulated based on 24 eight-channel RIRs, which are too few for training DNN models.}.
Although simulated by ourselves, this dataset has been used in our recent studies~\cite{Wang2021FCPjournal, Wang2021LowDistortion}, which reported very strong results.
The clean source signals for simulation are from the WSJCAM0 corpus, which includes 7,861, 742 and 1,088 utterances respectively in its training, validation and test sets.
Based on them, we simulate 39,293 ($\sim$77.7 h), 2,968 ($\sim$5.6 h), and 3,262 ($\sim$6.4 h) noisy-reverberant mixtures respectively as our training, validation, and test sets.
The data spatialization process follows~\cite{Wang2020dMCCSMconference}, where, for each utterance, we randomly sample a room with random room characteristics and speaker and microphone locations, using the Pyroomacoustics RIR generator~\cite{Scheibler2018}.
The simulated microphone array has eight microphones arranged on a circle with a diameter of 20 cm.
The speaker-to-array distance is drawn from the range $[0.75, 2.5]$ m and T60 from $[0.2, 1.3]$ s.
For each utterance, an eight-channel diffuse air-conditioning noise is sampled from the REVERB dataset~\cite{Kinoshita2016} and added to the reverberant speech, and the SNR between the direct-path signal and the noise is sampled from the range $[5, 25]$ dB.
The sampling rate is 16 kHz.
We denote this dataset as \textbf{WSJ0CAM-DEREVERB}.
We aim at removing any early reflections and late reverberation.
The direct-path signal of the target speaker at the first microphone is used as %
the reference for metric computation.
\subsection{Setup for Noisy-Reverberant Speech Enhancement}\label{l3das_setup}
The \textbf{L3DAS22} 3D speech enhancement task~\cite{Guizzo2022L3DAS} challenges participants to reconstruct the dry speech source signal from its far-field mixture simulated by using two four-channel Ambisonic-format signals in a noisy-reverberant office environment.
The dry source signals are drawn from LibriSpeech and noise signals from FSD50k~\cite{fonseca2020fsd50k}.
The SNR is sampled from the range $[6, 16]$ dBFS (decibels relative to full scale).
Real RIRs are used for simulation. Such RIRs were recorded in an office room by using two first-order A-format Ambisonic arrays, each with four microphones.
The microphone placement is fixed, with one Ambisonic microphone array placed at the room center and the other being 20\,cm away.
The room configuration is the same between training and testing, and the source positions are sampled uniformly inside the room with no overlap of positions between training and testing.
Artificial mixtures are generated by convolving dry speech and dry noise signals with the measured RIRs and the convolved signals are then added together.
There are 37,398 ($\sim$81.3 h), 2,362 ($\sim$3.9 h) and 2,189 ($\sim$3.5 h) mixtures respectively in the training, validation and test sets.
The generated A-format Ambisonic mixtures are converted to B-format Ambisonic via a transformation consisting of a pre-filter, a mixing matrix and a post-filter.
The task is to predict the dry speech based on the B-format Ambisonic mixture.
The sampling rate is 16 kHz.
The submitted systems were ranked by using a combination of STOI and WER:
\begin{equation}\label{eq:task1_metric}
\text{Task1Metric} = \Big(\text{STOI} + (1 - \text{WER})\Big)/2.
\end{equation}
Since STOI and WER scores are both in the range of $[0,1]$, the composite metric is also in $[0,1]$.
The WER is computed by comparing the transcription of the enhanced speech with that of the dry speech, both obtained using a pre-trained wav2vec2 ASR model.
Differently from the other setups, the goal in this task is to predict the dry speech from far-field multi-channel mixtures.
This requires the submitted systems to not only remove reverberation and noises, but also to time-align the estimated speech with the dry speech (as STOI degrades with misalignment), which requires the systems to perform implicit or explicit localization of the target source so that a time-aligned estimate can be obtained.
This is achievable since the Ambisonic arrays form a fixed three-dimensional geometry.
\subsection{Baselines}
We compare our approaches with others in terms of system-level performance.
For the MFWF, we provide the results of other linear filters, including (1) in multi-channel cases, the convolutional beamformer~\cite{Nakatani2019ConvBeamformer}; and (2) in monaural cases, WPE~\cite{Nakatani2010, Kinoshita2017}.
We replace the MFWF module between DNN$_1$ and DNN$_2$ in Fig.~\ref{system_overview} with a DNN-supported convolutional beamformer or WPE filter to compare their effectiveness at improving DNN$_2$. %
\subsubsection{System-Level Baselines}
Since the datasets in all the considered tasks have existing results reported in earlier studies, we can compare our results with the strongest ones achieved by competing approaches.
Notably, we will compare with our previous studies~\cite{Wang2020css, Wang2021FCPjournal, Wang2021FCPwaspaa, Lu2022}, which also follow the MISO-BF-MISO approach shown in Fig.~\ref{system_overview} but use TCN-DenseUNet and other sub-band linear filters.
\subsubsection{Baseline for MCMFWF}
In multi-channel cases, we consider convolutional beamformer~\cite{Nakatani2019ConvBeamformer}, a very popular multi-channel multi-frame filter in speech separation, as the baseline.
We compute it by solving the problem~\cite{Nakatani2019ConvBeamformer} below:
\begin{align}\label{convbf_objective}
\underset{\mathbf{w}_q(c,f)}{{\text{argmin}}} \sum\nolimits_{t=1}^{T} \frac{|\mathbf{w}_q(c,f)^{{\mathsf H}}\ \Bar{\mathbf{Y}}(t,f)|^2}{\hat{\lambda}_q(c, t, f)} \nonumber \\ \text{subject to}\,\,\,\,\mathbf{w}_{q;0}(c,f)^{{\mathsf H}}\hat{\mathbf{d}}_q(c,f) = 1,
\end{align}
where $\Bar{\mathbf{Y}}(t,f)=[\mathbf{Y}(t-\Delta_d-\Delta_l+1,f)^{\mathsf T},\dots,\mathbf{Y}(t-\Delta_d,f)^{\mathsf T}, \mathbf{Y}(t,f)^{\mathsf T}]^{\mathsf T} \in {\mathbb C}^{(\Delta_l+1)\times P}$ with $\Delta_d$ denoting a prediction delay and $\Delta_l$ the number of filter taps for past frames beyond the prediction delay, $\mathbf{w}_q(c,f)=[\mathbf{w}_{q;-\Delta_d-\Delta_l+1}(c,f)^{\mathsf T},\dots,\mathbf{w}_{q;-\Delta_d}(c,f)^{\mathsf T},\mathbf{w}_{q;0}(c,f)^{\mathsf T}]^{\mathsf T} \in {\mathbb C}^{(\Delta_l+1)\times P}$ with $\mathbf{w}_{q;i}(c,f)\in {\mathbb C}^P$ denoting the filter applied to frame $t+i$ in order to produce the result at the current frame $t$, and $\hat{\mathbf{d}}_q(c,f)$ is the estimated relative transfer function for microphone $q$.
Following~\cite{Drude2020NARAWPE} and based on the DNN-estimated target speech $\hat{S}_q^{(1)}(c)$, $\hat{\lambda}_q(c)$, the estimated power spectral density of target speech, can be computed as:
\begin{align}\label{wpelambda}
\hat{\lambda}_q(c,t,f)=\text{max}\Big(\varepsilon\,\, \text{max}(|\hat{S}_q^{(1)}(c)|^2),|\hat{S}_q^{(1)}(c,t,f)|^2\Big),
\end{align}
where $\text{max}(\cdot)$ extracts the maximum value of a spectrogram, $\text{max}(\cdot,\cdot)$ returns the larger of two values, and $\varepsilon$ is a floor value to avoid putting too much weight on T-F units with low energy.
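In code, this flooring amounts to an element-wise maximum (an illustrative sketch under our own names):
\begin{verbatim}
import torch

def psd_floor(S1, eps=1e-5):
    # S1: complex STFT estimate from DNN_1 for one speaker.
    # Floor the power so low-energy T-F units are not over-weighted.
    power = S1.abs() ** 2
    return torch.maximum(power, eps * power.max())
\end{verbatim}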
Through T-F masking and also based on the DNN-estimated target speech $\hat{S}_q^{(1)}(c)$, $\hat{\mathbf{d}}_q(c,f)$ is computed as the principal eigenvector of an estimated speech covariance matrix~\cite{Yoshioka2015, Heymann2015, Gannot2017} for non-moving point sources, i.e.,
\begin{align}
\hat{\mathbf{\Phi}}(c,f) &= \sum\nolimits_{t=1}^T \hat{m}(c,t,f) \mathbf{Y}(t,f)\mathbf{Y}(t,f)^{{\mathsf H}}, \\
\hat{m}(c,t,f) &= \frac{|\hat{S}_q^{(1)}(c,t,f)|}{|\hat{S}_q^{(1)}(c,t,f)| + |Y_q(t,f) - \hat{S}_q^{(1)}(c,t,f)|}, \\
\hat{\mathbf{d}}(c,f) &= \mathcal{P}\big(\hat{\mathbf{\Phi}}(c,f)\big), \\
\hat{\mathbf{d}}_q(c,f) &= \hat{\mathbf{d}}(c,f) / \hat{d}(c,f;q),
\end{align}
where $\mathcal{P}(\cdot)$ extracts the principal eigenvector, and $\hat{d}(c,f;q)$ denotes the $q^{\text{th}}$ element in $\hat{\mathbf{d}}(c,f)$.
The result of the convolutional beamformer is computed as
\begin{align}\label{convbf_result}
\hat{S}_q^{\text{ConvBF}}(c,t,f) = \hat{\mathbf{w}}_q(c,f)^{{\mathsf H}}\ \Bar{\mathbf{Y}}(t,f),
\end{align}
where ``ConvBF'' denotes convolutional beamformer.
\ZQHL{Notice that our DNN-supported MCMFWF in Eq.~(\ref{MCMFWF}) is simpler to compute than the convolutional beamformer.}
\subsubsection{Baseline for SCMFWF}
In the single-microphone case, convolutional beamformer turns into the WPE filter~\cite{Nakatani2010}.
Following the DNN-WPE algorithm~\cite{Kinoshita2017}, we compute it by using the magnitude of $\hat{S}_q^{(1)}(c)$ estimated by DNN$_1$.
The filter is computed by solving the following problem:
\begin{align}\label{dnn_wpe}
\underset{\mathbf{w}_q(c,f)}{{\text{argmin}}} \sum\nolimits_{t=1}^{T} \frac{|Y_q(t,f) - \mathbf{w}_q(c,f)^{{\mathsf H}}\ \Breve{\mathbf{Y}}(t-\Delta_d,f)|^2}{\hat{\lambda}_q(c, t, f)},
\end{align}
where $\Breve{\mathbf{Y}}(t,f)=[Y_q(t-\Delta_l+1,f),\dots,Y_q(t,f)]^{\mathsf T} \in {\mathbb C}^{\Delta_l}$, $\mathbf{w}_q(c,f)\in {\mathbb C}^{\Delta_l}$, $\Delta_d$ is a prediction delay, and $\hat{\lambda}_q(c)$ is computed using Eq.~(\ref{wpelambda}).
The WPE result is obtained as
\begin{align}\label{wpe_result}
\hat{S}_q^{\text{WPE}}(c,t,f)=Y_q(t,f) - \hat{\mathbf{w}}_q(c,f)^{{\mathsf H}} \Breve{\mathbf{Y}}(t-\Delta_d,f).
\end{align}
\subsection{Miscellaneous Setup}
For STFT, the window length is $32$ ms and hop length $8$ ms, and the square-root Hann window is used as the analysis window.
For $16$ kHz sampling rate, a $512$-point discrete Fourier transform (DFT) is applied to extract $257$-dimensional complex STFT spectra at each frame, and for $8$ kHz, a $256$-point DFT is used to extract $129$-dimensional complex STFT spectra.
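For reference, the 8 kHz configuration corresponds to the following PyTorch snippet (a hedged sketch; the toy input is ours):
\begin{verbatim}
import torch

sr, win, hop = 8000, 256, 64            # 32 ms window, 8 ms hop
window = torch.hann_window(win).sqrt()  # square-root Hann
y = torch.randn(sr * 4)                 # a 4-second mixture segment
Y = torch.stft(y, n_fft=win, hop_length=hop, win_length=win,
               window=window, return_complex=True)
print(Y.shape)                          # torch.Size([129, frames])
\end{verbatim}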
$E$ (see its definition in Table~\ref{summary_hyperparam}) is set to $4$ for 8 kHz and to $2$ for $16$ kHz.
This way, the dimension of frame-level embeddings (i.e., $F\times E$) used for self-attention is reasonable.
For MFWF, we set $\Delta_l$ and $\Delta_r$, which control the number of filter taps, to $4$ and $3$ for eight-channel separation,
to $5$ and $4$ for six-channel, to $15$ and $14$ for two-channel, and to $20$ and $19$ for single-channel.
For convolutional beamformer, we set the prediction delay $\Delta_d$ to 3 following~\cite{Nakatani2019ConvBeamformer}, and tune $\Delta_l$ to $7$ for eight-channel processing, to $9$ for six-channel, and to $29$ for two-channel.
For WPE, $\Delta_d$ is also 3 and $\Delta_l$ is tuned to $40$.
We emphasize that a positive prediction delay $\Delta_d$ is found to be important for the convolutional beamformer and WPE to avoid target cancellation~\cite{Nakatani2010, Nakatani2019ConvBeamformer}; both filters are designed by their original authors not to filter future frames, because future frames contain the reverberation of the target speech at the current frame, and including them in linear filtering would lead to target cancellation.
$\varepsilon$ in Eq.~(\ref{wpelambda}) is tuned to $10^{-5}$.
In each epoch, we sample a $4$-second segment from each mixture for training.
We normalize the sample variance of each mixture segment to $1.0$ and use the same scaling factor to scale the target sources, before using them for training.
Adam is used as the optimizer.
The $L_2$ norm for gradient clipping is set to $1.0$.
The learning rate starts from $0.001$ and is reduced by half if the validation loss does not improve in $3$ epochs.
We do not use dynamic mixing or data augmentation~\cite{Subakan2021}.
\subsection{Evaluation Metrics}
The evaluation metrics vary with tasks.
We consider SI-SDR or SI-SDRi~\cite{LeRoux2019}, SDR or SDR improvement (SDRi)~\cite{Vincent2006a}, PESQ, STOI or extended STOI (eSTOI)~\cite{H.Taal2011}\footnote{\url{https://github.com/mpariente/pystoi}, v0.3.3}, and WER.
For PESQ, we use the \textit{python-pesq} toolkit\footnote{\url{https://github.com/ludlows/python-pesq}, v0.0.2} to report narrow-band MOS-LQO scores. %
The number of model parameters is reported in millions (M).
\section{Evaluation Results}\label{results}
\subsection{Results on WSJ0-2mix}\label{result_WSJ02mix}
We evaluate TF-GridNet on monaural, anechoic speaker separation.
SI-SDRi~\cite{LeRoux2019} and SDRi~\cite{Vincent2006a} are used as the evaluation metrics, following previous studies.
The mixture SI-SDR is $0$ dB and the mixture SDR $0.2$ dB.
We always use $B=6$ blocks for WSJ0-2mix.
\subsubsection{Comparison with DPRNN and TFPSNet}
Table~\ref{result_fair} compares the performance of TF-GridNet with DPRNN~\cite{Luo2020} and TFPSNet~\cite{Yang2022TFPSNet}.
Each model has almost the same number of parameters and uses almost the same amount of computation.
This is implemented by using BLSTMs in each model and unifying the embedding dimension (or the bottleneck dimension in the cases of DPRNN and TFPSNet) to $64$ and the hidden dimension of the BLSTMs to $128$. %
For DPRNN, we set the window size to $2$ samples, hop size to $1$ sample, chunk size to $250$ frames, and overlap between consecutive chunks to 50\%, following~\cite{Luo2020}. %
For TF-GridNet, in each block we remove the cross-frame self-attention module, and set $I$ and $J$ to $1$ (in this case, the order of LN and Unfold does not matter).
From rows 1, 2 and 5, we observe that TF-GridNet with complex spectral mapping obtains better results (21.2 vs. 18.8 and 19.7 dB).
Table~\ref{result_fair} also reports the performance of using TF-GridNet with masking in rows 3 and 4.
In row 3, we perform masking of learned embeddings, following~\cite{Luo2019,Luo2020,Yang2022TFPSNet}.
We closely follow the encoder, masking, and decoder modules used in~\cite{Yang2022TFPSNet}, but replace their path-scanning modules with our intra-frame full-band and sub-band temporal modules.
In row 4, we use TF-GridNet for complex ratio masking based separation~\cite{Williamson2016, Liu2019DeepCASA}.
After obtaining the output tensor of the Deconv2D module (see Fig.~\ref{GridNet_overview}), we first truncate the values in the tensor to the range $[-5,5]$ to obtain an estimated complex ratio mask and then multiply it with the mixture spectrogram for separation.
From rows 3, 4 and 5, we notice that complex spectral mapping performs better (21.2 vs. 20.7 and 20.8 dB).
\begin{table}[t]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Masking and Mapping Comparison Based on WSJ0-2mix.}}
\vspace{-0.1cm}
\label{result_fair}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{
r
l
c
S[table-format=1.1,round-precision=1]
c
S[table-format=2.1,round-precision=1]
}
\toprule
{\multirow{2}{*}{Row}} & {\multirow{2}{*}{Systems}} & {\multirow{2}{*}{Masking or Mapping?}} & {\#params} & {\multirow{2}{*}{Loss}} & {SI-SDRi} \\
& & & {(M)} & & {(dB)} \\
\midrule
1 & DPRNN~\cite{Luo2020} & Embedding masking & 2.6 & - & 18.8 \\
2 & TFPSNet (BLSTM)~\cite{Yang2022TFPSNet} & Embedding masking & 2.6 & - & 19.7 \\
\midrule
3 & TF-GridNet & Embedding masking & 2.8 & (\ref{sinrloss}) & 20.7 \\ %
4 & TF-GridNet & Complex ratio masking & 2.6 & (\ref{sinrloss}) & 20.8 \\
5 & TF-GridNet & Complex spectral mapping & 2.6 & (\ref{sinrloss}) &\bfseries 21.2 \\
\bottomrule
\end{tabular}
\vspace{-0.6cm}
\end{table}
\subsubsection{Ablation Results with Different Hyper-Parameters}
Table~\ref{result_hyperparam} presents the ablation results of our models on WSJ0-2mix using different model hyper-parameters.
From rows 1-4, we can see that, when the kernel size is sufficiently large (i.e., $I=8$), using the Unfold and Deconv1D mechanism together with a smaller embedding dimension (i.e., $D=16$) does not decrease SI-SDRi, compared with the configuration that uses a larger embedding dimension (i.e., $D=128$) but does not stack nearby T-F embeddings (i.e., $I=1$).
One benefit of using the former configuration is that the memory consumption is lower.
From rows 4 and 5, we can see that adding the MC loss produces slightly better SI-SDRi (21.8 vs. 21.6 dB).
From rows 5-7, we notice that enlarging the model by increasing the number of hidden units $H$ in the BLSTMs and the embedding dimension $D$ produces a clear improvement.
The results in rows 7, 8, and 9 suggest that including the full-band self-attention module is beneficial, and that using four attention heads leads to better performance than using one (22.9 vs. 22.6 dB).
In row 10, we increase the embedding dimension to $48$ and reduce the kernel size $I$ to 4, and obtain slightly better SI-SDRi than the model in row 9 (23.0 vs. 22.9 dB).
In row 11, we use LN+Unfold rather than Unfold+LN.
This results in 0.2 dB better SI-SDRi (23.2 vs. 23.0 dB).
Further enlarging model size in row 12 produces further gains (from 23.2 to 23.5 dB).
\begin{table}[t]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment =center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Ablation Results on WSJ0-2mix.}}
\vspace{-0.1cm}
\label{result_hyperparam}
\setlength{\tabcolsep}{1.5pt}
\begin{tabular}{
r
c
c
c
S[table-format=1,round-precision=0]
S[table-format=3,round-precision=0]
S[table-format=1,round-precision=0]
S[table-format=1,round-precision=0]
S[table-format=3,round-precision=0]
S[table-format=2.1,round-precision=1]
c
S[table-format=2.1,round-precision=1]
}
\toprule
{\multirow{2}{*}{Row}} & {\multirow{2}{*}{Systems}} & {Unfold+LN or} & Use & {\multirow{2}{*}{$L$}} & {\multirow{2}{*}{$D$}} & {\multirow{2}{*}{$I$}} & {\multirow{2}{*}{$J$}} & {\multirow{2}{*}{$H$}} & {\#params} & {\multirow{2}{*}{Loss}} & {SI-SDRi} \\
& & {LN+Unfold} & attention? & & & & & & {(M)} & & {(dB)} \\
\midrule
1 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 64 & 1 & 1 & 128 & 2.6 & (\ref{sinrloss}) & 21.2 \\
2 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 16 & 4 & 1 & 128 & 2.6 & (\ref{sinrloss}) & 20.5 \\
3 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 128 & 1 & 1 & 128 & 3.6 & (\ref{sinrloss}) & 21.6 \\
4 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 16 & 8 & 1 & 128 & 3.6 & (\ref{sinrloss}) & 21.6 \\
\midrule
5 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 16 & 8 & 1 & 128 & 3.6 & (\ref{sinrloss+mc}) & 21.8 \\ %
6 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 16 & 8 & 1 & 192 & 6.5 & (\ref{sinrloss+mc}) & 21.9 \\ %
7 & TF-GridNet & {Unfold+LN} & \xmark & {-} & 24 & 8 & 1 & 192 & 8.0 & (\ref{sinrloss+mc}) & 22.5 \\ %
\midrule
8 & TF-GridNet & {Unfold+LN} & \cmark & 1 & 24 & 8 & 1 & 192 & 8.0 & (\ref{sinrloss+mc}) & 22.6 \\ %
9 & TF-GridNet & {Unfold+LN} & \cmark & 4 & 24 & 8 & 1 & 192 & 8.0 & (\ref{sinrloss+mc}) & 22.9 \\ %
10 & TF-GridNet & {Unfold+LN} & \cmark & 4 & 48 & 4 & 1 & 192 & 8.0 & (\ref{sinrloss+mc}) & 23.0 \\
11 & TF-GridNet & {LN+Unfold} & \cmark & 4 & 48 & 4 & 1 & 192 & 8.0 & (\ref{sinrloss+mc}) & 23.2 \\
12 & TF-GridNet & {LN+Unfold} & \cmark & 4 & 64 & 4 & 1 & 256 & 14.5 & (\ref{sinrloss+mc}) & \bfseries 23.5 \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Performance Comparison with Other Systems on WSJ0-2mix.}}
\vspace{-0.1cm}
\label{comparison_with_others}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{
cc
S[table-format=4,round-precision=0]
S[table-format=3.1,round-precision=1]
S[table-format=2.1,round-precision=1]
S[table-format=2.1,round-precision=1]
}
\toprule
Systems & Domain & {Year} & {\#params (M)} & {SI-SDRi (dB)} & {SDRi (dB)} \\
\midrule
DPCL++~\cite{Isik2016} & T-F & 2016 & 13.6 & 10.8 & {-} \\
uPIT-BLSTM-ST~\cite{Kolbak2017} & T-F & 2017 & 92.7 & {-} & 10.0 \\
ADANet~\cite{Chen2017a} & T-F & 2018 & 9.1 & 10.4 & 10.8 \\
WA-MISI-5~\cite{WZQe2eMISI2018} & T-F & 2018 & 32.9 & 12.6 & 13.1 \\
Sign Prediction Net~\cite{Wang2019Trigonometric} & T-F & 2019 & 56.6 & 15.3 & 15.6 \\
Conv-TasNet~\cite{Luo2019} & Time & 2019 & 5.1 & 15.3 & 15.6 \\
Deep CASA~\cite{Liu2019DeepCASA} & T-F & 2019 & 12.8 & 17.7 & 18.0 \\
Conv-TasNet-MBT~\cite{Lam2020MBT} & Time & 2020 & 8.8 & 15.6 & {-} \\
FurcaNeXt~\cite{Shi2019FurcaNeXt} & Time & 2020 & 51.4 & {-} & 18.4 \\
SuDoRM-RF~\cite{Tzinis2020} & Time & 2020 & 2.6 & 18.9 & {-} \\
DPRNN~\cite{Luo2020} & Time & 2020 & 2.6 & 18.8 & 19.0 \\
Gated DPRNN~\cite{Nachmani2020} & Time & 2020 & 7.5 & 20.1 & 20.4 \\
DPTNet~\cite{Chen2020DPTnet} & Time & 2020 & 2.7 & 20.2 & 20.6 \\
DPTCN-ATPP~\cite{Zhu2021} & Time & 2021 & 4.7 & 19.6 & 19.9 \\
SepFormer~\cite{Subakan2021} & Time & 2021 & 26.0 & 20.4 & 20.5 \\
Sandglasset~\cite{Lam2021} & Time & 2021 & 2.3 & 20.8 & 21.0 \\
Wavesplit~\cite{Zeghidour2020} & Time & 2021 & 29.0 & 21.0 & 21.2 \\
TFPSNet~\cite{Yang2022TFPSNet} & T-F & 2022 & 2.7 & 21.1 & 21.3 \\
MTDS (DPTNet)~\cite{Qian2022} & Time & 2022 & 4.0 & 21.5 & 21.7 \\
SFSRNet~\cite{Rixen2022} & Time & 2022 & 59.0 & 22.0 & 22.1 \\
QDPN~\cite{Rixen2022QDPN} & Time & 2022 & 200.0 & 22.1 & {-} \\
\midrule
TF-GridNet & T-F & 2022 & 14.5 & \bfseries 23.5 & \bfseries 23.6 \\
\bottomrule
\end{tabular}
\vspace{-0.5cm}
\end{table}
\subsubsection{Comparison with Previous Models}
Table~\ref{comparison_with_others} compares the performance of our best TF-GridNet with previous models on WSJ0-2mix.
Compared with the previous best systems, such as SepFormer~\cite{Subakan2021}, SFSRNet~\cite{Rixen2022} and QDPN~\cite{Rixen2022QDPN}, our model has a modest size.
Notice that, since 2019, T-F domain models have been largely under-explored and under-represented for anechoic speaker separation, and many research efforts have been devoted to time-domain approaches.
The recent TFPSNet model~\cite{Yang2022TFPSNet} achieves a competitive SI-SDRi at 21.1 dB, but the performance still falls within the range of scores (i.e., $[20.0,22.0]$ dB SI-SDRi) that can be commonly achieved by modern time-domain models.
Our study, for the first time since 2019, unveils that complex T-F domain models, with a contemporary DNN architecture, can outperform modern time-domain models by a large margin.
\subsection{Results on SMS-WSJ and WHAMR!}\label{result_smswsj}
This section evaluates TF-GridNet and the two-DNN system on reverberant and noisy-reverberant speaker separation.
In the following experiments, by default we use $B=4$ and $H=192$ to save computation\footnote{We also experimented with larger TF-GridNets and observed better performance, but we consider this unnecessary. We will show later that TF-GridNet with this setup already produces better results than competing models.} and use LN+Unfold.
Based on the validation sets, we set $I=4$, $J=1$ and $D=48$ for SMS-WSJ, and $I=8$, $J=1$ and $D=24$ for WHAMR!.
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on SMS-WSJ (1ch).}}
\vspace{-0.1cm}
\label{sms_wsj_results_1ch}
\setlength{\tabcolsep}{1.75pt}
\begin{tabular}{l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
S[table-format=2.2,round-precision=2]
}
\toprule
{\multirow{2}{*}{Systems}} & {\multirow{2}{*}{$\Delta_l$}} & {\multirow{2}{*}{$\Delta_r$}} & {\multirow{2}{*}{Loss}} & {SI-SDR} & {SDR} & {\multirow{2}{*}{PESQ}} & {\multirow{2}{*}{eSTOI}} & {WER} \\
& & & & {(dB)} & {(dB)} & & & {(\%)} \\
\midrule
Unprocessed & {-} & {-} & - & -5.5 & -0.4 & 1.50 & 0.441 & 78.4 \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{sinrloss+mc}) & 16.1956 & 17.249209 & 3.45168 & 0.9238977 & 9.49 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag}) & 14.72918 & 15.747810 & 3.35166746 & 0.91447677 & 9.64 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag+MC}) & 15.66463337 & 16.604234733 & 3.4118591 & 0.9239642 & 9.26 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag+MC}) & 16.6 & 17.6 & 3.53 & 0.93449799 & 8.80 \\
DNN$_1$+SCMFWF+DNN$_2$ & 39 & 0 & (\ref{wav+mag+MC}) & 17.595673 & 18.66561035 & 3.66981964 & 0.9459155409 & 8.51 \\
DNN$_1$+SCMFWF+DNN$_2$ & 20 & 19 & (\ref{wav+mag+MC}) & \bfseries 18.4490 & \bfseries 19.62540 & \bfseries 3.6988555 & \bfseries 0.95216320 & \bfseries 7.91 \\
\midrule
DNN$_1$+WPE+DNN$_2$ & 40 & {-} & (\ref{wav+mag+MC}) & 17.536703685113977 & 18.571642394523618 & 3.668501243755982 & 0.9465367005852843 & 8.19 \\
\midrule
DPRNN-TasNet~\cite{Luo2020} & {-} & {-} & - & 6.5 & {-} & 2.28 & 0.734 & 38.1 \\
SISO$_1$~\cite{Wang2020css} & {-} & {-} & - & 5.7 & {-} & 2.40 & 0.748 & 28.7 \\
DNN$_1$+(FCP+DNN$_2$)$\times$2~\cite{Wang2020css} & {-} & {-} & - & 12.7 & 14.1 & 3.25 & 0.899 & 12.8 \\
DNN$_1$+(msFCP+DNN$_2$)$\times$2~\cite{Wang2021FCPwaspaa} & {-} & {-} & - & 13.4 & {-} & 3.41 & {-} & 10.9 \\
\midrule
Oracle direct-path signal & {-} & {-} & - & $\infty$ & $\infty$ & 4.5 & 1.0 & 6.28 \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\subsubsection{Comparison of Loss Functions}\label{loss}
The speaker separation community usually uses SI-SDR as the key evaluation metric and many previous models are trained to optimize SI-SDR.
We also do this in our experiments on WSJ0-2mix in order to compare TF-GridNet with earlier models.
However, using SI-SDR as the loss is known to produce sub-optimal magnitude estimates due to the compensation between estimated magnitude and phase~\cite{Wang2021compensation}, while metrics such as PESQ, eSTOI and WER favor signals with good magnitude estimates.
Based on SMS-WSJ and WHAMR!, in Tables~\ref{sms_wsj_results_1ch}, \ref{sms_wsj_results_2ch}, \ref{sms_wsj_results_6ch}, \ref{wharmr_results_1ch} and \ref{wharmr_results_2ch} we directly compare training TF-GridNet (i.e., DNN$_1$) with the SI-SDR+MC loss in Eq.~(\ref{sinrloss+mc}), the Wav+Mag loss in (\ref{wav+mag}), and the Wav+Mag+MC loss in (\ref{wav+mag+MC}).
We observe that, compared with SI-SDR+MC, Wav+Mag+MC performs better or comparably well in PESQ, eSTOI, and WER, and slightly worse in SI-SDR and SDR; compared with Wav+Mag, it usually performs better.
We therefore use the Wav+Mag+MC loss by default in the following experiments.
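Since SI-SDR appears throughout these comparisons, we also include a minimal PyTorch-style sketch of the negative SI-SDR training loss~\cite{LeRoux2019}; the mean subtraction and the $\epsilon$ terms follow the common textbook form and may differ in detail from our exact implementation:
\begin{verbatim}
import torch

def neg_si_sdr(est, ref, eps=1e-8):
    # est, ref: (batch, num_samples) estimated and reference waveforms
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # optimal scaling of the reference toward the estimate
    alpha = (est * ref).sum(-1, keepdim=True) \
          / (ref.pow(2).sum(-1, keepdim=True) + eps)
    target = alpha * ref
    noise = est - target
    ratio = target.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps)
    return -(10.0 * torch.log10(ratio + eps)).mean()  # negate to minimize
\end{verbatim}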
\subsubsection{Comparison in Monaural, Single-DNN Setup}\label{results_reverb_and_noisy_reverb_description}
Tables~\ref{sms_wsj_results_1ch} and \ref{wharmr_results_1ch} respectively present the results of TF-GridNet (denoted as DNN$_1$) on the monaural tasks of SMS-WSJ and WHAMR!.
TF-GridNet substantially outperforms competing systems that train a single DNN for separation.
For example, in Table~\ref{sms_wsj_results_1ch} TF-GridNet is 9.2 dB better than DPRNN-TasNet (15.7 vs. 6.5 dB SI-SDR)~\cite{Luo2020} and 10.0 dB better than TCN-DenseUNet based SISO$_1$ (15.7 vs. 5.7 dB SI-SDR)~\cite{Wang2020css}.
To obtain state-of-the-art performance, many previous speaker separation studies tend to use dynamic mixing (DM) to generate more training mixtures.
Their DM results on the monaural task of WHAMR! are listed in the bottom panel of Table~\ref{wharmr_results_1ch}.
Although DM yields slight improvements for previous models, their final performance is still worse than the 10.6 dB SI-SDR result obtained by TF-GridNet without DM (i.e., the DNN$_1$ row).
These results show the effectiveness of TF-GridNet for noisy-reverberant speaker separation.
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on SMS-WSJ (2ch).}}
\vspace{-0.1cm}
\label{sms_wsj_results_2ch}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{
l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
S[table-format=2.2,round-precision=2]
}
\toprule
{\multirow{2}{*}{Systems}} & {\multirow{2}{*}{$\Delta_l$}} & {\multirow{2}{*}{$\Delta_r$}} & {\multirow{2}{*}{Loss}} & {SI-SDR} & {SDR} & {\multirow{2}{*}{PESQ}} & {\multirow{2}{*}{eSTOI}} & {WER} \\
& & & & {(dB)} & {(dB)} & & & {(\%)} \\
\midrule
Unprocessed & - & - & - & -5.5 & -0.4 & 1.50 & 0.441 & 78.4 \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{sinrloss+mc}) & 17.8350 & 19.068708 & 3.6723616 & 0.945635 & 8.33 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag}) & 15.96 & 17.327 & 3.52 & 0.936 & 8.31 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag+MC}) & 17.7 & 18.9 & 3.68 & 0.950 & 7.68 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag+MC}) & 17.728 & 18.933 & 3.68 & 0.9494756 & 7.90 \\
DNN$_1$+MCMFWF+DNN$_2$ & 0 & 0 & (\ref{wav+mag+MC}) & 17.7790 & 18.9775 & 3.69560 & 0.950037 & 7.84 \\
DNN$_1$+MCMFWF+DNN$_2$ & 29 & 0 & (\ref{wav+mag+MC}) & 19.8989525 & 21.4730 & 3.79483 & 0.9647201 & \bfseries 7.12 \\
DNN$_1$+MCMFWF+DNN$_2$ & 15 & 14 & (\ref{wav+mag+MC}) & \bfseries 20.304 & \bfseries 21.9608 & \bfseries 3.8064 & \bfseries 0.96726 & 7.41 \\
\midrule
DNN$_1$+ConvBF+DNN$_2$ & 29 & {-} & (\ref{wav+mag+MC}) & 19.397328 & 20.861329 & 3.803369 & 0.960734 & 7.52 \\
\midrule
MC-ConvTasNet~\cite{ZhangJisi2020} & {-} & {-} & - & 5.8 & {-} & 2.16 & 0.720 & 45.7 \\
FasNet+TAC~\cite{Luo2020e2e} & {-} & {-} & - & 6.9 & {-} & 2.27 & 0.731 & 34.8 \\
MISO$_1$~\cite{Wang2020css} & {-} & {-} & - & 8.2 & {-} & 2.85 & 0.826 & 17.2 \\
MISO$_1$-BF-MISO$_3$~\cite{Wang2020css} & {-} & {-} & - & 12.7 & {-} & 3.43 & 0.907 & 10.7 \\
\makecell{DNN$_1$+(msFCP$_{\text{MVDR}}$\\+msFCP+DNN$_2$)$\times$2}~\cite{Wang2021FCPwaspaa} & {-} & {-} & - & 15.8 & {-} & 3.71 & {-} & 8.6 \\
\midrule
Oracle direct-path signal & {-} & {-} & - & $\infty$ & $\infty$ & 4.5 & 1.0 & 6.28 \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\end{table}
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on SMS-WSJ (6ch).}}
\vspace{-0.1cm}
\label{sms_wsj_results_6ch}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{
l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
S[table-format=2.2,round-precision=2]
}
\toprule
{\multirow{2}{*}{Systems}} & {\multirow{2}{*}{$\Delta_l$}} & {\multirow{2}{*}{$\Delta_r$}} & {\multirow{2}{*}{Loss}} & {SI-SDR} & {SDR} & {\multirow{2}{*}{PESQ}} & {\multirow{2}{*}{eSTOI}} & {WER} \\
& & & & {(dB)} & {(dB)} & & & {(\%)} \\
\midrule
Unprocessed & {-} & {-} & - & -5.5 & -0.4 & 1.50 & 0.441 & 78.4 \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{sinrloss+mc}) & 19.649174 & 20.993957 & 3.8725276 & 0.9611383543 & 7.63 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag}) & 19.4 & 20.8 & 3.83 & 0.964 & 6.92 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag+MC}) & 19.9 & 21.2 & 3.89 & 0.966 & 7.27 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag+MC}) & 19.920 & 21.209 & 3.89 & 0.966 & 7.34 \\
DNN$_1$+MCMFWF+DNN$_2$ & 0 & 0 & (\ref{wav+mag+MC}) & 20.068 & 21.4404 & 3.898 & 0.96655 & 7.28 \\
DNN$_1$+MCMFWF+DNN$_2$ & 9 & 0 & (\ref{wav+mag+MC}) & 22.551951 & 24.578 & 4.03515 & 0.977579 & \bfseries 6.65 \\
DNN$_1$+MCMFWF+DNN$_2$ & 5 & 4 & (\ref{wav+mag+MC}) & \bfseries 22.809367 & \bfseries 24.855 & \bfseries 4.07624 & \bfseries 0.979954 & 6.76 \\
\midrule
DNN$_1$+ConvBF+DNN$_2$ & 9 & {-} & (\ref{wav+mag+MC}) & 21.86570 & 23.6565 & 4.00062 & 0.97459 & 6.74 \\
\midrule
FasNet+TAC~\cite{Luo2020e2e} & {-} & {-} & - & 8.6 & {-} & 2.37 & 0.771 & 29.8 \\
MC-ConvTasNet~\cite{ZhangJisi2020} & {-} & {-} & - & 10.8 & {-} & 2.78 & 0.844 & 23.1 \\
MISO$_1$~\cite{Wang2020css} & {-} & {-} & - & 10.2 & {-} & 3.05 & 0.859 & 14.0 \\
LBT~\cite{Taherian2022LBT} & {-} & {-} & - & 13.2 & 14.8 & 3.33 & 0.910 & 9.6 \\
MISO$_1$-BF-MISO$_3$~\cite{Wang2020css} & {-} & {-} & - & 15.6 & {-} & 3.76 & 0.942 & 8.3 \\
\midrule
Oracle direct-path signal & {-} & {-} & - & $\infty$ & $\infty$ & 4.5 & 1.0 & 6.28 \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\subsubsection{Comparison in Multi-Channel, Single-DNN Setup}\label{results_multichannel_single_dnn}
Tables~\ref{sms_wsj_results_2ch} and \ref{sms_wsj_results_6ch} respectively present the results of TF-GridNet based DNN$_1$ for two- and six-channel separation on SMS-WSJ, and Table~\ref{wharmr_results_2ch} reports two-channel results on WHAMR!.
TF-GridNet shows substantially better performance than competing single-DNN approaches.
For example, in Table~\ref{sms_wsj_results_6ch} TF-GridNet obtains 19.9 dB SI-SDR, while FasNet+TAC~\cite{Luo2020e2e}, MC-ConvTasNet~\cite{ZhangJisi2020}, TCN-DenseUNet~\cite{Wang2020css} and LBT~\cite{Taherian2022LBT} respectively obtain 8.6, 10.8, 10.2 and 13.2 dB.
\subsubsection{Effectiveness of Including MFWF and Post-Filtering}\label{results_MCMFWF_post-filtering}
For the post-filtering network (i.e., DNN$_2$), which is trained as an enhancement network, we use the same configuration as DNN$_1$ but with $B=3$ TF-GridNet blocks.
Although TF-GridNet based DNN$_1$ already exhibits strong separation performance, we observe that using its outputs to compute an MFWF, followed by another TF-GridNet for post-filtering, still produces clear improvements.
This can be observed in Tables~\ref{sms_wsj_results_2ch} and \ref{sms_wsj_results_6ch} by comparing DNN$_1$+MCMFWF+DNN$_2$, DNN$_1$, and DNN$_1$+DNN$_2$ (which stacks two TF-GridNets without linear filtering in between).
In the monaural case, in Table~\ref{sms_wsj_results_1ch} DNN$_1$+SCMFWF+DNN$_2$ is also better than DNN$_1$.
\subsubsection{MFWF vs. Other Linear Filters}\label{comparison_MCMFWF_convbf_wpe}
In Tables~\ref{sms_wsj_results_2ch} and \ref{sms_wsj_results_6ch}, we observe that using an MCMFWF with both past and future context (i.e., $\Delta_l > 0$ and $\Delta_r > 0$) between DNN$_1$ and DNN$_2$ produces clear improvements over an MCMFWF with only past context (i.e., $\Delta_l > 0$ and $\Delta_r = 0$), an MCMFWF with no context (i.e., $\Delta_l = 0$ and $\Delta_r = 0$), and the convolutional beamformer.
In Table~\ref{sms_wsj_results_1ch}, in the single-channel case, the SCMFWF with both past and future context leads to better scores than WPE as well as the SCMFWF with only past context.
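To illustrate the filtering step, the NumPy sketch below shows one plausible per-frequency realization of a multi-frame Wiener filter with $\Delta_l$ past and $\Delta_r$ future taps, estimated by projecting the stacked mixture onto the DNN$_1$ estimate; the variable names, the wrap-around handling, and the diagonal-loading term are our own illustrative choices, not the exact implementation:
\begin{verbatim}
import numpy as np

def mfwf_per_freq(Y, s_hat, dl, dr, eps=1e-6):
    # Y:     (T, M) complex mixture STFT at one frequency (M microphones)
    # s_hat: (T,) complex DNN estimate of the target at the same frequency
    T, M = Y.shape
    taps = dl + 1 + dr
    Ybar = np.zeros((T, taps * M), dtype=complex)
    for i, d in enumerate(range(-dl, dr + 1)):
        # stack past (d < 0) and future (d > 0) frames; np.roll wraps at
        # the edges, which a careful implementation would zero-pad instead
        Ybar[:, i * M:(i + 1) * M] = np.roll(Y, -d, axis=0)
    # closed-form solution: w = (Ybar^H Ybar + eps I)^{-1} Ybar^H s_hat
    A = Ybar.conj().T @ Ybar + eps * np.eye(taps * M)
    w = np.linalg.solve(A, Ybar.conj().T @ s_hat)
    return Ybar @ w  # filtered output at this frequency
\end{verbatim}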
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on WHAMR! (1ch).}}
\vspace{-0.1cm}
\label{wharmr_results_1ch}
\setlength{\tabcolsep}{1.7pt}
\begin{tabular}
{l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
}
\toprule
Systems & $\Delta_l$ & $\Delta_r$ & Loss & {SI-SDR (dB)} & {SDR (dB)} & {PESQ} & {eSTOI} \\
\midrule
Unprocessed & {-} & {-} & - & -6.1 & -3.5 & 1.41 & 0.317 \\
\midrule
DNN$_1$ & {-} & {-}& (\ref{sinrloss+mc}) & 11.0 & 12.1 & 2.69 & 0.790 \\
DNN$_1$ & {-} & {-}& (\ref{wav+mag}) & 10.327445 & 11.39139 & 2.7156155 & 0.786574 \\
DNN$_1$ & {-} & {-}& (\ref{wav+mag+MC}) & 10.640508 & 11.664329 & 2.74717658 & 0.7932153 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag+MC}) & 10.702205 & 11.766915 & 2.7373689 & 0.79382971 \\
DNN$_1$+SCMFWF+DNN$_2$ & 20 & 19 & (\ref{wav+mag+MC}) & \bfseries 11.2242 & \bfseries 12.30882 & \bfseries 2.79482 & \bfseries 0.8077484 \\
\midrule
Conv-TasNet~\cite{Luo2019, Maciejewski2020} & {-} & {-} & -& 2.2 & {-} & {-} & {-} \\
SISO$_1$~\cite{Wang2020css} & {-} & {-} & -& 4.2 & 6.2 & 1.79 & 0.594 \\
3-Stage BLSTM-TasNet~\cite{Maciejewski2020} & {-} & {-} & -& 4.8 & {-} & {-} & {-} \\
Wavesplit~\cite{Zeghidour2020} & {-} & {-} & -& 5.9 & {-} & {-} & {-} \\
Gated DPRNN~\cite{Nachmani2020} & {-} & {-} & -& 6.1 & {-} & {-} & {-} \\
QDPN~\cite{Rixen2022QDPN} & {-} & {-} & -& 7.0 & {-} & {-} & {-} \\
DNN$_1$+(FCP+DNN$_2$)$\times$2~\cite{Wang2021FCPjournal} & {-} & {-} & -& 7.4 & 8.9 & 2.39 & 0.743 \\
\midrule
Wavesplit + DM~\cite{Zeghidour2020} & {-} & {-} & -& 7.1 & 8.7 & {-} & {-} \\
SuDoRM-RF + DM~\cite{Tzinis2020} & {-} & {-} & -& 7.4 & {-} & {-} & {-} \\
SepFormer + DM~\cite{Subakan2021, Subakan2022journal} & {-} & {-} & - & 7.9 & 9.5 & {-} & {-} \\
QDPN + DM~\cite{Rixen2022QDPN} & {-} & {-} & -& 8.3 & {-} & {-} & {-} \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\end{table}
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on WHAMR! (2ch).}}
\vspace{-0.1cm}
\label{wharmr_results_2ch}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
}
\toprule
Systems & $\Delta_l$ & $\Delta_r$ & Loss & {SI-SDR (dB)} & {SDR (dB)} & {PESQ} & {eSTOI} \\
\midrule
Unprocessed & {-} & {-} & - & -6.1 & -3.5 & 1.41 & 0.317 \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{sinrloss+mc}) & 12.832 & 13.955 & 3.00 & 0.844 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag}) & 12.0 & 13.2 & 3.01 & 0.839 \\
DNN$_1$ & {-} & {-} & (\ref{wav+mag+MC}) & 12.5 & 13.5 & 3.05 & 0.846 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag+MC}) & 12.5212 & 13.5727 & 3.0542 & 0.84615 \\
DNN$_1$+MCMFWF+DNN$_2$ & 15 & 14 & (\ref{wav+mag+MC}) & \bfseries 13.67305 & \bfseries 14.8236 & \bfseries 3.1639 & \bfseries 0.868275 \\
\midrule
MC-ConvTasNet~\cite{ZhangJisi2020, Zhang2021} & {-} & {-} & - & 6.0 & {-} & {-} & {-} \\
\makecell{MC-ConvTasNet with\\speaker extraction}~\cite{Zhang2021} & {-} & {-} & - & 7.3 & {-} & {-} & {-} \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\subsection{Results on WSJ0CAM-DEREVERB}\label{result_dereverb}
Starting from here, we set $I=4$, $J=2$, and $D=48$. $J$ is increased to $2$ as the sample rate increases to 16 kHz.
The other setups are the same as those in the previous subsection.
Tables~\ref{dereverb_1ch_results} and \ref{dereverb_8ch_results} respectively present the results of using TF-GridNet for one- and eight-channel dereverberation.
Trained to perform complex spectral mapping, DNN$_1$ based on TF-GridNet achieves substantially better performance than SISO$_1$ (16.6 vs. 8.4 dB SI-SDR in Table~\ref{dereverb_1ch_results}) and MISO$_1$ (19.9 vs. 11.3 dB SI-SDR in Table~\ref{dereverb_8ch_results}) proposed in~\cite{Wang2021LowDistortion}, which also use complex spectral mapping but with TCN-DenseUNet.
With beamforming and post-filtering, DNN$_1$+MCMFWF+DNN$_2$ based on TF-GridNet also shows better results than the competing approach (21.2 vs. 18.2 dB SI-SDR) in~\cite{Wang2021LowDistortion}, which uses two TCN-DenseUNets with a composition of linear filters.
From the last two rows of Table~\ref{dereverb_8ch_results}, we notice that, based on TCN-DenseUNet, using complicated sub-band linear filtering followed by post-filtering (i.e., the last row) produces a large improvement over MISO$_1$ (18.2 vs. 11.3 dB SI-SDR)~\cite{Wang2021LowDistortion}.
This indicates that the sub-band linear filters can model what TCN-DenseUNet, which performs full-band modeling, is not good at modeling.
In comparison, using a single TF-GridNet alone is already better than the last two rows (i.e., 19.9 vs. 11.3 and 18.2 dB SI-SDR) and the improvement brought by beamforming and post-filtering is not large (21.2 vs. 19.9 dB SI-SDR).
This indicates that TF-GridNet can, to a large extent, model the information with which sub-band linear filters complement full-band models, likely through its sub-band temporal modules.
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on WSJ0CAM-DEREVERB (1ch).}}
\vspace{-0.1cm}
\label{dereverb_1ch_results}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
}
\toprule
Systems & $\Delta_l$ & $\Delta_r$ & Loss & {SI-SDR (dB)} & {PESQ} & {eSTOI} \\
\midrule
Unprocessed & {-} & {-} & - & -3.6 & 1.64 & 0.494 \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{wav+mag}) & 16.6 & 3.72 & 0.947 \\
DNN$_1$+DNN$_2$ & {-} & {-}& (\ref{wav+mag}) & 16.9845 & 3.7677 & 0.94836 \\
DNN$_1$+SCMFWF+DNN$_2$ & 20 & 19 & (\ref{wav+mag}) & \bfseries 17.3 & \bfseries 3.78 & \bfseries 0.950 \\
\midrule
SISO$_1$~\cite{Wang2021LowDistortion} & {-} & {-} & - & 8.4 & 3.12 & 0.868 \\
DNN$_1$+(FCP+DNN$_2$)$\times$2~\cite{Wang2021FCPjournal} & {-} & {-} & - & 12.7 & 3.46 & {-} \\
\makecell{SISO$_1$+FCP$_{\text{WPE}}$+WPE+SISO$_5$}~\cite{Wang2021LowDistortion} & {-} & {-} & - & 12.7 & 3.49 & 0.919 \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\end{table}
\begin{table}[]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on WSJ0CAM-DEREVERB (8ch).}}
\vspace{-0.1cm}
\label{dereverb_8ch_results}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{l
c %
c %
c
S[table-format=2.1,round-precision=1]
S[table-format=1.2,round-precision=2]
S[table-format=1.3,round-precision=3]
}
\toprule
Systems & $\Delta_l$ & $\Delta_r$ & Loss & {SI-SDR (dB)} & {PESQ} & {eSTOI} \\
\midrule
Unprocessed & {-} & {-} & - & -3.6 & 1.64 & 0.494 \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{wav+mag}) & 19.9 & 3.95 & 0.971 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag}) & 20.288 & 4.00 & 0.972 \\
DNN$_1$+MCMFWF+DNN$_2$ & 4 & 3 & (\ref{wav+mag}) & \bfseries 21.2 & \bfseries 4.02 & \bfseries 0.975 \\
\midrule
MISO$_1$~\cite{Wang2021LowDistortion} & {-} & {-} & - & 11.3 & 3.49 & 0.921 \\
\makecell[l]{MISO$_1$+FCP$_{\text{mWMPDR}_{\text{WPE}}}$+\\\,\,\,\,mWMPDR$_{\text{WPE}}$+WPE+MISO$_{10}$}~\cite{Wang2021LowDistortion} & {-} & {-} & - & 18.2 & 3.98 & 0.967 \\
\bottomrule
\end{tabular}
\vspace{-0.2cm}
\end{table}
\begin{table}[h!]
\scriptsize
\centering
\sisetup{table-format=2.2,round-mode=places,round-precision=2,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\caption{\textsc{Results on L3DAS22 3D Speech Enhancement Task (8ch).}}
\vspace{-0.1cm}
\label{l3das_results}
\setlength{\tabcolsep}{1.7pt}
\begin{tabular}{l
c %
c %
c
S[table-format=2.2,round-precision=2]
S[table-format=1.3,round-precision=3]
S[table-format=1.3,round-precision=3]
}
\toprule
Systems & $\Delta_l$ & $\Delta_r$ & Loss & {WER (\%)} & {STOI} & {Task1Metric} \\
\midrule
DNN$_1$ & {-} & {-} & (\ref{wav+mag+GEQ}) & 1.6776608371442375 & 0.98774326113973 & 0.9854833263841438 \\
DNN$_1$+DNN$_2$ & {-} & {-} & (\ref{wav+mag+GEQ}) & 1.625296164170487 & 0.9888854462821834 & 0.9863162423202393 \\
DNN$_1$+MCMFWF+DNN$_2$ & 4 & 3 & (\ref{wav+mag+GEQ}) & \bfseries 1.2931165717770235 & \bfseries 0.9937596966842833 & \bfseries 0.9904142654832564 \\
\midrule
Winner system (ESP-SE)~\cite{Lu2022} & {-} & {-} & - & 1.89 & 0.987 & 0.984 \\
Runner-up system (BaiduSpeech)~\cite{Zhang2022BaiduSpeechL3DAS} & {-} & {-} & - & 2.50 & 0.975 & 0.975 \\
3rd-place system (PCG-AIID)~\cite{Li2022PCGL3DAS} & {-} & {-} & - & 3.20 & 0.972 & 0.970 \\
Challenge baseline~\cite{ren2021neural} & {-} & {-} & - & 21.2 & 0.878 & 0.833 \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\subsection{Results on L3DAS22}\label{result_L3DAS_description}
L3DAS22 requires participants to estimate the dry source signal.
Following Eq.~(\ref{wav+mag}), we define the loss as
\begin{align}\label{wav+mag+GEQ}
&\mathcal{L}_{\text{Wav+Mag,GEQ}} =
\sum\nolimits_{c=1}^C \Big(\frac{1}{N} \big\| \hat{\alpha}^{(c)} \hat{o}^{(c)} - o^{(c)} \big\|_1 + \nonumber \\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\frac{1}{T\times F} \Big\| \Big|\text{STFT}(\hat{\alpha}^{(c)}\hat{o}^{(c)})\Big| - \Big|\text{STFT}(o^{(c)})\Big| \Big\|_1\Big),
\end{align}
where $o^{(c)}$ denotes the dry source signal of speaker $c$ (see our physical model in Eq.~(\ref{eq:phymodel_time})) and $\hat{\alpha}^{(c)} = {{\text{argmin}}}_{\alpha}\,\| \alpha \hat{o}^{(c)} - o^{(c)} \|_2^2=(\hat{o}^{(c)})^{{\mathsf T}}o^{(c)}/(\hat{o}^{(c)})^{{\mathsf T}}\hat{o}^{(c)}$ is a gain equalization (GEQ) factor~\cite{LeRoux2019, Lu2022} that allows the estimated speech to have an energy level different from that of the target speech.
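A minimal NumPy sketch of this loss for a single speaker follows; \texttt{stft} is an assumed helper returning a complex $T\times F$ spectrogram, and the means implement the $1/N$ and $1/(T\times F)$ normalizations:
\begin{verbatim}
import numpy as np

def wav_mag_geq_loss(o_hat, o, stft):
    # o_hat, o: (num_samples,) estimated and dry source signals of a speaker
    alpha = np.dot(o_hat, o) / np.dot(o_hat, o_hat)  # closed-form GEQ gain
    wav_term = np.abs(alpha * o_hat - o).mean()      # (1/N) L1 on waveforms
    mag_term = np.abs(np.abs(stft(alpha * o_hat))
                      - np.abs(stft(o))).mean()      # L1 on magnitudes
    return wav_term + mag_term  # summed over speakers in the full loss
\end{verbatim}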
Table~\ref{l3das_results} reports the results.
A single TF-GridNet (i.e., DNN$_1$) already outperforms our winning solution~\cite{Lu2022} and the remaining 16 submissions (see this link\footnote{\url{https://www.l3das.com/icassp2022/results.html}} for the challenge ranking), including the runner-up system~\cite{Zhang2022BaiduSpeechL3DAS}, whose monaural version~\cite{Zhang2022Axial} won the recent DNS2022 and AEC2022 challenges.
Including beamforming and post-filtering yields further improvement.
Here the MCMFWF is computed similarly to Eq.~(\ref{MCMFWF}), but we project the far-field B-format Ambisonic mixture onto the dry source signal estimated by DNN$_1$, so that the beamforming result can be time-aligned with the dry target, provided that the DNN$_1$ estimate is reasonably accurate, which the DNN$_1$ row suggests is the case.
In comparison, a DNN-supported convolutional beamformer cannot produce an estimate time-aligned with the dry source, and it is unclear how to modify it to handle B-format Ambisonic signals.
\section{Conclusion}\label{conclusion}
We have proposed TF-GridNet, a multi-path DNN architecture modeling complex spectrograms, for single- and multi-channel speech separation.
By integrating full- and sub-band modeling inside TF-GridNet and outside through beamforming and post-filtering, the proposed systems achieve state-of-the-art performance for speech separation in noisy-reverberant conditions on multiple public datasets.
We will release the code of TF-GridNet in the ESPnet-SE++ toolkit~\cite{Lu2022ESPNetSE++}.
TF-GridNet obtains a state-of-the-art $23.5$ dB SI-SDRi on WSJ0-2mix.
This result highlights the strong performance of T-F domain models for anechoic speaker separation as well, suggesting that T-F domain methods modeling complex representations, which implicitly perform phase estimation by simultaneously predicting target RI components, are not sub-optimal compared to time-domain approaches for this task.
The performance differences between the two approaches observed in earlier studies could mainly result from differences in their DNN architectures. %
The major limitation of TF-GridNet is its sizable amount of computation, because in each TF-GridNet block we run a BLSTM within each frame and another BLSTM within each frequency.
To reduce computation, we can increase the stride size $J$ up to the kernel size $I$ in the Unfold and Deconv1D operations (if $J$ is doubled, the amount of computation is approximately halved), reduce the hidden dimension $H$ of the BLSTMs, and reduce the embedding dimension $D$, at the cost of some degradation in separation performance.
We can also replace BLSTMs with DNN blocks that can process all the steps in each sequence in parallel to make inference faster.
In closing, we emphasize that the patterns of speech spectrograms vary with frequency, and that full-band or sub-band modeling alone is likely not capable of sufficiently modeling such variations.
Our proposed ways to integrate them exhibit excellent performance in our experiments.
The meta-idea of integrated full- and sub-band modeling, we believe, would motivate the design of many new algorithms in future research on neural speech separation.
\section{Acknowledgments}\label{ack}
We would like to thank Dr. Wangyou Zhang at SJTU for generously sharing his reproduced code of TFPSNet.
\bibliographystyle{IEEEtran}
\section{Introduction}
Debris disks represent the late stage of planet formation when the primordial disk has dissipated and the accretion of giant planets has concluded, but rocky planets may still continue to accrete from a disk of icy/rocky minor bodies. Collisional cascades of the small bodies generate copious amounts of fine dust (e.g., \citealt[][]{Gaspar2012}), often detectable in thermal emission and sometimes in scattered light. With typical ages between ten to a few hundred million
years, bright debris disks provide insights into the properties and evolution of young planetary systems.
The distribution of fine dust in these systems can reveal the structure of the underlying planetary systems (e.g. \citealt[][]{StarkKuchner2008,Apai2008,Roberge2009,Su2013})
and allows statistical studies of the planetary system properties, dynamics, and collisional history (for reviews see \citealt[][]{Meyer2007,Wyatt2008,MatthewsPPVI}).
In addition, debris disks likely pose a critical limitation on future missions to directly image terrestrial exoplanets \citep[][]{Guyon2006,Millan-Gabet2011}.
However, debris disks around stars known to host exoplanets remain very rare, making it difficult to establish a quantitative link between disk structures and planets. $\beta$~Pic is a unique system hosting the best-studied debris disk and in it a directly imaged super-jupiter. This system is also nearby (19.44$\pm$0.05~pc, \citealt[][]{vanLeeuwen2007}), allowing high physical resolution with current instrumentation.
The $\beta$~Pic disk is massive and exceptionally bright ($F_{disk}/F_{\star}$ = $\sim$2.5$\times10^{-3}$, \citealt[][]{Lagrange2000}),
with a diameter exceeding 400 au (e.g. \citealt[][]{SmithTerrile1984}).
The disk has been imaged with multiple instruments, but
the most spectacular and most detailed images have been obtained with
HST/STIS and HST/ACS/HRC (\citealt[][]{Heap2000,Golimowski2006}).
Perhaps the most perplexing structure in $\beta$~Pic is a warp {\em or} secondary disk inclined $\simeq5^\circ$
to the main disk.
The STIS coronagraphic images have been interpreted as a warped disk \citep[][]{Heap2000}, while
\citet[][]{Golimowski2006} have used deconvolved HST/ACS images to argue for the presence of a primary and a smaller, fainter, and inclined secondary disk. In the deconvolved ACS images the two wings of the main disk show prominent NW-SE asymmetry and at least three
breaks between 40 and 250 au in its radial surface brightness distribution, indicative of
changes in the disk structure or in dust grain properties. These breaks in surface brightness
distribution may emerge from dust belts and/or planetesimals belts \citep[e.g.][]{Wilner2011}.
The NW and SW wings of the disk also have different optical colors; in addition, while the
NW wing is linear, the SW wing is bowed.
In the HST/ACS images the two wings of the secondary disk follow similar radial brightness profiles, suggesting
identical radial density distributions and dust grain populations \citep[][]{Golimowski2006}.
Surprisingly, however, these two linear wings are not collinear.
In summary, the scattered-light $\beta$~Pic disk has a complex structure with evidence for different grain
populations and radial density profiles, as well as breaks, discontinuities or asymmetries in
the central region of the disk. Planetary perturbations have been invoked to explain the large-scale asymmetries in the main disk or the secondary
disk/warp \citep[e.g.][]{Mouillet1997, Augereau2001,Golimowski2006}.
Recently, \citet[][]{Lagrange2009} and \citet[][]{Lagrange2010} announced the discovery and
confirmation of a giant planet orbiting $\beta$~Pictoris. The planet has an estimated mass of 8--10 M$_{\mathrm J}$
and an orbital semi-major axis of about 8 au \citep[][]{Chauvin2012}. Since the discovery of the planet its motion has
been closely monitored and its orbit has been gradually refined \citep[e.g.][]{Quanz2010,Bonnefoy2011,Chauvin2012, Males2014,Nielsen2014,Macintosh2014}.
The planet's orbit was found to be slightly inclined ($\sim1^\circ$) with respect to the main disk and to have a relatively low eccentricity (e$<$0.1, \citealt[][]{Lagrange2012,Macintosh2014}).
Recent observations with Herschel \citep[][]{Vandenbussche2010} and SMA \citep[][]{Wilner2011}
sampled the spectral energy distribution of $\beta$~Pic at far-infrared and millimeter wavelengths and provided resolved
or marginally resolved images of the disk. Most interestingly, \citet[][]{Wilner2011} identified
two millimeter clumps at radii $\sim$3\farcs5 and argued that these indicate a population of
planetesimals in a ring at $r=94\pm8$~au -- inclined with respect to the midplane -- whose collisions serve as a source of dust grains.
A few months ago \citet[][]{Dent2014} published ALMA high-resolution (12 au) and high-sensitivity 870~$\mu$m continuum and $^{12}$CO 3-2 transition observations of the $\beta$~Pic disk. The continuum images revealed that the southwestern disk is brighter than the northeastern one and shows a peak at 60 au (3\farcs08) with a tail extending beyond 4\farcs0. The CO distribution also shows excess emission on the SW side, but it peaks at a larger separation (85 au or 4\farcs4). Unlike the continuum emission, the CO clump detected by ALMA peaks above the disk midplane ($\sim$5 au). The velocity-based de-projection of the CO data argues for, but does not prove, a broad CO clump in the Earth-facing SW quarter of the disk with an orbital radius of 85~au and a CO tail extending toward the NE side of the disk \citep[][]{Dent2014}. The dust mass is estimated to be $4.7\pm0.5 \times 10^{23}$~kg (6.4 M$_{Moon}$), while the CO mass is approximately $1.7 \times 10^{20}$~kg (0.0023 M$_{Moon}$). The presence of such large amounts of CO gas is surprising, given its short dissociation lifetime ($\sim$120~yrs), and poses a problem similar to that seen in another old debris disk (HD 21997, \citealt[][]{Kospal2013}).
\citet[][]{Dent2014} identify two possible scenarios to explain the morphology of the CO and continuum observations: In the first, the clumps emerge from collisions of planetesimals trapped in 2:1 and 3:2 mean motion resonance with an outward migrating $>$10~M$_{Earth}$ planet \citep[][]{Wyatt2003}. In the second, a single, massive, and recent ($\sim$0.5~Myr) collision with an approximately Mars-sized parent body injects dust and CO gas into the system.
In a 2014 VLT adaptive optics study \citet[][]{Milli2014} imaged the disk between 0\farcs4 and 3\farcs8 in L$^\prime$. They report an overall bowed disk structure, particularly prominent in the inner disk ($<$1" from the star). Based on an anisotropic scattering model they explain the key features in the disk surface brightness distribution (bowed structure, surface brightness profile, and the warp) with a two-component disk: an outer (main) disk extending outward from an inner radius ($r_{warp}$), and an inner warped disk inclined at 4$^\circ$ with respect to the main disk. In their model, the entire disk is inclined toward the observer at an inclination angle of 86$^\circ$.
In this study we explored two questions important to understanding
the $\beta$~Pictoris system: {\em i) What is the structure of the inner disk?}, and {\em ii) Are there planetesimal groups in orbits
resonant or co-moving with $\beta$~Pic~b?} We addressed these questions by obtaining high-quality coronagraphic images
of the system using the Hubble Space Telescope. The new optical disk images have the smallest inner working angles yet ($\sim0\farcs35$), allowing
detailed studies of the inner disk free of image artifacts at the small stellocentric distances probed previously. We also re-reduced data from 1997, obtained in a similar mode but with a larger inner
working angle, now applying PSF-template-subtracted coronagraphy. By comparing the disk structure in the two epochs we searched for temporal evolution in the inner disk over
timescales comparable to the orbital period of the planet and similar to radiation pressure timescales.
The paper is organized as follows. We first review the observations, data reduction and basic
data analysis (Section~\ref{Sect:Observations}). Section~\ref{DiskAngle} describes the disk's orientation and overall shape, Section~\ref{DiskSurfaceBrightnessProfiles}
discusses the surface brightness profiles along various directions and the large-scale asymmetry of the disk, Section~\ref{VerticalStructure} explores the vertical structure of the disk and the warp through various methods, Section~\ref{S:Orbit} compares the orbit of the planet to the inner disk structure and explores temporal evolution of the scattered light disk. Section~\ref{S:DiskStructure} provides a multi-wavelength view of the disk. Section~\ref{Sect:Models} discusses our observational results in the context of disk--planet interaction models. Finally, Section~\ref{Summary} summarizes the key findings of our work.
\section{Observations and Data Reduction}
\label{Sect:Observations}
In this paper we present a re-reduction of a 1997 HST/STIS (Space Telescope Imaging Spectrograph, \citealt[][]{Woodgate1998}) archival dataset and also introduce a new set of HST/STIS coronagraphic observations we obtained in 2012. STIS coronagraphy utilizes an image plane mask including two orthogonal wedge occulters that is inserted in the STIS focal plane. STIS cannot use filters in its coronagraphic (50CORON) mode; thus, the images described below are
unfiltered data whose wavelength coverage is set only by the spectral response of the STIS CCD detector ($\sim$200--1,050 nm, with $\lambda_{pivot}$=575.2 nm and FWHM=433 nm for a spectrally flat source). Based on the STIS Exposure Time Calculator 22.1.1 and assuming an input stellar spectrum of type A5V (corresponding to $\beta$~Pic) we find that the effective wavelength (the wavelength at which incoming photons generate the most electrons) is 571.3~nm.
The STIS CCD detector is a 1024$\times$1024 pixel UV-enhanced chip with an image pixel scale of 0\farcs05077 per pixel and at these wavelengths a resolution element is $\sim$60~mas.
\subsection{The 1997 STIS Observations}
The 1997 observations and their analysis are described in detail in \citet[][]{Heap2000} and we only give a brief summary of the data we have used here. Table~\ref{T:observations} summarizes the observations. The first epoch observations were carried out in program GO-7125 (PI: Heap) on Sep 16-17, 1997 with STIS occulting wedge B. $\beta$~Pic was observed in three non-contiguous orbits (program visits \#4-6, archived raw dataset IDs: O4204, 05, 06). Prior data from visits 1--3 using wedge A failed or were degraded and ill-suited for an investigation.
In the first wedge B orbit the disk was placed halfway between the telescope's diffraction spikes and (nearly) orthogonal to the occulting wedge, while the second and third orbits were executed with spacecraft off-rolled by $-12^\circ$ and $+14^\circ$ relative to the first orbit. During the observations $\beta$~Pic was occulted by WedgeB2.0 and WedgeB1.0. At each location
a series of images were taken with 5~s and 3~s integration times, respectively. Here we only re-reduced the WedgeB1.0 images, for lack of a suitable Wedge B2.0 reference PSF.
\subsection{The 2012 STIS Observations}
Our new STIS observations were carried out on March 6, 2012 and used a refined observing strategy that resulted in higher signal-to-noise, weaker PSF residuals, and an even smaller inner working angle. Our observing strategy followed \citet[][]{Schneider2014}, using multiple roll angles, coronagraphic wedge positions, and contemporaneous observations of a color- and brightness-matched reference star to minimize PSF residuals.
The data were acquired in program GO-12551 (PI: Apai). Table~\ref{T:observations} gives a summary of the observations. In this program using three contiguous orbits we targeted $\beta$~Pic in the first and third orbits and the PSF reference star ($\alpha$~Pic) in the second orbit.
The PSF star ($\alpha$~Pic, V=3.3, B$-$V= 0.18) was selected to closely match the celestial position, apparent magnitude, and color of $\beta$~Pic (V=3.86, B$-$V=0.17). We executed spacecraft rolls between the three orbits to recover (in combination) azimuthal sectors in the celestial frame otherwise unimaged due to the imposition of the coronagraphic wedge or degraded by the telescope diffraction spikes, and to further reduce the rotationally non-invariant components of the PSF-subtraction residuals.
In each of the three orbits we used a combination of short, medium, and long integrations (13.2\,s, 48\,s, 240\,s) at 0\farcs6 and 1\farcs0 wedge width locations; while the
0\farcs6 location was observed in WedgeA, the 1\farcs0 wedge location was observed both at the A- and B coronagraphic wedges of STIS.
The inner working angle (IWA) equals the half-width of the occulting wedge at the stellar position: the WEDGEA0.6 and WEDGEB0.6 positions provide a 0\farcs3 IWA, and the WEDGEA1.0 and WEDGEB1.0 positions a 0\farcs5 IWA.
Due to the very high contrast in the image between the star, the inner disk, and the outer disk, no single image has suitable dynamic range to capture the scattered-light surface brightness levels at all radii; therefore, we used short integrations to probe the stellar PSF and the inner disk, and longer integration times to probe the outer disk (while saturating the inner disk). The combination of the non-saturated pixels in images with different integration times allowed us to significantly increase the effective dynamic range of our final images and to probe the disk with high signal-to-noise over a surface brightness range of more than $10^5$. For more details on the image combination we refer the reader to \citet[][]{Schneider2014}.
\begin{table}
\begin{center}
\caption{Summary of the STIS observations used in this work.\label{T:observations}}
\begin{tabular}{lccccccc}
\tableline\tableline
Prog. & Visit \# & Target & Date & Int. Times [s] & ORIENT$^a$ & Aperture \\
\tableline
7125 & 04 & $\beta$~Pic & 1997-09-17 & 16$\times$3 & 210.07 & WEDGEB1.0 \\
7125 & 05 & $\beta$~Pic & 1997-09-17 & 16$\times$3 & 224.07 & WEDGEB1.0 \\
7125 & 06 & $\beta$~Pic & 1997-09-17 & 16$\times$3 & 198.07 & WEDGEB1.0 \\
\hline
12551 & 01 & $\beta$~Pic & 2012-03-6 & 11$\times$1.2 & 239.15 & WEDGEA0.6 \\
12551 & 01 & $\beta$~Pic & 2012-03-6 & 16$\times$3.0, 4$\times$60.0 & 239.15 & WEDGEA1.0 \\
12551 & 01 & $\beta$~Pic & 2012-03-6 & 4$\times$60.0, 16$\times$3.0 & 239.15 & WEDGEB1.0 \\
\hline
12551 & 02 & $\alpha$~Pic & 2012-03-6& 11$\times$0.7 & 245.96 & WEDGEA0.6 \\
12551 & 02 & $\alpha$~Pic & 2012-03-6& 4$\times$36.0, 16$\times$1.9 & 245.96 & WEDGEA1.0 \\
12551 & 02 & $\alpha$~Pic & 2012-03-6& 16$\times$1.9, 4$\times$36.0& 245.96 & WEDGEB1.0 \\
12551 & 02 & $\alpha$~Pic & 2012-03-6& 11$\times$0.7, 2$\times$(3$\times$0.7)$^b$ & 245.96 & WEDGEB0.6 \\
\hline
12551 & 03 & $\beta$~Pic & 2012-03-6 & 11$\times$1.2 & 272.62 & WEDGEA0.6 \\
12551 & 03 & $\beta$~Pic & 2012-03-6 & 16$\times$3.0, 4$\times$60.0 & 272.62 & WEDGEA1.0 \\
12551 & 03 & $\beta$~Pic & 2012-03-6& 4$\times$60, 16$\times$3.0 & 272.62 & WEDGEB1.0 \\
\tableline
\end{tabular}
{$^a$}{ORIENTAT: position angle of the image +Y axis measured eastward from celestial north.}
{$^b$}{Additional two sets of three exposures each with ${1 \over 4}$ pixel in Y offsets for calibration purposes.}
\end{center}
\end{table}
\begin{figure}
\epsscale{1.20}
\plottwo{CompleteImage7125_LogScale}{CompleteImage12551_LogScale}
\caption{Direct comparison of the two STIS data sets on the $\beta$~Pictoris disk. {\em Left:} Our re-reduction of the
STIS image taken in GO-7125 in 1997. {\em Right:} New STIS images taken in our program GO-12551 provide
higher signal-to-noise and better PSF-subtraction, as well as smaller inner working angle. The two images are shown
here in a logarithmic stretch, with instrumental brightness units of counts per second per pixel; North is up. \label{Fig-Complete}}
\end{figure}
\subsection{Coronagraphic Data Reduction}
Our data reduction follows that of \citet[][]{Schneider2009}. Here we provide a brief summary of the key steps
and differences from that dataset and note that the GO 7125 and GJ12551 images were reduced separately.
First, the {\tt RAW} files were reprocessed with the STIS basic calibration package {\tt calstis} using the
most up-to-date dark and bias reference files closely contemporaneous with the observations. Each instrumentally calibrated image was then manually inspected for anomalies, but
none were found. Within each visit groups of exposures with the same setup (target, orientation, pointing, aperture, and exposure time) were median combined. Next, for each median-combined image we used small sub-arrays located far from the target to estimate
the combination of the sky and instrumental background, which were then subtracted from the images.
By dividing each background-subtracted median combined image with the exposure of the individual images we calculated the count rates per pixel. We next
located the positions of the target (occulted star) by identifying the intersection of two lines fit to the diffraction
spikes (see \citealt[][]{Schneider2014} for details of the process).
The above steps have been repeated on every frame taken on $\beta$~Pic and the PSF reference star.
In the next steps, from each $\beta$~Pic image we subtracted a matching PSF template. Because no PSF template star observations were taken in GO~7125, from these images we subtracted the WedgeB PSF template derived from the 2012-epoch GO~12551 observations. The precise alignment of the target and PSF images is critical, and we used a two-step alignment procedure. First, the target and template images were co-aligned using fractional pixel offsets to the initial centers measured from the diffraction spike-fitting procedure (\citealt[][]{Schneider2009}, \S 4.2).
Then we refined the alignment between the target and PSF images by iteratively minimizing the diffraction spike residuals
in regions of the PSF-subtracted images where the disk flux did not dominate, varying three free parameters ($\Delta X$, $\Delta Y$, and an intensity scaling). We iterated all three parameters until convergence was reached in minimizing the subtraction residuals. This solution was also verified by visual inspection. This two-step alignment procedure was repeated for every target--PSF image pair.
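For readers wishing to reproduce this step, the sketch below casts the second-stage alignment as a brute-force search over sub-pixel offsets and an intensity scale that minimizes the diffraction-spike residuals; in practice we iterated the three parameters to convergence rather than scanning a fixed grid, and all names here are illustrative:
\begin{verbatim}
import numpy as np
from scipy.ndimage import shift

def refine_alignment(target, psf, spike_mask, dxs, dys, scales):
    # spike_mask: 1 where diffraction spikes dominate over the disk flux
    best = (np.inf, 0.0, 0.0, 1.0)
    for dx in dxs:
        for dy in dys:
            shifted = shift(psf, (dy, dx), order=3)  # sub-pixel shift
            for s in scales:
                resid = np.nansum(((target - s * shifted) * spike_mask)**2)
                if resid < best[0]:
                    best = (resid, dx, dy, s)
    return best  # (residual, dx, dy, intensity scale)
\end{verbatim}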
After precise target and template alignment and PSF subtraction the images were rotated to a common "north up" orientation about the occulted star; image edge clipping was avoided by embedding the images in a larger, zero--padded image. For each image we created a binary "bad" data mask that flagged all pixels that were saturated, corrupted by diffraction spikes, covered or affected by the wedges, or otherwise significantly degraded. For the short Wedge0.6A/B images this mask also included pixels at large stellocentric angles where the signal-to-noise was low. Finally, the rotated, masked images were median combined to create the final analysis-quality images from the GO~7125 and GO~12551 data sets.
Our final images reveal the complex debris disk around $\beta$~Pic between 0\farcs35 to about 13",
providing an image with the highest quality yet and also the smallest working angle (Fig.~\ref{Fig-Complete}) at visible wavelengths.
Figure~\ref{Fig-SNRMap} shows our estimated signal-to-noise maps of the re-reduced 1997 and the new 2012 images. The maps were calculated by dividing the instrumental counts by the noise in the images, both averaged over 3$\times$3 STIS pixels. The noise at each pixel position was estimated as the standard deviation of the counts in the given pixel across all frames that covered that pixel. Note that the reliability of the noise estimate for any given pixel is sensitive to the number of valid pixels that covered that location.
To demonstrate the improvement in image quality from the re-reduced GO-7125 data to the GO-12551 data, we note that the signal-to-noise integrated over a 2$\times$2 pixel area in the 1997 images is 20, while it is 40 in the 2012 images (in the disk midplane at a separation of 35 pixels or 1\farcs77); in the better-sampled closer-in regions the improvement is even more significant: the 2012 images reach a signal-to-noise of 120, while the 1997 images have a signal-to-noise of 12 (at 21 pixels or 1\farcs06) after our reprocessing.
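Schematically, the maps correspond to the following simplified computation (the handling of masked pixels in our actual reduction is more careful):
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

def snr_map(frames):
    # frames: (N, ny, nx) stack of registered count-rate images
    signal = np.nanmedian(frames, axis=0)
    noise = np.nanstd(frames, axis=0)  # per-pixel scatter across frames
    # average both over 3x3 pixels before dividing, as described above
    s = uniform_filter(np.nan_to_num(signal), size=3)
    n = uniform_filter(np.nan_to_num(noise), size=3)
    return np.where(n > 0, s / n, np.nan)
\end{verbatim}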
In Figure~\ref{Fig-BigDisk} we plot the two images with the main disk axis aligned horizontally (assuming a position angle P.A.=29$^\circ$.1) and by applying a radial brightness scaling ($\times r^{1.8}$) that allows easy comparison of the inner and outer disk structures. Here the 1.8 exponent is not physically motivated but was chosen for visual illustration in collapsing the radial dynamic display range.
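The $r^{1.8}$ scaling simply multiplies each pixel by its stellocentric distance raised to that power, e.g.:
\begin{verbatim}
import numpy as np

def radial_scale(image, star_yx, exponent=1.8):
    # multiply each pixel by r**exponent to compress the radial dynamic
    # range for display; the exponent is not physically motivated
    y, x = np.indices(image.shape)
    r = np.hypot(y - star_yx[0], x - star_yx[1])
    return image * r**exponent
\end{verbatim}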
\section{Disk Orientation and Surface Brightness Profile}
\subsection{Disk Position Angle}
\begin{figure}
\epsscale{1.0}
\plotone{DirectComparison.ps}
\caption{Direct comparison of the two disk images obtained from the re-reduction of the 1997 dataset and from
the new 2012 STIS images. While the 1997 data have a slightly larger effective field of view (not shown here), the 2012 data have, importantly, a smaller inner working angle and higher signal-to-noise. These images have been multiplied by an $r^{1.8}$ function to enhance the fainter
outer disk's visibility while simultaneously showing the inner disk. The image has been rotated counter-clockwise by 60.9$^\circ$; the NE side of the disk is on the left.\label{Fig-BigDisk}}
\end{figure}
\begin{figure}
\epsscale{1.0}
\plotone{SNR_Map.ps}
\caption{Signal-to-noise ratio maps for the combined 2012 and 1997 images. The 2012 images provide higher signal-to-noise ratio across the image and, in particular, at small inner working angles close to the star. The images have been rotated counter-clockwise by 60.9$^\circ$; the NE side of the disk is on the left.\label{Fig-SNRMap}}
\end{figure}
\label{DiskAngle}
We determined the main disk's position angle by measuring the directions of vectors pointing from the star's position to the isophotally determined disk mid-plane at stellocentric separations of 11.2" in both the northeastern and southwestern wings, for both the 1997 and the 2012 images.
Because the disk is not completely straight we did not attempt an algorithmic determination of the disk angle but instead opted for the manual identification of the brightest pixel at the given radii and calculated the slope of the line connecting these points. With this procedure we found a disk position angle of $29.17^\circ$ for the 1997 image and of $29.05^\circ$ for the 2012 image, each with an estimated uncertainty of $\pm0.1^\circ$. This uncertainty corresponds to a 1 pixel uncertainty in the position of the disk midplane over the full visible disk in our images, which is realistic given our manual fitting procedure. However, this uncertainty does not include the uncertainty in the spacecraft roll angle, which is typically less than $0.1^\circ$ \citep[][]{Kalas2013}.
To estimate the uncertainty in the absolute spacecraft roll angle between the two epochs we searched for the minimum of the difference between the x-y aligned 1997 and 2012 images on a finely sampled grid of rotations (0.05$^\circ$ per grid point). We found that a 0.0$^\circ$ relative rotation produces the smallest difference, i.e., the HST astrometric reference frame is highly reproducible between the two images, even though different guide stars were used in the two epochs at different spacecraft orientation angles. This finding reinforces our conclusion that the uncertainty of our disk angle measurement is dominated by the uncertainty in the disk midplane {\em determination}, and not by differences in the spacecraft's astrometric reference frame.
Therefore, we adopt the average of the two position angles as the disk position angle, i.e., we will use 29.1$\pm0.1^\circ$ (counter-clockwise from north).
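The relative-roll check described above amounts to a one-dimensional grid search; in the sketch below the search span is an illustrative assumption, while the 0.05$^\circ$ step follows the text:
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def best_relative_roll(img_a, img_b, step=0.05, span=1.0):
    # img_a, img_b: x-y aligned images from the two epochs
    angles = np.arange(-span, span + step, step)
    resid = [np.nansum((img_a - rotate(img_b, a, reshape=False,
                                       order=1, cval=np.nan))**2)
             for a in angles]
    return angles[int(np.argmin(resid))]  # 0.0 deg for our two epochs
\end{verbatim}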
\begin{figure}
\epsscale{1.0}
\plotone{DiskAngleCompleteImage12551.ps}
\caption{Our 2012 image and an illustration of the disk angle determination. The disk position angle was determined by fitting a line (white line) to the brightest points in the disk (the disk spine) 11" on either side of the disk. This line does not precisely intersect the star itself (see inset), demonstrating an overall bow in the disk. The black lines connect the {\em star} to the brightest points on either side of the disk at 11" separation. The same procedure was repeated for the 1997 image, which gave very similar results. In this figure north is up.\label{Fig-DiskAngle}}
\end{figure}
Recently, it was pointed out that the $\beta$~Pic disk midplane is not straight, but slightly bowed \citep[][]{Milli2014}. We test the evidence
for the overall bow in our data by independently fitting the two sides of the disk. We used the analysis-quality reduced images from the 2012 observations and fit two lines starting from the position of the star to the brightest point in the disk at the edge of the field of view ($\sim$11" on each side, the same points as used above for the disk position angle determination; see the black lines in Fig.~\ref{Fig-DiskAngle}). These fits yield an approximately 0.23$^\circ$ {\em difference} between the position angles of the two sides. As illustrated in the inset of Fig.~\ref{Fig-DiskAngle}, the star's position lies NE of the vector connecting the SW and NE disk midplane (white line), which is consistent with a curvature in the NW direction, i.e., the two disk wings are slightly bowed in the SE direction. In other words -- with respect to a straight line fitted to the disk -- the inner disk appears slightly NW of the line and the two disk wings (in the NE and SW directions) are both slightly bent to the other side (SE).
The fact that the disk is slightly bowed may be due to interactions with the ISM \citep[e.g.][]{Gaspar2008,Buenzli2010} or, more likely, due to the effect of forward-scattering grains in a not completely edge-on disk \citep[][]{Rodigas2014}; given the direction of the curvature and the inner disk asymmetries, it is not likely that the overall bow is a result of radiation-pressure-driven grains emerging from the asymmetric inner disk.
\subsection{Surface Brightness Along the Disk Midplane}
\label{DiskSurfaceBrightnessProfiles}
To measure the disk surface brightness profile we first rotated the north-up, PSF-subtracted image of the disk (by 60.9$^\circ$ CCW) to place the main axis of the disk along the image horizontal and then extracted a 4 pixel-wide rectangular stripe centered on the disk mid-plane. The image was converted to physical flux density units by multiplying by the STIS inverse sensitivity ($4.0169\times10^{-19}$~ergs/s/cm$^2$/\AA \, per count/s for the 1997 image and $4.1446\times10^{-19}$~ergs/s/cm$^2$/\AA \, per count/s for the 2012 image), values provided by STScI based on the absolute flux calibration of the STIS instrument. Next, for each radius (at each pixel) we calculated the median of the vertical stripe, excluding pixels not sampled in the image.
We applied this process to the northeastern and southwestern wings of the disk and for both epochs (1997 and 2012), resulting in four surface brightness profiles. Figure~\ref{Fig-Profiles} shows the resulting surface brightness profiles for the 1997 and 2012 data. The 2012 STIS observations have a smaller inner working angle and probe the disk surface brightness all the way down to 0\farcs35; however, we consider the data reliable (i.e. sampled by multiple pixels at a given radius) only for radii greater than 0\farcs4.
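A minimal sketch of this extraction (Python/NumPy) is given below; it assumes an image already rotated so that the disk midplane lies along a row, with unsampled pixels flagged as NaN. The pixel scale and 2012 inverse sensitivity are taken from the text, while the image itself is a synthetic placeholder.
\begin{verbatim}
import numpy as np

PIXSCALE = 0.0507     # STIS pixel scale [arcsec/pixel]
INVSENS = 4.1446e-19  # inverse sensitivity, 2012 image
                      # [erg/s/cm^2/A per count/s]

def midplane_profile(image, ycen, halfwidth=2):
    """Median surface brightness in a 4 pixel-wide stripe centered
    on the disk midplane row `ycen`, one value per image column;
    NaN-flagged (unsampled) pixels are excluded by nanmedian."""
    stripe = image[ycen - halfwidth:ycen + halfwidth, :]
    counts = np.nanmedian(stripe, axis=0)   # count/s per pixel
    return counts * INVSENS                 # erg/s/cm^2/A

# Example with a synthetic placeholder image (star at xcen, ycen):
image = np.random.normal(0.0, 1e-3, (1024, 1024))
xcen, ycen = 512, 512
profile = midplane_profile(image, ycen)
radius = (np.arange(image.shape[1]) - xcen) * PIXSCALE  # arcsec
\end{verbatim}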
In Fig.~\ref{Fig-Profiles} we also include representative $\pm1\sigma$ uncertainties for two of the profiles as blue and red shaded envelopes. We estimated the uncertainties at each radial bin as the standard deviation within the 4-pixel wide stripe, which reflects the combination of photon and readout noise (minor contributions), PSF subtraction residuals, and actual vertical differences in the surface brightness of the disk (the major contribution). The fact that the actual scatter in our profiles is smaller than the envelope suggests that our uncertainty estimate is conservative. We also considered another component, HST's photometric calibration uncertainty, but concluded that it is not significant for our analysis for the following reasons. The calibration uncertainty is less than 1\% (see \S~\ref{S:TemporalChanges}) and would therefore not visibly change our current noise estimate. In addition, it would affect all measurements the same way (a small relative increase in surface flux density) and would therefore not change our results on the surface brightness slopes (see below).
In Fig.~\ref{Fig-Profiles} we also plot in light gray the ALMA 870~$\mu$m dust continuum profile (integrated in the direction perpendicular to the disk midplane) from the measurements presented in \citet[][]{Dent2014}. The mm-sized grain population traced by the ALMA observations is thought to follow the distribution of the planetesimals in the disk, although its interpretation is somewhat complicated by the projection effects of the nearly edge-on viewing geometry. We point out that the regions R2--R6 -- covering annuli between break points in the surface brightness slopes -- all appear to coincide with inflection points in the dust continuum. This agreement suggests that the fine grains seen in scattered light also approximately trace the distribution of planetesimals.
\subsection{Large-scale Radial Asymmetry}
We confirm a previously reported brightness difference between the NE and SW wings of the disk: the SW side of the disk is brighter in the inner 8\farcs0 and fainter than the NE side at larger separations (see Fig.~\ref{Fig-Profiles}). We find that this brightness difference extends to the previously unexplored inner disk, down to at least 0\farcs5 or about 10~au projected radius. The brightness difference is greatest at the smallest separations ($\sim$50\%), initially decreases toward larger radii, and the two sides reach identical brightness at 2\farcs5. Between 3\farcs0 and 8\farcs0 the SW side is again brighter, but its brightness drops faster beyond 7\farcs0 than that of the NE side, and beyond 8\farcs0 the NE side is the brighter one.
We note that the more precise characterization of the surface brightness profile is limited by the fact that the disk has considerable vertical structure (\S~\ref{VerticalStructure}) and thus projecting it to a one-dimensional distribution is an ill-defined problem: different vertical aperture widths will return slightly different results depending on what disk regions are included.
\subsection{Breaks in the Radial Surface Brightness Profile}
\label{RSB}
The radial surface brightness profiles show distinct disk regions with different slopes. To characterize the surface brightness profiles we fitted linear slopes to the logarithms of the projected separation and of the surface brightness (see Fig.~\ref{Fig-Profiles}).
We find that the slopes are very well constrained and very similar in the two epochs.
We note that the uncertainty of the slopes is dominated not by noise in the images but by the precise definition of the aperture over which the slopes are calculated in the non-linear and extended disk. We estimate the typical uncertainty of the power-law coefficients to be $\pm$0.04.
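The slope fitting reduces to linear regression in log-log space; the sketch below (Python/NumPy, with a synthetic placeholder profile) illustrates how an $S\propto r^\alpha$ exponent and its formal uncertainty can be obtained for one region.
\begin{verbatim}
import numpy as np

def powerlaw_slope(radius, surf_bright, rmin, rmax):
    """Fit S ~ r**alpha over [rmin, rmax] by linear least squares
    in log-log space; returns alpha and its formal uncertainty."""
    sel = (radius >= rmin) & (radius <= rmax) & (surf_bright > 0)
    logr = np.log10(radius[sel])
    logs = np.log10(surf_bright[sel])
    coeffs, cov = np.polyfit(logr, logs, deg=1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])    # alpha, sigma_alpha

# Example: slope over region R3 (2.0"-3.5") of a synthetic profile
radius = np.linspace(0.4, 13.4, 260)        # arcsec
profile = 1e-15 * radius ** -1.4            # placeholder S ~ r^-1.4
profile *= np.random.normal(1.0, 0.02, radius.size)
alpha, sigma = powerlaw_slope(radius, profile, 2.0, 3.5)
print(f"alpha = {alpha:.2f} +/- {sigma:.2f}")
\end{verbatim}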
\begin{figure}
\epsscale{1.10}
\plotone{SurfaceBrightnessDistribution.eps}
\caption{Comparison of the disk surface brightness profiles from two epochs and two sides of the disk. The disk shows an NE-SW asymmetry: the NE (red) side is slightly fainter in the inner 8\farcs0, but brighter at larger separations than the SW (blue) side. Multiple breaks are present in the disk surface brightness: at 2\farcs0 in the SW wing only and at 6\farcs0 in both wings. The profiles observed in the two epochs show near-perfect matches, except for possibly fainter emission in the newer (2012, red dashed line) NE measurements at radii between 1\farcs0 and 1\farcs7. The light gray curve shows the ALMA 870~$\mu$m continuum profile \citep[][]{Dent2014} integrated perpendicular to the disk and presented here on a linear, normalized scale. Note that several of the disk region boundaries identified in the scattered light image correspond to inflection points in the planetesimal distribution traced by the ALMA continuum data. The shaded region marks the probable semi-major axis range of $\beta$ Pic~b. \label{Fig-Profiles}}
\end{figure}
Our analysis of the disk's radial surface brightness reveals at least five distinct disk regions (see Table~\ref{TableRadialProfiles}) ranging from 0\farcs4 to 13\farcs4, corresponding to 7.7~au to 260~au.
We labeled these R2--R6, and we also included region R1, which is only probed in our 2012 image.
We characterized the surface brightness slopes by fitting the brightness distributions in log-log space with linear functions, corresponding to $S\propto r^\alpha$ power-law radial distributions.
The fitted slopes reveal a mild slope in the inner disk ($\alpha \simeq-1.7$ to $-1.8$ for radii smaller than 39~au, i.e. Regions R1 and R2), surrounded by a region with an even milder slope ($\alpha \simeq-1.4$ for radii between 39~au and 68~au, Region R3).
Beyond 70~au we find three disk regions with much steeper slopes: the first has $\alpha \simeq-1.9$ (between 72~au and 110~au, Region R4), and the following large region shows the steepest slope observed ($\alpha \simeq-4.4$ for radii between 130~au and 190~au, Region R5). The outermost disk region again shows a steep, but somewhat milder, slope ($\alpha \simeq-3.9$ to $-4.5$ for radii between 190~au and 260~au, Region R6).
The disk surface brightness profile has been studied in detail by \citet[][]{Golimowski2006} on the basis of their HST/ACS F606W coronagraphic observations. The boundaries of our regions R3--R6 were matched to those of the four regions identified by \citet[][]{Golimowski2006} (regions 1--4 in their paper), allowing a direct comparison (see Column 4 in Table~\ref{TableRadialProfiles}). We note here that we adopted the slopes \citet[][]{Golimowski2006} derived from the non-PSF-deconvolved ACS images; these show greater similarity to our STIS images than their PSF-deconvolved ACS images.
We find that the overall radial disk structure observed in our STIS images matches closely that described by those authors.
In spite of the good overall agreement, the surface brightness slopes fitted to the two different datasets are slightly different, which we attribute to the different spectral coverage of the unfiltered STIS observations ($\sim$2,000--10,500~\AA) and the filtered F606W ACS images.
Our 2012 images have a smaller inner working angle than the previous ACS and STIS observations, allowing us to accurately measure the radial surface brightness of the inner disk. In the 2012 data set we can probe the disk structure down to 0\farcs4. We find that the slope observed between 0\farcs8 and 2\farcs0 continues inward without significant change down to radii of 0\farcs4 in both wings.
\begin{table}
\begin{center}
\caption{Power-law fits to the radial surface brightness profiles and comparison to the values measured by \citet[][]{Golimowski2006} (see their Table~3). \label{TableRadialProfiles}}
\begin{tabular}{cccrrr}
\tableline\tableline
Region &Range & Range & \multicolumn{3}{c}{$\alpha$} \\
&[arcsec] & [au] &1997 STIS & 2012 STIS & F606W ACS\tablenotemark{a} \\
\tableline
Northeastern Disk\\
\tableline
R1 & 0.4--0.8 & 7.8--16 & $-1.17\pm0.43$ & $-1.80\pm0.20$ & --- \\
R2 & 0.8--2.0 & 16--39 & $-1.80\pm0.05$ & $-1.87\pm0.10$ & --- \\
R3 & 2.0--3.5 & 39--68 & $-1.36\pm0.06$ & $-1.26\pm0.07$ & $-1.34\pm0.02$ \\
R4 & 3.7--5.6 & 72--110 & $-2.05\pm0.06$ & $-1.99\pm0.07$ & $-1.91\pm0.02$ \\
R5 & 6.6--10.0 & 130--190 & $-4.23\pm0.07$ & $-4.19\pm0.08$ & $-4.19\pm0.02$ \\
R6 & 10.0--13.4 & 190--260 & $-4.25\pm0.57$ & $-3.92\pm0.49$\tablenotemark{b} & $-3.63\pm0.02$ \\
\tableline
Southwestern Disk\\
\tableline
R1 & 0.4--0.8 & 7.8--16 & $-1.91\pm1.63$ & $-1.58\pm0.33$ & --- \\
R2 & 0.8--2.0 & 16--39 & $-2.00\pm0.11$ & $-1.86\pm0.11$ & --- \\
R3 & 2.0--3.5 & 39--68 & $-1.45\pm0.16$ & $-1.37\pm0.16$ & $-1.06\pm0.02$ \\
R4 & 3.7--5.6 & 72--110 & $-1.89\pm0.18$ & $-1.85\pm0.18$ & $-2.03\pm0.02$ \\
R5 & 6.6--10.0 & 130--190 & $-4.83\pm0.09$ & $-4.70\pm0.09$ & $-4.76\pm0.02$ \\
R6 & 10.0--13.4 & 190--260 & $-5.00\pm0.52$ & $-4.36\pm0.63$\tablenotemark{b} & $-4.00\pm0.02$ \\
\tableline
\end{tabular}
\tablenotetext{a}{The HST/ACS uncertainties may not fully include systematic errors.}
\tablenotetext{b}{The outer radius of Region R6 in the STIS 2012 images is 11\farcs0. }
\end{center}
\end{table}
\section{Vertical Disk Structure and Asymmetries}
\label{VerticalStructure}
The $\beta$~Pic disk shows a complex vertical-radial structure that has been noted ever since the disk was first imaged
\citep[e.g.][]{SmithTerrile1984,KalasJewitt1995,Heap2000,Golimowski2006}.
Our high-quality, small inner working angle images allow a detailed study of the inner disk structure. In the following we use different analysis techniques to probe the asymmetries in the disk. First, we show and discuss a radially normalized disk image and model the vertical structure at each radius as a sum of two Gaussian functions. Second, we explore the rotationally asymmetric disk component. Third, we attempt to separate the main disk contribution from the warped component.
\subsection{Radially Normalized Vertical Structure}
\label{Section:VerticalStructure}
We now inspect the disk vertical structure by modeling it with analytical functions at each radius. To better visualize the vertical structure we normalize the disk brightness at each radius (1 pixel vertical slice) by the peak value at the corresponding radius (see the panel labeled {\em Radially Normalized Image} in Fig.~\ref{RNormalized} and a 4$\times$ vertically stretched version in Fig.~\ref{VertExpansion}). This image reveals a prominent asymmetry in the disk: the disk is warped in its NE (left in Fig.~\ref{RNormalized}) wing between $\sim$2\farcs0 and $\sim$5\farcs0; a similar, but not identical, warp is apparent in the SW wing between $\sim$3\farcs0 and 5\farcs5.
The warp, in effect, introduces a counter-clockwise deformation in the disk, i.e. adding excess scattered light south of the NE wing and north of the SW wing.
To quantify the vertical structure we fitted each vertical slice with the combination of two Gaussian functions. In Fig.~\ref{RNormalized} we fixed the center of one of the Gaussians to the disk midplane, while in Fig.~\ref{VertExpansion} we treated the centers of both Gaussians as free parameters. The fits were optimized with the IDL function {\tt MPFIT} \citep[][]{Markwardt2009}, which applies Levenberg-Marquardt least-squares optimization.
We found that the sum of two Gaussians provided a very good fit for all radii, with the exception of the innermost
1". The top panels in Figs.~\ref{RNormalized} and \ref{VertExpansion} show the vertical disk brightness
profiles (blue), the best-fit analytical functions (red), and the residuals (thin black lines) for the four vertical slices
shown in the radially normalized image (labeled as Cuts A--D).
These best-fit analytical functions allow us to generate a simple "disk model", which we subtract from the
actual radially normalized image to verify how well this simple approach can reproduce the disk structure.
In the second panel from the bottom of Fig.~\ref{RNormalized} we show the model-subtracted radially
normalized disk. This image demonstrates that our simple analytical fit captures well most of the disk structure down to a stellocentric separation of 1".
We note that the fit shown in Fig.~\ref{VertExpansion}, in which all parameters of both Gaussians are unconstrained, provides a better fit to the disk than the more constrained fit shown in Fig.~\ref{RNormalized}.
The bottom panels in Figs.~\ref{RNormalized} and \ref{VertExpansion} show the offsets of the centers of the two Gaussian functions relative to the disk midplane. The four cuts (A--D) are also marked on this plot. The third panel in Fig.~\ref{VertExpansion} shows the width of the two Gaussians.
As a cautionary note, we stress here that the two-Gaussian fit is not physically motivated and does not capture the true complexity of the disk. The two Gaussians do not provide a one-to-one match to the structures an observer would identify as the main disk and the warped disk (or primary and secondary disks). Nevertheless, the combination of the two Gaussians provides an overall good fit to the {\em entire vertical} disk profile and allows us to trace the general radial dependence of the vertical structure, without decomposing the disk into physical components.
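For illustration, the fitting described above can be reproduced with {\tt scipy.optimize.curve\_fit}, which, like the IDL {\tt MPFIT} routine we used, performs Levenberg-Marquardt least-squares optimization; the synthetic profile and starting values below are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(y, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians as a function of vertical offset y."""
    return (a1 * np.exp(-0.5 * ((y - mu1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((y - mu2) / s2) ** 2))

# Synthetic vertical slice: main disk plus an offset component
y = np.arange(-40.0, 40.0)                  # pixels from midplane
profile = two_gaussians(y, 1.0, 0.0, 4.0, 0.3, 6.0, 8.0)
profile += np.random.normal(0.0, 0.01, y.size)

# All six parameters free (as in Fig. VertExpansion); to mimic the
# constrained fit (Fig. RNormalized), fix mu1 = 0 instead.
p0 = [1.0, 0.0, 3.0, 0.2, 5.0, 10.0]        # initial guesses
popt, pcov = curve_fit(two_gaussians, y, profile, p0=p0)
print("centers:", popt[1], popt[4], "widths:", popt[2], popt[5])
\end{verbatim}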
\begin{figure}
\epsscale{0.8}
\plotone{RadiallyNormalized_MPFIT.ps}
\caption{{\em Top panel:} Disk vertical profiles (blue) at Cuts A, B, C, and D, fit by a sum of two Gaussians (sum in red, components in purple). The center of one of the Gaussians was fixed to the disk midplane, but all other parameters were unconstrained. {\em Second panel:} The radially normalized disk image shows a complex vertical structure. The locations of cuts A, B, C, and D are marked. {\em Bottom panel:} The vertical offsets of the first and second Gaussian fits to the vertical disk profiles as a function of radius. The plot reveals a large NE-SW asymmetry, mainly introduced by the presence of the warp (within 4"). The first Gaussian fits are shown with the darker symbols. \label{RNormalized}}
\end{figure}
\begin{figure}
\epsscale{0.8}
\plotone{RadiallyNormalized_VertExpan.ps}
\caption{{\em Top panel:} Same as in Fig.~\ref{RNormalized}, but all parameters of the two Gaussians were unconstrained and the radially normalized disk is shown with a 4$\times$ vertical stretch. \label{VertExpansion}}
\end{figure}
\begin{figure}
\epsscale{1.1}
\plotone{RadialModels.ps}
\caption{ Comparison of the observed and modeled vertical position of the disk spine (brightest point). Shown are composite (two-disk) models by \citet[][]{Milli2014} and \citet[][]{Ahmic2009}. The blue symbols show the peak position of a single vertical Gaussian function fitted to the STIS observations presented in this paper, an analogous measurement to the modeled spines shown. \label{Models}}
\end{figure}
The Gaussian centers (shown in the bottom panels of Figs.~\ref{RNormalized} and \ref{VertExpansion}) reveal a NE-SW asymmetry and trace the vertically extended disk structure most prominent at $\pm$4".
This structure appears as a counter-clockwise warp in the disk that is traceable to radii $<$1". The structure seen here has been identified as a "warp" in early STIS images \citep[e.g.][]{Heap2000} and, based on PSF-deconvolved 2003 HST/ACS images, \citet[][]{Golimowski2006} have described it as an inclined {\em secondary disk}. Recently, \citet[][]{Ahmic2009} and \citet[][]{Milli2014} have modeled the appearance of a two-component dust disk, considering anisotropic scattering and high inclination (but not exactly edge-on).
At larger separations in the SW wing the Gaussian centers are again positively offset above the disk midplane, and this opening continues beyond the field of view of our images. The NE wing shows a similar, but less pronounced, structure. These asymmetries have been recognized in early coronagraphic studies at even larger separations and referred to as the "butterfly asymmetry" \citep[][]{KalasJewitt1995}.
\subsection{The Warp: Separating Asymmetric Structures}
\label{SectionWarp}
We explore the disk warp by subtracting the contribution of the "main disk", based on the assumption that the main disk has a perfect mirror symmetry about the main disk midplane, following \citet[][]{Lagrange2012}. Therefore, we identify four quadrants in the image (numbered clockwise from 1 to 4, see~Fig.~\ref{Fig-MainDiskSubtracted}), two of which (the 2nd and 4th) contain large flux contributions from the warped disk, while the other two (the 1st and 3rd) are dominated by emission from the main disk.
By mirroring the 1st and 3rd quadrants about the main disk mid-plane and combining these with the original 1st and 3rd quadrant images we can approximate the main disk. We then subtract this combined image from the 2012 data set, thus producing an approximately main-disk-subtracted image (see Fig.~\ref{Fig-MainDiskSubtracted}).
We note that because the main disk is not perfectly symmetric with respect to the disk midplane, this method provides only an imperfect separation of the warped structure.
This subtraction highlights the disk warp and its NE-SW asymmetry. The location and the asymmetry are consistent with those seen in Fig.~\ref{VertExpansion}: in the NE wing the warp appears around 2$-$4", while in the SW wing it is present from 1" to 5"; not only is the emission more extended in the SW, but it is also much more pronounced (i.e. brighter). The mirror-subtracted image shows a morphology that appears to be consistent with a warp caused by an inclined secondary disk (see Section~\ref{Section:VerticalStructure}).
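A minimal sketch of this quadrant-mirroring step (Python/NumPy) follows; it assumes a placeholder image with the star at the array center and the midplane along the central row, and the choice of which quadrants are treated as warp-free is likewise illustrative.
\begin{verbatim}
import numpy as np

def main_disk_subtract(image):
    """Approximate the 'main disk' by mirroring the two quadrants
    assumed to be dominated by the main disk (here upper-left and
    lower-right) about the midplane row, then subtract the symmetric
    model; assumes the star at the array center and the midplane
    along the central row."""
    ny, nx = image.shape
    yc, xc = ny // 2, nx // 2
    model = image.copy()
    # Replace warp-contaminated quadrants with the vertical mirror
    # image of the corresponding clean quadrants:
    model[yc:, :xc] = image[:yc, :xc][::-1, :]
    model[:yc, xc:] = image[yc:, xc:][::-1, :]
    return image - model

# Example with a synthetic placeholder image:
img = np.random.normal(0.0, 1e-3, (512, 512))
warp_only = main_disk_subtract(img)
\end{verbatim}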
\begin{figure}
\epsscale{1.10}
\plotone{MainDiskSubtracted}
\caption{Result of the subtraction of the quadrants without warp from the two quadrants with warp. The warp structure shows a clear NE-SW asymmetry. The image is rotated by 60.9$^\circ$ counter-clockwise from the standard north up orientation to align the disk with the x-axis. \label{Fig-MainDiskSubtracted}}
\end{figure}
In Fig.~\ref{Models} we compare the radial dependence of the disk spine -- as measured by the peak position of a single vertical Gaussian function -- to two recent models, by \citet[][]{Ahmic2009} and \citet[][]{Milli2014}. From NE to SW, the STIS single-Gaussian fit traces a deviation "below" the disk mid-plane in the figure, corresponding to the SE direction, between 140 and 55~au, followed by a turn to the other, NW, side of the disk. From 55~au in the NE toward the star the disk spine is located "above" the disk mid-plane in the figure, corresponding to the NW side of the disk. The disk spine displays another break around 70~au in the SW and turns back toward the disk midplane, which it approaches until 130~au; from 130~au outward it displays another break and again leaves the disk midplane.
Both models shown for comparison in Fig.~\ref{Models} use a two-component disk with anisotropic scattering phase functions to reproduce the disk spine as seen in projection and in scattered light.
The \citet[][]{Ahmic2009} model was specifically developed to match the HST/ACS images presented in \citet[][]{Golimowski2006}, a data set very similar in wavelength to the STIS data presented here, but one that does not extend to the inner working angle reached in our images. The model from \citet[][]{Milli2014} was developed to explain the VLT/NACO L' observations presented in that paper, which have an inner working angle similar to that of the STIS data presented here.
Both papers attempt to reproduce the warp observed in $\beta$~Pic with a combination of two misaligned dust disks. The \citet[][]{Ahmic2009} study uses two axisymmetric, partly spatially overlapping disks, each with multi-component power-law density distributions; a Markov Chain Monte Carlo routine optimized the disk parameters to best reproduce the ACS observations. The \citet[][]{Milli2014} study used a grid-based parameter exploration.
We compare our observations to these models as a preliminary exploration to assess whether the two-component disk geometry is consistent with the observed disk spine location. We note that while neither of the two models matches our new, smaller inner working angle data well, they both offer overall similar spine geometries. While our observations do not directly show a secondary disk, the general spine morphology supports the interpretation that the inner disk is inclined with respect to the outer disk, leading to a "warp" as seen in projection. We emphasize that comprehensive scattered light modeling is required for further interpretation of the disk, but such an effort is beyond the scope of the current work.
\section{The Planet and the Inner Disk }
\label{S:Orbit}
In this section we explore possible connections between the disk structure and the super-Jupiter $\beta$~Pic~b embedded in the disk. First, we compare $\beta$~Pic~b's orbit to the inner disk structure as seen in our 2012 STIS images.
As of now the orbit of $\beta$~Pic~b is closely monitored, but not yet fully known. The planet was first imaged in Nov. 2003 \citep[][]{Lagrange2009}, but follow-up observations could not confirm its presence until 2009 \citep[][]{Lagrange2010}. After its confirmation the planet was followed by multiple teams and, with over a dozen additional detections, its orbit has been significantly refined \citep[e.g.][]{Bonnefoy2011,Chauvin2012,Males2014,Macintosh2014,Bonnefoy2014}. Although the orbit is well constrained after 2009, the uncertainties in its position prior to 2009, which rests on a single detection, are very large.
Here we adopt results from one of the latest orbital analyses of $\beta$~Pic~b, which includes all published and several as yet unpublished VLT/NACO observations \citep{Lagrange2014}, 12 positions in total. The planet's orbit has been fitted with a Markov Chain Monte Carlo-based (MCMC) optimization of a Keplerian orbit.
With less than one quarter of the planet's orbit followed accurately, multiple solutions are possible. Table~\ref{T:Orbitalelements} summarizes three different solutions that are consistent with the existing astrometry.
The three solutions differ in the following ways: In Solution 1 the eccentricity was fixed to zero; Solution 2 represents
the most likely solution from the MCMC program; Solution 3 is a high-eccentricity solution far from the peak of the distribution, i.e. less likely than
the others, but still consistent with the current set of observations. During the preparation of this manuscript, independent orbital modeling based on existing VLT/NACO and new Gemini GPI observations was presented by \citet[][]{Macintosh2014} and \citet[][]{Bonnefoy2014}; we note that all three of our solutions are fully consistent with the best-fit solution and its uncertainties given by these papers.
In Figure~\ref{Fig-PlanetOrbit} we plot the best-fit orbit of the planet (Solution \#1) as projected onto the plane of the sky. The plot is centered on the star's position and we mark the planet's position at a few significant epochs, including March 6, 2012, when our HST/STIS observations were obtained. For the 1997 position we also plot an uncertainty ellipse with its 1~$\sigma$ semi-major axis set to match the difference in the positions predicted by Solutions \#1 and \#2.
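For reference, the projected position of the planet at a given epoch follows from the Keplerian elements of Table~\ref{T:Orbitalelements} through Kepler's equation and a standard orbit-to-sky rotation. The sketch below (Python) evaluates Solution 1 at the 2012 STIS epoch; the adopted distance is an assumption made here only for the au-to-arcsecond conversion.
\begin{verbatim}
import numpy as np

D_PC = 19.44  # assumed distance of beta Pic [pc]

def sky_position(t, a, P, e, inc, Omega, omega, tp):
    """Projected (dRA, dDec) offsets [arcsec] at epoch t [yr] for
    elements a [au], P [yr], e, and angles in degrees."""
    M = 2.0 * np.pi * (t - tp) / P        # mean anomaly
    E = M + 0.0                           # initial guess
    for _ in range(50):                   # Newton's method
        E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1.0 - e * np.cos(E))         # radius [au]
    i, Om, w = np.radians([inc, Omega, omega])
    th = nu + w                           # argument of latitude
    dn = r * (np.cos(Om) * np.cos(th)
              - np.sin(Om) * np.sin(th) * np.cos(i))  # toward N
    de = r * (np.sin(Om) * np.cos(th)
              + np.cos(Om) * np.sin(th) * np.cos(i))  # toward E
    return de / D_PC, dn / D_PC           # arcsec

# Solution 1 from Table T:Orbitalelements at the STIS epoch:
dra, ddec = sky_position(2012.18, a=8.6686, P=19.29343, e=0.0,
                         inc=89.269, Omega=-148.2210,
                         omega=169.5635, tp=2002.229)
print(f"dRA = {dra:.3f} arcsec, dDec = {ddec:.3f} arcsec")
\end{verbatim}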
\begin{table}
\begin{center}
\caption{Orbital solutions for $\beta$~Pic b. \label{T:Orbitalelements}}
\begin{tabular}{lccc}
\tableline\tableline
Parameter & Solution 1 & Solution 2 & Solution 3 \\
\tableline
a [au] & 8.6686 & 9.341403 & 11.18887 \\
P [yr] &19.29343 & 21.58237 & 28.29178 \\
e & 0.0 & $5.160\times10^{-2}$ & 0.163\\
inc [$^\circ$] &89.269 & 88.603 & 88.803 \\
$\Omega$ [$^\circ$] & $-$148.2210 &$-$148.2013 & $-$147.7354 \\
$\omega$ [$^\circ$] & 169.5635 &$-$127.3659 &3.996902\\
t$_p$ &2002.229 & 2005.643 & 2013.290\\
$\chi^2$ & 6.26 & 8.67 & 5.367 \\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\epsscale{0.7}
\plotone{PlanetPosition.ps}
\caption{The projected orbit of $\beta$~Pic~b (blue) and the disk midplane (red). Selected epochal positions of the planet are marked with red symbols. The orbit shown here corresponds to Solution \#1 in Table~\ref{T:Orbitalelements}. The ellipse corresponds to the 1$\sigma$ positional uncertainty of the planet at the time of the 1997 STIS observations. \label{Fig-PlanetOrbit}}
\end{figure}
\begin{figure}
\epsscale{1.0}
\plotone{WarpAngle.ps}
\caption{The warp and the planet's orbit in $\beta$~Pictoris. The planet's inclination with respect to the disk midplane is $0.7^\circ \pm 0.7^\circ$ (see \S~\ref{S:Orbit}), while the warp (vertically extended disk emission) is seen up to at least 4$^\circ$ from the midplane. \label{Fig-WarpAngle}}
\end{figure}
In the left panel of Figure~\ref{Fig-InnerDiskPlanet} we show the innermost section of the disk (39$\times$39 au)
centered on the location of $\beta$~Pic.
Black pixels mark locations covered by the coronagraphic wedge in
every spacecraft orientation and thus contain no valid data. The image shown here is from our 2012 dataset; the earlier 1997 image has a significantly larger inner working angle and is less informative about the inner disk structure.
In the right panel of Figure~\ref{Fig-InnerDiskPlanet} we compare the orbit of the planet with a
contour plot of the inner disk surface brightness. The planet's predicted position for the epoch
of the observations coincides with the innermost valid pixel of our STIS data in the direction of the
planet. However, no evidence for excess emission is seen in the pixel where the planet's point
spread function's core should fall. This suggests that either the planet's optical brightness
is much weaker than that of the disk or the current best estimate for the planet's orbit is slightly ($\sim0\farcs025$) inaccurate and the planet may be just within our inner working angle.
The fact that the planet is not visible in our images should not come as a surprise:
Exoplanet atmospheric evolution models from \citet[][]{Baraffe2003} predict a visual apparent brightness for $\beta$~Pic~b between V=21.2 and V=22.9 (thermal emission only, assuming a mass of 10~M$_{Jup}$, adopting the distance of $\beta$~Pic, for an age between 8 and 21~Myr, \citealt[e.g.][]{Ortega2002,Song2003,Ortega2004,BinksJeffries2014}) and, based on simple geometric considerations, we estimate the reflected light of the planet to be fainter than V=26 (assuming a radius of 1.3~R$_{Jup}$, isotropic scattering, a Bond albedo of 0.4, and 50\% illumination). The predicted brightness from thermal emission and reflected light is much lower than that of the disk at the same radius: from our 2012 image we measure 7.02$\times10^{-16}$ ergs/s/cm$^2$/\AA~or V=16.74 for the pixel onto which the planet is projected. This value must be a combination of scattered light from the disk, and thermal emission and reflected light from $\beta$~Pic~b. The fact that this pixel has very similar brightness to the surrounding pixels, which only include the light scattered by the disk, argues that the flux density observed at the location of the planet is also dominated by scattered light from the disk. The values discussed here show that the disk surface brightness overwhelms the emission from $\beta$~Pic~b by about 5 magnitudes. Note that the above comparison is only an approximation that does not correctly account for the actual bandwidth of our unfiltered STIS observations; nevertheless, because the choice of the V band for the comparison provides the {\em most favorable} planet-to-disk contrast, a more realistic estimate should find that the contribution from the planet is even lower.
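This reflected-light estimate can be reproduced with a few lines of arithmetic; in the sketch below (Python) the stellar V magnitude and the simple $A\,(R_{\rm p}/a)^2/4$ flux-ratio scaling are our own illustrative assumptions, combined with the radius, albedo, and illumination values stated above.
\begin{verbatim}
import numpy as np

V_STAR = 3.86            # V magnitude of beta Pic (assumed)
A_ORBIT_AU = 8.67        # semi-major axis, Solution 1 [au]
R_P = 1.3 * 7.1492e7     # planet radius [m] (1.3 R_Jup)
AU = 1.496e11            # astronomical unit [m]
ALBEDO, ILLUM = 0.4, 0.5 # Bond albedo, 50% illumination

# Flux ratio for an isotropically scattering sphere (assumed
# scaling): f = A * (R_p / a)^2 / 4, times illuminated fraction.
f = ALBEDO * (R_P / (A_ORBIT_AU * AU)) ** 2 / 4.0 * ILLUM
V_planet = V_STAR - 2.5 * np.log10(f)
print(f"reflected light: V ~ {V_planet:.1f}")  # ~27.8, i.e. >26
\end{verbatim}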
In Fig.~\ref{Fig-WarpAngle} we mark the planet's inclination ($\approx0.7^\circ \pm 0.7^\circ$ with respect to the disk midplane, e.g. \citealt[][]{Lagrange2014,Macintosh2014}) and the approximate angle under which the warp is observed in our STIS images ($\approx4.0^\circ$). We note that the warp angle derived from our images agrees very well with the angle of the secondary disk structure described by \citet[][]{Golimowski2006}. The warped disk -- as seen in projection -- subtends a larger angle than the best-fit orbital inclination of $\beta$~Pic~b, suggesting that planetesimals may be perturbed to higher inclinations than that of the perturbing giant planet. This finding is consistent with the predictions of dynamical simulations of a planetesimal system influenced by secular perturbations of a planet on an inclined orbit: for example, a study by \citet[][]{Dawson2011} finds that the inclinations of the planetesimals will oscillate between $0$ and $2~i_p$, where $i_p$ is the inclination of the planet. The fact that the warp is seen at angles of $4^\circ$ would then, taken at face value, suggest that $\beta$~Pic~b's inclination is -- within the uncertainties -- underestimated by current measurements and may be close to $\sim2^\circ$.
Alternatively, $\beta$~Pic~b's inclination may have been damped, i.e. it may now be lower than in the past, which would explain a low present-day planet inclination with a more inclined inner disk, but would require the presence of another planet \citep[][]{Dawson2011}.
However, we note that scattered light is an imperfect tracer of the inner disk, and the not perfectly edge-on viewing geometry and anisotropic scattering further complicate its interpretation.
\begin{figure}
\epsscale{1.00}
\plotone{InnerDisk_Planet.ps}
\caption{{\em Left panel:} The view of the inner disk (2"$\times$2"). {\em Right panel:} The orbit
of the planet (blue) overlaid on a contour plot of the inner disk with the red symbol marking
the position of the planet during the observations. Our images do not show evidence for excess
emission from the planet, but allow probing the disk structure at the orbit of the planet.
The image is rotated by 60.9$^\circ$ counter-clockwise from the standard north-up orientation to align the disk with the x-axis.
\label{Fig-InnerDiskPlanet}}
\end{figure}
\begin{figure}
\epsscale{0.65}
\plotone{Multiwavelength-Planet}
\caption{The multi-wavelength view of the disk reveals a major NE-SW asymmetry and a warp. While the NE-SW asymmetry is visible at all wavelengths (from the optical through the infrared to the sub-mm), the warp (emission significantly above the disk midplane) is visible in the optical and near-infrared. The CO peak is also located above the midplane and may or may not be associated with the warp.
The left white line marks 2\farcs7, the location of the MIR and CO clumps, and the right white line marks 3\farcs5, the outer edge of the sub-mm dust clump. The blue dot in the top panel shows the approximate location of the planet during the observations. Data from this paper, \citet[][]{Lagrange2012}, \citet[][]{Li2012}, and \citet[][]{Dent2014}. The images have been rotated by 60.9$^\circ$ counter-clockwise from the standard north-up orientation to align the disk with the x-axis. \label{Fig-Multiwavelength}}
\end{figure}
\begin{figure}
\epsscale{1.0}
\plotone{TrueColor_STIS_ALMA_Blackbar.eps}
\caption{Color composite image of the $\beta$ Pictoris disk. The scattered light STIS image is shown in blue, the 870~$\mu$m ALMA dust continuum image is shown in green, and the CO velocity-integrated line emission is shown in red. The ALMA data are from \citet[][]{Dent2014}. \label{Fig-ColorImage}}
\end{figure}
\section{Disk Structure and Its Temporal Evolution}
\label{S:DiskStructure}
\subsection{Multi-wavelength View of $\beta$~Pic }
The $\beta$~Pic disk displays multiple radial (Sect.~\ref{RSB}) and vertical (Sect.~\ref{Section:VerticalStructure}) asymmetries, some of which have been proposed to be consequences of one or two massive planets orbiting within the disk. In Fig.~\ref{Fig-Multiwavelength} we provide a multi-wavelength view of the disk, including optical, near-infrared, mid-infrared, and sub-millimeter images.
The top panel shows our STIS midplane-normalized image. The second panel from the top shows a VLT/NACO Ks-band image of the disk \citep[][]{Lagrange2012}. For clarity we show the NACO image with two wedge-like sections covering the northern and southern regions directly above the star, areas that have been heavily contaminated by diffraction spike residuals.
A separate paper in preparation describes the constraints from the VLT/NACO images and we refer to that work for a detailed comparison of the surface brightness slopes; here we only focus on the disk structure and asymmetries. The Ks-band image traces disk emission over its entire field of view ($\sim$14" in diameter). A careful analysis by \citet[][]{Lagrange2012} showed that the disk position angle is approximately 29\fdg3$^{+0.22}_{-0.30}$, with the precise value depending on the details of the data reduction procedure. These authors also found a slightly different position angle for the SW side of the disk ($\sim$209\fdg10$^{+0.22}_{-0.38}$); these values are in good agreement (within 1$\sigma$) with the $29.1^\circ\pm0.1^\circ$ value found for our STIS data (Sect.~\ref{DiskAngle}).
The NACO Ks-band image also shows the warp structure seen at optical wavelengths, although less prominently. The disk width, as seen here, reaches its maxima at $\pm$3\farcs5--4\farcs0 separation from the star.
The third panel from the top shows an 11.7~$\mu$m mid-infrared image of the disk emission from \citet[][]{Li2012}. As reported in \citet[][]{Telesco2005}, the MIR clump -- located at 2\farcs7 or 52~au projected separation -- displays a mid-infrared color significantly different from that of the surrounding disk, indicating that its grains differ in temperature, size, and/or composition from the particles characteristic of the rest of the disk.
The lower two panels show 870~$\mu$m continuum and $^{12}$CO 3--2 transition images obtained with the ALMA sub-millimeter array \citep[][]{Dent2014}. Both the continuum and the CO image show bright peaks at the SW side of the disk.
Based on earlier sub-mm images \citet[][]{Wilner2011} argued that the mm peaks at $\pm$3\farcs5 (68 au) mark the locations of a planetesimal belt in which collisions produce large grains. This picture is consistent with the model by \citet[][]{Augereau2001} and also with the new, higher resolution and sensitivity ALMA observations \citep[][]{Dent2014} shown in Fig.~\ref{Fig-Multiwavelength}.
Fig.~\ref{Sketch} provides a visual summary of the key structural elements identified in the $\beta$~Pic disk from multi-wavelength imaging. Table~\ref{T:Asymmetries} summarizes the asymmetries observed in the $\beta$~Pic disk at wavelengths ranging from the blue optical to the sub-millimeter. We also show a color-composite image (Fig.~\ref{Fig-ColorImage}) as an illustration allowing the comparison of the spatial locations of the scattered light (blue), the ALMA dust continuum emission (green), and the ALMA velocity-integrated CO emission (red) in the disk.
We group the observed disk structures into two major categories: apparently axisymmetrical and non-axisymmetrical structures.
The {\em apparently axisymmetrical}, but not mirror-symmetrical, disk structures are primarily those that contribute to the warped inner disk, seen in the optical and near-infrared scattered-light images. There is a confident detection of extra emission above (SW) and below (NE) the optical disk midplane (the warp) at 2\farcs5--5\farcs0 at {\em all} wavelengths.
The {\em apparently non-axisymmetrical} structures are primarily those that reflect the major SW--NE asymmetry: the MIR and CO clumps present in the SW disk, the surface brightness differences between the SW and NE disks, and the different ALMA dust continuum levels. Specifically, there is evidence that the SW side of the disk is brighter than the NE at optical, near-infrared, mid-infrared, and sub-millimeter wavelengths. New ALMA observations also show a similar asymmetry in the CO gas \citep[][]{Dent2014}. In addition, new Herschel/HIFI observations also argue for a higher abundance of C\,{\sc ii} gas in the SW wing than in the NE wing \citep[][]{Cataldi2013}. The MIR and mm images also argue for a different dust population above the disk, indicating recently released smaller grains (in the MIR) at a projected separation of 52~au and larger grains that trace the planetesimal belt that released them at 68~au \citep[][]{Telesco2005,Li2012}. Our images also argue for a warp at this location, with the SW warp being brighter than the NE one. Finally, we find that the disk surface brightness slope is unbroken between 0\farcs4 and 2\farcs0, indicating no significant changes in the disk surface density at radii adjacent and exterior to the most likely semi-major axis of the planet.
\begin{table}
\begin{center}
\caption{Summary of the asymmetries observed in the $\beta$~Pic disk. A visual summary of the key features is given in Fig.~\ref{Sketch}. References: 1 -- \citet[][]{Burrows1995}, 2 -- \citet[][]{Heap2000},
3 -- \citet[][]{Golimowski2006}, 4 -- \citet[][]{Wahhaj2003}, 5 -- \citet[][]{Telesco2005}, 6 -- \citet[][]{Li2012}, 7 -- \citet[][]{Lagrange2012}, 8 -- \citet[][]{Weinberger2003}, 9 -- \citet[][]{Wilner2011}, 10 -- \citet[][]{Mouillet1997}, 11 -- \citet[][]{Pantin1997}, 12 -- \citet[][]{Milli2014}, 13 -- \citet[][]{Dent2014}. \label{T:Asymmetries}}
\begin{tabular}{lcccl}
\tableline\tableline
Asymmetry & Radius ["] & Proj. Dist. [au]& Fig. & References \\
\tableline
{\bf Inner Disk} & & & \\
MIR disk tilted $\sim15^\circ$ CW & $<$1\farcs0 & $<$19 & & 4, 8, but see 5 \\
\hline
\multicolumn{3}{l}{\bf Warp -- Axisymmetrical Structures} & \\
Optical: CCW tilt & 2"--7" & 38--136 & Fig.~\ref{Fig-MainDiskSubtracted} & 1, 2, 3, this work \\
Ks: CCW tilt & 3\farcs7--4\farcs8 & 70--93 & Fig.~\ref{Fig-Multiwavelength} & 7\\
L$^\prime$: CCW tilt & 0\farcs4--3\farcs8 & 8--74 & & 12\\
\hline
\multicolumn{4}{l}{\bf SW--NE Asymmetry -- Non-axisymmetrical Structures} \\
SW disk brighter in optical & 0\farcs6--8\farcs0 & 12--160 & Fig.~\ref{Fig-Profiles} & this work\\
SW brighter at 12~$\mu$m & 1"--4" & 19--38 & Fig.~\ref{Fig-Multiwavelength} & 11, 4, 9, 5, 6 \\
MIR clump on SW side, diff.\ grains & 2\farcs7 & 52 & Fig.~\ref{Fig-Multiwavelength} & 4, 5, 6 \\
SW disk brighter in sub-mm & 1\farcs5--4\farcs1 & 60 & Fig.~\ref{Fig-Multiwavelength} & 9, 13 \\
NE disk brighter in optical & 8\farcs0--13\farcs0 & 160--250 & Fig.~\ref{Fig-Profiles} & this work\\
ALMA CO peak above midplane & 4\farcs4 & 85 & Fig.~\ref{Fig-Multiwavelength} & 13\\
\hline
\multicolumn{3}{l}{\bf Bowed Disk: Inclination Effect?} & \\
Slightly curved disk & 0"--13" & 0--250 & Sect.~\ref{DiskAngle} & 3, 7, 12, this work\\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\epsscale{0.95}
\plotone{BetaPic_AC.eps}
\caption{Key structures in the $\beta$~Pic system, as derived from multi-wavelength imaging. \label{Sketch}}
\end{figure}
\subsection{Temporal Changes and Comparison to Keplerian Motion}
\label{S:TemporalChanges}
We now search for changes between our two images, which span 14.5 years, to explore any temporal evolution that may be observable in the disk at optical wavelengths. We followed two different approaches to compare the images: in the first approach we aligned the images based on the stellar positions and position angles identified during the reduction process (based on the location of the diffraction spikes and the spacecraft pointing information). In the second approach we searched for the optimal offsets ($\delta$x and $\delta$y) and rotation between the two images by minimizing the residuals in the subtracted image.
Specifically, we carried out a grid search covering the relative position angle range $+1^\circ$ to $-1^\circ$ (with steps of 0.005$^\circ$) to identify the relative rotation angle that minimizes the sum of the absolute subtraction residuals between the two images at stellocentric separations $>$3.5". For each relative position angle we cross-correlated the images to identify the optimal offsets. This method identified a 0.0$^\circ$ relative rotation as optimal, with offsets of about 1~pixel.
Tests of the second method via the introduction and recovery of artificial image rotations and offsets revealed that while the rotation angle is reliably identified with an accuracy of 0.1$^\circ$ or better, the typical uncertainty of the cross-correlation-based offset determination was $\sim$1.5 pixels (probably set by the broad structures in the images and the noise level). Given that the expected positional uncertainty from our image reduction is smaller than that of the cross-correlation-based centering, we adopted the first approach outlined above, but note that the second approach also yielded consistent results.
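A minimal sketch of the second, residual-minimization approach (Python, using {\tt scipy.ndimage} for the rotation; the images, mask radius, and search grid below are placeholders) is given here.
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def best_rotation(img1, img2, mask, angles):
    """Grid search for the relative rotation (about the image
    center) minimizing the summed absolute residuals of
    img1 - rotated(img2), evaluated only where `mask` is True."""
    best_angle, best_resid = None, np.inf
    for ang in angles:
        rot = rotate(img2, ang, reshape=False, order=1)
        resid = np.nansum(np.abs((img1 - rot)[mask]))
        if resid < best_resid:
            best_angle, best_resid = ang, resid
    return best_angle

# Example: placeholder images; mask excludes radii < 3.5"
img1 = np.random.normal(0.0, 1e-3, (512, 512))
img2 = img1.copy()
yy, xx = np.mgrid[0:512, 0:512]
r = np.hypot(yy - 256, xx - 256) * 0.0507      # arcsec
angle = best_rotation(img1, img2, r > 3.5,
                      np.arange(-1.0, 1.005, 0.005))
\end{verbatim}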
In the 14.5 year period separating our Epoch 1 and 2 images the STIS detector response has changed slightly. The temporal evolution of the detector's response as a function of wavelength has been closely monitored by the Space Telescope Science Institute since STIS's activation. The evolution is described by Krist et al. (2013, STIS Instrument Science Report 2013-03). Standard star observations show that in the blue (filter G430L) STIS's sensitivity has declined by 2.8\%, while in the red (filter G750L) its sensitivity has declined by 2.5\%. We used the $F_{Phot,\lambda}$ keywords in the FITS headers of the two datasets to convert our instrumental count rates into units of flux density; these conversion factors reflect the sensitivities calculated for the two different datasets given the unfiltered CCD throughput curves and assuming a flat source spectrum. Given the accuracy of the calibration and the low wavelength-dependence of the sensitivity change, we estimate an uncertainty of $\sim$0.3\% in the $F_{Phot,\lambda}$ factor.
Figure~\ref{Fig-Difference} shows the Epoch 2 (2012) STIS image on a logarithmic scale (upper panel) and the ratio of the Epoch 1 and Epoch 2 images (lower panel). Naturally, the ratio image is sensitive to the signal-to-noise ratios of the two images divided. The ratio image is dominated by noise above and below the disk midplane, and at the inner and outer edges of the images. The structures seen in the inner disk ($<$2\farcs5) are dominated by PSF residuals, which are different in the two images and result in noise. The ratio image, however, provides information on changes in the disk properties between 2\farcs5 and $\sim$13". At these separations the ratio is close to 1.00, with variations typically below 3\%. The overall morphology of the ratio image does not suggest that any structure within the disk midplane has changed beyond these levels over our 14.5-year baseline.
Figure~\ref{Fig-Difference_Zoom} shows a magnified image focusing on the 3\farcs0--6\farcs0 region southwest of the star, extracted from the larger ratio image shown in Fig.~\ref{Fig-Difference}. This region covers most of the SW warp and the CO clump reported in \citet[][]{Dent2014}. The upper panel of this figure also shows the velocity-integrated ALMA CO emission overlaid in contours. In order to improve the signal-to-noise ratio we binned the image by 2$\times$2~pixels using a flux-conserving IDL algorithm ({\tt frebin}).
The left edge of the image ($\sim$3\farcs0) is still influenced by the PSF residuals; however, the rest of the image provides a higher quality measurement of the disk brightness evolution. This entire section of the image is within $\pm$2\% of 1.00, the level corresponding to no change. Given the small uncertainties in our photometric calibration, we conclude that there is no evidence for changes in the disk surface brightness between 1997 and 2012 at levels of 3\% or higher in the disk midplane or above it, at the locations of the warp, the MIR clump, or the CO/sub-mm continuum clumps.
\begin{figure}
\epsscale{0.95}
\plotone{Ratio_DirectImageATLANTIC.ps}
\caption{{\em Upper panel:} Logarithmically scaled Epoch 2 disk image. {\em Lower panel:} The ratio of the Epoch 1 to Epoch 2 images probes the disk surface brightness evolution between 1997 and 2012. The ratio image inside 3\farcs0 is dominated by PSF residuals from the Epoch 1 image. Between 3\farcs0 and 5\farcs5 in the disk midplane no significant changes are observed in the disk structure. The images have been rotated by 60.9$^\circ$ counter-clockwise from the standard north-up orientation to align the disk with the x-axis. \label{Fig-Difference}}
\end{figure}
\begin{figure}
\epsscale{0.95}
\plotone{ZoomATLANTIC_99}
\caption{{\em Top:} STIS direct image of the disk on a logarithmic scale and the velocity-integrated CO intensity from the ALMA observations by \citet[][]{Dent2014}. {\em Bottom:} Relative change between Epochs 1 and 2 in the SW wing. In the disk midplane between 3--6" our measurements do not show changes greater than 3\%. The images have been rotated by 60.9$^\circ$ counter-clockwise from the standard north-up orientation to align the disk with the x-axis. \label{Fig-Difference_Zoom}}
\end{figure}
\begin{figure}
\epsscale{0.95}
\plotone{ProjectedMovement.ps}
\caption{The red-shaded contours show the difference in the projected location of a particle (in arcseconds) on a Keplerian orbit around $\beta$~Pic between our Epoch 1 and 2 observations. The yellow curves correspond to 3", 4", and 5" projected separations from $\beta$~Pic. The STIS pixel scale is 0\farcs0507 and the predicted projected orbital motion is greater than a resolution element over the entire plot. Our observations place a 3\% upper limit on the difference in the mid-plane disk surface brightness over 14.5 years, effectively excluding the presence of single dust clumps contributing at this level to the disk above the 3" yellow curve. \label{OrbitalMotionTimescale}}
\end{figure}
In Fig.~\ref{OrbitalMotionTimescale} we compare the projected Keplerian motion for circular orbits over our STIS baseline (14.5 years) as a function of the orbital semi-major axis (x-axis) and the ratio of the projected separation and the semi-major axis (y-axis). For example, the upper edge of the plot (y=1) shows the projected motions if the particle is seen at quadrature; smaller y values correspond to narrower observer-star-orbital position angles. The red-shaded contours show the projected motion in arcseconds, ranging from 0\farcs1 to 0\farcs9 in the figure. Yellow contours mark the x/y combinations that correspond to 3\farcs0, 4\farcs0, and 5\farcs0 separations from the star. Based on the comparison of our Epoch 1 and Epoch 2 STIS images we excluded changes in the disk mid-plane ($\pm$0\farcs25) greater than 3\% at separations of 3\farcs0 to 5\farcs0.
The resolution element of the STIS images is approximately 2 pixels or 0\farcs101. This value is smaller than the {\em smallest} projected Keplerian motion possible within the bounds of Fig.~\ref{OrbitalMotionTimescale}.
Therefore, we conclude that any point source on a Keplerian orbit in or near the disk mid-plane contributing more than 3\% to the disk surface brightness, with a projected separation between 3\farcs0 and 5\farcs0, would have led to a detectable change in our two-epoch comparison. This area covers part of the disk warp as well as the SW extension of the dust continuum seen in the ALMA images. It does not directly probe, however, the peak of the ALMA-detected integrated CO clump and the MIR-detected dust clump located at 2\farcs7 projected separation.
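The quantities behind Fig.~\ref{OrbitalMotionTimescale} can be reproduced with a short calculation; the sketch below (Python) evaluates the projected motion of a particle on a circular, edge-on Keplerian orbit over the 14.5-year baseline, where the adopted stellar mass and distance are assumptions made for this illustration.
\begin{verbatim}
import numpy as np

M_STAR = 1.75   # assumed stellar mass [M_sun]
D_PC = 19.44    # assumed distance [pc]
DT = 14.5       # baseline between the two epochs [yr]

def projected_motion(a_au, sep_over_a):
    """Change in projected position [arcsec] over DT years for a
    circular orbit of semi-major axis a_au seen edge-on, starting
    at a projected separation of sep_over_a * a_au."""
    P = np.sqrt(a_au ** 3 / M_STAR)       # period [yr], Kepler III
    phi0 = np.arccos(sep_over_a)          # orbital phase, epoch 1
    phi1 = phi0 + 2.0 * np.pi * DT / P    # orbital phase, epoch 2
    x0 = a_au * np.cos(phi0)              # projected coord [au]
    x1 = a_au * np.cos(phi1)
    return np.abs(x1 - x0) / D_PC         # arcsec

# Example: a particle at a = 80 au seen at 0.9 of its semi-major
# axis moves by a readily detectable amount over the baseline:
print(f"{projected_motion(80.0, 0.9):.2f} arcsec over {DT} yr")
\end{verbatim}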
\subsection{Temporal Evolution Constraints on Extended Disk Structures}
In this section we use a simple approach to translate the upper limits on changes in the disk surface brightness (see \S~\ref{S:TemporalChanges} and Fig.~\ref{Fig-Difference}) into upper limits on the time variations in the disk density structure, considering a complicating factor: the disk's edge-on orientation. We synthesized images of a generic dust cloud around $\beta$~Pic assuming an isotropic scattering phase function. The model images contain azimuthal structures in the shape of simple cosine perturbations of the dust density, expressed in wavenumbers ($m$). These structures rotate and shear with the Keplerian shear. We compared these models to the data in the region 3--6" from the star on the SW side, where our data are the best.
The model has an unperturbed dust density with a radial power law index of $-1$, and a vertical distribution given by the \citet[][]{Kelsall1998} model of the solar zodiacal cloud. To simulate the perturbations, we multiplied the density distribution by the function:
\begin{equation}
1 + A \cos\left[ m \left( \Delta_{\rm hel} - \Omega t \right) - \theta_{\rm ripple} \right]
\end{equation}
where $A$ is the amplitude of the ripple, $\Delta_{\rm hel}$ is the stellocentric longitude, $m$ is the azimuthal wavenumber, $t$ is time, $\Omega$ is the local Keplerian angular frequency, and $\theta_{\rm ripple}$ is the phase of the wave at time zero. This representation is equivalent to a cosine function that winds up with the Keplerian shear.
We created images at times $-7.5$ and $+7.5$ years (via $t$, see Fig.~\ref{MarcKuchnerImages}) and took the ratio of the fluxes in a slice through the midplane. We then repeatedly rotated the disk slightly (via $\theta_{\rm ripple}$) and calculated the r.m.s. of the ratios to average over rotation angles. We repeated this process for each of several angular wavenumbers and plotted slices to generate predictions for the signal as a function of radius.
Only the $m=1$ wavenumber yielded any substantial signal in the 3--6" region for amplitudes $A<1$. Figure~\ref{MarcKuchnerPlot} shows the rotation-averaged $m=1$ model compared to the ratio of the $\beta$~Pic images. A dashed line shows the 3\% uncertainty in the ratio. The $m=1$ model in this figure has an amplitude of 50\%, i.e. $A=0.5$. This 50\% perturbation is marginally consistent with the detected change in the ratio. An $m=1$ perturbation of this kind might be produced via an avalanche of dust from a single localized event -- a large collision, for example (see \citealt[][]{Kral2014}).
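The perturbation and its Keplerian wind-up can be sketched directly from the equation above; in the minimal example below (Python/NumPy) the radial grid, the $r^{-1}$ unperturbed density, and the stellar mass are placeholder assumptions.
\begin{verbatim}
import numpy as np

M_STAR = 1.75  # assumed stellar mass [M_sun]

def perturbed_density(r_au, theta, t_yr, A=0.5, m=1, theta_rip=0.0):
    """Dust density with a cosine azimuthal perturbation that winds
    up with the Keplerian shear:
    rho ~ r^-1 * (1 + A cos[m (theta - Omega t) - theta_ripple]),
    where Omega is the local Keplerian angular frequency."""
    omega = 2.0 * np.pi * np.sqrt(M_STAR / r_au ** 3)  # rad/yr
    ripple = 1.0 + A * np.cos(m * (theta - omega * t_yr) - theta_rip)
    return r_au ** -1.0 * ripple

# Face-on polar grid evaluated at t = -7.5 and +7.5 yr:
r = np.linspace(20.0, 140.0, 200)[:, None]    # au
th = np.linspace(0.0, 2.0 * np.pi, 360)[None, :]
epoch1 = perturbed_density(r, th, -7.5)
epoch2 = perturbed_density(r, th, +7.5)
ratio = epoch1 / epoch2                       # two-epoch ratio map
print(f"max fractional change: {np.abs(ratio - 1).max():.2%}")
\end{verbatim}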
\begin{figure}
\epsscale{0.9}
\plotone{Kuchner-Mode_A.eps}
\caption{Face-on images of the $m=3$ mode, at time 0 and 15 years later, on a logarithmic scale.
For the plot of the predicted temporal evolution signal shown in Fig.~\ref{MarcKuchnerPlot}, edge-on images were used. \label{MarcKuchnerImages}}
\end{figure}
\begin{figure}
\epsscale{0.9}
\plotone{betapiclimitfig2.ps}
\caption{Change in surface brightness between Epoch 1 and 2 for the angular wavenumber $m=1$, assuming 50\% perturbations, i.e. $A=0.5$. Our upper limit of 3\% beyond 3" argues for absent or very low-amplitude low-order wavenumbers. \label{MarcKuchnerPlot}}
\end{figure}
Our simple model shows that our tight upper limit on the changes ($<$3\% at 3" and beyond) requires a disk that is devoid of high-amplitude, low-order angular modes and of small, isolated structures; instead, $\beta$~Pic appears azimuthally homogeneous at the radii probed.
We point out that the methodology described here can be used for future multi-epoch observations of disks; it demonstrates that sensitive multi-epoch observations of nearby debris disks can provide powerful constraints on the azimuthal structure of a disk.
Our current observations are limited by the accuracy of the lower-quality first-epoch STIS image (1997). For the well-sampled disk regions (mid-plane, beyond 3\farcs0) our comparison yields an upper limit of only 2--3\%, the most accurate such comparison yet. Interestingly, even at these large separations and long periods, the 14.5-year baseline results in large and easily detectable projected Keplerian motions (see Fig.~\ref{OrbitalMotionTimescale}). The precision reached shows that further follow-up observations separated by just 3 years from our 2012 images would yield images capable of probing orbital motions at 2\farcs7, the location of the MIR and CO clumps.
Similarly, image pairs separated by only 3 years, taken by the James Webb Space Telescope or the ALMA sub-millimeter interferometer, should be able to sensitively probe Keplerian motions in the disk.
\section{Possible Origins of the Disk Structure in the $\beta$ Pic System}
\label{Sect:Models}
We now contrast disk structure models with the high-quality multi-wavelength images (\S\,\ref{S:DiskStructure}) that cover the $\beta$~Pic disk and with the
improved orbital fits to $\beta$~Pic~b (\S\,\ref{S:Orbit}). We also use simple models to interpret our upper limits on temporal variations (\S\,\ref{S:TemporalChanges}) and
demonstrate the future potential of such measurements.
\subsection{Planet-induced Disk Structures}
Multiple models have been proposed to explain the asymmetries of the $\beta$~Pic disk. \citet[][]{Mouillet1997} and \citet[][]{Augereau2001} introduced a dynamical model that accounts for many of the observed properties of the $\beta$~Pic disk and successfully predicted the presence of a massive planetary companion in the disk. This model assumed a giant planet on an orbit inclined by 3$^\circ$ with respect to a planetesimal belt, which extends up to $\sim$120~au. A collisional cascade within this planetesimal belt replenishes the dust grain population, and the smallest grains (with sizes below the blow-out size) are efficiently moved by radiation pressure from the planetesimal belt to more distant orbits. These small grains are responsible for the very large scattered-light disk seen beyond $\sim$120~au.
In this model the warp in the inner disk is a direct result of the gravitational perturbation of the planetesimal belt by the giant planet. The inclined orbit of the giant planet forces the precession of the planetesimals' orbits, which leads to a warp. The outer radius of the warp slowly grows in time, as the effect of the perturbations gradually accumulates for planetesimals on wider orbits. In addition, their simulations show that the small dust grains generated by a collisional cascade in the warp are subsequently blown out and provide an extended, asymmetric structure, fully consistent with the prominent butterfly asymmetry.
The disk structure predicted by the \citet[][]{Augereau2001} model continues to show excellent agreement with the {\em axisymmetric} disk structures in $\beta$~Pic, including the radial surface brightness profile (\S\,\ref{DiskSurfaceBrightnessProfiles}), the radially-normalized vertical disk profile (\S\,\ref{Section:VerticalStructure}), and the disk warp (\S\,\ref{SectionWarp}), as well as the large-scale structure and the inner disk structure (\S\,\ref{S:Orbit}). The model has only two assumptions: a planetesimal belt extending out to 120--150~au and the presence of a giant planet on an inclined orbit.
The latter prediction has been verified through the detection of $\beta$~Pic~b, and the former is supported by the sub-mm observations (SMA: \citealt[][]{Wilner2011}; ALMA: \citealt[][]{Dent2014}).
\subsection{A Simple Model for the Warp Morphology}
The inner disk morphology offers important insights into the disk structure and we employ here a simple model aiming to explain the variation in the deviations above and below a linear disk mid-plane with stellocentric distance characterized by the brightest points along the disk major axis (see Fig.~\ref{VertExpansion}).
We explore the morphology of the warp in the inner disk by applying a model based on \citet[][]{Wyatt2005} and its extension (see \citealt[][]{MatthewsPPVI}). We assume an edge-on planetesimal disk with the planetesimals' semi-major axes logarithmically spaced from 40--140~au, and assume that the disk surface brightness is scattered light directly proportional to the surface density of the planetesimals whose mutual collisions produce the light-scattering dust. We placed a mass representing $\beta$~Pic~b on an inclined orbit ($i_{pl}=2^\circ$) and allowed the system to evolve for 10~Myr. This is, of course, a toy model with several important simplifications: First, the system parameters (semi-major axes, inclinations, etc.) have not been optimized in any way; second, the model only follows planetesimals and not dust grains, which are affected by radiation pressure. Nevertheless, we expect the model to qualitatively reproduce the first-order morphology of the disk, and it thus serves as a useful way to probe the $\beta$~Pic disk morphology.
In the right panel of Fig.~\ref{MarkWyattModels} we show the vertical location of the peak brightness at each radius (peak brightness profile) from this model. We vary the line-of-nodes of the planet ($\Omega_{pl}$) and show the resulting peak brightness profiles compared to a peak brightness profile derived from the observed disk image in an identical way. We note that this representation is very similar, but not identical, to the more detailed fitting procedure described in \S\,\ref{Section:VerticalStructure}.
\begin{figure}
\epsscale{1.0}
\plotone{Wyatt_Combined.eps}
\caption{ {\em Left:} Our simple model for disk-planet interactions predicts a tightly wound spiral in the density distribution of the disk if the planet is on an eccentric orbit. {\em Right:} The vertical location of the peak disk brightness at each radius as a function of the orientation of the line-of-nodes (in color) and the same observed quantity (thick solid line). \label{MarkWyattModels}}
\end{figure}
Our simple toy model leads to three conclusions: First, we find that the orbit and mass of $\beta$~Pic~b and the age of the system are such that the secular perturbations from the planet would be expected to impose a warp on the disk at a radial location and with a magnitude compatible with those observed. This conclusion is in line with previous studies \citep[e.g.][]{Augereau2001} and confirms the trend in spite of the simplicity of our approach.
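To illustrate the first conclusion quantitatively, the sketch below estimates the radius out to which the planet's secular perturbations have had time to act, using the classical Laplace--Lagrange nodal precession rate for an exterior test particle. The inputs (a $9$~M$_{\rm Jup}$ planet at $a_{pl}=9$~au, $M_*=1.75$~M$_\odot$, an age of 10~Myr, and a crude threshold of $\pi$ radians of accumulated precession) are assumptions adopted for this order-of-magnitude estimate only:
\begin{verbatim}
import math

def b32_1(alpha, n=4096):
    # Laplace coefficient b^(1)_{3/2}(alpha), by midpoint quadrature
    s = sum(math.cos(p) / (1.0 - 2.0*alpha*math.cos(p) + alpha**2)**1.5
            for p in ((k + 0.5) * math.pi / n for k in range(n)))
    return 2.0 * s / n

def nodal_rate(a, a_pl=9.0, m_pl=9.0*9.54e-4, m_star=1.75):
    # |dOmega/dt| [rad/yr] of a test particle at a [au] (Laplace-Lagrange)
    alpha = a_pl / a
    n_mot = 2.0 * math.pi * math.sqrt(m_star / a**3)   # mean motion [rad/yr]
    return 0.25 * n_mot * (m_pl / m_star) * alpha * b32_1(alpha)

def warp_radius(t_yr, target=math.pi, lo=15.0, hi=500.0):
    # bisect for the radius where the accumulated precession equals `target`
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if nodal_rate(mid) * t_yr > target else (lo, mid)
    return 0.5 * (lo + hi)

print("warp radius ~ %.0f au after 10 Myr" % warp_radius(1.0e7))
\end{verbatim}
For these inputs the threshold radius is of order 100~au, within a factor of $\sim$2 of the observed warp region; since the accumulated precession scales as $t\,a^{-7/2}$ far from the planet, the result depends only weakly (as $t^{2/7}$) on the assumed age.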
Second, our model demonstrates the importance of constraining the shape of the warp in the inner region explored by our STIS observations, because that shape constrains the orientation of the planet's orbital plane, specifically whether the line-of-nodes is in the plane of the sky ($\Omega=0^\circ$) or oriented along the line-of-sight ($\Omega=90^\circ$). If $\Omega_{pl}=90^\circ$ then the plane of symmetry in the inner disk would be a straight line connecting the warp on the two sides of the disk. However, if $\Omega_{pl}=0^\circ$ then the inner disk would be aligned with the outer disk.
Early models had noted that this parameter affected the magnitude of the warp \citep[][]{Mouillet1997}, but since that magnitude was maximized for $\Omega=90^\circ$ (Fig.~6 in that paper), later models only considered the $\Omega=90^\circ$ case. The fact that the Gaussian fits in Fig.~\ref{VertExpansion} find the inner regions to be well aligned with the outer disk seems to favor a model with $\Omega=0^\circ$ (see Fig.~\ref{MarkWyattModels}).
However, the orbital fits in Table~\ref{T:Orbitalelements} favor a value closer to $90^\circ$.
To explore this further would require a detailed model addressing the simplifications mentioned in the first paragraph of this section, which is beyond the scope of the current paper.
Third, our model also highlights the point that if the planet has an eccentric orbit then the same secular perturbations that result in the warp will also have caused the disk to develop a tightly wound spiral structure (left panel in Fig.~\ref{MarkWyattModels}; e.g., \citealt[][]{Wyatt2005}, and also \S\,4.3 of \citealt[][]{Mouillet1997}). The spiral does not significantly affect the structure of the warp because the evolution of eccentricities and inclinations is decoupled for low $e$ and $I$. However, the spiral could contribute a small brightness asymmetry to the disk. We note that this spiral would be at the same radial location as the warp, since secular perturbations make orbital planes and pericenters precess at the same rate, which is also the same location as the clump. A more comprehensive modeling of the time-evolving morphology introduced by the projection of the spiral structure would be valuable, but is beyond the scope of the current paper.
With the rapidly improving orbital period and mass estimates for $\beta$~Pic~b and the high-quality multi-wavelength images at hand, a more exhaustive study of the $\beta$~Pic disk dynamics is well motivated, but it is beyond the scope of the current paper.
\subsection{Open Questions: NE-SW Asymmetry and the Origin of a Super-jupiter on Inclined Orbit}
While all the axisymmetric structures observed in the disk are consistent with the predictions of the models describing the dynamical interactions of a planetesimal belt and a giant planet on an inclined orbit, these models cannot reproduce the prominent NE--SW asymmetries
(see Table~\ref{T:Asymmetries} and \S\,\ref{S:DiskStructure}). This fact highlights the two fundamental open questions on $\beta$~Pic: {\em What is the origin of the NE-SW asymmetries?} and {\em What is the origin of the $\beta$~Pic~b super-jupiter's inclined orbit?}
While our observations do not directly help to answer the latter question, they provide some constraints on the different scenarios proposed to explain the former.
The mid-infrared observations by \citet[][]{Telesco2005} and \citet[][]{Li2012} show that the dust grain population in the bright dust clump in the SW wing is different (possibly smaller grains) than that typical of the rest of the disk. These authors argued for the possibility of a single, recent collision injecting copious amounts of dust in the system. The recent ALMA images by \citet[][]{Dent2014} show that the SW clump not only has a different dust population, but also contains gas-phase CO. Given that the UV-photodissociation lifetime of gas-phase CO exposed to the interstellar UV field in an optically thin disk is $\sim$120 years \citep[][]{Visser2009}, the observed CO gas must have been released recently, consistent with a recent major collision (\citealt[][]{Telesco2005} and \citealt[][]{Dent2014}). The estimated mass in the SW dust clump detected by ALMA argues for a Mars-sized parent body, assuming that about 10\% of its mass has been released as debris \citep[][]{Dent2014}.
A similar recent major collision has been proposed by \citet[][]{Stark2014} as one of two possibilities to explain the asymmetries observed in the HD~181327 debris disk. To avoid the improbability of having witnessed such a massive collision in $\beta$~Pic only 120 years ago, \citet[][]{Jackson2014} pointed out that the asymmetric dust distribution resulting from such an event can last up to 0.5~Myr if the dust clumps mark the location where the collision occurred, since debris released by the collision will have a variety of orbits but those orbits will converge at the collision point. However, this scenario would predict that the dust clump's location is constant (i.e., not on a Keplerian orbit), which is marginally inconsistent with the tentative detection of orbital motion of the clump \citep[][]{Li2012}.
An alternative scenario has been proposed for $\beta$~Pic based on work by \citet[][]{Wyatt2003} and \citet[][]{Wyatt2006}. In this picture a planet -- during its outward migration -- captures planetesimals in mean motion resonances. The frequent mutual collisions of these planetesimals give rise to dust clumps that trace the planetesimal population and thus produce resonant structures that co-rotate with the planet. This scenario predicts a $>$10~M$_{Earth}$ planet at the inner edge of the dust belt (with half the orbital period of the belt). Such a resonant-structure scenario has been invoked for $\beta$~Pic by both \citet[][]{Telesco2005} and \citet[][]{Dent2014}.
Perhaps the most directly testable key difference between these two scenarios is the SW dust clump's orbital motion or its absence: if the clump is on orbit (as suggested by \citealt[][]{Li2012}) then this would argue for it being a resonant structure. If its location is constant, the clump would likely be a result of a recent collision.
It is important to point out that neither of the above scenarios for the origin of the SW emission invokes the dynamics proposed to explain the warp. Yet, the warp is at exactly the same radial location as the clump, which may be either a coincidence or the sign of a causal connection. There may be scenarios in which the warp aids the formation of the clump. For example, if $\beta$~Pic~b is on an eccentric orbit then there will be a spiral pattern at the location of the warp. The resulting brightness asymmetry is much smaller than that observed. However, we also know that this is also the location where orbits of debris that were previously not overlapping start to cross \citep[][]{MustillWyatt2009}, which might increase the chances of seeing a recent collision at that location. For the second scenario there is no causal connection, but the spatial coincidence is not surprising, since the orbit of the outer planet (the one with the resonantly trapped planetesimals) would be aligned with that of $\beta$~Pic~b and would also have aligned the resonant planetesimals with its orbit.
\section{Summary}
\label{Summary}
We present new and re-reduced archival HST/STIS coronagraphic optical imaging of the $\beta$~Pictoris debris disk. Our data provide the highest signal-to-noise and smallest inner working angle optical images of the disk to date. Based on our images we characterize in detail the
disk structure and compare the optical images to published images taken at wavelengths ranging from the near-infrared to mm.
The key findings of the paper are the following:
1) Our high-contrast images provide continuous coverage of the inner disk at radii from 0\farcs35 to 13" in unfiltered light. The inner working angle is about $2\times$ smaller than that of previous optical images and allows us to image the disk at the location where the planet $\beta$~Pic~b was at the time of the observations (2012).
2) We identify radial regions with constant radial surface brightness slopes. We show that corresponding
regions in the northeastern and southwestern wings have similar, but slightly different surface brightness slopes, indicating an asymmetry in the disk.
3) We find that the surface brightness slope between 0\farcs4 -- 2\farcs0 is constant, arguing against any significant changes in the disk structure at or adjacent to the most likely semi-major axis of $\beta$~Pic b.
4) We present the first optical images of the inner structure of the main disk and its vertical extension. While our images do not show the presence of a separate secondary disk as suggested by \citet[][]{Golimowski2006}, preliminary comparison to two-component scattered light disk models suggests that our observations are consistent with the warp being caused by the projection of an inner, inclined disk onto the outer disk.
5) NE--SW asymmetry: we confirm that the two wings of the disk have different radial surface brightness slopes and that the SW wing is brighter than the NE wing in the inner 8\farcs0 but fainter beyond. We show that the same asymmetry extends to the innermost disk, down to at least 0\farcs5.
6) The angle of the disk warp (seen in projection) is approximately $4^\circ$, significantly larger than the best-fit inclination of $\beta$~Pic~b's orbit ($0.7^\circ$).
7) Careful comparison of STIS images obtained over a 15-year baseline (in 1997 and in 2012) shows no difference greater than 3\% in the disk surface brightness at the locations of the disk warp and of the CO and sub-mm continuum clumps (SW 3--5"). Similarly, we do not detect any difference between the radial surface brightness profiles.
8) We compile all disk asymmetries seen at wavelengths ranging from the optical through the mid-infrared to the sub-mm. We divide the asymmetries into two groups: apparently axisymmetric and apparently non-axisymmetric. The axisymmetric disk structures appear to be fully consistent with the structures predicted by the models of \citet[][]{Mouillet1997} and \citet[][]{Augereau2001}, but the non-axisymmetric structures argue for either a recent major collision or the presence of planetesimals on resonant orbits with a yet unseen planet at $\sim$80~au.
9) We show that over a baseline of just a few years projected Keplerian motions are sufficiently large to allow detection with existing and near-future facilities. We argue that, in particular, HST, JWST, and ALMA multi-epoch observations will provide powerful constraints on the azimuthal structure, dynamics, and disk-planet interactions in $\beta$~Pic and other nearby debris disks.
Our work leads to two fundamental questions required to explain the properties and origin of the $\beta$~Pic system: {\em How did the super-jupiter $\beta$~Pic~b end up on an inclined orbit?} and {\em What is the origin of the NE--SW asymmetry?} We discuss different scenarios and show that time-resolved studies of Keplerian motions in the $\beta$~Pic system can be a powerful way to discriminate between these.
\acknowledgments
We thank STScI program coordinator Tricia Royle, contact scientist John Debes, and Charles Proffitt for their dedicated support of this program. We thank Jean-Charles Augereau, Rebecca Dawson, Andras Gaspar, Paul Kalas, John Debes, Bill Dent, and Kate Su, among others, for valuable discussions. We thank the anonymous referee, whose timely report has helped to improve the interpretation of our results and the clarity of the manuscript.
We thank the entire Servicing Mission 4 crew for restoring HST and STIS operations.
The paper also benefited from the presentations and discussions at the workshop Beta Pictoris at 30, Paris.
Support for Program number 12551 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. MCW is grateful for support from the EU through ERC grant number 279973. We acknowledge support from the French National Research Agency (ANR) through the grant ANR10-BLANC0504-01.
{\it Facilities:} \facility{HST (STIS)}, \facility{ALMA}, \facility{VLT (NACO)}, \facility{Gemini (TRECS)}.
\clearpage
\bibliographystyle{aa}
\section*{Acknowledgment}
This research was supported in part by grants from CAPES, CNPq, and
FAPERJ (Brazil), and by NSF under grant CNS-1065133.
\bibliographystyle{elsarticle-num}
\section{Predicting bursty departures}
\label{sec:application}
The model presented in Section \ref{sec:model} can be used to estimate the
number of departures that occur in a burst. In particular, consider the
arrival of a leecher that initiates a busy period (i.e., the first arrival to
a swarm with no leechers). In the following, we estimate the average number
of peers that depart the swarm in a burst together with the leecher that
initiated the busy period.
In practice, bursty departures do not occur at exactly the same time due to
variations inherent to the network and to the absence of protocol mechanisms
that enforce synchronization between peers (e.g., peers do
not request pieces at exactly the same instant).
However, our model does not take
these factors into account and, thus, we focus on leechers that leave the swarm
at exactly the same time as the first leecher.
Let $f$ denote the first leecher of a busy period and assume that leecher
arrivals follow a Poisson process with rate $\lambda$. Also, as assumed by
the model, a seed is always present and has uplink capacity of $c_s$, while
leechers have identical uplink capacities equal to $c_l$. Finally,
let $S$ denote the number of pieces of the content.
We know that each leecher downloads pieces from the seed at rate $c_s/N$, where
$N$ is the number of peers in the swarm. These pieces are interesting to all the
other $N-1$ peers and can be sent to them. Thus, if $c_l <
c_s\times\frac{N-1}{N}$, leechers will start to accumulate pieces received from
the seed which cannot be uploaded to the other peers. Therefore, every leecher will own pieces
interesting to all of its neighbors. Consequently, the upload rate between any two
leechers $i$ and $j$ will be equal to $u_{ij} = c_l/(N-1)$, since $g_{ij} =
\infty$ (see Equation~(\ref{eq:model_equations2})). We conclude that when $c_l <
c_s\times\frac{N-1}{N}$, all leechers have the same download rate, which prevents
other leechers from departing in a burst with $f$.
Conversely, when $c_l\geq c_s\times\frac{N-1}{N}$, the neighbors of $f$ can
upload
to it the pieces they download from the seed. Since
leecher $f$ downloads from the seed at rate $c_s/N$ and each of its $N-1$
neighbors receives
pieces from the seed and uploads them to $f$ at the same rate, $f$ will download
the content at a constant rate equal to $c_s$,
independently of the number of peers in the swarm.
Note that $c_s$ is also the upper bound on the average download rate, as the seed
cannot upload pieces into the swarm at a faster rate. Hence, leecher $f$ will take
$T = S/c_s$ seconds to finish the download.
We now show how to calculate the lower and upper bounds to the number of bursty
departures when $c_l \geq c_s\times\frac{N-1}{N}$.
Consider arrivals that occur while peer $f$ is in the swarm. The number of such
arrivals, say $n=N-1$, is a random variable and follows a Poisson distribution with
mean $\lambda T$. The download rates of these leechers are a function
of $n$ and also of their instants of arrival. Moreover, as discussed in
Section~\ref{sec:validation}, larger values of $n$ imply a larger spread in the
download rates. To obtain a conservative lower
and upper bound on these download rates, we will consider a sufficiently large
value for $n$. In particular, we use the 99-th percentile of $n$, namely $n_{99}$, and
thus, $P[n \leq n_{99}] \geq 0.99$.
Given that exactly $n_{99}$ leechers will join the swarm before the departure of
$f$, we can use the model to obtain the minimum and maximum download rates of these
peers, independently of their inter-arrival timing. Let $d_{\min}$ and $d_{\max}$
be, respectively, the minimum and the maximum download rates obtained from the model
given that the swarm has $n_{99}+1$ leechers.
Consider again the subsets presented in the previous section, namely $A$
(leechers with the same number of pieces as $f$) and $B$ (leechers with fewer
pieces than $f$). The minimum download rate is obtained by a leecher $m$ in $B$ when
the only leecher in $A$ is $f$. In this case, the download rate $d_{\min}$ is given by
\begin{equation}
d_{\min} = \frac{c_s}{n_{99}+1} + u_{f,m} + \sum_{i \in B} u_{i,m},
\end{equation}
where $\sum_{i \in B} u_{i,m}$ corresponds to the sum of the rates at which $m$
downloads from peers in $B$.
On the other hand, a leecher $m$ in $B$ obtains the maximum download rate
$d_{\max}$ when it is the only peer in $B$, i.e., $|A|=n_{99}$. In this
case, the download rate is given by
\begin{equation}
d_{\max} = \frac{c_s}{n_{99}+1} + \sum_{i \in A} u_{i,m}.
\end{equation}
The minimum and maximum time for
the leechers to download the content are, respectively, $S/d_{max}$ and $S/d_{min}$.
Therefore, at least all leechers that arrive before $T - S/d_{min}$ will
leave the swarm together in a burst with $f$. The expected number of peers that
will arrive within this time period, $B_{min}$, is given by
\begin{equation}
B_{min} = \lambda \, \left( T - \frac{S}{d_{min}} \right).
\label{eq:B_min}
\end{equation}
Similarly, at most all leechers that arrive before $T - S / d_{max}$ will
leave the swarm in a burst with $f$. The expected number of peers that will arrive
within this time period, $B_{max}$, is given by
\begin{equation}
B_{max} = \lambda \, \left( T-\frac{S}{d_{max}} \right).
\label{eq:B_max}
\end{equation}
Finally, $B_{min}$ and $B_{max}$ provide a lower and upper bound for the average
number of leechers that will depart the swarm in a burst with $f$.
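The sketch below (in Python, using SciPy for the Poisson percentile) shows how the bounds can be evaluated; the piece size of 256~kB is consistent with the convention used elsewhere in the paper ($S=80$ pieces $=$ 20~MB), so $S=1000$ pieces corresponds to 256{,}000~kB. Since $d_{\min}$ requires running the full allocation model of Section~\ref{sec:model}, it is left as an input; $d_{\max}$, in contrast, has a closed form in the special case $B=\{m\}$, because Equations~(\ref{eq:bandwidth-requirements2}) and (\ref{eq:model_equations2}) imply that each of the $n_{99}$ peers in $A$ serves every other member of $A$ at rate $c_s/N$ and devotes its remaining capacity to $m$:
\begin{verbatim}
from scipy.stats import poisson

PIECE_KB = 256.0   # piece size [kB]; elsewhere in the paper 80 pieces = 20 MB

def burst_bounds(lam, c_s, c_l, S_pieces, d_min=None):
    S = S_pieces * PIECE_KB                  # content size [kB]
    T = S / c_s                              # download time of leecher f [s]
    n99 = int(poisson.ppf(0.99, lam * T))    # 99th percentile of arrivals in T
    N = n99 + 1                              # swarm size including f
    # special case |B| = 1: every peer in A serves the other A-peers at c_s/N
    # and gives the rest of its capacity to m (requires c_l >= c_s*(N-1)/N)
    d_max = c_s / N + n99 * (c_l - (n99 - 1) * c_s / N)
    B_max = lam * (T - S / d_max)
    B_min = lam * (T - S / d_min) if d_min else None   # d_min from full model
    return lam * T, B_min, B_max

# reproduces the c_s = 48 kBps row of the table: E[N] = 5.333, B_max = 4.378
print(burst_bounds(1.0 / 1000, 48.0, 64.0, 1000))
\end{verbatim}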
\begin{table}[!t]
\centering
\caption{Bounds for the expected number of leechers that depart in a burst with $f$, for $\lambda=1/1000$.}
\label{tab:burst-bounds}
\begin{tabular}{c|c|c|c|c|c}
$c_{s}$ & \multirow{2}{*}{$E[N]$} & \multirow{2}{*}{$B_{min}$} & \multirow{2}{*}{$B_{max}$}
& \multirow{2}{*}{$\frac{\textrm{$B_{min}$}}{E[N]}$}
& \multirow{2}{*}{$\frac{\textrm{$B_{max}$}}{E[N]}$}\\
(kBps) & & & & &\\
\hline
48 & 5.333 & 1.667 & 4.378 & 0.312 & 0.821\\
64 & 4.000 & 0.400 & 1.895 & 0.100 & 0.474\\
\end{tabular}
\end{table}
Table~\ref{tab:burst-bounds} shows the expected number of arrivals to the swarm
before $f$ departs, $E[N]$, which is simply $\lambda T$, and both the lower and
upper bounds $B_{min}$ and $B_{max}$, respectively. The table shows numerical
results for different $c_s$ values but with $c_l = 64$ kBps and $\lambda = 1/1000$.
The results indicate that the average number of peers that depart the swarm in a burst
with $f$ can be significant: between 31\% and 82\% of all arrivals when the seed
is slower than the leechers and between 10\% and 47\% when they have the same
upload capacity. We also observe that these ratios decrease as $c_s$ increases,
indicating that bursty departures are less likely to occur with faster seeds.
Recall that, as indicated above, there is a minimum value of $c_s$ above which
bursty departures do not occur.
\section{BT overview and the observed behavior}
\label{sec:bt}
\subsection{Brief BT overview}
BT is a swarm-based file sharing P2P application. A swarm is a set of users
(peers) interested in downloading and/or sharing the same content (a single or a
bundle of files). The content is chopped into pieces (chunks) which are
exchanged among peers connected to the swarm. The entities in a swarm may be of
three different types: (i) the seeds, which are peers that have a complete copy
of the content and are still connected to the system, altruistically uploading
data to other peers; (ii) the leechers, which are peers that have not yet fully
recovered the content and are actively downloading and simultaneously uploading
chunks; and (iii) the tracker, a kind of swarm coordinator that
keeps track of the leechers and seeds connected to the swarm.
Periodically, the tracker distributes lists with a random subset of peers
connected to the swarm to promote the interaction among participating peers. In
a first interaction, two peers exchange their bitmaps (a list of all file chunks
they have downloaded). Any update to its bitmap is subsequently
reported by a leecher to its neighbors.
In order to receive new chunks, the leecher must send ``Interested'' messages to
all peers that have announced the wanted pieces in their respective bitmaps.
Because of the rarest-first policy specified in the BT protocol, leechers
download first the chunks that are scarcest in the swarm. Once a
sub-piece of any chunk is received, the ``strict priority'' policy defines that
the remaining sub-pieces from that particular chunk must be requested before
starting the download of any other chunk.
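For concreteness, a minimal sketch of the rarest-first choice is shown below (illustrative only: ties, sub-pieces, and the strict-priority rule are ignored, and bitmaps are represented as sets of piece indices):
\begin{verbatim}
from collections import Counter

def pick_rarest(my_pieces, neighbor_bitmaps):
    # count how many neighbors hold each piece
    counts = Counter()
    for bitmap in neighbor_bitmaps:
        counts.update(bitmap)
    wanted = set(counts) - set(my_pieces)    # available pieces we still lack
    return min(wanted, key=lambda p: counts[p]) if wanted else None

print(pick_rarest({0, 1}, [{0, 2, 3}, {1, 2}, {2, 3}]))   # -> 3 (rarest)
\end{verbatim}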
Whenever an ``Interested'' message is received, peers have to decide whether to
``unchoke'' that leecher and serve the piece or to ``choke'' the peer and ignore
the request. Leechers preferentially upload content to other leechers that
reciprocate, following the ``tit-for-tat'' incentive strategy defined
by the BT protocol. More precisely, a leecher allocates a major fraction of its bandwidth
to serve the peers that have contributed the most to it.
However, a minor fraction of its bandwidth must be dedicated to
altruistically serve leechers that have never reciprocated. This policy,
referred to
as ``optimistic unchoke'', is useful for leechers to bootstrap new reciprocity
relationships. As the seeds do not reciprocate, they adopt the ``optimistic
unchoke'' approach all the time. These BT policies were designed with the main
purpose of giving all leechers a ``fair share'' of bandwidth. This means that
peers uploading at higher rates should achieve higher download rates and, in a
population of leechers uploading at the same rate, all should reach equal download rates.
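The choking decision can be summarized by the sketch below, a simplification with a fixed number of regular unchoke slots plus a single optimistic slot (actual clients re-evaluate these choices periodically):
\begin{verbatim}
import random

def select_unchoked(rates_from_peers, slots=3):
    # tit-for-tat: unchoke the top contributors ...
    ranked = sorted(rates_from_peers, key=rates_from_peers.get, reverse=True)
    unchoked = set(ranked[:slots])
    remaining = ranked[slots:]
    if remaining:                                # ... plus one random peer:
        unchoked.add(random.choice(remaining))   # the optimistic unchoke
    return unchoked

print(select_unchoked({'a': 50.0, 'b': 30.0, 'c': 10.0, 'd': 0.0, 'e': 0.0}))
\end{verbatim}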
\section{Conclusion}
\label{sec:conc}
This paper identifies, characterizes and models an interesting phenomenon in
BT: homogeneous peers (with respect to their upload capacity) experience
heterogeneous download rates. The behavior is pronounced in
unpopular swarms (few leechers) and has important consequences that directly
impact peer and system performance. The proposed mathematical model captures
well the heterogeneous download rates of peers and
provides fundamental insights into the root cause of the phenomenon. Namely,
the allocation of system capacity (the aggregate uplink capacity of all peers) among
leechers depends on the piece interest relationship among peers, which for
unpopular swarms is directly related to the arrival order of peers and can differ
significantly among them.
\section{General discussions}
\label{sec:disc}
We now discuss other aspects related to the described phenomenon such as
different arrival processes, what happens if the seed is not available all the
time, what happens when leechers stay as seeds for some
time, and the missing piece syndrome.
\subsection{General arrival processes}
It is interesting to consider the occurrence of the observed phenomenon in more
general scenarios. Although we have shown its prevalence under a crafted peer
arrival process and under Poisson arrivals, we claim that homogeneous peers
can have heterogeneous download rates under very general arrival patterns.
In particular, given any arrival pattern of peers into a swarm, it
is possible to choose system parameters (i.e., seed upload capacity, leecher upload capacity,
and file size) such that the effects described in this paper will be very prevalent. Intuitively,
by choosing a fast enough seed, peers will not be able to disseminate old pieces
before new ones are pushed into the swarm, and thus will have
significantly different numbers of pieces. In a sense, the
behavior observed and described in this paper is quite general, although the requirement
of the swarm being unpopular is important, as we next describe.
\begin{figure}[!t]
\centering
\captionsetup[subfloat]{margin=0.5pt}
\subfloat[Number of leechers over time]{\label{fig:popular_busyperiod}
\includegraphics[width=2.8in]{swarm_size_result_u_50_lambda_12}}
\subfloat[Empirical CCDF of the download time.]{\label{fig:popular_CCDF}
\includegraphics[width=2.8in]{download_ccdf_u50_lambda12}}
\caption{Experimental results under a popular swarm ($\lambda = 1/12$,
$c_s= 50$ kBps, $c_l=50$ kBps)}
\label{fig:popular}
\end{figure}
What happens if we consider very popular swarms, where the peer arrival rate is very large,
yielding very large swarm sizes? Figure \ref{fig:popular_busyperiod} shows experimental results of the
dynamics of leecher arrivals and departures for this scenario (Poisson arrivals with
rate $\lambda = 1/12$, uplink capacities of $c_s = 50$ kBps and $c_l = 50$
kBps) and file size $S=80$ pieces (i.e., 20 MB). The empirical CCDF of the
download time is depicted in Figure~\ref{fig:popular_CCDF}.
Interestingly, we can still observe the consequences of having heterogeneous
download rates, such as bursty departures, content synchronization
and high variability of download times (peers that leave in a large burst have different
download times, as the arrival process is well-behaved), for example, at times 600 s and 1200 s. In a
sense, the phenomenon is quite prevalent even during the busy period, but not strong
enough to end the busy period. The characterization and modeling of the phenomenon in
this scenario is much more involved, given the complicated piece-exchange dynamics
of BT and, consequently, the interest relationships among peers. We leave the investigation
of these scenarios (popular swarms) as future work.
\subsection{When the seed is not available all the time}
We have considered so far swarms that have a single seed which is always
connected. However, what happens if the seed alternately joins and leaves the swarm?
Intuitively, leechers start to synchronize their contents right after the seed
leaves because no new pieces are being placed into the swarm. After they become fully
synchronized, they will stall until the seed comes back. Then, since they are
synchronized, they will have relatively low download rates and will leave almost
at the same time. Therefore, the intermittent seed makes the average download rates even
more heterogeneous.
In order to support this claim, we modify the simulation model such that the state of the
seed (connected/disconnected) is given by an ON-OFF source. We assume that the time until the seed leaves
the ON state (leaves the swarm) is exponentially distributed with mean $1/\lambda$ (i.e., the seed's departure rate is arbitrarily set equal to the
leecher arrival rate). Furthermore, we choose the rate at which the seed goes from the OFF back to the
ON state (rejoins the swarm) so that the availability of the seed is $0.75$, $0.50$ and $0.25$.
Table~\ref{tab:on-off} summarizes the results for $\lambda=1/1000$ s, $c_s = c_l = 64$
kBps, $S= 1000$ pieces. Each scenario was simulated during $800,000$ s. We
observe that
the mean, variance and maximum download time monotonically increase, which is
expected as there are fewer resources on average. Interestingly,
the minimum download time becomes smaller. This is due to the fact that
a new leecher may arrive when some peers are stalling in the absence of the
seed, just before finishing the download.
The new leecher then benefits from the spare bandwidth capacity and
might complete the download right after the seed comes back.
Finally, it is clear that the download time (equivalently, the download
rate) becomes more heterogeneous.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c|c|c}
$P(S=\textrm{ON})$ & Mean & Variance & Minimum & Maximum\tabularnewline
\hline
1.00 & $3.360\times10^{3}$ & $9.319\times10^{4}$ & $2.295\times10^{3}$ & $4.247\times10^{3}$\tabularnewline
0.75 & $3.865\times10^{3}$ & $1.003\times10^{6}$ & $1.276\times10^{3}$ & $8.431\times10^{3}$\tabularnewline
0.50 & $5.307\times10^{3}$ & $1.062\times10^{7}$ & $5.640\times10^{3}$ & $2.518\times10^{4}$\tabularnewline
0.25 & $1.045\times10^{4}$ & $7.671\times10^{7}$ & $3.640\times10^{3}$ & $3.321\times10^{4}$\tabularnewline
\end{tabular}
\caption{Statistics of the empirical distribution of the download time, when the
seed leaves and joins the swarm.}
\label{tab:on-off}
\end{table}
Note that, in fact, the proposed analytical model
can cope with this scenario as long as peers do not accumulate pieces
interesting to each other, i.e. $c_l \geq c_s \times \frac{N-1}{N}$ (see
Section~\ref{sec:application}). In particular, if the seed departs, then this
will only affect the upload rate among peers, given by
Equation~(\ref{eq:bandwidth-requirements}) (or
Equation~(\ref{eq:bandwidth-requirements2})).
More precisely, the term corresponding to the download rate from the seed ($c_s/N$) should be
set to zero in the equations that describe time periods where the seed is not
present. Thus, given the state of the swarm with respect to the seed's presence and
number of leechers, we can still apply our modeling framework and determine
the download rates of leechers (under the condition above).
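To make the adaptation explicit, the fragment below (a sketch meant to be read together with the allocation procedure of Section~\ref{sec:model}) rewrites the potential rate $g_{ij}$ of Equation~(\ref{eq:bandwidth-requirements2}) with a flag for the seed's presence:
\begin{verbatim}
import math

def g(i, j, b, u, c_s, N, seed_on):
    # potential upload rate from i to j; the seed term c_s/N is dropped
    # during time periods in which the seed is disconnected
    if b[i] > b[j]:
        return math.inf
    seed_rate = c_s / N if seed_on else 0.0
    return seed_rate + sum(u[k][i] for k in range(N) if b[k] > b[j])
\end{verbatim}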
\subsection{When leechers stay as seeds for some time}
Another aspect that can be taken into account is that leechers may stay as seeds
for a period of time after they finish the download and before leaving the
swarm. Intuitively, since the capacity available to disseminate the file increases as
leechers stay as seeds, peers concurrently downloading a file tend to receive
pieces at similar download rates, possibly reducing the consequences of
heterogeneous download rates.
We performed
simulations for scenarios where $\lambda = 1/1000$ s, $c_s = c_l = 64$ kBps, $S
= 1000$ pieces, and the time during which leechers stay seeding is deterministic and
equal to $1/\gamma$. Each scenario was simulated 10 times during 400,000~s,
but the first 100,000~s were discarded to avoid transient effects.
Figures~\ref{fig:seeding}(a)-(c) depict the results for many values
of $1/\gamma$.
As indicated in Figure~\ref{fig:bursty_departures-seeding}, bursty departures
are less likely to occur when leechers stay in the swarm after downloading the
entire content. However, for small values of $1/\gamma$ (with respect to
$1/\lambda$), the difference is barely noticeable and the departure
process is still very bursty.
Figure~\ref{fig:download_times-seeding} shows the
CCDF of leechers' download times for different values of $1/\gamma$ while
Table~\ref{tab:seeding} contains statistics of these distributions, namely the
sample mean and variance, minimum and maximum values. Intuitively, leechers
that find two or more seeds at arrival time have significantly better performance
and hence the minimum download time decreases as the seeding time increases. On
the other hand, the maximum download time is approximately the same for the majority of the
curves. This is because there is a non-zero probability that a leecher downloads
the content entirely from a single seed. However, this probability becomes
smaller with the seeding time. Initially the variance increases with $1/\gamma$, but
when leechers stay as seeds for a long period of time,
it is unlikely that a leecher will have a download time much larger than the
average and thus, the sample variance diminishes (see $1/\gamma = $ 5,000 and
$1/\gamma = $ 10,000 in Table \ref{tab:seeding}).
Finally,
Figure~\ref{fig:relative_order-seeding} shows that while early arrivals are
penalized for small values of $1/\gamma$, they benefit when
$1/\gamma$ is high. The presence of multiple seeds has the same effect as a
single seed with higher capacity. This can be observed by comparing
the curves for $1/\gamma$ equal to 5,000 and 10,000 and the
curve corresponding to $c_s = 96$ kBps in
Figure~\ref{fig:xx-64_1000_1000_relative-order}.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.1pt}
\subfloat[Bursty departures characterization.]{\label{fig:bursty_departures-seeding}
\includegraphics[width=2.2in]{bursty_departures-seeding}}
\subfloat[Empirical CCDF of the download time.]{\label{fig:download_times-seeding}
\includegraphics[width=2.2in]{download_times-seeding}}
\subfloat[Impact of arrival order.]{\label{fig:relative_order-seeding}
\includegraphics[width=2.2in]{relative_order-seeding}}
\caption{\label{fig:seeding} System performance when leechers stay seeding.}
\end{center}
\end{figure}
\begin{table}[!t]
\centering
\caption{Statistics of the empirical distribution of the download time, when
leechers stay as seeds for some time.}
\label{tab:seeding}
\begin{tabular}{c|c|c|c|c}
$1/\gamma$ & Mean & Variance & Minimum & Maximum\\
\hline
0 & $3.360 \times 10^3$ & $9.319 \times 10^4$ & $2.295 \times 10^3$ & $4.247 \times 10^3$\\
100 & $3.262 \times 10^3$ & $1.093 \times 10^5$ & $2.120 \times 10^3$ & $4.181 \times 10^3$\\
500 & $2.926 \times 10^3$ & $2.446 \times 10^5$ & $1.174 \times 10^3$ & $4.183 \times 10^3$\\
1000 & $2.612 \times 10^3$ & $4.394 \times 10^5$ & $6.028 \times 10^2$ & $4.181 \times 10^3$\\
2000 & $2.067 \times 10^3$ & $6.278 \times 10^5$ & $3.197 \times 10^2$ & $4.170 \times 10^3$\\
5000 & $1.190 \times 10^3$ & $4.306 \times 10^5$ & $2.600 \times 10^2$ & $4.209 \times 10^3$\\
10000 & $5.508 \times 10^2$ & $7.614 \times 10^4$ & $2.100 \times 10^2$ & $2.028 \times 10^3$\\
\end{tabular}
\end{table}
\subsection{Missing piece syndrome}
Last, we now comment on the relationship of our findings and the phenomenon
known as {\it missing piece syndrome}. This phenomenon states that in swarms
where the arrival rate is large enough, the
system can become unstable (i.e., the number of leechers grows unboundedly) if the upload
capacity of the seed is not large enough \cite{mathieu2,hajek_zhu_2010,hajek2}.
The key aspect of this syndrome is content
synchronization, where a large fraction of peers have all but one and the same
piece. This situation is particularly bad to the performance of the swarm, as the
departure rate of the swarm will be equal to the seed upload capacity
(assuming peers depart as soon as they acquire the last block). Our work has
shown that peers can synchronize their content in such a way that several
identical pieces are missing, which eventually leads to the missing piece
syndrome. In
some sense, this generalizes the syndrome to a {\it piece synchronization
syndrome}, which is inherent to BT dynamics due to the heterogeneous download
rates as discussed in this work. Once peers have synchronized their content, they can only acquire new pieces
from the seed, at the upload capacity of the seed. In this scenario,
the {\it missing piece syndrome} is bound to occur.
\section{Introduction}
Peer-to-peer (P2P) applications have been widely used for content retrieval on the
Internet. Among them, BitTorrent (BT)~\cite{bt} is one of the most popular, used
by millions daily to retrieve millions of files (movies, TV series, music, etc.),
accounting for large fractions of today's Internet traffic~\cite{urlinternet}.
The mainstream success of BT is closely related to its performance (e.g., fast
download times) and, together with its high complexity, has triggered the
interest of researchers.
Understanding and characterizing the performance of BT through mathematical models has
been an active topic of research~\cite{xia_muppala_2010}.
Several studies have uncovered peculiar aspects of BT's dynamics, many of which have
direct impact on system performance.
Moreover, models that capture user and system performance under
homogeneous and heterogeneous peer population (with respect to their upload capacities)
have been proposed for various scenarios \cite{yang_veciana_2004,qiu_srikant_2004,liao_papadopoulos_psounis_2007,chow_golubchik_misra_2009}.
However, most proposed models target large-scale systems, either with a large and
fixed initial peer population or relatively high peer arrival rates.
We consider a BT swarm where all peers have identical upload capacities but unconstrained
(or large) download capacities. In this context, we identify and characterize a phenomenon
that has not been previously observed: homogeneous peers experience heterogeneous
download rates. Although this is expected in swarms where peers have
different capacities, in homogeneous swarms, peers should, at first,
exhibit similar
average performance. Thus, we focus on the latter type of swarm, for which the
described behavior has not been captured by any prior model
(to the best of our knowledge). Moreover, this observation has several important
implications, such as high variability of download times, unfairness with respect to
peer arrival order, bursty departures and content synchronization among the peers.
Two peers are said to be content-synchronized once their contents become identical at
a given instant. This last consequence is particularly critical since it is closely related to
the missing piece syndrome \cite{mathieu2,hajek_zhu_2010,hajek2}, a scenario where a very large
number of peers have all except a single missing piece.
We characterize the fact that homogeneous peers experience heterogeneous download rates and
its various consequences by using detailed packet-level simulations and prototype-based
experiments on the Internet. To pinpoint the critical parameters for this behavior, we consider
various scenarios. We show that peer arrival times strongly influence their
average download rate. We also develop a mathematical model that explains the phenomenon and predicts
the heterogeneous download rates of the homogeneous peers as a function of their content.
The comparison of model predictions with simulation results indicates that the model is quite
accurate. More importantly, the model sheds light on the key insight for this behavior:
the upload capacity allocation of peers in BT depends fundamentally on the piece interest relationship,
which for unpopular swarms can be rather asymmetric. We also apply the model to calculate lower and upper bounds on
the number of departures that occur in a burst.
\noindent{\bf Remark: The case for unpopular swarms with seeds}
The phenomenon we identify is more prevalent in swarms that have a very small
peer population and a single seed (peer with entire content) with limited bandwidth.
However, this is by far the most prevalent kind of swarm in BT, as observed by different and
independent measurement studies. In particular, it has been shown that inter-arrival times of peers into
swarms increase exponentially with the age of the swarm \cite{guo,Kaune2010}. Thus, some time after
it has been created, swarms receive few peers and therefore have a very small size. A detailed
measurement study of swarm sizes in BT considering various repositories
and various media types has also recently appeared in the literature \cite{Hossfeld2011}. Their
results indicate that 70\% of active swarms from different repositories have less than 10
peers (Figure 2 in \cite{Hossfeld2011}). When considering swarms that do not change size over a
relative short time, 97\% of them have less than 5 peers (Figure 3 in \cite{Hossfeld2011}).
We have also conducted measurements on Torlock.com, one of the most popular torrent search engines
available on the Internet. In particular, we collected information concerning {\it swarm health}
(number of peers, number of seeds, etc) on all available swarms in the website (around 150,000)
once a day for ten consecutive days in November 2011. Each swarm has a size which is given by the number
of peers connected to the swarm (seeders plus leechers) at the time data was collected.
Figure \ref{fig:size-distrib-1} shows the empirical complementary cumulative distribution of swarm
sizes for all ten days, considering only swarms that have at least one seed (around 130,000 swarms).
Interestingly, the swarm size distribution is heavy-tailed, with some swarms having a size
1000 times larger than the average. Moreover, most swarms are very small: about 58\% of the swarms have
less than 5 peers and about 73\% have less than 10 peers. Finally, this observation is persistent and
consistent over the ten measurement days, indicating that small swarms are very prevalent in BT.
Intuitively, swarms without any seeds are not likely to exist in BT since the content
may not be fully available in them. Figure \ref{fig:crawling_frac_k} shows the fraction of swarms
of size $K$ with at least one seed. As expected, the fraction of swarms with at least one seed is very
large, more than 90\% for all swarm sizes greater than 2. Moreover, as the size of the swarm increases,
this fraction also increases. Again, we observe that this is consistent over the ten measurement days,
indicating that swarms with at least one seed are very frequent, even when considering unpopular swarms,
with sizes less than 5.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{}
\subfloat[Empirical complementary cumulative distribution of swarm sizes with at least one seed for different days (each curve corresponds to a day).]{\label{fig:size-distrib-1}
\includegraphics[width=2.6in]{crawling_ccdf}}
\hspace{1cm}
\subfloat[Fraction of swarms of size $K$ with at least one seed in Torlock.com for different days and swarm sizes (each bar corresponds to a day).]{\label{fig:crawling_frac_k}
\includegraphics[width=3.5in]{crawling_frac_k}}
\caption{\label{fig:size-distrib} Distribution of swarm sizes and fraction of swarms with at least one seed in Torlock.com for ten consecutive measurement days.}
\end{center}
\end{figure}
Finally, as supported by experimental evidence, unpopular swarms (swarms of very small sizes, e.g., five or less
peers) with at least one seed are very common in the real world. Thus, they are the focus of this paper,
although we will present and discuss some generalizations.
The rest of this paper is organized as follows. In~\textsection\ref{sec:bt} we present
a brief overview of BT and motivate the phenomenon we have identified.
In~\textsection\ref{sec:problem} we characterize the phenomenon and its consequences using
simulations and experiments with a real BT application. \textsection\ref{sec:model} presents
our mathematical model, its validation in comparison with simulations, and some model
generalizations. In \textsection\ref{sec:application} we apply
the model to make predictions about bursty departures. We include a discussion
and possible model
extensions as well as present some related work in \textsection\ref{sec:disc} and \textsection\ref{sec:related},
respectively. Finally, we conclude the paper in~\textsection\ref{sec:conc}.
\section{Model}
\label{sec:model}
We develop a simple model aiming to understand the origin of the heterogeneous
download times and its consequences. Our model obtains an approximation to the
average upload and download rates observed by each leecher over different time
intervals for unpopular swarms.
Consider a homogeneous swarm of some unpopular content with a single seed to which
leechers arrive sequentially and depart as soon as they complete their download,
such as the one illustrated in Figure~\ref{fig:simul1}. By unpopular content we
mean a swarm whose arrival rate is small enough that there are never
too many peers in the swarm. In particular, our modeling framework assumes that
the maximum number of upload connections of peers is always larger than (or
equal to) the swarm
size. In such a scenario, Tit-for-Tat (TFT) and optimistic unchoke algorithms have
no effect, since all peers upload to one another. Thus, such mechanisms are not
present in our model. However, note that the rarest-first mechanism continues to operate
since it is not affected by this assumption and is therefore captured by our model.
In the described scenario, bursty departures can only happen if younger leechers
obtain roughly the same number of pieces as older ones, and leave the swarm
at about the same instant. This in turn implies that younger leechers
must have higher download rates than older ones, at least for some
periods of time. Why is that? At a given moment, an older leecher $i$ may
have all pieces owned by a younger leecher $j$. Thus, leecher $j$'s uplink
capacity will be used to serve other leechers until $j$ receives a piece
that $i$ does not have. During this period of time, $j$ simply cannot
serve $i$, even if it has no other leecher to serve. Therefore, the
sets of pieces owned by each leecher are the root cause of the
heterogeneous download rates and must be considered.
In order to capture the observation above, each peer, either a seed or a
leecher, is represented by a queueing system with multiple queues (see
Figure~\ref{fig:multiple-queue}), one for each neighbor, under a
processor sharing discipline. Queue $j$ of peer $i$ contains the pieces
interesting to peer $j$ (i.e., all pieces that $i$ has that $j$ has not). When
peer $j$ downloads one of these pieces, from $i$ for instance, this piece is
removed from the $j$-th queue of $i$, and from the $j$-th queues of other peers where the
piece was present. On the other hand, whenever a peer downloads a piece that
other neighbors are interested in, this piece is placed in the queues
corresponding to those neighbors, increasing their queue sizes. Finally, the
queues of the seed always have all pieces that are needed by the leechers. As a
leecher downloads pieces from the seed and other leechers, this queue decreases,
eventually becoming empty when the leecher downloads the entire content
and departs the swarm. We note that the order in which these pieces are
served from these queues depends on the piece selection policy.
\begin{figure}[h]
\centering
\includegraphics[width=2in]{multiple-queue}
\caption{Leecher $i$ can be represented as a server with multiple queues, one for each
neighbor, containing pieces that are interesting to them.}
\label{fig:multiple-queue}
\end{figure}
Let $c_s$ and $c_l$ be the seed and leechers' uplink capacities, respectively.
Assume that the leechers' downlink capacities are much larger than $c_s$ and
$c_l$. Let $N(t)$ be the number of leechers in the system at time $t$. Since
the seed always has interesting pieces to every leecher, all the $N(t)$ queues
in the seed are backlogged. Thus, all queues will be served at rate $c_s/N(t)$.
Note that, since the swarm is unpopular, we assume the swarm size is small
enough such that every leecher is a neighbor of every other peer (including the
seed) and can serve all of them simultaneously.
A leecher may not have interesting pieces to some of its neighbors at time $t$.
Let a leecher be identified by its arrival order, thus leecher $i$ is the $i$-th
leecher to join the swarm. Also let $n_i(t) \leq N(t)-1$ be the number of
leechers interested in pieces owned by $i$. The instantaneous upload rate
from $i$ to any of these leechers is $c_l/n_i(t)$.
Whether a leecher has or has not pieces interesting to another
depends on the leechers' respective {\it bitmaps}, i.e. the current
subsets of pieces owned by a leecher at time $t$. The set of bitmaps of all leechers
would precisely determine the exact pieces in each queue. However,
the dynamics of the bitmaps are intricate and to keep track of them would
be unnecessarily complicated for modeling the phenomenon we are interested in.
Instead, we consider the number of pieces owned by each leecher $i$, $b_i(t)$,
and infer whether a leecher has interesting pieces to other leechers.
For the sake of simplicity, let $b_i(t) = b_i$ and $N(t) = N$.
Two remarks can be made with respect to $b_i$ and the interest relationship among
leechers:
\begin{remark}[]
\label{more_pieces}
If $b_i > b_j$, then $i$ has at least $b_i - b_j$ interesting pieces to $j$.
\end{remark}
\begin{remark}[]
\label{less_pieces}
If $0 < b_i \leq b_j$, it is impossible to determine whether $i$ has or
has not interesting pieces to $j$ without further information.
\end{remark}
In the following, we will use these two remarks to derive a simple model to
capture the upload and download rates between peers. With respect to
Remark \ref{less_pieces}, we will assume that no further information is available,
and hence the piece interest relationship among peers will be ignored in this
case. Nevertheless, we will see that a peer with less pieces than other can
still upload pieces to the latter.
\subsection{A simple fluid model}
We assume that content is fluid, or equivalently, that pieces can be
subdivided into infinitely many parts that can be exchanged (uploaded
and downloaded) continuously.
To simplify the explanation, assume that $b_1 > b_2 >
\dots > b_N$, i.e. an older leecher has strictly more pieces than a younger
one. We will relax this assumption later in this section, allowing the model to
represent swarms where two peers arrived at the same time, or more
generally, where some leechers have the same number of pieces.
We now make the following assumptions:
\begin{itemize}
\item Even if leecher $i$ has joined the swarm after $j$, i.e.
$i > j$, $i$ can still upload pieces to $j$ as long as $i$ downloads
pieces from any peer $k$ that has more pieces than $j$, i.e. $k < j$. Thus,
younger peers can upload to older peers.
\item Every piece downloaded from the seed by a leecher is
immediately interesting to all other leechers, independently of their
arrival time. The rarest-first piece selection policy provides support
for this assumption.
Figure~\ref{fig:example-interesting-pieces} depicts the idea that a younger peer
can upload pieces to an older one. In this scenario, peer 4 can upload to peer 2,
since it is downloading pieces interesting to peer 2 from the seed and from peer 1.
\begin{figure}[!t]
\centering
\includegraphics[width=2.0in]{example-interesting-pieces}
\caption{Peer 4 can upload pieces to peer 2, since it is downloading pieces
interesting to the latter from the seed and from peer 1.}
\label{fig:example-interesting-pieces}
\end{figure}
\item Since the seed's upload capacity is $c_s$, each leecher downloads from
it at rate $c_s/N$. Now let $g_{ij}$ be the rate at which peer $i$
could potentially upload data to peer $j$ provided that there are no capacity
constraints (i.e. independently of upload and download capacities of
peers $i$ and $j$, respectively).
If a leecher $i$ is older than $j$, $i$ has interesting pieces to $j$.
Therefore, from the perspective of the multiple queueing system, queue $j$
in leecher $i$ is backlogged and $g_{ij} = \infty$.
On the other hand, if $i$ is younger than $j$, the rate
$g_{ij}$ is given by the rate at which $i$ downloads interesting
pieces to $j$.
\end{itemize}
We draw the reader's attention to the first two assumptions. They account for
the upload rate that a younger leecher can sustain to an older leecher, even
though we cannot say that the former has interesting pieces to the latter just
from the number of pieces they own.
From these assumptions, we can conclude that the rate $g_{ij}$ at
which a peer $i$ uploads interesting pieces to an older peer $j$
is equal to the rate at which peers older than $j$ upload to
$i$ plus the rate at which $i$ downloads from the
seed. We thus have:
\begin{subnumcases}{\label{eq:bandwidth-requirements} g_{ij}=}
\infty, & if $i < j$\\
\frac{c_s}{N} + \sum_{k<j}u_{ki}, & if $i > j$ \label{eq:bandwidth-requirement}
\end{subnumcases}
where $u_{ki}$ is the rate at which leecher $k$ uploads to $i$.
For instance, in Figure~\ref{fig:example-interesting-pieces}, $g_{4,2}$
is the sum of the rates at which peer 4 downloads from the seed ($c_s/5$) and
from peer 1 ($u_{1,4}$). Hence, $g_{4,2} = c_s/5 + u_{1,4}$. Again we see that
the proposed model accounts for the fact that a younger leecher can upload
pieces to the older ones. In a real swarm however, peer 4 may upload to peer 2
pieces downloaded from younger leechers as well, such as peer 5. Although the
pieces that peer 5 downloads from the seed are immediately interesting to both
peers 2 and 4, they will not start and finish downloading this piece from peer 5
at the same time. Thus, leecher 4 may finish the download of such a piece first and
then help serve the remaining sub-pieces to peer 2, violating our assumption. Intuitively,
however, the contribution of peer 4 in uploading this piece to peer 2 is small,
since peer 4 must fully finish the download before it can start uploading, by which
time peer 2 will have downloaded most of the piece from peer 5.
Thus, we claim that such effects are negligible and can be ignored since the model
is accurate when compared to simulations and experiments, as discussed in
Section \ref{sec:validation}.
We now make an important observation concerning Equation~(\ref{eq:bandwidth-requirement}).
Consider leecher $i$ and some other leecher $j$. The older $j$ is with respect to
$i$, the smaller the rate at which $i$ can upload to $j$, that is, the smaller
$g_{ij}$. If $j$ is younger than $i$, then $g_{ij} = \infty$. This observation implies
that $g_{i1} \leq g_{i2} \leq \dots \leq g_{iN}$.
\begin{figure}[!t]
\centering
\includegraphics[width=6.0in]{water-filling1}
\caption{The upload bandwidth allocation of leecher $i$ follows a progressive
filling algorithm.}
\label{fig:water-filling}
\end{figure}
In addition, note that $g_{ij}>0$ for all $i,j$. As we consider a
small swarm, all peers upload to one another.
Since the upload capacity of peers is finite, we must now determine
how the capacity of a given peer $i$ will be divided to serve each of the $N-1$
other leechers.
In particular, recall that $u_{ij}$ is the upload rate from peer $i$ to peer $j$ and note
that $\sum_k u_{ik} \leq c_l$, where $c_l$ is the upload capacity of a leecher. To determine
$u_{ij}$ given the values of $g_{ij}$, where $1 \leq j \leq N$,
we use a bandwidth allocation mechanism that
follows a progressive filling algorithm. This mechanism determines the outcome of the processor
sharing discipline. Figure \ref{fig:water-filling} illustrates the progressive
filling algorithm for the example presented in
Figure~\ref{fig:example-interesting-pieces}.
Roughly, infinitesimal amounts of bandwidth are allocated to each neighbor until
(1) the leecher's capacity is completely allocated or (2) a leecher $j$ is
satisfied with respect to the $g_{ij}$ constraint. In the former case, the
algorithm stops. In the latter, it continues to distribute the remaining
capacity among the non-satisfied leechers until one of the two conditions occurs
again.
Due to the fact that $g_{i1} \leq g_{i2} \leq \dots \leq g_{iN}$,
the final bandwidth allocation for leecher $i$ can be efficiently
obtained by computing the following equation in the order $j=1,\dots,N$:
\begin{equation}
\label{eq:model_equations}
u_{ij} = \min\Bigg( g_{ij},
\frac{ c_l - \sum_{k<j}u_{ik}}
{ N-1-|\{k|k<j,k\neq i\}| }\Bigg)
\end{equation}
where $|A|$ is the cardinality of a set $A$.
Now recall from Equation~(\ref{eq:bandwidth-requirement}) that $g_{ij}$
depends on $u_{1,i},u_{2,i},\dots,u_{j-1,i}$, for $i > j$. Therefore, by calculating
$u_{ij}$ in the order $i=1,\dots,N$, we assure that every variable in
Equation~(\ref{eq:model_equations}) has been previously computed.
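For concreteness, the following Python sketch (reusing \texttt{g\_rate} from above; again, the naming is ours) computes the matrix $\mathbf{U}$ by progressive filling in exactly this order, together with the resulting per-leecher download rates:
\begin{verbatim}
def allocate(N, c_s, c_l):
    """Progressive filling: returns U = (u_ij) and download rates d_i."""
    U = [[0.0] * (N + 1) for _ in range(N + 1)]  # 1-indexed; U[i][i] = 0
    for i in range(1, N + 1):            # rows in the order i = 1, ..., N
        allocated = 0.0                  # sum of u_ik for k < j so far
        for j in range(1, N + 1):        # columns in the order j = 1, ..., N
            if j == i:
                continue
            sharing = N - 1 - len([k for k in range(1, j) if k != i])
            U[i][j] = min(g_rate(i, j, U, c_s, N),
                          (c_l - allocated) / sharing)
            allocated += U[i][j]
    d = [c_s / N + sum(U[j][i] for j in range(1, N + 1))
         for i in range(1, N + 1)]
    return U, d
\end{verbatim}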
\begin{figure}[!t]
\centering
\includegraphics[width=1.7in]{matrix}
\caption{Example of matrix $\mathbf{U} = (u_{ij})$ showing the right order of
calculation.}
\label{fig:matrix}
\end{figure}
As an example, consider the calculation of the matrix $\mathbf{U} = (u_{ij})$, which
determines upload rates between peers at a given moment, for a small swarm containing
a single seed and $N = 3$ leechers. Let the upload capacities of the seed and the
leechers be $c_s=60$ kBps and $c_l=96$ kBps, respectively, and assume $b_1 > b_2 > b_3$.
Matrix $\mathbf{U}$ and the order of computation of its elements are
depicted in Figure~\ref{fig:matrix}. The download rate $d_i$ for peer $i$ is
simply $c_s/N$ plus the sum of the elements in column $i$:
\begin{equation}
\label{eq:download}
d_i = \frac{c_s}{N} + \sum_{j=1}^{N} u_{ji}.
\end{equation}
Hence,
\begin{eqnarray}
d_1 & = & 60/3 + 0 + 20 + 20 = 60 \label{d1} \\
d_2 & = & 60/3 + 48 + 0 + 68 = 136 \label{d2} \\
d_3 & = & 60/3 + 48 + 76 + 0 = 144 \label{d3}
\end{eqnarray}
Equations~(\ref{d1})-(\ref{d3}) corroborate the idea that homogeneous
peers can exhibit heterogeneous download rates which depend on the number of
pieces owned by each leecher. Moreover, younger leechers tend to
have a higher download rate, as they obtain a higher upload rate from other
leechers. This is the opposite of what happens in large swarms, where older
leechers usually manage to sustain TFT reciprocation for longer periods, hence achieving higher
download rates.
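The worked example above can be reproduced with the sketch from the previous paragraphs (a hypothetical usage; rates in kBps):
\begin{verbatim}
U, d = allocate(N=3, c_s=60.0, c_l=96.0)
print(d)  # [60.0, 136.0, 144.0], matching d_1, d_2 and d_3 above
\end{verbatim}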
Eventually the number of pieces owned by a leecher may reach the number of
pieces owned by an older one. In particular, this is bound to occur since
younger leechers tend to have a higher download rate. In this case, these
two leechers will no longer have pieces interesting to each other. Thus,
Equations (\ref{eq:bandwidth-requirements}) and (\ref{eq:model_equations})
must be rewritten as functions of $b_i,\forall i$:
\begin{subnumcases}{\label{eq:bandwidth-requirements2} g_{ij}=}
\infty, & if $b_i > b_j$\\
\frac{c_s}{N} + \sum_{b_k>b_j}u_{ki}, & if $b_i \leq b_j$
\label{eq:bandwidth-requirement2}
\end{subnumcases}
\begin{equation}
\label{eq:model_equations2}
u_{ij} = \min\Bigg( g_{ij},
\frac{ c_l - \sum_{k|b_k>b_j}u_{ik}}
{ N-1-|\{k|b_k>b_j,k\neq i\}| }\Bigg).
\end{equation}
Intuitively, Equation~(\ref{eq:model_equations2}) combines the two constraints
on the rate at which $i$ uploads pieces to $j$. The first term stands for the
maximum instantaneous rate irrespective of capacity limitations. The
second term reflects the fraction of $i$'s uplink capacity that can be
dedicated to $j$ given that some bandwidth has already been allocated. In this
case, $c_l - \sum_{k|b_k>b_j}u_{ik}$ is the remaining capacity of $i$
and $N-1-|\{k|b_k>b_j,k\neq i\}|$ is the number of peers that will share
it (including $j$). Note that the equations above relax our initial assumption
that $b_i,\forall i$ had to be distinct at all times, allowing leechers to join
the system simultaneously or, more generally, to have the same number
of pieces.
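The generalized equations admit the same progressive-filling computation, now driven by the piece counts $b_i$; a sketch under the same assumptions as before (names are ours, and ties in $b$ are handled by the strict inequalities):
\begin{verbatim}
def g_rate_b(i, j, U, b, c_s, N):
    """Generalized g_ij expressed via the piece counts b[1..N]."""
    if b[i] > b[j]:
        return INF
    return c_s / N + sum(U[k][i] for k in range(1, N + 1) if b[k] > b[j])

def allocate_b(N, c_s, c_l, b):
    """Generalized progressive filling driven by piece counts."""
    U = [[0.0] * (N + 1) for _ in range(N + 1)]
    order = sorted(range(1, N + 1), key=lambda p: -b[p])  # most pieces first
    for i in order:
        for j in (p for p in order if p != i):
            remaining = c_l - sum(U[i][k] for k in range(1, N + 1)
                                  if b[k] > b[j])
            sharing = N - 1 - len([k for k in range(1, N + 1)
                                   if b[k] > b[j] and k != i])
            U[i][j] = min(g_rate_b(i, j, U, b, c_s, N), remaining / sharing)
    return U
\end{verbatim}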
In Equations (\ref{eq:bandwidth-requirements2}) and (\ref{eq:model_equations2}),
the variables $N$, $b_i$ and $b_j$ change over time, representing arrivals, departures and
the acquisition of new pieces.
Instead of writing those variables as functions of time, we drop the $t$ variable
for the sake of simplicity.
The equations can thus be computed for each time interval by assigning to
these variables their corresponding values at that time. Note, however, that
a change in $b_i$ does not necessarily imply a change in the download rate of
leechers, as what matters is the relationship between $b_i$ and $b_j$, for all $i,j$.
Thus, as the system evolves, the variables that govern the equations will change value,
but not the equations themselves, which can be used to compute the current download
rate of leechers given the state of the swarm.
We will see in Section~\ref{sec:validation} that the
proposed model given by Equations (\ref{eq:bandwidth-requirements2}) and (\ref{eq:model_equations2})
yields accurate results for unpopular swarms, indicating that
for this scenario it is sufficient to know the number of pieces each peer
possesses. Nevertheless, we further discuss two useful generalizations to this
model in Section~\ref{sec:general}.
\subsection{Model Validation}
\label{sec:validation}
Our model gives an approximation to the average download rate experienced by
a leecher in an unpopular swarm, which depends on the relationship between the number
of pieces owned by the peers and their upload capacities. In this section, we validate the model
by comparing its predictions with simulation results. We will see that even though
the model does not take the TFT and other mechanisms into account, its results are
very similar to those obtained from our simulator, which implements a fully functional
version of the BT protocol (see the simulator description in Section \ref{subsec:strange}).
We consider homogeneous swarms with $c_s = c_l = 64$ kBps, where exactly $N = 5$
arrivals occur. In addition, all leechers arrive
before the first one completes the download and all the arrivals
occur before any two leechers synchronize their contents. In our simulations, we
say that two leechers $i$ and $j$ are synchronized if they have roughly the same
number of pieces, i.e., $|b_i-b_j| < 3$. We use deterministic arrivals to
reproduce the exact scenarios we intend to compare.
\begin{figure*}[h]
\centerline{\subfloat[Evolution of downloaded
pieces.]{\includegraphics[width=2.7in]{validation_simul}
\label{fig:validation_simul}}
\subfloat[Comparison between simulation and model
results.]{\includegraphics[width=2.7in]{validation}
\label{fig:validation}}}
\caption{Model validation and comparison with simulation.}
\label{fig:comparison}
\end{figure*}
Consider the evolution of number of downloaded pieces in such a swarm
illustrated in Figure~\ref{fig:validation_simul}. The first leecher arrives at
time $t = 0$ and four other leechers join the swarm at $t = 30,40,50,60$. After $t =
120$, leechers start to synchronize. We chose several points from the curves in
this figure corresponding to instants of time at which an event that can change
peers' download rates occurs. More precisely, we labeled points in these curves
with numbers when new leechers arrive or when two leechers synchronize.
Figure \ref{fig:validation} shows peers' download rates from simulation and from the
model for the labeled points indicated in Figure~\ref{fig:validation_simul}. We have simulated
five runs for each scenario including the one depicted in this figure. The confidence intervals
obtained are relatively small and are omitted.
The simulation results for points 1, 2, 4, 7, 11, 16, 20, 23 and 25 show
approximately the same download rate, which is correctly captured by the model.
The download rates obtained from the model are exactly the same due to the fact that
the corresponding peers already have every piece that was previously pushed by
the seed into the swarm. Thus, the neighbors of such a peer can only upload to it
new pieces they receive directly from the seed, i.e., their upload rate is
constrained by $c_s/N$. Since this constraint is below the capacity that can be
allocated to serve a neighbor when $c_s = c_l$ (which is at least $c_s/(N-1)$),
every peer in the swarm will upload to one such peer at a rate equal
to $c_s/N$. Therefore, the average download rate predicted by the
model for such peers (subset $A$, defined below) is $c_s/N + (N-1)c_s/N = c_s$, i.e.,
0.25 pieces per second for 256~kB pieces. In particular,
the relative error is less than 1.5\% for all these points. The model is quite
accurate even for other values of $N$.
On the other hand, simulation results for the other points exhibit a great
variety of download rates. However, points which correspond to the same
moment in time display similar download rates (e.g., 8, 9 and 10). We
observe that the download rates decrease with new arrivals. We also note
that as more leechers
become synchronized, non-synchronized leechers achieve higher download rates
(see points 21, 22 and 24). This increase in the download rates occurs
because the greater the number of synchronized peers, the greater the
remaining capacity to serve leechers with fewer pieces. This is due to the fact
that the rate at which synchronized leechers can transmit to each other is very
constrained, as discussed before. The relative error of the model is less than
1\% for all points except the 5th (7\%) and the 24th (3\%).
From these figures we conclude that when $c_s = c_l$, at a given moment in time,
it is possible to partition the set of
leechers into two subsets: leechers with the same number of pieces as the oldest
leecher (subset $A$), and those with fewer pieces than the oldest one (subset
$B$). When $c_s = c_l$, the model predicts that all leechers in each of
these subsets will have identical download rates. Moreover, a leecher in $B$
will have a higher download rate than one in $A$, and this difference depends on
the set sizes. In particular, larger swarms imply lower values of the minimum
download rate and higher values of the maximum download rate for
leechers in $B$. This tendency can be observed in both simulation and
model results.
Considering all the simulations performed, we conclude
that the model is quite accurate, with differences being unnoticeable in most
scenarios and less than 10\% in all cases. More importantly, the model captures
well the trends observed in simulation.
\subsection{Model Generalizations}
\label{sec:general}
Two of the assumptions of the model we propose are: (1) unconstrained (or
large) download capacities, and (2) leechers with identical upload
capacities. We now relax the former assumption by providing an upper bound for
the download rate of a peer. This bound grows slowly with
the system parameters; clearly, if the download capacities are greater than
this bound then all the previously presented results hold.
Regarding the latter assumption, we indicate how to adapt the model to
cope with similar (but not identical) upload capacities. Furthermore, we present
some simulation results that show that the general behavior of the system under
this scenario is similar to the one presented in Section~\ref{sec:problem}.
\subsubsection{Finite (and small) download capacities}
When the oldest peers are synchronized, they can only send to each other what
they receive directly from the seed (see Equation (\ref{eq:bandwidth-requirement2})).
This constraint leads to more capacity available to serve those peers that are not
synchronized. In particular, if there is only one non-synchronized peer, it can benefit from this
idle bandwidth alone and consequently achieve the highest possible download
rate. In what follows we compute an upper bound for this maximum download rate.
Consider an unpopular swarm with $N>1$ peers, such that the $N-1$ oldest peers are
synchronized. From Equation~(\ref{eq:bandwidth-requirements2}) we can compute the maximum instantaneous upload rate of
a synchronized leecher $i$ to the other peers irrespective of capacity limitations:
\begin{subnumcases}{g_{ij}=}
\infty, & if $j$ is not synchronized\\
\frac{c_s}{N}, & if $j$ is synchronized
\label{eq:bandwidth-requirement3}
\end{subnumcases}
According to Equation~(\ref{eq:model_equations2}), each leecher $i$ will upload to
each of the other $N-2$ synchronized peers at rate $\min\{\frac{c_l}{N-1}, \frac{c_s}{N}\}$.
The remaining capacity of $i$ that can be used to serve the younger leecher $N$ is
$c_l - (N-2) \min\{\frac{c_l}{N-1}, \frac{c_s}{N}\}$. Since there are $N-1$ synchronized
leechers, the total capacity that can be used to serve the only non-synchronized leecher
is $(N-1)[c_l - (N-2) \min\{\frac{c_l}{N-1}, \frac{c_s}{N}\}]$. In addition, the
younger leecher downloads from the seed at rate $\frac{c_s}{N}$. Therefore, the maximum
download rate is given by
\begin{eqnarray*}
d_{\max} &=& (N-1)c_l - (N-2) \min\{c_l, \frac{c_s(N-1)}{N}\} + \frac{c_s}{N} \label{eq:equality}
\end{eqnarray*}
Thus, we have:
\begin{subnumcases}{d_{\max}=}
c_l + \frac{c_s}{N} & if $c_l \leq c_s(N-1)/N$ \\
(c_l - c_s)(N-1) + 2c_s - \frac{c_s}{N} & otherwise
\label{eq:dmax}
\end{subnumcases}
Note that in both cases the maximum download rate grows slowly in the
system parameters. In particular, for small $N$, which is the case of interest, $d_{\max}$
is relatively small with respect to the upload capacities of the leechers and the seed. Thus, if
the download capacities of leechers are larger than $d_{\max}$, then the results predicted by the model
still hold. This condition replaces the requirement of unbounded (or arbitrarily large)
download capacities assumed earlier in the model.
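For reference, the bound can be computed directly (a small sketch; the function name is ours):
\begin{verbatim}
def d_max(N, c_s, c_l):
    """Upper bound on any leecher's download rate (both cases above)."""
    if c_l <= c_s * (N - 1) / N:
        return c_l + c_s / N
    return (c_l - c_s) * (N - 1) + 2 * c_s - c_s / N
\end{verbatim}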
To illustrate, consider the example in Section \ref{sec:validation} where $c_s = c_l = 64$ kBps
and $N=5$. Since $c_l > c_s(N-1)/N = 51.2$ kBps, the second case applies and the highest
download rate would be $d_{\max} = (64-64)(5-1) + 2 \times 64 - 64/5 = 115.2$ kBps.
Thus, if the download capacities of leechers are larger than 115.2 kBps, then the results
predicted by the model would be unchanged.
\subsubsection{Similar but not identical upload capacities}
Although we have assumed upload capacities of peers to be identical, this is
certainly not necessary for the piece distribution process in unpopular swarms
to lead to heterogeneous download rates. Note that our modeling framework allows
peers to have different upload capacities, as $c_l$ could depend on $i$ in
Equation~(\ref{eq:model_equations}) (equivalently, in Equation~(\ref{eq:model_equations2})).
Clearly, this would have an impact on the heterogeneity of the performance, which would depend on
the values of $c_{l_i}$ and the order of arrivals to the swarm.
However, if the $c_{l_i}$, $\forall i$, are close to one another, for example, chosen uniformly at
random from a small range, then we do not expect to see much difference with respect to
the constant-$c_l$ case.
In order to support this last claim, we repeat the simulations described in
Section~\ref{sec:poisson} but allowing the upload capacity of leecher $i$ to be drawn
uniformly at random from the range $[c(1-\epsilon),c(1+\epsilon)]$, where
$c = 64$ kBps and $\epsilon \in \{0.25,0.50\}$.
Figures~\ref{fig:varying-cl}(a-b) show the average download time of peers
binned according to the number of leechers in the swarm at the arrival time for
$\epsilon = 0.25$ and $0.50$, respectively.
We conclude that, when the upload capacities are close to each other,
the system exhibits a very similar behavior to that we observed when the upload
capacities are the same (see Figure~\ref{fig:xx-64_1000_1000_relative-order}).
Not surprisingly, the larger the range of upload capacities, the greater
the impact on the results, when compared to a constant upload capacity.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.5pt}
\subfloat[$\epsilon=0.25$.]{\label{fig:clvar0.25}
\includegraphics[width=2.8in]{xx-64_1000_1000_clvar-025}}
\subfloat[$\epsilon=0.50$.]{\label{fig:clvar0.50}
\includegraphics[width=2.8in]{xx-64_1000_1000_clvar-050}}
\caption{\label{fig:varying-cl} Average download time as a function of arrival
order in a busy period, when the uplink capacity is $c_{l_i} \sim
U(c_l(1-\epsilon),c_l(1+\epsilon))$, with $c_l = 64$ kBps.}
\end{center}
\end{figure}
\section{Heterogeneity in homogeneous BT swarms}
\label{sec:problem}
In order to understand the behavior exhibited by BT in Figures
\ref{fig:simul1} and \ref{fig:simul2}, we will analyze the total number
of pieces each leecher has downloaded over time.
Consider Figures~\ref{fig:simul1_downloaded} and \ref{fig:simul2_downloaded} where each
curve indicates
the total number of pieces downloaded by a given peer for the corresponding
scenario in Figures~\ref{fig:simul1} and \ref{fig:simul2}, respectively.
Note that the slope of each curve corresponds to the respective leecher's
download rate.
We start by considering Figure~\ref{fig:simul1_downloaded}.
Despite the slope of the first leecher being
smaller than that of the remaining peers, the curves never meet. In particular,
a leecher finishes the download (and leaves the swarm) before the next
leecher reaches the number of blocks it has. We also note that all other leechers
have very similar slopes. In addition, we observe a peculiar behavior: the slope of
the fifth leecher suddenly decreases when it becomes the only leecher in the
system.
The results illustrated in Figure~\ref{fig:simul2_downloaded}, which
correspond to the scenario considered in Figure~\ref{fig:simul2}, show a
very different behavior. Several
interesting observations can be drawn from this figure. The slope
of the first peer is practically constant, remaining unchanged by the arrival
of other peers. The slope of all other peers is larger than that of
the first peer, meaning the curves may eventually meet. When two curves meet, the
corresponding leechers have the same number of blocks and possibly the
same content (we will comment on this point in the following section).
The figure also shows that a younger peer does not overtake the first peer;
instead, the two maintain the same number of downloaded pieces, possibly
with their contents synchronized.
Finally, the slopes of the second, third and fourth peers are rather similar.
However, the slope
of the fifth peer is slightly larger than the others, meaning a higher download rate and
consequently a smaller download time.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.5pt}
\subfloat[Corresponding to Figure~\ref{fig:simul1}.]{\label{fig:simul1_downloaded}
\includegraphics[width=2.8in]{simul1_downloaded}}
\subfloat[Corresponding to Figure~\ref{fig:simul2}.]{\label{fig:simul2_downloaded}
\includegraphics[width=2.8in]{simul2_downloaded}}
\caption{\label{fig:pieces-downloaded} Evolution of the number of downloaded pieces.}
\end{center}
\end{figure}
In summary, we make the following general observations:
\begin{itemize}
\item
The first leecher downloads at an approximately constant rate.
\item
Subsequent leechers download at a faster rate than the first.
\item
Once a leecher reaches the total number of pieces downloaded by the first
leecher, their download rates are identical.
\item
The greater the number of leechers with the same number of pieces as the first
leecher, the higher the download rate of the other leechers.
\end{itemize}
All these observations are related to the dynamics of BT and will be
discussed and explained in Section~\ref{sec:model} using a simple mathematical
model. In the remainder of this section, we discuss the consequences of
the observed phenomenon and illustrate that it happens even when peer arrivals are
random (e.g., a Poisson process).
\subsection{Consequences of heterogeneity in homogeneous swarms}
\label{subsec:consequences}
Despite the homogeneous upload
capacity of peers, the observations above lead to the following consequences:
\begin{itemize}
\item
{\bf Variability in download times.} Since peers can experience
consistently different download rates, their download times can also differ.
\item
{\bf Unfairness with respect to peer arrival order.} Since peers' download rates,
and thus download times, may depend on their arrival order, the system is inherently
unfair, potentially benefiting latecomers in a swarm.
\item
{\bf Content synchronization.} Due to different download rates and BT's piece
selection mechanisms (most notably rarest-first), leechers can synchronize on
the number of pieces they have and, more strongly, on the content itself. This
means that peers may end up with exactly the same content at some instant,
despite arriving at different points in time.
\item
{\bf Bursty departures.} A direct consequence of content synchronization is bursty
departures. This means that peers tend to leave the swarm within a small
interval of time despite arriving at the swarm at relatively far apart instants.
\end{itemize}
Although the figures do not show content synchronization explicitly, the
first leecher downloads the file at the same rate at which the seed pushes new
pieces into the swarm (the seed upload capacity); hence, whenever a leecher reaches the
same number of pieces as the first, the two have exactly the same content.
Of course, the prevalence of the phenomenon and its consequences depends directly on
the parameters of the swarm. In particular, the arrival times of peers are certainly
the most determinant factor. However, parameters like the upload capacities of the seed
and leechers and the number of pieces are also fundamentally important. Intuitively, a file with a
larger number of pieces or a seed with a lower upload capacity increases the
probability that the consequences above occur. In fact, for any arrival
order of a small set of peers, one can always find system parameters for which
this behavior and its consequences occur.
\subsection{Heterogeneity under Poisson arrivals}
\label{sec:poisson}
The behavior above does not require deterministic
arrivals or any crafted leecher arrival pattern. It arises even
when arrival patterns are random. In this section we characterize the
consequences of the heterogeneous download rates
phenomenon under Poisson arrivals.
We conducted a large number of evaluations using detailed packet-level
simulations.
In particular, we consider a BT swarm where
a single seed is present at all times, while leechers arrive according to
a Poisson process and depart the swarm as soon as their download is completed.
In the evaluation that follows, all leechers have the same upload capacity of
64 kBps (and very large download capacities) and download a file with
1000 pieces. The upload capacity of the seed ($c_s$) varies between
48 kBps, 64 kBps, and 96 kBps, and the leecher arrival rate is
$\lambda = 1/1000$ peers/s.
These scenarios generate swarms with time-average sizes of 3.7, 3.4 and 3.0 leechers,
respectively.
We start by characterizing the variability in the download times and the
unfairness with respect to leecher arrival order. Figure \ref{fig:xx-64_1000_1000_relative-order}
illustrates the average download time for a peer as a function of the number
of leechers in the swarm at its arrival time. Thus, if a peer joins the swarm
when $i$ leechers are present, it is mapped to index $i$.
The different curves correspond to different upload capacities of the seed. The
results clearly indicate that the download time depends on leecher arrival
order. In particular, for the case $c_s = 64$ kBps, the average download
time tends to decrease with increasing arrival order, and so the first arrival
has the largest average download time. Moreover, the download time
differences are also significant, and can reach up to 30\% (e.g., difference
between first and fourth arrival).
\begin{figure}[!t]
\centering
\includegraphics[width=2.8in]{xx-64_1000_1000_relative-order}
\caption{Average download time as a function of arrival order in a busy period.}
\label{fig:xx-64_1000_1000_relative-order}
\end{figure}
Figure~\ref{fig:xx-64_1000_1000_relative-order} also indicates that variability in download times strongly
depends on the seed upload capacity. In particular, a fast seed yields the reverse
effect: leechers' download times tend to {\it increase} with arrival order. Intuitively, when a
slow seed is present, late arrivals to a busy period obtain large download rates from other
leechers, thus exhibiting a lower download time. However, when a fast seed is present,
the first leecher has the seed's large upload capacity to itself until the second arrival,
thus exhibiting a lower download time. The results also illustrate second-order
effects. For instance, a very late arrival can have an average download time slightly larger
(or smaller) than a late arrival (e.g., the sixth leecher arrival has a longer
download time than the fourth for $c_s = 64$ kBps).
Intuitively, this occurs because a very late arrival is likely to be alone in the busy
period, having to resort to the seed to finish the download. Since the upload capacity
of the seed can be smaller (larger) than the aggregate download rate the leecher would
otherwise receive from other leechers, its download time can increase (decrease). This behavior and its
consequences will be explained
and captured by the mathematical model presented in the next section.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.1pt}
\subfloat[Empirical CCDF of peer inter-departure time conditioned on a busy period.]{\label{fig:xx-64_1000_1000_departure-ccdf}
\includegraphics[width=2.8in]{xx-64_1000_1000_departure-ccdf}}
\subfloat[Empirical CCDF of the download time.]{\label{fig:download_times-cs}
\includegraphics[width=2.8in]{download_times-cs}}
\caption{\label{fig:empirical_ccdfs}
Characterization of consequences
of heterogeneous download rates for different values of seed capacity.}
\end{center}
\end{figure}
We now characterize the burstiness in the leecher departure process.
Figure~\ref{fig:xx-64_1000_1000_departure-ccdf}
shows the empirical CCDF (Complementary Cumulative Distribution Function) of the leecher
inter-departure times conditioned on a busy period (i.e., not including the inter-departure
time between the last leecher in a busy period and the first leecher of the next). Note
that the peer inter-arrival times follow an exponential distribution with rate 1/1000. However,
the results indicate a very distinct departure process. In particular, many peers tend to
leave the swarm at roughly the same time: up to 30\% of peers leave the swarm within a
couple of seconds from each other when $c_s = 64$ kBps. Moreover, the departure process
also exhibits high variability: some inter-departure times are as much as
ten times the average
(when $c_s = 64$ kBps). The figure also clearly shows that this observation strongly depends on
the seed upload capacity, and is more pronounced when the seed is slow. Intuitively, a
slower seed increases the average download time, thus increasing the chances that
leechers synchronize their content during the download and depart almost at the same time.
Finally, we also note that a fast seed yields a much less bursty departure process, although
still favoring short inter-departure times.
\begin{table}[h]
\centering
\caption{Average number of leechers and average number of synchronized leechers
conditioned on intervals where the number of leechers is greater than 1.}
\label{tab:synch}
\begin{tabular}{c|c|c}
$c_{s}$ & cond. avg. number & cond. avg. number\\
(kBps) & of leechers & of synch. leechers\\
\hline
48 & 4.45 & 2.40 \\
64 & 3.86 & 1.44 \\
96 & 3.57 & 0.87 \\
\end{tabular}
\end{table}
One consequence of the heterogeneous download rates that is closely related to
bursty departures is content synchronization. Here we refer to leechers as
synchronized when they are interested in no more than 50 pieces
(5\% of the file) of any other leecher. In this context, we compare the
average number of leechers in the system and the average number of those which
are synchronized. These metrics are conditioned on time intervals where the
number of leechers is greater than 1, because synchronization is not
defined otherwise. Table \ref{tab:synch} shows the results of our simulations.
The conditional average number of synchronized leechers corresponds to $53.9\%$,
$37.3\%$ and $24.4\%$ for $c_s$ equal to $48$, $64$ and $96$ kBps
respectively. While the synchronization is
less pronounced when the seed capacity is high, it is very significant when $c_s
\leq c_l$.
It is possible to have different download times even when all peers
that are simultaneously in the swarm have the same instantaneous download rate.
Since peers join the system at different times, they observe the swarm in
different sequences of states, in some of which there is more bandwidth
available; peers that spend more time in such states will have smaller download times.
Nevertheless, as we discussed in
Section~\ref{subsec:consequences}, heterogeneous download rates also contribute to the
variability in the download times. Figure~\ref{fig:download_times-cs} shows the
empirical CCDF of the leecher download time for different values of seed
capacity ($c_s$). While the maximum download
time is $45.5\%$ and $52.8\%$ higher than the minimum respectively for $c_s$ equal to $96$
and $64$ kBps, it is $218.7\%$ higher for $c_s = 48$ kBps. Surprisingly, the
minimum download time is the smallest when the seed capacity is the lowest
(i.e., 48 kBps). This is because leechers synchronize with high
probability under these circumstances and, as we will see in
Section~\ref{sec:validation}, non-synchronized
leechers achieve very high download rates in the presence of many
synchronized ones.
We observe that the seed capacity plays an important role in the occurrence of the
described consequences in unpopular swarms. In the following, we characterize the
impact of another important aspect on these consequences, namely the content
popularity, which can be captured through the leecher arrival rate. For this purpose, we conducted simulations where
the seed and the leechers have the same upload capacity of 64 kBps and the
average inter-arrival time (i.e., $1/\lambda$) varies between
500, 1000, 1500, 2000 and 2500 s.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.1pt}
\subfloat[Box plot of download time of leechers.]{\label{fig:64-64_xxxx_1000_box-plot}
\includegraphics[width=2.8in]{64-64_xxxx_1000_box-plot}}
\subfloat[Conditional average number of leechers and conditional average number of synchronized leechers.]{\label{fig:xx-64_xxxx_1000_synch}
\includegraphics[width=2.8in]{xx-64_xxxx_1000_synch}}
\caption{\label{fig:interarrival_results} Characterization of consequences
of heterogeneous download rates for different values of average inter-arrival time.}
\end{center}
\end{figure}
We consider the influence of the average inter-arrival time of leechers on the download times,
independently of arrival order. Figure~\ref{fig:64-64_xxxx_1000_box-plot} shows
the distribution of the download times
of peers as a function of the average inter-peer arrival time (i.e., the inverse
of the arrival rate), for $c_s = 64$ kBps. Note that there are sharp drops for $t >
4000$ which correspond to leechers whose average download rate is approximately
equal to $c_s$. These sharp drops are more pronounced when the inter-arrival
time is large. In addition, as the inter-arrival time grows, the
10th-percentile decreases and the 90th-percentile increases, indicating
that the download times become less concentrated around the average.
However, the variability between minimum and maximum
download time does not diminish with the inter-arrival time.
Figure~\ref{fig:xx-64_xxxx_1000_synch} illustrates the intensity of content
synchronization for different arrival rates. It shows the
average number of leechers in the system and the average number of those which
are synchronized. We observe that the number of synchronized leechers
remains practically the same as we increase the inter-peer arrival time,
indicating that a larger fraction of peers have similar content when popularity
decreases.
As with content synchronization, the fraction of bursty departures is also
strongly dependent on the leecher arrival rate. While approximately $5\%$ of the
intervals between departures are smaller than 10 seconds for an arrival rate of
$\lambda = 1/500$, more than $30\%$ of the intervals are smaller than 10 seconds for
$\lambda = 1/2500$. On the other hand, the unfairness
with respect to the arrival order in a busy period is almost insensitive to the leecher
arrival rate (considering $1/2500 \leq \lambda \leq 1/500$).
\subsection{Real experimental evaluation}
The results shown above were all obtained through simulations; we now
present results from prototype-based experiments deployed in more realistic
scenarios. The experiments were performed on the Internet using machines
from PlanetLab~\cite{planetlab} running an instrumented version of a BT
client~\cite{legout_urvoy-keller_michiardi_2006}. Although a large number of
experiments were conducted, we
report only a limited set of results due to space constraints. The goal here
is to validate the phenomenon of heterogeneity in homogeneous BT swarms and its
consequences in a real BT application running over the Internet.
In the experiments, the PlanetLab machines were selected using a quick and
simple performance test. Before starting every experiment, a controller
dispatches a command via ssh to a set of a few hundred machines randomly
chosen from the complete list of PlanetLab machines. The command
makes the machines download and install all the necessary files
(including the BT client and scripts) to execute the experiment locally. The set of
machines that had the best performance downloading and installing the files was
used in the experiments. This performance test was enough to avoid using
machines that were overloaded or connected through congested links.
We consider only private swarms in the experiment, in the sense that only peers
controlled by the experiment can connect to the swarm for uploading and downloading
content. Each private swarm consists of a single file of size $S$~MB owned by a
single seed that is always available and has an upload capacity of $c_s$. Leechers
interested in downloading the content arrive at the swarm according to a
Poisson process with rate $\lambda$.
All leechers that arrive at the swarm are homogeneous and have upload capacity equal to $c_l$.
The maximum upload capacities used in the experiments are set through configuration
parameters available in any BT client (including the one we use). Note that the upload
capacity values used in the experiments were far below the limit imposed on each slice
(user) in PlanetLab.
Each experiment runs for $t=5{,}000$~seconds, and leechers leave the swarm once
their download is completed.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.1pt}
\subfloat[Evolution of swarm size.]{\label{fig:swarm_size_experiment}
\includegraphics[width=2.6in]{swarm_size_result_u_50_lambda_125}}
\subfloat[Leechers' arrivals and
departures.]{\label{fig:dynamic}
\includegraphics[width=2.6in]{arrival_departure_u_50_lambda_125}}\\
\subfloat[Zoom-in of the first busy period.]{\label{fig:swarm_size_zoom}
\includegraphics[width=2.6in]{swarm_size_zoom}}
\caption{\label{fig:real-dynamics} Swarm dynamics in real experiments.}
\end{center}
\end{figure}
We start by analyzing the evolution of the swarm size for an unpopular swarm.
Figure~\ref{fig:swarm_size_experiment} shows the number of leechers in the swarm
over time for the duration of the experiment, with parameters $\lambda=1/125$
peers/s, $S=20$~MB, and $c_s=c_l=50$~kBps. We can observe several occurrences of
bursty departures, even though leechers arrive according to a Poisson process. As
previously discussed, bursty departures are a consequence of content
synchronization among the leechers in the swarm.
Using the same experiment as above, we investigate the impact of the leechers' arrival order
on their download times. Figure~\ref{fig:dynamic} illustrates the dynamics of the swarm, where
each horizontal line corresponds to the lifetime of a leecher in the swarm, starting when the
peer arrives and ending when it departs the swarm. Note that peers exhibit significantly
different download time (which corresponds to their lifetime in the system). In particular,
in many cases leechers arrive at different time instants but depart in the same burst.
For instance, the fifth leecher to arrive to the swarm departs in a burst
almost together with all
four prior arrivals (see Figure~\ref{fig:swarm_size_zoom} for a zoom-in of the
first busy period). Thus, the fifth leecher has a much smaller download completion time, when
compared to the first leecher. Similar behavior occurs between the fifteenth leecher and the
three leechers that arrived immediately before. Besides illustrating the variability of the
download times, this observation also indicates the unfairness with respect to leecher arrival
order. In particular, late arrivals to a busy period tend to have smaller download times.
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{margin=0.1pt}
\subfloat[$c_s=50$~kBps and $c_l=50$~kBps.]{\label{fig:download_ccdf_50}
\includegraphics[width=2.8in]{download_dist_u_50_lambda_125}}
\subfloat[$c_s=60$~kBps and $c_l=50$~kBps.]{\label{fig:download_ccdf_60}
\includegraphics[width=2.8in]{download_dist_u_60_lambda_125}}
\caption{\label{fig:download_ccdf} CCDF of download time from real experiments.}
\end{center}
\end{figure}
We now focus on the distribution of the leechers' download times to illustrate their
relatively high variability. Figures~\ref{fig:download_ccdf_50} and
\ref{fig:download_ccdf_60} show the complementary cumulative distribution function (CCDF) of download times computed for two experiments with
distinct upload capacities for the seed ($c_s = 50$~kBps and $c_s = 60$~kBps, respectively,
with all other parameters the same). In both experiments, download times exhibit high
variance. In the case $c_s=50$~kBps (Figure~\ref{fig:download_ccdf_50}), the
minimum and maximum values are 145 and 480 seconds, respectively, with the maximum being more
than three times the minimum. When the upload capacity of the seed is higher
than that of the
leechers, Figure~\ref{fig:download_ccdf_60} shows that the variance in download times
decreases, as expected, since the system capacity is increased. Finally, we note
several discontinuities (i.e., sharp drops) in both CCDF curves which are caused
by sets of leechers that have approximately the same download time.
\section{Related prior works}
\label{sec:related}
Modeling P2P file sharing systems and in particular BT has been an active area
of research in the past few years, driven mainly by the high complexity, robustness
and user-level performance of such systems. One of the first BT models to
predict the download times of peers was presented in \cite{qiu_srikant_2004}. This
simple fluid model based on differential equations assumes a homogeneous peer
population (with respect to download and upload capacities) and Poisson arrivals,
but yields an analytical steady-state solution for performance metrics. Several subsequent BT models have been
proposed in the literature to capture various system characteristics, among them
heterogeneous peer population (with respect to upload and download capacities)
\cite{piccolo_neglia_2004,liao_papadopoulos_psounis_2007,meulpolder_2008,chow_golubchik_misra_2009}.
However, to the best of our knowledge, all models predict that identical peers
(with respect to their upload capacities) simultaneously downloading a file will
have similar performance (with respect
to download rates), contrary to the findings in this paper. Moreover, BT models
generally assume either a rather large peer arrival rate (e.g., Poisson) or a large
flash crowd (all peers join the swarm at the same time). This is somewhat surprising,
given that most real BT swarms are rather small in size and quite unpopular
\cite{guo}. Finally, one perverse effect of this lack of popularity, known as content
unavailability, is shown to be a severe problem in most BT swarms~\cite{conext2009}.
Another interesting aspect of BT has been the discovery and characterization of
some non-trivial phenomena induced by its complex dynamics. For example, peers in a BT
swarm tend to form clusters based on their upload link capacities, exhibiting a strong
homophily effect. In particular, peers with identical upload capacities tend to exchange
relatively more data between them
\cite{legout_liogkas_kohler_zhang_2007,bharambe_herley_padmanabhan_2006,performance2010}.
Yet another peculiar behavior is the fact that arriving leechers can continue to
download the entire content despite the absence of any seed in the swarm, a property
known as self-sustainability \cite{menasche_et_al_2010}.
More recently, the {\it missing piece syndrome} has been
characterized mathematically \cite{hajek_zhu_2010,hajek2}. An evaluation of the
impact of different peer selection strategies on the stability of the system is
presented in \cite{stability}; such strategies may
reduce the effect of content synchronization among peers.
To the best of our knowledge, however, no prior work has alluded to the phenomenon
we describe in this paper, namely, that homogeneous peers can have heterogeneous
download rates.
\subsection{The observed behavior}
\label{subsec:strange}
Having presented BT's mechanisms, we now illustrate the heterogeneous download rate
phenomenon and its consequences with two simple examples. Consider a swarm formed
by a seed and 5 leechers. All peers, including the single seed, have identical
upload capacity (64 kBps), but large (unconstrained) download capacity. The leechers
download a file containing 1000 pieces (256 MB) and exit the swarm immediately after
download completion. The seed never leaves the swarm.
This system was evaluated using an instrumented implementation of the BitTorrent
mainline 4.0.2 client (also used in \cite{legout_urvoy-keller_michiardi_2006}) running on
PlanetLab as well as a
detailed packet-level simulator of BT.
Both the PlanetLab experiments and the simulations use fully functional BT
clients that implement all BT control messages and policies, including the peer selection
strategies (TFT and optimistic unchoke) and the piece selection modes (random-first,
rarest-first, and strict priority).
The simulation model was developed in the modeling tool Tangram-II
\cite{tangram2} (open source and publicly available software).
The model we developed is very detailed and faithfully implements the
protocol of the BitTorrent mainline 4.0.2 client, including all control
messages and policies. In accordance with Tangram-II's modeling paradigm,
entities that participate in the system are implemented as separate
objects that communicate by message passing. Thus, peers (leechers and seed)
and the tracker are represented by objects that can be fully parametrized (upload
rate, file size, seed after download, etc.).
\begin{figure}[!t]
\begin{center}
\captionsetup[subfloat]{}
\subfloat[Arrival intervals: 10 min, 4 min, 4 min, 4 min.]{\label{fig:simul1}
\includegraphics[width=2.8in]{simul1}}
\subfloat[Arrival intervals: 4 min, 4 min, 4 min, 10 min.]{\label{fig:simul2}
\includegraphics[width=2.8in]{simul2}}
\caption{\label{fig:swarm-size} Evolution of the number of leechers in the swarm.}
\end{center}
\end{figure}
In the following simulations and experiments, leechers start to join the swarm only after the seed is
connected and they leave immediately after finishing the download.
The simulation/experiment ends when the last leecher leaves.
Figures \ref{fig:simul1} and \ref{fig:simul2} show the evolution of the swarm size as a function
of time for both simulation and experimental results and two different leecher
arrival patterns. In Figure \ref{fig:simul1}, peers leave the swarm in the order
they arrived (i.e., FIFO) and have a relatively similar download time. Thus, the
download time is relatively indifferent to arrival order (with the exception of
the first peer).
Figure \ref{fig:simul2} shows the same metric but with different arrival
times (in fact, the inter-arrival times of peers are mostly preserved).
Surprisingly, an unexpected behavior can be observed in the system dynamics:
despite the significant difference in arrival times, all five leechers completed
their respective downloads nearly at the same time.
The inter-departure times are small compared to the download times, which
characterizes bursty departures. This means that peers arriving later to the swarm have smaller download
times. In fact, the fifth peer completed the download in about half the time of
the first leecher. Thus, the system is quite unfair with respect to the arrival
order of leechers, with late arrivals being significantly favored.
What is happening? Why does BT exhibit such
dynamics? We answer these questions in the next sections.
\section{Introduction}
Self-supervised learning holds great promise for improving representations when labeled data are scarce. In semi-supervised learning, recent self-supervision methods are state-of-the-art \citep{rotnet, exemplar_nets, S4L}, and self-supervision is essential in video tasks where annotation is costly \citep{Vondrick_2018_ECCV, Vondrick_2016}. To date, however, self-supervised approaches lag behind fully supervised training on standard accuracy metrics and research has existed in a mode of catching up to supervised performance. Additionally, when used in conjunction with fully supervised learning on a fully labeled dataset, self-supervision has little impact on accuracy. This raises the question of whether large labeled datasets render self-supervision needless.
We show that while self-supervision does not substantially improve accuracy when used in tandem with standard training on fully labeled datasets, it can improve several aspects of model robustness, including robustness to adversarial examples \citep{madry}, label corruptions \citep{Patrini, noise_label_overfitting}, and common input corruptions such as fog, snow, and blur \citep{hendrycks2019robustness}. Importantly, these gains are masked if one looks at clean accuracy alone, for which performance stays constant. Moreover, we find that self-supervision greatly improves out-of-distribution detection for difficult, near-distribution examples, a long-standing and underexplored problem. In fact, using self-supervised learning techniques on CIFAR-10 and ImageNet, we are even able to \emph{surpass fully supervised methods}.
These results demonstrate that self-supervision need not be viewed as a collection of techniques allowing models to catch up to full supervision. Rather, using the two in conjunction provides strong regularization that improves robustness and uncertainty estimation even if accuracy does not change. Importantly, these methods can improve robustness and uncertainty estimation without requiring larger models or additional data \citep{madrydata, kurakin}. They can be used with task-specific methods for additive effect with no additional assumptions. With self-supervised learning, we make tangible progress on adversarial robustness, label corruption, common input corruptions, and out-of-distribution detection, suggesting that future self-supervised learning methods could also be judged by their utility for uncertainty estimates and model robustness. Code and our expanded ImageNet validation dataset are available at \href{https://github.com/hendrycks/ss-ood}{\texttt{https://github.com/hendrycks/ss-ood}}.
\section{Related Work}
\textbf{Self-supervised learning.}\quad
A number of self-supervised methods have been proposed, each exploring a different pretext task. \citet{relative_position} predict the relative position of image patches and use the resulting representation to improve object detection. \citet{exemplar_nets} create surrogate classes to train on by transforming seed image patches. Similarly, \citet{rotnet} predict image rotations. Other self-supervised approaches include using colorization as a proxy task \cite{gustav_colorization}, deep clustering methods \cite{IID}, and methods that maximize mutual information \citep{deepinfomax} with high-level representations \citep{cpc, hnaff2019dataefficient}. These works focus on the utility of self-supervision for learning without labeled data and do not consider its effect on robustness and uncertainty estimation.
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-15pt}
\includegraphics[width=0.95\linewidth]{figures/eps.pdf}
\caption{The effect of attack strength on a $\varepsilon=8/255$ adversarially trained model. The attack strengths are $\varepsilon \in \{ 4/255, 5/255, \ldots, 10/255 \}$. Clean accuracy does not change when applying self-supervision, and hence self-supervision's benefits are masked when looking at clean accuracy alone.}
\label{fig:advresultsfig}
\vspace{-15pt}
\end{wrapfigure}
\textbf{Robustness.}\quad
Improving model robustness refers to the goal of ensuring machine learning models are resistant across a variety of imperfect training and testing conditions. \citet{hendrycks2019robustness} look at how models can handle common real-world image corruptions (such as fog, blur, and JPEG compression) and propose a comprehensive set of distortions to evaluate real-world robustness. Another robustness problem is learning in the presence of corrupted labels \citep{Nettleton, Patrini}. To this end, \citet{hendrycks2018glc} introduce Gold Loss Correction (GLC), a method that uses a small set of trusted labels to improve accuracy in this setting. With high degrees of label corruption, models start to overfit the misinformation in the corrupted labels \citep{noise_label_overfitting}, suggesting a need for ways to supplement training with reliable signals from unsupervised objectives. \citet{madry} explore adversarial robustness and propose PGD adversarial training, where models are trained with a minimax robust optimization objective. \citet{Zhang2019theoretically} improve upon this work with a modified loss function and develop a better understanding of the trade-off between adversarial accuracy and natural accuracy.
\textbf{Out-of-distribution detection.}\quad
Out-of-distribution detection has a long history. Traditional methods such as one-class SVMs \citep{OC-SVM} have been revisited with deep representations \citep{DeepSVDD}, yielding improvements on complex data. A central line of recent exploration has been with out-of-distribution detectors using supervised representations. \citet{hendrycks17baseline} propose using the maximum softmax probability of a classifier for out-of-distribution detection. \citet{kimin} expand on this by generating synthetic outliers and training the representations to flag these examples as outliers. However, \citet{outlier_exposure} find that training against a large and diverse dataset of outliers enables far better out-of-distribution detection on unseen distributions. In these works, detection is most difficult for near-distribution outliers, which suggests a need for new methods that force the model to learn more about the structure of in-distribution examples.
\section{Robustness}
\subsection{Robustness to Adversarial Perturbations}\label{section:adv}
Improving robustness to adversarial inputs has proven difficult, with adversarial training providing the only longstanding gains \citep{bypass, obfuscated_gradients}. In this section, we demonstrate that auxiliary self-supervision in the form of predicting rotations \citep{rotnet} can improve upon standard Projected Gradient Descent (PGD) adversarial training \citep{madry}. We also observe that auxiliary self-supervision can provide gains when complemented with stronger defenses like TRADES \citep{Zhang2019theoretically} and is not broken by gradient-free attacks such as SPSA \citep{uesato2018adversarial}.
\textbf{Setup.}\quad
The problem of defending against bounded adversarial perturbations can be formally expressed as finding model parameters $\theta$ that minimize the objective
\begin{equation}
\begin{matrix}
\min_{\theta} \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \max_{x' \in S} L(x', y; \theta) \right ]
&
\textup{where}
&
S = \{x':\left \| x - x' \right \| < \varepsilon\}
\end{matrix}
\end{equation}
In this paper, we focus on $\ell_\infty$ norm bounded adversaries. \citet{madry} propose that PGD is ``a universal first-order adversary.'' Hence, we first focus on defending against PGD. Let $\textup{PGD}(x)$ be the $K^{\text{th}}$ step of PGD, where
\begin{equation}
\begin{matrix}
x^{k+1} = \Pi_{S} \left( x^k + \alpha \textup{ sign}(\nabla_{x} L(x^k, y; \theta)) \right)
&
\textup{and}
&
x^0 = x + U(-\varepsilon, \varepsilon)
\end{matrix}
\end{equation}
where $K$ is a preset parameter which characterizes the number of steps that are taken, $\Pi_S$ is the projection operator for the $\ell_\infty$ ball $S$, and $L(x, y; \theta)$ is the loss we want to optimize. Normally, this loss is the cross entropy between the model's softmax classification output for $x$ and the ground truth label $y$. For evaluating robust accuracy, we use 20-step and 100-step adversaries. For the 20-step adversary, we set the step size $\alpha=2/255$. For the 100-step adversary, we set $\alpha=0.3/255$ as in \citep{madry}. During training, we use 10-step adversaries with $\alpha=2/255$.
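For concreteness, a minimal PyTorch sketch of this adversary is shown below; the function and variable names are ours, and inputs are assumed to lie in $[0,1]$:
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """K-step l_inf PGD: uniform random start, signed-gradient ascent,
    and projection onto the eps-ball around x after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto S
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
\end{verbatim}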
In all experiments, we use 40-2 Wide Residual Networks \cite{wideresnet}. For training, we use SGD with Nesterov momentum of 0.9 and a batch size of 128. We use an initial learning rate of 0.1 and a cosine learning rate schedule \cite{sgdr} and weight decay of $5\times 10^{-4}$. For data augmentation, we use random cropping and mirroring. Hyperparameters were chosen as standard values and are used in subsequent sections unless otherwise specified.
\begin{table*}[t]
\begin{center}
\begin{tabular}{lccc}
\toprule
& Clean & 20-step PGD & 100-step PGD \\ \midrule
Normal Training & 94.8 & 0.0 & 0.0 \\
Adversarial Training & 84.2 & 44.8 & 44.8 \\
+ Auxiliary Rotations & 83.5 & 50.4 & 50.4 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Results for our defense. All results use $\varepsilon=8.0/255$. For 20-step adversaries $\alpha=2.0/255$, and for 100-step adversaries $\alpha=0.3/255$. More steps do not change results, so the attacks converge. Self-supervision through rotations provides large gains over standard adversarial training.\looseness=-1}
\label{tab:advresults}
\end{table*}
\textbf{Method.}\quad
We explore improving representation robustness beyond standard PGD training with auxiliary rotation-based self-supervision in the style of \cite{rotnet}. In our approach, we train a classification network along with a separate auxiliary head, which takes the penultimate vector from the network as input and outputs a 4-way softmax distribution. This head is trained along with the rest of the network to predict the amount of rotation applied to a given input image (from 0\degree, 90\degree, 180\degree, and 270\degree). Our overall loss during training can be broken down into a supervised loss and a self-supervised loss
\begin{equation}
L_{\textup{Total}} (x, y; \theta) = L_{\textup{CE}} (\textup{PGD}(x), y; \theta) + \lambda L_{\textup{SS}} (\textup{PGD}(x); \theta).
\end{equation}
Note that the self-supervised component of the loss does not require the ground truth training label $y$ as input. The supervised loss does not make use of our auxiliary head, while the self-supervised loss only makes use of this head. When $\lambda = 0$, our total loss falls back to the loss used for PGD training. For our experiments, we use $\lambda = 0.5$ and the following rotation-based self-supervised loss
\begin{equation}
L_{\textup{SS}} = \frac{1}{4} \left[ \sum_{ r \in \{ 0^{\circ}, 90^{\circ}, 180^{\circ}, 270^{\circ} \} } L_{\textup{CE}}(R_{r}(x), r; \theta)\right],
\end{equation}
where $R_{r}(x)$ is a rotation transformation and $L_{\textup{CE}}(x, r; \theta)$ is the cross-entropy between the auxiliary head's output and the ground-truth label $r \in \{ 0^{\circ}, 90^{\circ}, 180^{\circ}, 270^{\circ} \}$. In order to adapt the PGD adversary to the new training setup, we modify the loss used in the PGD update equation (2) to maximize both the rotation loss and the classification loss. The overall loss that PGD will try to maximize for each training image is $L_{\textup{CE}}(x, y; \theta) + L_{\textup{SS}}(x; \theta)$. At test-time, the PGD loss does not include the $L_{\textup{SS}}$ term.
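A hedged PyTorch sketch of the combined objective follows; we assume a network exposing \texttt{features} and \texttt{classifier} submodules and a separate 4-way \texttt{rot\_head} (all our own naming), and the batch passed in is the already-perturbed $\textup{PGD}(x)$:
\begin{verbatim}
def total_loss(model, rot_head, x_adv, y, lam=0.5):
    """Supervised cross-entropy plus lambda times the rotation loss."""
    feats = model.features(x_adv)
    sup_loss = F.cross_entropy(model.classifier(feats), y)
    ss_loss = 0.0
    for k in range(4):  # 0, 90, 180, 270 degree rotations
        x_rot = torch.rot90(x_adv, k, dims=(2, 3))
        r = torch.full((x_adv.size(0),), k, dtype=torch.long,
                       device=x_adv.device)
        ss_loss = ss_loss + F.cross_entropy(rot_head(model.features(x_rot)), r)
    return sup_loss + lam * ss_loss / 4.0
\end{verbatim}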
\textbf{Results and analysis.}\quad We are able to attain large improvements over standard PGD training by adding self-supervised rotation prediction. Table \ref{tab:advresults} contains results of our model against PGD adversaries with $K=20$ and $K=100$. In both cases, we are able to achieve a 5.6\% absolute improvement over classical PGD training. In Figure \ref{fig:advresultsfig}, we observe that our method of adding auxiliary rotations actually provides larger gains over standard PGD training as the maximum perturbation distance $\varepsilon$ increases. The figure also shows that our method can withstand up to 11\% larger perturbations than PGD training without any drop in performance.
In order to demonstrate that our method does not rely on gradient obfuscation, we attempted to attack our models using SPSA \citep{uesato2018adversarial} and failed to notice any performance degradation compared to standard PGD training. In addition, since our self-supervised method has the nice property of being easily adaptable to supplement other supervised defenses, we also studied the effect of adding self-supervised rotations to stronger defenses such as TRADES \citep{Zhang2019theoretically}. We found that self-supervision is able to help in this setting as well. Our best-performing TRADES + rotations model gives a 1.22\% boost over standard TRADES and a 7.79\% boost over standard PGD training in robust accuracy. For implementation details, see our code.
\subsection{Robustness to Common Corruptions}
\begin{figure*}
\centering
\centerline{\includegraphics[width=1\linewidth]{figures/reordered_coff_def_taller_2.pdf}}
\vspace{-0.1in}
\caption{Accuracy of normal training compared to auxiliary rotation self-supervision on the nineteen corruptions in CIFAR-10-C grouped into noise, blur, digital, and weather. Each bar represents an average over all five corruption strengths for a given corruption type.}\label{fig:common_corruptions}
\vspace{-5pt}
\end{figure*}
\textbf{Setup.}\quad
In real-world applications of computer vision systems, inputs can be corrupted in various ways that may not have been encountered during training. Improving robustness to these common corruptions is especially important in safety-critical applications. \citet{hendrycks2019robustness} create a set of fifteen test corruptions and four validation corruptions to measure input corruption robustness. These corruptions fall into noise, blur, weather, and digital categories. Examples include shot noise, zoom blur, snow, and JPEG compression.
We use the CIFAR-10-C validation dataset from \cite{hendrycks2019robustness} and compare the robustness of normally trained classifiers to classifiers trained with an auxiliary rotation prediction loss. As in previous sections, we predict all four rotations in parallel in each batch. We use 40-2 Wide Residual Networks and the same optimization hyperparameters as before. We do not tune on the validation corruptions, so we report average performance over all corruptions.
\textbf{Results and analysis.}\quad
The baseline of normal training achieves a clean accuracy of 94.7\% and an average accuracy over all corruptions of 72.3\%. Training with auxiliary rotations maintains clean accuracy at 95.5\% but increases the average accuracy on corrupted images by 4.6\% to 76.9\%. Thus, the robustness benefits of self-supervision are masked by the similar accuracy on clean images. Performance gains are spread across corruptions, with a small loss of performance in only one corruption type, JPEG compression. For glass blur, accuracy improves by 11.4\%, and for Gaussian noise it improves by 11.6\%. Performance is also improved by 8.9\% on contrast and shot noise and 4.2\% on frost, indicating substantial gains in robustness on a wide variety of corruptions. These results demonstrate that self-supervision can regularize networks to be more robust even if clean accuracy is not affected.
\subsection{Robustness to Label Corruptions}
\begin{figure*}[t]
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/label_noise/cifar10_no_corr.pdf}
\label{fig:plot11}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/label_noise/cifar10_glc_at_5.pdf}
\label{fig:plot14}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/label_noise/cifar10_glc_at_10.pdf}
\label{fig:plot13}
\end{subfigure}
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/label_noise/cifar100_no_corr.pdf}
\label{fig:plot21}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/label_noise/cifar100_glc_at_5.pdf}
\label{fig:plot24}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/label_noise/cifar100_glc_at_10.pdf}
\label{fig:plot23}
\end{subfigure}
\vspace{-0.1in}
\caption{Error curves for label corruption comparing normal training to training with auxiliary rotation self-supervision. Auxiliary rotations improve performance when training without loss corrections and are complementary with the GLC loss correction method.}\label{fig:labelnoise}
\vspace{-5pt}
\end{figure*}
\textbf{Setup.}\quad
Training classifiers on corrupted labels can severely degrade performance. Thus, several prior works have explored training deep neural networks to be robust to label noise in the multi-class classification setting \cite{Sukhbaatar, Patrini, hendrycks2018glc}. We use the problem setting from these works. Let $x$, $y$, and $\widetilde{y}$ be an input, clean label, and potentially corrupted label respectively. Given a dataset $\widetilde{\mathcal{D}}$ of $(x,\widetilde{y})$ pairs for training, the task is to obtain high classification accuracy on a test dataset $\mathcal{D}_\text{test}$ of cleanly-labeled $(x,y)$ pairs.
Given a cleanly-labeled training dataset $\mathcal{D}$, we generate $\widetilde{\mathcal{D}}$ with a corruption matrix $C$, where $C_{ij} = p(\widetilde{y}=j \mid y=i)$ is the probability of a ground truth label $i$ being corrupted to $j$. With $K$ denoting the number of classes, we construct $C$ according to $C = (1-s)I_K + s\mathsf{1}{\mathsf{1}}^\mathsf{T}/K$. In this equation, $s$ is the corruption strength, which lies in $[0,1]$. At a corruption strength of $0$, the labels are unchanged, while at a corruption strength of $1$ the labels have an equal chance of being corrupted to any class. To measure performance, we average performance on $\mathcal{D}_\text{test}$ over corruption strengths from $0$ to $1$ in increments of $0.1$ for a total of 11 experiments.
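As an illustration of this corruption process, here is a small NumPy sketch (our own illustration, under the uniform-corruption construction above) that samples corrupted labels from $C$:
\begin{verbatim}
import numpy as np

def corrupt_labels(y, num_classes, s, rng):
    # C = (1 - s) I_K + s 11^T / K: keep the label with probability
    # (1 - s); otherwise redraw it uniformly over all K classes.
    C = ((1.0 - s) * np.eye(num_classes)
         + s * np.ones((num_classes, num_classes)) / num_classes)
    return np.array([rng.choice(num_classes, p=C[label]) for label in y])

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=50000)      # clean CIFAR-10-style labels
y_tilde = corrupt_labels(y, 10, s=0.5, rng=rng)
\end{verbatim}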
\textbf{Methods.}\quad
Training without loss correction methods or self-supervision serves as our first baseline, which we call \textit{No Correction} in Table \ref{tab:labelnoise}. Next, we compare to the state-of-the-art \textit{Gold Loss Correction (GLC)} \cite{hendrycks2018glc}. This is a two-stage loss correction method based on \cite{Sukhbaatar} and \cite{Patrini}. The first stage of training estimates the matrix $C$ of conditional corruption probabilities, which partially describes the corruption process. The second stage uses the estimate of $C$ to train a corrected classifier that performs well on the clean label distribution. The \textit{GLC} assumes access to a small dataset of trusted data with cleanly-labeled examples. Thus, we specify the amount of trusted data available in each experiment as a fraction of the training set. This setup is also known as a semi-verified setting \cite{semiverified}.
To investigate the effect of self-supervision, we use the combined loss $\mathcal{L}_\text{CE}(x, y; \theta) + \lambda\mathcal{L}_\text{SS}(x; \theta)$, where the first term is standard cross-entropy loss and the second term is the auxiliary rotation loss defined in \Cref{section:adv}. We call this \textit{Rotations} in Table \ref{tab:labelnoise}. In all experiments, we set $\lambda = 0.5$. \citet{rotnet} demonstrate that predicting rotations can yield effective representations for subsequent fine-tuning on target classification tasks. We build on this approach and pre-train with the auxiliary rotation loss alone for 100 epochs, after which we fine-tune for 40 epochs with the combined loss.
We use 40-2 Wide Residual Networks \citep{wideresnet}. Hyperparameters remain unchanged from \Cref{section:adv}. To select the number of fine-tuning epochs, we use a validation split of the CIFAR-10 training dataset with clean labels and select a value to bring accuracy close to that of \textit{Normal Training}. Results are in \Cref{tab:labelnoise} and performance curves are in \Cref{fig:labelnoise}.
\begin{table*}[ht]
\begin{center}
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}
& Normal Training & Rotations & Normal Training & Rotations \\ \midrule
No Correction & 27.4 & 21.8 & 52.6 & 47.4 \\
GLC (5\% Trusted) & 14.6 & 10.5 & 48.3 & 43.2 \\
GLC (10\% Trusted) & 11.6 & 9.6 & 39.1 & 36.8 \\ \bottomrule
\end{tabular}
\end{center}
\caption{Label corruption results comparing normal training to training with auxiliary rotation self-supervision. Each value is the average error over 11 corruption strengths. All values are percentages. The reliable training signal from self-supervision improves resistance to the noisy training signal from the corrupted labels.}
\label{tab:labelnoise}
\end{table*}
\textbf{Analysis.}\quad
We observe large gains in robustness from auxiliary rotation prediction. Without loss corrections, we reduce the average error by 5.6\% on CIFAR-10 and 5.2\% on CIFAR-100. This corresponds to an 11\% relative improvement over the baseline of normal training on CIFAR-100 and a 26\% relative improvement on CIFAR-10. In fact, auxiliary rotation prediction with no loss correction outperforms the GLC with 5\% trusted data on CIFAR-100. This is surprising given that the GLC was developed specifically to combat label noise.
We also observe additive effects with the GLC. On CIFAR-10, the GLC with 5\% trusted data obtains 14.6\% average error, which is reduced to 10.5\% with the addition of auxiliary rotation prediction. Note that doubling the amount of trusted data to 10\% yields 11.6\% average error. Thus, using self-supervision can enable obtaining better performance than doubling the amount of trusted data in a semi-supervised setting. On CIFAR-100, we observe similar complementary gains from auxiliary rotation prediction. Qualitatively, we can see in \Cref{fig:labelnoise} that performance degradation as the corruption strength increases is softer with auxiliary rotation prediction.
On CIFAR-100, error at 0\% corruption strength is 2.3\% higher with auxiliary rotation predictions. This is because we selected the number of fine-tuning epochs on CIFAR-10 at 0\% corruption strength, for which the degradation is only 1.3\%. Fine-tuning for longer can eliminate this gap, but also leads to overfitting label noise \citep{noise_label_overfitting}. Controlling this trade-off of robustness to performance on clean data is application-specific. However, past a corruption strength of 20\%, auxiliary rotation predictions improve performance for all tested corruption strengths and methods.
\section{Out-of-Distribution Detection}
\textbf{Setup.}\quad
In the following experiments, we take a dataset consisting of $k$ classes and train a model on one class. This model is used as an out-of-distribution detector. For the source of OOD examples, we use the examples from the remaining unseen $k-1$ classes. Consequently, for the datasets we consider, the OOD examples are near the in-distribution examples and make for a difficult out-of-distribution detection challenge.
We evaluate each OOD detector by using the area under the receiver operating characteristic curve (AUROC) \citep{auroc}. Each OOD detector produces an anomaly score. The AUROC is equal to the probability an out-of-distribution example has a higher anomaly score than an in-distribution example. Thus an OOD detector with a 50\% AUROC is at random-chance levels, and one with a 100\% AUROC is without a performance flaw.
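Concretely, the AUROC can be computed from anomaly scores with a few lines of scikit-learn; this is a schematic example with made-up scores, not our evaluation code.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

scores_in = np.array([0.1, 0.3, 0.2])   # anomaly scores, in-distribution
scores_out = np.array([0.7, 0.4, 0.9])  # anomaly scores, OOD
labels = np.concatenate([np.zeros_like(scores_in),
                         np.ones_like(scores_out)])
scores = np.concatenate([scores_in, scores_out])
print(roc_auc_score(labels, scores))    # 0.5 is chance, 1.0 is perfect
\end{verbatim}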
\subsection{CIFAR-10}
\begin{table*}[ht]
\small
\setlength\tabcolsep{3pt}
\begin{center}
\begin{tabular}{lcccccc|c|cc}
\toprule
& OC-SVM & DeepSVDD & Geometric & RotNet & DIM & IIC & Supervised (OE) & Ours & Ours + OE
\\ \midrule
Airplane & 65.6 & 61.7 & 76.2 & 71.9 & 72.6 & 68.4 & 87.6 & 77.5 & 90.4 \\
Automobile & 40.9 & 65.9 & 84.8 & 94.5 & 52.3 & 89.4 & 93.9 & 96.9 & 99.3 \\
Bird & 65.3 & 50.8 & 77.1 & 78.4 & 60.5 & 49.8 & 78.6 & 87.3 & 93.7 \\
Cat & 50.1 & 59.1 & 73.2 & 70.0 & 53.9 & 65.3 & 79.9 & 80.9 & 88.1 \\
Deer & 75.2 & 60.9 & 82.8 & 77.2 & 66.7 & 60.5 & 81.7 & 92.7 & 97.4 \\
Dog & 51.2 & 65.7 & 84.8 & 86.6 & 51.0 & 59.1 & 85.6 & 90.2 & 94.3 \\
Frog & 71.8 & 67.7 & 82.0 & 81.6 & 62.7 & 49.3 & 93.3 & 90.9 & 97.1 \\
Horse & 51.2 & 67.3 & 88.7 & 93.7 & 59.2 & 74.8 & 87.9 & 96.5 & 98.8 \\
Ship & 67.9 & 75.9 & 89.5 & 90.7 & 52.8 & 81.8 & 92.6 & 95.2 & 98.7 \\
Truck & 48.5 & 73.1 & 83.4 & 88.8 & 47.6 & 75.7 & 92.1 & 93.3 & 98.5 \\ \hline
Mean & 58.8 & 64.8 & 82.3 & 83.3 & 57.9 & 67.4 & 87.3 & 90.1 & 95.6 \\
\bottomrule
\end{tabular}
\end{center}
\caption{AUROC values of different OOD detectors trained on one of ten CIFAR-10 classes. Test time out-of-distribution examples are from the remaining nine CIFAR-10 classes. In-distribution examples are examples belonging to the row's class. Our self-supervised technique surpasses a fully supervised model. All values are percentages.}
\vspace{-10pt}
\label{tab:cifar}
\end{table*}
\textbf{Baselines.}\quad
One-class SVMs \citep{OC-SVM} are an unsupervised out-of-distribution detection technique which models the training distribution by finding a small region containing most of the training set examples, and points outside this region are deemed OOD. In our experiment, OC-SVMs operate on the raw CIFAR-10 pixels. Deep SVDD \citep{DeepSVDD} uses convolutional networks to extract features from the raw pixels all while modelling one class, like OC-SVMs.
RotNet \citep{rotnet} is a successful self-supervised technique which learns its representations by predicting whether an input is rotated 0\degree, 90\degree, 180\degree, or 270\degree. After training RotNet, we use the softmax probabilities to determine whether an example is in- or out-of-distribution. To do this, we feed the network the original example (0\degree) and record RotNet's softmax probability assigned to the 0\degree\hspace{1pt} class. We then rotate the example 90\degree\, and record the probability assigned to the 90\degree\, class. We do the same for 180\degree\, and 270\degree, and add up these probabilities. The sum of the probabilities of in-distribution examples will tend to be higher than the sum for OOD examples, so the negative of this sum is the anomaly score. Next, \citet{golan} (Geometric) predicts transformations such as rotations and whether an input is horizontally flipped; we are the first to connect this method to self-supervised learning and we greatly improve their method. Deep InfoMax \citep{deepinfomax} networks learn representations which have high mutual information with the input; for detection we use the scores of the discriminator network. A recent self-supervised technique is Invariant Information Clustering (IIC) \citep{IID}, which teaches networks to cluster images without labels by learning representations which are invariant to geometric perturbations such as rotations, scaling, and skewing. For our supervised baseline, we use a deep network which performs logistic regression; for the negative class we use Outlier Exposure, in which the network is exposed to examples from a real, diverse dataset of out-of-distribution examples. Done correctly, this process teaches the network to generalize to new, unseen distributions. For the outlier dataset, we use 80 Million Tiny Images \citep{80mil_tiny_images} with CIFAR-10 and CIFAR-100 examples removed. Crucial to the success of the supervised baseline is our loss function choice. To ensure the supervised baseline learns from hard examples, we use the Focal Loss \citep{focal_loss}.
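The RotNet anomaly score described above can be sketched in a few lines of PyTorch; \texttt{rotnet} is a hypothetical network returning 4-way rotation logits for an NCHW batch, and this is a schematic illustration rather than the exact evaluation code.
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def rotnet_anomaly_score(rotnet, x):
    # Sum the softmax probability that each rotated copy is assigned
    # to its own rotation class; negate so higher = more anomalous.
    score = torch.zeros(x.size(0), device=x.device)
    for k in range(4):                  # 0, 90, 180, 270 degrees
        probs = F.softmax(rotnet(torch.rot90(x, k, dims=(2, 3))), dim=1)
        score = score + probs[:, k]
    return -score
\end{verbatim}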
\textbf{Method.}\quad
For our self-supervised one-class OOD detector, we use a deep network to predict geometric transformations and thereby surpass previous work and the fully supervised network. Examples are rotated 0\degree, 90\degree, 180\degree, or 270\degree\, then translated 0 or $\pm 8$ pixels vertically and horizontally. These transformations are composed together, and the network has three softmax heads: one for predicting rotation, one for predicting vertical translations, and one for predicting horizontal translations. The backbone architecture is a 16-4 WideResNet \citep{wideresnet} trained with a dropout rate of 0.3 \citep{dropout}. We choose a 16-4 network because there are fewer training samples. Networks are trained with a cosine learning rate schedule \citep{sgdr}, an initial learning rate of 0.1, Nesterov momentum, and a batch size of 128. Data is augmented with standard cropping and mirroring. Our RotNet and supervised baseline use the same backbone architecture and training hyperparameters. When training our method with Outlier Exposure, we encourage the network to have uniform softmax responses on out-of-distribution data. For Outlier Exposure to work successfully, we applied the aforementioned geometric transformations to the outlier images so that the in-distribution data and the outliers are as similar as possible.
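A minimal sketch of the composed self-supervised objective follows, assuming a hypothetical \texttt{backbone} and a list of three softmax \texttt{heads} (rotation, vertical shift, horizontal shift); for brevity the translation is approximated by a circular shift rather than a padded crop.
\begin{verbatim}
import torch
import torch.nn.functional as F

def transform(x, rot_k, dy, dx):
    x = torch.rot90(x, rot_k, dims=(2, 3))
    # Circular shift as a simple stand-in for translation.
    return torch.roll(x, shifts=(dy, dx), dims=(2, 3))

def self_sup_loss(backbone, heads, x):
    rot_k = int(torch.randint(0, 4, (1,)))
    dy = (-8, 0, 8)[int(torch.randint(0, 3, (1,)))]
    dx = (-8, 0, 8)[int(torch.randint(0, 3, (1,)))]
    feats = backbone(transform(x, rot_k, dy, dx))
    targets = (rot_k, (dy + 8) // 8, (dx + 8) // 8)  # class indices
    loss = 0.0
    for head, t in zip(heads, targets):
        labels = torch.full((x.size(0),), t, dtype=torch.long,
                            device=x.device)
        loss = loss + F.cross_entropy(head(feats), labels)
    return loss
\end{verbatim}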
Results are in \Cref{tab:cifar}. Notice that many self-supervised techniques perform better than methods specifically designed for one-class learning, and that our self-supervised technique outperforms the fully supervised method: a model trained with self-supervision can surpass a fully supervised model. Combining our self-supervised technique with supervision through Outlier Exposure nearly solves this CIFAR-10 task.
\subsection{ImageNet}
\textbf{Dataset.}\quad We now turn to a harder dataset to test self-supervised techniques. For this experiment, we select 30 classes from ImageNet \cite{imagenet}. These classes are `acorn', `airliner', `ambulance', `American alligator', `banjo', `barn', `bikini', `digital clock', `dragonfly', `dumbbell', `forklift', `goblet', `grand piano', `hotdog', `hourglass', `manhole cover', `mosque', `nail', `parking meter', `pillow', `revolver', `rotary dial telephone', `schooner', `snowmobile', `soccer ball', `stingray', `strawberry', `tank', `toaster', and `volcano'. These classes were selected so that there is no obvious overlap, unlike classes such as `bee' and `honeycomb.' There are 1,300 training images per class, and 100 test images per class. To create a test set with 100 images per class, we took ImageNet's 50 validation images per class and collected an additional 50 images for each class for an expanded test set.
\textbf{Method.}\quad Like before, we demonstrate that a self-supervised model can surpass a model that is fully supervised. The fully supervised model is trained with Outlier Exposure using ImageNet-22K outliers (with ImageNet-1K images removed).
The architectural backbone for these experiments is a ResNet-18. Images are resized such that the smallest side has 256 pixels, while the aspect ratio is maintained. Images are randomly cropped to the size $224\times224\times3$. Since these images are larger than CIFAR-10 images, new additions to the self-supervised method are possible. Consequently, we can teach the network to predict whether an image has been resized. In addition, since we would like the network to more easily learn shape and compare regions across the whole image, we discovered there is utility in self-attention \citep{Woo_2018_ECCV} for this task. Other architectural changes, such as using a Wide \emph{RevNet} instead of a Wide ResNet, can increase the AUROC from 65.3\% to 77.5\%. AUROCs are shown in \Cref{tab:imagenet}. Self-supervised methods outperform the fully supervised baseline by a large margin, yet there is still wide room for improvement on large-scale OOD detection.
\begin{table}[ht]
\begin{center}
\begin{tabular}{l|c}
\toprule
Method & AUROC \\ \midrule
Supervised (OE) & 56.1\\
\cdashline{1-2}
RotNet & 65.3\\
RotNet + Translation & 77.9 \\
RotNet + Self-Attention & 81.6\\
RotNet + Translation + Self-Attention & 84.8 \\
RotNet + Translation + Self-Attention + Resize & 85.7 \\
\bottomrule
\end{tabular}
\end{center}
\caption{AUROC values of supervised and self-supervised OOD detectors. AUROC values are an average of 30 AUROCs corresponding to the 30 different models trained on exactly one of the 30 classes. Each model's in-distribution examples are from one of 30 classes, and the test out-of-distribution samples are from the remaining 29 classes. The self-supervised methods greatly outperform the supervised method. All values are percentages.}
\vspace{-5pt}
\label{tab:imagenet}
\end{table}
\section{Conclusion}
In this paper, we applied self-supervised learning to improve the robustness and uncertainty of deep learning models beyond what was previously possible with purely supervised approaches. We found large improvements in robustness to adversarial examples, label corruption, and common input corruptions. For all types of robustness that we studied, we observed consistent gains by supplementing current supervised methods with an auxiliary rotation loss. We also found that self-supervised methods can drastically improve out-of-distribution detection on difficult, near-distribution anomalies, and that in CIFAR and ImageNet experiments, self-supervised methods outperform fully supervised methods. Self-supervision had the largest improvement over supervised techniques in our ImageNet experiments, where the larger input size meant that we were able to apply a more complex self-supervised objective. Our results suggest that future work in building more robust models and better data representations could benefit greatly from self-supervised approaches.
\newpage
\subsection{Acknowledgments}
This material is in part based upon work supported by the National Science Foundation under Grant
No. TWC-1409915, Berkeley Deep Drive, and DARPA D3M under Grant No. FA8750-17-2-0091.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National Science Foundation and DARPA.
\section{Introduction}
In standard cosmology, hot, thermal relics are identified with the three
light, active neutrino flavours of the Standard Model of elementary
particles. The masses of these three neutrino states have an impact
on the different cosmological observables; see Refs.~\cite{sergio,sergio2} for a
detailed description. Traditionally, the largest
effect caused by neutrino masses on the Cosmic Microwave Background
(CMB) anisotropies is via the \emph{early Integrated Sachs-Wolfe (ISW) effect}. Light active neutrino species may turn
non-relativistic close to the decoupling period, affecting
the gravitational potentials and leaving a signature which
turns out to be maximal around the first acoustic oscillation peak in
the photon temperature anisotropy spectrum.
More recently, the Planck
satellite CMB data~\cite{planck} has opened the window to tackle the neutrino mass via
gravitational lensing measurements: neutrino masses are expected to
leave an imprint on the lensing potential (due to the higher expansion rate) at
scales smaller than the horizon when neutrinos turn
non-relativistic~\cite{lensingnu}. However, the largest effect of neutrino masses
on the several cosmological observables comes from the suppression of galaxy
clustering at small scales. Neutrinos, being hot thermal relics,
possess large velocity dispersions. Consequently, the
non-relativistic neutrino overdensities will only cluster at wavelengths larger than their
free-streaming scale, reducing the growth of matter density
fluctuations at small scales, see e.g.\ Refs.~\cite{Reid:2009nq,Hamann:2010pw,dePutter:2012sh,Giusarma:2012ph,Zhao:2012xw,Hinshaw:2012fq,Hou:2012xq,
Sievers:2013wk,Archidiacono:2013lva,Giusarma:2013pmn,Archidiacono:2013fha,Riemer-Sorensen:2013jsa,Hu:2014qma}.
Non-degenerate neutrinos have different free-streaming scales and, in
principle, with perfect measurements of the matter power spectrum, the
individual values of the neutrino masses could be identified. In
practice, this is an extremely challenging task.
Cosmological measurements are, for practical purposes, only sensitive
to the total neutrino mass, i.e.\ to the sum of the three active neutrino masses.
CMB measurements from the Planck satellite, including the lensing likelihood and low-$\ell$ polarization measurements from WMAP
9-year data~\cite{Bennett:2012fp}
provide a limit on the sum of the three active neutrino masses of
$\sum m_\nu<1.11$~eV at $95\%$~CL. When a prior on the Hubble constant $H_0$ from
the Hubble Space Telescope~\cite{Riess:2011yx} is added in the
analysis, the constraint is strongly tightened to $\sum
m_\nu<0.21$~eV at $95\%$~CL, due to the strong degeneracy between $H_0$
and $\sum m_\nu$, see Ref.~\cite{Giusarma:2012ph}.
The addition of Baryon Acoustic Oscillation (BAO) measurements from the
Sloan Digital Sky Survey (SDSS)-II Data Release 7~\cite{dr71,dr72}, from
the WiggleZ survey~\cite{wigglez}, from the Baryon Acoustic Spectroscopic Survey
(BOSS)~\cite{Dawson:2012va}, one of the four surveys of SDSS-III~\cite{Eisenstein:2011sa} Data Release 9~\cite{anderson} and from 6dF~\cite{6df}
to Planck CMB measurements also significantly improves the
neutrino mass constraints, leading to $\sum m_\nu<0.26$~eV at
$95\%$~CL (see also the recent work of \cite{dePutter:2014hza}).
However, the former bounds are obtained assuming that neutrinos are
the only hot thermal relic component in the universe. The existence of
extra hot relic components, such as sterile neutrino species and/or thermal
axions, will change the cosmological neutrino mass constraints, see
Refs.~\cite{Hamann:2010bk,Giusarma:2011ex,Giusarma:2011zq,Hamann:2011ge,Giusarma:2012ph,Archidiacono:2013lva,Archidiacono:2013fha,Melchiorri:2007cd,Hannestad:2007dd,Hannestad:2008js,Hannestad:2010yi,Archidiacono:2013cha}.
Massless sterile neutrino-like particles arise naturally in the context of
models which contain a dark radiation sector that decouples from the
Standard Model. A canonical example are asymmetric dark matter models,
in which the extra radiation degrees of freedom are produced by the annihilations of the
thermal dark matter component~\cite{Blennow:2012de}, see also Refs.~\cite{Diamanti:2012tg,Franca:2013zxa} for
extended weakly-interacting massive particle models.
On the other hand, extra massive, light sterile neutrino species, whose existence is not forbidden by any
fundamental symmetry in nature, may help in resolving the so-called
neutrino oscillation anomalies~\cite{Abazajian:2012ys,Kopp:2013vaa}, see
also Refs.~\cite{Melchiorri:2008gq,Archidiacono:2012ri,Archidiacono:2013xxa,Mirizzi:2013kva,Valentino:2013wha}
for recent results on the preferred sterile neutrino masses and abundances considering both
cosmological and neutrino oscillation constraints. Another candidate
is the thermal axion~\cite{PecceiQuinn}, which constitutes the most elegant solution to the strong CP problem,
i.e.\ why CP is a respected symmetry of Quantum Chromodynamics (QCD)
despite the existence of a natural, four-dimensional, Lorentz- and
gauge-invariant operator which badly violates CP. Axions are the
pseudo-Nambu-Goldstone bosons associated to a new global $U(1)_{PQ}$
symmetry, which is spontaneously broken at an energy scale $f_a$.
The axion mass is inversely proportional to the axion coupling constant $f_{a}$
\begin{eqnarray}
m_a = \frac{f_\pi m_\pi}{ f_a } \frac{\sqrt{R}}{1 + R}=
0.6\ {\rm eV}\ \frac{10^7\, {\rm GeV}}{f_a}~,
\label{eq:massaxion}
\end{eqnarray}
where $R=0.553 \pm 0.043$ is the up-to-down quark mass
ratio and $f_\pi = 93$~MeV is the pion decay constant.
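As a quick numerical check of the mass-coupling relation above (our own back-of-the-envelope script, taking $m_\pi \simeq 135$~MeV as an assumed input):
\begin{verbatim}
# Axion mass in eV for a given Peccei-Quinn scale f_a.
f_pi = 93e6    # pion decay constant, eV (93 MeV)
m_pi = 135e6   # pion mass, eV (135 MeV, an assumed input)
R = 0.553      # up-to-down quark mass ratio

def axion_mass_eV(f_a_GeV):
    f_a = f_a_GeV * 1e9                      # GeV -> eV
    return (f_pi * m_pi / f_a) * R**0.5 / (1.0 + R)

print(axion_mass_eV(1e7))   # ~0.6 eV, as quoted in the text
\end{verbatim}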
Axions may be copiously produced in the early universe via thermal or
non-thermal processes, therefore providing, in the thermal case, a
possible hot relic candidate to be considered together with the
standard relic neutrino background.
Both extra sterile neutrino species and axions have an associated
free-streaming scale, reducing the growth of matter fluctuations at small scales.
Indeed, it has been noticed by several authors~\cite{Hamann:2013iba,Wyman:2013lza} that
the inclusion of Planck galaxy cluster number counts data~\cite{Ade:2013lmv} in the
cosmological data analyses favours a non-zero value for the sterile neutrino
mass: the free-streaming nature of sterile neutrinos
will reduce the matter power at small (i.e.\ cluster) scales but will leave unaffected
the scales probed by the CMB. A similar tendency for $\sum m_\nu>0$ appears, albeit to a smaller
extent~\cite{Hamann:2013iba}, when considering CFHTLens weak lensing constraints on
the clustering matter amplitude~\cite{Heymans:2013fya}.
Extra dark radiation or light species such as neutrinos and axions will also contribute to the effective
number of relativistic degrees of freedom $N_{\textrm{eff}}$, defined as
\begin{equation}
\rho_{rad} = \left[1 + \frac{7}{8} \left(\frac{4}{11}\right)^{4/3}N_{\textrm{eff}}\right]\rho_{\gamma} \, ,
\end{equation}
where $\rho_{\gamma}$ is the present energy density of the CMB.
The canonical value $N_{\textrm{eff}}=3.046$ corresponds to the three active
neutrino contribution. If there are extra light species at the Big Bang
Nucleosynthesis (BBN) epoch, the expansion rate of the universe will
be higher, leading to a higher freeze out temperature for the weak
interactions which translates into a higher primordial helium fraction. The
most recent measurements of deuterium~\cite{Cooke:2013cba} and helium~\cite{Izotov:2013waa} light
element abundances provide the constraint $N_{\textrm{eff}}=3.50\pm 0.20$~\cite{Cooke:2013cba}.
It is the aim of this paper to analyse the constraints on the three
active neutrino masses, extending the analyses to possible scenarios with
additional hot thermal relics, such as sterile neutrino species or axions,
using the cosmological data available at the beginning of 2014.
The data combination used here includes also the recent and most
precise distance BAO constraints to date from the BOSS Data Release 11
(DR11) results~\cite{Anderson:2013vga}, see also Refs.~\cite{Samushia:2013yga,Sanchez:2013tga,Chuang:2013wga}.
The structure of the paper is as follows. Section~\ref{sec:params}
describes the different cosmological scenarios with hot thermal relics
explored here and the data used in our numerical analyses. In
Sec.~\ref{sec:results} we present the current limits using the
available cosmological data in the three active neutrino massive
scenario, and in this same scheme but enlarging the hot relic
component firstly with thermal axions, secondly with additional
dark radiation (which could be represented, for instance, by massless
sterile neutrino
species) and finally, with massive sterile neutrino species. We draw our conclusions in Sec.~\ref{sec:concl}.
\section{Cosmological data analyses}
\label{sec:params}
The baseline scenario we analyse here is the light active massive
neutrino scheme with three degenerate massive neutrinos,
described by the parameters:
\begin{equation}
\label{parameter}
\{\omega_b,\omega_c, \Theta_s, \tau, n_s, \log[10^{10}A_{s}], \sum m_\nu\}~,
\end{equation}
$\omega_b\equiv\Omega_bh^{2}$ and $\omega_c\equiv\Omega_ch^{2}$
being the physical baryon and cold dark matter energy densities,
$\Theta_{s}$ the ratio between the sound horizon and the angular
diameter distance at decoupling, $\tau$ the reionization optical depth,
$n_s$ the scalar spectral index, $A_{s}$ the amplitude of the
primordial spectrum and $\sum m_\nu$ the sum of the masses of the
three active neutrinos in eV. We then consider simultaneously the
presence of two hot relics, both massive neutrinos and axions,
enlarging the former scenario with one thermal axion of mass $m_a$,
see Appendix~\ref{sec:appdn} for details concerning the calculation of
the axion energy density as a function of the cosmic time.
The other possibility is the existence of extra dark radiation
species, that we have firstly addressed by introducing a number of massless
sterile neutrino-like species, parameterized via $N_{\textrm{eff}}$ (together with the
baseline three massive neutrino total mass $\sum m_\nu$). The extra
additional sterile states, if massive, may help in resolving the so-called
neutrino oscillation anomalies. Consequently, we also constrain here
simultaneously the $N_{\textrm{eff}}$ massive sterile neutrino scenario and the
sum of the three active neutrino masses $\sum m_\nu$.
The effective number of massive sterile neutrino species is represented by
$\Delta N_{\textrm{eff}}=N_{\textrm{eff}}- 3.046$, and its mass is $m^\textrm{eff}_s$, which
is related to the physical sterile neutrino mass via the relation:
\begin{equation}
\label{eq:meffs}
m^\textrm{eff}_s= (T_s/T_\nu)^3m_s=(\Delta N_{\textrm{eff}})^{3/4} m_s~,
\end{equation}
where $T_s$ ($T_\nu$) is the current temperature of the sterile (active)
neutrino, and assuming that the sterile states are hot thermal relics
with a phase space distribution similar to the active neutrino one.
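For illustration, the conversion between the effective and the physical sterile neutrino mass implied by the relation above is trivial to evaluate (a sketch with arbitrary input values):
\begin{verbatim}
def m_sterile_physical(m_eff_eV, delta_neff):
    # m_s = m_eff / (Delta N_eff)^(3/4), valid for a thermally
    # distributed sterile state.
    return m_eff_eV / delta_neff**0.75

print(m_sterile_physical(0.5, 0.5))   # illustrative values, in eV
\end{verbatim}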
Table \ref{tab:priors} specifies the priors considered on the different cosmological
parameters. For our numerical analyses, we have used the Boltzmann CAMB
code~\cite{camb} and extracted cosmological parameters from current data
using a Monte Carlo Markov Chain (MCMC) analysis based on the publicly
available MCMC package \texttt{cosmomc}~\cite{Lewis:2002ah}.
In particular, we run chains using the Metropolis-Hastings (MH) algorithm to obtain posterior distributions
for the model parameters, given a certain dataset combination.
The only exception is for the measurements of the power spectrum amplitude (described
in the following section), that are included in our analysis by post-processing
the MH chains that were previously generated without accounting for these data.
The post-processing is done using the technique of importance sampling;
this technique is very reliable when the posterior distributions obtained
after including new data are centered on the same values as the old distributions,
and becomes on the contrary less reliable the more the new posteriors are
shifted with respect to the old ones. The reason for this is that in this case one
needs to sample from the low-probability tail of the old distribution, that is poorly
explored by the MH algorithm (unless the chains run for a very long time).
We stress this fact since, as we shall see in the following, the inclusion of the data on the power spectrum
amplitude shifts the posterior for some of the model parameters.
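Schematically, the importance-sampling step amounts to reweighting each MH sample by the likelihood of the new data; the following sketch is our own illustration, not the \texttt{cosmomc} implementation, for a chain stored as an array of parameter vectors.
\begin{verbatim}
import numpy as np

def importance_weights(chain, new_loglike):
    # One log-likelihood evaluation of the new data per sample.
    logw = np.array([new_loglike(p) for p in chain])
    logw -= logw.max()            # stabilize the exponentials
    w = np.exp(logw)
    return w / w.sum()

# Weighted posterior mean under the added data, for a chain of
# shape (n_samples, n_params):
# w = importance_weights(chain, new_loglike)
# post_mean = (w[:, None] * chain).sum(axis=0)
\end{verbatim}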
All the cases under consideration (additional massless species, massive sterile neutrinos, and axions)
can be studied with no or minimal modifications to the CAMB code. In particular,
the massive sterile and axion cases can be reproduced in the Boltzmann code by means of a
suitable reparameterization and by treating, code-wise, the
additional species as massive neutrinos. This relies on the fact
that, for an equilibrium distribution function, the evolution equations only depend on the mass over temperature
ratio $m_i/T_i$ and on the total density $\Omega_i$ ($i=\mathrm{a},\,\mathrm{s}$).
The equivalence is perfect for thermal sterile neutrinos, because they have a Fermi-Dirac distribution
function like ordinary neutrinos; instead, this is not the case for thermal axions since they
are described by a Bose-Einstein distribution function. We take into
account here the bosonic nature of axions at the background
level, but not in the perturbation equations. However, we argue that the error
incurred by keeping the Fermi-Dirac distribution function in the perturbation equations
for axions is negligible given the uncertainties on the model parameters.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c}
\hline\hline
Parameter & Prior\\
\hline
$\Omega_{b}h^2$ & $0.005 \to 0.1$\\
$\Omega_{c}h^2$ & $0.01 \to 0.99$\\
$\Theta_s$ & $0.5 \to 10$\\
$\tau$ & $0.01 \to 0.8$\\
$n_{s}$ & $0.9 \to 1.1$\\
$\ln{(10^{10} A_{s})}$ & $2.7 \to 4$\\
$\sum m_\nu$ [eV] & $0.06 \to 3$\\
$m_a$ [eV] & $0.1 \to 3$\\
$N_{\textrm{eff}}$ & $0 (3.046) \to 10$\\
$m^\textrm{eff}_s$ [eV] & $0 \to 3$\\
\hline\hline
\end{tabular}
\caption{Uniform priors for the cosmological parameters considered
here. In the case of the extra relativistic degrees of freedom $N_{\textrm{eff}}$, the numbers refer
to the massless (massive) case.}
\label{tab:priors}
\end{center}
\end{table}
\subsection{Cosmological data}
\label{sec:data}
\subsubsection{CMB data }
We consider the data on CMB temperature anisotropies measured
by the Planck satellite (including information on the lensing potential)
\cite{Ade:2013ktc,Planck:2013kta,Ade:2013tyw}
combined with 9-year polarization data from WMAP \cite{Bennett:2012fp}
and with additional temperature data from high-resolution CMB experiments,
namely the Atacama Cosmology Telescope (ACT) \cite{Das:2013zf} and
the South Pole Telescope (SPT) \cite{Reichardt:2011yv}.
The likelihood functions associated to these datasets are estimated
and combined using the likelihood code
distributed by the Planck collaboration, described in Refs. \cite{Planck:2013kta}
and \cite{Ade:2013tyw}, and publicly
available at Planck Legacy Archive\footnote{\url{http://pla.esac.esa.int/pla/aio/planckProducts.html}}.
The Planck TT likelihood is constructed following a hybrid approach:
the high-$\ell$ ($\ell \ge 50$) part is based on
a pseudo-$C_\ell$ technique and uses power spectra estimated from the detectors
of the 100, 143 and 217 GHz frequency channels, while the
low-$\ell$ ($\ell\le 49$) part uses a Gibbs sampling-based approach
and combines data from all frequencies from 30 to 353 GHz.
We use Planck TT data up to a maximum multipole number of $\ell_{\rm max}=2500$.
These are supplemented by the low-$\ell$ WMAP 9-year polarization
likelihood, that includes multipoles up to $\ell=23$ \cite{Bennett:2012fp}.
For what concerns the small-scale observations, we follow the approach
of the Planck collaboration, as implemented in
their likelihood code, and include the ACT spectra presented in Ref. \cite{Das:2013zf}
and the SPT spectra presented in Ref. \cite{Reichardt:2011yv}.
In particular, the likelihood uses the ACT $148\times148$ spectra in the range
$1000<\ell<9440$, the ACT $148\times218$ and $218\times218$
spectra in the range $1500<\ell<9440$,
and the SPT 95, 150 and 220 GHz spectra in the range $2000<\ell<10000$,
as described in Sec. 4.1 of Ref. \cite{planck}. The primary purpose of
considering these subsets of the ACT and SPT data is to improve
the constraints on the unresolved foregrounds. Finally, we use the information
on the gravitational lensing power spectrum estimated from the trispectrum
of the Planck maps, as implemented in the Planck lensing likelihood described
in Ref. \cite{Ade:2013tyw}.
We shall refer to the combination of all the above-mentioned data as the \emph{CMB} data set.
In our analysis of the CMB dataset, we compute the helium abundance
following the BBN theoretical prediction, in which the helium mass
fraction is a function of $\Omega_b h^2$
and $N_{\textrm{eff}}$ (see the BBN section below) and fix the lensing spectrum
normalization to $A_L=1$.
We marginalize over all foregrounds
parameters as described in \cite{planck}.
\subsubsection{Large scale structure data}
We consider here several large scale structure data sets in different
forms. First of all, we include all the available galaxy survey
measurements in the form of Baryon Acoustic Oscillation (BAO) data.
As a novelty, we add to the existing BAO data sets (SDSS Data
Release 7~\cite{dr71,dr72}, WiggleZ survey~\cite{wigglez},
6dF~\cite{6df}) the most recent and most accurate BAO
measurements to date, arising from the BOSS Data Release 11 (DR11)
results~\cite{Anderson:2013vga}. Using a sample of approximately one
million galaxies and covering $8500$ square degrees, the DR11 results
provide the constraints on the spherically averaged distance $D_V /r_d$~\footnote{The value of the sound horizon $r_d$ used for these values
is obtained using the Eisenstein \& Hu fitting formula~\cite{Eisenstein:1997ik}.}
to be $13.42\pm 0.13$ and $8.25\pm 0.16$ at redshifts
$z=0.57$ and $z=0.32$, respectively. We present results separately for
DR11 BAO measurements, as well as the combination of the former
results with other previous BAO measurements, referring to them as
\emph{DR11} and \emph{BAO}, respectively.
We also exploit here the WiggleZ survey large scale structure
measurements in their full matter power spectrum form~\cite{Parkinson:2012vd}, in order to
quantify the benefits of using \emph{shape} measurements of the matter power
spectrum versus \emph{geometrical} BAO information in extended
cosmological scenarios with additional degeneracies among the
different parameters, see the earlier work of Refs.~\cite{Hamann:2010pw,Giusarma:2012ph}
where similar comparisons were performed. This data set is referred to as
\textit{WZ}, and whenever it is included, the BAO measurement from the
WiggleZ survey is not considered in the BAO data set.
\subsubsection{Supernova luminosity distance and Hubble constant measurements}
Supernova luminosity distance measurements from the first three years
of the Supernova Legacy Survey~\cite{snls} are included in the hot
thermal dark matter relic bounds presented here, referring to these
data as \emph{SNLS}.
Our cosmological data analyses will also address the effect of a Gaussian prior on the Hubble constant
$H_0=73.8\pm2.4$ km/s/Mpc, in accordance with the measurements from the
Hubble Space Telescope~\cite{Riess:2011yx}. We refer to this prior as
\emph{HST}.
\subsubsection{Additional data sets: $\sigma_8$ measurements \label{sec:sigma8}}
Measurements of the galaxy shear power spectra by tomographic weak lensing surveys provide a powerful tool to set constraints on the mass distribution in the universe. The amplitude and the shape of the weak lensing signal are sensitive to the normalization of the power spectrum, the so-called $\sigma_8$ parameter
(which is the standard deviation of the matter density perturbations in a sphere of radius $8\,h^{-1}$Mpc), as well as to the overall matter energy density of the universe, $\Omega_m$. Using six tomographic redshift bins spanning from $z=0.28$ to $z=1.12$, the CFHTLens survey finds $\sigma_8 (\Omega_m/0.27)^{0.46}=0.774^{+0.032}_{-0.041}$~\cite{Heymans:2013fya}. We shall use this constraint in our analyses, applying it to our Monte Carlo Markov chains.
A strong and independent measurement of the amplitude of the power spectrum arises from the abundance of clusters as a function of redshift, the cluster redshift distribution being a powerful probe of both $\Omega_m$ and $\sigma_8$. The Planck Sunyaev-Zeldovich (SZ) selected clusters catalog, which consists of 189 galaxy clusters with measured redshifts, is the largest SZ cluster sample to date and has provided the constraint $\sigma_8 (\Omega_m/0.27)^{0.3}=0.782\pm 0.010$~\cite{Ade:2013lmv} via the cluster mass function. We will include this constraint as well in our Monte Carlo Markov chain analyses.
These measurements are included in our analysis by post-processing the
chains that were previously generated without accounting for these data.
\subsubsection{Big Bang Nucleosynthesis}
The light element abundances are also sensitive to several cosmological parameters. The primordial abundance
of deuterium is usually considered as an invaluable \emph{baryometer},
since the higher the baryon abundance $\Omega_b h^2$, the less
deuterium survives. On the other hand, while the mass fraction of
helium-4 $^4He$ ($Y_p$) is rather insensitive to $\Omega_b h^2$, it is
directly related to the expansion rate at the BBN period, which
strongly depends on the effective number of relativistic degrees
of freedom $N_{\textrm{eff}}$. As previously stated, if there are extra light
species at the BBN epoch, the expansion rate of the universe will be
higher, leading to a higher freeze out temperature for the weak
interactions which translates into a higher primordial helium fraction
$Y_p$. Here we exploit the primordial deuterium values from
Ref.~\cite{fabio} $(D/H)_p = (2.87 \pm 0.22) \times
10^{-5}$ as well as the most recent deuterium measurements $(D/H)_p = (2.53 \pm 0.04) \times
10^{-5}$~\cite{Cooke:2013cba}, to compare the cosmological constraints
obtained with these two different primordial deuterium estimates,
including also the measurements of the helium mass fraction $Y_p= 0.254 \pm
0.003$~\cite{Izotov:2013waa}. We shall use the former constraints in
the scenarios in which extra relativistic degrees of freedom are expected to be present at the BBN period.
Notice that Planck CMB data is also sensitive to the value of $Y_p$ via measurements of the CMB damping
tail (high multipole region), and therefore we use the BBN consistency
option of the MCMC software exploited here,
\texttt{cosmomc}~\cite{Lewis:2002ah}, assuming therefore that the value of the
extra relativistic degrees of freedom remains unchanged between the
BBN and the CMB epochs. Then, given a cosmological
model, the theoretical primordial abundance of helium, which is a function of $\Omega_b h^2$
and $N_{\textrm{eff}}$~\footnote{See for instance the fitting functions provided in
Ref.~\cite{fabio}, extracted from the numerical results of the
PArthENopE BBN code~\cite{parthenope}.} is computed, using
AlterBBN~\cite{alterbbn}, a numerical code devoted to calculating the BBN
abundances within non-standard cosmologies. We perform a
similar calculation for the deuterium primordial abundance, and then
fit the theoretical expectations for the deuterium and helium
primordial abundances (previously computed for the CMB data analyses in the latter case) to the measurements quoted above, adding the
resulting likelihood in our MCMC analyses by means of a post-processing of our chains.
\subsubsection{Consistency of datasets}
We derive our constraints on model parameters using different combinations of
the datasets described in the previous sections. However, in a few cases there are tensions
between datasets, that we describe in the following. We also briefly assess, at least
qualitatively, the effect on parameters of adding these data.
We use the Planck lensing likelihood in all our analyses. The lensing likelihood is based
on the information encoded in the 4-point correlation function (i.e., the trispectrum) of CMB temperature
anisotropies. On the other hand, lensing also directly affects the CMB power spectrum.
As explained in Sec. 5.1 of Ref. \cite{planck}, there is a slight tension between the lensing amplitudes
that are inferred from the trispectrum and from the power spectrum. In particular, while the
former is consistent with the value expected in $\Lambda$CDM, the temperature
power spectrum shows a mild preference for a higher lensing power. Since the effect of
increasing the neutrino mass is similar to that of a smaller lensing amplitude (as
both result in a suppression of power at small scales), including the lensing likelihood
tends to shift the value of the total neutrino mass to larger values \cite{planck}.
Instead, the inclusion of the lensing likelihood does not change significantly the constraints
on the effective number of relativistic degrees of freedom, at least for the $\Lambda$CDM
model.
Another piece of information that is in tension with the corresponding Planck estimate
is the value of the Hubble constant inferred from astrophysical measurements,
as discussed in Sec. 5.3 of Ref. \cite{planck}.
This includes the HST value used in our analysis, $H_0=73.8\pm2.4$ km/s/Mpc, that is discrepant
with the Planck $\Lambda$CDM estimate $H_0=67.3\pm1.2$ km/s/Mpc at more than 2$\sigma$,
although it should always be remembered that CMB estimates are highly model dependent.
The reasons for this discrepancy are, to date, not yet well understood and are
a matter of intense debate in the community.
It is however possible that this tension is relieved in some extensions of the
standard $\Lambda$CDM model. For this reason, we have decided to consider
the HST data in some of our enlarged datasets.
Finally, we use the $\sigma_8$ measurements from the CFHTLens survey
and from the Planck SZ cluster counts, as reported in Sec. \ref{sec:sigma8}.
These values are however both discrepant with the value estimated from Planck CMB at
the 2$\sigma$ level (see discussion in Sec. 5.5 of Ref. \cite{planck}).
This tension has not yet been explained either, but it could be related to the
difficulties in adequately modelling selection biases and calibrating cluster masses.
As in the case of the Hubble constant, however, there is the possibility that the discrepancy
is alleviated in some extended cosmological models (like for example those that include
the neutrino mass as a free parameter). Following the same rationale as for the inclusion of the HST data,
we have derived constraints from enlarged datasets that include the $\sigma_8$ measurements.
These should, however, be regarded as rather non-conservative.
\begin{table*}
\begin{center}\footnotesize
\begin{tabular}{lcccccccc}
\hline \hline
& CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+ DR11 & CMB+DR11 & CMB+DR11 & CMB+ DR11 \\
& & +HST & +WZ & +WZ+HST & +WZ+BAO+HST & +BAO& +BAO+HST &+BAO+SNLS\\
\hline
\hspace{1mm}\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.25$ & $<0.22$ & $<0.25$ & $<0.23$ & $<0.24$ & $<0.26$ & $<0.22$ & $<0.23$\\
\hspace{1mm}\\
\hline\\
SZ Clusters \&& & & & & & & & \\
CFHTLens & & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV]& $0.30_{-0.14}^{+0.12}$ & $0.25_{-0.13}^{+0.12}$ & $0.27_{-0.13}^{+0.14}$ & $0.25_{-0.11}^{+0.10}$ & $0.26_{-0.13}^{+0.18}$ & $0.29_{-0.12}^{+0.13}$ & $0.24_{-0.12}^{+0.10}$ & $0.27_{-0.13}^{+0.12}$\\
\hspace{1mm}\\
\hline\\
SZ Clusters & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $0.30_{-0.14}^{+0.12}$ & $0.25_{-0.13}^{+0.13}$ & $0.27_{-0.13}^{+0.12}$ & $0.24_{-0.10}^{+0.10}$ & $0.25_{-0.13}^{+0.17}$\ &$0.29_{-0.12}^{+0.13}$ & $0.23_{-0.12}^{+0.10}$ & $0.27_{-0.13}^{+0.11}$\\
\hspace{1mm}\\
\hline\\
CFHTLens& & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.33$ & $<0.28$ & $<0.30$ & $<0.27$ & $<0.28$ & $<0.33$ & $<0.27$ & $<0.30$ \\
\hspace{1mm}\\
\hline
\hline
\end{tabular}
\caption{$95\%$~CL constraints on the sum of the neutrino masses, ${{\Sigma}m_{\nu}}$, from
the different combinations of data sets explored here.}
\label{tab:mnu}
\end{center}
\end{table*}
\section{Results}
\label{sec:results}
\subsection{Massive neutrinos}
\label{sec:st}
We present here the results on our baseline scenario with three active
neutrino degenerate species. Table~\ref{tab:mnu} depicts the $95\%$~CL
constraints on the sum of the three active neutrino masses $\sum m_\nu$.
Notice that, without the inclusion of the constraints on $\sigma_8$ and $\Omega_m$, the upper limits
on the neutrino mass are mostly driven by the new BOSS DR11 BAO measurements, with the tightest limit,
$\sum m_\nu < 0.22$~eV at $95\%$~CL, arising from the combination of CMB data, BAO and HST measurements of the Hubble constant.
However, since there exists a well known discrepancy on the measured
value of $H_0$ from the Planck and the HST experiments~\cite{planck}, we have also considered the combination of
CMB and BAO data with SNLS Supernovae Ia luminosity distance measurements. Such a combination provides an upper $95\%$~CL limit of
$\sum m_\nu < 0.23$~eV, in perfect agreement with the findings of the
recent BOSS results~\cite{Sanchez:2013tga} using the full shape of the
clustering correlation function.
The addition of the constraints on $\sigma_8$ and $\Omega_m$ from the
CFHTLens survey displaces the bounds on the neutrino mass to higher
values, the reason for that being the lower $\sigma_8$ preferred by
CFHTLens weak lensing measurements. Due to the poor constraining power of
the weak lensing data, the neutrino mass bounds are not significantly
altered. On the other hand, when adding the constraint on $\sigma_8$ and $\Omega_m$
from the Planck-SZ cluster catalog on galaxy number counts, a non-zero
value for the sum of the three active neutrino masses of $\sim 0.3$~eV
is favoured at $4\sigma$. In particular, the combination of CMB
data with BAO measurements from BOSS DR11, WiggleZ power spectrum
(full shape) data and a prior on $H_0$ from HST after considering the
inclusion of Planck SZ clusters information leads to the value $\sum
m_\nu =0.24_{-0.10}^{+0.10}$~eV at $95\%$~CL. The combination of weak
lensing data and galaxy number counts data is mostly driven by the
latter and therefore the constraints do not change significantly with
respect to the case in which the analyses are performed with galaxy
cluster counts information only. A similar effect, although in a slightly different scenario and
different data sets, was found by Refs.~\cite{Hamann:2013iba,Wyman:2013lza}.
Figure~\ref{fig:mnu} illustrates our findings for three possible data
combinations: CMB data, combined with BOSS DR11 BAO measurements,
additional BAO measurements and a prior on the Hubble constant from
HST (depicted by the blue contours); and the same data combination but
considering also the $\sigma_8-\Omega_m$ weak lensing
(galaxy number counts) constraint (depicted by the red (green)
contours). The left panel
depicts the very well known degeneracy in the ($\sum
m_\nu$ (eV), $H_0$) plane, showing the $68\%$ and $95\%$~CL allowed
contours by the different data sets specified above. Considering CMB
data only, a higher value of $\sum m_\nu$ can be compensated by a
decrease in the Hubble constant $H_0$, since the shift induced in the
distance to the last scattering surface caused by a larger $\sum
m_\nu$ can be compensated by a lower $H_0$. Notice that when Planck SZ cluster information on the $\sigma_8-\Omega_m$
relationship is added, the allowed neutrino mass regions are displaced and
a non-zero value for the sum of the three active neutrino masses is
favoured at $\sim 4\sigma$. The right panel of Fig.~\ref{fig:mnu} shows the $68\%$ and $95\%$~CL allowed
regions in the ($\sum m_\nu$ (eV), $\sigma_8$) plane. The allowed contours of
both $\sigma_8$ and $\sum m_\nu$ are considerably displaced after
considering Planck clusters data. The power spectrum normalization
$\sigma_8$ takes smaller values when neutrinos are massive (due to
their free-streaming nature), and it is precisely these smaller
values of $\sigma_8$ that are preferred by galaxy cluster number counts.
\begin{figure*}
\begin{tabular}{c c}
\includegraphics[width=8cm]{mnu_h0_bao2.pdf}&\includegraphics[width=8.4cm]{mnu_sigma8_bao_2.pdf}\\
\end{tabular}
\caption{Left panel: the blue contours show the $68\%$ and $95\%$~CL allowed
regions from the combination of CMB data, BOSS DR11 BAO measurements,
additional BAO measurements and a prior on the Hubble constant from
HST in the ($\sum m_\nu$ (eV), $H_0$) plane. The red (green) contours depict the results when the $\sigma_8-\Omega_m$ weak lensing
(galaxy number counts) constraint is added in the analysis.
Right panel: as in the left panel but in the ($\sum m_\nu$ (eV), $\sigma_8$) plane.}
\label{fig:mnu}
\end{figure*}
\subsection{Massive neutrinos and thermal axions}
\label{sec:st3}
\begin{table*}
\begin{center}\footnotesize
\begin{tabular}{lcccccccc}
\hline \hline
& CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 \\
& & +HST & +WZ & +WZ+HST & +WZ+BAO+HST & +BAO& +BAO+HST &+BAO+SNLS\\
\hline
\hspace{1mm}\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.24$ & $<0.21$ & $<0.24$ & $<0.22$ & $<0.21$ & $<0.23$ & $<0.20$ & $<0.22$\\
\hspace{1mm}\\
$m_a$ [eV] & $<0.79$ & $<0.77$ & $<0.65$ & $<0.62$ & $<0.59$ & $<0.74$ & $<0.75$ & $<0.76$\\
\hspace{1mm}\\
\hline\\
SZ Clusters \&& & & & & & & & \\
CFHTLens & & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.36$ & $<0.27$ & $0.21_{-0.13}^{+0.13}$ & $<0.32$ & $<0.30$ & $<0.31$ & $<0.28$ & $<0.31$\\
\hspace{1mm}\\
$m_a$ [eV] & $<1.08$ & $<1.09$ & $<0.88$ & $<0.81$& $<0.77$& $<1.12$ & $0.63_{-0.49}^{+0.47}$& $0.58_{-0.48}^{+0.50}$\\
\hspace{1mm}\\
\hline\\
SZ Clusters & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.36$ & $<0.27$ & $0.20_{-0.14}^{+0.13}$ & $<0.32$ & $<0.30$ & $<0.31$ & $<0.27$ & $<0.31$\\
\hspace{1mm}\\
$m_a$ [eV] & $<1.07$ & $<1.07$ & $<0.87$& $<0.81$ & $<0.77$ & $<1.10$ & $0.62_{-0.48}^{+0.46}$ & $0.57_{-0.47}^{+0.50}$\\
\hspace{1mm}\\
\hline\\
CFHTLens & & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.29$ & $<0.24$ & $<0.28$ & $<0.25$ & $<0.25$ & $<0.27$ & $<0.24$ & $<0.26$ \\
\hspace{1mm}\\
$m_a$ [eV] & $<0.94$ & $<0.95$ & $<0.74$ & $<0.68$ & $<0.67$ & $<0.96$ & $<0.94$ & $<0.98$ \\
\hspace{1mm}\\
\hline\\
BBN & & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] $(D/H)_p$\cite{Cooke:2013cba}& $<0.27$ & $<0.24$ & $<0.26$ & $<0.27$ & $<0.25$ & $<0.27$ & $<0.23$ & $<0.24$\\
\hspace{1mm}\\
${{\Sigma}m_{\nu}}$ [eV] $(D/H)_p$\cite{fabio} & $<0.24$ & $<0.20$ & $<0.23$ & $<0.21$ & $<0.21$ & $<0.22$ & $<0.20$ & $<0.22$\\
\hspace{1mm}\\
$m_a$ [eV] $(D/H)_p$\cite{Cooke:2013cba}& $<1.15$ & $<0.76$ & $<0.60$ & $<0.59$ & $<0.57$ & $<0.79$ & $<0.77$ & $<1.38$ \\
\hspace{1mm}\\
$m_a$ [eV] $(D/H)_p$\cite{fabio}& $<0.82$ & $<0.80$ & $<0.67$ & $<0.64$ & $<0.61$ & $<0.77$ & $<0.78$ & $<0.79$\\
\hspace{1mm}\\
\hline
\hline
\end{tabular}
\caption{$95\%$~CL constraints on the sum of the neutrino masses,
${{\Sigma}m_{\nu}}$, and on the axion mass, $m_a$, both in eV, from
the different combinations of data sets explored here. When
BBN bounds are included, the first (second) row refers to the constraints
obtained combining the primordial deuterium values from
Ref.~\cite{Cooke:2013cba} (\cite{fabio}) $(D/H)_p = (2.53 \pm 0.04) \times
10^{-5}$ ($(D/H)_p = (2.87 \pm 0.22) \times
10^{-5}$) with measurements of the helium mass fraction $Y_p= 0.254 \pm
0.003$ from Ref.~\cite{Izotov:2013waa}.}
\label{tab:ma}
\end{center}
\end{table*}
In this section we present the constraints on a scenario including
both massive neutrinos and a thermal axion.
Table~\ref{tab:ma} presents the constraints on the sum of the three
active neutrino masses and on the axion mass (both in eV) for the
different cosmological data combinations considered here. Notice that
BBN bounds are also quoted here since a thermal axion will also
contribute to the extra radiation component at the BBN period, by an
amount given by:
\begin{equation}
\Delta N_{\textrm{eff}} =\frac{ 4}{7}\left(\frac{3}{2}\frac{n_a}{n_\nu}\right)^{4/3}~,
\end{equation}
where $n_a$ is the current axion number density and $n_\nu=112$~cm$^{-3}$
is the current number density of each active neutrino plus
antineutrino flavour. We have applied the BBN consistency
relation in our MCMC analyses of Planck data, to compute the Helium
mass fraction as a function of $\Delta N_{\textrm{eff}}$. Nevertheless, the bounds on neutrino and
axion masses are not significantly affected if the helium mass
fraction is kept fixed for CMB purposes. Notice that, before applying the
Planck SZ cluster or CFHTLens constraints on the $\sigma_8-\Omega_m$
relationship, the most stringent $95\%$~CL bounds, without including BBN
bounds, are $\sum m_\nu <0.21$~eV and $m_a<0.59$~eV, considering CMB, BOSS BAO DR11, additional BAO
measurements, WiggleZ power spectrum (full shape) information and the
$H_0$ HST prior. These bounds are in perfect agreement with the
findings of Ref.~\cite{Archidiacono:2013cha}, albeit they are slightly
tighter, mostly due to the more accurate new BOSS BAO measurements.
After considering BBN bounds with deuterium estimates from Ref.~\cite{Cooke:2013cba} (Ref.~\cite{fabio}) and helium constraints from Ref.~\cite{Izotov:2013waa}, which constrain the contribution of the thermal axion to
the relativistic degrees of freedom at the BBN epoch, the $95\%$~CL bounds quoted above translate into $\sum m_\nu <0.25$~eV
and $m_a<0.57$~eV ($\sum m_\nu <0.21$~eV and $m_a<0.61$~eV).
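To get a feeling for the size of the axion contribution to the radiation content (a rough illustration only, assuming decoupling well above the QCD phase transition so that $g_{\star S}(T_D)\simeq 106.75$ and, from the Appendix, $n_a\simeq 7.5$~cm$^{-3}$):
\[
\Delta N_{\textrm{eff}}=\frac{4}{7}\left(\frac{3}{2}\cdot\frac{7.5}{112}\right)^{4/3}\simeq 0.027~.
\]
Larger axion masses correspond to smaller $f_a$, i.e. stronger couplings and later decoupling, hence a smaller $g_{\star S}(T_D)$, a larger $n_a$ and a larger $\Delta N_{\textrm{eff}}$.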
The addition of weak lensing constraints on the $\sigma_8-\Omega_m$ relationship from the CFHTLens
experiment makes the neutrino and axion mass bounds weaker, due to the lower $\sigma_8$ preferred by the
former data set, which favours higher values for the thermal relic
masses. If further information on the $\sigma_8-\Omega_m$ relationship from
the Planck SZ cluster number counts is considered in the MCMC
analyses, there exists evidence for a neutrino mass of $\sim 0.2$~eV
at the $\sim 3\sigma$ level exclusively for the case in which CMB data
is combined with BOSS BAO DR11 measurements and full-shape power spectrum information from the
WiggleZ galaxy survey. There also exists mild evidence ($\sim
2 \sigma$) for an axion mass of $0.6$~eV in two isolated
cases, in which either the HST $H_0$ prior or SNIa luminosity distance measurements are considered in combination
with all the BAO measurements exploited here. However, there is no
simultaneous evidence for both neutrino and axion masses.
Figure~\ref{fig:ma}, left panel, depicts the $68\%$ and $95\%$~CL allowed
regions arising from the combination of CMB data, BOSS DR11 BAO measurements,
additional BAO measurements and a prior on the Hubble constant from
HST in the ($\sum m_\nu$ (eV), $m_a$(eV)) plane. Once the Planck SZ
cluster number counts information on the $\sigma_8-\Omega_m$ relationship is
added, a non-zero value of the axion mass is favoured by the data at the
$\sim 2.2\sigma$ level. The right panel of Fig.~\ref{fig:ma} shows the
$68\%$ and $95\%$~CL contours in the ($\sum m_\nu$ (eV), $m_a$(eV))
plane resulting from the analysis of CMB data, BOSS DR11 BAO measurements,
additional BAO measurements (except for the WiggleZ galaxy survey
information, which is removed and instead considered in its full-shape form)
and the HST $H_0$ prior. Notice that neither evidence for non-zero neutrino
masses nor for a non-zero axion mass appears in this case.
\begin{figure*}
\begin{tabular}{c c}
\includegraphics[width=8cm]{axi_mnu_ma_bao_bbnt4.pdf}&\includegraphics[width=8.cm]{axi_mnu_ma_wz_bbnt4.pdf}\\
\end{tabular}
\caption{Left panel: the blue contours show the $68\%$ and $95\%$~CL allowed
regions from the combination of CMB data, BOSS DR11 BAO measurements,
additional BAO measurements and a prior on the Hubble constant from
HST in the ($\sum
m_\nu$ (eV), $m_a$ (eV)) plane. The red (green) contours depict the results when the $\sigma_8-\Omega_m$ weak lensing
(galaxy number counts) constraint is added in the analysis. Right panel: as in the left panel but replacing the WiggleZ BAO geometrical
information by the WiggleZ full-shape matter power spectrum
measurements.}
\label{fig:ma}
\end{figure*}
\subsection{Massive neutrinos and extra dark radiation species}
\label{sec:st2}
\begin{table*}
\begin{center}\footnotesize
\begin{tabular}{lcccccccc}
\hline \hline
& CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 \\
& & +HST & +WZ & +WZ+HST & +WZ+BAO+HST & +BAO& +BAO+HST &+BAO+SNLS\\
\hline
\hspace{1mm}\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.31$ & $<0.31$ & $<0.32$ & $<0.34$ & $<0.34$ & $<0.31$ & $<0.31$ & $<0.29$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ & $3.45_{-0.54}^{+0.59}$ & $3.66_{-0.49}^{+0.52}$ & $3.32_{ -0.62}^{+0.55 }$ & $3.57_{-0.48}^{+0.50}$ & $3.56_{-0.49}^{+0.45}$ &$3.43_{-0.59}^{+0.58}$ & $3.66_{-0.47}^{+0.48}$ & $3.48_{-0.56}^{+0.58}$ \\
\hspace{1mm}\\
\hline\\
SZ Clusters \&& & & & & & & & \\
CFHTLensing & & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $0.37_{-0.18}^{+0.24}$ & $0.37_{-0.20}^{+0.20}$ & $0.32_{ -0.19}^{+0.19 }$ & $0.35_{-0.17}^{+0.16}$ & $0.37_{-0.17}^{+0.26}$ &$0.32_{-0.21}^{+0.18}$ & $0.37_{-0.20}^{+0.18}$ & $0.32_{-0.17}^{+0.15}$ \\
\hspace{1mm}\\
$N_{\textrm{eff}}$& $3.32_{-0.55}^{+0.53}$ & $3.54_{-0.54}^{+0.48}$ & $ 3.24_{ -0.70}^{+0.58 }$ & $3.56_{-0.59}^{+0.59}$ & $3.56_{-0.60}^{+1.09}$ & $3.17_{-0.59}^{+0.64}$ & $3.54_{-0.62}^{+0.60}$ & $3.25_{-0.43}^{+0.47}$ \\
\hspace{1mm}\\
\hline\\
SZ Clusters & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV]& $0.37_{-0.19}^{+0.24}$ & $0.36_{-0.18}^{+0.18}$ & $0.32_{-0.19}^{+0.19}$ & $0.35_{-0.16}^{+0.17}$ & $0.36_{-0.18}^{+0.26}$&$0.32_{-0.20}^{+0.18}$ & $0.37_{-0.21}^{+0.18}$ & $0.32_{-0.16}^{+0.15}$ \\
\hspace{1mm}\\
$N_{\textrm{eff}}$& $3.33_{-0.53}^{+0.55}$ & $3.55_{-0.58}^{+0.51}$ & $3.25_{-0.68}^{+0.57}$ & $3.56_{-0.58}^{+0.59}$ & $3.55_{-0.59}^{+0.65}$ & $3.18_{-0.59}^{+0.63}$ & $3.54_{-0.59}^{+0.62}$ & $3.25_{-0.44}^{+0.49}$ \\
\hspace{1mm}\\
\hline\\
CFHTLens & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV]& $<0.41$ & $<0.44$ & $<0.39$ & $<0.41$ & $<0.42$ & $<0.40$ & $<0.43$ & $<0.39$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$& $3.39_{-0.55}^{+0.57}$ & $3.59_{-0.54}^{+0.52}$ & $3.28_{ -0.63}^{+0.58 }$ & $3.55_{-0.47}^{+0.53}$ & $3.54_{-0.47}^{+0.52}$ & $3.33_{-0.61}^{+0.61}$ & $3.58_{-0.50}^{+0.50}$ & $3.37_{-0.55}^{+0.58}$ \\
\hspace{1mm}\\
\hline\\
BBN & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] $(D/H)_p$\cite{Cooke:2013cba}& $<0.27$ & $<0.29$ & $<0.29$ & $<0.24$ & $<0.25$ & $<0.28$ & $<0.32$ & $<0.32$\\
\hspace{1mm}\\
${{\Sigma}m_{\nu}}$ [eV] $(D/H)_p$\cite{fabio} & $<0.30$ & $<0.28$ & $<0.32$ & $<0.31$ & $<0.32$ & $<0.31$ & $<0.28$ & $<0.28$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ $(D/H)_p$\cite{Cooke:2013cba}& $3.17_{-0.27}^{+0.26}$ & $3.24_{-0.25}^{+0.26}$ & $3.13_{-0.29}^{+0.30}$ & $3.25_{-0.24}^{+0.25}$ & $3.22_{-0.25}^{+0.27}$ & $3.11_{-0.31}^{+0.31}$ & $3.23_{-0.26}^{+0.27}$ & $3.18_{-0.31}^{+0.29}$ \\
\hspace{1mm}\\
$N_{\textrm{eff}}$ $(D/H)_p$\cite{fabio}& $3.47_{-0.34}^{+0.35}$ & $3.56_{-0.33}^{+0.34}$ & $3.52_{-0.31}^{+0.33}$ & $3.52_{-0.26}^{+0.27}$ & $3.52_{-0.32}^{+0.33}$ & $3.48_{-0.36}^{+0.35}$ & $3.57_{-0.33}^{+0.34}$ & $3.49_{-0.35}^{+0.36}$ \\
\hspace{1mm}\\
\hline
\hline
\end{tabular}
\caption{$95\%$~CL constraints on the sum of the neutrino masses,
${{\Sigma}m_{\nu}}$, in eV, and on the relativistic degrees of freedom $N_{\textrm{eff}}$ from
the different combinations of data sets explored here. When
BBN bounds are included, the first (second) row refers to the constraints
obtained combining the primordial deuterium values from
Ref.~\cite{Cooke:2013cba} (\cite{fabio}) $(D/H)_p = (2.53 \pm 0.04) \times
10^{-5}$ ($(D/H)_p = (2.87 \pm 0.22) \times
10^{-5}$) with measurements of the helium mass fraction $Y_p= 0.254 \pm
0.003$ from Ref.~\cite{Izotov:2013waa}.}
\label{tab:mnunom}
\end{center}
\end{table*}
We first report here the constraints resulting when considering both
massive neutrinos and $\Delta N_{\textrm{eff}}$ massless dark radiation
species. These massless species may appear in extensions of the
Standard Model of elementary particles containing a dark sector, as,
for instance, in the so-called asymmetric dark matter scenarios.
In all these models, when the value of $N_{\textrm{eff}}$ is larger than the
canonical 3.046, $\Delta N_{\textrm{eff}}=N_{\textrm{eff}}-3.046$ is related to the extra
energy density in massless hot relics. On the other hand, if the value of $N_{\textrm{eff}}$ is smaller
than the standard 3.046, the active neutrino temperature is reduced
and there are no extra massless species.
Table~\ref{tab:mnunom} depicts the $95\%$~CL
constraints on the sum of the three active neutrino masses $\sum
m_\nu$ as well as on the total number of dark radiation species $N_{\textrm{eff}}$,
corresponding to the contribution from the three active neutrinos plus
$\Delta N_{\textrm{eff}}$ massless dark radiation species, for the different data
combinations explored here. The bounds on the neutrino mass are less stringent
than in the standard case of three massive neutrinos due to the
large degeneracy between $\sum m_\nu$ and $N_{\textrm{eff}}$, since a larger
number of massless sterile neutrino-like species will increase the radiation
content of the universe, and, in order to leave unchanged both the
matter-radiation equality era and the location of the CMB acoustic
peaks, the matter content of the universe must also increase, allowing therefore
for larger neutrino masses. We find $\sum m_\nu < 0.31$~eV and
$N_{\textrm{eff}}=3.45_{-0.54}^{+0.59}$ at $95\%$~CL from the combination of CMB
data and BOSS DR11 BAO measurements.
When the prior on the value of the Hubble constant from HST is
included in the analyses, the mean value of $N_{\textrm{eff}}$ and the bound
on the neutrino masses are both mildly larger, in accordance with the larger value
of $H_0$ preferred by HST data. The Hubble constant $H_0$
and $N_{\textrm{eff}}$ are positively correlated through measurements of the CMB,
see Ref.~\cite{Hou:2011ec} for a complete description of the effects
of $N_{\textrm{eff}}$ on the CMB. If the value of $N_{\textrm{eff}}$ is increased, in order to
keep fixed both the angular location of the acoustic peaks and the
matter-radiation equality epoch (to leave unchanged the first peak height
via the early ISW effect), the expansion rate is also increased, implying
therefore a larger $H_0$ and a shorter age of the Universe at recombination.
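To make this degeneracy more explicit (a standard relation, quoted here for orientation rather than derived from our chains), the radiation energy density can be written as
\[
\rho_r=\rho_\gamma\left[1+\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}N_{\textrm{eff}}\right]~,
\]
so that keeping the redshift of matter-radiation equality, $1+z_{\rm eq}=\rho_m/\rho_r$, fixed while increasing $N_{\textrm{eff}}$ requires a proportional increase of the matter density $\Omega_m h^2$, and therefore a larger $H_0$ for fixed $\Omega_m$.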
Since HST measurements point to a higher $H_0$ value, a larger value of $N_{\textrm{eff}}$
will be favoured by data, which also implies a higher neutrino mass
bound due to the strong $\sum m_\nu-N_{\textrm{eff}}$ degeneracy. The $95\%$~CL
constraints from the combination of CMB data, BOSS DR11 BAO
measurements and the HST $H_0$ prior are $\sum m_\nu < 0.34$~eV and
$N_{\textrm{eff}}=3.57_{-0.48}^{+0.45}$. Once the Hubble constant prior from the
HST experiment is added in the analyses, there exists a very mild
preference ($2 \sigma$) for a value of $N_{\textrm{eff}}$ larger than the canonical
expectation of 3.046, agreeing as well with the results of
Ref.~\cite{planck}.
The addition of measurements of the deuterium (either from older
estimates~\cite{fabio}, or from the most recent measurements of Ref.~\cite{Cooke:2013cba}) and helium~\cite{Izotov:2013waa}
light element abundances reduces both the mean value and the errors of
$N_{\textrm{eff}}$ significantly. After the addition of BBN bounds the errors on
$N_{\textrm{eff}}$ are reduced by half.
Table~\ref{tab:mnunom} contains the BBN constraints obtained using the
fitting functions for the theoretical deuterium and helium
primordial abundances, as a function of $\Omega_b h^2$ and $N_{\textrm{eff}}$, of
Ref.~\cite{fabio} (extracted from the numerical results of the
PArthENopE BBN code~\cite{parthenope}). We report in the table
exclusively these constraints because they are
the most conservative ones: we find $\sum m_\nu < 0.24$~eV and
$N_{\textrm{eff}}=3.25_{-0.24}^{+0.25}$ at $95\%$~CL from the analysis of CMB data,
WiggleZ power spectrum measurements, the HST $H_0$ prior and
BBN light elements abundances information (with the deuterium measurements
from Ref.~\cite{Cooke:2013cba}). Notice that there is no evidence for
$N_{\textrm{eff}}>3$ when considering the most recent estimates of primordial
deuterium abundances. However, if we consider instead previous measurements of
deuterium, as those from Ref.~\cite{fabio}, there exists a $3.5-4\sigma$
preference for $N_{\textrm{eff}}>3$ if HST data is included in the analyses. Without the inclusion of HST data the
preference for $N_{\textrm{eff}}>3$ still persists, albeit at the
$2.5-3\sigma$~CL. As previously stated, the BBN bounds on $N_{\textrm{eff}}$ and $\sum
m_\nu$ quoted in Tab.~\ref{tab:mnunom} are the most conservative ones we
found. Different bounds are obtained if an alternative fitting function
is used in order to compute the theoretical deuterium and helium
primordial abundances. We have also performed such an exercise,
using the fitting functions from
Refs.~\cite{Steigman:2012ve,Cooke:2013cba}
and, in general, the mean value obtained for $N_{\textrm{eff}}$ is larger than
the constraints quoted above.
In the case in which recent deuterium measurements are considered in
the analysis, the mean value of $N_{\textrm{eff}}$ is displaced by $\sim 2
\sigma$ with respect to the mean values obtained when using
the fitting function of Ref.~\cite{fabio}.
If previous deuterium measurements~\cite{fabio} are used for our
numerical analyses, the mean value of $N_{\textrm{eff}}$ is also mildly larger
than the mean $N_{\textrm{eff}}$ values obtained when applying the fitting
functions from Ref.~\cite{fabio}. The upper bound on the sum of the
three active neutrino masses is also larger for the two analyses (with
recent and previous deuterium measurements),
due to the degeneracy between $N_{\textrm{eff}}$ and $\sum m_\nu$.
As an example, from the analysis of CMB data, WiggleZ power spectrum
measurements, the HST $H_0$ prior and BBN light elements abundances
information (with recent deuterium measurements from
Ref.~\cite{Cooke:2013cba}), our analysis points to the following
values: $N_{\textrm{eff}}=3.47^{+0.27}_{-0.27}$ and $\sum m_\nu <0.30$~eV, both
at $95\%$~CL. If previous measurements of deuterium are instead
considered~\cite{fabio}, the $95\%$~CL limits are
$N_{\textrm{eff}}=3.60^{+0.33}_{-0.32}$ and $\sum m_\nu <0.32$~eV. Therefore, a
preference for $N_{\textrm{eff}} > 3$ at the $3.5-4\sigma$ ($2.5-3\sigma$)~CL
with (without) the HST $H_0$ prior included in the analyses
will always be present in the results obtained with the fitting functions
of Refs.~\cite{Steigman:2012ve,Cooke:2013cba},
independently of the deuterium measurements exploited.
As in the standard three massive neutrino case, the addition of the
constraints on the $\sigma_8$ and $\Omega_m$ cosmological parameters from the
CFHTLens survey displaces the bounds on the neutrino mass to higher
values. When adding the $\sigma_8-\Omega_m$ relationship
from the Planck-SZ cluster catalog on galaxy number counts, a non-zero
value for the sum of the three active neutrino masses of $\sim 0.35$~eV
is favoured at $4\sigma$. Notice that in this case the preferred
mean value for $\sum m_\nu$ is higher than in the three massive
neutrino case due to the fact that $N_{\textrm{eff}}$ is a free parameter and
there exists a large degeneracy between $N_{\textrm{eff}}$ and $\sum m_\nu$.
The combination of CMB data with BAO measurements from BOSS DR11, WiggleZ power spectrum
(full shape) data and a prior on $H_0$ from HST after considering the
inclusion of Planck SZ clusters information leads to the values $\sum
m_\nu =0.35_{-0.16}^{+0.17}$~eV and $N_{\textrm{eff}}=3.56_{-0.58}^{+0.59}$ at $95\%$~CL.
The bounds quoted above have been obtained using the BBN
theoretical prediction for helium in the CMB data analysis. However, it is also
possible to fix the helium fraction $Y_p$ in the Markov chain Monte Carlo analyses of
CMB data and assume that $Y_p$ is an independent parameter
constrained by BBN observations only. We have also performed such an
exercise, fixing $Y_p=0.24$, and we find, in general, larger values
for both the mean value of $N_{\textrm{eff}}$ and its errors, and, consequently,
a slightly larger bound on the neutrino mass, due to the $\sum
m_\nu-N_{\textrm{eff}}$ degeneracy. In particular, we find $\sum m_\nu < 0.32$~eV and
$N_{\textrm{eff}}=3.60_{-0.65}^{+0.67}$ at $95\%$~CL from the combination of CMB
data and BOSS DR11 BAO measurements, and $\sum m_\nu < 0.34$~eV and
$N_{\textrm{eff}}=3.84_{-0.56}^{+0.60}$ at $95\%$~CL if a prior from HST on the Hubble
constant $H_0$ is added to the former data combination. These
findings agree with the results of Ref.~\cite{Archidiacono:2013fha},
where it is also found that the BBN consistency relation leads to a constraint on $N_{\textrm{eff}}$
closer to the canonical value of $3.046$ than in the case of
fixing $Y_p=0.24$. Once BBN measurements are considered in the data
analyses, the differences between the analyses with and without the
BBN consistency relation included become irrelevant.
\begin{figure*}
\begin{tabular}{c c}
\includegraphics[width=8cm]{nnu_mnu_bao_bbnt4.pdf}&\includegraphics[width=8.15cm]{nnu_h0_bao_bbnt4.pdf}\\
\end{tabular}
\caption{Left panel: the red contours show the $68\%$ and $95\%$~CL allowed
regions from the combination of CMB data, BOSS DR11 BAO measurements
and additional BAO measurements in the ($\sum
m_\nu$ (eV), $N_{\textrm{eff}}$) plane. The blue contours depict the
constraints after a prior on the Hubble constant from
HST is added in the analysis. Right panel: as in the left panel but in the ($N_{\textrm{eff}}$, $H_0$) plane.}
\label{fig:nnu}
\end{figure*}
Figure~\ref{fig:nnu}, left panel, shows the degeneracy between
the $\sum m_\nu$ and the total number of dark radiation species $N_{\textrm{eff}}$
(which accounts for the contribution of the three active neutrino
species plus $\Delta N_{\textrm{eff}}$ massless sterile neutrino-like species). The red
contours depict the $68\%$ and $95\%$~CL allowed regions resulting
from the combination of CMB, BOSS DR11 BAO measurements, and previous
BAO measurements. As the value of $N_{\textrm{eff}}$ increases, a larger neutrino
mass is allowed, to leave unchanged both the matter-radiation equality
era and the angular location of the acoustic peaks, as well as the
height of the first acoustic peak via the early ISW effect.
The blue region denotes the results considering the HST $H_0$ prior as
well in the analysis: notice that the allowed regions are shifted
towards higher values of $N_{\textrm{eff}}$. Figure~\ref{fig:nnu}, right panel, illustrates the degeneracy between
$N_{\textrm{eff}}$ and the Hubble constant $H_0$. The color coding is identical
to that of the left panel: the red contours correspond to the $68\%$ and $95\%$~CL allowed
regions from the combination of CMB data, BOSS DR11 BAO measurements
and additional BAO measurements and the blue regions refer to the
constraints after adding a prior on the Hubble constant from
the HST experiment.
\subsection{Massive neutrinos and extra massive sterile neutrino species}
The last possibility for thermal relics explored in this study is
the case in which there exist three light active massive neutrinos
plus one massive sterile neutrino species, characterised by an effective mass
$m^\textrm{eff}_s$, which reads
\begin{equation}
\label{parameter}
m^\textrm{eff}_s= (T_s/T_\nu)^3m_s=(\Delta N_{\textrm{eff}})^{3/4} m_s~,
\end{equation}
where $T_s$ ($T_\nu$) is the current temperature of the sterile (active)
neutrino species, $\Delta N_{\textrm{eff}} \equiv N_{\textrm{eff}}-3.046=(T_s/T_\nu)^4$ is the effective number of degrees of freedom associated with the sterile neutrino,
and $m_s$ its true mass.
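As a simple numerical illustration of Eq.~(\ref{parameter}) (not tied to any particular dataset): a fully thermalised sterile neutrino with $T_s=T_\nu$ has $\Delta N_{\textrm{eff}}=1$ and $m^\textrm{eff}_s=m_s$, whereas for $\Delta N_{\textrm{eff}}=0.5$
\[
m^\textrm{eff}_s=(0.5)^{3/4}\, m_s\simeq 0.59\, m_s~,
\]
so a given bound on $m^\textrm{eff}_s$ translates into a weaker bound on the true mass $m_s$ the colder the sterile species is.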
\begin{table*}
\begin{center}\footnotesize
\begin{tabular}{lcccccccc}
\hline \hline
& CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 & CMB+DR11 \\
& & +HST & +WZ & +WZ+HST & +WZ+BAO+HST & +BAO& +BAO+HST &+BAO+SNLS\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.28$ & $<0.27$ & $<0.28$ & $<0.30$ & $<0.31$ & $<0.30$\,\footnotemark[1] & $<0.29$ & $<0.26$\\
\hspace{1mm}\\
$m^\textrm{eff}_s$ [eV] & $<0.29$ & $<0.28$ & $<0.60$ & $<0.28$ & $<0.25$ & $<0.27$\,\footnotemark[1]
& $<0.28$ & $<0.31$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ & $<4.01$ & $3.73_{-0.51}^{+0.51}$ & $<3.89$ & $<4.06$ & $3.64_{-0.48}^{+0.48}$ & $3.57_{-0.50}^{+0.50}$ \footnotemark[1] & $<4.16$ & $<4.02$ \\
\hspace{1mm}\\
\hline\\
SZ Clusters\&& & & & & & & & \\
CFHTLens & & & & & & & &\\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.40$ & $<0.43$ & $<0.36$ & $<0.41$ & $<0.43$ & $<0.43$ & $<0.39$ & $<0.37$\\
\hspace{1mm}\\
$m^\textrm{eff}_s$ [eV] & $<0.50$ & $<0.48$ & $<1.37$ & $<0.39$ & $<0.34$ & $<0.49$ & $<0.59$ & $<0.59$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ & $<3.90$ & $3.67_{-0.55}^{+0.49}$ & $<3.77$ & $<4.08$ & $3.67_{-0.45}^{+0.51}$ & $3.47_{-0.39}^{+0.51}$ & $<4.01$ & $<3.85$ \\
\hspace{1mm}\\
\hline\\
SZ Clusters & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.40$ & $<0.42$ & $<0.36$ & $<0.41$ & $<0.42$ & $<0.41$ & $<0.39$ & $<0.38$\\
\hspace{1mm}\\
$m^\textrm{eff}_s$ [eV] & $<0.49$ & $<0.48$ & $<1.36$ & $<0.39$ & $<0.34$ & $<0.49$ & $<0.53$ & $<0.59$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ & $<3.90$ & $3.66_{-0.55}^{+0.49}$ & $<3.77$ & $<4.06$ & $3.66_{-0.45}^{+0.50}$ & $3.46_{-0.38}^{+0.41}$ & $<4.02$ & $<3.85$ \\
\hspace{1mm}\\
\hline\\
CFHTLens & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] & $<0.35$ & $<0.33$ & $<0.32$ & $<0.35$ & $<0.35$ & $<0.39$ & $<0.34$ & $<0.31$\\
\hspace{1mm}\\
$m^\textrm{eff}_s$ [eV] &$<0.39$ & $<0.39$ & $<1.16$ & $<0.34$ & $<0.29$ & $<0.37$ & $<0.43$ & $<0.47$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ & $<3.94$ & $3.68_{-0.51}^{+0.51}$ & $<3.85$ & $<4.06$ & $3.63_{-0.49}^{+0.49}$ & $3.50_{-0.44}^{+0.48}$ & $<4.09$ & $<3.94$ \\
\hspace{1mm}\\
\hline\\
BBN & & & & & & & & \\
\hline\\
${{\Sigma}m_{\nu}}$ [eV] $(D/H)_p$\cite{Cooke:2013cba}&$<0.28$ & $<0.23$ & $<0.25$ & $<0.24$ & $<0.27$ & $<0.39$ & $<0.24$ & $<0.23$\\
\hspace{1mm}\\
${{\Sigma}m_{\nu}}$ [eV] $(D/H)_p$\cite{fabio} &$<0.27$ & $<0.25$ & $<0.28$ & $<0.28$ & $<0.28$ & $<0.29$ & $<0.26$ & $<0.25$\\
\hspace{1mm}\\
$m^\textrm{eff}_s$ [eV] $(D/H)_p$\cite{Cooke:2013cba}& $<0.45$ & $<0.34$ & $<0.37$ & $<0.46$ & $<0.14$ & $<0.24$ & $<0.56$ & $<0.62$\\
\hspace{1mm}\\
$m^\textrm{eff}_s$ [eV] $(D/H)_p$\cite{fabio} &$<0.27$ & $<0.25$ & $<0.29$ & $<0.24$ & $<0.23$ & $<0.27$ & $<0.26$ & $<0.27$\\
\hspace{1mm}\\
$N_{\textrm{eff}}$ $(D/H)_p$\cite{Cooke:2013cba}& $<3.41$ & $<3.53$ & $<3.49$ & $<3.58$ & $3.28_{-0.21}^{+0.22}$ & $3.25_{-0.17}^{+0.17}$ & $<3.47$ & $<3.43$ \\
\hspace{1mm}\\
$N_{\textrm{eff}}$ $(D/H)_p$\cite{fabio}& $3.48_{-0.35}^{+0.37}$ & $3.59_{-0.34}^{+0.35}$ & $3.45_{-0.38}^{+0.33}$ & $3.56_{-0.34}^{+0.34}$ & $3.56_{-0.32}^{+0.33}$ & $3.50_{-0.36}^{+0.35}$ & $3.59_{-0.45}^{+0.35}$ & $3.50_{-0.37}^{+0.36}$ \\
\hspace{1mm}\\
\hline
\end{tabular}
\footnotetext[1]{These limits have been obtained by imposing an additional prior on the thermal velocity of sterile neutrinos. See discussion in the text for further details.}
\caption{$95\%$~CL constraints on the active (sterile) neutrino masses,
${{\Sigma}m_{\nu}}$ ($m^\textrm{eff}_s$), in eV, and on the total number of
massive neutrino species, $N_{\textrm{eff}}$, from
the different combinations of data sets explored here. When
BBN bounds are included, the first (second) row refers to the constraints
obtained combining the primordial deuterium values from
Ref.~\cite{Cooke:2013cba} (\cite{fabio}) $(D/H)_p = (2.53 \pm 0.04) \times
10^{-5}$ ($(D/H)_p = (2.87 \pm 0.22) \times
10^{-5}$) with measurements of the helium mass fraction $Y_p= 0.254 \pm
0.003$ from Ref.~\cite{Izotov:2013waa}.}
\label{tab:mnumeff}
\end{center}
\end{table*}
Table~\ref{tab:mnumeff} depicts the $95\%$~CL
constraints on the active and sterile neutrino masses as well as on
the total number of massive neutrinos $N_{\textrm{eff}}$. Notice that the mean
value of $N_{\textrm{eff}}$ is, in general, slightly larger than in the case in
which the sterile neutrinos are considered as massless particles due to
the fact that $m^\textrm{eff}_s$ and $N_{\textrm{eff}}$ are positively
correlated. Indeed, there exists a physical lower prior bound for $N_{\textrm{eff}}$ of 3.046,
which is not needed in the case of three active neutrinos plus extra
massless species. We quote exclusively the $95\%$~CL upper limit for the cases
in which the $95\%$~CL lower limit is set by the physical prior of 3.046.
Concerning the bounds on the sum of the three active
neutrinos, they are more stringent than in the massless sterile
neutrino-like scenario because $\sum m_\nu$ and $m^\textrm{eff}_s$ are
also positively correlated. As in the massless sterile neutrino-like analyses, larger values of $N_{\textrm{eff}}$
will be favoured by data when HST measurements
are included. The addition of BBN bounds reduces the errors on
$N_{\textrm{eff}}$ significantly, alleviating the degeneracies between $N_{\textrm{eff}}$
and the active/sterile neutrino masses.
Table~\ref{tab:mnumeff} contains the BBN constraints obtained using the
fitting functions for the theoretical deuterium and helium
primordial abundances from Ref.~\cite{fabio}, which, as in the
massless extra dark radiation case, are found to provide the most
conservative bounds. We find $\sum m_\nu < 0.27$~eV, $m^\textrm{eff}_s<0.14$~eV and
$N_{\textrm{eff}}=3.28_{-0.21}^{+0.22}$ at $95\%$~CL from the analysis of CMB data,
BOSS DR11 BAO, additional BAO measurements, WiggleZ full-shape large
scale structure information, the HST $H_0$ prior and
BBN light elements abundances information with the most recent
measurements of the primordial deuterium abundances from
Ref.~\cite{Cooke:2013cba}, indicating no significant preference for
$N_{\textrm{eff}}>3$. However, when considering primordial deuterium
measurements from Ref.~\cite{fabio}, there exists a preference for
$N_{\textrm{eff}}>3$ at the $3\sigma$ level (mildly stronger when HST data is
also considered in the analyses). This preference is similar to that found in the extra massless case, although notice that in
this case there exists a lower prior on $N_{\textrm{eff}}=3.046$ and therefore
the mean value of $N_{\textrm{eff}}$ will always be larger than its standard
prediction. If we instead use the theoretical functions for the helium
and deuterium abundances from
Refs.~\cite{Steigman:2012ve,Cooke:2013cba}, we get similar conclusions
to those found in the massless dark radiation case: a $3-4\sigma$ preference for
$N_{\textrm{eff}}>3$ is always present. The bounds on the neutrino masses
are, as in the massless case, mildly loosened. The constraints quoted
above translate into $\sum m_\nu < 0.28$~eV, $m^\textrm{eff}_s<0.22$~eV and
$N_{\textrm{eff}}=3.50_{-0.28}^{+0.27}$ ($\sum m_\nu < 0.30$~eV, $m^\textrm{eff}_s<0.24$~eV and
$N_{\textrm{eff}}=3.64_{-0.33}^{+0.33}$) at $95\%$~CL from the analysis of CMB data,
BOSS DR11 BAO, additional BAO measurements, WiggleZ full-shape large
scale structure information, the HST $H_0$ prior and
BBN light elements abundances information with the most recent
measurements of the primordial deuterium abundances from
Ref.~\cite{Cooke:2013cba} (\cite{fabio}).
We have also found that the posterior distribution obtained from the CMB+DR11+BAO dataset
(without the addition of any BBN or $\sigma_8$ information) is multimodal. In fact,
we find that the probability density is significantly different from zero not only for
$m^\textrm{eff}_s \lesssim 0.3$~eV (as for the other datasets), but
also for $m^\textrm{eff}_s \gtrsim 1$~eV. A further inspection of the chains has
shown that these two regions roughly correspond to the two cases of a hot/warm
(at recombination) sterile neutrino, with a mass-to-temperature ratio at that time $m_s/T_{s,\mathrm{rec}}
\lesssim 10$, and of a cold sterile neutrino with $m_s/T_{s,\mathrm{rec}} \gtrsim 100$. The limits quoted in Tab.~\ref{tab:mnumeff}
for the CMB+DR11+BAO dataset, in the basic case where no other information is considered, have
been obtained by postprocessing the chains in order to keep only those models with $m_s/T_{s,\mathrm{rec}}
\lesssim 10$. This is consistent with the purpose of the paper, namely constraining the presence of
a hot component in addition to the active neutrinos. We have also verified that these limits
are reasonably stable with respect to the choice
of the value of the mass-to-temperature ratio at which to cut the distribution, as long as this value lies
inside the low-probability region $10 \lesssim m_s/T_{s,\mathrm{rec}} \lesssim 100$.
It remains to be clarified which of the BAO datasets, if any in particular, is responsible for the appearance
of the ``cold sterile'' region in the posterior probability, and to which feature in the data this is possibly related.
A very preliminary analysis, performed using only one of the DR7, 6dF and WiggleZ BAO datasets at a time,
seems to show that this effect is mainly driven by the first two datasets, while using only the WiggleZ BAO measurements
naturally yields an upper limit for $m^\textrm{eff}_s$ of about $0.3$~eV, without any need to exclude the cold region a priori. However,
a more robust and precise assessment of the role of the different datasets would certainly require a more detailed
analysis that goes beyond the scope of the present paper.
Contrary to the massless dark radiation case (and similarly to the thermal axion scenario), the addition of the
constraints on the $\sigma_8$ and $\Omega_m$ cosmological parameters
from the Planck-SZ cluster catalog on galaxy number counts does not
lead to a non-zero value for the neutrino masses. However, the bounds on the
neutrino masses are less stringent when adding the Planck-SZ or the
CFHTLens constraints on the $\sigma_8$ and $\Omega_m$ cosmological
parameters, due to the lower $\sigma_8$ preferred by the
former data sets, which favours higher values for the thermal relic
masses. After adding Planck SZ clusters and
CFHTLens information to CMB data,
BOSS DR11 BAO and additional BAO measurements and the HST $H_0$ prior,
the $95\%$~CL bounds on the active and the sterile neutrino parameters are $\sum m_\nu < 0.39$~eV, $m^\textrm{eff}_s<0.59$~eV and
$N_{\textrm{eff}}<4.01$.
The bounds quoted in Tab.~\ref{tab:mnumeff} have been obtained using the BBN theoretical prediction for
helium in the CMB data analysis, as in the case of extra massless
species. We have also performed in this massive case the exercise of
fixing the helium fraction $Y_p$ in the Markov chain Monte Carlo analyses of
CMB data and assuming that $Y_p$ is an independent parameter
constrained by BBN observations only. Again, as in the massless case,
we find larger values for the mean value of $N_{\textrm{eff}}$ (and,
consequently, slightly larger bounds on both the active and sterile neutrino
masses) when neglecting the BBN consistency relation in the MCMC
analyses.
\begin{figure*}
\begin{tabular}{c c}
\includegraphics[width=8cm]{mnu_nnu_wz_bbnt4.pdf}&\includegraphics[width=8.2cm]{mnu_meff_wz_bbnt4.pdf}\\
\end{tabular}
\caption{Left panel: the red contours show the $68\%$ and $95\%$~CL allowed
regions from the combination of CMB data, BOSS DR11 BAO measurements
and WiggleZ full shape power spectrum measurements in the ($\sum
m_\nu$ (eV), $N_{\textrm{eff}}$) plane. The blue contours depict the
constraints after a prior on the Hubble constant from
HST and the remaining BAO data are added in the analysis. Right panel: as in the left panel but in the ($\sum
m_\nu$ (eV), $m^\textrm{eff}_s$ (eV)) plane. }
\label{fig:meff}
\end{figure*}
Figure~\ref{fig:meff}, left panel, shows the degeneracy between
the $\sum m_\nu$ and the total number of neutrino species $N_{\textrm{eff}}$
(which accounts for the contribution of the three active neutrino
species plus $\Delta N_{\textrm{eff}}$ massive sterile neutrinos). The red
contours depict the $68\%$ and $95\%$~CL allowed regions resulting
from the combination of CMB, BOSS DR11 BAO measurements, and full
shape power spectrum measurements from the WiggleZ survey. Notice that the allowed values of $N_{\textrm{eff}}$ are
slightly larger than in the massless dark radiation scenario, since
sub-eV massive sterile neutrinos contribute to the matter energy
density at the recombination period, and therefore a larger value of
$N_{\textrm{eff}}$ is required to leave unchanged both the angular location and
the height of the first acoustic peak. The blue region depicts the results considering both the HST
$H_0$ prior and the remaining BAO data in the analysis. The right panel of Fig.~\ref{fig:meff} illustrates the degeneracy between the active and the sterile neutrino
masses, since both active and sterile sub-eV massive neutrinos contribute to the matter energy density at
decoupling, and both are free streaming relics which suppress
structure formation at small scales after they become non-relativistic.
\section{CMB constraints including the recent results from the BICEP2 experiment}
\begin{figure*}
\begin{tabular}{c c}
\includegraphics[width=8cm]{nnu_r.pdf}&\includegraphics[width=8.cm]{mnu_r.pdf}\\
\end{tabular}
\caption{Left panel: constraints in the ($N_{\textrm{eff}}$, $r$) plane from
Planck+WP and Planck+WP+BICEP2 data. Notice how the inclusion of
the BICEP2 constraint shifts the contours towards $N_{\textrm{eff}}>3$.
Right panel: constraints in the ($\Sigma m_{\nu}$, $r$) plane from Planck+WP
and Planck+WP+BICEP2 data. In this case there is no indication for
neutrino masses from the combination of CMB data.}
\label{fig:bicep2}
\end{figure*}
Very recently, the BICEP2 experiment \cite{bicep2} claimed a detection, at about $5.9\sigma$, of
B-mode polarization on large angular scales, compatible with the presence of
a tensor component with amplitude $r_{0.002}=0.2_{-0.05}^{+0.06}$ at $68 \%$
c.l. It is therefore interesting to evaluate the impact of this measurement
for the effective number of relativistic species and neutrino masses.
We have therefore performed an analysis including a tensor component
(with zero running). The results are presented in Fig.~\ref{fig:bicep2}.
As we can see, when the BICEP2 data are included, an extra background of
relativistic particles is preferred, with $N_{\textrm{eff}}=4.00\pm0.41$ at $68 \%$ c.l.
CMB data alone therefore suggest a value of $N_{\textrm{eff}}>3$ at good significance.
This result comes from the apparent tension between the Planck+WP limit of
$r<0.11$ at $95 \%$ c.l. and the recent BICEP2 result. This tension becomes
less evident when extra relativistic particles are included. We expect an even
stronger preference for $N_{\textrm{eff}}>3$ if the HST data are included.
The BICEP2 result does not affect the current constraints on neutrino masses,
as can be seen from the right panel of Fig.~\ref{fig:bicep2}.
\section{Conclusions}
\label{sec:concl}
Standard cosmology includes hot thermal relics which refer to the three
light, active neutrino flavours of the Standard Model of elementary
particles. The largest effect of neutrino masses on the different
cosmological observables arises from their free streaming nature: the
non-relativistic neutrino overdensities will contribute to clustering
only at scales larger than their
free streaming scale, suppressing the growth of matter density
fluctuations at small scales. CMB measurements from the Planck
satellite, including the lensing likelihood, low-$\ell$ polarization measurements from WMAP
9-year data and Baryon Acoustic Oscillation (BAO) measurements from a
number of surveys lead to the bound $\sum m_\nu<0.26$~eV at $95\%$~CL.
However, the existence of extra hot relic components, as dark
radiation relics, sterile neutrino species and/or thermal
axions will change the cosmological neutrino mass constraints.
Dark radiation (i.e. purely massless species) may arise in several extensions of the
Standard Model of elementary particles, as, for instance, in
asymmetric dark matter models. On the other hand, the existence of extra massive species is well motivated by either
the so-called neutrino oscillation anomalies (in the case of sterile
neutrino species) or by the strong CP problem (in the case of thermal
axions). Both extra sterile neutrino species and axions have an associated free
streaming scale, reducing the growth of matter fluctuations at small
scales. These extra species will also contribute to the effective
number of relativistic degrees of freedom $N_{\textrm{eff}}$, with
$N_{\textrm{eff}}=3.046$ the standard value, corresponding to the three active
neutrino contribution. The existence of extra light species at the Big Bang
Nucleosynthesis (BBN) epoch modifies the light element abundances,
especially the primordial helium mass fraction.
We have presented here the constraints on the masses of the different thermal
relics in different scenarios, using the cosmological data available at the beginning of 2014.
The data combination used here includes also the recent and most
precise distance BAO constraints to date from the BOSS Data Release 11
(DR11) results~\cite{Anderson:2013vga}, see also Refs.~\cite{Samushia:2013yga,Sanchez:2013tga,Chuang:2013wga}.
The tightest limit we find in the minimal three active massive
neutrino scenario is $\sum m_\nu < 0.22$~eV at $95\%$~CL from the
combination of CMB data, BAO data and HST measurements of the Hubble constant.
The addition of the constraints on $\sigma_8$ and $\Omega_m$ from the
CFHTLens survey displaces the bounds on the neutrino mass to higher
values. However, the constraint on $\sigma_8$ and $\Omega_m$
from the Planck-SZ cluster catalog on galaxy number counts favours a non-zero
value for the sum of the three active neutrino masses of $\sim 0.3$~eV
at $4\sigma$, see also Refs.~\cite{Hamann:2013iba,Wyman:2013lza}.
When considering simultaneously thermal axions and active massive
neutrino species, and including CMB, BOSS BAO DR11, additional BAO
measurements, WiggleZ power spectrum (full shape) information, the
$H_0$ HST prior and BBN light element abundances, the $95\%$~CL bounds
are $\sum m_\nu <0.25$~eV and $m_a<0.57$~eV ($\sum m_\nu <0.21$~eV and $m_a<0.61$~eV) using recent (previous) deuterium estimates from
\cite{Cooke:2013cba} (\cite{fabio}) and helium constraints from Ref.~\cite{Izotov:2013waa}.
Neither the addition of weak lensing
constraints on the $\sigma_8-\Omega_m$ relationship from the CFHTLens
experiment nor from the Planck SZ cluster number counts favours non-zero thermal relic masses, except
for few cases in which the Planck SZ cluster number counts
information is considered together with the HST $H_0$ prior (or SNIa
luminosity distances) and all the BAO measurements. Only in this case there exists a mild $\sim
2.2\sigma$ preference for a non-zero axion mass of $0.6$~eV. Concerning neutrino
masses, there exists evidence for a neutrino mass of $\sim 0.2$~eV
at the $\sim 3\sigma$ level exclusively for the case in which CMB data
is combined with BOSS BAO DR11 measurements and full-shape power spectrum information from the
WiggleZ galaxy survey.
In the case in which we consider both massive
neutrinos and $\Delta N_{\textrm{eff}}$ dark radiation species, the neutrino mass bounds are less stringent
than in the standard case of three massive neutrinos due to the
large degeneracy between $\sum m_\nu$ and $N_{\textrm{eff}}$, finding $\sum m_\nu < 0.31$~eV and
$N_{\textrm{eff}}=3.45_{-0.54}^{+0.59}$ at $95\%$~CL from the combination of CMB
data and BOSS DR11 BAO measurements. Contrary to the massless
dark radiation case, but similarly to the thermal axion scenario, the addition of the
constraints on the $\sigma_8$ and $\Omega_m$ cosmological parameters
from the Planck-SZ cluster catalog on galaxy number counts does not
lead to a non-zero value for the neutrino masses. After adding
Planck SZ clusters and CFHTLens information to CMB data,
BOSS DR11 BAO, additional BAO measurements and the HST $H_0$ prior,
the $95\%$~CL bounds on the active and the sterile neutrino parameters are $\sum m_\nu < 0.39$~eV, $m^\textrm{eff}_s<0.59$~eV and
$N_{\textrm{eff}}<4.01$.
Big Bang Nucleosynthesis
constraints reduce both the mean value and the errors of
$N_{\textrm{eff}}$ significantly. After the addition of the most recent measurements of
deuterium~\cite{Cooke:2013cba} and helium~\cite{Izotov:2013waa}, and
using the theoretically derived fitting functions of Ref.~\cite{fabio}, we
find $\sum m_\nu < 0.24$~eV and $N_{\textrm{eff}}=3.25_{-0.24}^{+0.25}$ at $95\%$~CL from the analysis of CMB data,
WiggleZ power spectrum measurements and the HST $H_0$ prior, finding no evidence for
$N_{\textrm{eff}}>3$. If previous estimates of the primordial deuterium abundances are
used in the analysis~\cite{fabio}, there exists a $4\,(2.5)\sigma$
preference for $N_{\textrm{eff}}>3$, with (without) HST data included in the
numerical analyses. If the additional sterile neutrino states are considered as massive
species, a $\sim 3.5 \sigma$ preference for $N_{\textrm{eff}}>3$ still appears when
considering BBN measurements (with previous estimates of the deuterium
abundances from Ref.~\cite{fabio}) and the HST prior on the Hubble
constant. The $2.5-4\sigma$ preference for $N_{\textrm{eff}}> 3$ always appears
for both the massless and the massive extra hot relic scenarios when
considering the theoretical fitting functions of Refs.~\cite{Steigman:2012ve,Cooke:2013cba},
independently of the deuterium measurements used in the analyses.
Accurate measurements as well as sharp theoretical predictions of the primordial deuterium
and helium light element abundances are therefore crucial
to constrain the value of $N_{\textrm{eff}}$.
Finally, we have considered the recent B-mode polarization measurements
made by the BICEP2 experiment. Assuming that this detection is produced by
a primordial tensor component, we have found that in a $\Lambda$CDM$+r$ scenario
the presence of extra relativistic particles is significantly suggested by
current Planck+WP+BICEP2 data, with $N_{\textrm{eff}}=4.00\pm0.41$ at $68 \%$ c.l.
An extra relativistic component therefore solves the current tension between
the Planck and BICEP2 experiments on the amplitude of tensor modes.
\section{Acknowledgments}
M.L. is supported by Ministero dell'Istruzione, dell'Universit\`a e
della Ricerca (MIUR) through the PRIN grant \emph{Galactic and extragalactic polarized microwave
emission} (contract number PRIN 2009XZ54H2-002). Most of this work was carried out
while M.L. was visiting the Instituto de F\'isica Corpuscular, whose hospitality is
kindly acknowledged, supported by the grant \emph{Giovani ricercatori} of the University of Ferrara,
financed through the funds \emph{Fondi 5x1000 Anno 2010} and \emph{Fondi Unicredit
2013}. O.M. is supported by the Consolider Ingenio project CSD2007-00060, by
PROMETEO/2009/116, by the Spanish Ministry of Science project FPA2011-29678 and by the ITN Invisibles PITN-GA-2011-289442.
\section{Appendix}
\label{sec:appdn}
For axion thermalization purposes, only the axion-pion interaction will be relevant.
To compute the axion decoupling temperature $T_D$ we impose the usual freeze-out condition
\begin{eqnarray}
\Gamma (T_D) = H (T_D)~.
\label{eq:freezeout}
\end{eqnarray}
The thermally averaged rate for the process $\pi + \pi \rightarrow \pi
+ a$ is given by~\cite{chang}:
\begin{eqnarray}
\Gamma = \frac{3}{1024\pi^5}\frac{1}{f_a^2f_{\pi}^2}C_{a\pi}^2 I~,
\end{eqnarray}
where
\begin{eqnarray}
C_{a\pi} = \frac{1-R}{3(1+R)}~,
\end{eqnarray}
is the axion-pion coupling constant~\cite{chang}, and
\begin{eqnarray}
I &=&n_a^{-1}T^8\int dx_1dx_2\frac{x_1^2x_2^2}{y_1y_2}
f(y_1)f(y_2) \nonumber \\
&\times&\int^{1}_{-1}
d\omega\frac{(s-m_{\pi}^2)^3(5s-2m_{\pi}^2)}{s^2T^4}~,
\end{eqnarray}
where $n_a=(\zeta_{3}/\pi^2) T^3$ is the number density for axions in
thermal equilibrium, $f(y)=1/(e^y-1)$ denotes the pion distribution
function,
$x_i=|\vec{p}_i|/T$, $y_i=E_i/T$ ($i=1,2$), $s=2(m_{\pi}^2+T^2(y_1y_2-x_1x_2\omega))$, and we assume a common mass for the charged and neutral pions, $m_\pi=138$ MeV.
We have numerically solved the freeze-out condition
Eq.~(\ref{eq:freezeout}), obtaining the axion decoupling temperature
$T_D$ versus the axion mass $m_a$ (or, equivalently, versus the axion
decay constant $f_a$).
From the axion decoupling temperature, we can compute the current axion number density, related to the present photon density $n_\gamma=410.5 \pm 0.5$ cm$^{-3}$ via
\begin{eqnarray}
n_a=\frac{g_{\star S}(T_0)}{g_{\star S}(T_D)} \times \frac{n_\gamma}{2}~,
\label{eq:numberdens}
\end{eqnarray}
where $g_{\star S}$ refers to the number of \emph{entropic} degrees of
freedom. At the current temperature, $g_{\star S}(T_0) = 3.91$.
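For orientation, the last steps of this computation are simple enough to sketch numerically (an illustrative sketch only, not the code used for our analysis; the value of $g_{\star S}(T_D)$ must be supplied from the numerically obtained decoupling temperature, and the $\Delta N_{\textrm{eff}}$ formula is the one given in Section~\ref{sec:st3}):
\begin{verbatim}
# Illustrative sketch: from the axion decoupling temperature to
# Delta N_eff.  g_star_S(T_D) is read off from the solution of
# Gamma(T_D) = H(T_D); the rest follows Eq. (numberdens).

N_GAMMA = 410.5    # current photon number density [cm^-3]
N_NU = 112.0       # current density per (anti)neutrino flavour [cm^-3]
G_S_TODAY = 3.91   # entropic degrees of freedom today

def axion_density_today(g_s_decoupling):
    """Current axion number density, Eq. (numberdens)."""
    return G_S_TODAY / g_s_decoupling * N_GAMMA / 2.0

def delta_neff(n_a):
    """Extra radiation carried by the thermal axion at BBN."""
    return 4.0 / 7.0 * (1.5 * n_a / N_NU) ** (4.0 / 3.0)

# Example: decoupling well above the QCD phase transition.
n_a = axion_density_today(106.75)
print(n_a, delta_neff(n_a))   # ~7.5 cm^-3, Delta N_eff ~ 0.027
\end{verbatim}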
\section{Introduction}
Let $\mathcal F$ be the category of finite dimensional modules over the complex algebraic group $GL(n)$ and $K(\mathcal F)$ be its Grothendieck ring. There exists a natural pairing on the ring $K(\mathcal F)$
$$
([U],[V])= \dim Hom_{GL(n)}(U,V)
$$
If we identify the ring $K(\mathcal F)$ with the ring of symmetric Laurent polynomials
$
\Bbb Z[x_1^{\pm1},\dots, x_n^{\pm1}]^{S_n}
$
then the above pairing in terms of characters can be expressed in the following form
$$
([U],[V])=\left[(ch\, U)^* ch\,V \prod_{i\ne j}\left(1-\frac{x_i}{x_j}\right)\right]_0
$$
where $[f]_0$ denotes the constant term of a Laurent polynomial and $f^*(x_1,\dots,x_n)=f(x_1^{-1},\dots,x_n^{-1})$ (see \cite{Mac, Mac1}). This formula is very interesting from many points of view.
On the one hand, it allows one to connect problems in representation theory with problems concerning symmetric functions and their generalisations, including Jack and Macdonald polynomials. On the other hand, the above formula can be extended to the root system of any semisimple Lie algebra and the corresponding analogues of symmetric polynomials.
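As a quick illustration (a toy check only, for $n=2$ and $U=V$ the standard representation; note that we insert the $1/n!$ normalisation that accompanies this constant term in \cite{Mac}), the pairing can be verified with a computer algebra system:
\begin{verbatim}
# Toy check of the constant-term pairing for GL(2) with SymPy,
# including the 1/n! normalisation of [Mac].
from sympy import symbols, expand, factorial

x1, x2 = symbols('x1 x2')
n = 2

ch = x1 + x2                              # ch V, V the standard module
ch_star = ch.subs({x1: 1/x1, x2: 1/x2})   # (ch V)^*
weyl = (1 - x1/x2) * (1 - x2/x1)          # prod_{i != j}(1 - x_i/x_j)

expr = expand(ch_star * ch * weyl)
const = expr.coeff(x1, 0).coeff(x2, 0)    # the constant term [ . ]_0
print(const / factorial(n))               # -> 1 = dim Hom(V, V)
\end{verbatim}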
The main goal of this paper is to prove the same kind of formula for the complex algebraic Lie supergroup $GL(m,n)$ and to illustrate some of its applications. The category of finite dimensional representations of $GL(m,n)$ is not semisimple. Therefore in this case we have a natural pairing only between projective modules $P(\mathcal F)$ and finite dimensional modules $K(\mathcal F)$ (see \cite{Brun, GS3})
$$
P(\mathcal F)\times K(\mathcal F)\longrightarrow \Bbb Z,\quad ([U],[V])= \dim Hom_{GL(m,n)}(U,V)
$$
where $[L]$ denotes the class of the module $L$ in the appropriate version of the Grothendieck ring. We should also mention that the category of partially polynomial modules defined below is a convenient object from the point of view of our bilinear form.
\section{Preliminaries}
Instead of the general linear complex algebraic supergroup $GL(m,n)$ it is more convenient to deal with the complex Lie superalgebra $\frak{gl}(m,n)$. So we will consider only those finite dimensional representations of $\frak{gl}(m,n)$ that can be lifted to representations of $GL(m,n)$.
Let us recall that the Lie superalgebra $\frak{gl}(m,n)$ is the Lie superalgebra of linear transformations of a $\Bbb Z_2$ graded vector space $V=V_0\oplus V_1$ ($V$ is also called the standard representation of $\frak g$). We have
$$
\frak g_{\bar0}=\frak{gl}(m)\oplus\frak{gl}(n),\quad\frak g_{\bar1}=V_0\otimes V_1^*\oplus V_1\otimes V_0^*
$$
We also have $\Bbb Z$ graded decomposition
$
\frak g=\frak g_{-1}\oplus \frak g_0\oplus \frak g_1
$
where
$$
\frak g_{-1}=V_1\otimes V^*_0,\,\, \frak g_{1}=V_0\otimes V^*_1,\,
$$
Let us fix bases in $V_0=<e_1,\dots,e_m>$ and $V_1=<f_1,\dots,f_n>$ respectively.
Let $\frak{b}$ be the subalgebra of upper triangular matrices in $\frak{gl}(m,n)$ and $\frak{k}$ be the subalgebra of diagonal matrices in $\frak{gl}(m,n)$ in the above basis. By $\varepsilon_1,\dots,\varepsilon_m,\delta_1,\dots,\delta_n$ we will denote the weights of the standard representation with respect to $\frak k$. The corresponding system of positive roots $R^+=R^+_0\cup R^+_1$ of $\frak{gl}(m,n)$ can be described in the following way:
$$
R^+_{0}=\{\varepsilon_i-\varepsilon_j\,:\, 1\le i< j\le m\}\cup\{\delta_k-\delta_l\,:\, 1\le k<l\le n \}
$$
$$
R_1^+=\{\varepsilon_i-\delta_k,\, 1\le i\le m,\,1\le k\le n\}
$$
Let also
$$
P=\{\chi=\lambda_1\varepsilon_1+\dots+\lambda_m\varepsilon_m+\mu_1\delta_1+\dots+\mu_n\delta_n\mid \lambda_i,\mu_j\in \Bbb Z\}
$$
be the weight lattice and
$$
P^+=\{\chi\in P\mid \lambda_i-\lambda_j\ge0,\,i<j\,;\,\mu_k-\mu_l\ge0,\,k<l\}
$$
be the set of highest weights.
We will use the following parity on the weight lattice, due to C. Gruson and V. Serganova \cite{GS1} and Brundan and Stroppel \cite{BS}: we say that $\varepsilon_i$ (resp. $\delta_j$) is even (resp. odd). It is easy to check that every finite dimensional module $L$ can be represented in the form
$$
L=L^+\oplus L^{-}
$$
where $L^+$ is the submodule of $L$ in which each weight space has the same parity as the corresponding weight and $L^{-}$ is the submodule in which the parities differ. We should note that this construction is a particular case of Deligne's construction of the category $Rep(G,z)$ from the paper \cite{D} for $G=GL(m,n)$ and $z=diag(\underbrace{1,\dots,1}_{m},\underbrace{-1,\dots,-1}_{n}).$
Let us denote by $\mathcal F$ the category of finite dimensional modules over $\frak{gl}(m,n)$ such that every module in $\mathcal F$ is semisimple over the Cartan subalgebra $\frak k$ and all its weights are in $P$.
By $K(\mathcal F )$ we will denote the quotient of the Grothendieck ring of $\mathcal F$ by the relation $[L]-[\Pi(L)]=0$ where $\Pi(L)$ is the module with the shifted parity $\Pi(L)_0=L_1, \Pi(L)_1=L_0$ and $x*v=(-1)^{p(x)}xv,x\in\mathfrak{gl}(m,n).$ For every $L\in \mathcal F$ we can define
$$
ch\,L=\sum_{\chi}\dim L_{\chi}e^{\chi}
$$
where the sum is taken over all weights of $L$. It is easy to see that $ch\,L$ is a well defined function on $K(\mathcal F)$.
The ring $K(\mathcal F)$ can be described explicitly in the following way. Let
$$
P_{m,n}=\Bbb Z[x_1^{\pm1},\dots, x_m^{\pm1},\, y_1^{\pm1},\dots, y_n^{\pm1}]
$$
be the ring of Laurent polynomials in variables $x_1,\dots,x_m$ and $y_1,\dots, y_n.$
If we set $x_i=e^{\varepsilon_i},\, y_j=e^{\delta_j}$ then we get a character map
$$
ch : K(\mathcal F)\longrightarrow P_{m,n}
$$
Let also
$$
\Lambda^{\pm}_{m,n}=\{f\in P_{m,n}^{S_m\times S_n}\mid x_i\frac{\partial f}{\partial x_i}+y_j\frac{\partial f}{\partial y_j}\in(x_i+y_j) \}
$$
be the subring of $P_{m,n}$ of supersymmetric Laurent polynomials.
\begin{thm}\cite{SV1} The ring $K(\mathcal F)$ is isomorphic to the ring $\Lambda^{\pm}_{m,n}$ under the character map.
\end{thm}
\begin{remark} Actually, in the paper \cite{SV1} slightly different versions of the Grothendieck ring and the algebra $\Lambda^{\pm}_{m,n}$ were considered. But it is easy to check that they are isomorphic to ours. We prefer to use characters instead of supercharacters in this paper in order to avoid some unnecessary signs.
\end{remark}
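To make the defining condition of $\Lambda^{\pm}_{m,n}$ concrete (a toy verification for $\frak{gl}(1,1)$, where there is a single pair of variables and the $S_m\times S_n$-invariance is automatic):
\begin{verbatim}
# Toy check of the supersymmetry condition for gl(1,1):
# f is supersymmetric iff x*f_x + y*f_y is divisible by (x + y).
from sympy import symbols, diff, rem

x, y = symbols('x y')

def euler_combination(f):
    return x * diff(f, x) + y * diff(f, y)

for f in [x + y, (x + y)**2, x**2]:
    g = euler_combination(f)
    print(f, rem(g, x + y, x) == 0)   # remainder modulo (x + y)
# x + y and (x + y)**2 pass; x**2 fails, so it is not supersymmetric.
\end{verbatim}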
Later on we will need an explicit description of the projective covers of the irreducible finite dimensional modules, due to Brundan \cite{Brun}. We give this description here in a slightly different form.
First let us for any $\chi\in P^+$ define a pair of sets
$$
A=\{(\chi+\rho,\varepsilon_1),\dots, (\chi+\rho,\varepsilon_m)\},\quad B=\{(\chi+\rho,\delta_1),\dots, (\chi+\rho,\delta_n)\}
$$
where
$$
\rho=\frac12\sum_{\alpha\in R_0^+}\alpha-\frac12\sum_{\alpha\in R_1^+}\alpha+\frac12(n-m+1)(\sum_{i=1}^m\varepsilon_i-\sum_{j=1}^n\delta_j)
$$
$$
=\sum_{i=1}^m(1-i)\varepsilon_i+\sum_{j=1}^n(m-j)\delta_j
$$
Our $\rho$ is slightly different from the standard one but it is more convenient since the elements of $A$ and $B$ are integers.
So instead of highest weights we will use pairs of sets $(A,B)$ such that $A,B\subset\Bbb Z$ and $|A|=m,\,|B|=n$. We will also use the language of diagrams, which is due to Brundan and Stroppel \cite{BS}, but we will use it here in a form due to I. Musson and V. Serganova \cite{MS}.
\begin{definition} Let $(A,B)$ be a pair of subsets in $\Bbb Z$ such that $|A|=m,\, |B|=n$. Then the corresponding diagram is the following function on $\Bbb Z$
$$
f(x)=\begin{cases} \times,\,\,x\in A\cap B\\
\circ,\,\,x\in A'\cap B'\\
>,\,\,x\in A\setminus B\\
<,\,\, x\in B\setminus A
\end{cases}
$$
\end{definition}
Let us also set
$$
\varphi(\times)=1,\,\varphi(\circ)=-1,\,\,\varphi(>)=\varphi(<)=0
$$
$$
[a,b]=\{c\in\Bbb Z\mid a\le c\le b\},\,[a,b)=\{c\in\Bbb Z\mid a\le c< b\},
$$
$$
\,(a,b)=\{c\in\Bbb Z\mid a< c<b\}
$$
and for integers $a<b$ let us define a transposition
$$
\pi_{a}^b:\Bbb Z\longrightarrow\Bbb Z,\quad \pi_a^b(x)=\begin{cases}x,\,x\ne a,b\\
b,\,x=a\\
a,\,x=b
\end{cases}
$$
\begin{definition}\label{defadm} We will call a transposition $\pi_a^b$ admissible for $f$ if $a\in f^{-1}(\times),\,b\in f^{-1}(\circ)$
and the following conditions are fulfilled
$$
b>a,\,\, \sum_{i\in [a,b]}(\varphi\circ f)(i)=0,\,\, \sum_{i\in[a,c]}(\varphi\circ f)(i)>0,\, \text{for any}\,\, c\in[a,b).
$$
Since $b$ is uniquely determined by $f$ and $a$, we will sometimes omit $b$.
\end{definition}
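This condition is straightforward to implement; the following sketch (an illustration of Definition~\ref{defadm} only, with a diagram encoded as a dictionary over $\Bbb Z$ whose unspecified values default to $\circ$) computes the right endpoint $b$ determined by $f$ and $a$:
\begin{verbatim}
# Admissibility, a direct transcription of the definition above.
# A diagram f maps integers to 'x', 'o', '>', '<'; missing entries
# are 'o'.  PHI encodes the function varphi.

PHI = {'x': 1, 'o': -1, '>': 0, '<': 0}

def phi(f, i):
    return PHI[f.get(i, 'o')]

def admissible_b(f, a):
    """Right endpoint b of the admissible transposition pi_a^b.

    The partial sums of phi over [a, c] stay positive for c < b and
    vanish first at b; since f(a) = 'x' and all far-away positions
    are 'o', such a b always exists, and f(b) = 'o' automatically.
    """
    if f.get(a) != 'x':
        return None               # a must carry a cross
    s, b = 0, a
    while True:
        s += phi(f, b)
        if s == 0:                # first return of the partial sum to 0
            return b
        b += 1

f = {0: 'x', 1: '>', 2: 'o'}
assert admissible_b(f, 0) == 2    # pi_0^2 is admissible for this f
\end{verbatim}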
The following Lemma easily follows from the definition above.
\begin{lemma}\label{prop} The following statements hold true
$1)$ If $\pi_a^b,\, \pi_{c}^d$ are two admissible transpositions for $f$ then one of the following conditions is fulfilled
\begin{equation}\label{cond0}
[a,b]\cap [c,d]=\emptyset,\,\,\, [a,b]\subset (c,d),\,\,\,[c,d]\subset (a,b)
\end{equation}
$2)$ Let $\pi_a^b$ be an admissible transposition for $f$ and $d\in[a,b]$ be such that $f(d)=\circ$. Then there exists an admissible transposition for $f$ of the form $\pi_c^d$.
\end{lemma}
\begin{corollary} Admissible transpositions pairwise commute.
\end{corollary}
\begin{proof} It easily follows from Lemma \ref{prop}.
\end{proof}
Now let us define for a diagram $f$ and any $C\subset f^{-1}(\times)$ the permutation of\, $\Bbb Z$\, by the formula
\begin{equation}\label{prod}
\pi_C=\prod_{c\in C}\pi_c
\end{equation}
We should mention that the above product is well defined since admissible transpositions commute with each other.
\begin{definition} Let $P(f)$ be the projective cover of irreducible module $L(f)$. We will denote by $\mathcal P(f)$ the set of $g$ such that $K(g)$ is a subquotient of $P(f)$.
\end{definition}
Now we can formulate the main result of Brundan \cite{Brun}.
\begin{thm}\label{brundan} $P(f)$ has a multiplicity free Kac flag and
$$
\mathcal P(f)=\{g\mid g=\pi_C(f), C\subset f^{-1}(\times)\}
$$
\end{thm}
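Combined with the previous sketch, Theorem~\ref{brundan} yields a naive enumeration of $\mathcal P(f)$ (again only an illustration: we apply the product $\pi_C$ of admissible transpositions for every subset $C$ of $f^{-1}(\times)$):
\begin{verbatim}
# Naive enumeration of P(f) = { pi_C(f) : C subset of f^{-1}(x) },
# reusing admissible_b from the previous sketch.
from itertools import combinations

def apply_pi_C(f, C):
    """Apply the commuting product pi_C to the diagram f."""
    g = dict(f)
    for a in C:
        b = admissible_b(f, a)
        g[a], g[b] = f.get(b, 'o'), f.get(a, 'o')   # swap f(a), f(b)
    return g

def projective_support(f):
    crosses = [a for a, s in f.items() if s == 'x']
    return [apply_pi_C(f, C)
            for r in range(len(crosses) + 1)
            for C in combinations(crosses, r)]

f = {0: 'x', 1: '>', 2: 'o'}
print(projective_support(f))   # diagrams g with K(g) in a Kac flag of P(f)
\end{verbatim}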
In order to get an algorithm for decomposing Kac modules into sums of irreducible modules we need the following combinatorial Lemma.
\begin{lemma}\label{adm} Let $f,g$ be such diagrams that
$$
g=\tau_r\circ\dots\circ\tau_2\circ\tau_1(f)
$$
where $\tau_i=\pi_{a_i}^{b_i} $ is a transposition and $f_i=\tau_{i-1}\circ\dots\tau_1(f),\,i=1,\dots, r$. Suppose also that for any pair of $i>j$ we have
\begin{equation}\label{cond}
[a_i,b_i]\cap[a_j,b_j]=\emptyset, \, \text{or}\,\,\, [a_i,b_i]\subset (a_j,b_j)
\end{equation}
Then for any $i=1,\dots, r$ the transposition $\tau_i$ is admissible for $f$ if and only if it is admissible for $f_i$.
\end{lemma}
\begin{proof} Let us prove first that functions $f_i$ and $f$ coincide on the segment $[a_i,b_i]$ for $1\le i\le r$. The following equalities are easy to check
$$
\varphi\circ f_{i}=\varphi\circ f+2\sum_{j=1}^{i-1}(\delta_{b_j}-\delta_{a_j}),\,\,\, i=1,\dots, r
$$
$$
f_i^{-1}(\times)=\left(f^{-1}(\times)\setminus\{a_1,\dots,a_{i-1}\}\right)\cup\{b_1,\dots,b_{i-1}\}
$$
If $t\in [a_i,b_i]$ then from the conditions of the Lemma it follows that $\delta_{a_j}(t)=\delta_{b_j}(t)=0$ for any $1\le j<i$. Therefore $\varphi\circ f_i(t)=\varphi\circ f(t)$ on the segment $[a_i,b_i]$. Now the Lemma follows from Definition \ref{defadm}.
\end{proof}
\begin{corollary}\label{adm1} We will keep the notations from Lemma \ref{adm}. Suppose that
$
\tau_1=\pi_{a_1}^{b_1},\dots,\tau_r=\pi_{a_r}^{b_r}
$
is a set of transpositions such that $a_1<a_2<\dots<a_r$ and $a_i\ne b_j,\, 1\le i,j\le r$.
Suppose also that for any $i=1,\dots,r$ the transposition $\tau_i$ is admissible for $f_i$. Then all the transpositions $\tau_1,\dots,\tau_r$ are admissible for $f$.
\end{corollary}
\begin{proof} Let us prove by induction on $r$ that the conditions of Lemma \ref{adm} are fulfilled. If $r=1$ then the statement is trivial. Let $r>1$. By the inductive assumption
the transpositions $\tau_1,\dots,\tau_{r-1}$ are admissible for $f$. Therefore
$$
f_{r}^{-1}(\times)=\left(f^{-1}(\times)\setminus\{a_1,\dots,a_{r-1}\}\right)\cup\{b_1,\dots,b_{r-1}\}
$$
Since $\tau_r$ is admissible for $f_r$ we have $a_r\in f_{r}^{-1}(\times)$. By the assumptions of the Corollary $a_r\ne b_1,\dots,b_{r-1}$, therefore $a_r\in f^{-1}(\times)$. Let $\pi_{a_r}^c$ be the corresponding admissible transposition for $f$. Then for any $i<r$ one of the following conditions holds true
$$
[a_i,b_i]\cap [a_r,c]=\emptyset,\,\,[a_r,c]\subset (a_i,b_i),\,\, [a_i,b_i]\subset (a_r,c)
$$
The last condition is impossible since $a_r>a_i$. Therefore by Lemma \ref{adm} transposition $\pi_{a_r}^c$ is admissible for $f_r$. Therefore $\pi_{a_r}^{b_r}=\pi_{a_r}^c$ is admissible for $f$.
\end{proof}
\begin{corollary}\label{admis} Suppose that
$
\tau_1=\pi_{a_1}^{b_1},\,\dots,\tau_r=\pi_{a_r}^{b_r}
$
is a set of transpositions such that $ b_1>b_2>\dots>b_r$ and $a_i\ne b_j,\, 1\le i,j\le r$. Suppose also that $\tau_i$ is admissible for $f_i,\,i=1,\dots,r$. Then $\tau_i,\,i=1,\dots, r$ are admissible for $f$.
\end{corollary}
\begin{proof} Let us prove by induction on $r$ that conditions (\ref{cond}) are fulfilled. If $r=1$ then the statement is trivial. Let $r>1$. By the inductive assumption the conditions (\ref{cond}) are fulfilled, therefore by Lemma \ref{adm}
the transpositions $\tau_1,\dots,\tau_{r-1}$ are admissible for $f$. Therefore
$$
f_{r}^{-1}(\times)=\left(f^{-1}(\times)\setminus\{a_1,\dots,a_{r-1}\}\right)\cup\{b_1,\dots,b_{r-1}\}
$$
Since $\tau_r$ is admissible for $f_r$ we have $a_r\in f_{r}^{-1}(\times)$. By our assumptions $a_r\ne b_1,\dots,b_{r-1}$, therefore $a_r\in f^{-1}(\times)$. Besides, since $b_r\ne a_1,\dots,a_{r-1}$ we have $b_r\in f^{-1}(\circ)$. Let $\pi_{a_r}^c$ be the corresponding admissible transposition for $f$. Suppose that $b_r<c$; then $b_r\in [a_r,c]$. Therefore by Lemma \ref{prop} there exists an admissible transposition $\pi_a^{b_r}$ for $f$. Then for any $i< r$ one of the following conditions holds true
$$
[a_i,b_i]\cap [a,b_r]=\emptyset,\,\,[a,b_r]\subset (a_i,b_i),\,\, [a_i,b_i]\subset (a, b_r)
$$
The last condition is impossible since $b_r<b_i$. Therefore by Lemma \ref{adm} transposition $\pi_{a}^{b_r}$ is admissible for $f_r$. Therefore $\pi_{a_r}^{b_r}=\pi_{a}^{b_r}$ is admissible for $f$. If $b_r\ge c$ then again condition $[a_i,b_i]\subset (a_r, c)$
is impossible and $\pi_{a_r}^{b_r}=\pi_a^c$ is admissible for $f$.
\end{proof}
\begin{corollary}\label{irr} Irreducible module $L(f)$ is a subquotient of Kac module $K(g)$ if and only if there exist a sequence of transpositions
$$\sigma_1=\pi_{c_1}^{d_1},\,\dots,\sigma_r=\pi_{c_r}^{d_r}
$$ where $ c_i<d_i,\,i=1,\dots,r$ such that
$1)$ $\sigma_i$ is admissible for $\sigma_i\circ\dots\circ\sigma_1(g)$, $i=1,\dots, r$ and $\sigma_r\circ\dots\circ\sigma_1(g)=f$
$2)$ $c_1>c_2>\dots>c_r$
$3)$ $c_i\ne d_j,\, 1\le i,j\le r $
\end{corollary}
\begin{proof} Suppose that all conditions of the Corollary are fulfilled. Then
$$
g=\sigma_1\circ\sigma_{2}\circ\dots\circ\sigma_r(f)
$$
If we set $\tau_i=\sigma_{r+1-i},\,a_i=c_{r-i+1},\,b_i=d_{r-i+1}$ where $i=1,\dots,r$ then it is easy to see that all conditions of Corollary \ref{adm1} are fulfilled. Therefore $K(g)$ is a subquotient of $P(f)$. Therefore by BGG reciprocity \cite{Z} $L(f)$ is a subquotient of $K(g)$.
Now let us suppose that $L(f)$ is a subquotient of the Kac module $K(g)$. Then again by BGG reciprocity $K(g)$ is a subquotient of $P(f)$. Therefore by Theorem \ref{brundan}
$
g=\pi_A(f),\, A\subset f^{-1}(\times).
$
Let $A=\{a_1,a_2,\dots, a_r\}$ where $a_1<a_2<\dots<a_r$. Since admissible transpositions pairwise commute we have
$$
g=\tau_r\circ\tau_{r-1}\circ\dots\circ\tau_1(f)
$$
where $\tau_i=\pi_{a_i}^{b_i}$. Let us check that conditions (\ref{cond}) are fulfilled. It is enough to verify that the inclusion $[a_j,b_j]\subset (a_i,b_i)$ is impossible if $i>j$. Indeed if it were so then $a_j>a_i$ and we get a contradiction. Therefore by Lemma \ref{adm} $\tau_i$ is admissible for $f_i=\tau_{i-1}\circ\dots\circ\tau_1(f)$ and we can set
$$
\sigma_{i}=\tau_{r-i+1},\,c_i=a_{r-i+1},\,d_i=b_{r-i+1}\,\,\,i=1,\dots,r.
$$
\end{proof}
In the same way we can prove the following Corollary.
\begin{corollary}\label{irr1} Irreducible module $L(f)$ is a subquotient of Kac module $K(g)$ if and only if there exist a sequence of transpositions
$$\sigma_1=\pi_{c_1}^{d_1},\,\dots,\sigma_r=\pi_{c_r}^{d_r}
$$ where $ c_i<d_i,\,i=1,\dots,r$ such that
$1)$ $\sigma_i$ is admissible for $\sigma_i\circ\dots\circ\sigma_1(g)$, $i=1,\dots, r$ and $\sigma_r\circ\dots\circ\sigma_1(g)=f$
$2)$ $d_1<d_2<\dots<d_r$
$3)$ $c_i\ne d_j,\, 1\le i,j\le r $
\end{corollary}
The above corollaries can be used to calculate the irreducible subquotients of Kac modules.
\begin{definition} $
\mathcal K(g)=\{f\mid Hom_{\frak g}(P(f),K(g))\ne0\}
$
\end{definition}
\begin{example}
Let $m=n=2$ and $g^{-1}(\times)=\{2,3\}$. We are going to describe the set $\mathcal K(g)$.
As the first step we are going to find a transposition $\pi_{a}^b$ such that $b\in g^{-1}(\times)$ and $\pi_a^b$ is admissible for $\pi_{a}^b(g)$. It is easy to see that there exists only one such transposition, namely $\pi_{1}^2$.
The next step is to find a transposition $\pi_a^b$ such that $b\in (\pi_1^2(g))^{-1}(\times)$, $\pi_a^b$ is admissible for $\pi_a^b\circ \pi_1^2(g)$ and $a<1$. It is easy to check that there exists only one such transposition, namely $\pi_0^3$.
So we have
$$
\mathcal K(g)=\left\{g,\, \pi_1^2(g),\,\pi_{0}^3\circ\pi_1^2(g)\right\}
$$
\end{example}
\begin{remark} We should mention that our algorithm is essentially the same as in the paper \cite{MS}.
A legal move of weight zero
$
g\xrightarrow{[b,a]} f,\,\, a<b
$ in the sense of \cite{MS} is the same as saying that $\sigma= \pi_{a}^b$ is an admissible transposition for $f=\sigma(g)$. And a regular increasing path from $g$ to $f$ is the same as a sequence of transpositions
$$
\sigma_1=\pi_{a_1}^{b_1},\dots,\sigma_r=\pi_{a_r}^{b_r},\quad a_1<b_1,\dots, a_r<b_r
$$ such that
$1)$ $\sigma_i$ is admissible for $\sigma_i\circ\dots\circ\sigma_1(g)$, $i=1,\dots, r$ and $\sigma_r\circ\dots\circ\sigma_1(g)=f$
$2)$ $b_1<b_2<\dots<b_r$
$3)$ $a_i\ne b_j,1\le i,j\le r$.
\end{remark}
\section{A bilinear form on the ring $P_{m,n}$}
In this section we are going to define a bilinear form on the ring of Laurent polynomials $P_{m,n}$ and connect this bilinear form with the canonical bilinear form on the Grothendieck ring of the Lie superalgebra $\frak g=\frak{gl}(m,n)$. Let $p\rightarrow p^*$ be the following automorphism of $P_{m,n}$
$$
x_i^*=x_i^{-1},\,i=1,\dots,m,\quad y_j^*=y_j^{-1},\,j=1,\dots,n
$$
\begin{definition} Let us set
$$
\Delta(x)=\prod_{i>j}\left(1-\frac{x_i}{x_j}\right),\, \Delta(y)=\prod_{i>j}\left(1-\frac{y_i}{y_j}\right),\,\,\Delta(x,y)=\prod_{i,j}\left(1+\frac{y_j}{x_i}\right)
$$
and for $p,q\in P_{m,n}$ let us define
\begin{equation}\label{form1}
(p,q)=\frac{1}{m!}\frac{1}{n!}\left[ p^*q\frac{\Delta(x)\Delta(x)^*\Delta(y)\Delta(y)^*}{\Delta(x,y)\Delta(x,y)^*}\right]_0
\end{equation}
where $[\,,\,]_0$ means the constant term and $(\Delta(x,y)\Delta(x,y)^*)^{-1}$ should be understood as
$$
(\Delta(x,y)\Delta(x,y)^*)^{-1}=\frac{(y_1\dots y_n)^m}{(x_1\dots x_m)^n}\left[\prod_{i,j}\left(1+\frac{y_j}{x_i}\right)\right]^{-2}
.$$
\end{definition}
\begin{thm} \label{form} The following equality holds true
$$
\dim Hom_{\frak{g}}(P,L)=(ch P,ch L)
$$
where $P$ is a finite dimensional projective module, $L$ is any finite dimensional module.
\end{thm}
\begin{proof} We are going to prove the Theorem in several steps.
First we are going to prove that the characters of Kac modules are orthonormal with respect to the pairing $(\,,\,)$.
Let $K(f),K(g)$ be two Kac modules and let $\chi=(\lambda,\mu)$ and $\tilde\chi=(\nu,\tau)$ be the corresponding highest weights, where $\lambda,\nu$ are highest weights of $\frak{gl}(m)$ and $\mu,\tau$ are highest weights of $\frak{gl}(n)$. Then we have
$$
ch K(f)=\Delta(x,y)s_{\lambda}(x)s_{\mu}(y),\,\,
chK(g)=\Delta(x,y)s_{\nu}(x)s_{\tau}(y),\,\,
$$
where $s_{\lambda}, s_{\mu},s_{\nu}, s_{\tau}$ are Schur functions. Therefore we have
$$
(ch\,K(f),ch\,K(g))=\frac{1}{m!}\frac{1}{n!}\left[s^*_{\lambda}s^*_{\mu}\Delta(x)^*\Delta(y)^*s_{\nu}s_{\tau}\Delta(x)\Delta(y)\right]_0=\delta_{\lambda,\nu}\delta_{\mu,\tau}
$$
according to the orthogonality of Schur polynomials.
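To illustrate this computation, consider the case $m=n=1$, where $\Delta(x)=\Delta(y)=1$ and $ch\,K(f)=(1+y/x)x^{\lambda}y^{\mu}$. Then
$$
(ch\,K(f))^*\,ch\,K(g)\,\frac{y/x}{(1+y/x)^{2}}=\left(1+\frac{x}{y}\right)\left(1+\frac{y}{x}\right)\frac{y/x}{(1+y/x)^{2}}\,x^{\nu-\lambda}y^{\tau-\mu}=x^{\nu-\lambda}y^{\tau-\mu},
$$
since $(1+x/y)\,y/x=1+y/x$, and the constant term equals $\delta_{\lambda,\nu}\delta_{\mu,\tau}$.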
Now let $P(f)$ be the projective cover of the irreducible module $L(f)$ and $K(g)$ be a Kac module. Then we are going to prove that
\begin{equation}\label{equa1}
\dim Hom_{\frak g}(P,K)=(ch P, ch K)
\end{equation}
We can suppose that $P^+(f)=P(f) ,\,K^+(g)=K(g)$. In other words the parity of every weight vector coincides with the parity of the weight. We have
$$
\dim Hom_{\frak{g}}(P(f),K(g))=n_{g,f}
$$
where $n_{g,f}$ is the multiplicity of the irreducible module $L(f)$ in the Jordan--H\"older series of the module $K(g)$. On the other hand from the orthonormality of the characters of Kac modules it follows that $(ch\,P(f),ch\,K(g))=m_{f,g}$, where $m_{f,g}$ is the multiplicity of the Kac module $K(g)$ in the Kac flag of the module $P(f)$.
But by BGG reciprocity $m_{f,g}=n_{g,f}$ and we have proved equality (\ref{equa1}).
To complete the proof, it just remains to show that the following equality
$$
\dim Hom_{\frak g} (P(f), L)=(ch\,P(f),ch\, L)
$$
is true for any finite dimensional module $L$.
For this, we give two different arguments, the first based on a fact proved by Serganova in \cite{Serga1} and the second using instead a completion in the spirit of Brundan (\cite{Brun1}, \S 4c).
Now let $L$ be a module which has a Kac flag. Then
$$
\dim Hom_{\frak g} (P(f), L)=\sum_{g}\dim Hom_{\frak g} (P(f),K(g))=
$$
$$
\sum_{g}(ch\, P(f),ch\,K(g))=(ch\,P(f),ch\, L)
$$
where $K(g)$ runs over all subquotients of $L$.
Now let $L$ be any finite dimensional module and $P$ be a projective module. By Serganova \cite{Serga1} there exists a resolution of $L$
\begin{equation}\label{res}
\dots \rightarrow K_i\rightarrow K_{i-1}\rightarrow \dots \rightarrow K_1\rightarrow L\rightarrow 0
\end{equation}
where every $K_i$ has a flag of Kac modules. Therefore we have an exact sequence of vector spaces
$$
\dots \rightarrow Hom_{\frak g}(P,K_i)\rightarrow \dots \rightarrow Hom_{\frak g}(P, K_1)\rightarrow Hom_{\frak g}(P, L)\rightarrow 0
$$
For any finite dimensional module $V$ let us denote by $wt(V)$ the set of the weights of the module $V$. Let $N$ be such that for any $i> N$ we
have $wt(P)\cap wt(K_i)=\emptyset.$ Then for any $i> N$ we have $Hom_{\frak g}(P,K_i)=0$
and
\begin{equation}\label{e1}
\dim Hom_{\frak g}(P,L)=\dim Hom_{\frak g}(P,K_1)-\dim Hom_{\frak g}(P,K_2)+\dots+(-1)^{i+1}\dim Hom_{\frak g}(P,K_i)
\end{equation}
On the other hand from the exact sequence (\ref{res}) we have
$$
sch L-sch K_1+sch K_2-\dots+(-1)^isch K_i+\dots=0
$$
The above sum makes sense since every weight enters the sum with finite multiplicity.
Now let us calculate $(ch P, ch K_i)$. We have by definition
$$
(ch P, ch K_i)=\frac{1}{m!}\frac{1}{n!}\left[(ch P)^*ch K_i\frac{\Delta^*(x)\Delta(y)^*\Delta(x)\Delta(y)}{\Delta(x,y)^*\Delta(x,y)}\right]_0=
$$
$$
=\frac{1}{m!}\frac{1}{n!}\left[(ch P)^*ch K_i\Delta^*(x)\Delta(y)^*\Delta(x)\Delta(y)\sum\prod\left(\frac{y_j}{x_i}\right)^{n_{ij}}\right]_0
$$
Now let us take $M$ such that for any $i>M$ all monomials of the polynomial $(ch P)^*ch K_i\Delta^*(x)\Delta(y)^*\Delta(x)\Delta(y)$ have negative degree with respect to $x_1,\dots,x_m$. Then all monomials in the above expansion have negative degree with respect to $x_1,\dots,x_m$, and therefore
$(ch P, ch K_i)=0$. Hence for $i>M$ we have
\begin{equation}\label{e2}
(ch P,ch L)-(ch P,ch K_1)+\dots+(-1)^i(chP, ch K_i)=0
\end{equation}
Therefore if we take $i>\max\{N,M\}$ then from the equalities (\ref{e1}), (\ref{e2}) we have $\dim Hom_{\frak g}(P,L)=(ch P, ch L)$ and Theorem \ref{form} is proved.
Now let us use a completion. For $\chi=\lambda_1\varepsilon_1+\dots+\lambda_m\varepsilon_m+\mu_1\delta_1+\dots+\mu_n\delta_n$ let us set $m(\chi)=\mu_1+\dots+\mu_n$. Let $K(\mathcal F)_d$ be the subgroup of the Grothendieck group $K(\mathcal F)$ generated by $[L(\chi)]$ for $\chi\in P^+$ with $m(\chi)\ge d$. We know that $[K(\chi)]$ is a finite linear combination of $[L(\chi)]$ and of $[L(\tilde\chi)]$ with $\tilde\chi<\chi$. Therefore we can find a sequence $\{A_i\}_{i\ge1}$ of finite subsets in $P^+$ such that $A_i\subset A_{i+1}$ and
$$
[L(\chi)]-\sum_{\tilde\chi\in A_i}c_{\tilde\chi}[K(\tilde\chi)]\in K(\mathcal F)_{d_i}
$$
where $d_1<d_2<d_3<\dots.$ It is easy to see that for a given projective module $P$ there exists $N_1$ such that for any $d\ge N_1$ we have $\dim Hom_{\frak g}(P, L)=0$ for any irreducible module $L$ with $[L]\in K(\mathcal F)_d$. And it follows from formula (\ref{form1}) that there exists $N_2$ such that for any $d\ge N_2$ we have $(ch P, ch L)=0$ for any irreducible module $L$ with $[L]\in K(\mathcal F)_d$. Therefore for $d_i\ge\max\{N_1,N_2\}$ we have
$$
\dim Hom (P, L(\chi))=\sum_{\tilde\chi\in A_i}c_{\tilde\chi}\dim Hom(P,K(\tilde\chi))=
$$
$$
=\sum_{\tilde\chi\in A_i}c_{\tilde\chi}(ch\,P,ch\,K(\tilde\chi))=(ch\,P,ch\,L(\chi))
$$
and we proved the Theorem in this way.
\end{proof}
\begin{corollary}
$$
\mathcal P(f)=\{g\mid (ch\,P(f),ch\,K(g))\ne0\}
$$
\end{corollary}
\section{Kac modules and Euler characters}
Now we are going to calculate the number $(ch\,K(f),ch\,E(g))$ where $K(f)$ is a Kac module and $E(g)$ is an Euler virtual module. A general formula for Euler characters was given by V. Serganova in \cite{Serga1}. For any parabolic subalgebra $\frak{p}\subset\frak{g}$ and any finite dimensional module $M$ over $\frak{p}$, a super version of the Borel--Weil--Bott construction defines the virtual Euler module $E^{\frak p}(M)$. According to the general formula due to Serganova \cite{Serga1}
$$
ch E^{\frak p}(M)=\sum_{w\in W}w\left(\frac {De^{\rho} ch M}{\prod_{ \alpha\in R_{\frak{p}}\cap R_1^+}(1-e^{\alpha})}\right)
$$
with
$$
D=\frac{\prod_{\alpha\in R_1^+}(e^{\alpha/2}-e^{-\alpha/2})}{\prod_{\alpha\in R_0^+}(e^{\alpha/2}-e^{-\alpha/2})}
$$
Here $\rho$ is the half-sum of the even positive roots minus the half-sum of the odd positive roots, and $R_{\frak p}$ is the set of roots $\alpha$ such that $\frak g_{\pm\alpha}\subset\frak p$. Consider now $\frak g=\frak{gl}(m,n)$ and let $(r,s)$ be a pair of integers such that $0\le r \le m$, $0\le s\le n$, $r-s=m-n$. We will denote the set of such pairs by $P(m,n)$. Next let us choose for $(r,s)\in P(m,n)$ the following system of simple roots
$$
\{\varepsilon_i-\varepsilon_{i+1},\delta_j-\delta_{j+1}, \varepsilon_r-\delta_1,\delta_s-\varepsilon_{r+1},
\varepsilon_m-\delta_{s+1}\},\,i\in[1,m]\setminus\{r\},\,j\in[1,n]\setminus\{s\}.
$$
So we have the corresponding set of positive even and odd roots.
Consider now the parabolic subalgebra $\frak p$ with
$$
R_{\frak p}=\{\varepsilon_i-\varepsilon_j,\,\delta_p-\delta_q,\, \pm (\varepsilon_i-\delta_p)\},
$$
where $r+1\le i,j\le m,\,i\ne j$ and $s+1\le p,q\le n,\,p\ne q$.
If we set
$$
\chi_{r,s}=\sum_{i=1}^r\tau_i\varepsilon_i+\sum_{j=1}^s\nu_j\delta_j
$$
where
$$
\tau=(\tau_1,\dots,\tau_r),\,\,\nu=(\nu_1,\dots,\nu_s)
$$
are non-increasing sequences of integers,
then $\chi_{r,s}$ defines a one dimensional representation of $\frak p$.
For any function $f(x_1,\dots,x_m,y_1,\dots, y_n)$ let us define the following alternation operation
$$
\{f(x,y)\}=\sum_{w\in S_m\times S_n}\varepsilon(w)w(f(x,y)).
$$
Then it is easy to check that the Euler character is given by the following formula
$$
ch\,E(\chi_{r,s})\Delta(x)x^{\rho_m}\Delta(y)y^{\rho_n}
$$
\begin{equation}\label{Eulerch}
=\left\{\prod_{(ij)\in D_{+}}\left(1+\frac{y_j}{x_i}\right)\prod_{(ij)\in D_{-}}\left(1+\frac{x_i}{y_j}\right)x^{\tau}y^{\nu}x^{\rho_m}y^{\rho_n}\right\}
\end{equation}
where
$$
D_{+}=[1,r]\times[1,n],\quad D_{-}=[ r+1,m]\times[1,s].
$$
\begin{remark} If we apply to the formula (\ref{Eulerch}) the automorphism $\omega$ which acts identically on $x_1,\dots,x_m$ and acts by multiplication by $-1$ on $y_1,\dots,y_n$ then we get the Euler supercharacter (see \cite{Ser1} Proposition 5.10). And it was proved in \cite{Ser1} that the Euler supercharacters $\omega(E(\chi_{r,s}))$
where $(r,s)\in P(m,n)$ form a basis in the ring of supercharacters. Therefore $ch E(\chi_{r,s})$ where $(r,s)\in P(m,n)$ form a basis in the ring $K(\mathcal F)$.
\end{remark}
As before we can use diagram $g=(A,B)$ where
$$
A=\{\tau_1,\tau_2-1,\dots,\tau_r+1-r\},\,
$$
$$
B=\{s-r-\nu_s,s-r-\nu_{s-1}-1,\dots,1-r-\nu_1\}
$$
As a particular case we have the formula for character of Kac module $K(\tilde\chi)$ where
$\tilde\chi=(\lambda,\mu)$ and
$
\lambda=(\lambda_1,\dots,\lambda_m),\,\,\mu=(\mu_1,\dots,\mu_n),
$
are non-increasing sequences of integers.
In this case we have
$$
ch K(\tilde\chi)=\Delta(x,y)s_{\lambda}(x)s_{\mu}(y)
$$
and the corresponding diagram $f=(\tilde A,\tilde B)$ where
$$
\tilde A=\{\lambda_1,\lambda_2-1,\dots,\lambda_m+1-m\},\,
$$
$$
\,\tilde B=\{n-m-\mu_n,n-m-\mu_{n-1}-1,\dots,1-m-\mu_1\}
$$
\begin{definition} Let $X,Y$ be two sets of integers such that $X\cap Y=\emptyset$. Let $ x_1>x_2>\dots>x_m$ be the elements of $X$ in decreasing order, $ y_1>y_2>\dots>y_n$ be the elements of $Y$ in decreasing order and $ z_1>z_2>\dots>z_{m+n}$ be the elements of $Z=X\cup Y$ in decreasing order. The sign of the permutation $\sigma$ such that
$$
\sigma(x_1,\dots,x_m,y_1,\dots,y_n)=(z_1,\dots,z_{n+m})
$$
will be denoted by $\varepsilon(X,Y)$.
\end{definition}
Let us keep the notation of the above definition. Then the following Lemma can be easily proved.
\begin{lemma}\label{sign} Let us set
$$
a_i=|Y\cap(x_i,+\infty)|,\, \,i=1,\dots,m \quad b_j=|X\cap(-\infty,y_j)|,\,j=1,\dots,n.
$$
where $|A|$ means the cardinality of $A$.
Then the following equalities hold true
$$
\varepsilon(X,Y)=(-1)^{a_1+\dots+a_m}=(-1)^{b_1+\dots+b_n}
$$
\end{lemma}
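For instance, if $X=\{3,1\}$ and $Y=\{2\}$ then $a_1=0,\,a_2=1$ and $b_1=1$, so that $\varepsilon(X,Y)=-1$; indeed, the permutation taking $(3,1,2)$ to $(3,2,1)$ is a single transposition.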
\begin{definition} Let $h$ be a diagram and $C\subset h^{-1}(\circ)$. Then by $h*C$ we will denote the following diagram
$$
(h*C)^{-1}(x)=\begin{cases}
h^{-1}(x),\,x=<,>\\
h^{-1}(x)\cup C,\, x=\times\\
h^{-1}(x)\setminus C,\, x=\circ
\end{cases}
$$
\end{definition}
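For example, if $h^{-1}(\times)=\{1\}$, if $0,2\in h^{-1}(\circ)$ and $C=\{0,2\}$, then $(h*C)^{-1}(\times)=\{0,1,2\}$, while the symbols $>$ and $<$ of $h$ remain unchanged.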
Now we can formulate the main result of this section.
\begin{thm} \label{KacEuler} The following statement holds true:
\begin{equation}\label{main1}
(ch\,K(f),ch\, E(g))=\begin{cases}\varepsilon(f,g),\, f=g*C, C\subset g^{-1}({\circ})\cap\Bbb Z_{\le n-m}\\
0,\,\,\text{otherwise}
\end{cases}\end{equation}
where
$$
\varepsilon(f,g)=(-1)^{\frac12r(r-1)+\frac12m(m-1)+s(m-r)+S(C)}\varepsilon(A,C)\varepsilon(C,B)
$$
and $S(C)$ is equal to the sum of the elements of $C$.
\end{thm}
\begin{proof} We have from the definition of Kac module that
$$
\frac{ch\,K(f)}{\Delta(x,y)}=s_{\lambda}(x)s_{\mu}(y)
$$
and from the definition of Euler character
$$
\frac{ch\, E(g)\Delta(x)x^{\rho_m}\Delta(y)y^{\rho_n}}{\Delta(x,y)}=\left\{\frac{\prod_{(ij)\in D_{+}}\left(1+\frac{y_j}{x_i}\right)\prod_{(ij)\in D_{-}}\left(1+\frac{x_i}{y_j}\right)}{\Delta(x,y)}x^{\tau}y^{\nu}x^{\rho_m}y^{\rho_n}\right\}
$$
$$
=\frac{(x_{r+1}\dots x_m)^s}{(y_1\dots y_s)^{m-r}}\left\{\prod_{(i,j)\in D_{r,s}}\left(1+\frac{y_j}{x_i}\right)^{-1}x^{\tau}y^{\nu}x^{\rho_m}y^{\rho_n}\right\}
$$
where $D_{r,s}=[r+1,m]\times[s+1,n]$. Therefore
$$
\frac{ch\, E(g)\Delta(x)x^{\rho_m}\Delta(y)y^{\rho_n}}{\Delta(x,y)}=
$$
$$
\sum_{a_1\ge a_2\ge\dots\ge a_{m-r}\ge0}(-1)^{|a|}\left\{\frac{(x_{r+1}\dots x_m)^s}{(y_1\dots y_s)^{m-r}}s_{a}( y_{s+1},\dots,y_n)s_a( x^{-1}_{r+1},\dots,x^{-1}_{m})\right\}
$$
where $|a|=a_1+\dots+a_{m-r}$. Further we have
$$
\left\{x^s_{r+1}\dots x^s_ms_{a}(x^{-1}_{r+1},\dots,x_m^{-1})x_1^{\tau_1}\dots x_r^{\tau_r} x^{\rho_m}\right\}=
$$
$$
\{x_1^{\tau_1}\dots x_r^{\tau_r}x_{r+1}^{s-a_{m-r}}\dots x_m^{s-a_1}x^{\rho_m}\}=
s_{\tau_1,\dots,\tau_r,s-a_{m-r},\dots,s-a_1}\Delta(x)x^{\rho_m}.
$$
In the same way it is easy to see that
$$
\left\{y_1^{r-m}\dots y_s^{r-m}s_{a}(y_{s+1},\dots,y_n)y_1^{\nu_1}\dots y_s^{\nu_s} y^{\rho_n}\right\}=
$$
$$
=s_{\nu_1+s-n,\dots,\nu_s+s-n,\,a_{1},\dots,a_{n-s}}\Delta(y)y^{\rho_n}
$$
Therefore
$$
(ch\,K(f),ch\,E(g))=
$$
$$
\sum_{a}(-1)^{|a|}(s_{\lambda},s_{\tau,s-a_{m-r},\dots, s-a_1})(s_{\mu},s_{\nu_1+r-m,\dots,\nu_{s}+r-m,a})
$$
It is easy to check that for given $\lambda$ there exists a unique sequence $a$ and a permutation $\sigma\in S_{m}$ such that
$$
(\lambda_1,\dots,\lambda_m)+\rho_m=\sigma((\tau_1,\dots,\tau_r,s-a_{m-r},\dots,s-a_1)+\rho_m)
$$
or in an equivalent form $\tilde A=\sigma(A,C)$
where
$$
C=\{s-r-a_{m-r},\dots, s+1-m-a_1\}.
$$
In the same way there exists a permutation $\tau\in S_n$ such that
$$
(\mu_1,\dots,\mu_n)+\rho_n=\tau((\nu_1+r-m,\dots,\nu_s+r-m,\,a_{1},\dots,a_{n-s})+\rho_n)
$$
or in the equivalent form $\tilde B=w_n\circ\tau\circ w_n(C,B)$, where $w_n(i)=n-i+1,\, i=1,\dots, n$. Therefore
$$
(K(f),E(g))=(-1)^{|a|}sign(\sigma)sign(\tau).
$$
But
$$
S(C)=s(m-r)+\frac12r(r-1)-\frac12m(m-1)-|a|
$$
and the Theorem is proved.
\end{proof}
\begin{corollary}\label{prev} Let $f,g$ be such diagrams that
$
(K(f),E(g))\ne0
$
and $g=(A,B)$.
Let us also suppose that for transposition $\tau=\pi_a^b$ we have $\tau(g)=g,\,a\in f^{-1}(\times)$ and $a,b\le n-m$. Then
$$
(ch\,K(\tau(f)),ch\,E(g))=(-1)^{n_{ab}+m_{ab}+a-b}(ch\,K(f),ch\,E(g))
$$
where $n_{a,b}=|A\cap (a,b)|,\,m_{a,b}=|B\cap(a,b)|$.
\end{corollary}
\begin{proof} By Theorem \ref{KacEuler} $f=g*C$ where $C\subset g^{-1}({\circ})\cap\Bbb Z_{\le n-m}$. Since $\tau(g)=g$ we have $\tau(f)=g*\tau(C)$ and by Theorem \ref{KacEuler} we have
$$
(K(f),E(g))=(-1)^{\frac12r(r-1)+\frac12m(m-1)+s(m-r)+S(C)}\varepsilon(A,C)\varepsilon(C,B)
$$
$$
(K(\tau(f)),E(g))=(-1)^{\frac12r(r-1)+\frac12m(m-1)+s(m-r)+S(\tau(C))}\varepsilon(A,\tau(C))\varepsilon(\tau(C),B).
$$
Further we have the following equalities in $\Bbb Z_2$: $S(C)-S(\tau(C))=a-b$ and by Lemma \ref{sign}
$$
\varepsilon(A,C)-\varepsilon(A,\tau(C))=|A\cap(a,b)|,\,\, \varepsilon(C,B)-\varepsilon(\tau(C),B)=|B\cap(a,b)|.
$$
The Corollary is proved.
\end{proof}
\begin{corollary}\label{prod} Let $f,g,h$ be such diagrams that
$$
(ch\,P(f),ch\,K(g))\ne0,\quad(ch\,K(g),ch\,E(h))\ne0
$$
and $\tau=\pi_a^b,\,a,b\le n-m$ be an admissible transposition for $f$ such that $a,b\notin h^{-1}(\times)$. Then
$$
(ch\,K(\tau(g)),ch\,E(h))+(ch\,K(g),ch\,E(h))=0
$$
\end{corollary}
\begin{proof} By Corollary \ref{prev} it is enough to prove that $n_{ab}+m_{ab}+a-b$ is an odd number. Let us set $(a,b)_{x}=g^{-1}(x)\cap (a,b)$. Then we have
$$
(a,b)=(a,b)_{>}\cup (a,b)_{<}\cup (a,b)_{\times}\cup (a,b)_{\circ}
$$
Therefore
$$
b-a-1=|(a,b)_{>}|+| (a,b)_{<}|+| (a,b)_{\times}|+| (a,b)_{\circ}|.
$$
where $|A|$ means the cardinality of the set $A$.
But
$$
n_{a,b}=|(a,b)_{>}|+| (a,b)_{\times}|,\, m_{a,b}=|(a,b)_{<}|+| (a,b)_{\times}|,\,
$$
Therefore it is enough to prove that
$
| (a,b)_{\times}|+| (a,b)_{\circ}|
$
is an even number. Let $C=\{c_1,\dots, c_r\}$ and let $\pi_{c_i}^{d_i}$ be the corresponding admissible transpositions for $f$, so that $g=\pi_C(f)$. We have
$$
\varphi\circ g=\varphi\circ f+2\sum_{i=1}^r(\delta_{d_i}-\delta_{c_i})
$$
and by the definition of an admissible transposition we have $\sum_{i\in (a,b)}\varphi\circ f(i)=0$. Therefore
$$
| (a,b)_{\times}|-| (a,b)_{\circ}|=\sum_{i\in (a,b)}\varphi\circ g(i)=2\sum_{j=1}^r\sum_{i\in(a,b)}(\delta_{d_j}(i)-\delta_{c_j}(i)),
$$
which is an even number. The Corollary is proved.
\end{proof}
\section{Partially polynomial representations}
\begin{definition} A weight $\chi\in P$
$$
\chi=\lambda_1\varepsilon_1+\dots+\lambda_m\varepsilon_m+\mu_1\delta_1+\dots+\mu_n\delta_n
$$is called partially polynomial (in $y_1,\dots, y_n$) if $\mu_1,\dots,\mu_n\in \Bbb Z_{\ge0}$.
\end{definition}
\begin{corollary} A diagram $f=(A,B)$ corresponds to a partially polynomial highest weight if and only if all elements of $B$ are not greater than $n-m$.
\end{corollary}
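For instance, for $\frak{gl}(1,1)$ one has $n-m=0$ and $\tilde B=\{-\mu_1\}$, so the condition of the Corollary reads $-\mu_1\le 0$, that is, $\mu_1\in\Bbb Z_{\ge0}$.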
\begin{definition} A representation $V$ of $\frak{gl}(m,n)$ is called partially polynomial (in $y_1,\dots, y_n$) if all its weights are partially polynomial or, equivalently, if its character is a polynomial in $y_1,\dots, y_n$.
\end{definition}
We should note that there is no loss of generality in restricting our attention to partially polynomial representations, since an arbitrary finite dimensional irreducible representation of $\frak{gl}(m,n)$ can be obtained from some partially polynomial representation by tensoring with a one dimensional representation.
\begin{example} The standard representation with the character $x_1+\dots+x_m+y_1+\dots+y_n$ is partially polynomial. The one dimensional representation with character $\frac{y_1\dots y_n}{x_1\dots x_m}$ is also a partially polynomial representation.
\end{example}
The subcategory of the modules with partially polynomial weights will be denoted by $\mathcal F^+$.
\begin{definition} For any $M\in \mathcal F$ let us denote by $M^{-}$ the submodule generated by all weight vectors with non partially polynomial weights. Let us also define a functor $F^+: \mathcal F\rightarrow \mathcal F^+$ by the following formula
$$
F^+(M)=M/M^-
$$
\end{definition}
\begin{lemma}\label{exact}
$1)$ Functor $F^+$ is right exact.
$2)$ Functor $F^+$ maps projective objects in $\mathcal F$ to projective objects in $\mathcal F^+$.
\end{lemma}
\begin{proof}
$1)$
By definition of $M^{-}$ we have the following equality for all $N\in \mathcal F^+$
$$
Hom_{\mathcal F}(M,N)=Hom_{\mathcal F^+}(F^+(M),N)
$$
Consider the functor $ G: \mathcal F^+\rightarrow \mathcal F$ such that $G(N)=N$. Then the above equality means that $G$ is right adjoint to $F^+$. Therefore $F^+$ is right exact.
$2)$ follows from $1)$.
\end{proof}
\begin{lemma} The following statements hold true
$1)$ Let $\chi\in P^+$ and $ K(\chi)$ be the corresponding Kac module, then
$
F^+(K(\chi))=K(\chi)
$ if $\chi$ is a partially polynomial weight and $0$ otherwise.
$2)$ Let $L(\chi)$ be the irreducible finite dimensional module corresponding to the weight $\chi$. Then $F^+(L(\chi))=L(\chi)$ if $\chi$ is a partially polynomial weight and $0$ otherwise.
\end{lemma}
\begin{proof}
$1)$ Let $\chi$ be a partially polynomial highest weight. Therefore $\chi-\alpha$ is also partially polynomial for any positive root $\alpha$. Therefore all weights of the module $K(\chi)$ are partially polynomial, so $K(\chi)^-=0$ and $F^+(K(\chi))=K(\chi)$. If $\chi$ is not partially polynomial then $K(\chi)^-=K(\chi)$ since $K(\chi)$ is generated by the vector of the weight $\chi$. Therefore $F^+(K(\chi))=0$.
$2)$ Let $\chi$ be a partially polynomial weight. Since $L(\chi)$ is a quotient of $K(\chi)$, by the first statement we have $F^+(L(\chi))=L(\chi)$. If $\chi$ is not partially polynomial then by the first statement of the Lemma and by Lemma \ref{exact} we have $F^+(L(\chi))=0$.
\end{proof}
\begin{corollary} Let $M\in \mathcal F$ and suppose that it has a composition series of Kac modules and in the Grothendieck group of $\mathcal F$ we have
$$
[M]=\sum_{\chi\in I}m_{\chi}[K(\chi)]
$$
Then in the Grothendieck group of $\mathcal F^+$ we have
$$
[F^+(M)]=\sum_{\chi\in I^{pol}} m_{\chi}[K(\chi)]
$$
where $I^{pol}$ is a subset of partially polynomial weights of $I$.
\end{corollary}
\section{Projective covers and Euler characters}
In this section we are going to give an algorithm to represent Euler characters as the sum of characters of irreducible modules. For any finite dimensional module $V$ we have the following formula in the ring $\Lambda^{\pm}_{m,n}$
$$
ch\,V=\sum_{P}(ch\, P,\,ch\,V)ch\,L_P
$$
where the sum is taken over all projective covers and $L_P$ is the irreducible module corresponding to $P$.
So if $V=E(h)$ is an Euler character then we need to calculate $(ch\, P(f),\,ch\,E(h))$ for fixed $h$ and all $f$. In order to do so we calculate first $(ch\, P(f),\,ch\,E(h))$ for fixed $f$ and all $h$.
\begin{definition}
Let $f$ be a diagram. Let us set
$$
f^{-1}_0(\times)=\{a\in f^{-1}(\times)\mid \pi_a^b\,\,\text {is admissible and}\,\, b\le n-m\},
$$
$$
f^{-1}_1(\times)=\{a\in f^{-1}(\times)\mid \pi_a^b\,\,\text {is admissible and}\,\, b> n-m\},
$$
\end{definition}
\begin{definition}
Let us set
$$
\mathcal E(f)=\{h\mid (ch\,P(f),ch\,E(h))\ne0\}
$$
\end{definition}
\begin{definition}\label{prod1} Let $f$ be a diagram and $B\subset f^{-1}(\times)$. We will denote by $f_B$ the following diagram
$$
f^{-1}_B(x)=\begin{cases} f^{-1}(x),\,x=<,>\\
f^{-1}(\times)\setminus B,\,x=\times\\
f^{-1}(\circ)\cup B,\,x=\circ
\end{cases}
$$
And for a diagram $f$ we will denote by $f_{>d}$ the diagram $f_B$ in the case when $B=f^{-1}(\times)\cap \Bbb Z_{>d}$. In other words, $f_{> d}$ is the diagram obtained from $f$ by deleting from $f^{-1}(\times)$ all the numbers which are strictly greater than $d$ and adding them to $f^{-1}(\circ)$.
\end{definition}
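For example, if $f^{-1}(\times)=\{-2,0,1,3\}$ and $d=0$ then $f_{>0}^{-1}(\times)=\{-2,0\}$, while the elements $1$ and $3$ are moved to $f_{>0}^{-1}(\circ)$.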
The following Theorem describes the pairing between projective covers and Euler characters.
\begin{thm} \label{Euler1} The following equalities hold true
$1)$
\begin{equation}\label{pq}
\mathcal E(f)=\{h\mid h*A\in \mathcal P(f),\, A\subset f_1^{-1}(\times)\cap \Bbb Z_{\le n-m}\}
\end{equation}
$2)$ If $h\in \mathcal E(f)$ then
$$
(ch\,P(f), ch\,E(h))=(ch\,K(h*A),ch\,E(h))
$$
\end{thm}
\begin{proof} Let us prove the first statement. We will denote by $\mathcal Q$ the right hand side of the equality (\ref{pq}). Let $h\in \mathcal E(f)$. We are going to prove that $h\in \mathcal Q$. Let us denote by $\mathcal A(h)$ the set of all $ g$ such that
$$
( ch\,P(f),\,ch\,K( g))\ne0,\,\,\,(ch\,K(g),ch\,E(h))\ne0
$$
Then we have
$$
(ch\,P(f),\,ch\,E(h))=\sum_{g\in\mathcal A(h)}(ch\,K(g),ch\,E(h))
$$
Since $h\in \mathcal E(f)$ the set $\mathcal A(h)$ is not empty and there exists $g\in \mathcal A(h)$. By Theorem \ref{KacEuler} we have
$$
g=h*A,A\subset \Bbb Z_{\le n-m}
$$
So we only need to prove that $A\subset f_1^{-1}(\times)$.
If $A=\emptyset$ then $A\subset f_1^{-1}(\times)$. Let $A\ne\emptyset$ and $a\in A$.
There are two cases $a\notin f^{-1}(\times)$ and $a\in f^{-1}(\times)$.
Let us consider the first case. Let $\tau= \pi_{c}^a$ be the corresponding admissible transposition then $c\notin g^{-1}(\times)$ and therefore $a\notin h^{-1}(\times)$.
By Corollary \ref{prod} the set $\mathcal A(h)$ is invariant under the action of $\tau$ and for any $\tilde g\in \mathcal A(h)$ we have
$$
(ch\, K(\tilde g), ch E(h))+(ch\, K(\tau (\tilde g)), ch E(h))=0
$$
Therefore $(ch\,P(f),\,ch\,E(h))=0$ and the first case is impossible.
Consider the second case $a\in f^{-1}(\times)$. Let $\tau=\pi_a^b$ be the corresponding admissible transposition; then $b\notin g^{-1}(\times)$ and therefore $b\notin h^{-1}(\times)$. We have two possibilities: $b\le n-m$ and $b>n-m$. If $b\le n-m$ then in the same way as above we can prove that $(ch\,P(f),\,ch\,E(h))=0$. So the only possibility left is $b>n-m$. Therefore $A\subset f^{-1}_1(\times)$ and $\mathcal E(f)\subset\mathcal Q$.
Now let us prove the opposite inclusion. Let $h*A\in \mathcal P(f)$ with $A\subset f_1^{-1}(\times)\cap \Bbb Z_{\le n-m}$. Clearly $(K(h*A),E(h))\ne 0$. Let $g\in\mathcal P(f)$ be such that $(ch\,K(g), ch\,E(h))\ne0$. Then by Theorem \ref{KacEuler} we have $g=h*\tilde A,\,\tilde A\subset \Bbb Z_{\le n-m}$. Let $a\in A$ and $\pi_{a}^b$ be the corresponding admissible transposition. Then one of the elements $a,b$ belongs to $\tilde A$.
Suppose that $a\notin \tilde A$. Then $b\in \tilde A$, but this is impossible since $b>n-m$. So $a\in \tilde A$. Therefore $A=\tilde A$. So we see that
$$
(ch\,P(f), ch\,E(h))=(ch\,K(h*A),ch\,E(h))\ne0
$$
So we proved the inclusion $\mathcal E(f)\supset\mathcal Q$ and the second statement. The Theorem is proved.
\end{proof}
Now we are going to investigate the case of partially polynomial representations in more detail.
\begin{definition}
Let us denote by $\mathcal E^{+}(f)$ the set of partially polynomial diagrams $h$ such that $(ch\,P(f),ch\,E(h))\ne0$.
\end{definition}
\begin{corollary}\label{cor} Let $f$ be a partially polynomial diagram. Then the following equality holds true
$$
\mathcal E^+(f)=\{h\mid h=\pi_C(f)_{>n-m},\,C\subset f^{-1}(\times)\}
$$
\end{corollary}
\begin{proof} Let us denote the right hand side of the above equality by $\mathcal R$, and by $\mathcal Q^+$ we will denote the set of partially polynomial diagrams in $\mathcal Q$, where $\mathcal Q$ is the same as in the proof of Theorem \ref{Euler1}. By definition we have
$$
\mathcal E^+(f)=\mathcal Q^+
$$
and we need to prove that $\mathcal R=\mathcal Q^+$. Let $h\in\mathcal Q^+$ then
by Theorem \ref{Euler1} we have $g=h*A\in \mathcal P(f),\,A\subset f_1^{-1}(\times)\cap \Bbb Z_{\le n-m}$ and since $h,f$ are partially polynomial diagrams, $g$ is a partially polynomial diagram too. Further we have
$$
h=g_{A}=(\pi_{A}(g))_{> n-m}
$$
Besides since $g\in P(f)$ we have $g=\pi_{B}(f),\, B\subset f_0^{-1}(\times)$. Therefore
$$
h=(\pi_A\pi_B(f))_{>n-m}=(\pi_C(f))_{> n-m},\,\,C=A\cup B
$$
So $h\in \mathcal R$. Now let us take $h\in \mathcal R$. Then by definition
$$
h=\pi_C(f)_{>n-m}, C\subset f^{-1}(\times)
$$
Let us set $B=f_0^{-1}(\times)\cap C,\, A=f_1^{-1}(\times)\cap C$. Then $h*A=\pi_B(f)\in\mathcal P(f).$ Therefore $h\in \mathcal Q^+$ and the Corollary is proved.
\end{proof}
\begin{corollary}\label{irre} Let $h$ be a partially polynomial diagram and
$$
ch\, E(h)=\sum_{f}b_{f,h}ch\, L(f)
$$
be the decomposition of Euler character $E(h)$ in terms of characters of irreducible modules.
Then $b_{f,h}=0,\pm1$ and it is nonzero if and only if there exists a sequence of transpositions
$$
\sigma_1=\pi_{c_1}^{d_1},\dots, \sigma_s=\pi_{c_s}^{d_s},\,\sigma_{s+1}=\pi_{c_{s+1}}^{d_{s+1}},\dots, \sigma_r=\pi_{c_r}^{d_r}
$$
such that
$$
f=\sigma_{r}\circ\dots\circ\sigma_{s+1}\left(\left(\sigma_{s}\circ\dots\sigma_1(h)\right)*\{d_{s+1},\dots, d_r\}\right)
$$
and
$1)$ $\sigma_i$ is admissible for $h_i=\sigma_i\circ\dots\circ\sigma_1(h)$, $i=1,\dots, s$
$2)$ $c_1>c_2>\dots>c_{s}$, $d_1,\dots,d_{s}\le n-m$ and $c_i\ne d_j ,\, 1\le i,j\le s$
$3)$ $c_{s+1}>\dots>c_{r}$, $d_{s+1},\dots,d_{r}> n-m$
$4)$ $\sigma_i$ is admissible for $h_i=\sigma_i(h_{i-1}*\{d_i\})$, $i= s+1,\dots,r$
$5)$
$
\{c_1,\dots,c_s\}\cap\{c_{s+1},\dots,c_r\}=\emptyset,\,\,\, \{c_1,\dots,c_r\}\cap\{d_1,\dots,d_s\}=\emptyset
$
\end{corollary}
\begin{proof} Suppose that all conditions of the Corollary are fulfilled. Then
$$
f=\sigma_{r}\circ\dots\circ\sigma_{s+1}\left(\left(\sigma_{s}\circ\dots\sigma_1(h)\right)*\{d_{s+1},\dots, d_r\}\right)
$$
Since $d_1,\dots,d_{s}\le n-m$ and $d_{s+1},\dots,d_{r}> n-m$ we can rewrite the above equality in the form
$$
f=\sigma_{r}\circ\dots\circ\sigma_{s+1}\circ\sigma_{s}\circ\dots\sigma_1(h*\{d_{s+1},\dots, d_r\})
$$
Now we are going to prove that $\sigma_1,\dots,\sigma_r$ are admissible for $f$. Let us set
$$
\tau_i=\sigma_{r-i+1}, a_i=c_{r-i+1},\,b_i =d_{r-i+1}\,\,\,1\le i\le r
$$
Then we have
$$
h*\{b_{r-s},\dots,b_1\}=\left(\tau_r\circ\dots\circ\tau_{r-s+1}\circ\tau_{r-s}\circ\dots\circ\tau_1(f)\right)
$$
Again from the conditions of the Lemma it follows that $\tau_i,\, i=1,\dots,r$ is admissible for $f_i=\tau_{i-1}\circ\dots\circ\tau_1(f)$.
We have $a_{r-s+1}<a_{r-s+2}<\dots<a_r$ and by our assumptions $a_{r-s+i}\ne b_{r-s+j},\,1\le i,j\le s$. Therefore by Corollary \ref{adm1} $\tau_{r-s+1},\dots,\tau_r$ are admissible for $f_{r-s+1}$. Further, again by our assumptions $a_1<\dots<a_{r-s}$ and since $b_1,\dots,b_{r-s}>n-m$ we have $b_1>\dots>b_{r-s}$. Let us take $ r-s+1\le j\le r$. Then $b_j<b_{r-s}$. Therefore by Corollary \ref{admis} $\tau_1,\dots,\tau_{r-s+1},\tau_j$ are admissible for $f$.
Now let us suppose that $(P(f),E(h))\ne0$. Therefore by the proof of Corollary \ref{cor}
$$
h=(\pi_{A_1}\pi_{A_0}(f))_{>n-m},\, A_1\subset f_1^{-1}(\times),\,A_0\subset f_0^{-1}(\times)
$$
Let $A_1=\{a_1,a_2,\dots, a_{r-s}\}$ where $a_1<a_2<\dots<a_{r-s}$ and also \\ $A_0=\{a_{r-s+1},\dots, a_{r}\}$ where $a_{r-s+1}<\dots<a_{r}$. Since admissible transpositions pairwise commute we have
$$
h*\{b_{r-s},\dots,b_1\}=\tau_r\circ\tau_{r-1}\circ\dots\circ\tau_1(f)
$$
where $\tau_i=\pi_{a_i}^{b_i}$. Let us check that conditions (\ref{cond}) are fulfilled. It is enough to verify that the inclusion $[a_j,b_j]\subset (a_i,b_i)$ is impossible if $i>j$. Indeed if $i\le r-s$ or $j>r-s$ then $a_j>a_i$ and we get a contradiction. If $i>r-s,\, j\le r-s$ then $b_i \le n-m$ and $b_j>n-m$ and this is again a contradiction. Therefore conditions $(\ref{cond})$ are fulfilled and by Lemma \ref{adm} $\tau_i$ is admissible for $f_i=\tau_{i-1}\circ\dots\circ\tau_1(f)$ and we can set
$$
\sigma_{i}=\tau_{r-i+1},\,c_i=a_{r-i+1},\,d_i=b_{r-i+1}\,\,\,i=1,\dots,r.
$$
\end{proof}
\begin{example}
Let $n=m=2$ and $h^{-1}(\times)=\{-1\},\, h^{-1}(<)=h^{-1}(>)=\emptyset$. We are going to describe the set $\mathcal E^+(h).$
In this case there are two possibilities for the first step.
$1)$ We are going to find a transposition $\pi_{a}^b$ such that $b\in h^{-1}(\times)$ and $\pi_a^b$ is admissible for $\pi_{a}^b(h).$ It is easy to see that there exists only one such transposition, namely $\pi_{-2}^{-1}$.
$2)$
We also need to find a transposition $\pi_{a}^b$ such that $\pi_a^b$ is admissible for $\pi_{a}^b(h*\{b\})$ and $b>0.$ It is also easy to check that there exist two such transpositions, namely $\pi_0^1$ and $\pi_{-2}^1$.
In the second step there are also two possibilities.
$1)$ We are going to find a transposition $\pi_{a}^b$ such that $b\in (\pi_{-2}^{-1}(h))^{-1}(\times)$ and $\pi_a^b$ is admissible for $\pi_{a}^b\circ\pi_{-2}^{-1}(h).$ It is easy to see that there is no such transposition satisfying conditions $1),2),3)$ of Corollary \ref{irre}.
$2)$ We are going to find a transposition $\pi_{a}^b$ such that $\pi_a^b$ is admissible for $\pi_{a}^b\circ\pi_{-2}^{-1}(h*\{b\})$ and $b>0$. It is easy to see that there is only one such transposition, namely $\pi_0^1.$
So we have
$$
\mathcal E^+(h)=\left\{\pi_0^1((\pi_{-2}^{-1}(h))*\{1\}),\,\,\pi_0^1(h*\{1\}),\,\,\pi_{-2}^1(h*\{1\})\right\}
$$
And it is easy to see that
$$
E(h)=-L(\pi_0^1((\pi_{-2}^{-1}(h))*\{1\}))-L(\pi_0^1(h*\{1\}))-L(\pi_{-2}^1(h*\{1\}))
$$
\end{example}
\section{Some special classes of irreducible modules}
In this section we will only consider diagrams of the form $f=(A,A)$ where $A\subset \Bbb Z_{\le 0}$, and instead of $E(f)$ we will write $E(A)$ for the Euler virtual module, and $L(A),\,P(A)$ for the irreducible module and the projective indecomposable module respectively. Below $|A|$ means the number of elements in the set $A$. Our aim in this section is to give an explicit formula for the characters of irreducible modules for the most atypical block of the Lie superalgebra $\frak{gl}(2,2)$.
\begin{definition} Let us set
$$
\mathcal P_n^{(m)}=\{A\subset \Bbb Z_{\le 0}, |A|=n\mid \exists\ B\subset\Bbb Z_{\le 0},\, |B|\le m, \,(ch\,P(A),ch\,E(B))\ne0 \}
$$
and denote by $\omega$ the following shift
$$
\omega: \Bbb Z\rightarrow \Bbb Z,\,\,\, \omega(x)=x-1
$$
\end{definition}
In the following Lemma we give an inductive description of the set $\mathcal P_n^{(m)}$.
\begin{lemma} \label{ind}The following formulae hold true for $m<n$
$$
\mathcal P_{n}^{(m)}=\bigcup_{i=0}^{m}\mathcal P_{n,i}^{(m)},\,\quad\mathcal P_{n,i}^{(m)}=\left\{\{-i, C\}\mid C\in \omega^{i+1}( \mathcal P_{n-1}^{(m-i)})\right\}
$$
\end{lemma}
\begin{proof} It is easy to see that
$$
\mathcal P_{n}^{(m)}=\bigcup_{i=0}^{m}\mathcal A_i,\quad \mathcal A_i=\{A\in \mathcal P_{n}^{(m)}\mid \max_{a\in A}a=-i\}
$$
and we only need to show that $\mathcal A_i=\mathcal P_{n,i}^{(m)}$.
First let us note that $A\in \mathcal P_{n}^{(m)}$ if and only if there exist at least $n-m$ elements from $A$ such that every corresponding admissible transposition has one positive element. If in addition $A$ contains $0$ then the transposition $\pi_0^1$ is admissible for $A$. Therefore for the set $A\setminus \{0\}$ there must be at least $n-m-1$ admissible transpositions with one positive element. This proves the equality $\mathcal A_0=\mathcal P_{n,0}^{(m)}$. The same arguments work for any $0<i\le m$. The Lemma is proved.
\end{proof}
In the case $m=1$ we can give an explicit description of the set $\mathcal P_n^{(m)}$.
\begin{lemma} We have $\mathcal{P}^{(1)}_{n}= S_1\cup S_2$ where
$$
S_1=\{ \{0,-1,-2,\dots,2-n, a\}\mid a\le 1-n\},\,
$$
$$
S_2=\{\{0,-1,-2,\dots,-n\}\setminus\{b\}\mid b=0,-1,\dots, 1-n,-n\}
$$
\end{lemma}
\begin{proof} This follows from Lemma \ref{ind} and
induction on $n$.
\end{proof}
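\begin{example} For instance, for $n=2$ the Lemma gives
$$
\mathcal{P}^{(1)}_{2}=\{\{0,a\}\mid a\le -1\}\cup\{\{-1,-2\}\},
$$
since the sets $\{0,-1\}$ and $\{0,-2\}$ from $S_2$ already belong to $S_1$.
\end{example}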
The following Lemma gives the value of our bilinear form on some pairs of projective modules and Euler characters.
\begin{lemma}\label{rank1}Let $A_0=\{0,-1,-2,\dots,1-n,\,-n\}$. Then we have
$$
\mathcal E^+(A_0\setminus\{-n\})\cap\mathcal{P}^{(1)}_{n}=\{E(\emptyset), E(0),\dots,E(1-n)\},\quad (1,1,-1,\dots,(-1)^{n-1})
$$
$$
\mathcal E^+(A_0\setminus\{b\})\cap\mathcal{P}^{(1)}_{n}=\{E(b),\,E(b-1)\},\quad ((-1)^{n-1},(-1)^{n-1}),\,\, b=0,\dots, 1-n
$$
$$
\mathcal E^+(0,-1,\dots,2-n,b)\cap\mathcal P^{(1)}_n=\{E(b+1),\,E(b)\},\, ((-1)^{n-1},\,(-1)^{n-1}),\,b\le -n
$$
We also indicate on the right the corresponding values $(ch\,P(A), ch\,E(B))$.
\end{lemma}
\begin{proof} It easily follows from Theorem \ref{Euler1}.
\end{proof}
\begin{corollary}\label{graph} The following formulae hold true
$1)$
$$
ch\,L(A_0\setminus\{-n\})=ch\,E(\emptyset),\,\,
$$
$2)$ if
$b\in [1-n,0]$ then
$$
ch\,L(A_0\setminus\{b\})=(-1)^{n-1}[ch\,E(b)-ch\,E(b+1)+\dots +
$$
$$
(-1)^{b}ch\,E(0)+(-1)^{b+1}(1-b)ch\,E(\emptyset)]
$$
$3)$ if $b\le -n$ then
$$
ch\,L(0,-1,\dots, 2-n,b)=
(-1)^{n-1}[ch\,E(b+1)-\dots+
$$
$$
(-1)^{b+1} ch\,E(0)+(-1)^{b}n\, ch\,E(\emptyset)]
$$
\end{corollary}
\begin{proof} Let us prove the first statement. It is enough to check that
\begin{equation}\label{one}
(P(A_0\setminus\{-n\}),E(\emptyset))=1,
\end{equation}
and
\begin{equation}\label{two}
(P(B),E(\emptyset))=0,\, \text{for any}\, B,\,|B|=n,\,B\subset\Bbb Z_{\le0},\,B\ne A_0\setminus\{-n\}.
\end{equation}
Equality (\ref{one}) follows from Lemma \ref{rank1}. Besides if $B\notin \mathcal P^{(1)}_n$ then equality (\ref{two}) follows from the definition of the set $\mathcal P^{(1)}_n$ and if $B\in \mathcal P^{(1)}_n$ then this equality follows from Lemma \ref{rank1}. The other two statements can be proved in the same manner.
\end{proof}
\begin{definition} Let us define a linear operator on the characters of Kac modules by the formula
$$
T(ch\,K(A))=ch\,K(\omega(A)),\quad \omega(a)=a-1
$$
\end{definition}
It is easy to see that in the category $\mathcal F^+$ the operator $T$ corresponds to tensor multiplication by the one dimensional module with the character $\frac{y_1\dots y_n}{x_1\dots x_m}$.
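Indeed, by the formula for the character of a Kac module we have
$$
\frac{y_1\dots y_n}{x_1\dots x_m}\,ch\,K(\tilde\chi)=\Delta(x,y)\,s_{\lambda-(1^m)}(x)\,s_{\mu+(1^n)}(y),
$$
where $\lambda-(1^m)=(\lambda_1-1,\dots,\lambda_m-1)$ and $\mu+(1^n)=(\mu_1+1,\dots,\mu_n+1)$, and replacing $(\lambda,\mu)$ by $(\lambda-(1^m),\mu+(1^n))$ decreases every element of $\tilde A$ and of $\tilde B$ by $1$, that is, it replaces the diagram $(\tilde A,\tilde B)$ by $(\omega(\tilde A),\omega(\tilde B))$.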
Now we are going to describe the action of the linear operator $T$ on the Euler characters.
\begin{lemma} The following formulae hold true (we suppose that $A,B\subset\Bbb Z_{\le0}$)
\begin{equation}\label{formula2}
T(ch\,E(B))=(-1)^{n+p}\left[ch\,E(\omega(B))-(-1)^pch\,E(\{0\}\cup\omega(B))\right]
\end{equation}
where $p=|B|$ and we suppose that $E(B)=0$ if the number of elements in $B$ is strictly greater than $n$.
\end{lemma}
\begin{proof} It is enough to prove the following equality for any set $A$ such that $|A|=n$
$$
(ch\,K(A),ch\,T(E(B)))=
$$
$$
(ch\,K(A),\,(-1)^{n+p}\left[ch\,E(\omega(B))-(-1)^pch\,E(\{0\}\cup\omega(B))\right])
$$
So let us calculate separately the left hand side and the right hand side. We have
$$
(ch\,K(A), ch\,TE(B))=\left(ch\,K(A),\frac{y_1\dots y_n}{x_1\dots x_n}ch\,E(B)\right)=
$$
$$
\left(\frac{x_1\dots x_n}{y_1\dots y_n}ch\,K(A),ch\,E(B)\right)
$$
$$
=(ch\,K(\omega^{-1}(A)),ch\,E(B))
$$
But by Theorem \ref{KacEuler} we have
$$
(ch\,K(\omega^{-1}(A)),ch\,E(B))=\begin{cases}0,\,\,\text{if}\,\, B\not\subset\omega^{-1}(A),\,\text{or}\,\,0\in A\\
(-1)^{\frac12p(p-1)+\frac12n(n-1)+S(\omega^{-1}(A)\setminus B)}
\,\text{otherwise}\end{cases}
$$
Let $B\not\subset \omega^{-1}(A)$ then $\omega(B)\not\subset A $ and $\omega(B)\not\subset \{0\}\cup A $. Therefore
$$
\left(ch\,K(A),ch\,E(\omega(B))\right)=0,\,\, \left(ch\,K(A),ch\,E(\{0\}\cup\omega(B))\right)=0
$$
Suppose that $0\in A$ and $B\subset \omega^{-1}(A)$. Then we have $\{0\}\cup\omega(B)\subset A$ and therefore
$$
\left(ch\,K(A),ch\,E(\omega(B))\right)=(-1)^{\frac12p(p-1)+\frac12n(n-1)+S(A\setminus\omega(B))}
$$
$$
\left(ch\,K(A),ch\,E(\{0\}\cup\omega(B))\right)=(-1)^{\frac12p(p+1)+\frac12n(n-1)+S(A\setminus\omega(B)\cup\{0\})}
$$
So formula (\ref{formula2}) holds true in this case.
Now consider the last possible case $0\notin A$ and $B\subset \omega^{-1}(A)$. In this case we have $\{0\}\cup\omega(B)\not\subset A$. Therefore
$$
\left(ch\,K(A),ch\,E(\{0\}\cup\omega(B))\right)=0
$$
and
$$
\left(ch\,K(A),ch\,E(\omega(B))\right)=(-1)^{\frac12p(p-1)+\frac12n(n-1)+S(A\setminus\omega(B))}
$$
The Lemma is proved.
\end{proof}
Now consider the case of the most atypical block for the Lie superalgebra $\frak{gl}(2,2)$ in the category $\mathcal F^+$. In order to give a reasonable description of the irreducible characters in this block we need some special type of graphs.
\begin{definition} Let $n\in \Bbb Z_{\ge0},\,m\in\Bbb Z_{\ge1}$. Let us denote by $\Gamma_{n,m}$ the graph with $n+m$ vertices that are the integers from the segment $[1-n-m,0]$ such that:
$1)$ there exists exactly one edge containing any two vertices from $[1-n, 0]$;
$2)$ there exists exactly one edge joining every vertex from $[1-n, 0]$ with every vertex from $[1-n-m, -n]$.
\end{definition}
\begin{definition} For every graph $\Gamma=\Gamma_{n,m}$ let us define the following element of the Grothendieck ring by the formula
$$
\chi(\Gamma)=ch\,E(\emptyset)-\sum_{v}\varepsilon(v)ch\,E(v)-\sum_{e}\varepsilon(e)ch\,E(e)
$$
where $\varepsilon (v)=(-1)^{i}$ if $v=\{i\}$ and $\varepsilon(e)=\varepsilon(v)\varepsilon(u)$ if $e$ contains $v,u$.
\end{definition}
\begin{remark} It is convenient to define $\chi(\Gamma_{n,m})$ also in the case when $n=-1$. In such a case we set $\chi(\Gamma_{n,m})=ch\,E(\emptyset)$.
\end{remark}
\begin{example}
$$
\chi(\Gamma_{-1,3})=ch\,E(\emptyset)
$$
$$
\chi(\Gamma_{2,1})=ch\,E(\emptyset)-ch\,E(0)+ch\,E(-1)-ch\,E(-2)+
$$
$$
ch\,E(0,-1)-ch\,E(0,-2)+ch\,E(-1,-2)
$$
$$
\chi(\Gamma_{0,2})=ch\,E(\emptyset)-ch\,E(0)+ch\,E(-1)
$$
\end{example}
\begin{thm} The following equalities hold true
\begin{equation}\label{first}
ch\,L(a,a-1)=\chi(\Gamma_{|a|-1,1}),\,\, a\le0
\end{equation}
\begin{equation}\label{second}
ch\,L(a,b)=(-1)^{a-b-1}\left[\chi(\Gamma_{|a|-1,1})+\chi(\Gamma_{|a|,a-b})\right],\,\, a-b\ge2,\,a\le0
\end{equation}
\end{thm}
\begin{proof} We are going to use the functor $T$.
In the case of $n=2$ the functor acts by the following formulae
$$
T(ch\,E(\emptyset))=ch\,E(\emptyset)-ch\,E(0),\,\,
$$
$$
T(ch\,E(a))=-ch\,E(a-1)-ch\,E(0,a-1)
$$
and
$$
T(ch\,E(a,b))=ch\,E(a-1,b-1)
$$
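These formulae are direct specialisations of formula (\ref{formula2}) with $n=2$: for $p=0$ it gives $(-1)^{2}\left[ch\,E(\emptyset)-ch\,E(0)\right]$; for $p=1$ it gives $(-1)^{3}\left[ch\,E(a-1)+ch\,E(0,a-1)\right]$; and for $p=2$ the term $ch\,E(\{0\}\cup\{a-1,b-1\})$ vanishes since its argument has three elements, so only $ch\,E(a-1,b-1)$ survives.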
It is not difficult to verify the following equality
$$
T(\chi(\Gamma_{n,m}))=\chi(\Gamma_{n+1,m})
$$
Further we see that $ch\,L(a,a-1)=T^{|a|}(ch\,E(\emptyset))$. We will prove equality (\ref{first}) by induction on $|a|$. If $a=0$ the equality is trivial: $ch\,L(0,-1)=ch\,E(\emptyset)$. Let $|a|>0$; then we have
$$
T^{|a|}(ch\,E(\emptyset))=T(T^{|a|-1}(ch\,E(\emptyset)))=T(\chi(\Gamma_{|a|-2,1}))=\chi(\Gamma_{|a|-1,1})
$$
Now let us prove equality (\ref{second}), also by induction on $|a|$. If $a=0$ and $b\le-2$ then by Corollary \ref{graph} we have
$$
ch\,L(0,b)=(-1)^{b+1}(2ch\,E(\emptyset)-ch\,E(0)+\dots+
$$
$$
(-1)^{b}ch\,E(b+1))=(-1)^{b+1}[ ch\,E(\emptyset)+\chi(\Gamma_{0,|b|})]
$$
If we apply the functor $T^r$ to both sides of the above formula then we get
$$
ch\,L(-r,b-r)=(-1)^{b+1}T^r[ ch\,E(\emptyset)+\chi(\Gamma_{0,|b|})]=
$$
$$
(-1)^{b+1}[\chi(\Gamma_{r-1,1})+\chi(\Gamma_{r,|b|})]
$$
If we replace $r$ by $-a$ and $b$ by $b-a$ we get the statement.
\end{proof}
\section{Acknowledgments}
This work was supported by the Ministry of Science and Higher Education of the Russian Federation in the framework of the basic part of the scientific research state task, project FSRR-2020-0006.
\section{Conclusion and outlook}\label{sec: conclude}
\section{Introduction}\label{sec-1}
The sea-surface disturbances whose trains of curved wavefronts trace the propagation of internal gravity waves on the ocean thermocline hundreds of meters below the surface may be observed in many areas of strong tidal flow.
For example, the passage of the Atlantic Ocean tides through the Gibraltar Strait produces trains of curved sea-surface wavefronts expanding into the Mediterranean Sea. Likewise, the passage of the Pacific Ocean tides through the Luzon Strait between Taiwan and the Philippines produces trains of curved sea-surface wavefronts expanding into the South China Sea. These coherent trains of expanding curved wavefront disturbances are easily observable because they are strongly nonlinear. Their sea-surface signatures in the South China Sea may even be seen from the Space Shuttle \cite{BLLH2017,LCHL1998,ZLL2014}, as prominent crests of wave trains move in great arcs hundreds of kilometres in length and traverse sea basins thousands of kilometres across. Figures \ref{fig: Gibralter} and \ref{fig: South China Sea} show SAR images of the signatures of internal waves on the sea surface.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth, height = \textwidth]{figures/13151-2871-2889-ERS-1_Gibraltar_full}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth, height = \textwidth]{figures/7661-0711-ERS-1_Gibraltar_full}
\end{subfigure}
\caption{Synthetic Aperture Radar (SAR) image of internal-wave signatures near the Gibraltar Strait.
Taken from \url{https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/ers/instruments/sar/applications/tropical/-/asset_publisher/tZ7pAG6SCnM8/content/oceanic-internal-waves}}
\label{fig: Gibralter}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth, height = .75\textwidth]{figures/12918-RADARSAT_South_China_full.jpg}
\caption{Synthetic Aperture Radar (SAR) image of internal-wave signatures on the South China Sea. Taken from \url{https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/ers/instruments/sar/applications/tropical/-/asset_publisher/tZ7pAG6SCnM8/content/oceanic-internal-waves}}
\label{fig: South China Sea}
\end{figure}
\paragraph{Multi-layer modelling of internal waves.}
The surface effects near the Gibraltar Strait observed in the SAR images in figure \ref{fig: Gibralter} are short-term effects. In contrast, the surface effects seen in SAR images near Dong Sha Atoll in the South China Sea in figure \ref{fig: South China Sea} are long-term effects involving internal wave propagation over hundreds of kilometers. The short-term behaviour of internal waves has been modelled with some success using the well-known multi-layer Green-Naghdi (MGN) equations \cite{JC2002}. However, longer-term modelling of these waves has been problematic, because MGN and its rigid-lid version, the Choi-Camassa (CC) equation \cite{CC1996,CC1999}, were both shown in \cite{LW1997} to be \emph{ill-posed} in the presence of either bathymetry or shear. For example, even the shear induced by a single travelling wave causes the linear growth-rate of a perturbation of MGN or CC solutions to increase without bound as a function of wave number.
Until recently, the ill-posedness of MGN or CC solutions had prevented convergence under grid refinement of the numerical simulations of these waves over long times, because the cascade of energy to smaller scales would eventually build up at the highest resolved wave number. Regularisation was possible by keeping higher-order terms in an asymptotic expansion, as in for example \cite{BC2009}. However, such methods tended to destroy the Hamiltonian property of the system and also degrade its travelling wave properties. Moreover, if one is to consider the problem of wave generation and propagation at sea, one must consider the effects of bathymetry and shear, both of which may induce instability. Thus, the MGN equations had to be modified to make them well-posed. A recent review of the various approaches to regularising the MGN is given in \cite{D2021}. The analysis in \cite{D2021} focuses on the \emph{Camassa-Holm regime} of asymptotic expansion for nonlinear shallow water waves defined in \cite{CL2009}. The present paper also focuses on this asymptotic expansion.
\noindent
The multi-layer square-root-${D}$ ($ML\sqrt{D}$) system of nonlinear wave equations introduced in \cite{CHP2010} satisfies three fundamental properties that would be desired in a viable regularisation of MGN. Namely,
\begin{enumerate}
\item $ML\sqrt{D}$ is linearly well-posed and Hamiltonian;
\item $ML\sqrt{D}$ preserves the MGN linear dispersion relation for fluid at rest;
\item $ML\sqrt{D}$ travelling wave solutions agree with the MGN and KdV $sech^2$ travelling wave forms in the absence of imposed background shear.
\end{enumerate}
Thus, the $ML\sqrt{D}$ Hamiltonian system remains well-posed in the presence of shear and its solutions agree with those of the MGN system in the absence of shear \cite{CHP2010}. With these properties in mind, we shall choose the $ML\sqrt{D}$ Hamiltonian system as the basis for the present work.
\paragraph{Aims of the present paper.} The overall aim of the present paper is to model the internal-wave surface signatures seen by SAR images such as those in figures \ref{fig: Gibralter} and \ref{fig: South China Sea}. For this purpose, the investigations of the present paper will focus on the theoretical and computational simulation properties of the solutions of the \emph{single-layer} case of $ML\sqrt{D}$, known as $1L\sqrt{D}$. The $1L\sqrt{D}$ model possesses three well-known variants. These are the two-component Camassa-Holm equation (CH2) and the modified CH2 equation (ModCH2) with $H^1$ and $H_{div}$ kinetic energy norms. We will derive these variants and then focus computational simulations on the ModCH2 equation with the $H^1$ kinetic energy norm, which relates to previous work in \cite{FHT2001,FHT2002,HOT2009,HS2013,HT2009}.
We are inspired by the Synthetic Aperture Radar (SAR) images of the internal-wave signatures of wavefronts on the sea surface shown in figure \ref{fig: Gibralter} and figure \ref{fig: South China Sea}. As mentioned earlier, these wavefronts are known to be driven by internal waves propagating on the interfaces of the stratified layers lying beneath the sea surface \cite{BLLH2017,LCHL1998,ZLL2014}. However, the SAR data only contains the wavefront signatures of the internal waves on the sea surface, as seen from a distance overhead by the Space Shuttle, for example. This means the below-surface processes of their formation cannot be directly observed. To describe the interactions among these wavefront signatures on the surface, we seek a \emph{minimal description} of their dynamics which involves only observable quantities. This minimal model is based on the single-layer version of $ML\sqrt{D}$ which accounts for both kinetic and potential energy. Specifically, we seek to model the formation and dynamics of trains of wavefronts arising from an initial impulse of momentum, or from an initial gradient of surface elevation. We also seek to derive the dynamics of their collisions, including their nonlinear reconnections. In fact, the model we seek would treat the data only as the motion of curves in two dimensions which make optimal use of their kinetic and potential energy over a certain horizontal interaction range. In particular, the minimal model would not attempt to describe the interactions among internal-waves beneath the surface which are believed to produce these wavefronts.
To formulate such a minimal model of wavefront dynamics, we will derive a sequence of approximate equations in the so-called \emph{Camassa-Holm regime} of nonlinear wave dynamics \cite{D2021}. Starting from the single-layer case ($1L\sqrt{D}$) we will derive the 2D version of the two-component Camassa-Holm equation (CH2). The 1D version of CH2 is well-known for its completely integrable Hamiltonian properties. However, here we will be working in 2D. From CH2, we will obtain the modified two-component Camassa-Holm equation (ModCH2). In 1D, ModCH2 possesses emergent weak solutions supported on points moving along the real line \cite{HT2009,HOT2009}. In the 2D doubly periodic planar case treated here, ModCH2 possesses emergent weak solutions supported on smooth curves embedded in 2D. However, the simulations here do not always capture the singular solutions, indicating that the formation of the singular solution may occur quite slowly. The moving curves in the simulations are meant to model the dynamics of the sea-surface signature wavefronts driven by internal waves interacting below, as seen in the SAR image data.
Computational simulations of ModCH2 in 2D will be used here to display the various types of interactions among these emergent wave profiles in 2D. These simulations show wave trains with both singular and non-singular profiles emerging from smooth initial conditions. This emergence is followed by reconnections among the wavefronts during their nonlinear interactions. Some of the intricate patterns seen during these 2D simulations turn out to be strikingly similar to those seen in the SAR images shown in figures \ref{fig: Gibralter} and \ref{fig: South China Sea}.
\paragraph{Plan of the paper.} The plan of the paper is as follows.
Section \ref{sec-1} has provided the problem statement and the main goal of the paper. Namely, we aim to formulate a minimal model of the dynamical wavefront behaviour seen in SAR images such as those shown in figures \ref{fig: Gibralter} and \ref{fig: South China Sea}. We have listed the desired aspects of such a model. These desiderata have already been met in the derivation of the $ML\sqrt{D}$ model, which provides a well-posed multi-layer description of internal waves in \cite{CHP2010}. Thus, this section has set the context for the remainder of the paper's investigation of single-layer wavefront interaction dynamics.
Section \ref{sec-2} begins by showing that a certain approximation of the $1L\sqrt{D}$ system easily yields the Hamiltonian two-component Camassa-Holm equation (CH2) in 2D. In one spatial dimension (1D), the CH2 equation is known to be completely integrable by the isospectral method \cite{CZ2006,F2005}. Its 2D behaviour will be discussed here only briefly, because our main goal is the study of a further approximation, which yields the modified CH2 system (ModCH2). As discovered in \cite{HT2009}, the solution ansatz for the dominant behaviour of ModCH2 is given by the singular momentum map discussed in Theorem \ref{singsolnmommap-thm}, equation \eqref{sing-soln-thm}, in any number of spatial dimensions.
In 1D, these singular solutions were shown to emerge and dominate the ModCH2 dynamics arising from all of the smooth, spatially confined initial conditions discussed in \cite{HOT2009}. As we shall see, the 2D ModCH2 solutions simulated here will not always capture the sharp peaks of the singular solutions.
Section \ref{sec-3} presents selected computational simulations for several classes of solution behaviour, including wavefront collisions and nonlinear reconnections that are quite reminiscent of the wavefront interaction data shown in the SAR images in figures \ref{fig: Gibralter} and \ref{fig: South China Sea}. Our simulations are meant to illuminate the variety of interaction behaviours of the singular momentum map solutions of ModCH2 in 2D. These simulations and our observations of their solution behaviour comprise the primary contribution of the present paper. Additional simulations and videos of their dynamics are provided in the supplementary materials.
\section{Euler-Poincar\'e equations for nonlinearly dispersive gravity waves}\label{sec-2}
\paragraph{Euler-Poincar\'e formulation of the $1L\sqrt{D}$ equation in 2D}$\,$
The Euler-Poincar\'e formulation of the 2D $1L\sqrt{D}$ equation follows from Hamilton's principle $\delta S = 0$ with $ S = \int_0^T \ell(\bs{u},D)dt$ for the following Lagrangian,%
\footnote{In the $1L\sqrt{D}$ Lagrangian, the term representing kinetic energy of vertical motion is proportional to the Fisher-Rao metric, which appears in probability theory. See e.g., \cite{BR1982} for a fundamental discussion of the Fisher-Rao metric and other generalised information metrics in probability theory. The Fisher-Rao metric is also important in information geometry \cite{V2019}. An equivalent form of the Lagrangian $\ell_{1L\sqrt{D}}$ in terms of spatial gradients is given in \eqref{Lag-1L-B}.}
\begin{align}
\ell_{1L\sqrt{D}}(\bm{u},D)=\frac{1}{2}\int D|\bm{u}|^{2}
+\frac{d^{2}}{3}\left(\frac{\partial}{\partial t}\sqrt{D}\right)^{2}
-g\big(D-b(\bs{x})\big)^{2}\mathrm{d}x\,\mathrm{d}y\,.
\label{Lag-1L-A}
\end{align}
Here, we denote fluid velocity as $\bm{u}$, constant mean layer thickness as $d$, bathymetry as $b(\bs{x})$ with $\bs{x}=(x,y)$, and the total depth as $D$, the last of which satisfies the following advection equation,
\begin{align}
\frac{\partial D}{\partial t} = -\, {\rm div} (D \bm{u})
\,.
\label{eq-D-cont}
\end{align}
In standard form for fluid dynamics, the motion equation for $1L\sqrt{D}$ is expressed as
\begin{align}
\frac{\partial\bm{u}}{\partial t}+\bm{u}\cdot\nabla\bm{u}=
-g\nabla\big(D-b(\bs{x})\big)
-\frac{d^{2}}{6}\nabla\left(\frac{1}{\sqrt{D}}\frac{\partial^{2}\sqrt{D}}{\partial t^{2}}\right)
\,.
\label{eq-fluid-1L}
\end{align}
\begin{remark}[An alternative form of the $1L\sqrt{D}$ Lagrangian and energy conservation]$\,$
Upon substituting the continuity equation \eqref{eq-D-cont} into the $1L\sqrt{D}$ Lagrangian in \eqref{Lag-1L-A}, one finds an equivalent Lagrangian, written as the difference of the kinetic and gravitational potential energies,
\begin{align}
\ell_{1L\sqrt{D}}(\bm{u},D)=\frac{1}{2}\int \bm{u}\cdot Q_{op(D)}\bm{u}
-g\big(D-b(\bs{x})\big)^{2}\mathrm{d}x\,\mathrm{d}y\,.
\label{Lag-1L-B}
\end{align}
Here, the symmetric, positive-definite operator $Q_{op}(D)$ is defined by its action on the velocity vector, as
\begin{align}
Q_{op}(D)\bm{u} = \Big(D - \frac{d^2}{12}\big(D(\nabla D^{-1}{\rm div})D\big)\Big)\bm{u}
\,.
\label{def-Qop}
\end{align}
After an integration by parts, the conserved sum of the kinetic and potential energies may be expressed as
\begin{align}
E_{1L\sqrt{D}}(\bm{u},D)=\frac{1}{2}\int D\left[|\bm{u}|^2 + \frac{d^2}{12 D^2}({\rm div}D\bm{u})^2\right]
+ g\big(D-b(\bs{x})\big)^{2}\mathrm{d}x\,\mathrm{d}y\,.
\label{Lag-1L-erg}
\end{align}
The conserved total energy in \eqref{Lag-1L-erg} can be regarded as a metric on the space of smooth vector fields and densities over $\mathbb{R}^2$, $(\bm{u},D)\in\mathfrak{X}(\mathbb{R}^2)\times {\rm Den}(\mathbb{R}^2)$. Hence, one can write the total energy for $1L\sqrt{D}$ in \eqref{Lag-1L-erg} as a squared norm which defines the following metric on $\mathfrak{X}(\mathbb{R}^2)\times {\rm Den}(\mathbb{R}^2)$,
\begin{align}
E_{1L\sqrt{D}}(\bm{u},D)=: \frac{1}{2}\|(\bm{u},D)\|^2 =: \frac{1}{2} G\big((\bm{u},D)\,,\,(\bm{u},D) \big)
\,.\label{Lag-1L-erg-metric}
\end{align}
The Lie-Poisson Hamiltonian structure of the $1L\sqrt{D}$ model in equations \eqref{eq-D-cont} and \eqref{eq-fluid-1L} with energy \eqref{Lag-1L-erg} is discussed along with two other models to follow in remark \ref{remark-LPstructure}.
\end{remark}
\begin{remark}[Kelvin circulation theorem for the $1L\sqrt{D}$ motion equation \eqref{eq-fluid-1L}]$\,$
Substituting the Lie-derivative relation,
\begin{align}
\mathcal{L}_{u}(\bm{u}\cdot d\bm{x})
= \Big(\bm{u}\cdot\nabla\bm{u} + \nabla \frac{|\bm{u}|^2}{2}\Big)\cdot d\bm{x}
\,,
\label{eq-vector-id}
\end{align}
into the motion equation for $1L\sqrt{D}$ in \eqref{eq-fluid-1L}
implies the following Kelvin theorem for preservation of circulation,
\begin{align}
\frac{d}{dt}\oint_{c(\bm{u})} \bm{u}\cdot {\rm d}\bm{x}
= \oint_{c(\bm{u})} \big(\partial_t
+ \mathcal{L}_{u} \big)(\bm{u}\cdot d\bm{x})
= 0\,,
\label{Kel-thm}
\end{align}
for any material loop $c(\bm{u})$ moving with the fluid flow.
\paragraph{The $1L\sqrt{D}$ model admits potential flows.}
The motion equation for $1L\sqrt{D}$ in \eqref{eq-fluid-1L} and the vector calculus relation in \eqref{eq-vector-id} imply that if ${\rm curl}\bm{u}=0$ initially, then it will remain so. In this case, the corresponding velocity potential $\phi(\bm{x},t)$ for curl-free flows given by $\bm{u}=\nabla\phi$ satisfies a Bernoulli equation given by,
\begin{align}
\partial_t \phi + \frac12 |\nabla\phi|^2 = -\, g \big(D-b(\bs{x})\big)
-\frac{d^{2}}{6} \left(\frac{1}{\sqrt{D}}\frac{\partial^{2}\sqrt{D}}{\partial t^{2}}\right)
\,,
\label{eq-Bernoulli}
\end{align}
and the continuity equation in \eqref{eq-D-cont} becomes
\begin{align}
\frac{\partial D}{\partial t} = -\, {\rm div} (D \nabla\phi)
\,,
\label{eq-D-cont-phi}
\end{align}
for potential flows $\bm{u}=\nabla\phi$.
\smallskip
\noindent
\end{remark}
\begin{proposition}
The Euler-Poincar\'e equation for the Lagrangian functional $\ell_{1L\sqrt{D}}(\bm{u},D)$ in \eqref{Lag-1L-A} yields the $1L\sqrt{D}$ motion equation \eqref{eq-fluid-1L} in 2D.
\end{proposition}
\begin{proof}
The Euler-Poincar\'e equation for a Lagrangian functional $\ell(\bm{u},D)$ is given by \cite{CHMR1998,HMR1998}
\begin{align}
\partial_t \frac{\delta \ell}{\delta \bm{u}} + (\bm{u}\cdot \nabla) \frac{\delta \ell}{\delta \bm{u}}
+ \frac{\delta \ell}{\delta \bm{u}}({\rm div} \bm{u})
+ \frac{\delta \ell}{\delta u^j} \nabla u^j
=
D \nabla \frac{\delta \ell}{\delta D}
\,.
\label{EP-eqn}
\end{align}
The corresponding variational derivatives of the Lagrangian functional $\ell_{1L\sqrt{D}}(\bm{u},D)$ in \eqref{Lag-1L-A} are given by
\begin{align}
\frac{\delta \ell_{1L\sqrt{D}}}{\delta \bm{u}} = D\bm{u}
\quad\hbox{and}\quad
\frac{\delta \ell_{1L\sqrt{D}}}{\delta D} = \frac12|\bm{u}|^2 - g \big(D-b(\bs{x})\big)
-\frac{d^{2}}{6} \left(\frac{1}{\sqrt{D}}\frac{\partial^{2}\sqrt{D}}{\partial t^{2}}\right)
\,,
\label{Lag-var-1LD}
\end{align}
in which the variations with respect to $\bm{u}$ and $D$ are taken independently.
Substitution of the variational derivatives in \eqref{Lag-var-1LD} and the continuity equation \eqref{eq-D-cont} into the Euler-Poincar\'e equation \eqref{EP-eqn} yields the motion equation for $1L\sqrt{D}$ in \eqref{eq-fluid-1L}.
\end{proof}
\paragraph{Deriving the 2-component Camassa-Holm equation (CH2) in 2D}$\,$
The CH2 equation can be immediately derived as a certain approximation of the $1L\sqrt{D}$ equation. To show this derivation directly, we first introduce an alternative form for the $1L\sqrt{D}$ Lagrangian.
The alternative form is derived by inserting the advection equation \eqref{eq-D-cont} into formula \eqref{Lag-1L-A} for the $1L\sqrt{D}$ Lagrangian to find the equivalent expression,
\begin{align}
\ell_{1L\sqrt{D}}(\bm{u},D)=\frac{1}{2}\int D \left(|\bm{u}|^{2}
+\frac{d^{2}}{12D^2}\left({\rm div}D\bm{u}\right)^{2}\right)
-g\big(D-b(\bs{x})\big)^{2}\mathrm{d}x\,\mathrm{d}y
\,.
\label{Lag-1L-B-redux}
\end{align}
We now set $D=d$ in the \emph{kinetic energy} terms only, to find
\begin{align}
\ell_{CH2}(\bm{u},D)=\frac{1}{2}\int d \left(|\bm{u}|^{2}
+\frac{d^{2}}{12}\left({\rm div}\bm{u}\right)^{2}\right)
-g\big(D-b(\bs{x})\big)^{2}\mathrm{d}x\,\mathrm{d}y
\,.
\label{Lag-CH2}
\end{align}
\begin{proposition}
The Euler-Poincar\'e equation for the Lagrangian functional $\ell_{CH2}(\bm{u},D)$ in \eqref{Lag-CH2} yields the CH2 equation with bathymetry $b(\bs{x})$ in 2D, as follows.
\begin{align}
\partial_t \bm{m'} + (\bm{u}\cdot \nabla) \bm{m'}
+ \bm{m'}({\rm div} \bm{u})
+ m'_j \nabla u^j
=
-gD\nabla\big(D-b(\bs{x})\big)
\,.
\label{eq-CH2}
\end{align}
\end{proposition}
\begin{proof}
The corresponding variational derivatives of the Lagrangian functional $\ell_{CH2}(\bm{u},D)$ in \eqref{Lag-CH2} are given by
\begin{align}
\bm{m'}:=\frac{\delta \ell_{CH2}}{\delta \bm{u}}
= d \Big(\bm{u} - \frac{d^{2}}{12}\nabla {\rm div}\bm{u} \Big)
=: Q_{op}(d)\bm{u}
\qquad\hbox{and}\qquad
\frac{\delta \ell_{CH2}}{\delta D} = -\, g \big(D-b(\bs{x})\big)
\,.
\label{Lag-var-CH2}
\end{align}
The Euler-Poincar\'e equation \eqref{EP-eqn} for the variational derivatives in \eqref{Lag-var-CH2} yields the CH2 equation in \eqref{eq-CH2}.
\end{proof}
\begin{remark}
Setting $g=0$ and $d=2$ in the CH2 equation \eqref{eq-CH2} recovers the two-dimensional version of the Camassa-Holm equation derived in \cite{KSD2001}. For a comprehensive survey of the role of the Camassa-Holm equation in the wider context of nonlinear shallow water equations, see \cite{I-K2021}.
\end{remark}
\begin{remark}[Solving for velocity $\bm{u}$ from momentum $\bm{m'}$ with the grad-div operator in \eqref{Lag-var-CH2}]\label{rem CH2-solve}$\,$
In equation \eqref{Lag-var-CH2} we see that ${\rm curl}(\bm{m'}/d) ={\rm curl} \bm{u}$ and ${\rm div} (\bm{m'}/d) = (1-\frac{d^2}{12}\Delta){\rm div} \bm{u}$. To take a time step in equation \eqref{eq-CH2} we need to determine $\bm{u}$ from $(\bm{m'}/d)$. One could formally write $G_{Q_{op}(d)}*(\bm{m'}/d) = \bm{u}$, in which one implicitly defines the Green function $G_{Q_{op}(d)}$ for the operator $Q_{op}(d)$ in equation \eqref{Lag-var-CH2}. However, it is also useful to verify that the solution for the velocity $\bm{u}$ from the CH2 momentum $(\bm{m'}/d)$ can be implemented directly, without solving for the Green function explicitly.
The velocity $\bm{u}$ can be obtained from the momentum $\bm{m'}$ via the linear operator relation between them in equation \eqref{Lag-var-CH2}. For this, one begins with the Hodge decomposition for the velocity. Namely,
\begin{align}
\bm{u} = {\rm curl}\bm{A} + \nabla \phi
\,.\label{Hodge-CH2}
\end{align}
Here, the vector potential $\bm{A}$ is divergence-free ${\rm div}\bm{A}=0$, has zero mean $\int_{\cal D} \bm{A} d^2x =0$, and satisfies Neumann boundary conditions, $\partial_n\bm{A}|_{\partial \cal D}=0$. The scalar potential $\phi$ vanishes at the boundary. With these conditions, the vector and scalar potentials each satisfy Poisson equations,
\begin{align}
{\rm div}\bm{u}=\Delta\phi
\,, \quad\hbox{and}\quad
{\rm curl}\bm{u}=-\Delta\bm{A}
\,.
\label{PoissonEqns-CH2}
\end{align}
Taking the curl and divergence of the defining relation in \eqref{Lag-var-CH2} for the momentum $\bm{m'}$ in terms of the velocity $\bm{u}$ then yields the inversion formulas for the velocity potentials,
\begin{align}
{\rm div}(\bm{m'}/d) = (1-\frac{d^2}{12}\Delta){\rm div}\bm{u} = (1-\frac{d^2}{12}\Delta)\Delta\phi
\,, \quad\hbox{and}\quad
{\rm curl}(\bm{m'}/d) = {\rm curl}\bm{u}=-\Delta\bm{A}
\,.\label{m-u-CH2}
\end{align}
Inverting the relations in \eqref{m-u-CH2} for the vector and scalar velocity potentials $\bm{A}$ and $\phi$ then yields the velocity via the Hodge decomposition in \eqref{Hodge-CH2}.
\end{remark}
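In practice, this inversion is immediate in Fourier space on a doubly periodic domain, since only the curl-free (potential) part of the momentum feels the Helmholtz factor $(1+\frac{d^2}{12}|\bm{k}|^2)$, while the divergence-free part passes through unchanged, in agreement with ${\rm curl}(\bm{m'}/d)={\rm curl}\bm{u}$. The following is a minimal \texttt{numpy} sketch of the inversion, equivalent to the Hodge route just described; the function name and grid conventions are ours, for illustration, and are not taken from the code used for the simulations reported in section \ref{sec-3}.
\begin{verbatim}
import numpy as np

def invert_grad_div(m, alpha, L=2*np.pi):
    # Recover u from m = (1 - alpha^2 grad div) u on a doubly periodic
    # 2D grid, with m of shape (2, n, n) and alpha^2 = d^2/12.
    n = m.shape[-1]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)          # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2safe = np.where(k2 > 0.0, k2, 1.0)          # avoid 0/0 at the mean mode
    mx, my = np.fft.fft2(m[0]), np.fft.fft2(m[1])
    kdotm = (kx*mx + ky*my)/k2safe                # (k . m_hat)/|k|^2
    fac = kdotm*(alpha**2*k2)/(1.0 + alpha**2*k2) # shrink the curl-free part
    ux = np.fft.ifft2(mx - fac*kx).real
    uy = np.fft.ifft2(my - fac*ky).real
    return np.stack([ux, uy])
\end{verbatim}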
\begin{remark}[Kelvin theorem and conservation laws for the CH2 motion equation \eqref{eq-CH2}]\label{KelThm-conslawsCH2}$\,$
A geometric way of writing the Euler-Poincar\'e equation for any Lagrangian functional $\ell(\bm{u},D)$ in $n$ dimensions arises by regarding the fluid velocity as a vector field, denoted $u$, the depth as an $n$-form, denoted ${\sf D}$, and its dual momentum density as a 1-form density, denoted $m:={\delta \ell}/{\delta u}$. In coordinates, this is \cite{CHMR1998,HMR1998}
\begin{align}
u :=\bm{u}\cdot\nabla \in \mathfrak{X}(\mathbb{R}^n)
\,,\quad
{\sf D}=D\,{\rm d}^nx\in {\rm Den}(\mathbb{R}^n)
\quad\hbox{and}\quad
\frac{\delta \ell}{\delta u}
:= \frac{\delta \ell}{\delta \bm{u}} \cdot {\rm d} \bm{x}\otimes {\rm d}^nx \in \mathfrak{X}^*(\mathbb{R}^n)
\,.
\label{u&m-def}
\end{align}
For the Euler-Poincar\'e variational principle, one also assumes a natural $L^2$ pairing, $\langle\,\cdot\,,\,\cdot\,\rangle$, so that
\begin{align}
0 = \delta S = \int \left\langle \frac{\delta \ell}{\delta u}\,,\, \delta u \right\rangle
+ \left\langle \frac{\delta \ell}{\delta {\sf D}}\,,\, \delta {\sf D} \right\rangle
\,{\rm d}t
=
0
\,.
\label{EP-pair}
\end{align}
In this framework, the Euler-Poincar\'e equation \eqref{EP-eqn} for a Lagrangian functional $\ell(u,{\sf D})$ and the auxiliary equation for the advection of density $D$ are given by \cite{CHMR1998,HMR1998}
\begin{align}
\big({\partial_t} + \mathcal{L}_u\big) \frac{\delta \ell}{\delta u}
=
{\sf D}\, {\rm d} \frac{\delta \ell}{\delta {\sf D}}
\quad\hbox{and}\quad
\big({\partial_t} + \mathcal{L}_u\big) {\sf D} = 0
\,.
\label{EP-eqn-Lie}
\end{align}
Here, ${\sf D}$ is a density and ${\delta \ell}/{\delta {\sf D}}$ is a scalar function, according to our $L^2$ pairing in \eqref{EP-pair}.
The Kelvin circulation theorem in this framework is then proved, as follows,
\begin{align}
\frac{d }{d t}\oint_{c(u)} \frac{1}{\sf D}\frac{\delta \ell}{\delta u}
=
\oint_{c(u)} \big({\partial_t} + \mathcal{L}_u\big) \Big(\frac{1}{\sf D}\frac{\delta \ell}{\delta u}\Big)
=
\oint_{c(u)} {\rm d} \frac{\delta \ell}{\delta {\sf D}} = 0
\,.
\label{Kel-Lie}
\end{align}
In particular, according to \eqref{eq-CH2} and \eqref{Lag-var-CH2}, the Kelvin circulation theorem for CH2 with Lagrangian $\ell_{CH2}$ in \eqref{Lag-CH2} is given by
\begin{align}
\frac{d }{d t}\oint_{c(u)} \frac{1}{\sf D}\frac{\delta \ell_{CH2}}{\delta u}
=
\frac{d }{d t}\oint_{c(u)} \frac{d}{D}\Big(\bm{u} - \frac{d^{2}}{12}\nabla {\rm div}\bm{u} \Big)\cdot {\rm d}\bm{x}
= 0
\,.
\label{Kel-CH2}
\end{align}
For CH2 in 2D, applying the Stokes theorem to the Kelvin theorem \eqref{Kel-CH2} implies conservation along flow trajectories of a \emph{potential vorticity}. Namely,
\begin{align}
\partial_t \sigma + \bm{u} \cdot\nabla \sigma = 0
\,,\quad\hbox{where}\quad
\sigma := \bm{\hat{z}}\cdot {\rm curl}\left(
\frac{d}{D}\Big(\bm{u} - \frac{d^{2}}{12}\nabla {\rm div}\bm{u} \Big)\right)
\,.
\label{Stokes-CH2-2D}
\end{align}
This means, in particular, that if $\sigma$ vanishes initially, it will continue to do so.
Moreover, equation \eqref{Stokes-CH2-2D} and the continuity equation in \eqref{eq-D-cont} imply preservation of the integral quantities (enstrophies) given by
\begin{align}
C_\Phi:=\int \Phi(\sigma)\,D{\rm d}x{\rm d}y
\,,
\label{Casimirs-CH2}
\end{align}
for any differentiable function $\Phi$.
For CH2 in 3D, applying the Stokes theorem to the Kelvin circulation conservation law for the CH2 model in \eqref{Kel-CH2} implies the advection of a potential-vorticity vector field, $\bm{\sigma}$, given in components by
\begin{align}
\partial_t \bm{\sigma} + \bm{u} \cdot\nabla \bm{\sigma} - \bm{\sigma}\cdot\nabla\bm{u} = 0
\,,\quad\hbox{where}\quad
\bm{\sigma} := D^{-1} {\rm curl}\bm{v}
\,,\quad\hbox{with}\quad
\bm{v} = D^{-1}\bm{m'} =
\frac{d}{D}\Big(\bm{u} - \frac{d^{2}}{12}\nabla {\rm div}\bm{u} \Big)
\,.
\label{Stokes-CH2-3D}
\end{align}
In 3D, the CH2 equation \eqref{Stokes-CH2-3D} and the continuity equation in \eqref{eq-D-cont} imply preservation of the integral quantity (helicity) given by
\begin{align}
\Lambda :=\int_{\mathcal{B}_t } \bm{v}\cdot{\rm curl}\bm{v} \,{\rm d}^3x
= \int_{\mathcal{B}_t } \bm{v}\cdot{\rm d}\bm{x}\wedge {\rm d} (\bm{v}\cdot{\rm d}\bm{x})
\,.
\label{helicity-CH2-3D}
\end{align}
The helicity integral in \eqref{helicity-CH2-3D} is taken over any volume (blob) ${\cal B}_t = \phi_t{\cal B}_0$ of fluid moving with the flow, $\phi_t$, with outward normal boundary condition ${\rm curl}\bm{v}\cdot \bm{\hat{n}}=0 $ on the surface $\partial {\cal B}$.
\end{remark}
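As a practical aside, the 2D conservation laws \eqref{Stokes-CH2-2D} and \eqref{Casimirs-CH2} furnish useful diagnostics for spectral simulations of the kind reported in section \ref{sec-3}. The following is a minimal \texttt{numpy} sketch of such a diagnostic on a square doubly periodic grid; the helper names are ours, and the sketch is illustrative rather than production code.
\begin{verbatim}
import numpy as np

def _wavenumbers(n, L=2*np.pi):
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    return np.meshgrid(k, k, indexing='ij')

def _deriv(f, k1):
    # spectral derivative of f along the axis whose wavenumbers are k1
    return np.fft.ifft2(1j*k1*np.fft.fft2(f)).real

def potential_vorticity(u, D, d, L=2*np.pi):
    # sigma = zhat . curl((d/D)(u - (d^2/12) grad div u)), eq. (Stokes-CH2-2D)
    kx, ky = _wavenumbers(u.shape[-1], L)
    divu = _deriv(u[0], kx) + _deriv(u[1], ky)
    vx = (d/D)*(u[0] - (d**2/12)*_deriv(divu, kx))
    vy = (d/D)*(u[1] - (d**2/12)*_deriv(divu, ky))
    return _deriv(vy, kx) - _deriv(vx, ky)

def enstrophy(Phi, sigma, D, L=2*np.pi):
    # Casimir C_Phi = int Phi(sigma) D dx dy, eq. (Casimirs-CH2)
    return float(np.sum(Phi(sigma)*D))*(L/sigma.shape[-1])**2
\end{verbatim}
Monitoring, say, \texttt{enstrophy(lambda s: s**2, sigma, D)} along a run should return a constant, up to discretisation error, if $\sigma$ is advected correctly.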
\paragraph{Deriving the Modified 2-component Camassa-Holm equation (ModCH2) in 2D and 3D}$\,$
\noindent
To derive the ModCH2 model equations, we modify the \emph{potential energy} terms in the Lagrangian \eqref{Lag-CH2} for the CH2 model, as follows
\begin{align}
\begin{split}
\ell_{ModCH2}(\bm{u},D)&=\frac{1}{2}\int d \left(|\bm{u}|^{2}
+\frac{d^{2}}{12}\left({\rm div}\bm{u}\right)^{2}\right)
\\& \qquad -g\left[\big(D-b(\bs{x})\big)G_{Q_{op}(d)}*\big(D-b(\bs{x})\big)
\right]\mathrm{d}^nx
\,,
\end{split}
\label{Lag-ModCH2}
\end{align}
where convolution with the Green function $G_{Q_{op}(d)}$ acts as a smoothing operator in the potential energy term in the ModCH2 Lagrangian. The $({\rm div}\bm{u})^2$ term in \eqref{Lag-ModCH2} effectively replaces the vertical kinetic energy by the divergence of the horizontal velocity.
\begin{proposition}
The Euler-Poincar\'e equation for the Lagrangian functional $\ell_{ModCH2}(\bm{u},D)$ in \eqref{Lag-ModCH2} yields the following \emph{modified} CH2 equation,
\begin{align}
\partial_t \bm{m} + (\bm{u}\cdot \nabla) \bm{m}
+ \bm{m}({\rm div} \bm{u})
+ m_j \nabla u^j
=
-gD\nabla G_{Q_{op}(d)}*\big({D}-{b}(\bs{x})\big)
=:
-gD\nabla\big(\overline{D}-\overline{b}(\bs{x})\big).
\label{eq-ModCH2}
\end{align}
\end{proposition}
\begin{proof}
The corresponding variational derivatives of the Lagrangian functional $\ell_{ModCH2}(\bm{u},D)$ in equation \eqref{Lag-ModCH2} are given by
\begin{align}
\bm{m} := \frac{\delta \ell_{ModCH2}}{\delta \bm{u}}
= d \Big(\bm{u} - \frac{d^{2}}{12}\nabla {\rm div}\bm{u} \Big)
\quad\hbox{and}\quad
\frac{\delta \ell_{ModCH2}}{\delta D} = -g \big(\overline{D}-\overline{b}(\bs{x})\big)
\,.
\label{Lag-var-ModCH2}
\end{align}
The Euler-Poincar\'e equation \eqref{EP-eqn} for the variational derivatives in \eqref{Lag-var-ModCH2} yields the ModCH2 equation in \eqref{eq-ModCH2}.
\end{proof}
In summary, the velocity $\bm{u}$ in the ModCH2 equation \eqref{eq-ModCH2} is obtained from momentum $\bm{m}$ at each time step by inverting the grad-div Helmholtz operator in \eqref{Lag-var-ModCH2} as explained in remark \ref{rem CH2-solve}. This procedure is valid in both 2D and 3D.
\begin{remark}[Kelvin theorem, conservation laws and an additional property for ModCH2]\label{KelThmConsMomap-ModCH2}$\,$
\noindent
In combination with the continuity equation for the total depth as $D$ in \eqref{eq-D-cont}, the Kelvin theorem and conservation laws for ModCH2 may be obtained as analogues of those for CH2 in both 2D and 3D in remark \ref{KelThm-conslawsCH2}.
However, ModCH2 was introduced in \cite{HOT2009} to provide an additional structural feature which goes beyond the CH2 equation. Namely, ModCH2 is both a geodesic equation and an Euler-Poincar\'e equation. In the next section, we will discuss the implications of these dual properties.
\end{remark}
In addition to possessing the same Kelvin theorem and all of the corresponding conservation laws for CH2 discussed in remark \ref{KelThm-conslawsCH2} which accompany its derivation as an Euler-Poincar\'e equation, the Lie-Poisson Hamiltonian formulation of the ModCH2 equation places it into a class of equations which admit singular momentum map solutions in any number of dimensions. This is the subject of the next section.
\subsection{Singular momentum map solutions for Modified CH2 (ModCH2)}\label{ssec: singmomap}
The purpose of this section is to explain how the dual properties of ModCH2 in being both a geodesic equation and an Euler-Poincar\'e equation endow it with singular momentum map solutions in any number of dimensions.
That is, ModCH2 admits singular solutions that are represented as a sum over Dirac deltas supported on curves in the plane, or surfaces in three dimensions, which are advected by the flow of the currents which they themselves induce throughout the rest of the domain.
Specifically, the singular solutions are given in Theorem \ref{singsolnmommap-thm} by
\begin{equation}
\big({\bf m},D\big)=\sum_{i=1}^N\int\!\big({\bf P}_i(s,t),w_i(s)\big)\,\delta\!\left({\bf
x-Q}_i(s,t)\right)\,{\rm d}^ks
\,,
\label{SDsingsoln}
\end{equation}
where $s$ is a coordinate on a submanifold $S$ of $\mathbb{R}^n$, exactly as in the case of EPDiff. For $\mathbb{R}^2$, the case dim$\,S=1$ yields fluid variables supported on filaments moving under the action of the diffeomorphisms, while for $\mathbb{R}^3$ dim$\,S=2$ yields fluid variables supported on moving surfaces.
The geometric setting of the peakon solutions of the Camassa-Holm equation and its extension to pulson solutions of EPDiff was established in \cite{HMR1998}. Following the reasoning in \cite{HM2005,HT2009},
one may interpret ${\bf Q}_i$ in \eqref{SDsingsoln} as a smooth embedding in Emb$(S,\mathbb{R}^n)$ and $P_i={\bf P}_i\cdot{\rm d}{\bf Q}_i$ (no sum) as the canonical 1-form on the cotangent bundle $T^*$Emb$(S,\mathbb{R}^n)$ for the $i$-th smooth embedding.
In a sense, the singular ModCH2 wave-currents are analogues for nonlinear wave dynamics of point vortices in 2D and vortex lines in 3D for Euler fluid dynamics. However, unlike point vortices and vortex lines in Euler fluid dynamics, the singular ModCH2 wave-currents can emerge spontaneously from smooth, spatially confined initial conditions.
\begin{remark}[Shared Lie-Poisson Hamiltonian structure]\label{remark-LPstructure}
As we have seen, all of the models $1L\sqrt{D}$, CH2 and ModCH2 yield semidirect-product Euler-Poincar\'e equations in the class EP(Diff$\,\circledS\,\mathcal{F}$) in equation \eqref{EP-eqn}. Here, $\mathcal{F}$ comprises the smooth scalar functions of the densities ${\sf D}=D\,{\rm d}^nx\in {\rm Den}(\mathbb{R}^n)$ and $\circledS$ denotes the semidirect-product action \cite{CHMR1998,HMR1998}.
In $n$ dimensions, the corresponding Lie-Poisson Hamiltonian equations can be obtained from the Legendre transformation,
\begin{align}
h(m,{\sf D}):= \langle m,u\rangle - \ell(u,{\sf D})
\,.\label{Leg-xform}
\end{align}
The variational derivatives of the Hamiltonian are given by
\begin{align}
\delta h(m,{\sf D}) = \big\langle \delta m,u\big\rangle
+ \Big\langle m - \frac{\delta \ell}{\delta u},\delta u \Big\rangle
+ \Big\langle -\,\frac{\delta \ell}{\delta {\sf D}},\delta {\sf D} \Big\rangle
\,.\end{align}
Under the Legendre transformation \eqref{Leg-xform}, the semidirect-product Lie-Poisson Hamiltonian equations corresponding to the Euler-Poincar\'e equations in \eqref{EP-eqn} can be written in three-dimensional matrix component form, as \cite{CHMR1998,HMR1998}
\begin{align}
\frac{\partial }{\partial t }
\begin{bmatrix}
m_i \\
D
\end{bmatrix}
=
-
\begin{bmatrix}
\partial_j m_i + m_i \partial_j & D \partial_i
\\
\partial_j D & 0
\end{bmatrix}
\begin{bmatrix}
\frac{\delta h}{\delta m_j} = u^j \\
\frac{\delta h}{\delta D} = - \frac{\delta \ell}{\delta D}
\end{bmatrix}
\,.
\label{LP-eqn}
\end{align}
In \eqref{LP-eqn}, one sums over repeated spatial component indices, $i,j=1,2,3$, for each of the Lagrangians $\ell_{1L\sqrt{D}}$, $\ell_{CH2}$, and $\ell_{ModCH2}$, and all three motion equations share the continuity equation for the total depth, $D$,
\begin{align}
\frac{\partial D}{\partial t} = -\, {\rm div} (D \bm{u})
\,.
\label{eq-D-cont-redux}
\end{align}
When the Lie-Poisson matrix form \eqref{LP-eqn} is extended to $n$ dimensions, the $1L\sqrt{D}$ equations
describe geodesic motion with respect to the following metric Hamiltonian
\begin{align}
h_{1L\sqrt{D}}(\bm{m},D)=\frac{1}{2}\int \bm{m}\cdot G_{Q_{op}(D)}*\bm{m}
+g\big(D-b(\bs{x})\big)^{2}\mathrm{d}^nx
\,,
\label{Ham-1L-B}
\end{align}
in which $G_{Q_{op}(D)}$ is the Green function for the symmetric operator $Q_{op}(D)$ in equation \eqref{def-Qop}. That is,
\begin{align}
G_{Q_{op}(D)}*\bm{m}=\bm{u}
\label{def-Qop(D)}
\end{align}
is the velocity vector for the $1L\sqrt{D}$ model.
Likewise, the ModCH2 equations describe geodesic motion with respect to the metric Hamiltonian obtained by replacing $G_{Q_{op}(D)}$ by $G_{Q_{op}(d)}$ in equation \eqref{Ham-1L-B}. The ModCH2 model also has the special feature that its Hamiltonian
\begin{align}
\begin{split}
h_{ModCH2}(\bm{m},D)&=\frac{1}{2}\int \bm{m}\cdot G_{Q_{op}(d)}*\bm{m}\
+\
g \big(D-b(\bs{x})\big)G_{Q_{op}(d)}*\big(D-b(\bs{x})\big) \,\mathrm{d}^nx
\,,
\end{split}
\label{Ham-ModCH2}
\end{align}
lies in the following class of general metrics (Green functions),
\begin{equation}\label{EPGosH-Ham}
H({\bf m},D)=
\frac12\iint\!{\bf m}({\bf x},t)\cdot \,G_1({\bf x-x}')\,{\bf m}({\bf x}',t)\,{\rm d}^n{\bf x}\,{\rm d}^n{\bf x}'
+
\frac12\iint\!D({\bf x},t)\,G_2({\bf x-x}')\,D({\bf x}',t)\,{\rm d}^n{\bf x}\,{\rm d}^n{\bf x}'
\,.\end{equation}
Importantly for the remainder of the present work, the class of Hamiltonians in \eqref{EPGosH-Ham} admits emergent singular solutions supported on advected embedded spaces.
\end{remark}
In preparation for displaying the computational simulations of the singular solution behaviour for ModCH2, we write the equations in dimension-free form.
\begin{remark}[Dimension-free form of ModCH2 Lagrangian, $\ell_{ModCH2}$]\label{non-dim-scales}
The Lagrangian for ModCH2 in \eqref{Lag-ModCH2} may be cast into dimension-free form by introducing natural units for horizontal length, $[L]$, horizontal velocity, $[U]$, and time, $[T]=[L]/[U]$, as well as spatially mean vertical depth, denoted $d=\langle D \rangle$, and spatially mean vertical wave elevation, $[\zeta]=\langle D-b(\bs{x})\rangle=d-\langle b\rangle$. In terms of these units, one may define the following two dimension-free parameters: aspect ratio, $\sigma = [d]/[L]$ and elevation Froude number squared, $Fr^2=[U]^2/([g][\zeta])$.
In addition, the dimension-free form of the symmetric operator $Q_{op}(\sigma)$ is redefined with $\alpha^2:=\sigma^2/12$ as
\begin{align}
Q_{op(\alpha)}\bm{u} := \Big(1 - \alpha^2\big(\nabla {\rm div}\big)\Big)\bm{u}
\,.
\label{def-Qop-ndim}
\end{align}
Consequently, the dimension-free form of the Lagrangian for ModCH2 is given by
\begin{align}
\begin{split}
\ell_{ModCH2}(\bm{u},D)&=\frac{1}{2}\int \left(|\bm{u}|^{2}
+\alpha^2\left({\rm div}\bm{u}\right)^{2}\right)
\\& \qquad - Fr^{-2}\left[\big(D-b(\bs{x})\big)G_{Q_{op}(\alpha)}*\big(D-b(\bs{x})\big)
\right]\mathrm{d}^nx
\,.
\end{split}
\label{Lag-ModCH2-ndim}
\end{align}
The constants $\sigma^2\ll1$ and $Fr^{-2}=O(1)$ here are, respectively, the squared aspect ratio and the inverse squared Froude number, which have been obtained in making the expression dimension-free. The final dimension-free number to be defined in the simulations will be the ratio of widths obtained by dividing the width of the initial condition by the filter width, or interaction range, $\alpha=\sigma/\sqrt{12}$ in $Q_{op(\alpha)}$, as defined in \eqref{def-Qop-ndim}.
\end{remark}
\begin{proposition}
The Euler-Poincar\'e equation for the dimension-free Lagrangian functional $\ell_{ModCH2}(\bm{u},D)$ in \eqref{Lag-ModCH2-ndim} yields the dimension-free form of the ModCH2 equation, as follows.
\begin{align}
\partial_t \bm{m} + (\bm{u}\cdot \nabla) \bm{m}
+ \bm{m}({\rm div} \bm{u})
+ m_j \nabla u^j
=
-Fr^{-2}D\nabla G_{Q_{op}(\alpha)}*\big({D}-{b}(\bs{x})\big)
=:
-Fr^{-2}D\nabla\big(\overline{D}-\overline{b}(\bs{x})\big).
\label{eq-ModCH2-ndim}
\end{align}
\end{proposition}
\begin{proof}
The corresponding variational derivatives of the Lagrangian functional $\ell_{ModCH2}(\bm{u},D)$ in equation \eqref{Lag-ModCH2-ndim} are given by
\begin{align}
\bm{m} := \frac{\delta \ell_{ModCH2}}{\delta \bm{u}}
= \Big(\bm{u} - \alpha^2\nabla {\rm div}\bm{u} \Big)
\quad\hbox{and}\quad
\frac{\delta \ell_{ModCH2}}{\delta D} = -Fr^{-2} \big(\overline{D}-\overline{b}(\bs{x})\big)
\,.
\label{Lag-var-ModCH2-ndim}
\end{align}
The Euler-Poincar\'e equation \eqref{EP-eqn} for the variational derivatives in \eqref{Lag-var-ModCH2-ndim} yields the ModCH2 equation in \eqref{eq-ModCH2-ndim}.
\end{proof}
\paragraph{Singular ModCH2 solutions.}
In ideal fluid dynamics, various conservation laws are expressed on advected embedded spaces such as loops and surfaces, as we have seen, for example, in the case of CH2 in 2D in the previous discussion. As explained in \cite{HT2009}, ModCH2 dynamics is dominated by the emergence of weak solutions supported on advected embedded spaces. These emergent weak solutions for ModCH2 on embedded spaces in any dimension define the momentum map for the left action of the diffeomorphisms on functions taking values on the semidirect-product Lie algebra $\mathfrak{X}(\mathbb{R}^n)\circledS V(\mathbb{R}^n)$. (This is the Lie algebra of vector fields acting on functions which take values on vector spaces $V$ over $\mathbb{R}^n$; see \cite{G-BV2012,HM2005,HT2009}.)%
\footnote{
In contrast, point vortices in Euler flow define a \emph{symplectic} momentum map \cite{MW1983} which also generalises to higher-order derivatives, \cite{HJ2009,CHJM2014,CEHJM2016}.}
The singular momentum map we shall discuss here arises as part of a dual pair.%
\footnote{See \cite{W1983} for the definitive discussion of dual pairs.}
The rigid body provides a familiar example of a dual pair: its two legs correspond to the cotangent-lift momentum maps for right and left actions, respectively. For Euler fluids, the dual pair implies (from right-invariance) that the momentum map $J_R$ is conserved, and this conservation is equivalent to Kelvin's circulation theorem. The left momentum map $J_L$ maps Hamilton's canonical equations on $T^*(SDiff)$ to their reduced Lie-Poisson form and, at the same time, implies that the solutions on $T^*(SDiff)$ can be defined on embedded subspaces of the domain of flow which are pushed forward by the left action of $SDiff$ \cite{HM2005}. These results for ideal incompressible Euler fluids were generalised to the semidirect-product left action of $Diff$ on embedded subspaces of the domain of flow for ideal compressible fluids in \cite{HT2009}. For the fundamental proofs that these maps satisfy the technical conditions required for verifying them as dual pairs, see \cite{G-BV2012}.
In summary, for the semidirect-product case of EP(Diff$\,\circledS\,\mathcal{F}$), the weights $w_i$ for $i=1,\dots,N$ in \eqref{SDsingsoln} are considered as maps $w_i:S\to\mathbb{R}^*$. That is, the weights $w_i$ are distributions on $S$, so that $w_i\in{\rm Den}(S)$, where ${\rm Den}:=\mathcal{F}^*$. In particular, considering the triple
\[
({\bf Q}_i,{\bf P}_i, w_i)\ \, \text{\large $\in$} \ \,
T^*{\rm Emb}(S,\mathbb{R}^n)\,\times\,{\rm Den}(S)
\,,
\]
leads to the following solution momentum map introduced in \cite{HT2009}.
\begin{theorem}[Singular solution momentum map \cite{HT2009}]
\label{singsolnmommap-thm}$\,$
\noindent
The singular solutions of the semidirect-product Lie-Poisson equations in \eqref{LP-eqn} for $\ell=\ell_{ModCH2}$ in \eqref{Lag-ModCH2} are given by
\begin{align}
\big({\bf m},D\big)=\sum_{i=1}^N\int\!\big({\bf P}_i(s,t),w_i(s)\big)\,\delta\!\left({\bf
x-Q}_i(s,t)\right)\,{\rm d}^ks
\,.
\label{sing-soln-thm}
\end{align}
The expressions for $({\bf m},D)\in\mathfrak{X}^*(\mathbb{R}^n)\ \text{\large $\circledS$}\ {\rm Den}(\mathbb{R}^n)$ in \eqref{sing-soln-thm} identify a momentum map
\begin{align}
{\bf J}:\underset{i=1}{\overset{N}{\text{\LARGE$\times$}}}\Big(T^*{\rm Emb}(S,\mathbb{R}^n)\,\times\,{\rm Den}(S)\Big)
\,\rightarrow\,
\mathfrak{X}^*(\mathbb{R}^n)\ \text{\large $\circledS$}\ {\rm Den}(\mathbb{R}^n)
\,.
\end{align}
After substituting the formulas in \eqref{sing-soln-thm} for the singular solutions into the Hamiltonian $H({\bf m},D)$ in equation \eqref{EPGosH-Ham}, the dynamical equations for the variables
$({\bf Q}_i,{\bf P}_i, w_i)$ are given by the integral expressions
\begin{align}
\begin{split}
\partial_t{{\bf Q}_i(s,t)}
=&\ \sum_j\int\!{\bf P}_j(s',t)\, G_1({\bf
Q}_i(s,t)-{\bf Q}_j(s',t))\ {\rm d}^ks' \,,
\\
\partial_t{{\bf P}_i(s,t)} =&\ -\sum_j\int\! {\bf P}_i(s,t)\cdot{\bf
P}_j(s',t)\,\text{\large$\nabla$}_{\!{\bf Q}_i}G_1({\bf
Q}_i(s,t)-{\bf Q}_j(s',t))\ {\rm d}^ks'
\\
&\qquad - \sum_j\int\! w_i(s)\,w_j(s')\, \text{\large$\nabla$}_{\!{\bf
Q}_i}G_2({\bf Q}_i(s,t)-{\bf Q}_j(s',t))\ {\rm d}^ks' \,,
\end{split}
\label{singsoldynamics}
\end{align}
with $\partial_t{w}_i(s)=0$, for all values of $i$.
\end{theorem}
The considerations discussed in \cite{HT2009,G-BV2012} derive the above singular momentum map as the left-invariant leg of a defined dual pair. However, these considerations will not be reviewed here. Instead, the next section will start a series of illustrations by numerical simulations of the dynamical behaviour of the solutions of the ModCH2 equations \eqref{eq-ModCH2} in 2D with periodic boundary conditions.
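To make the integral expressions in \eqref{singsoldynamics} concrete, the following is a minimal \texttt{numpy} sketch of their right-hand sides for a point discretisation of the coordinate $s$, with the quadrature weights absorbed into ${\bf P}_i$ and $w_i$. The function names and the Gaussian stand-in kernel are ours, for illustration only; in the setting of \eqref{EPGosH-Ham} the relevant kernels are the Green functions $G_1$ and $G_2$.
\begin{verbatim}
import numpy as np

def singsol_rhs(Q, P, w, G1, gradG1, gradG2):
    # Q, P: (N, 2) positions and momenta; w: (N,) conserved weights.
    # G1, gradG1, gradG2: callables on the (N, N, 2) array of
    # displacements Q_i - Q_j, returning (N, N) and (N, N, 2) arrays.
    dQ = Q[:, None, :] - Q[None, :, :]
    Qdot = G1(dQ) @ P                       # sum_j G1(Q_i - Q_j) P_j
    PdotP = np.einsum('ik,jk->ij', P, P)    # P_i . P_j
    Pdot = -np.einsum('ij,ijk->ik', PdotP, gradG1(dQ))
    Pdot -= np.einsum('i,j,ijk->ik', w, w, gradG2(dQ))
    return Qdot, Pdot                       # and dw_i/dt = 0

def G(dQ, a=0.2):
    # smooth Gaussian stand-in kernel, for illustration only
    return np.exp(-np.sum(dQ**2, axis=-1)/(2*a**2))

def gradG(dQ, a=0.2):
    # gradient of the stand-in kernel with respect to Q_i
    return -dQ/a**2*G(dQ, a)[..., None]
\end{verbatim}
Feeding these right-hand sides to any ODE integrator evolves the discretised curves; the weights $w_i$ are constant in time, in accordance with Theorem \ref{singsolnmommap-thm}.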
\section{Solution behaviour -- simulations}\label{sec-3}
\subsection{Background -- Euler-Poincar\'e and Lie-Poisson derivations}
\noindent
This section reports computational simulations of the interaction dynamics of wavefronts. Before embarking on our report of these computational simulations, let us place them into the context of the previous literature, which is based on approximating the Lagrangian in Hamilton's principle for fluid dynamics. Such approximations have been designed before to preserve the transport and topological properties of variational principles \cite{CHMR1998,Holm2001,Holm2015,FHT2001,FHT2002,HMR1998,HOT2009,HS2013}.
Specifically, this section reports simulations in which the Lagrangian functional $\ell_{ModCH2}(\bm{u},D)$ in equation \eqref{Lag-ModCH2-ndim} has been augmented to complete the $H^1$ norm in the kinetic energy of the dimension-free form of the Lagrangian. Namely, we modify the Lagrangian $\ell_{ModCH2}$ in \eqref{Lag-ModCH2-ndim} to include the full $H^1$ norm by writing,
\begin{align}
\begin{split}
\ell_{H1ModCH2}(\bm{u},D)&=\frac{1}{2}\int \left(|\bm{u}|^{2}
+\alpha^2\left|\nabla\bm{u}\right|^{2}\right)
\\& \qquad - Fr^{-2}\left[\big(D-b(\bs{x})\big)G_{Q^{H1}_{op}(\alpha)}*\big(D-b(\bs{x})\big)
\right]\mathrm{d}^nx
\,.
\end{split}
\label{Lag-H1ModCH2-ndim}
\end{align}
The corresponding dimension-free form of the symmetric operator $Q^{H1}_{op}(\sigma)$ for the H1ModCH2 equation is defined as, cf. $Q_{op(\alpha)}\bm{u}$ in equation \eqref{def-Qop-ndim},
\begin{align}
Q^{H1}_{op(\alpha)}\bm{u} := \Big(1 - \alpha^2\big({\rm div}\nabla \big)\Big)\bm{u}
\,.
\label{def-QopH1-ndim}
\end{align}
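Unlike the grad-div operator $Q_{op(\alpha)}$, whose inversion calls for the Hodge decomposition of remark \ref{rem CH2-solve}, the operator $Q^{H1}_{op(\alpha)}$ in \eqref{def-QopH1-ndim} acts componentwise, so convolution with its Green function reduces to a scalar division in Fourier space on a doubly periodic domain. The following is a minimal \texttt{numpy} sketch of this smoothing, under our own naming conventions; the same routine recovers $\bm{u}$ from $\bm{\widetilde m}$ componentwise and also forms the smoothed elevation used in section \ref{sec-3}.
\begin{verbatim}
import numpy as np

def helmholtz_smooth(f, alpha, L=2*np.pi):
    # Convolution with the Green function of (1 - alpha^2 Laplacian) on
    # a doubly periodic n x n grid: divide by 1 + alpha^2 |k|^2.
    n = f.shape[-1]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    fhat = np.fft.fft2(f)
    return np.fft.ifft2(fhat/(1.0 + alpha**2*(kx**2 + ky**2))).real
\end{verbatim}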
\begin{proposition}
The Euler-Poincar\'e equation for the dimension-free Lagrangian functional $\ell_{H1ModCH2}(\bm{u},D)$ in \eqref{Lag-H1ModCH2-ndim} yields the dimension-free form of the H1ModCH2 equation, as follows.
\begin{align}
\begin{split}
\partial_t \bm{\widetilde m} + (\bm{u}\cdot \nabla) \bm{\widetilde m}
+ \bm{\widetilde m}({\rm div} \bm{u})
+ {\widetilde m}_j \nabla u^j
&=
-Fr^{-2}D\nabla G_{Q^{H1}_{op}(\alpha)}*\big({D}-{b}(\bs{x})\big)
\\&=:
-Fr^{-2}D\nabla\big(\widetilde{D}-\widetilde{b}(\bs{x})\big),
\end{split}
\label{eq-H1ModCH2-ndim}
\end{align}
where $\widetilde{(\,\cdot\,)}:=G_{Q^{H1}_{op}(\alpha)}*(\,\cdot\,)$
and ${\widetilde m}_j:={\delta \ell_{H1ModCH2}}/{\delta {u^j}}=Q^{H1}_{op(\alpha)}u_j$.
\end{proposition}
\begin{proof}
The corresponding variational derivatives of the Lagrangian functional $\ell_{H1ModCH2}(\bm{u},D)$ in equation \eqref{Lag-H1ModCH2-ndim} in $n$ dimensions are given by
\begin{align}
\bm{\widetilde m} := \frac{\delta \ell_{H1ModCH2}}{\delta \bm{u}}
= \Big(\bm{u} - \alpha^2 \big({\rm div}\nabla \big) \bm{u} \Big)
\quad\hbox{and}\quad
\frac{\delta \ell_{H1ModCH2}}{\delta D}
= -Fr^{-2} \big(\widetilde{D}-\widetilde{b}(\bs{x})\big)
\,.
\label{Lag-var-H1ModCH2-ndim}
\end{align}
The Euler-Poincar\'e equation \eqref{EP-eqn} for the variational derivatives in \eqref{Lag-var-H1ModCH2-ndim} yields the H1ModCH2 equation in \eqref{eq-H1ModCH2-ndim}. After a Legendre transformation, the semidirect-product Lie-Poisson Hamiltonian equations in \eqref{LP-eqn} also yield the H1ModCH2 equation in \eqref{eq-H1ModCH2-ndim}, as well as the continuity equation for the depth, $D$.
\end{proof}
\begin{remark}
The Hamiltonian for the H1ModCH2 equation in \eqref{eq-H1ModCH2-ndim} is given by
\begin{align}
\begin{split}
h_{H1ModCH2}(\bm{\widetilde m},D)&=\frac{1}{2}\int \bm{\widetilde m}\cdot G_{Q^{H1}_{op}(\alpha)}*\bm{\widetilde m}\
+\
Fr^{-2} \big(D-b(\bs{x})\big)G_{Q^{H1}_{op}(\alpha)}*\big(D-b(\bs{x})\big) \,\mathrm{d}^nx
\,.
\end{split}
\label{Ham-H1ModCH2}
\end{align}
This Hamiltonian also lies in the class of Hamiltonians in \eqref{EPGosH-Ham}. Consequently, the H1ModCH2 equation will admit emergent singular solutions supported on advected embedded spaces which dominate the asymptotic solution behaviour.
\end{remark}
\begin{remark}[Interaction dynamics of singular wave fronts with both kinetic and potential energy]$\,$
\noindent
When gravitational potential energy is neglected, then $Fr^{-2}\to0$ in equations \eqref{eq-H1ModCH2-ndim} and \eqref{Lag-var-H1ModCH2-ndim}. The dimension-free form of the ModCH2 equation \eqref{eq-H1ModCH2-ndim} in $n$ dimensions then reduces to the $n$D Camassa-Holm equation, which was introduced in \cite{HMR1998} and studied numerically in 2D and 3D in \cite{HS2013}. For divergence-free flows, the $n$D Camassa-Holm equation is also known as the Euler-$\alpha$ model in a class of other $\alpha$-fluid models \cite{HMR1998PRL}, and it was the source of the Lagrangian-Averaged Navier-Stokes $\alpha$ model (LANS-$\alpha$ model) of divergence-free turbulence in \cite{CFHOTW1998,CFHOTW1999,FHT2001,FHT2002}. In the remainder of the present paper, we will present computational simulations of equation \eqref{eq-H1ModCH2-ndim} in 2D which include the effects of potential energy, as well as the vorticity, in the interaction of singular wavefronts. Consequently, the results we will obtain may be compared with the computational simulations of the Camassa-Holm equation in 2D and 3D of \cite{HS2013}, in order to see the differences in solution behaviour made by the presence of gravitational potential energy. The 1D versions of these comparisons have already been made for solutions of both the ModCH2 and H1ModCH2 equations in \cite{HOT2009}; in 1D, a single run accomplishes the comparisons of the Camassa-Holm solutions with those of both ModCH2 and H1ModCH2, because in 1D the operators ${\rm div}\nabla$ and $\nabla{\rm div}$ coincide. The work here presents solutions of H1ModCH2 in 2D for comparison with the corresponding solutions of the Camassa-Holm equation in \cite{HS2013}. The comparisons of ModCH2 in 2D and 3D with the corresponding Camassa-Holm solutions in \cite{HS2013} will be deferred to a later paper, in which we will also present comparisons of Camassa-Holm solutions in 2D and 3D with the corresponding solutions of EPDiff$(H_{div})$, as introduced in \cite{KSD2001}.
\end{remark}
\subsection{Simulations of emergent H1ModCH2 solutions}
In the rest of this section, we consider computational simulations of the dynamics of the H1ModCH2 equation \eqref{eq-H1ModCH2-ndim} in 2D. We will present five initial conditions in the paper and ten initial conditions in the supplementary materials. For each initial condition, we consider the dynamical exchange between kinetic and potential energy. This will be illustrated by starting with only kinetic energy and zero initial elevation in sections \ref{ssec: plate}, \ref{ssec: skew} and \ref{ssec: wedge}, or with only potential energy in sections \ref{ssec: single dam break} and \ref{ssec: dual dam break}. Each of these five initial conditions is simulated for two values of the interaction range, $\alpha$, relative to the characteristic width of the initial condition, $w_0$, which all of the simulations have in common. The two values of $\alpha$ selected in these simulations are given by $\alpha = w_0$ and $8\alpha = w_0$. For each of these five initial conditions and the two chosen values of $\alpha$, two figures with six panels are presented, corresponding to the evolution of $|u|^2$ and of $(\overline{D} - \overline{b}(\bs{x}))$, each displayed as six snapshots. The quantity $(\overline{D} - \overline{b}(\bs{x})) := G_{Q^{H1}_{op}(\alpha)}*({D}-{b}(\bs{x}))$ is the elevation, smoothed by convolution with the Green function $G_{Q^{H1}_{op}(\alpha)}$ defined in \eqref{eq-H1ModCH2-ndim}. For convenience, in this section we will refer to the smoothed difference $(\overline{D} - \overline{b}(\bs{x}))$ as the \emph{elevation}.
The first panel (top left panel) in each figure corresponds to the initial condition. The subsequent panels, reading across the first row and then across the second row, are snapshots at subsequent times. The domain is $[0,2\pi]\times [0,2\pi]$ with doubly periodic boundary conditions. Coordinates are $x$ horizontally and $y$ vertically. We use the colour map shown in figure \ref{fig:velocity wave cmap} for the squared velocity magnitude, $|u|^2$, in which the minimal and maximal values appear grey and white, respectively. This is the same approach used in \cite{HS2013}, where the black colour at $12.5\%$ intensity serves to show the outlines of spatially confined velocity segments. While the colour map in figure \ref{fig:velocity wave cmap} is apt for showing small-scale features of positive-definite fields, it is not suitable for fields that take negative values. Thus, the elevation figures will use the standard colour map \emph{turbo}. In each figure, the colour map is determined for each panel separately, so that the features of each snapshot are visible. The scales of the 2D plots are included in each figure alongside the colour bar, so that the variation of the intensity across each panel is clear.
Four 1D slices of the domain are included in each snapshot in the directions shown in figure \ref{fig:1d profile}. Specifically, the solid black line is the profile along the horizontal $y = \pi$, the dashed red line is along the vertical $x = \pi$, the solid green line is along the upward diagonal $y = x - \pi$, and the dashed blue line is along the downward diagonal $y = \pi-x$. Similarly to the 2D snapshots, the scales of the 1D plots are also determined per panel for maximal clarity.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth, height=0.05\textwidth]{figures/wave_cmap.png}
\caption{Custom colour map for the upcoming $|u|^2$ plots.}
\label{fig:velocity wave cmap}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth, height=0.3\textwidth]{figures/1d_profile.png}
\caption{Locations of the 1D profiles of $|\bs{u}|$ and $\overline{D}$ in the 2D numerical simulations.}
\label{fig:1d profile}
\end{figure}
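For reference, the following is a minimal \texttt{numpy} sketch of how the four 1D slices of figure \ref{fig:1d profile} may be extracted from a 2D field; it assumes the field is sampled as \texttt{F[i, j]} $\approx F(x_i, y_j)$ on a uniform $n\times n$ grid of $[0,2\pi)^2$, and the function name and dictionary keys are ours.
\begin{verbatim}
import numpy as np

def profiles(F):
    # four 1D slices of a doubly periodic field F[i, j] ~ F(x_i, y_j)
    n = F.shape[0]
    i = np.arange(n)
    return {
        'y = pi (solid black)':     F[:, n//2],
        'x = pi (dashed red)':      F[n//2, :],
        'y = x - pi (solid green)': F[i, (i - n//2) % n],
        'y = pi - x (dashed blue)': F[i, (n//2 - i) % n],
    }
\end{verbatim}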
The simulations apply a Fourier spectral method with $2048$ Fourier nodes in both the $x$ and $y$ directions. De-aliasing is accomplished by truncating the highest $\frac{1}{3}$ of the wavenumbers. The time stepping method is the adaptive Runge-Kutta-Fehlberg (RKF45) method, an embedded pair of $4^{th}$ and $5^{th}$ order solutions whose step size is controlled by the well-known formula,
\begin{align}
h_i = \gamma h_{i-1}\left(\frac{\epsilon|h_{i-1}|}{|| \Bar{u}_i - \Hat{u}_i ||}\right)^{1/p}. \label{eq:stepsize}
\end{align}
In \eqref{eq:stepsize} we denote, as follows. At step $i$, we have the fourth order solution $\Bar{u}_i$ and the fifth order solution $\Hat{u}_i$, as well as the previous time step $h_{i-1}$. The value $p = 4$ is the order of the solution $\Bar{u}_i$, and the order of $\Hat{u}_i$ is $p+1$. If the L2 norm of $\Bar{u}_i - \Hat{u}_i$ is less than the tolerance $\epsilon$, the step is accepted and the size of the next step is given by \eqref{eq:stepsize}. If $||\Bar{u}_i - \Hat{u}_i|| > \epsilon$, the current step is repeated with the smaller step size given by \eqref{eq:stepsize}. The relative tolerance and safety factor used in this work are $\epsilon = 10^{-5}$ and $\gamma = 0.9$, respectively.
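For concreteness, the following is a minimal sketch of this step-control logic, with our own function name and a guard against a vanishing error estimate; it is illustrative, not the production time stepper.
\begin{verbatim}
import numpy as np

def rkf45_step_control(h_prev, err, eps=1e-5, gamma=0.9, p=4):
    # eq. (eq:stepsize): the same formula shrinks a rejected step
    # (err > eps) and sets the size of the next step after acceptance.
    err = max(err, np.finfo(float).tiny)   # guard against err == 0
    h_next = gamma*h_prev*(eps*abs(h_prev)/err)**(1.0/p)
    return err <= eps, h_next

# usage, once per attempted step, with u4 and u5 the embedded
# 4th and 5th order RKF45 updates:
#   accepted, h = rkf45_step_control(h, np.linalg.norm(u4 - u5))
#   if not accepted: retry the step from the old state with the new h
\end{verbatim}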
\newpage
\subsubsection{Plate}\label{ssec: plate}
In figures \ref{fig:snapshot plate u 1}-\ref{fig:snapshot plate rhobar 4}, we consider the combined dynamics of the velocity magnitude $|\bs{u}|$ and the elevation $(\overline{D} - \overline{b}(\bs{x}))$ in the interplay of kinetic and potential energy for different values of $\alpha$, starting from the same initial conditions in a doubly periodic square domain with a flat bottom topography, so $\overline{b}(\bs{x})=const$. The Plate initial condition is inspired by the two SAR images in figure \ref{fig: Gibralter}. The first of these two SAR images shows the surface signature of an internal wave propagating midway through the Strait of Gibraltar. The second SAR image shows the train of wavefront surface signatures which develops after the internal wave has propagated into the open Mediterranean Sea. Initially, the momentum $\bm{m}$ shown in the first panel of the Plate figure is distributed along a line segment whose corresponding velocity falls off exponentially as $e^{-|x|/w_0}$ at either end of the segment and also in the transverse direction. Thus, the transverse slice of the fluid velocity profile shown in the rectangular strip below the panel as a black curve has a contact discontinuity, i.e., a jump in its derivative. The name ``Plate'' also refers to the corresponding case for the 2D CH dynamics simulated in \cite{HS2013}. The advected depth variable $D$ is initially at rest and the elevation is flat, so $\overline{D}(\bs{x},0)=const$.
Figure \ref{fig:snapshot plate u 1} shows snapshots of the velocity profile of the initially rightward moving line segment. The support of the velocity solution develops a curvature and ``balloons'' outward as it moves rightward. It also stretches, because the endpoints of its profile are fixed by the imposed exponential fall-off of the velocity there. The shapes of the velocity profiles in the direction transverse to the direction of travel are shown by the 1D plots beneath the 2D snapshots. The bottom panels of figure \ref{fig:snapshot plate u 1} show the smoothing of the initial contact curves. Figure \ref{fig:snapshot plate rhobar 1} shows the snapshots of the elevation $(\overline{D}(\bs{x},t) - \overline{b}(\bs{x}))$ accompanying the evolution of the velocity in figure \ref{fig:snapshot plate u 1}. Note that the moving peak in elevation is accompanied by a trailing depression. This happens because of conservation of total mass. Namely, mass conservation implies that the moving surface elevation of an initially flat elevation profile must be accompanied by a corresponding moving depression of the elevation. The peak of the elevation follows the motion of the velocity profile. However, the profiles of velocity and elevation do not develop the same shape, because of the trailing depression below the mean elevation. The region of depression formed behind the peak extends from the initial position of the velocity profile to the tail of the current velocity profile.
\paragraph{Wavefront emergence}
When $\alpha < w_0$, in figure \ref{fig:snapshot plate u 4}, the unstable initial velocity profile produces a train of peakon segments emerging as the initial profile breaks up. Each of the emergent wavefronts is curved, because its velocity vanishes at the initial endpoints. The number of wavefronts depends on the size of $\alpha$. In figures \ref{fig:snapshot plate u 4}-\ref{fig:snapshot plate rhobar 4}, the first emitted velocity wavefronts have the highest velocity and subsequent wavefronts have lower velocity. Consequently, they will not overtake each other and a wave train will be formed. The material peaks travelling along with the velocity profiles also have the feature that the first peak is the highest and all subsequent peaks are lower. The depression region is now bounded by the location of the initial velocity profile and the arc defined by the slowest emitted wavefront. The process of velocity wavefront emergence takes time to complete. This is shown in the last panel in figure \ref{fig:snapshot plate u 4}, where the initial condition has evolved into $6$ fully formed segments ahead of ramps. As time progresses further, the ramps will develop into a train of wavefront segments.
Figure \ref{fig:snapshot plate rhobar 4} shows the elevation associated with the velocity in figure \ref{fig:snapshot plate u 4}. Panel $1$ of figure \ref{fig:snapshot plate rhobar 4} shows the initially flat elevation. Panel $2$ shows the early development of a wavetrain of positive elevation. As expected, the leading wave is the tallest. In panels $2$, $3$ and $4$ one sees the rightward propagation of mass as the wave train moves away from the initial rightward impulse. In the subsequent panels one sees the continued development of a leftward-moving depression of the surface due to the emission of the rightward moving wave train of positive elevation. The grey rectangular strips below the panels show details of the wave-forms along the colour-coded directions in figure \ref{fig:1d profile}.
Note, however, that the elevation of the surface between the successive wavefronts in the wave train is \emph{less} than the initial level of the fluid at rest. Indeed, the depression is developing a counterflow in the opposite direction, which might eventually cause a large-scale oscillation in the wake of the plate. Comparing the properties of the fastest wavefronts for different values of $\alpha$, we see that both the material and velocity wavefronts are higher for smaller values of $\alpha$. This is due to conservation of mass and energy, in which $\alpha$ controls the width of the wave profile.
\foreach \y in {1,4}{
\pgfmathtruncatemacro{\alphaValue}{2^(\y -1 )}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/one_plate/\y/\x_usqr_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $|u|^2$ with the ``plate'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $|u|^2$ with the ``plate'' initial condition with $\alpha = w_0/\alphaValue$.}
\fi
\label{fig:snapshot plate u \y}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/one_plate/\y/\x_elev_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``plate'' initial condition with $\alpha = w_0$. }
\else
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``plate'' initial condition with $\alpha = w_0/\alphaValue$. }
\fi
\label{fig:snapshot plate rhobar \y}
\end{figure}
}
\newpage
\subsubsection{Skew}\label{ssec: skew}
Skew flows in figures \ref{fig:snapshot skew u 4} and \ref{fig:snapshot skew rhobar 4} are initiated with two peakon segments of the same width and with constant elevation. The peakon segment located at the back has $1.5$ times the amplitude of the peakon segment moving horizontally. Thus, the waves emerging from the back peakon, moving along the negative diagonal, will overtake the waves emerging from the peakon moving to the right. Panel $2$ of figure \ref{fig:snapshot skew u 4} shows the result of collisions of the first emitted curved velocity segments. Here, both overtaking and head-on collisions have occurred along different axes, and the ensuing nonlinear transfer of momentum has produced the merging, or \emph{reconnection}, of the wave segments. The collision has also produced a \emph{hotspot} of momentum and elevation located at the intersection point. This hotspot expands rapidly outward to form the red region of the rightmost wavefront in panel $3$. The appearance of hotspots during the reconnection of wavefronts is also seen in the dynamics of doubly periodic solutions of the Kadomtsev--Petviashvili equation \cite{CK2009}, and is also observed, for example, in a famous photograph of crossing swells in the Atlantic Ocean \cite{S2012}.
The notion of Lagrangian \emph{memory wisps} introduced in \cite{HS2013} is particularly visible in panel $3$ of the elevation evolution in figure \ref{fig:snapshot skew rhobar 4}, where two wisps can be found connecting the boundaries of the expanding hotspot to edge points of elevation segments. By examining intermediate snapshots, we see that the initial memory wisp connects the hotspot to the edge of the downward-travelling elevation segment after the collision. Via hotspot expansion and the emission of additional wavefronts from the initial conditions, the wisp splits into two and connects to different elevation segments. In panels $4$ to $6$, we see the same interaction of subsequently emitted wavefronts, with multiple collisions and reconnections. In each of the collisions, memory wisps are produced between the resultant wavefronts, which suggests that the hotspots are part of the mechanism creating the wisps. We note that the persistent memory wisps in panel $6$ between the two most rightward elevation segments are the same memory wisps seen in panel $3$. This suggests that the memory wisps are not produced by the numerical method; instead, they are products of the wavefront collisions, which preserve the reversibility of the evolution.
\newpage
\foreach \y in {1,4}{
\pgfmathtruncatemacro{\alphaValue}{2^(\y -1 )}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/skew/\y/\x_usqr_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $|u|^2$ with the ``skew'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $|u|^2$ with the ``skew'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot skew u \y}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/skew/\y/\x_elev_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``skew'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``skew'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot skew rhobar \y}
\end{figure}
}
\newpage
\subsubsection{Wedge} \label{ssec: wedge}
The ``wedge'' initial condition is a modification of the skew collision in which the initial upper peakon segment travels downward along the negative diagonal and the initial lower peakon segment travels upward along the positive diagonal. The magnitudes of the velocities are the same, and there is a reflection symmetry about the horizontal axis in the middle of the domain, along $y = \pi$. When the emergent wavefronts meet along $y=\pi$, their vertical momentum components collide in opposite directions (head-on). The ``wedge'' initial condition can be seen on the left of the lower panels of figure \ref{fig:snapshot wedge u 1}, emerging from the line $y=\pi$. In panel $3$ of figure \ref{fig:snapshot wedge u 1}, the collision of the velocity segments forms a hotspot along the mid-line, which expands outward during the reconnection process near the center of panel $4$. As these hotspots expand further in the next panels, they leave behind memory wisps in the velocity, which are visible in panel $6$. These memory wisps are not seen, though, in the snapshots of elevation in figure \ref{fig:snapshot wedge rhobar 1}, as they are obscured near the boundaries between the depression regions and the elevation of the material wave segments.
For $w_0 = 8\alpha$, in figures \ref{fig:snapshot wedge u 4} and \ref{fig:snapshot wedge rhobar 4}, multiple ``wedge'' collisions occur in the wave train emerging from the initial conditions. Wavefronts from the same wave train also interact with one another, due to the elastic collision property. This interaction produces fast, small-scale oscillations which resemble the emergent wavefronts broken into even shorter ``shards'', seen in panels $3$ to $6$ of figures \ref{fig:snapshot wedge u 4} and \ref{fig:snapshot wedge rhobar 4}. These broken shards of wave segments arise when the numerical method can no longer resolve the smallest-scale behaviour. Lowering the value of $\alpha$ narrows both the velocity and elevation wave segments. It also has the effect of highlighting the presence of memory wisps, as more collisions occur with higher transfer of momentum and thus greater separation of the wavefronts, as seen in panel $6$ of figure \ref{fig:snapshot wedge rhobar 4}.
Other aspects of head-on collisions will be discussed next in section \ref{ssec: parallel} for the ``parallel'' initial conditions.
\newpage
\foreach \y in {1,4}{
\pgfmathtruncatemacro{\alphaValue}{2^(\y -1 )}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/wedge/\y/\x_usqr_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $|u|^2$ with the ``wedge'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $|u|^2$ with the ``wedge'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot wedge u \y}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/wedge/\y/\x_elev_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``wedge'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``wedge'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot wedge rhobar \y}
\end{figure}
}
\newpage
\subsubsection{Parallel} \label{ssec: parallel}
The initial condition for the ``parallel'' collision comprises two peakon segments of equal and opposite velocities moving toward each other along vertically offset parallel horizontal lines, as shown in figure \ref{fig:snapshot parallel u 1}. This situation differs from the overtaking (rear-end) collisions seen in the ``skew'' initial conditions: the collisions here are head-on, so they involve wavefronts with positive and negative velocity components. In 1D, when the wavefronts are peakons, no vertical offset can occur, and an antisymmetric initial condition on the real line produces a collision in which the two weak solutions bounce off each other elastically in opposite directions. In the 2D case, the offset initial condition introduces angular momentum into the system. Consequently, the offset head-on collision can access angular degrees of freedom, and thus shows more complex behaviour than the head-on collision in 1D.
Consider the case where $\alpha = w_0$ in figure \ref{fig:snapshot parallel u 1}. The initial velocity segments balloon outwards and their shape is smoothed, as occurs for the ``plate'' condition. When the wavefronts collide in panel $3$, the magnitude of the velocity along the collision front vanishes and the velocity profile becomes very steep, as is also seen in 1D peakon collisions. In panel $4$, we see that the wavefront segments which did not undergo head-on collisions contain hotspots. The hotspots indicate where reconnections have occurred. In panels $5$ and $6$, these hotspots expand into a velocity profile which balloons outwards at an angle away from the vertical axis. The results of the head-on collisions are the dark segments connecting the upper and lower velocity wavefronts. The scattering angle, seen clearly in the third panel of figure \ref{fig:snapshot parallel rhobar 1}, is due to the conservation of angular momentum during the offset head-on collision.
Figure \ref{fig:snapshot parallel rhobar 1} shows snapshots of the elevation during the offset head-on collision. As the elevation segments are advected with the velocity profile, we see an elevation head-on collision in panel $3$. In contrast with the velocity profile, where the velocity tends to zero along the collision front, the elevations rise in the collision to create an elevation segment of large amplitude. This reinforced elevation then decreases in height in panels $4$ to $6$, as the elevation wavefront emerges from the head-on collisions. This is clearest from the black 1D profile in the grey rectangular strip below panel $6$.
When $\alpha < w_0$, the evolution becomes even more complex, because entire trains of wavefronts are involved, as seen in figures \ref{fig:snapshot parallel u 4} and \ref{fig:snapshot parallel rhobar 4}. In these figures, one sees the reconnections of velocity and elevation segments which had undergone head-on collisions with those segments that had not collided. The complexity builds as the head-on collisions and reconnections recur again and again, while additional wavefronts continue to be emitted from the initial conditions.
\newpage
\foreach \y in {1,4}{
\pgfmathtruncatemacro{\alphaValue}{2^(\y -1 )}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/parallel/\y/\x_usqr_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $|u|^2$ with the ``parallel'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $|u|^2$ with the ``parallel'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot parallel u \y}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/parallel/\y/\x_elev_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``parallel'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``parallel'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot parallel rhobar \y}
\end{figure}
}
\newpage
\subsubsection{Dam Break}\label{ssec: single dam break}
A Dam Break, or Lock Release, flow is produced when, at time $t=0$, a volume of fluid at rest behind a dam, or lock, is suddenly released. Gravity then drives the flow, as potential energy is converted into kinetic energy. Here, we treat the case of a radially symmetric Gaussian distribution of initial depth
\[\overline{D} = 2e^{-((x-\pi)^2 + (y-\pi)^2)/{w_0}} + b(\bs{x}),\] with constant, nonzero bathymetry, $b(\bs{x}) = \mathrm{const} > 0$. This corresponds to the case where a radially symmetric Gaussian distribution of initial surface elevation is released into a fluid at rest with a flat surface over a constant bathymetry. Consider the elevation profile $\overline{D}-\overline{b}$ in figure \ref{fig:snapshot dam break rhobar 1}. The first panel shows the initial condition for the elevation, while the second panel shows the plateauing and lowering of the initial Gaussian elevation peak, whose profile in turn becomes wider as mass is pushed outward radially by gravity. When the critical width $\alpha$ of the expansion is reached, a wavefront is emitted radially outwards on the left- and right-hand sides of the domain, as seen in panels $3$ and $4$. We note that the elevation becomes negative behind the forming material wavefront, and that it becomes more negative in the panels after panel $3$. The leading edge of the material wavefront subsides exponentially as it evolves, as does the shape of the leading edge of the velocity wavefront in figure \ref{fig:snapshot dam break u 1}. In the velocity profile, we see that the emerging wavefront takes the peakon form in panel $3$. As the system evolves, the leading edge of the velocity wavefront is similar in shape to the material wave profiles in panels $3$, $4$ and $5$.
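For concreteness, the following minimal sketch (in Python/NumPy) constructs this initial condition on a doubly periodic grid. The domain $[0,2\pi)^2$ (suggested by the centring of the Gaussian at $(\pi,\pi)$), the resolution and the parameter values are illustrative assumptions, not the exact settings used for the figures.
\begin{verbatim}
import numpy as np

n = 512                                   # grid resolution (assumed)
x = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")   # doubly periodic grid

w0, b0 = 1.0, 1.0                         # width and bathymetry (assumed)
b = b0*np.ones_like(X)                    # constant, nonzero bathymetry
D = 2.0*np.exp(-((X - np.pi)**2 + (Y - np.pi)**2)/w0) + b  # initial depth
u = np.zeros((2, n, n))                   # fluid initially at rest

elevation = D - b                         # quantity shown in the figures
\end{verbatim}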
For smaller values of $\alpha$, a train of velocity and material wavefronts rapidly develops, as seen in figures \ref{fig:snapshot dam break u 4} - \ref{fig:snapshot dam break rhobar 4}, where $w_0 = 8\alpha$. Similarly to the ``plate'' initial condition, the first wavefront has the highest velocity and elevation, while subsequent wavefronts have lower velocity and elevation. The elevation ahead of the front of the first wavefront remains flat, but as the expansion continues the level of the fluid surface drops behind the expanding wave train. If one looks closely, one sees that the level of the surface between the wavefronts in the wave train is lower than the initial level at rest. Perhaps this would eventually produce a counterflow.
Now we consider a variation of the Dam Break initial condition that does produce persistent velocity peakon wavefronts. The initial conditions for figures \ref{fig:snapshot dam break no bathymatry rhobar 1} - \ref{fig:snapshot dam break no bathymatry u 4} are \[\overline{D} = 2e^{-((x-\pi)^2 + (y-\pi)^2)/{w_0}} + b(\bs{x}),\] with $b(\bs{x}) = 0$ and $\bs{u} = 0$. This corresponds to the case where the initial surface elevation is released over zero bathymetry. Comparing the velocity profiles for the same value of $\alpha$ in figures \ref{fig:snapshot dam break no bathymatry u 1} and \ref{fig:snapshot dam break u 1}, we see that the evolutions are very similar over the first three panels, as the peakon wavefronts develop. However, we see in the bottom three panels of figure \ref{fig:snapshot dam break no bathymatry u 1} that the peakons persist and travel outwards radially in a wave train with decreasing amplitude. The elevation profile in figure \ref{fig:snapshot dam break no bathymatry rhobar 1} also starts with the plateauing and lowering of the initial Gaussian elevation in panel $2$. In panel $3$, one sees the start of the formation of a material wavefront. Instead of being formed and emitted like the velocity wavefront, the material wavefront travels outwards, loses amplitude and vanishes when it reaches the edge of the elevation distribution. The elevation distribution widens in this process, as seen in the bottom three panels.
Similarly, for smaller values of $\alpha$, a train of velocity and material wavefronts appears from the initial condition, as seen in figures \ref{fig:snapshot dam break no bathymatry rhobar 4} - \ref{fig:snapshot dam break no bathymatry u 4}, where $w_0 = 8\alpha$. The process of formation and annihilation of material peakons persists, and the shape of the wavefront resembles the peakon shape for more of the subsequent waves in the wave train. Since the elevation is never negative in this flow, no counterflow would be produced.
From these two variations of the Dam Break problem, we see that the time required for persistent, peakon-shaped wavefronts to develop from smooth, spatially confined initial conditions varies. However, the issue of the formation time of peakon wave profiles is beyond the scope of the present paper.
\foreach \y in {1,4}{
\pgfmathtruncatemacro{\alphaValue}{2^(\y -1 )}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dam break/\y/\x_elev_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``dam break'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``dam break'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot dam break rhobar \y}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dam break/\y/\x_usqr_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $|u|^2$ with the ``dam break'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $|u|^2$ with the ``dam break'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot dam break u \y}
\end{figure}
}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dam break/1_0/\x_elev_comp.jpg}
\end{subfigure}
}
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``dam break'' initial condition with $\alpha = w_0$ and zero bathymetry.}
\label{fig:snapshot dam break no bathymatry rhobar 1}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dam break/1_0/\x_usqr_comp.jpg}
\end{subfigure}
}
\caption{\small Evolution of $|u|^2$ with the ``dam break'' initial condition with $\alpha = w_0$ and zero bathymetry.}
\label{fig:snapshot dam break no bathymatry u 1}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dam break/4_0/\x_elev_comp.jpg}
\end{subfigure}
}
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``dam break'' initial condition with $8\alpha = w_0$.}
\label{fig:snapshot dam break no bathymatry rhobar 4}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dam break/4_0/\x_usqr_comp.jpg}
\end{subfigure}
}
\caption{\small Evolution of $|u|^2$ with the ``dam break'' initial condition with $8\alpha = w_0$.}
\label{fig:snapshot dam break no bathymatry u 4}
\end{figure}
\clearpage
\subsubsection{Dual Dam Break} \label{ssec: dual dam break}
Here, we treat the Dam Break flow in which the initial condition contains two radially symmetric Gaussian distributions of initial surface elevation. These are simultaneously released into a fluid at rest with a flat surface over a constant bathymetry. To study the interaction of both velocity and elevation wavefronts, we consider the case where the bathymetry satisfies $b > 0$. The emergence of wavefronts proceeds as for the single Dam Break in section \ref{ssec: single dam break}. In figure \ref{fig:snapshot dual dam break rhobar 1}, panel $1$ shows the initial condition and panel $2$ shows the emergence of elevation wavefronts. In the middle of panel $3$, one sees the head-on collisions of these emergent wavefronts. In the center of the domain, in panels $4$ to $6$, one sees the head-on collisions of the emitted radial peakons and their reconnections, in the form of two rapidly expanding hotspots located along $x = \pi$. As one part of the elevation wavefront expands radially away from the center of the domain, it leaves behind a widening region of depression, which creates a counterflow; one sees this developing in panel $6$ as the dark purple region.
The corresponding velocity profile in figure \ref{fig:snapshot dual dam break u 1} evolves similarly to the ``Parallel'' and ``Single Dam Break'' flows, for the head-on collisions and the emergence of wavefronts, respectively.
For smaller values of $\alpha$, a train of peakon wavefronts rapidly ensues, as seen in figures \ref{fig:snapshot dual dam break u 4} - \ref{fig:snapshot dual dam break rhobar 4}, where $w_0 = 8\alpha$. Consider the interaction in the center of the domain after the initial head-on collision of the first wave in the emergent wave train in panel $3$. Panel $4$ shows the ``rebound'' wavefront interacting with the subsequent wavefront in the wave train to create hotspots above and below $x = \pi$. This process repeats for every wavefront and creates a checkerboard pattern in the region above and below $x = \pi$. The connections between wavefronts are the memory wisps seen in panels $5$ and $6$. Thus, this interaction produces a cellular elevation profile which is locally similar to that of the doubly periodic cnoidal waves seen in solutions of the Kadomtsev--Petviashvili equation \cite{S2012,CK2009}.
\newpage
\foreach \y in {1,4}{
\pgfmathtruncatemacro{\alphaValue}{2^(\y -1 )}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dual dam break/\y/\x_usqr_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $|u|^2$ with the ``dual dam break'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $|u|^2$ with the ``dual dam break'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot dual dam break u \y}
\end{figure}
\begin{figure}[H]
\centering
\foreach \x in {0,1,...,5}{
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{figures/dual dam break/\y/\x_elev_comp.jpg}
\end{subfigure}
}
\ifnum \alphaValue = 1
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``dual dam break'' initial condition with $\alpha = w_0$.}
\else
\caption{\small Evolution of $\overline{D}-\overline{b}$ with the ``dual dam break'' initial condition with $\alphaValue\alpha = w_0$.}
\fi
\label{fig:snapshot dual dam break rhobar \y}
\end{figure}
\clearpage
}
\begin{comment}[Summary]$\,$
\noindent
Above, we have discussed the following additional points.
\begin{itemize}
\item Minimal description of a fluid.
\item Emergent singular solutions for fluid momentum and advected quantities.
\item Interaction of curves, their collisions and reconnections.
\item Emergent singular solutions from smooth initial conditions.
\end{itemize}
\end{comment}
\section{Conclusions and outlook}
Inspired by the SAR images of sea-surface wavefronts, regarded as signatures of the dynamics of internal waves propagating below the surface, we proposed in the introduction to derive a single-layer minimal model of the surface velocity and elevation whose solution behaviour would mimic the dynamics of the curved wavefronts seen in the SAR images, figures \ref{fig: Gibralter} and \ref{fig: South China Sea}. The computationally simulated solutions of the H1ModCH2 minimal model illustrated the emergence of trains of wavefronts which evolved into complex patterns as they propagated away from localised disturbances of equilibrium and interacted nonlinearly with each other through collisions, stretching and reconnection.
To mimic the wave-current interaction that drives the curved wavefronts seen in SAR images, we investigated a variety of computational simulation scenarios addressing two questions. First, we asked how an initial condition would evolve if there were a current possessing kinetic energy, but the surface were flat and thus had no gravitational potential energy. This question was answered in sections \ref{ssec: plate}, \ref{ssec: skew} and \ref{ssec: wedge}, to which we refer for details. Second, we considered the converse question for initial conditions in which a stationary elevation was released into still water, as discussed in sections \ref{ssec: single dam break} and \ref{ssec: dual dam break}. In addition, we have also mimicked the reconnection properties of the internal-wave signatures in the cases where wavefront collisions occur.
We had hoped that the singular momentum map solutions discussed in section \ref{ssec: singmomap} would emerge from our simulations of wavefront trains arising from localised disturbances. This would have reduced the problem of wave-current interaction among sea-surface wavefronts to the much simpler problem of mutual advection among curves in the plane. This would have been the case, of course, if we had started with the singular momentum map in equation \eqref{sing-soln-thm} as the solution ansatz, which would follow the dynamics in \eqref{singsoldynamics}. However, we hoped to see wave trains of singular solutions on embedded curve segments emerge from generic smooth confined initial conditions. In fact, we did see that effect in some of the simulations. Wave trains of peakon curves did form in some cases of our suite of simulated energy-exchange dynamics, most notably in the dam break problem with zero bathymetry in section \ref{ssec: single dam break}. However, in some other cases, such as the ``plate'' in section \ref{ssec: plate}, the wave trains of peakon curves did not form completely. That is, the singular solutions supported on embedded curves did not always form completely during the time intervals of our simulations. Moreover, for the ``dam break'' initial condition in section \ref{ssec: single dam break}, the leading peaks in the elevation began to form, then slowly ebbed away and disappeared as other peaks emerged behind them and later disappeared as well.
So, the question of the emergence of the singular momentum map solutions in \eqref{sing-soln-thm} for ModCH2 from smooth confined initial conditions remains open. In particular, the question is: under what conditions, if ever, will a solution of the 2D ModCH2 equations starting from a smooth confined initial condition produce a train of singular peakon curve segments?
\section*{Acknowledgements}
This paper was written in honour of Tony Bloch's 65th birthday. Happy birthday, Tony! Best wishes for many more fruitful happy years of collaboration with your friends in the geometric mechanics community.
We would also like to thank our other friends and colleagues who have generously offered their attention, thoughts and encouragement in the course of this work during the time of COVID-19.
DH is grateful for partial support from ERC Synergy Grant 856408 - STUOD (Stochastic Transport in Upper Ocean Dynamics). RH is supported by an EPSRC scholarship [grant number EP/R513052/1].
\input{references.tex}
\end{document}
\section{Introduction}
Ensembles of coupled oscillators are a popular object in studies of complex systems,
with a wide range of applications: from physical systems (lasers~\cite{Nixon_etal-13},
Josephson junctions~\cite{Wiesenfeld-Swift-95,Wiesenfeld-Colet-Strogatz-98}, chemical reactions~\cite{Totz_etal-18}) to engineering (pedestrians on a bridge~\cite{Eckhardt_et_al-07}) and the life sciences (neurons~\cite{Luke-Barreto-So-13,Laing-14}, nephron cells~\cite{Holstein_etal-01}, genetic circuits~\cite{Prindle_etal-12}). A common theoretical approach
includes different levels of reduction and idealization. If the units are self-sustained periodic oscillators and the coupling is weak, one can perform a phase reduction, neglecting variations of the oscillators' amplitudes, which appear at higher orders in the coupling strength~\cite{Kuramoto-84}. As a result, each oscillator is described by just one variable on a unit circle -- the phase, which enormously simplifies the analysis. Another idealization, which is appropriate for large ensembles, is the thermodynamic limit of an infinite
number of units. This allows for a formulation of the evolution
in terms of kinetic equations for the distribution of the phases. An important class of models are those with
global (or mean-field) coupling. Such models naturally appear, e.g., for Josephson junctions with a common load and for pedestrians on a bridge; in other cases (e.g., for neural ensembles) they
are justified by a huge number of interconnections between the units.
Among the setups for ensembles of globally coupled phase oscillators, the paradigmatic Kuramoto
model~\cite{kuramoto_model} and its generalizations~\cite{Sakaguchi-Kuramoto-86,Acebron-etal-05} are particularly popular. Here one assumes a relatively
simple coupling, where the dynamics of the oscillator's phase depends only on the first harmonics of
the phase itself. To define the coupling, one introduces mean fields which are circular moments
of the phase distribution. Different setups with identical deterministic units, as well as ones having different natural frequencies and/or being driven by noise, have been considered in the literature.
One of the striking properties of the Kuramoto-type models is the possibility of reducing the dynamics
to a finite-dimensional one. Watanabe and
Strogatz~\cite{watanabe_strogatz_1993,Watanabe-Strogatz-94} (WS) have demonstrated that ensembles
of identical, noise-free units can be exactly reduced to three dynamical equations (plus constants of motion). Ott and Antonsen~\cite{ott_antonsen_2008} (OA) found a particular family of phase distributions (wrapped Cauchy distribution) that is invariant under the dynamical evolution. This holds not only for identical units, but also
for ones with a Cauchy distribution of natural frequencies,
and for ones driven by white Cauchy
noise~\cite{Tanaka-20,tonjes_pikovsky_2020}. In contradistinction to the WS theory, the OA reduction is not valid for arbitrary initial states: they should belong to the OA invariant manifold. However, because there are arguments that the OA manifold is attracting (although not in a trivial sense; see the discussion in \cite{Ott-Antonsen-09,pietras_daffertshofer_2016,engelbrecht_mirollo_2020}), the OA equations correctly describe the asymptotic-in-time regimes.
The goal of this paper is to fill, at least partially, the gap between the WS and OA theories. We will develop, in the thermodynamic limit, a low-dimensional description of Kuramoto-type phase ensembles with Cauchy noise and/or a Cauchy distribution of natural frequencies, valid for arbitrary initial conditions. Of course, this reduction contains the WS and OA equations as particular cases.
The paper is organized as follows. In section \ref{sec:pf} we formulate the problem.
In section \ref{sec:fdr} we introduce our basic tools (generating functions) and
define a family of finite-dimensional invariant manifolds (these results have been also
presented in a short communication~\cite{cestnik_pikovsky_2022}). Section \ref{sec:tcv} contains the main result: we show how the evolution of generic states can be reduced to three complex variables plus a constant function.
Here we also discuss different possibilities of introducing these variables based on initial conditions.
In section \ref{sec:gsoa} we demonstrate stability of the OA manifold in the presence of noise.
In section \ref{sec:nfc} we consider identical noise-free oscillators, and demonstrate that
the dynamics reduces to the WS equations.
In section \ref{sec:res} we discuss how our approach allows for finding the
evolution outside of the OA manifold.
We conclude and discuss possible further developments in section~\ref{sec:concl}.
Many technical details are shifted from the main text to appendices.
\section{Problem formulation}
\label{sec:pf}
In this paper we consider populations of phase oscillators, subject to global coupling or to a global common force, in the thermodynamic limit of an infinite number of units. Consequently, the proper description is in terms of phase distribution functions.
Our theory is valid for a restricted class of systems: (i) the coupling/forcing must be proportional to the first harmonics of the phase only, and (ii) the oscillators can differ from each other only in additive terms in their phase dynamics, which are either Cauchy-distributed white noise terms, or Cauchy-distributed frequency constants, or a combination of both.
In this section we introduce these models.
\subsection{Ensemble of phase oscillators with independent Cauchy noise forces}
\label{sec:pfcn}
We consider an ensemble of noisy phase oscillators coupled in the first harmonic:
\begin{equation}
\dot{\varphi}_j = \omega(t) + \text{Im}\big[2 h(t) e^{-\mathrm{i} \varphi_j}\big] + \gamma \xi_j(t)\;.
\label{eq:phase_system_noise}
\end{equation}
Here $\omega$ is a combination of a natural frequency and a real-valued additive force, and $h(t)$ is a complex-valued force entering through the first harmonic of the phase. Both these quantities can depend on the mean fields of the population, thus readily describing global coupling. There are no restrictions on these forces; e.g., they can include noise, which is then a common noise for all elements of the population (cf.~\cite{Braun-etal-12,Gong_etal-19}).
The terms $\xi_j(t)$ represent independent, normalized Cauchy white noise forces, with $\gamma$ being the real and positive noise strength~\cite{Chechkin_etal-03,Toenjes_etal-13,Toenjes-Pikovsky-20,Tanaka-20}.
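Before passing to the thermodynamic limit, it may be helpful to see how \eqref{eq:phase_system_noise} can be simulated directly for a finite ensemble. The sketch below (in Python) is a minimal illustration, not part of the theory: it assumes the standard Kuramoto mean-field coupling $h=(K/2)Z_1$ with illustrative parameter values, and it uses the $1$-stability of the Cauchy law, by which the noise increment over a time step $\Delta t$ is Cauchy distributed with scale $\gamma\Delta t$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 10000, 1e-3, 10.0
gamma, omega, K = 0.1, 1.0, 2.0        # illustrative values
phi = rng.uniform(0.0, 2.0*np.pi, N)   # initial phases

for _ in range(int(T/dt)):
    Z1 = np.mean(np.exp(1j*phi))       # Kuramoto order parameter
    h = 0.5*K*Z1                       # assumed mean-field forcing
    drift = omega + np.imag(2.0*h*np.exp(-1j*phi))
    # Cauchy (1-stable) noise: increment scale grows linearly with dt
    phi += dt*drift + gamma*dt*rng.standard_cauchy(N)

print("final |Z1| =", abs(np.mean(np.exp(1j*phi))))
\end{verbatim}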
We consider the thermodynamic limit of infinitely many oscillators. In this case it is natural to describe the state by the phase density function $P(\varphi,t)$ and to express the original dynamics in terms of the continuity equation, a partial differential equation (PDE) in which the Cauchy noise enters as a fractional derivative on the right-hand side:
\begin{equation}
\frac{\partial}{\partial t}P + \frac{\partial}{\partial \varphi}\big( \dot{\varphi}P \big) =
-\gamma \Big|\frac{\partial}{\partial \varphi} \Big| P\;.
\label{eq:continuity}
\end{equation}
By $\Big|\frac{\partial}{\partial \varphi} \Big|$ we denote the operator which, in the Fourier representation, reduces to multiplication by $|n|$, where $n$ is the mode number.
The phase density is commonly expressed as a Fourier series:
\begin{equation}
\begin{split}
P(\varphi,t) &= \frac{1}{2\pi} \Big( -1 + \sum\limits_{n=0}^\infty Z_n(t) e^{-\mathrm{i} n\varphi} + \text{c.c.} \Big)\;,\\
\qquad Z_n(t) &= \langle e^{\mathrm{i} n\varphi_j} \rangle=\int\limits_{0}^{2\pi} d\varphi \, e^{\mathrm{i} n\varphi} P(\varphi,t)\;.
\end{split}
\label{eq:four}
\end{equation}
The quantities $Z_n$ are complex order parameters, also known as Kuramoto-Daido order parameters~\cite{kuramoto_model,daido_1996}. These circular moments of the phase distribution are in fact the ``mean fields'' which may govern the ensemble.
In terms of these order parameters, the dynamics is represented by an infinite set of ordinary differential equations (ODEs):
\begin{equation}
\frac{1}{n}\dot{Z}_n = (\mathrm{i}\omega-\gamma) Z_n +h Z_{n-1}-h^*Z_{n+1}\;, \quad n \geq 1\;,
\label{eq:Z_dyn}
\end{equation}
(here one can restrict consideration to positive $n$ only, so we replace $|n|$ in the noise term by $n$).
These equations have been discussed in Ref.~\cite{Toenjes-Pikovsky-20} and represent the exact
dynamics of system~\eqref{eq:phase_system_noise} in the thermodynamic limit, without any approximation or assumption. Normalization of the phase density implies $Z_0=1$.
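For completeness, we sketch the derivation of \eqref{eq:Z_dyn}: multiplying \eqref{eq:continuity} by $e^{\mathrm{i} n\varphi}$, integrating over $\varphi$, and integrating the transport term by parts gives
\begin{equation*}
\dot{Z}_n=\mathrm{i} n\int\limits_0^{2\pi} d\varphi\; e^{\mathrm{i} n\varphi}\,\dot{\varphi}\, P-\gamma n Z_n
=\mathrm{i} n\omega Z_n+ n h Z_{n-1}- n h^* Z_{n+1}-\gamma n Z_n\;,
\end{equation*}
where we used $\dot{\varphi}=\omega-\mathrm{i} h e^{-\mathrm{i}\varphi}+\mathrm{i} h^* e^{\mathrm{i}\varphi}$ and the fact that $\Big|\frac{\partial}{\partial\varphi}\Big|$ multiplies the $n$-th Fourier mode by $|n|$.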
\subsection{Ensemble with a Cauchy distribution of natural frequencies}
\label{sec:pfcf}
Equivalent equations can also be derived for the case of Cauchy-distributed natural frequencies, a situation widely considered since the initial formulation by Kuramoto~\cite{Kuramoto-75,Kuramoto-84}.
In this case we consider the terms $\xi_j$ in Eq.~\eqref{eq:phase_system_noise} as constants with a
normalized Cauchy distribution $g(\xi)=\pi^{-1}(1+\xi^2)^{-1}$.
The total additive force $\omega(t)+\gamma\xi_j$ can be interpreted as an instantaneous frequency of oscillator $j$.
If $\omega=\omega_0$ is a constant, then $\omega_0+\gamma \xi_j$ is the natural frequency of oscillator $j$.
In our derivation we follow the approach recently presented in \cite{engelbrecht_mirollo_2020}.
One introduces the parameter $\xi$ into the distribution of phases $P(\varphi,t;\xi)$,
and the analogue of equation \eqref{eq:continuity} for this distribution then reads
\begin{equation}
\frac{\partial}{\partial t}P\big\rvert_\xi + \frac{\partial}{\partial \varphi}\left( \left[\omega+\gamma\xi-\mathrm{i} he^{-\mathrm{i}\varphi}+\mathrm{i} h^*e^{\mathrm{i} \varphi}\right]P\big\rvert_\xi \right) =0\;,
\label{eq:con}
\end{equation}
where we used compact notation $P\big\rvert_\xi \equiv P(\varphi,t;\xi)$.
Of interest are the order parameters (circular moments), averaged over the additions to the frequency $\xi$:
\begin{equation}
Z_n(t)=\int\limits_{-\infty}^\infty d\xi\int\limits_{0}^{2\pi} d\varphi\, e^{\mathrm{i} n\varphi} P\big\rvert_\xi g(\xi)\;.
\label{eq:opxi}
\end{equation}
The main assumption allowing for explicit equations for these order parameters is the analyticity of the distribution
$P(\varphi,t;\xi)$ in the upper half-plane of complex $\xi$. This assumption was first introduced
by Ott and Antonsen in their seminal paper~\cite{ott_antonsen_2008}. The main reason behind it is the possibility
to calculate the integrals via residue integration.
Indeed, employing the residue theorem in \eqref{eq:opxi} for a contour closing in the upper half-plane
and taking the only pole at $\xi=\mathrm{i}$, one reduces \eqref{eq:opxi} to $Z_n(t)=\int d\varphi\, e^{\mathrm{i} n\varphi}P(\varphi,t;\mathrm{i})$.
Now, let us multiply \eqref{eq:con} with $e^{\mathrm{i} n\varphi} g(\xi)$ and integrate in $\xi$ and $\varphi$.
The only additional integral to be calculated (again by virtue of the residue method) is
\[
\int\limits_0^{2\pi} d\varphi \int\limits_{-\infty}^\infty d\xi\; \xi e^{\mathrm{i} n\varphi} P\big\rvert_\xi g(\xi)=\mathrm{i} \int\limits_{0}^{2\pi} d\varphi \, e^{\mathrm{i} n\varphi} P\big\rvert_\mathrm{i}=\mathrm{i} Z_n\;.
\]
This yields the system of equations
\[
\dot{Z}_n=\mathrm{i} n(\omega+\mathrm{i}\gamma)Z_n+n h Z_{n-1}-nh^*Z_{n+1}\;,
\]
which coincides with \eqref{eq:Z_dyn}.
We end this section with two remarks. First, the validity of Eqs.~\eqref{eq:Z_dyn} for independent Cauchy noises is unconditional, while for Cauchy-distributed constant additions to the frequency an extra assumption of analyticity has to be adopted; this assumption is commonly made in the OA theory and its applications, but one can construct distributions of the phase which, at least during some time interval, violate it~\cite{Pikovsky-Rosenblum-11}.
The second remark is that if one has both Cauchy-distributed constant and noisy additions to the frequency, with
intensities $\gamma_1$ and $\gamma_2$, then one can use Eqs.~\eqref{eq:Z_dyn} with the total
intensity $\gamma=\gamma_1+\gamma_2$.
\section{Generating functions and finite-dimensional reductions of the dynamics}
\label{sec:fdr}
\subsection{Ordinary and exponential generating functions}
In our treatment of the infinite system \eqref{eq:Z_dyn} we will make use of generating functions, which are formal power series.
We will use both ordinary generating function (OGF)~\footnote{The sum defining ordinary generating functions is typically considered from index $n=0$, but in our context it is convenient to start from $n=1$; note that we always have $f_0=1$ for normalization reasons.}
\[
\mathcal{F}(k)=\sum_{n=1}^\infty f_n k^n\;,
\]
and the exponential generating function (EGF)
\[
\mathsf{F}(k)=\sum_{n=0}^\infty f_n \frac{k^n}{n!}\;.
\]
There is no simple relation between these functions for the same sequence $\{f_n\}$, and in different situations we will use different
generating functions.
\subsection{Finite-dimensional reductions of the infinite system for circular moments}
In this section we briefly introduce the finite-dimensional reductions described in our recent letter \cite{cestnik_pikovsky_2022}.
First, we characterize the state with complex
order parameters $Z_n$ by introducing the complex-valued EGF:
\begin{equation}
\mathsf{Z}(k,t) = \sum\limits_{n=0}^\infty Z_n(t) \frac{k^n}{n!}\;.
\end{equation}
Then the dynamics~\eqref{eq:Z_dyn} are recast as a single
PDE (see Appendix \ref{ap:gf} for the derivation), which in contrast to Eq.~\eqref{eq:continuity} is generally complex (prime denotes
derivative with respect to $k$):
\begin{equation}
\dot{\mathsf{Z}} = (\mathrm{i} \omega-\gamma) k \mathsf{Z}' + h k \mathsf{Z} - h^* k \mathsf{Z}''\;.
\label{eq:F}
\end{equation}
The normalization condition is $\mathsf{Z}(0,t)=1$.
The structure of this equation allows for a particular solution with the exponential
ansatz $\mathsf{Z}(k,t) = e^{k Q(t)}$, revealing a single ODE for the complex variable $Q(t)$:
\begin{equation}
\dot{Q} = (\mathrm{i}\omega-\gamma) Q + h - h^*Q^2\;.
\label{eq:Q}
\end{equation}
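Indeed, substitution of this ansatz into \eqref{eq:F} gives
\begin{equation*}
k\dot{Q}\, e^{kQ}=(\mathrm{i}\omega-\gamma)kQ\, e^{kQ}+hk\, e^{kQ}-h^*kQ^2 e^{kQ}\;,
\end{equation*}
and division by $k\, e^{kQ}$ yields \eqref{eq:Q}.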
This is commonly known as the Ott-Antonsen ansatz~\cite{ott_antonsen_2008}; it reveals a two-dimensional invariant manifold of the infinite system \eqref{eq:Z_dyn}. In this case, higher circular moments are powers of the first one: $Z_n=Q^n$. The distribution of the phases is the wrapped Cauchy
distribution (a.k.a. Poisson kernel):
\begin{equation}
P(\varphi,t) = \frac{1}{2\pi} \frac{1-|Q|^2}{|1-Qe^{-\mathrm{i} \varphi}|^2}\;.
\label{eq:cdist}
\end{equation}
We recently generalized this solution with an ansatz allowing for an additional function~\cite{cestnik_pikovsky_2022}:
\begin{equation}
\mathsf{Z}(k,t) = e^{kQ(t)}\mathsf{B}(k,t)\;,
\label{eq:zb}
\end{equation}
in which case we obtain, in addition to \eqref{eq:Q}, another PDE for the newly
introduced function $\mathsf{B}(k,t)$:
\begin{equation}
\dot{\mathsf{B}} = \big( \mathrm{i} \omega -\gamma- 2h^*Q \big) k \mathsf{B}' - h^* k \mathsf{B}''\;.
\label{eq:bgf}
\end{equation}
Although at first glance this equation is similar to Eq.~\eqref{eq:F}, it does not contain
a term without a $k$-derivative of $\mathsf{B}(k,t)$, and thus allows for a more general dimensionality reduction.
Namely, we expand the function $\mathsf{B}(k)$ as an
EGF:
\begin{equation}
\mathsf{B}(k,t) = \sum\limits_{n=0}^\infty \beta_n(t)\frac{k^n}{n!}\;,
\end{equation}
($\beta_0\equiv 1$ due to normalization), thus introducing new dynamical
variables $\beta_n(t)$, that describe the dynamics with an infinite set of ODEs
(plus one ODE \eqref{eq:Q} for $Q$):
\begin{subequations}
\begin{align}
\dot{Q} &= (\mathrm{i}\omega-\gamma) Q + h - h^* Q^2\; \label{eq:eqQ}\;,\\
\frac{1}{n}\dot{\beta}_{n} &= (\mathrm{i}\omega -\gamma -2h^*Q)\beta_{n}-h^*\beta_{n+1}\;, \quad n \geq 1\; .\label{eq:beta}
\end{align}
\label{eq:Qbeta}
\end{subequations}
(see Appendix \ref{ap:gf} for the relation of \eqref{eq:beta} to \eqref{eq:bgf}).
Notice that the right-hand side of \eqref{eq:beta} contains only terms proportional to $\beta_n$ and $\beta_{n+1}$;
no term with $\beta_{n-1}$ is present.
This means that if the system is truncated at a finite number $N$ of variables $\beta_n$ (i.e. assuming that all higher terms
vanish identically: $\beta_{n\geq N} = 0$), the dynamics is exactly described by the first $N$ equations of
system~\eqref{eq:beta} for all times. These truncations represent dynamically invariant finite-dimensional manifolds.
The $\beta_n$ variables relate to the Kuramoto-Daido order parameters $Z_n$ via a modified binomial transform~(cf. \cite{number_theory_1993,number_theory_1994}):
\begin{equation}
\begin{split}
Z_n(t) &= \sum\limits_{m=0}^n \binom{n}{m} \beta_m(t) \big[Q(t)\big]^{n-m}\;, \\
\beta_n(t) &= \sum\limits_{m=0}^n \binom{n}{m} Z_m(t) \big[-Q(t)\big]^{n-m}\;.
\end{split}
\label{eq:beta_Z_trans}
\end{equation}
For example, the first three order parameters are expressed in terms of the newly introduced variables as
\begin{equation*}
\begin{aligned}
Z_1 &=Q+\beta_1\;,\qquad Z_2 = Q^2+2Q\beta_1+\beta_2\;,\\
& \quad Z_3 = Q^3+3Q^2\beta_1+3Q\beta_2+\beta_3\;.
\end{aligned}
\end{equation*}
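As an aside, these invariant finite-dimensional truncations are straightforward to integrate numerically. The following minimal sketch (in Python) is purely illustrative: it assumes the Kuramoto mean-field coupling $h=KZ_1/2$ with $Z_1=Q+\beta_1$, a constant frequency $\omega$, and arbitrary parameter values; the initial state $Z_1(0)=0.5$, $Z_n(0)=0$ for $n\geq2$ lies off the OA manifold.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N = 5                                # number of beta variables kept
K, omega, gamma = 2.0, 1.0, 0.1      # illustrative parameters

def rhs(t, v):
    c = v[0::2] + 1j*v[1::2]         # complex state [Q, beta_1, ..., beta_N]
    Q, beta = c[0], c[1:]
    h = 0.5*K*(Q + beta[0])          # Z_1 = Q + beta_1
    a = 1j*omega - gamma
    dQ = a*Q + h - np.conj(h)*Q**2
    dbeta = np.zeros(N, dtype=complex)
    for n in range(1, N + 1):        # beta_{N+1} = 0 closes the hierarchy
        nxt = beta[n] if n < N else 0.0
        dbeta[n - 1] = n*((a - 2.0*np.conj(h)*Q)*beta[n - 1] - np.conj(h)*nxt)
    d = np.concatenate(([dQ], dbeta))
    return np.column_stack((d.real, d.imag)).ravel()

# off-manifold initial state: Q(0) = 0, beta_1(0) = 0.5, higher betas zero
v0 = np.zeros(2*(N + 1)); v0[2] = 0.5
sol = solve_ivp(rhs, (0.0, 50.0), v0, max_step=0.01)
Z1 = (sol.y[0] + 1j*sol.y[1]) + (sol.y[2] + 1j*sol.y[3])
print("final |Z_1| =", abs(Z1[-1]))
\end{verbatim}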
\section{Reduction of the dynamics to three complex variables}
\label{sec:tcv}
As outlined in the previous section~\ref{sec:fdr}, there are many finite-dimensional invariant manifolds (with a finite
number of additional variables $\beta_n$) beyond the OA two-dimensional manifold (which corresponds to vanishing $\beta_n$ for $n\geq 1$).
However, as already mentioned in \cite{cestnik_pikovsky_2022}, it is not excluded that different $\beta_n$ could be dependent.
Below we show that this is indeed the case, and the dynamics of the whole (even infinite) hierarchy
of variables $\beta_n(t)$ can be reduced to two complex equations.
\subsection{Six-dimensional reduction}
We now introduce two new complex variables $y(t),s(t)$ and new dynamical equations:
\begin{subequations}
\begin{align}
\dot{Q} &= (\mathrm{i}\omega -\gamma) Q + h - h^* Q^2\;, \label{eq:eqQ_noise}\\
\dot{y} &= (\mathrm{i}\omega - \gamma - 2h^*Q) y\;, \label{eq:eqp_noise}\\
\dot{s} &= h^*y\;. \label{eq:eqs_noise}
\end{align}
\label{eq:zy_eqs}
\end{subequations}
Our goal below is to demonstrate, that these equations are equivalent to the infinite system~\eqref{eq:Qbeta} and therefore to the original system~\eqref{eq:Z_dyn}.
To show how this system represents the dynamics~\eqref{eq:Qbeta}, we first introduce auxiliary variables $\alpha_n(t)$ by transforming $\beta_n(t)$:
\begin{equation}
\beta_n(t) = y^n (t) \alpha_n (t)\;.
\label{eq:relation_br}
\end{equation}
We take the time derivative of this relation and divide both sides by $n \beta_n$:
\[
\frac{\frac{1}{n}\dot{\beta}_n}{\beta_n} = \frac{\dot{y}}{y} + \frac{\frac{1}{n}\dot{\alpha}_n}{\alpha_n} \;,
\]
and then insert the dynamics of $\beta_n$~\eqref{eq:beta} and of $y$~\eqref{eq:eqp_noise}:
\[
(\mathrm{i}\omega-\gamma-2h^*Q) - h^*\frac{\beta_{n+1}}{\beta_n} = (\mathrm{i}\omega-\gamma-2h^*Q) + \frac{\frac{1}{n}\dot{\alpha}_n}{\alpha_n} \;.
\]
Notice how the majority of the terms cancel, including all the effects of frequency $\omega(t)$ and noise $\gamma$.
As a result, the dynamics of the variables $\alpha_n(t)$ simplifies to:
\begin{equation}
\frac{1}{n} \dot{\alpha}_n = - h^* y\; \alpha_{n+1}\;.
\label{eq:Bn_dynamics}
\end{equation}
Now let us introduce the OGF of the variables $\alpha_n(t)$:
\begin{equation}
\mathcal{A}(k,t) = \sum\limits_{n=1}^\infty \alpha_n(t) k^n\;,
\end{equation}
and express the dynamics \eqref{eq:Bn_dynamics} in terms
of this OGF (see Appendix \ref{ap:gf} for the derivation):
\begin{equation}
\dot{\mathcal{A}} = -h^* y \Big[\mathcal{A}'-\frac{1}{k}\mathcal{A}\Big]\;.
\label{eq:R_dyn}
\end{equation}
Next we introduce yet another set of variables $\mu_n$.
This time we define them not directly, but via an expression for the corresponding OGF $\mathcal{M}(k,t) = \sum\limits_{n=1}^\infty \mu_n k^n$ in terms of $\mathcal{A}(k,t)$:
\begin{equation}
\frac{\mathcal{M}(k)}{k} = \frac{\mathcal{A}(k+s)}{k+s}
\label{eq:transform}
\end{equation}
(where $s$ is the variable in \eqref{eq:eqs_noise}).
By taking the time derivative of this equation, we obtain:
\[
\frac{\dot{\mathcal{M}}(k)}{k} = \frac{\dot{\mathcal{A}}(k+s)}{k+s} + \dot{s} \Big[ \frac{\mathcal{A}'(k+s)}{k+s} - \frac{\mathcal{A}(k+s)}{(k+s)^2} \Big]
\]
Now we insert the dynamics of $s$ according to Eq.~\eqref{eq:eqs_noise}, as well as the dynamics of $\mathcal{A}$ according to Eq.~\eqref{eq:R_dyn}, and find that the right-hand side of this relation vanishes. This means that the OGF $\mathcal{M}$ and the corresponding
variables $\mu_n$ are constant in time: $\dot{\mu}_n = 0$, $\dot{\mathcal{M}}(k) = 0$. In other words, the variables $\mu_n$
are integrals of motion.
This completes the proof that Equations~\eqref{eq:zy_eqs} are equivalent to Equations~\eqref{eq:Qbeta}. For a given set
of integrals $\mu_n$, using the dynamical variable $s(t)$, one can find from relation \eqref{eq:transform}
the set of variables $\alpha_n(t)$. From these variables one can restore, by virtue of relation \eqref{eq:relation_br}, all the variables $\beta_n(t)$, using the other dynamical variable $y(t)$.
It is instructive to rephrase the relation~\eqref{eq:transform}, formulated above in terms of OGFs, at the level of the variables (where it corresponds to a modified binomial transform; see Appendix \ref{sec:tranf_der} for the derivation):
\begin{equation}
\begin{split}
\mu_n &= \sum\limits_{m=n}^\infty \binom{m-1}{n-1} \alpha_m(t) \big[s(t)\big]^{m-n}\;, \\
\alpha_n(t) &= \sum\limits_{m=n}^\infty \binom{m-1}{n-1} \mu_m \big[-s(t)\big]^{m-n}\;.
\end{split}
\label{eq:BM_transform}
\end{equation}
Notice that we do not write the time argument of $\mu_n$ because these quantities are constants.
Using this relation, together with the relation \eqref{eq:relation_br} of the variables $\alpha_n(t)$ to $\beta_n(t)$ and the relation \eqref{eq:beta_Z_trans} of $\beta_n(t)$ to the order parameters $Z_n(t)$, we can express the order parameters for all times in terms of the constant function $\mathcal{M}(k)$ and the three dynamical variables $Q(t),y(t),s(t)$ (see Appendix \ref{sec:moments_with_M} for the derivation; here we omit the time dependence in the notation for convenience):
\begin{equation}
Z_n = Q^n - \sum\limits_{m=1}^n \binom{n}{m} Q^{n-m} y^m \sum\limits_{d=0}^{m-1} \frac{s^{d-m}}{d!} \mathcal{M}^{(d)} (-s)\;,
\label{eq:moments_with_mu}
\end{equation}
where $\mathcal{M}^{(d)}$ denotes the $d^\text{th}$ derivative of $\mathcal{M}$ with respect to $k$.
In particular, the first circular moment (the Kuramoto order parameter) expresses as:
\begin{equation}
Z_1 = Q-y \frac{\mathcal{M}(-s)}{s} \;.
\label{eq:first_moment}
\end{equation}
Notice that at $s=0$ one has to take the limit $\lim\limits_{\varepsilon\to0} \frac{\mathcal{M}(-\varepsilon)}{\varepsilon} = -\mu_1$.
For all moment expressions~\eqref{eq:moments_with_mu} we have to consider
similar limits: $\lim\limits_{\varepsilon\to0}\sum\limits_{d=0}^{m-1} \frac{\varepsilon^{d-m}}{d!}\mathcal{M}^{(d)}(-\varepsilon) = -\mu_m$.
When performing numerical integration, one requires an expansion
of the above expression~\eqref{eq:moments_with_mu} for small $Q$ and small $s$:
\begin{equation}
Z_n = y^n \left[\mu_n + n \mu_{n-1} \frac{Q}{y} - n \mu_{n+1} s \right] +
\mathcal{O}(Q^2, Qs, s^2) \;,
\label{eq:Z_expansion}
\end{equation}
we recall that $\mu_0 \equiv 1$ for normalization reasons.
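In practice, such limits can be handled by switching to the series expansion when $|s|$ is small. A minimal sketch (in Python) of this bookkeeping for the first moment \eqref{eq:first_moment}, assuming a polynomial $\mathcal{M}(k)$ with coefficients $\mu_n$, reads:
\begin{verbatim}
import numpy as np

def Z1_from_M(Q, y, s, mu, tol=1e-8):
    # First moment Z_1 = Q - y*M(-s)/s for the polynomial
    # M(k) = sum_{n>=1} mu[n-1]*k**n; series limit used for small |s|.
    mu = np.asarray(mu, dtype=complex)
    n = np.arange(1, len(mu) + 1)
    if abs(s) < tol:
        # M(-s)/s = -mu_1 + mu_2*s - ... ; keep the leading terms
        ratio = -mu[0] + (mu[1]*s if len(mu) > 1 else 0.0)
    else:
        ratio = np.sum(mu*(-s)**n)/s
    return Q - y*ratio
\end{verbatim}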
\subsection{Initial conditions}
\label{sec:ic}
The new set of variables $Q,y,s,\mu_n$ is not uniquely determined by the initial order parameters $Z_n$,
and in this section we discuss possible ways of determining them.
Different choices for the initial conditions of $Q,y,s,\mu_n$ define the constant function $\mathcal{M}(k)$ differently.
We first illustrate this with the simplest example of the OA manifold.
\subsubsection{Different choices of variables for the OA initial conditions}
As discussed above, on the OA manifold the EGF reads $\mathsf{Z}(k,t)=\exp[k Z(t)]$, and the order parameter $Z$ obeys $\dot Z=(\mathrm{i}\omega -\gamma) Z+h-h^* Z^2$. Suppose that, having a set of initial moments $Z_n(0)=Z^n(0)$, we want to introduce new variables $Q,\beta_n$ according to \eqref{eq:zb}. One can immediately see that a set $\beta_n(0)=\beta^n(0)$ is admissible if $Q(0)+\beta(0)=Z(0)$; in this case $\mathsf{B}(k,0)=\exp[k \beta(0)]$.
Furthermore, relation \eqref{eq:relation_br} allows for different choices of $y(0)$ and $\alpha_n(0)$. For any choice of $y(0)$, we obtain $\alpha_n(0)=\alpha^n$, with $\alpha=\beta(0)/y(0)$. It is easy to see that, independently of the choice of $\beta(0)$ and $y(0)$, the dynamics of the order parameters is the same. Indeed, in this case $\mathcal{M}(k)=\alpha k/(1-\alpha k)$, and the main order parameter, according to \eqref{eq:first_moment}, is $Z_1=Q+\alpha y/(1+\alpha s)$. Calculation of the derivative $\dot Z_1$ from the general dynamical equations \eqref{eq:zy_eqs} yields the correct equation $\dot Z_1=(\mathrm{i}\omega -\gamma) Z_1+h-h^* Z_1^2$, i.e., the system remains on the OA manifold. In this specific case of pure OA dynamics, it is natural to take $\alpha = 0$, so that $\mathcal{M} = 0$ and the only relevant equation is the OA equation \eqref{eq:eqQ_noise}~\cite{pikovsky_rosenblum_2008,pikovsky_rosenblum_2011}.
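For completeness, the calculation of $\dot Z_1$ mentioned above is one line: differentiating $Z_1=Q+\alpha y/(1+\alpha s)$ and inserting \eqref{eq:zy_eqs} gives
\begin{equation*}
\dot Z_1=\dot Q+\frac{\alpha \dot y}{1+\alpha s}-\frac{\alpha^2 y\,\dot s}{(1+\alpha s)^2}
=(\mathrm{i}\omega-\gamma)Q+h-h^*Q^2+\frac{\alpha y\,(\mathrm{i}\omega-\gamma-2h^*Q)}{1+\alpha s}-\frac{\alpha^2y^2 h^*}{(1+\alpha s)^2}\;,
\end{equation*}
which regroups exactly into $(\mathrm{i}\omega -\gamma) Z_1+h-h^* Z_1^2$.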
\subsubsection{Variant A: A simple choice of initial variables}
\label{sec:icsimp}
Here we present possibly the simplest choice of initial conditions for $Q,y,s,\mu_n$. We initially set $Q$ and $s$ to zero and set $y$ to one, so that all the variable sets $\mu_n, \alpha_n, \beta_n$ and $Z_n$ coincide:
\begin{equation}
\begin{split}
&Q(0) = 0\;,\\
&y(0) = 1\;,\\
&s(0) = 0\;,\\
&\mu_n = \alpha_n(0) = \beta_n(0) = Z_n(0)\;.
\end{split}
\label{eq:init_cond}
\end{equation}
Then the constant function can simply be determined by the initial order parameters:
\begin{equation}
\mathcal{M}(k) = \sum\limits_{n=1}^\infty Z_n(t=0)\, k^n\;.
\label{eq:M_A}
\end{equation}
As discussed above, this is not the only possible choice of initial conditions, and in some cases it might not be optimal.
Notice that for this choice, the OA manifold corresponds to the constants being powers of $Z_1(0)$: $\mu_n = Z^n_1(0)$, so they do not vanish, in contrast to the commonly considered setting~\cite{pikovsky_rosenblum_2008,pikovsky_rosenblum_2011}. The two descriptions are equivalent.
Next we list the specific functions $\mathcal{M}(k)$ for some examples of initial states in
this variant A of the initial conditions~\eqref{eq:init_cond}. We stress here that a representation
via an initial distribution density is valid for the interpretation of the system with identical
oscillators under Cauchy white noise (section \ref{sec:pfcn}). For the case of non-identical oscillators with a distribution
of frequencies (section \ref{sec:pfcf}), one should operate with the order parameters directly.
\begin{itemize}
\item A uniform distribution $P(\varphi,0) = \frac{1}{2\pi}$ corresponds to $\mathcal{M}(k) = 0$. Here all the moments vanish,
which is a trivial invariant state of the dynamics~\eqref{eq:continuity}.
\item A delta distribution of the phases $P(\varphi,0)=\delta(\varphi-\varphi_0)$ corresponds to $Z_n=\exp(\mathrm{i} n\varphi_0)$ and thus $\mathcal{M}(k) = \frac{e^{\mathrm{i} \varphi_0}k}{1-e^{\mathrm{i} \varphi_0}k}$.
\item A wrapped Cauchy distribution $P(\varphi,0) = \frac{1}{2\pi} \frac{1-|\mu|^2}{|1-\mu e^{-\mathrm{i}\varphi}|^2}$
with a complex parameter $\mu \in \mathbb{C}$
corresponds to $\mathcal{M}(k) = \frac{\mu k}{1-\mu k}$. The moments are powers of the parameter $\mu$:
$\mu_n = \mu^n$, which means this state is on the OA manifold~\cite{ott_antonsen_2008}.
\item A Kato-Jones distribution~\cite{kato-jones} corresponds to $\mathcal{M}(k) = c\frac{\mu k}{1-\mu k}$, with a complex constant $c \in \mathbb{C}$. It is a skewed/asymmetric generalization of the wrapped Cauchy distribution; its moments are powers multiplied by a complex constant: $\mu_n = c\, \mu^n$.
\item Distributions with a finite number of moments
$P(\varphi,0) = \frac{1}{2\pi} (1+\sum\limits_{n=1}^N (\mu_n e^{\mathrm{i} n\varphi}+\mu_n^* e^{-\mathrm{i} n\varphi}))$
correspond to polynomials $\mathcal{M}(k) = \sum\limits_{n=1}^N \mu_n k^n$.
\item Distributions with binomial moments $\mu_n = \binom{n}{m} \mu^n$, $m \geq 1$, correspond to rational functions $\mathcal{M}(k) = \frac{(\mu k)^m}{(1-\mu k )^{m+1}}$.
\item A half-uniform distribution
$$P(\varphi,0) = \begin{cases} \frac{1}{\pi} & \text{if}\ \varphi \in (\varphi_0,\varphi_0+\pi)\;,\\0 & \text{else}\;,\end{cases}$$
corresponds to $\mathcal{M}(k) = \frac{2\mathrm{i}}{\pi}\arctanh(e^{\mathrm{i}\varphi_0}k)$. Here odd moments are fractions: $\mu_{2n-1} = \frac{2\mathrm{i}}{\pi} \frac{\exp(\mathrm{i}(2n-1)\varphi_0)}{2n-1}$,
and even ones are equal to zero: $\mu_{2n} = 0$, $n\geq1$.
\item A sawtooth distribution $P(\varphi,0) = \frac{1}{\pi}(1-\frac{\varphi-\varphi_0}{\pi}+ \lfloor \frac{\varphi-\varphi_0}{\pi}\rfloor) $
corresponds to $\mathcal{M}(k) = -\frac{\mathrm{i}}{\pi}\log(1-e^{\mathrm{i} 2\varphi_0}k^2)$. Here even moments are fractions: $ \mu_{2n} = \frac{2\mathrm{i}}{\pi}\frac{\exp(\mathrm{i} 2n\varphi_0)}{2n}$,
and odd ones are equal to zero: $\mu_{2n-1} = 0$, $n\geq1$.
\end{itemize}
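The correspondences listed above are straightforward to verify numerically. As a minimal illustration (a Python sketch; the value of $\varphi_0$, the number of moments, and the grid size are arbitrary choices, not taken from the text), one can compare the circular moments of the half-uniform density, computed by a midpoint rule, with the Taylor coefficients of $\mathcal{M}(k) = \frac{2\mathrm{i}}{\pi}\arctanh(e^{\mathrm{i}\varphi_0}k)$:
\begin{verbatim}
import numpy as np

# Midpoint-rule circular moments mu_n = <exp(i n phi)> of the
# half-uniform density P(phi) = 1/pi on (phi0, phi0 + pi).
# phi0, n_max and the grid size N are illustrative choices.
phi0, n_max, N = 0.7, 8, 200000
phi = phi0 + (np.arange(N) + 0.5) * np.pi / N
mu_num = [np.mean(np.exp(1j * n * phi)) for n in range(1, n_max + 1)]

# Taylor coefficients of (2i/pi) artanh(e^{i phi0} k):
# odd moments (2i/pi) e^{i n phi0} / n, even moments zero.
mu_ref = [(2j / np.pi) * np.exp(1j * n * phi0) / n if n % 2 else 0.0
          for n in range(1, n_max + 1)]
print(np.allclose(mu_num, mu_ref))  # True
\end{verbatim}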
We end this subsection with the following remark: if the initial distribution of the phases is a weighted sum
of ``elementary'' distributions $P(\varphi,0)=\sum_m c_m P_m(\varphi,0)$ with complex weights $c_m \in \mathbb{C}$ that add up to 1: $\sum_m c_m=1$ (additionally
one should ensure that $P(\varphi,0)\geq 0$), then the constant generating function is the
weighted sum of corresponding
``elementary'' generating functions $\mathcal{M}(k)=\sum_m c_m \mathcal{M}_m(k)$. In particular,
in \cite{engelbrecht_mirollo_2020,Ichiki-Okumura-20}
a superposition of several wrapped Cauchy distributions has been considered as an initial state;
in terms of the approach above this corresponds to $\mathcal{M}(k)=\sum_m c_m \frac{\varkappa_m k}{1-\varkappa_m k}$,
where complex parameters $\varkappa_m$ characterize partial distributions.
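This superposition rule is also easy to check numerically. In the following minimal sketch (the weights and the complex parameters $\varkappa_m$ are illustrative values), the moments of a mixture of two wrapped Cauchy densities are compared with the coefficients of the superposed generating functions:
\begin{verbatim}
import numpy as np

# Mixture of two wrapped Cauchy densities; weights and parameters
# are illustrative (real weights adding up to 1 keep P >= 0 here).
kap1, kap2, c1 = 0.3 + 0.2j, -0.4j, 0.6
N = 200000
phi = (np.arange(N) + 0.5) * 2 * np.pi / N

def wrapped_cauchy(phi, mu):
    return (1 - abs(mu)**2) / (2 * np.pi
                               * np.abs(1 - mu * np.exp(-1j * phi))**2)

P = c1 * wrapped_cauchy(phi, kap1) + (1 - c1) * wrapped_cauchy(phi, kap2)
mu_num = [2 * np.pi * np.mean(P * np.exp(1j * n * phi))
          for n in range(1, 6)]
mu_ref = [c1 * kap1**n + (1 - c1) * kap2**n for n in range(1, 6)]
print(np.allclose(mu_num, mu_ref))  # True
\end{verbatim}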
\subsubsection{Variant B: Initial conditions on the base of the OA manifold}
\label{sec:icoa}
Often initial states that are close to the OA manifold are of interest. Suppose that the order parameters are
well described as powers of a complex constant, with minor perturbations:
\begin{equation}
Z_n(0) = R^n + \varepsilon_n\;.
\label{eq:perturbation}
\end{equation}
In this case a different initial condition appears natural:
\begin{equation}
\begin{split}
&Q(0) = R\;,\\
&y(0) = 1\;,\\
&s(0) = 0\;,\\
&\mu_n = \alpha_n(0) = \beta_n(0) = \sum\limits_{m=1}^n \binom{n}{m}\varepsilon_m (-R)^{n-m}\;,
\end{split}
\label{eq:init_cond_pert}
\end{equation}
and the constant function $\mathcal{M}(k)$ expresses as:
\begin{equation}
\mathcal{M}(k) = \sum\limits_{n=1}^\infty k^n \sum\limits_{m=1}^n \binom{n}{m} \varepsilon_m (-R)^{n-m}\;.
\label{eq:M_B}
\end{equation}
This function is small if the values of $\varepsilon_n$ are small. Notice, however, that this ``perturbation'' approach is actually global:
smallness of $\varepsilon_n$ is not required, and $\mathcal{M}(k)$ need not be small for this description to be valid.
Note that the definition of the function $\mathcal{M}(k)$ differs for different choices of initial conditions (cf. \eqref{eq:M_A} and \eqref{eq:M_B}), so the list of specific $\mathcal{M}(k)$ functions in Section~\ref{sec:icsimp} does not apply here. The moments are still described by Eq.~\eqref{eq:moments_with_mu}, but for a numerical integration one needs to expand them beyond~\eqref{eq:Z_expansion}, which is valid only for small $s$.
A simple specific example of a perturbation to the OA manifold
in \eqref{eq:perturbation} is one where only the first harmonic term is perturbed: $\varepsilon_n=0$ for $n>1$. In this case $\mathcal{M}(k) = \varepsilon_1 \frac{k}{(1+Rk)^2}$, so the first moment is given by $Z_1 = Q + \varepsilon_1 \frac{y}{(1-Rs)^2}$. The dynamics follow~\eqref{eq:zy_eqs}.
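The binomial formula in \eqref{eq:init_cond_pert} is easy to evaluate in practice; the following minimal sketch (with illustrative values of $R$ and $\varepsilon_1$, not taken from the text) checks it against the Taylor coefficients of the closed form $\mathcal{M}(k)=\varepsilon_1 k/(1+Rk)^2$ quoted above:
\begin{verbatim}
import numpy as np
from math import comb

# Constants mu_n from Eq. (init_cond_pert) when only eps_1 is
# nonzero; R and eps1 are illustrative values.
R, eps1, n_max = 0.5 + 0.2j, 0.05, 12
eps = {1: eps1}
mu = [sum(comb(n, m) * eps.get(m, 0.0) * (-R)**(n - m)
          for m in range(1, n + 1)) for n in range(1, n_max + 1)]

# Taylor coefficients of the closed form M(k) = eps_1 k / (1 + R k)^2.
mu_ref = [eps1 * n * (-R)**(n - 1) for n in range(1, n_max + 1)]
print(np.allclose(mu, mu_ref))  # True
\end{verbatim}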
We mention here that in some cases, an extension of the set of variables might be appropriate. As an example,
we show in Appendix \ref{sec:ap} a possibility to describe an initial
state similar to~\eqref{eq:perturbation} with a system of four complex variables.
\section{Global stability of the OA manifold}
\label{sec:gsoa}
Stability of the OA manifold has been discussed in Refs.~\cite{Ott-Antonsen-09,pietras_daffertshofer_2016,engelbrecht_mirollo_2020}.
Here we demonstrate how these results are reproduced in our approach. To show the attractiveness of the OA manifold it
is enough to demonstrate that the variable $y$ tends to zero $y\to 0$. Indeed, for $y=0$ we have from \eqref{eq:relation_br}
$\beta_n=0$, $n\geq 1$, and from \eqref{eq:zb} it follows that the solution is on the OA manifold.
Let us introduce two real non-negative variables $Y=|y|$ and $X=1-|Q|^2$. Applying Eqs.~\eqref{eq:eqQ_noise},\eqref{eq:eqp_noise} we obtain
\begin{equation}
\frac{\dot Y}{Y} =-\gamma -(h^*Q+hQ^*),\quad \frac{\dot X}{X} = - 2\gamma +\frac{2\gamma}{X} -(h^*Q+hQ^*) \;.
\label{eq:yh}
\end{equation}
Eliminating the common term $(h^*Q+hQ^*)$ from the two equations, we obtain an equation
\[
\frac{\dot Y}{Y} = \frac{\dot X}{X}+\gamma-\frac{2\gamma}{X}\;,
\]
that can easily be integrated on the interval $(0,t)$:
\begin{equation}
Y(t) = Y(0) \frac{X(t)}{X(0)}\exp\left[\gamma t -2\gamma \int\limits_{0}^t \frac{1}{X(t')} dt'\right]\;.
\label{eq:Y}
\end{equation}
If we use initial conditions in variant A, then $Y(0)=X(0)=1$. Furthermore, because $0\leq |Q|\leq 1$, the variable $X$ is bounded $0\leq X \leq 1$ (one can easily see this from \eqref{eq:yh}, which implies that at $X=0$ the derivative $\dot X=2\gamma$
is positive, thus $X$ cannot vanish). This property ensures the inequality $\frac{1}{X}\geq 1$, which from \eqref{eq:Y} yields the upper bound on the convergence of variable $y$ to zero:
\begin{equation}
Y(t) = |y(t)|\leq \exp[-\gamma t]\;.
\label{eq:y_bound}
\end{equation}
The $y$ variable thus decays at least exponentially with exponent $\gamma$.
This proves attractiveness of the OA manifold for the system \eqref{eq:Z_dyn} for $\gamma>0$.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.99\columnwidth]{OA_stability.pdf}
\caption{Time series of the coupled Josephson junction array~\eqref{eq:jj1} for three different noise strengths: $\gamma = 0$ depicted in red, $\gamma = 10^{-4}$ in blue, and $\gamma = 2\times10^{-4}$ in green. Panel $(a)$ shows the modulus of the first order parameter $Z_1$; panel $(b)$ shows the modulus of the variable $y$. The initial conditions for all three cases are the same: $Q(0) = s(0) = 1-y(0) = 0$ (variant A) and constant function $\mathcal{M}(k) = 0.4 k$, which means the initial distribution has only one harmonic. In the case of no noise, chaotic dynamics are observed, while in the noisy cases there is an initial chaotic stage which is followed by a clear exponential decay of the $y$ variable. In panel $(b)$ the upper bound~\eqref{eq:y_bound} for all three cases is shown with a dashed black line. }
\label{fig:jj}
\end{figure}
\subsection{Example}
Here we illustrate stability
of the OA manifold numerically. We take the example already explored in~\cite{cestnik_pikovsky_2022}: an array of overdamped noisy Josephson junctions coupled via
a resistive load~\cite{Watanabe-Strogatz-94}. The equations for the Josephson phases read
\begin{equation}
\dot\varphi_j=1 + a\sin(\varphi_j)+\frac{\varepsilon}{N}\sum_{n=1}^N\sin(\varphi_n)+ \gamma \xi_j(t)\;.
\label{eq:jj1}
\end{equation}
In terms of the basic model \eqref{eq:phase_system_noise}, this corresponds to the choice of $\omega,h$:
$\omega=1+\varepsilon\,\text{Im}\langle Z_1 \rangle$ and $h=-\frac{a}{2}$.
In \cite{cestnik_pikovsky_2022} it was demonstrated that for $\gamma=0$ the dynamics outside of the OA
manifold, for $a=1.5$, $\varepsilon=-0.7$, and the initial state $Z_1(0) = 0.4$, $Z_{n>1}(0) = 0$,
are chaotic.
In Figure \ref{fig:jj} we again demonstrate chaotic behavior
in this system for $\gamma=0$, and a transition
to regular dynamics for $\gamma=10^{-4}$ and $\gamma=2\times10^{-4}$.
The exponential decay of $|y|$, which is bounded by~\eqref{eq:y_bound}, is evident at large times.
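For reference, the finite-size ensemble itself can be simulated directly. The sketch below (Python; the ensemble size, time step, horizon, and the simple Euler scheme for the L\'evy increments are illustrative choices, so it reproduces the figure only qualitatively) prepares the single-harmonic initial state with $Z_1(0)=0.4$ by rejection sampling and adds Cauchy white noise via Cauchy-distributed increments:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_single_harmonic(n, r=0.4):
    # Rejection sampling from P(phi) = (1 + 2 r cos(phi)) / (2 pi),
    # which has Z_1(0) = r and Z_{n>1}(0) = 0.
    out = np.empty(0)
    while out.size < n:
        cand = 2 * np.pi * rng.random(2 * n)
        keep = rng.random(2 * n) < (1 + 2 * r * np.cos(cand)) / (1 + 2 * r)
        out = np.concatenate([out, cand[keep]])
    return out[:n]

# Finite-size Josephson array, Eq. (jj1); N, dt, T are illustrative.
N, dt, T = 20000, 0.005, 100.0
a, eps, gamma = 1.5, -0.7, 1e-4
phi = sample_single_harmonic(N)
for _ in range(int(T / dt)):
    Z1 = np.mean(np.exp(1j * phi))
    drift = 1 + a * np.sin(phi) + eps * Z1.imag
    # Increments of a Cauchy (alpha = 1 stable) process over a time
    # step dt are Cauchy distributed with scale gamma * dt.
    phi += drift * dt + gamma * dt * rng.standard_cauchy(N)
print(abs(np.mean(np.exp(1j * phi))))  # |Z_1| at time T
\end{verbatim}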
\section{Noise-free case and a relation to the Watanabe-Strogatz theory}
\label{sec:nfc}
Watanabe and Strogatz~\cite{watanabe_strogatz_1993,Watanabe-Strogatz-94} demonstrated that a population of identical noiseless oscillators can be
reduced to three real dynamical variables plus constants of motion. To see that this case is included
in our theory,
let us consider identical oscillators and no noise, thus taking $\gamma = 0$.
In this case one can reduce two complex variables $y(t)$ and $s(t)$ to one angle variable $\theta(t)$
by virtue of
\begin{equation}
e^{\mathrm{i} \theta} = \frac{y}{1-|Q|^2},\qquad s=Q^*e^{\mathrm{i} \theta}\;.
\end{equation}
The evolution of the new variable follows from \eqref{eq:eqp_noise},\eqref{eq:eqs_noise}:
\begin{equation}
\dot{\theta} = \omega - \mathrm{i}\left(hQ^* - h^*Q\right) = \omega + 2\,\text{Im}\!\left(hQ^*\right)\;.
\label{eq:theta_dyn}
\end{equation}
This variable $\theta$ just corresponds to the WS angle variable, while $Q$ is the WS order parameter~\cite{watanabe_strogatz_1993,Watanabe-Strogatz-94}. The two equations~\eqref{eq:eqQ_noise} (with $\gamma=0$) and \eqref{eq:theta_dyn} then represent the exact evolution.
In the framework of the WS theory, the complex constants $\mu_n$ are interpreted as the circular moments
of the constant transformed phase variables $\psi_j$: $\mu_n = \langle e^{\mathrm{i} n\psi_j} \rangle$.
The relation between the original phases $\varphi_j$ and constant phases $\psi_j$ is given by the
M\"obius transform \cite{Marvel-Mirollo-Strogatz-09}:
\begin{equation}
e^{\mathrm{i} \varphi_j} = \frac{e^{\mathrm{i} (\psi_j+\theta)}+Q}{1+Q^* e^{\mathrm{i} (\psi_j+\theta)}} \; , \qquad e^{\mathrm{i} (\psi_j+\theta)} = \frac{e^{\mathrm{i} \varphi_j}-Q}{1-Q^* e^{\mathrm{i} \varphi_j}} \;.
\label{eq:mobius}
\end{equation}
Watanabe and Strogatz have shown that these transformations are also valid for a finite number of oscillators,
but this case is not covered by our approach.
We mention here that in the WS formalism there is also a freedom in choosing the order parameter $Q$ and the phase variable $\theta$;
this freedom is similar to the one discussed in Section \ref{sec:ic} above.
\section{Lyapunov spectrum}
Our theory describes the dynamics outside of the OA manifold, and is thus suitable for consideration
of small perturbations transversal to this manifold. Such perturbations define the Lyapunov spectrum of the dynamics, together
with the perturbations tangential to this manifold. The system of equations \eqref{eq:Qbeta} is most suitable for this
analysis. The OA manifold corresponds to vanishing $\beta_n$, therefore Eqs.~\eqref{eq:beta} define the transversal perturbations.
Since these equations are a skew system, each $\beta_n$ defines two Lyapunov exponents (because $\beta_n$ are complex).
One can straightforwardly derive from \eqref{eq:beta}, omitting the skew term $\sim \beta_{n+1}$ on the r.h.s.,
the averaged evolution for the magnitude of a perturbation:
\[
\frac{1}{2n}\av{\frac{d}{dt} \ln |\beta_n|^2}=\av{-\gamma-h^*Q-hQ^*}=\Lambda\;.
\]
Thus, the Lyapunov spectrum consists of the exponents within the OA manifold (which are calculated
using the linearised Eq.~\eqref{eq:Q}), and of the doubly degenerate values $n\Lambda$, $n=1,2,3,\ldots$.
\section{Response of the Ott-Antonsen regime to a resetting}
\label{sec:res}
As has been already discussed in the literature~\cite{Ott-Antonsen-09,pietras_daffertshofer_2016,engelbrecht_mirollo_2020}, in the system of equations \eqref{eq:zy_eqs}
the OA manifold is attracting if $\gamma > 0$ (at least in the weak sense, but because we follow only
the moments of the phase distribution, such an attraction is enough).
In terms of variables $Q,y,s$ with a nontrivial constant
function $\mathcal{M}(k)$, this corresponds to $y\to 0$ as $t\to\infty$ (see Section \ref{sec:gsoa}).
For the conservative case $\gamma = 0$, see Section \ref{sec:nfc} above.
The approach above allows for calculating the evolution from an arbitrary state to the OA manifold via solutions of \eqref{eq:zy_eqs}. One can reformulate such a problem as a resetting problem: one starts with the dynamics on the OA manifold; then an instant ``resetting'' to a state outside of this manifold is performed.
The evolution of \eqref{eq:zy_eqs} then shows what will be the final state after re-attraction to the OA manifold. A particular question of interest
depends on the type of the attractors on the OA manifold. If there is only one global attractor, then the trajectory returns to it. If this
attractor is periodic or quasiperiodic, the returning trajectory will be phase shifted with respect to the unperturbed one (in the
quasiperiodic case one expects phase shifts in every direction of independent oscillations). Here one speaks about a phase resetting or a
phase response curve (PRC)~\cite{Canavier-06,smeal2010phase}. For a chaotic global attractor, generally one does not expect a resetting to have a drastic effect (although for strange
attractors with well-defined phase variables a phase resetting similar to the periodic case can be defined~\cite{Schwabedal-Pikovsky-Kralemann-Rosenblum-12}; it can lead to phase
synchronization of chaos if periodically repeated \cite{Pikovsky-Rosenblum-Osipov-Kurths-97}). In the case of multistability, the
most drastic effect of resetting would be a jump to another basin of attraction, so that the final state will be another attractor on the OA
manifold (in case of multistable periodic attractors one can additionally follow the phase
response~\cite{Grines-Osipov-Pikovsky-18}). Below we consider several examples, for small and large resettings.
\subsection{Perturbation theory in terms of an (infinitesimal) PRC}
Suppose we have a state on the OA manifold with a complex order parameter $R$, so that $\langle e^{\mathrm{i} n\varphi_j}\rangle=R^n$.
Let us apply to all the phases a transformation
\begin{equation}
\varphi\to\varphi+\varepsilon f(\varphi)\;,
\label{eq:phtr}
\end{equation}
where $f(\varphi)=\sum_m f_m e^{\mathrm{i} m\varphi}$ is a PRC function (given by its Fourier representation) and $\varepsilon \ll 1$. Let us calculate the circular moments just after a resetting, to first order in $\varepsilon$:
\begin{gather*}
Z_n=\av{e^{\mathrm{i} n(\varphi+\varepsilon f(\varphi))}}\approx \av{e^{\mathrm{i} n \varphi}(1+\mathrm{i} n\varepsilon f(\varphi))}= \\
=R^n+\mathrm{i} n \varepsilon \sum\limits_{m=-\infty}^\infty f_m \av{e^{\mathrm{i} (n+m)\varphi}}\;.
\end{gather*}
Since $m$ can be negative, the latter average does not reduce to a single simple expression, because
\[
\av{e^{\mathrm{i} (n+m)\varphi}}=\begin{cases} R^{n+m} &n+m\geq 0\;,\\
(R^*)^{|n+m|} & n+m<0\;.
\end{cases}
\]
Therefore we restrict ourselves to the two simplest cases.
\subsubsection{First harmonics resetting}
In this case $f(\varphi)=f_1e^{\mathrm{i} \varphi}+f_1^* e^{-\mathrm{i}\varphi}$.
In this situation, for $n\geq 1$ we have $n+m\geq 0$ and therefore for both $m=\pm 1$
we can write $
\av{e^{\mathrm{i} (n+m)\varphi}}=R^{n+m}
$.
Thus
\[
Z_n=R^n(1+\mathrm{i} n\varepsilon (f_1R+f_1^*R^{-1}))\;,\qquad n\geq 0\;.
\]
Calculation of the EGF yields
\[
\mathsf{Z}(k,0)=e^{kR}(1+\mathrm{i}\varepsilon k(f_1R^2+f_1^*))\;,
\]
where we used $\sum_n n \frac{x^n}{n!}=xe^{x}$.
Let us now transform to variables $Q,y,s$ and take $Q(0)=R$ (like in variant B, Section~\ref{sec:icoa}).
This means that the EGF $\mathsf{B}$ is
\[
\mathsf{B}(k,0)=1+\mathrm{i}\varepsilon k(f_1R^2+f_1^*)\;.
\]
We come to the conclusion that only one variable $\beta_1$ is non-zero, and the system can be solved directly and exactly with the variables $Q,\beta_1$ by virtue of Eqs.~\eqref{eq:Qbeta}; there is no need to go to the full system \eqref{eq:zy_eqs}. Alternatively, one can consider only the first two equations of system~\eqref{eq:zy_eqs}
with the function $\mathcal{M}(k) = \beta_1(0) k$, in which case the third variable $s$ does not matter. If we rewrite Eqs.~\eqref{eq:Qbeta} in terms of the variables $(Z_1,\beta_1)$, we obtain
\begin{equation*}
\begin{aligned}
\dot Z_1&=(\mathrm{i}\omega -\gamma)Z_1+h-h^*Z_1^2+h^*\beta_1^2\;,\\
\dot \beta_1&=(\mathrm{i}\omega-\gamma-2h^*(Z_1-\beta_1))\beta_1\;.
\end{aligned}
\label{eq:Zbeta}
\end{equation*}
One can see that the correction to the standard OA equation is $h^*\beta_1^2\sim \varepsilon^2$. Thus,
in the first order in $\varepsilon$, inclusion of the additional variable $\beta_1$ is irrelevant and the resetting is well described within the OA equation.
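For a quick numerical illustration of this point, the two-variable system can be integrated directly. The following minimal Euler sketch (Python) adopts, purely for concreteness, the Josephson forcing of Eq.~\eqref{eq:jj1}, $h=-a/2$ and $\omega = 1+\varepsilon\,\text{Im}\,Z_1$; the initial values, damping, step size, and horizon are illustrative assumptions:
\begin{verbatim}
import numpy as np

# Euler integration of the closed (Z_1, beta_1) system above, with
# the Josephson forcing h = -a/2, omega = 1 + eps*Im(Z_1) as an
# example; all parameter values are illustrative.
a, eps, gamma, dt = 1.5, -0.7, 1e-2, 1e-3
h = -a / 2
Z1, b1 = 0.4 + 0j, 0.05 + 0j   # a state near the OA manifold
for _ in range(int(200 / dt)):
    omega = 1 + eps * Z1.imag
    dZ1 = ((1j*omega - gamma)*Z1 + h - np.conj(h)*Z1**2
           + np.conj(h)*b1**2)
    db1 = (1j*omega - gamma - 2*np.conj(h)*(Z1 - b1))*b1
    Z1, b1 = Z1 + dt*dZ1, b1 + dt*db1
print(abs(b1))  # should decay on average towards the OA manifold
\end{verbatim}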
\subsubsection{Second harmonics resetting}
In this case $f(\varphi)=f_2e^{\mathrm{i} 2\varphi}+f_2^* e^{-\ii2\varphi}$, and we have
$\sum_m f_m \av{e^{\mathrm{i} (n+m)\varphi}}=f_2\av{e^{\mathrm{i}(n+2)\varphi}}+
f_2^* \av{e^{\mathrm{i} (n-2)\varphi}}$. Thus, the term with $n=1$ reads $f_2R^3+f_2^* R^*$,
while all the higher-order terms $n\geq 2$ can be written in a unified way $f_2 R^{n+2}+f_2^* R^{n-2}$.
Rewriting the term with $n=1$ as
$
f_2R^3+f_2^*R^{-1}+[f_2^*R^*-f_2^*R^{-1}]
$, we
obtain
\[
Z_n=R^n+\mathrm{i} \varepsilon n R^n[f_2R^2+f_2^* R^{-2}]+\mathrm{i}\varepsilon \delta_{n,1}[f_2^*R^*-f_2^*R^{-1}]\;,
\]
where $\delta_{n,1}$ is the Kronecker delta.
This yields the following EGF
\[
\mathsf{Z}(k,0)=
e^{kR}(1+\mathrm{i} \varepsilon k(f_2 R^3+f_2^* R^{-1}))+ \mathrm{i}\varepsilon k[f_2^*R^*-f_2^*R^{-1}]\;.
\]
Now the EGF $\mathsf{B}(k,0)$ is, if we choose $Q(0)=R$, nontrivial
\[
\mathsf{B}(k,0)=
1+\mathrm{i} \varepsilon k (f_2 R^3+f_2^* R^{-1})+\mathrm{i}\varepsilon ke^{-kR} [f_2^*R^*-f_2^*R^{-1}]\;.
\]
This allows for obtaining a closed expression for the
constant function (for choice $y(0)=1,\;s(0)=0$) as
\[
\mathcal{M}(k)=\mathrm{i}\varepsilon k\left[f_2R^3+f_2^*\frac{R^*+2k+ R k^2}{(1+Rk)^2}\right]\;.
\]
After this, the system \eqref{eq:zy_eqs} is to be solved.
\subsection{Large resettings}
Unfortunately, a transformation of the type \eqref{eq:phtr} is hardly tractable for large $\varepsilon$. Here we
discuss another way of resetting, which leads to closed expressions even for large changes of the phases. This approach is applicable to identical oscillators subject to Cauchy white noise, but not
to oscillators with a distribution of natural frequencies.
Suppose, in the OA state with order parameters $Z_n=R^n$,
we randomly choose a portion $\varepsilon$ of all oscillators and reset
them completely (this means that they ``forget'' their old states), cf. \cite{Gupta-resetting}. We consider two variants below.
\subsubsection{Random resetting}
Here we assume that the new phases in the affected set become uniformly distributed in the interval $[0,2\pi)$.
These oscillators do not contribute to new order parameters which thus take the values $Z_n=(1-\varepsilon)R^n$.
This corresponds to a weighted superposition of the wrapped Cauchy distribution and the uniform distribution.
If one uses variant A of initial conditions, then evolution
starts from the initial values $Q(0)=s(0)=0$, $y(0)=1$ and $\mathcal{M}(k)=(1-\varepsilon)Rk/(1-Rk)$ is determined via Eq.~\eqref{eq:M_A}. Alternatively,
adopting variant B, one can start from the initial conditions $Q(0)=R$, $y(0)=1$, $s(0)=0$ and then $\mathcal{M}(k)=-\varepsilon Rk/(1+Rk)$ is determined via Eq.~\eqref{eq:M_B}. Then the system is evolved by integrating Eqs.~\eqref{eq:zy_eqs}.
However, due to the simplicity of this example, there is an even easier way of treating this situation. Notice that the
moments can be viewed as a superposition of two OA contributions, referred to as Poisson kernels in Ref.~\cite{engelbrecht_mirollo_2020}:
\begin{equation}
Z_n = (1-\varepsilon)Q_1^n + \varepsilon Q_2^n\;,
\label{eq:two_kernels}
\end{equation}
where initially $Q_1(0) = R$ and $Q_2(0) = 0$.
In this case therefore, one can evolve the system by considering two OA equations~\eqref{eq:Q}, which only interact through the forcing $h$, and the solution maintains the form~\eqref{eq:two_kernels} for all time.
\subsubsection{Coherent resetting}
\label{sec:res-cr}
Consider now that reset phases do not distribute uniformly but rather take on another distribution $P^\text{(res)}(\varphi)$. If this distribution is a wrapped Cauchy (which includes the uniform and the delta distribution) the setting can again be treated simply as a superposition of the OA modes (a.k.a. Poisson kernels)~\footnote{As shown in Ref.~\cite{cestnik_pikovsky_2022} any Kato-Jones distribution~\cite{kato-jones} (asymmetric generalization of the wrapped Cauchy) can still be considered as just one OA mode.}. However, the reset distribution can generally have a different form. Below we consider two cases.
Case (i): Partially coherent resetting.
Here the reset phases are distributed according to a single harmonic
density: $P^\text{(res)}(\varphi) = \frac{1}{2\pi} \left[1 + 2c\cos(\varphi-\varphi_0)\right]$, $c,\varphi_0\in\mathbb{R}$. The reset distribution therefore has only one non-zero moment: $Z_1^\text{(res)} = ce^{\mathrm{i}\varphi_0}$, and the full distribution after a portion $\varepsilon$ of phases are reset is described by: $Z_n = (1-\varepsilon) R^n + \varepsilon \delta_{n,1} c e^{\mathrm{i}\varphi_0}$.
If using variant A initial conditions, the variables initialize as $Q(0) = s(0) = 0$, $y(0) = 1$ and $\mathcal{M}(k) = (1-\varepsilon)\frac{Rk}{1-Rk}+\varepsilon c e^{\mathrm{i}\varphi_0}k$ according to Eq.~\eqref{eq:M_A}. Alternatively if choosing variant B, then $Q(0) = R$, $y(0) = 1$, $s(0) = 0$ and $\mathcal{M}(k) = -\varepsilon\left[ \frac{Rk}{1+Rk}+ \frac{ce^{\mathrm{i}\varphi_0} k}{(1+Rk)^2}\right]$ according to Eq.~\eqref{eq:M_B}. Then the system is evolved by integrating Eqs.~\eqref{eq:zy_eqs}.
One could also treat this setting as one OA mode plus one general contribution described by the full Eqs.~\eqref{eq:zy_eqs}; we provide such a description in Appendix~\ref{sec:ap}.
Case (ii): Fully coherent resetting. Here the reset phases take the same value $\varphi_0$. The new order parameters are thus
$Z_n=(1-\varepsilon)R^n+\varepsilon e^{\mathrm{i} n \varphi_0}$. Again both variants of initializing the variables after the reset are possible. In variant A one sets $Q(0)=s(0)=0$, $y(0)=1$ and $\mathcal{M}(k)=(1-\varepsilon)\frac{Rk}{1-Rk}+\varepsilon
\frac{e^{\mathrm{i} \varphi_0}k}{1- e^{\mathrm{i} \varphi_0}k}$ is determined via Eq.~\eqref{eq:M_A}, while following variant B, we can set
$Q(0)=R$, $y(0)=1$, $s(0)=0$ and $\mathcal{M}(k) = \varepsilon \frac{(e^{\mathrm{i} \varphi_0}-R)k}{1-(e^{\mathrm{i} \varphi_0}-R)k}$ is determined via Eq.~\eqref{eq:M_B}.
As mentioned before, since the delta distribution is a special case of the wrapped Cauchy, this case could also be described by just two OA modes~\eqref{eq:two_kernels}.
The expressions above can be readily extended to a setup where several randomly chosen subpopulations
of oscillators are reset with different distributions. Such an approach has been discussed in the context of application
to de-synchronization of neurons for Parkinson patients~\cite{Tass-99}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.99\columnwidth]{large_resetting.pdf}
\caption{Switching domains for large resetting in the bistable system~\eqref{eq:bistable}. Both the synchronous and asynchronous regimes are stable. Starting from the synchronous regime on the OA manifold, we reset an $\varepsilon$ portion of phases according to three example distributions: uniform (red domain), case (i) -- single harmonic density with amplitude $c=0.5$ (blue domain), and case (ii) -- delta distribution (green domain). The shaded regions mark areas that induce a switch to asynchrony. }
\label{fig:cr}
\end{figure}
\subsubsection{Numerical example for large resettings}
As an example we consider a simple Kuramoto-type system with a synchrony-asynchrony
bistability~\cite{Pikovsky-Rosenblum-09}. In this setup $\omega=\text{const}$ (and one can, without loss of generality, set
this parameter to zero), and the forcing is
\begin{equation}
h=Z_1 \exp[\mathrm{i}\theta_0+\mathrm{i}\theta_1 |Z_1|^2]\;.
\label{eq:bistable}
\end{equation}
We set $\gamma=0.1$, $\theta_0=0.8\pi$
and $\theta_1=4$. For these parameters the states with $Z_1=0$ and $|Z_1|\approx 0.948$ are both stable.
We start with the latter state of a nearly synchronized ensemble, and apply the three types of resetting as described above. By solving the reduced six-dimensional equations~\eqref{eq:zy_eqs}, we obtain the domain of parameters $\varepsilon,\varphi_0$ for which the resettings lead to a transition to the asynchronous state $Z_1=0$.
For the random resetting, there is no dependence on $\varphi_0$, and the corresponding domain is $\varepsilon>0.196$ (above the red line in Fig.~\ref{fig:cr}). For the coherent resetting, we consider the two cases discussed above, (i) and (ii), and
their corresponding basins are depicted in Fig.~\ref{fig:cr} with blue and green domains, respectively. One can see that in all
three cases, a finite
perturbation is needed to suppress synchrony. For the coherent resettings, there is an optimal combination
of $\varepsilon$ and $\varphi_0$: the coherent subpopulation should be phase shifted by approximately $\pi$ relative to the phase
of the mean field of the non-reset units. For case (ii) we also see that if $\varepsilon$ is too large, the reset units form a new cluster and the synchrony remains.
\section{Conclusion}
\label{sec:concl}
First, we summarize the approach and findings of this paper. Our starting point is an infinite
system of equations for the circular moments (order parameters). These equations contain damping due to
either Cauchy white noise, or due to a Cauchy distribution of natural frequencies.
By virtue of several transformations, which
are formulated in terms of generating functions, we reduce this system to three complex equations. Additionally, a complex-valued function of one variable is defined, which remains constant during the evolution.
The order parameters at each moment of time are represented through this function
and three complex dynamical variables.
The original set of equations for the order parameters has the same form in two situations:
if the phase oscillators are subject to a Cauchy white noise, and if the natural frequencies
are Cauchy distributed (but time-independent). Only in the former case is there a simple one-to-one correspondence between the order parameters and the distribution of the phases. In the latter situation,
one can calculate the order parameters from the distribution of the phases (under the assumption of analyticity of the density in the upper complex half-plane of frequencies), but it appears impossible to
reconstruct this distribution from the order parameters without further assumptions. Therefore, the results of the paper are fully
applicable to noisy ensembles, but some approaches (e.g., phase resetting) are not suitable for oscillators with distributed frequencies.
The theory includes both the WS description (noise-free identical oscillators) and the OA manifold (on which
the dynamical variable $y$ vanishes). In the framework of our approach,
one can simply demonstrate that the dynamical variable $y(t)$ tends to zero,
which corresponds to the weak stability of the OA manifold discussed in the literature. Therefore, our approach
is an essential improvement compared to OA theory, if a transient evolution from an initial state
outside of the OA manifold is important. In particular, it allows for a calculation of the basins
of different attractors lying on the OA manifold.
In this paper we operated with the phase equations. In some cases it is convenient to transform
the phase equations to other variables (e.g., theta-neurons, whose equations belong to class
\eqref{eq:phase_system_noise}~\cite{Luke-Barreto-So-13,Laing-14}, can be transformed into so-called quadratic integrate-and-fire neurons
\cite{laing_2015,montbrio_pazo_roxin_2015,bick_goodfellow_laing_martens_2020}). An extension of the theory
to quadratic integrate-and-fire neurons is work in progress.
\acknowledgments
We thank L. Smirnov and R. Toenjes for useful discussions. The work was supported by DFG (grant No. PI 220/21-1).
\section{Amalgamation diagrams}
Given $M_j$ and $M_k$ we intend to look at algebras of the form $M_j
\free{D} M_k$ where $D$ is a copy of $\mathbb{C}^n$ embedded into
the two algebras as a diagonal subalgebra of the matrix algebras. Of
course there are many different ways to do this embedding. We
introduce some notation to describe how $\mathbb{C}^n$ embeds into
$M_j$.
We use the diagram \begin{align*} & M_j: \boxed{j_1} \boxed{j_2}
\boxed{j_3} \cdots \boxed{j_n} \boxed{0}
\end{align*} to describe the embedding \[ \begin{bmatrix} \lambda_1 \\ \lambda_2
\\ \vdots \\ \lambda_n \end{bmatrix} \mapsto \begin{bmatrix} \lambda_1 I_{j_1} & 0 &
0 & \cdots & 0 \\ 0 & \lambda_2I_{j_2} & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots &
\lambda_n I_{j_n} & 0 \\ 0 & 0 & \cdots & 0 & 0
\end{bmatrix} \in \begin{bmatrix} M_{j_1} & 0 & 0 & \cdots & 0 \\ 0 & M_{j_2} & 0
& \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots
\\ 0 & \cdots & 0 & M_{j_n} & 0 \\ 0 & \cdots & 0 & 0 & 0
\end{bmatrix} \subseteq M_j. \] Here $I_{\alpha}$ is the $\alpha \times \alpha$
identity matrix in $M_{\alpha}$. If there is a box containing a zero
then we call such a box a {\em zero-box}. Further, in our notation
there is at most one zero box.
Notice that the embedding diagram tells us:
\begin{enumerate} \item the value of $n$,
\item whether the embedding is non-unital, indicated by the presence
of a zero-box. \end{enumerate} For the purposes of our results it
is safe to assume, through the use of elementary row operations,
that any zero-box is listed last.
Now when looking at the free product of two matrix algebras with
amalgamation over $\mathbb{C}^n$ it is clear that just writing $M_j
\free{\mathbb{C}^n} M_k$ will be unsuitable because it is not clear
how we are embedding $\mathbb{C}^n$ into the two matrix algebras. To
see the amalgamation we will use pairs of embedding diagrams. We
will call a pair of embedding diagrams an {\em amalgamation diagram}
since they represent the amalgamating subalgebra in a free product.
We will present two examples to illustrate how this will work.
\begin{example}\label{m2on} We start with an example from W.\ Paschke
\cite[Example 3.3]{Brownexact}. There it is noted that with suitable
amalgamation $M_{j+1} \free{\mathbb{C}^2} M_2$ is isomorphic to
$M_{j+1} \otimes \mathcal{O}_j$; in our indexing this says that
$M_j \free{\mathbb{C}^2} M_2$ is isomorphic to $M_j \otimes
\mathcal{O}_{j-1}$. The amalgamation can be described using the
amalgamation diagram \begin{align*} & M_j:\boxed{1} \boxed{j-1} \\
& M_2:\boxed{1} \boxed{1}. \end{align*} Here the $M_j$-row represents
$\mathbb{C}^2$ as the subalgebra of $M_j$ given by
\[ \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0
\\ \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 &
\lambda_2 \end{bmatrix}. \] The $M_2$-row represents the usual
embedding of $\mathbb{C}^2$ as the diagonal subalgebra of $M_2$.
Notice that for both embeddings $\mathbb{C}^2$ is a unital
subalgebra of the associated algebra.
\end{example}
\begin{example}\label{mntensora} The next example is from
\cite[Chapter 6]{Loring}. There it is shown that for unital $A$ and
appropriate choice of embedding we have that $ M_j \free{\mathbb{C}}
A $ is isomorphic to $M_j(A)$. For our notation we will look at the
specific case of $A = M_k$ and
amalgamation diagram \begin{align*} & M_j:\boxed{1} \boxed{0} \\
& M_k:\boxed{k} . \end{align*} Here the scalar multiples of the
identity in $M_k$ are matched up with the $1\times 1$ entry in
$M_j$. \end{example}
\begin{example}\label{Mk} Finally we have the following example which,
while not of the form described above will allow us to make some
computations later. The algebra \[ A:= \begin{bmatrix} M_k & 0 \\
0 & \mathbb{C} \end{bmatrix} \free{\mathbb{C}^{k+1}} \begin{bmatrix}
\mathbb{C}^{k-1} & 0 \\ 0 & M_2 \end{bmatrix} \] is isomorphic to $
M_{k+1}$, where $\mathbb{C}^{k+1}$ is the canonical inclusion as
diagonal matrices. Certainly there is an onto $*$-representation
$\pi: A \rightarrow
M_{k+1}$ induced by the inclusions \begin{align*} \iota_{k,1}: \begin{bmatrix} M_k & 0 \\
0 & \mathbb{C} \end{bmatrix} & \subseteq M_{k+1} \\ \iota_{k-1,2}:
\begin{bmatrix} \mathbb{C}^{k-1} & 0 \\ 0 & M_2 \end{bmatrix}
& \subseteq M_{k+1}. \end{align*} We need only show that $M_{k+1}$
satisfies the requisite universal property. So let $B$ be a
$C^*$-algebra and assume that we have $*$-representations $\pi_1:
\begin{bmatrix} M_k & 0 \\ 0 & \mathbb{C} \end{bmatrix}
\rightarrow B$ and $ \pi_2: \begin{bmatrix} \mathbb{C}^{k-1} & 0 \\
0 & M_2 \end{bmatrix} \rightarrow B$ with $\pi_1|_D = \pi_2|_D$ for
the subalgebra of diagonal matrices $D$. Then for the elementary
matrices $e_{i,j} \in M_{k+1}$ define
\[\pi(e_{i,j}) = \begin{cases} \pi_1(e_{i,j}) & 1 \leq i, j < k+1 \\
\pi_2(e_{k+1,k+1}) = \pi_1(e_{k+1,k+1}) & i = j = k+1 \\
\pi_1(e_{i,k})\pi_2(e_{k,k+1}) & 1 \leq i < k+1, j = k+1 \\
\pi_2(e_{k+1,k})\pi_1(e_{k,j}) & 1 \leq j < k+1, i = k+1
\end{cases}.\] Notice that in the second case, since $\pi_1|_D =
\pi_2|_D$, the definition is unambiguous. For
general matrices we extend using linearity. We need only show that
$\pi$ induces a $*$-representation on $M_{k+1}$. To verify this we
notice first that $\pi$ is linear by construction. Next, to show
that $\pi(A^*) = \pi(A)^*$ we only need show, by linearity, that $
\pi({e_{i,j}}^*) = \pi(e_{i,j})^*$ for all $i, j$. This is trivial
if $1 \leq i,j \leq k$ or $ i=j=k+1$ since $\pi_1$ and $\pi_2$ are
$*$-representations. So assume that $i < k+1$ and consider
\begin{align*} \pi(e_{i,k+1})^* & = (\pi_1(e_{i,k})
\pi_2(e_{k,k+1}))^* \\ & = \pi_2(e_{k,k+1})^*\pi_1(e_{i,k})^* \\
&= \pi_2(e_{k+1,k})\pi_1(e_{k,i}) \\ &= \pi(e_{k+1,i}) =
\pi({e_{i,k+1}}^*). \end{align*} The third equality follows since
$\pi_1$ and $\pi_2$ are $*$-representations. The case where $j <
k+1$ and $i=k+1$ is similar.
We next need to show that $\pi$ is multiplicative. We will consider
products of the form $e_{i,m} = e_{i,j}e_{j,m}$. Again this follows
using cases. If $1 \leq i,j,m < k+1$, or $i=j=m=k+1$ then
$\pi(e_{i,m}) = \pi_1(e_{i,m})= \pi_1(e_{i,j} e_{j,m}) =
\pi_1(e_{i,j}) \pi_1(e_{j,m}) = \pi(e_{i,j})\pi(e_{j,m})$. There are
six remaining cases, we will do two of them, the remainder will
follow in a similar fashion.
Assume that $ j=k+1$ and $1 \leq i, m<k+1$, then
\begin{align*} \pi(e_{i,j}) \pi(e_{j,m}) & = \pi_1(e_{i,k}) \pi_2(e_{k,k+1})
\pi_2(e_{k+1,k})\pi_1(e_{k,m}) \\ &= \pi_1(e_{i,k})
\pi_2(e_{k,k})\pi_1(e_{k,m}) \\ &= \pi_1(e_{i,k})
\pi_1(e_{k,k})\pi_1(e_{k,m}) \\ &= \pi_1 (e_{i,k}e_{k,k}e_{k,m}) =
\pi_1(e_{i,m}) = \pi(e_{i,m}). \end{align*} Notice that in the second
equality we used that $\pi_2$ is a homomorphism, in the third we
used that $\pi_1|_D = \pi_2|_D$, and in the fourth we used
the fact that $\pi_1$ is a homomorphism.
Next consider the case that $i = k+1, m=k+1$ and $1 \leq j < k+1$
and compute \begin{align*} \pi(e_{i,j})\pi(e_{j,m}) &=
\pi_2(e_{k+1,k}) \pi_1(e_{k,j}) \pi_1(e_{j,k}) \pi_2(e_{k,k+1}) \\
&= \pi_2(e_{k+1,k}) \pi_1(e_{k,k}) \pi_2(e_{k,k+1}) \\ &=
\pi_2(e_{k+1,k}) \pi_2(e_{k,k}) \pi_2(e_{k,k+1}) \\ &=
\pi_2(e_{k+1,k}e_{k,k}e_{k,k+1})\\ & = \pi_2(e_{k+1,k+1}) =
\pi(e_{i,j}e_{j,m}). \end{align*}
Similar calculations finish the remaining cases and then applying
linearity completes the proof that $M_{k+1}$ has the requisite
universal property and hence is isomorphic to $A$.
\end{example}
The following will be useful in analyzing exactness and nuclearity
for free products.
\begin{theorem}\label{inclusions} If $D$ is a $C^*$-subalgebra of $A_1$ and $A_2$ then
there exists a canonical onto $*$-representation $ \pi: A_1
\free{}A_2 \rightarrow A_1 \free{D}A_2$. If, in addition, $C$ is a
$C^*$-algebra with $D \subseteq C \subseteq A_i$ for each $i = 1,2$
then there is a canonical onto $*$-representation $\sigma: A_1
\free{D}A_2 \rightarrow A_1 \free{C} A_2$.
\end{theorem}
\begin{proof} Let $\iota_i: A_i \rightarrow A_1\free{D}A_2 $ be the
canonical inclusion (i.e. $A_i \subseteq A_1 \free{D}A_2$). Then by
the universal property of $A_1 \free{}A_2$ there exists a canonical
$*$-representation $\iota_1 \free{} \iota_2: A_1 \free{}A_2
\rightarrow A_1 \free{D} A_2$. This map is onto since a generating
set for $A_1 \free{D} A_2$ is contained in the image of $\iota_1
\free{} \iota_2$.
Next let $\beta_i: A_i \rightarrow A_1 \free{C} A_2$ be the
canonical inclusion. Notice that $ \beta_1(d) = \beta_2(d)$ for all
$d \in D$ since $D \subseteq C$ and hence there is an induced
$*$-representation $\beta_1 \free{D} \beta_2: A_1 \free{D} A_2
\rightarrow A_1 \free{C} A_2$ which is onto for the same reason as
the previous map. \end{proof}
The following is immediate since both nuclearity and exactness pass
to quotients.
\begin{corollary}\label{subalgebra} If $A \free{} B$ is nuclear so is $A \free{D} B$
for any $C^*$-algebra $D$ with $D \subseteq A$ and $D \subseteq B$.
If $A \free{D} B$ is not exact then neither $A \free{} B$ nor $A
\free{C} B$ is exact for any subalgebra $C$ with $D \subseteq C
\subseteq A$ and $ D \subseteq C \subseteq B$.
\end{corollary}
Finally we have one more well-known example which will provide a
standard non-exact $C^*$-algebra for our results.
\begin{example}\label{ctct} The algebra $C(\mathbb{T}) \free{\mathbb{C}}
C(\mathbb{T})$ is isomorphic to the non-exact $C^*$-algebra
$C^*(\mathbb{Z}) \free{\mathbb{C}} C^*(\mathbb{Z}) = C^*(\mathbb{Z}
\free{} \mathbb{Z}) = C^*(F_2)$ (see \cite{Wassermann} for a proof
that the latter is not exact).
\end{example}
\section{Algebras of the form $M_j \free{D} M_k$}
We have already seen two examples of these type of algebras, both of
which were nuclear. The general case will be more complicated and
will depend on the nature of $D$, and on the embedding diagrams for
$D \subseteq M_j$ and $D \subseteq M_k$.
\begin{proposition} The algebra $M_3 \free{\mathbb{C}^3} M_3$ is
isomorphic to $M_3 \otimes (C(\mathbb{T}) \free{\mathbb{C}}
C(\mathbb{T}))$ and hence is not exact. \end{proposition}
\begin{proof} We know from Example \ref{Mk} that $M_3 = (M_2 \oplus
\mathbb{C}) \free{\mathbb{C}^3} (\mathbb{C} \oplus M_2)$ and hence
$M_3 \free{\mathbb{C}^3} M_3$ can be rewritten as \[ \left(( M_2
\oplus \mathbb{C}) \free{\mathbb{C}^3} (\mathbb{C} \oplus M_2
)\right) \free{\mathbb{C}^3} \left( (M_2 \oplus \mathbb{C})
\free{\mathbb{C}^3} (\mathbb{C} \oplus M_2) \right). \] Of course
by rearranging we can rewrite this as \[ \left( (M_2 \oplus
\mathbb{C}) \free{\mathbb{C}^3} (M_2 \oplus \mathbb{C}) \right)
\free{\mathbb{C}^3} \left( (\mathbb{C} \oplus M_2)
\free{\mathbb{C}^3} (\mathbb{C} \oplus M_2) \right)\] which by
Example \ref{m2on} is isomorphic to $\left(M_2(C(\mathbb{T})) \oplus
\mathbb{C}\right) \free{\mathbb{C}^3} \left(\mathbb{C} \oplus
M_2(C(\mathbb{T})) \right)$. The latter algebra has a canonical
representation onto a generating set for the algebra $ M_3 \otimes
(C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T}))$ via the inclusion
maps. It is a simple matter to see that the algebra $ M_3 \otimes
(C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T}))$ satisfies the
universal property for \[\left(M_2(C(\mathbb{T})) \oplus \mathbb{C}
\right) \free{\mathbb{C}^3} \left(\mathbb{C} \oplus
M_2(C(\mathbb{T}))\right). \] Lack of exactness now follows using
Example \ref{ctct}. \end{proof}
\begin{proposition}\label{mjmk} If $D$ is a unital diagonal subalgebra of $M_j$
and $M_k$ such that $\dim{D} \geq 3$, then $M_j \free{D} M_k$ is not
exact. \end{proposition}
\begin{proof} First notice that there is a $3$-dimensional
subalgebra of $D$, call it $E$ and denote by $e$ the identity of
$E$. Then set $A = \{x \in M_j: xe = ex = x \}$ and $ B = \{ x \in
M_k: xe=ex=x \}$. It is routine to verify that $ A \cong M_3$ and $B
\cong M_3$. Then applying \cite[Proposition
2.4]{Armstrong-Dykema-Exel-Li:2003} with the canonical conditional
expectations given by projections onto the appropriate subalgebras
$M_3$ we have that $ A \free{E} B \subset M_j \free{D} M_k$ and
hence $M_j \free{D} M_k$ is not exact.
\end{proof}
We let $m_i$ denote the minimum value in the $i$th row of the
amalgamation diagram for $M_j \free{D} M_k$. Define the
{\em minimum value of the diagram} to be the sum of the $m_i$ as $i$
ranges over the rows of the amalgamation diagram.
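To illustrate the definition, consider the amalgamation diagram of Example \ref{m2on}: the row minima are $m_1 = \min\{1, j-1\} = 1$ (the $M_j$-row) and $m_2 = \min\{1,1\} = 1$ (the $M_2$-row), so the minimum value of that diagram is $1+1 = 2$.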
\begin{proposition}\label{submjmk} Let $D$ be a unital diagonal subalgebra of $M_j$
and $M_k$. If the minimum value of the amalgamation diagram for $M_j
\free{D} M_k$ is greater than or equal to $3$, then $M_j \free{D}
M_k$ is not exact.
\end{proposition}
\begin{proof} By hypothesis there exist diagonal subalgebras
$E \subseteq M_j$ and $E \subseteq M_k$ such that the dimension of
$E$ is greater than or equal to $3$. As in the previous proposition
we let $e$ denote the identity in $E$ and set $ A = \{x \in M_j: xe
= ex = x \}$ and set $B = \{x \in M_k: xe = ex = x \}$. Notice that
$ A \cong M_{\dim E} \cong B$. The result will follow by applying
\cite[Proposition 2.4]{Armstrong-Dykema-Exel-Li:2003} with the
canonical conditional expectations and noting that $ A \free{E} B
\subseteq M_j \free{E} M_k$.\end{proof}
\begin{theorem}\label{graph} Let $D$ be a unital diagonal subalgebra of $M_j$ and
$M_k$ such that the minimum value of the amalgamation diagram for
$M_j \free{D} M_k$ is $2$. If $\dim D = 2$ then the algebra is
nuclear. \end{theorem}
\begin{proof} We will show that such an algebra is a directed graph
$C^*$-algebra and hence is nuclear. Let $G$ be the directed graph
with two vertices $\{ v_1, v_2 \}$ and $(j-1)+(k-1)$ edges $\{ e_1,
e_2, \cdots, e_{j-1}, f_1, f_2, \cdots, f_{k-1} \}$ with $r(e_i) =
v_1$, $s(e_i) = v_2$ and $ r(f_i) = v_2, s(f_i) = v_1$. We claim
that $C^*(G)$ is isomorphic to $M_j \free{D} M_k$. Notice that
$e_{i+1,1} \in M_j$ and $e_{j,k} \in M_k$ form a collection of
partial isometries which form a Cuntz-Krieger family for the graph
$G$. Further notice that this Cuntz-Krieger family generates the
algebra $M_j \free{D} M_k$. By \cite[Proposition
1.21]{Raeburn:2006} there is a $*$-representation $\pi: C^*(G)
\rightarrow M_j \free{D} M_k$. Notice further that the directed
graph thus constructed is cofinal and every cycle has an entry hence
$C^*(G)$ is simple by \cite[Proposition 4.2]{Raeburn:2006}. It
follows that $\pi$ is one-to-one and hence the free product algebra
is isomorphic to $C^*(G)$ and is nuclear.\end{proof}
Notice that the minimum value of an amalgamation diagram for $M_j
\free{D} M_k$ is never equal to $1$. For unital amalgamations of
finite dimensional algebras we have one case remaining.
\begin{proposition} The algebra $M_2 \free{\mathbb{C}} M_2$ is not exact.
\end{proposition}
\begin{proof} Define $\pi_1: M_2 \rightarrow M_2(C(\mathbb{T})
\free{\mathbb{C}} C(\mathbb{T}))$ by \[ \pi_1\left( \begin{bmatrix} a & b \\
c & d
\end{bmatrix}\right) = \begin{bmatrix} a & bz_1 \\ c \overline{z_1} & d
\end{bmatrix} \] where $z_1$ is the usual generator for
$C(\mathbb{T})$ in the first copy of $C(\mathbb{T}) \subseteq
C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T})$. A routine
calculation shows that $\pi_1$ is a $*$-representation.
Next define $\pi_2: M_2 \rightarrow M_2(C(\mathbb{T})
\free{\mathbb{C}} C(\mathbb{T}))$ by \[ \pi_2\left( \begin{bmatrix} a & b \\
c & d
\end{bmatrix}\right) = \begin{bmatrix} \frac{a + d - c\overline{z_2} + b
z_2}{2} & \frac{a-d-c\overline{z_2}+bz_2}{2} \\
\frac{a-d+c\overline{z_2} - bz_2}{2} & \frac{
a+d+c\overline{z_2}+bz_2}{2} \end{bmatrix} \] where $z_2$ is the
usual generator for $C(\mathbb{T})$ in the second copy of
$C(\mathbb{T}) \subseteq C(\mathbb{T}) \free{\mathbb{C}}
C(\mathbb{T})$. Again, a routine calculation shows that $\pi_2$ is a
$*$-representation.
Now $\pi_1\left(\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}\right)
= \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} = \pi_2\left(
\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}\right)$ and hence there is a
$*$-representation $\pi_1 \free{} \pi_2: M_2 \free{\mathbb{C}} M_2
\rightarrow M_2( C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T}))$.
Notice that \[ \begin{bmatrix} z_1 & 0 \\ 0 & 0 \end{bmatrix} =
\pi_1\left(\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\right)
\pi_2\left( \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\right)
\] and \[
\begin{bmatrix} z_2 & 0 \\ 0 & 0 \end{bmatrix} =
\pi_1\left(\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\right)
\pi_2\left(
\begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix}\right) \pi_1\left( \begin{bmatrix}
1 & 0 \\ 0 & 0 \end{bmatrix}\right) \] and hence the non-exact
subalgebra $
\begin{bmatrix} C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T}) & 0 \\
0 & 0 \end{bmatrix}$ is contained as a subalgebra in the image of
$\pi_1 \free{}\pi_2$. It follows that $ M_2 \free{\mathbb{C}} M_2$
can not be exact. \end{proof}
It is not hard to see that, in the previous proof, the mapping
$\pi_1 \free{} \pi_2$ is not one-to-one. This follows since $
\mathbb{C}^2 \free{\mathbb{C}} \mathbb{C}^2$ is a subalgebra of $
M_2 \free{\mathbb{C}} M_2$, but the image of $ \mathbb{C}^2
\free{\mathbb{C}} \mathbb{C}^2$ under the mapping $\pi_1 \free{}
\pi_2$ is finite dimensional. However it is well known, see
\cite[Example IV.1.4.2]{Blackadar} that $ \mathbb{C}^2
\free{\mathbb{C}} \mathbb{C}^2$ is isomorphic to
\[ \left\{ \begin{bmatrix} f_{1,1}(t) & f_{1,2}(t) \\ f_{2,1}(t) &
f_{2,2}(t) \end{bmatrix}: f_{i,j} \in C([0,1]), f_{1,2}(0) =
f_{2,1}(0) = f_{1,2}(1) = f_{2,1}(1) = 0 \right\}. \]
\section{Algebras of the form $M_j \free{D} M_k \free{D} M_l$}
We first note that $M_j \free{D} M_k \free{D} M_l = M_j \free{D} M_l
\free{D} M_k$ and hence if any two of $j,k$, or $l$ give rise to
amalgamation diagrams with minimum value greater than or equal to
$3$, then the algebra $M_j \free{D} M_k \free{D} M_l$ is not exact.
\begin{theorem} The algebra $M_2 \free{\mathbb{C}^2} M_2 \free{
\mathbb{C}^2} M_2$ is isomorphic to $M_2(C(\mathbb{T})
\free{\mathbb{C}} C(\mathbb{T}))$ and hence is not exact.
\end{theorem}
\begin{proof} By Example \ref{m2on}, the algebra $M_2
\free{\mathbb{C}^2} M_2$ is isomorphic to $M_2 \otimes
C(\mathbb{T})$. Further there is a canonical $*$-isomorphism \[
\pi: (M_2 \free{\mathbb{C}^2} M_2) \free{M_2} (M_2
\free{\mathbb{C}^2} M_2) \rightarrow M_2 \free{\mathbb{C}^2} M_2
\free{\mathbb{C}^2} M_2.\] Now assume that $\pi_i: C(\mathbb{T})
\otimes M_2 \rightarrow A$ are unital $*$-representations satisfying
$ \pi_1 (1 \otimes d) = \pi_2(1 \otimes d) $ for all $ d \in M_2$.
For $ a \in C(\mathbb{T})$ we define $ \sigma_i (a) = \pi_i(a
\otimes 1)$. Then there exists $\sigma_1 \free{} \sigma_2 :
C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T}) \rightarrow A$.
Further, if we set $\sigma: M_2 \rightarrow A$ by $\sigma(d) =
\sigma_1(1 \otimes d)$ then we know that $ \sigma(d) \sigma_1(a) =
\sigma_1(a) \sigma(d)$ and $\sigma(d) \sigma_2(a) = \sigma_2(a)
\sigma(d)$ for all $a \in C(\mathbb{T})$ and $ d \in M_2$ and hence
$ \sigma_1 \free{}\sigma_2 (x) \sigma(d) = \sigma(d) \sigma_1
\free{}\sigma_2 (x)$ for all $x \in C(\mathbb{T}) \free{\mathbb{C}}
C(\mathbb{T})$ and $d \in D$. It follows by the universal property
of the tensor product that there exists $\tau: (C(\mathbb{T})
\free{\mathbb{C}} C(\mathbb{T})) \otimes M_2 \rightarrow A$ extending
the canonical inclusions of $C(\mathbb{T}) \otimes M_2$ into
$(C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T})) \otimes M_2$. Hence
$(M_2 \free{\mathbb{C}^2} M_2) \free{M_2} (M_2 \free{\mathbb{C}^2}
M_2)$ is isomorphic to $(C(\mathbb{T}) \free{\mathbb{C}}
C(\mathbb{T})) \otimes M_2$ which is not exact since it contains a
copy of $C(\mathbb{T}) \free{\mathbb{C}} C(\mathbb{T})$. \end{proof}
\begin{corollary} Let $D$ be a diagonal subalgebra of $M_2$, then $M_2 \free{D} M_2
\free{D} M_2$ is not exact. \end{corollary}
\begin{proposition} Let $D$ be a unital diagonal subalgebra of $M_j,
M_k$ and $M_l$. The algebra $M_j \free{D} M_k \free{D} M_l$ is not
exact. \end{proposition}
\begin{proof} If the dimension of $D$ is greater than or equal to
$3$, then by Proposition \ref{mjmk} the algebra cannot be exact, so
we look only at the case that $\dim D \leq 2$.
If $\dim D = 2$, then again we can assume without loss of generality
that, in the threefold amalgamation diagram for the free product,
any given row has at most one box not equal to $1$. Thus at least
one of $j,k,$ or $l$ must equal $2$. So without loss of generality
assume that we have $l = 2$ and we are in the case of $M_j
\free{\mathbb{C}^2} M_k \free{\mathbb{C}^2} M_2$. Now, as in
Proposition \ref{submjmk} we can see that there is a copy of $M_2
\free{\mathbb{C}^2} M_2$ inside $M_j \free{\mathbb{C}^2} M_k$, and
applying \cite{Armstrong-Dykema-Exel-Li:2003} again we have that $
M_2 \free{\mathbb{C}^2} M_2 \free{\mathbb{C}^2} M_2$ is a subalgebra
of $M_j \free{\mathbb{C}^2} M_k \free{\mathbb{C}^2} M_l$ and hence
the latter is not exact.
The case of $\dim D = 1$ now follows by Corollary \ref{subalgebra}.
\end{proof}
\section{Free products with no amalgamation and some nonunital amalgamations}
We know by applying Proposition \ref{mjmk} and Proposition
\ref{subalgebra} that the following is true.
\begin{proposition} The algebra $M_j \free{} M_k$ is not exact for
any $k,j \geq 2$. \end{proposition}
We now focus on the case in which the diagonal subalgebra $D$
contains the identity of $M_j$ but not that of $M_k$. In this case
the amalgamation diagram is
of the form \begin{align*} & M_j: \boxed{j_1} \boxed{j_2} \cdots \boxed{j_m} & \\
& M_k:\boxed{k_1} \boxed{k_2} \cdots \boxed{k_m} \boxed{0} \\
\end{align*} where $m$ is the dimension of $D$. We will write
$k(D)$ for the integer given by $ k - \sum_{i=1}^m k_i$.
\begin{theorem} Let $D$ be a unital diagonal subalgebra of $M_j$
and a diagonal subalgebra of $M_k$ which does not contain
the unit of $M_k$. Then the algebra $M_j \free{D} M_k$ is exact if
and only if $M_j \free{D} M_{k-k(D)}$ is exact in which case $M_j
\free{D} M_k$ is nuclear.
\end{theorem}
\begin{proof} Clearly, since $M_j \free{D} M_{k-k(D)}$ is a
subalgebra of $M_j \free{D} M_k$, if the former is not exact neither
is the latter. We will focus on the case in which $M_j \free{D}
M_{k-k(D)}$ is exact. This breaks down into two cases.
Case 1 ($\dim D = 1$): In this case, either $j = 1$ which is
trivial, or $k-k(D) = 1$ which puts us in the context of Example
\ref{mntensora}.
Case 2 ($\dim D = 2$): In this case the subalgebra $M_j \free{D}
M_{k-k(D)}$ is a directed graph algebra, see Theorem
\ref{graph}. The corresponding directed graph has two vertices $\{
v_1, v_2\}$ and $ j-1$ edges from $v_1$ to $v_2$ and $ k-k(D)-1$
edges from $v_2$ to $v_1$. Create a new graph $G$ by adding a
vertex $v_3$ and $k-k(D)$ edges from $v_2$ to $v_3$. We claim that
the algebra $C^*(G)$ is isomorphic to $M_j \free{D} M_k$ and hence
the algebra is nuclear.
Let $\{ E, P \}$ be the Cuntz-Krieger system given by the generators
for the graph $C^*$-algebra $M_j \free{D} M_{k-k(D)}$. Now look at
the associated Cuntz-Krieger system \[ \left\{ E \cup \{e_{j,k}: 1
\leq j \leq k-1\}, P \cup \left\{ \sum_{m= k-k(D)+1}^{k}e_{m,m}
\right\} \right\},\] where $e_{i,j} \in M_{k} \subset M_j \free{D}
M_{k}$. Notice that this new Cuntz-Krieger system generates $M_{j}
\free{D} M_{k}$ as a $C^*$-algebra and hence a standard result for
graph algebras, \cite[Proposition 1.21]{Raeburn:2006}, gives an onto
representation $\pi: C^*(G) \rightarrow M_j \free{D} M_{k}$. Now
since graph algebras are nuclear the algebra $M_j \free{D} M_k$ is
nuclear.\end{proof}
Finally we can make some progress on the general case. We know that
there is a canonical onto $*$-representation $\pi: M_3 \free{D} M_3
\rightarrow M_3 \free{\mathbb{C}^3} M_3$ for any diagonal subalgebra
$D$ and hence $M_3 \free{D} M_3$ is not exact for any diagonal
subalgebra $D$. We have also seen that $M_2 \free{} M_2$ and the
free product with unital amalgamation, $M_2 \free{\mathbb{C}} M_2$,
are not exact.
Now write the amalgamation diagram for $M_j \free{D} M_k$ as
\begin{align*} & M_j:\boxed{j_1} \boxed{j_2} \cdots \boxed{j_m} \boxed{0} \\
& M_k:\boxed{k_1} \boxed{k_2} \cdots \boxed{k_m} \boxed{0}.
\end{align*}
\begin{proposition} Let $D$ be a non-unital diagonal subalgebra of $M_j$ and
$M_k$. If $\dim D \geq 2$ then $M_j \free{D} M_k$ is not exact. If
either $ j-\sum j_i$ or $k-\sum k_i$ is greater than or equal to $2$
then $M_j \free{D} M_k$ is not exact. \end{proposition}
\begin{proof} We deal first with $\dim D \geq 2$. Notice that
there will be an embedding of $M_3$ into $M_j$ and $M_k$ so that the
subalgebra will have amalgamation diagram \begin{align*} & M_3:
\boxed{1} \boxed{1} \boxed{0} \\ & M_3:\boxed{1} \boxed{1} \boxed{0}
\end{align*} which will have as a quotient the non-exact algebra
$M_3 \free{\mathbb{C}^3} M_3$ and hence $M_j \free{D} M_k$ will not
be exact.
For the other situation we notice that there will be a subalgebra of
the form $\mathbb{C} \free{} (\mathbb{C} \oplus \mathbb{C})$. This
non-unital $C^*$-algebra satisfies \[ (\mathbb{C} \free{}
(\mathbb{C} \oplus \mathbb{C}))^1 \cong \left(\mathbb{C} \oplus
\mathbb{C}\right) \free{\mathbb{C}} \left( \mathbb{C} \oplus
\mathbb{C} \oplus \mathbb{C}\right).\] The latter algebra is
isomorphic to $C^*(\mathbb{Z}_2) \free{\mathbb{C}} C^*(\mathbb{Z}_3)
\cong C^*(\mathbb{Z}_2 \free{} \mathbb{Z}_3)$ which contains a copy
of $C^*(\mathbb{Z}\free{}\mathbb{Z})$, which is not exact. It
follows that, since the unitization of $\mathbb{C} \free{}
(\mathbb{C} \oplus \mathbb{C})$ is not exact, the algebra itself is
not exact either, and hence $M_j \free{D} M_k$ is not exact.
\end{proof}
The only case that remains is the free product $M_2
\free{\mathbb{C}} M_k$ with amalgamation diagram \begin{align*} &
M_2:\boxed{1} \boxed{0} \\ & M_k:\boxed{k-1} \boxed{0}.
\end{align*} We do not, as yet, have a satisfactory answer for this
situation.
\bibliographystyle{plain}
\section*{Model}
\label{sec:models}
\subsection*{Approach}
Light propagating through a composite material is scattered when the
refractive indices of the component materials differ and is absorbed
when either of the materials has a refractive index with a non-zero
imaginary component. The scattered waves can then interfere with one
another. Furthermore, depending on the refractive indices and
nanostructure, light might scatter repeatedly before exiting the
material. Thus, modeling structural color requires knowing the complex
refractive indices of the materials, the nanostructure, and the
detection geometry.
There are many approaches to modeling the relation between scattering
and color. The most venerable is radiative transfer theory, and in
particular the Kubelka-Munk theory~\cite{kubelka_beitrag_1931}, which
has been used extensively for predicting colors in mixtures of
paints~\cite{klein_industrial_2010, diebold_application_2014}. However,
radiative transfer theory does not in general capture interference
effects characteristic of structurally colored materials. Numerical
methods such as finite-difference-time-domain and finite-element
techniques~\cite{yin_amorphous_2012, dong_structural_2010,
lo_structural_2014, cheng_structural_2015, galinski_scalable_2017,
xiao_bioinspired_2017, chandler_structural_2017} do capture such
effects but are computationally intensive and difficult to use in design
because they are not parameterized in terms of experimental properties.
Approaches with a more natural parameterization include
single-scattering models based on effective-medium
approximations~\cite{magkiriadou_absence_2014,
hwang_stephenson_effects_2019, maiwald_ewald_2018}. These models can
predict the wavelength of the reflection peak, but they do not account
for multiple scattering, which controls the color
saturation~\cite{hwang_stephenson_effects_2019}.
A more general approach is Monte Carlo simulation of photon
trajectories. In this approach, photon ``packets'' propagate through a
system, taking steps that are sampled from a step-size distribution and
scattering into directions sampled from a phase function
\cite{wang_mcml_1995, zolek_optimization_2006}. Monte Carlo methods have
been used to model multiple scattering in a variety of
systems~\cite{ding_influence_2016, dhawan_optical_2010,
vinckenbosch_monte_2015, zhu_review_2013}, but they generally do not
account for constructive interference. Furthermore, achieving
quantitative agreement with experimental data requires a careful choice
of the step-size distribution and phase function, as we shall show.
\begin{figure*}[htbp]
\centering
\includegraphics{fig2_model-compressed}
\caption{Overview of Monte Carlo model for angle-independent structural
color. (a) Cartoon of Monte Carlo method. We simulate the trajectories
of photon ``packets'' scattering and propagating in an effective
medium. Each packet first takes a step into the sample, where the step
size is sampled from an exponential probability distribution whose
mean is the scattering length. Part of the packet can be absorbed
during this step, as illustrated by the decrease in the width of the
orange arrow and as determined by the complex effective refractive
index. Then the packet scatters into a new propagation direction,
which is sampled from the phase function. Both the step-size
distribution and phase function are calculated using the form and
structure factors in the complex effective medium, and both depend on
the wavelength. This process repeats until the packet exits the film.
(b) Rendering of photon trajectories for a Monte Carlo simulation
obeying the above rules. After simulating thousands of trajectories at
different wavelengths, we calculate the reflectance spectrum by
counting the fraction of packets that are reflected at each
wavelength, as opposed to transmitted or absorbed. (c) Schematic
showing how absorption is implemented in the model. In an absorbing
system, the incident fields decay as they travel through the sample. The
arrows depict the incident field and the curves represent the
amplitude of the scattered fields. The dashed line is the surface of
the particle, where we integrate the differential scattering
cross-section. (d) Reflectance spectra calculated from the model,
including and excluding the contribution of absorption, compared to
experimental measurements of an 85-$\upmu$m film of 218-nm polystyrene
spheres in air. Gray regions show the uncertainty on the measurement
(see SI Appendix). (e) Diagram showing how roughness is modeled. The
fine roughness parameter is the fraction of light that encounters
roughness on the scale of a single particle upon incidence on the
film, and is between 0 and 1. Coarse roughness corresponds to a
tilted, though locally smooth, surface. The coarse roughness parameter
is the root-mean-squared slope of the
surface~\cite{van_ginneken_diffuse_1998}, and is 0 for a flat surface.
While there is no upper bound, a large slope means that light cannot
hit the sample; therefore, most systems have a coarse roughness
between 0 and 1.1. (f) Reflectance spectra including and excluding the
contribution of surface roughness, compared to experimental
measurements of an 85-$\upmu$m film of 218-nm polystyrene spheres in
air. Gray regions show the uncertainty on the measurement (see SI
Appendix).}
\label{fig:model}
\end{figure*}
Our multiple scattering model is based on the Monte Carlo method, but we
use a phase function that accounts for constructive interference and
wavelength-dependent absorption, which we describe in more detail below.
With this phase function and the step-size distribution, we simulate the
random-walk trajectories of photon packets as they propagate through the
material, as shown in Fig.~\ref{fig:model}a. We consider the material to
be a film containing a disordered arrangement of spherical particles or
voids inside a matrix material. This film and the detector are embedded
in a medium, which we assume to be air in all the calculations that
follow. The film is parameterized in terms of both material and
structural quantities, including those shown in
Fig.~\ref{fig:overview}e. We assume that each packet is incident
normally on the film or at an angle determined by the experimental
setup, and we calculate the reflection spectrum by counting
trajectories, as shown in Fig.~\ref{fig:model}b.
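To make the loop concrete, the following Python sketch (ours, for
illustration only; the released implementation is in
Ref.~\citenum{structcol}) traces one packet through a film. The
scattering length \texttt{lscat}, the absorption coefficient
\texttt{mu\_abs}, and the sampler \texttt{sample\_costheta} are
placeholders for the wavelength-dependent quantities computed from the
complex effective medium; the isotropic sampler and the numbers in the
example call are illustrative, not fitted values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_packet(thickness, lscat, mu_abs, sample_costheta, w_min=1e-4):
    """Trace one packet: 'R' reflected, 'T' transmitted, 'A' absorbed."""
    z, mu_z, weight = 0.0, 1.0, 1.0   # depth, z-direction cosine, weight
    while True:
        step = rng.exponential(lscat)      # step ~ Exp(mean = scattering length)
        weight *= np.exp(-mu_abs * step)   # Beer-Lambert absorption on the step
        z += mu_z * step
        if z < 0.0:
            return 'R'                     # exits the incidence surface
        if z > thickness:
            return 'T'                     # exits the far surface
        if weight < w_min:
            return 'A'                     # negligible weight: count as absorbed
        # scatter: polar angle from the phase function, azimuth uniform;
        # a full implementation tracks all three direction cosines
        cos_t, phi = sample_costheta(), rng.uniform(0.0, 2.0 * np.pi)
        sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
        sin_z = np.sqrt(max(0.0, 1.0 - mu_z ** 2))
        mu_z = mu_z * cos_t - sin_z * sin_t * np.cos(phi)

# reflectance at one wavelength = fraction of reflected packets
fates = [simulate_packet(85.0, 10.0, 1e-3, lambda: rng.uniform(-1.0, 1.0))
         for _ in range(2000)]
print(fates.count('R') / len(fates))
\end{verbatim}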
\subsection*{Technique}
Our model accounts for constructive interference through the phase
function. There are two contributions to this function: the form factor,
which describes the angle-dependence of scattering from individual
particles and can be calculated from Mie theory; and the structure
factor, which describes the constructive interference of waves scattered
by different particles and can be calculated from liquid-state
theory~\cite{magkiriadou_absence_2014}. Although we account for
constructive interference within each trajectory through the structure
factor, we do not model constructive interference among different
trajectories.
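In schematic form (our sketch; \texttt{form\_factor} and
\texttt{structure\_factor} stand in for the Mie and liquid-state
calculations described in the text, and \texttt{k\_eff} is the
wavevector in the effective medium), the phase function is the product
of the two factors evaluated at the scattering wavevector
$q = 2k\sin(\theta/2)$:
\begin{verbatim}
import numpy as np

def phase_function(theta, k_eff, form_factor, structure_factor):
    q = 2.0 * k_eff * np.sin(theta / 2.0)  # scattering wavevector magnitude
    # normalize over solid angle before using this as a sampling density
    return form_factor(theta) * structure_factor(q)
\end{verbatim}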
We assume that the scattering occurs in an effective medium determined
by the average properties of the material (Fig.~\ref{fig:model}a). Our
effective-medium theory, which is described in more detail in SI
Appendix, is based on the Bruggeman
approximation~\cite{markel_introduction_2016}, which can account for
complex refractive indices, though not for near-field effects (see
Discussion).
To account for wavelength-dependent absorption, we use a modification of
Mie theory that accounts for absorption in both the particles and
matrix. In an absorbing system, the scattered fields are absorbed as
they propagate away from the scatterer, such that the differential
scattering cross-section of a particle decreases with
distance~\cite{bohren_absorption_2004,fu_mie_2001,mundy_mie_1974}. This
consideration applies not only to systems with absorption in the matrix,
but also to those with only absorbing particles, because in both cases
the imaginary index (and hence the absorption coefficient) of the
effective medium is non-zero. Therefore, when the particle or matrix has
a complex refractive index, we obtain the total scattering cross-section
by integrating the differential scattering cross-section at the surface
of the scatterer~\cite{sudiarta_mie_scattering_2001,
frisvad_computing_2007}. We then account for absorption of the photon
packets traveling through the effective medium with an exponential decay
function based on the Beer-Lambert law. Lastly, we correct for the
variation in the amplitude of the incident field as a function of
position on the sphere~\cite{sudiarta_mie_scattering_2001}
(Fig.~\ref{fig:model}c). For more details on the model, see SI Appendix.
Modeling absorption leads to better agreement between the predicted and
measured reflection spectrum (Fig.~\ref{fig:model}d). The small amount
of absorption in polystyrene particles, for example, changes the
predicted reflectance spectrum from that of a sample without absorption,
especially at short wavelengths.
We also account for surface scattering, which can arise from the
roughness inherent to most experimental samples. We model this roughness
at two different scales: coarse and fine. Coarse roughness is large
compared to the wavelength, such that incident light encounters a
locally smooth surface that is angled with respect to the incident
direction. We model coarse roughness by accounting for the refraction of
light when it encounters the boundary of the film
(Fig.~\ref{fig:model}e). Fine roughness arises from wavelength-scale
features such as particles protruding from the surface. To model fine
roughness, we sample the initial step size of a trajectory from the
scattering properties of a single nanoparticle, ignoring the
contribution of the structure factor.
For many of the samples we examine, such as those dispersed in a liquid,
we cannot easily measure the roughness parameters. Indeed, as we note in
Discussion, the roughness parameters can be viewed more generally as
correcting for the failure of the effective-medium approximation at the
boundary of the sample. Therefore we determine these parameters by
fitting them to measurements. When we do this, we find that including
coarse and fine roughness brings the model into quantitative agreement
with experiment (Fig.~\ref{fig:model}f). Although the parameters are
fitted, they are constant with wavelength. Therefore the agreement
between the fitted model and the data as a function of wavelength shows
that our model for roughness captures a physical effect of the sample
boundary.
In the validation experiments that follow, we do not fit each
measurement individually; instead, because we expect the roughness
values to largely depend on the sample assembly method, we fit the
values to all samples fabricated with the same technique.
\section*{Results}
\label{sec:results}
\subsection*{Model validation}
\begin{figure*}[tbhp]
\centering
\includegraphics{fig3_validations}
\caption{Experiments validate the Monte Carlo model. Measured (solid
lines) and predicted (dotted lines) reflectance spectra for disordered
packings of polystyrene particles as a function of (a) particle
radius, (b) absorption, and (c) film thickness. Gray regions show
uncertainties on the measurements (see SI Appendix). Insets are color
swatches calculated from the experimental and predicted spectra using
the CIELAB colorspace. The model parameters are as follows: (a)
Polystyrene spheres with radii of 94, 109, 138 nm in a matrix of air,
with volume fractions of 0.52, 0.52, 0.56, and thicknesses of 119, 85,
and 77 $\upmu$m. The fine roughnesses are 0.5, and the coarse
roughnesses are 0.9 for all samples. (b) 101-nm-radius polystyrene
particles in water with carbon black at concentrations of 0.03, 0.055,
and 0.1\% by weight in water. The particle volume fractions are 0.415,
0.406, and 0.386, and the thicknesses are 96, 71, and 84 $\upmu$m. The
fine roughness is 0.28 and the coarse roughness is 0.2 for all three
samples. (c) Films of 138-nm-radius polystyrene particles at
thicknesses of 3930, 77, 13, and 6 $\upmu$m. The corresponding volume
fractions are 0.5, 0.56, 0.58, 0.58. The fine roughnesses are 1 for
the 3930-$\upmu$m film and 0.5 for all others, and the coarse
roughnesses are 0.9. The thickness of the 3930-$\upmu$m film is chosen
to be much larger than the maximum transport length, 47 $\upmu$m, to
ensure strong multiple scattering. The thickness of the 6-$\upmu$m
film is chosen to be smaller than the minimum transport length, 8
$\upmu$m, to minimize multiple scattering.}
\label{fig:validation_results}
\end{figure*}
We validate the model by comparing the predicted and measured
reflectance spectra for samples with different physical parameters (see
SI Appendix for a description of the sample fabrication and
characterization). In each of the simulations, we take the average of
20,000 trajectories at each wavelength. For such a large number of
trajectories, the Monte Carlo uncertainty in the predicted reflection
spectrum is much smaller (standard deviation 0.4\%) than the uncertainty
of the experimental spectrum, which is determined by taking measurements
from different parts of the same sample.
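The quoted precision is what one expects for a binomial proportion:
with $N$ independent trajectories, the standard error of the reflected
fraction is at most $\sqrt{0.25/N}$, about $0.35\%$ for $N=20{,}000$,
consistent with the $0.4\%$ above. A one-line check (ours):
\begin{verbatim}
import math
print(f"{math.sqrt(0.25 / 20_000):.2%}")   # 0.35%: worst case at p = 0.5
\end{verbatim}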
We first examine the effect of particle radius. We calculate
reflectance spectra for packings of polystyrene particles in air for
three different polystyrene radii: 94, 109, and 138 nm. As shown in
Fig.~\ref{fig:validation_results}a, the model accurately captures the
redshift of the reflectance peak with increasing particle size while
also reproducing the rise in scattering toward small wavelengths.
The predicted spectra quantitatively match the data in both the location
of the reflectance peak and the reflectance magnitude across the visible
range with only small deviations. The model also captures the broadening
and averaging of the peak when two particle radii are mixed, which
validates our implementation of polydispersity (Fig.~S1a). As a result,
the colors predicted by the model visually match the color renderings
calculated from the measured reflectance.
Having shown previously that a small amount of absorption can alter the
reflection spectrum (Fig.~\ref{fig:model}d), we must now further confirm
that our model accurately captures the effects of absorption in
experimental samples. We make concentrated samples of polystyrene
spheres in water, and we tune the amount of absorption by adding varying
amounts of carbon black. To model these samples, we assume a matrix with
a real refractive index of water and an imaginary index corresponding to
the concentration of carbon black (see SI Appendix). Thus we neglect any
scattering from the carbon black particles, which is a reasonable
approximation, given that the carbon black particles are approximately
10 nm, much smaller than the wavelength. We again find that the model
accurately predicts the reflectance and color of samples with varying
amounts of absorption (Fig.~\ref{fig:validation_results}b).
In addition, we explore the validity of the model over a range of film
thicknesses. In the thickest sample, the thickness is much larger than
its transport length, which is the length scale at which the direction
of light is randomized. In the thinnest, the thickness is smaller than
its transport length at all wavelengths. The model agrees with
experiment when the thickness is large, but starts to deviate from
experimental data in thin samples and at large wavelengths
(Fig.~\ref{fig:validation_results}c). The discrepancy likely arises
because for very thin samples, the distinction we make in our model
between surface scattering and bulk scattering starts to break down.
However, most structurally colored samples are not as thin as the
thinnest sample we show here. Furthermore, the predicted colors in all
samples are similar to those of the experimental samples, despite the
deviations in the predicted reflection spectrum for thin samples.
In SI Appendix, we further validate the model on bidisperse
samples (Fig.~S1a) and samples with varying volume fraction (Fig.~S1b).
For the volume-fraction experiments, we prepare samples of polystyrene
spheres in water, in which case the volume fraction can be varied by
changing the particle concentration.
\begin{figure*}[htbp]
\centering
\includegraphics{Fig4_limits}
\caption{The range of designable colors depends on the chosen materials.
Each of the two grids is a representation of the color gamut in a
four-dimensional parameter space spanned by the particle radius,
matrix imaginary index (a proxy for carbon black concentration),
particle volume fraction, and sample thickness. Each rectangle in the
grid is a color swatch calculated from a reflection spectrum predicted
by the Monte Carlo model. The rectangles are organized into 5$\times$5
subgrids. Particle radius increases with subgrid from left to right,
and matrix imaginary index increases with subgrid from top to bottom.
Within each subgrid, volume fraction increases from left to right and
thickness from bottom to top. (a) Color gamut for polystyrene
particles in a matrix of air. We use the same roughness parameters as
for the film of 276-nm particles in
Fig.~\ref{fig:validation_results}a: coarse roughness of 0.9 and fine
roughness of 0.5. (b) Color gamut for polystyrene particles in a
matrix of water. We use the same roughness parameters as for the
polystyrene-in-water films in Fig.~\ref{fig:validation_results}b:
coarse roughness of 0.2 and fine roughness of 0.28. In both gamuts the
imaginary refractive index for polystyrene is fixed at $2\times
10^{-5}i$ and the polydispersity index of the polystyrene particles is
0.03.}
\label{fig:gamuts}
\end{figure*}
\subsection*{Finding the limits of the design space}
With a validated model, we can calculate the limits of the design
space---that is, the range of structural colors that can be made for a
given set of materials or other constraints. As an example, we calculate
a gamut for packings of polystyrene particles in air with added carbon
black, with varying particle radius, volume fraction, sample thickness,
and carbon black concentration (Fig.~\ref{fig:gamuts}a).
To describe how the colors change as a function of these parameters, we
use terminology from color science: hue, chroma (or perceived
vividness), and luminance. Each of these can be calculated by
transforming the computed reflectance spectra used to generate
Fig.~\ref{fig:gamuts}a to the CIELUV perceptual colorspace. We find that
small particle radii give rise to colors in the blue and green, as
expected, but red hues remain inaccessible, in agreement with the
results of Schertel and coworkers~\cite{schertel_structural_2019}. We
also find that increasing the volume fraction can significantly increase
the chroma and blue-shift the hue while decreasing the luminance.
Increasing the thickness does not affect the hue. Instead, it slightly
increases the chroma and luminance at small imaginary indices but not at
the largest imaginary indices, where the absorption length becomes
comparable to or smaller than the sample
thickness~\cite{hwang_stephenson_effects_2019}. When we replace the air
matrix with water, we find that increasing the radii leads to a range of
browns instead of pinks and purples (Fig.~\ref{fig:gamuts}b), because
the lower index contrast between polystyrene and water leads to flatter
and broader reflectance peaks. Increasing the volume fraction
blue-shifts the hue and increases the chroma. The thickness does not
affect the hue or chroma, but only increases the luminance when the
absorption is low, as in the polystyrene-in-air system. In both systems,
increasing absorption only decreases the luminance and does not change
the hue or chroma.
To demonstrate the predictive power of the model, we make three colors
from these gamuts (outlined swatches in Fig.~\ref{fig:gamuts}). The
colors are chosen from across the visible spectrum. We make a green
sample with polystyrene particles in air, and a blue and a light brown
sample with polystyrene particles in water. We make samples with
parameters as close as possible to the values used in the simulations,
and we find that the target and the achieved colors agree well, with
some small deviations at large wavelengths
(Fig.~\ref{fig:particle_design}).
\begin{figure}[htbp]
\centering
\includegraphics{fig5_particle_design_panels}
\caption{Designing colors from the gamut. Each plot shows the
reflectance spectra of the target color (dotted line) and of the color
that is achieved (solid line) in a sample made using the model
parameters for the target. To the right of each plot is a colormap
showing the CIE chromaticity coordinates of the target (circles) and achieved
(crosses) colors. The target colors are chosen from the color gamuts
in Fig.~\ref{fig:gamuts}. The parameters are as follows. (a) Blue
target: radius 82 nm, volume fraction 0.42, thickness 130 $\upmu$m,
and matrix imaginary index 0.0003$i$, corresponding to 0.08\% by
weight of carbon black. (b) Green target: radius 110 nm, volume
fraction 0.4975, thickness 40 $\upmu$m, and matrix imaginary index
0.0017$i$, corresponding to a carbon black concentration of 0.42\% by
weight. (c) Brown target: radius 112 nm, volume fraction 0.35,
thickness 110 $\upmu$m, and matrix imaginary index 0.000055$i$,
corresponding to 0.016\% by weight of carbon black. The uncertainties
in the achieved spectra are shown in gray and represent two standard
deviations about the mean of measurements from 11 (blue spectrum), 19
(green) and 12 (brown) locations on the sample.}
\label{fig:particle_design}
\end{figure}
\subsection*{Finding the parameters to design a target color}
In addition to targeting colors in the gamut, we can also target a
particular color in a colorspace. We use the perceptual colorspace
defined by the CIELAB coordinates~\cite{cieluv2} because applications
such as cosmetics or coatings are aimed at the human eye. Using such an
approach increases the number of available designs because we can
exploit the eye's insensitivity to variations in certain parts of the
spectrum.
To implement this approach, we choose a target color in CIELAB
coordinates, then use Bayesian
optimization~\cite{fernando_bayesianoptimization_2019} to find the model
parameters that minimize the sum of squared differences between the
target CIELAB coordinates and those corresponding to the modeled
reflection spectrum. We call the optimal solution the ``best fit'' to
the target.
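In outline, the search looks as follows. This is a sketch of ours, not
the code used for the results: \texttt{predict\_reflectance} and
\texttt{spectrum\_to\_lab} are hypothetical stand-ins for the Monte
Carlo model and the reflectance-to-CIELAB conversion, and we assume the
interface of the package of
Ref.~\citenum{fernando_bayesianoptimization_2019}. The bounds echo the
experimentally achievable ranges discussed below; the paper restricts
the radius to a discrete set, whereas here a continuous range with
post-hoc snapping is used for simplicity.
\begin{verbatim}
from bayes_opt import BayesianOptimization

TARGET_LAB = (55.0, -5.0, -30.0)        # example target (L*, a*, b*)

def neg_color_distance(radius_nm, volume_fraction, thickness_um, matrix_k):
    # predict_reflectance / spectrum_to_lab are placeholders for the
    # Monte Carlo model and the reflectance-to-CIELAB conversion
    spectrum = predict_reflectance(radius_nm, volume_fraction,
                                   thickness_um, matrix_k)
    L, a, b = spectrum_to_lab(spectrum)
    return -sum((x - t) ** 2 for x, t in zip((L, a, b), TARGET_LAB))

optimizer = BayesianOptimization(
    f=neg_color_distance,
    pbounds={"radius_nm": (74, 138),      # snap to available radii afterwards
             "volume_fraction": (0.35, 0.6),
             "thickness_um": (20, 150),
             "matrix_k": (0.0, 0.005)},
    random_state=1,
)
optimizer.maximize(init_points=10, n_iter=50)
best_fit = optimizer.max["params"]
\end{verbatim}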
We choose the color of the mountain bluebird as our target
(Fig.~\ref{fig:mountain_bluebird_design}a), because the feathers show an
angle-independent structural blue
(Fig.~\ref{fig:mountain_bluebird_design}b). This color arises from a
porous internal structure (Fig.~\ref{fig:mountain_bluebird_design}c)
that likely evolved to meet constraints other than color, including
(perhaps) minimizing weight and maximizing insulating ability.
We impose a different set of constraints. Because the bluebird's
``inverse'' structure of pores inside a solid matrix is not as easy to
fabricate as a ``direct'' structure of solid spheres in air or water, we
design the color using a direct structure instead. Furthermore, we
constrain the materials to those we have on hand: polystyrene spheres
and a matrix of either air or water. We use Bayesian optimization to
determine the optimal particle radius, volume fraction, film thickness,
and concentration of carbon black. To ensure that the optimal values can
be experimentally achieved, we set ranges for these
parameters: the particle radius is 74, 101, 110, 112.5, or 138 nm; the
thickness is between 20 and 150 $\upmu$m; and the range for the matrix
imaginary index is between 0 and 0.005$i$. We use the same values of
roughness as in the samples in Fig.~\ref{fig:validation_results}a: fine
roughness of 0.5 and coarse roughness of 0.9.
\begin{figure*}[htbp]
\centering
\includegraphics{mountain_bluebird_design-compressed}
\caption{Targeting a specific color in the colorspace. (a) Photograph of
male mountain bluebird (Specimen MCZ:Orn:190556. \textit{Sialia
currucoides}. North America: United States: Montana: Meagher.
Martinsdale. Robert S. Willians). Circle
denotes area of reflectance measurement for the target color. (b)
Photograph of a feather from the back of the bird. (c) SEM of a
cross-section of the feather's internal structure, obtained after
focused-ion beam milling. Image credits for (a-c): Museum of
Comparative Zoology, Harvard University, \copyright President and
Fellows of Harvard College. (d) CIELAB values and color renderings of
the target, best-fit, and achieved colors. The parameters of the
best-fit solution that satisfies the constraints are as follows:
101-nm-radius polystyrene spheres at a volume fraction of 0.54 in an
air matrix, 50 $\upmu$m film thickness, 0.9\% by weight of carbon
black, fine roughness of 0.5, and coarse roughness of 0.9. (e) CIE
chromaticity color map comparing the target (circle), best-fit
(cross), and achieved colors (triangle, uncertainty shown in gray).
(f) Reflectance spectra of bird feather (solid line), model fit
(dotted line), and polystyrene-in-air sample (dashed line). Gray
regions for the achieved reflectance represent two standard deviations
about the mean of measurements at 3 locations on the sample. Note that
we do not try to match the model and target spectra; instead, the
optimization is performed in the $L^{*}, a^{*}, b^{*}$
space.}
\label{fig:mountain_bluebird_design}
\end{figure*}
When we minimize the difference between the target color and the color
obtained from the model, we find a good match in CIELAB space
(Fig.~\ref{fig:mountain_bluebird_design}d, e). Note that we match the
color and not the reflectance spectrum, because matching the spectrum
may not be possible for the given materials. Indeed, the spectrum of the
best-fit solution has a narrower peak than that of the target, with the
target having a larger reflectance at wavelengths less than 450 nm
(Fig.~\ref{fig:mountain_bluebird_design}f). Because the eye is
insensitive to such short wavelengths, the best-fit solution need not
duplicate this feature to match the color in the CIELAB space. We find
that the best-fit solution has a CIE76 color difference of 3.9 from the
target color, which is close to the just-noticeable difference (JND) of
2.3~\cite{sharma_digital_2002}.
We make a film with parameters as close as possible to those of the best
fit. We find that both the resulting spectrum and color are close to
those of the best fit, as shown in
Fig.~\ref{fig:mountain_bluebird_design}d-f. The CIE76 color difference
between the achieved and target colors is 5.1, larger than the
difference between target and best-fit, but still less than twice the
JND. The difference between the best-fit and achieved colors may come
from the values of the roughness parameters we use in the model.
Although we use the same preparation technique and thus the same fine
and coarse roughness values as for the polystyrene films from
Fig.~\ref{fig:validation_results}a, the true roughness values of the
sample might differ from these values. Nonetheless, the agreement
between the achieved and target colors shows that one can design the
color of the feather without mimicking its structure, instead using a
system that satisfies a different set of constraints.
\section*{Discussion}
Having shown that our model can be used to design colors, we now examine
its limitations and why it works despite these limitations. The central
approximation is that of the effective medium: we assume that between
scattering events, light propagates through a homogeneous medium with an
effective refractive index. This effective index, which we calculate
using the Bruggeman weighted average~\cite{markel_introduction_2016},
underlies all of the calculations. For example, the index difference
that determines the phase function is the difference between the
effective index and the index of the spheres. An alternative
effective-index theory, called the energy coherent potential
approximation (ECPA), includes corrections for near-field effects in a
dense packing of spherical scatterers~\cite{busch_transport_1996,
*busch_transport_1995}. Schertel and coworkers used the ECPA in
concert with the diffusion approximation to predict structural
colors~\cite{schertel_structural_2019}.
Their model, like ours, can predict reflectance spectra, but it is not
suitable for our purposes for two reasons. First, the diffusion
approximation is valid only when light scatters many times before it
exits the sample, whereas in many structurally colored samples, the
thickness is chosen to minimize the amount of multiple
scattering~\cite{hwang_stephenson_effects_2019}. Second, the ECPA is
valid only for real dielectric permittivities, and therefore does not
account for absorption. Schertel and coworkers compensate for this
limitation by approximating absorption as a cutoff of the sample
thickness. Our approach uses the Bruggeman effective medium
approximation, which has the disadvantage that it does not account for
near-field effects, but the advantage that it can account for complex
refractive indices. Therefore it can directly handle absorption and its
dependence on wavelength.
To understand why our approach correctly predicts spectra despite the
absence of near-field corrections, we calculate the scattering strength
of polystyrene particles in air. The scattering strength is the ratio of
the wavelength to the transport length, where the transport length is
calculated using both the form and structure factors and the Bruggeman
effective index. We calculate the scattering strength as a function of
the ratio of radius to wavelength. We find that the peaks in scattering
strength match experimental measurements by Aubry and
colleagues~\cite{aubry_resonant_2017} up to a radius-to-wavelength
ratio of roughly 0.5 (Fig.~S2), which covers the range used in our
study. Near-field effects can be neglected in this range because the
experimental transport length is at least four times larger than the
wavelength.
Because our model does not account for near-field effects, we do not
expect it to work at high scattering strengths, or when the transport
length is comparable to the wavelength. Furthermore, the discrepancies
at small and large thicknesses (Fig.~\ref{fig:validation_results}c)
suggest that our model works best in the regime of weak multiple
scattering, where the film thickness is on a similar order of magnitude
as the transport length. When multiple scattering is stronger,
interference between multiply scattered photon trajectories, which our
model does not account for, might become important.
However, these situations may not be relevant to structurally colored
materials. Color saturation requires that multiple scattering be weak
and the transport length be comparable to the thickness of the material.
This is the fundamental reason why the model works so well. Furthermore,
we take advantage of the limited color capacity of the human visual
system---especially at long wavelengths---because our eyes detect colors
based on three receptors rather than on full reflectance spectra.
Therefore, our model can be used to design perceptual colors even when
the reflection spectrum cannot be matched to the target spectrum, as we
have shown.
We have also shown that it is necessary to model surface roughness to
achieve quantitative agreement between model and experiment. When we
introduced the roughness parameters, we argued that they account for the
topography of the samples. But more generally, the roughness parameters
account for the breakdown of the effective-medium approximation at the
surface of the sample. The breakdown occurs not only because of
topography, but because Mie theory does not accurately describe the
initial interaction of light with the film. Mie theory is derived for a
particle embedded in the same (effective) medium on all sides, whereas
particles at the surface have other particles on one side and a
homogeneous material on the other. Furthermore, the effective-medium
approximation we use includes the effects of the structure factor, which
is not well defined at the boundary of the sample. The fine and coarse
roughness parameters compensate for all of these effects, and therefore
topographical measurement techniques such as atomic force microscopy may
not give the appropriate values for these parameters.
Nevertheless, the model is still predictive even though the roughness
parameters must be fitted to experimental data. Indeed, as we have
shown, the parameters need not be fitted to measurements for each
individual sample; instead, one can use the same values for all samples
that are made with the same assembly technique. To improve the
predictive accuracy, one can use an iterative design approach: first,
make an initial guess for the roughness and find the model
parameters that best fit a target color; second, make the sample using
the best-fit parameters and fit the model to the data to improve the
estimates of the roughness parameters; third, use the improved estimates
to find parameters that give a better fit of the model to the target
(Fig.~\ref{fig:overview}d).
\begin{figure}[htbp]
\centering
\includegraphics{Fig7_loosening_constraints}
\caption{The model allows us to explore the limits of colors that can be
achieved with different configurations of a set of materials. All
plots assume an imaginary refractive index for polystyrene of $2\times
10^{-5}i$, volume fraction of 0.64, thickness of 20 $\upmu$m, coarse
roughness of 0.9, and fine roughness of 0.5. (a) Color gamut for
polystyrene (PS) particles in a matrix of air as a function of
particle radius. (b) Reflectance spectra for colors denoted by the
black arrows in (a). (c) Transport length as a function of wavelength
for the purple system in (b) and the pink system in (d). (d)
Reflectance spectra for three colors chosen from the gamut for
particles with air cores and PS shells in a matrix of water. (e) Color
gamut for the core-shell system as a function of core and shell
radius. Circles denote the samples whose reflectance spectra are shown
in (d). (f) Reflectance spectra for three colors from the gamut of a
core-shell system with absorption added to the matrix. (g) Color gamut
for core-shell system as a function of core radius, shell radius, and
matrix imaginary index. Circles denote the samples whose reflectance
spectra are shown in (f).}
\label{fig:loosening_constraints}
\end{figure}
The power of our model lies in providing a physical understanding of how
the experimental parameters change the color. This insight enables a
rational design approach for the nanostructure. Consider a case when the
materials are prescribed---for example, polystyrene in air or
water---but the structure can be varied---for example, by making
composite particles. This situation arises in many applications: the
constituent materials must meet certain requirements (regulatory or
other), but the spatial arrangement of these materials may be
unconstrained. Because there are infinite possible arrangements that
differ from solid spherical particles in a matrix, finding the optimal
arrangement for a target color is a very difficult design problem. We
can, however, use the physical intuition provided by the model to choose
a nanostructure that produces a particular color.
As an example, we consider making colors that are outside the gamut of a
system of polystyrene particles in air, yet use the same materials.
Solid polystyrene spheres in air tend to have low saturation or chroma,
particularly in the red, as shown in the gamut of
Fig.~\ref{fig:loosening_constraints}a. The low saturation comes from
scattering at short wavelengths, as shown in the purple spectrum in
Fig.~\ref{fig:loosening_constraints}b. The short-wavelength scattering
comes from the large scattering cross-section of polystyrene particles
in the blue~\cite{hwang_stephenson_effects_2019}. The model shows that
this large cross-section gives rise to multiple scattering. The
propensity for multiple scattering can be described by the transport
length, which is small at short wavelengths
(Fig.~\ref{fig:loosening_constraints}c).
To decrease this scattering, we design an alternative arrangement of the
materials. First, we invert the particles into air cores with
polystyrene shells~\cite{magkiriadou_absence_2014} to reduce the
scattering cross-section in the blue. Second, we place the core-shell
particles in a matrix of water to decrease the index contrast between
the shell and the matrix (Fig.~\ref{fig:loosening_constraints}d).
Because the resulting colors are still desaturated
(Fig.~\ref{fig:loosening_constraints}e), we suppress multiple scattering
by adding an absorber to the water
(Fig.~\ref{fig:loosening_constraints}f). We use the model to determine
what absorber concentrations lead to optimal saturation. The resulting
gamut shows colors and saturations that are different from those of
polystyrene particles in air---in particular, we now see orange and
brown hues that arise due to the decreased scattering at short
wavelengths (Fig.~\ref{fig:loosening_constraints}g).
From this example, we see that loosening the restrictions on the
arrangement of the materials not only increases the size of the design
space but also makes it possible to access new colors. The physical intuition
provided by the model is critical for exploring this larger design
space.
\section*{Conclusion}
\label{sec:conclusion}
Engineering materials with prescribed structural colors requires a way
to predict the color from the nanostructure. Doing so efficiently
requires accurate predictions, so as to minimize iteration between
experiment and simulation. We have demonstrated a model of multiple
scattering in disordered packings of spheres that produces accurate
predictions. The model is parameterized in terms of experimental
quantities such as the volume fraction and material optical properties.
As such, it can be used to design structurally colored materials that
meet specific constraints, making it particularly useful for
applications in which only certain materials can be used. The predictive
power is also important for applications such as paints and coatings,
where the color might change substantially as the suspending liquid
dries and the refractive-index contrast increases.
Compared to finite-difference time-domain and similar methods, our model
addresses a smaller range of nanostructures---those involving disordered
arrangements of spheres---but has two principal advantages: it is
parameterized in terms of experimental quantities, and the results can
be interpreted in terms of collective and resonant scattering effects.
The interpretability of the results can be used to rationally design
variant nanostructures with wider color gamuts, as we have shown.
Furthermore, the restriction to spherical pores or voids may not be a
significant limitation. A feature of angle-independent structural color
is that it does not require a complex nanostructure or fabrication
scheme; instead, it can be produced simply by rapidly consolidating
inexpensive spherical nanoparticles. We anticipate that many
applications will take advantage of this feature.
We have validated the model in the regime of weak multiple scattering,
that is, when the transport length is several times the wavelength, and
the sample thickness is on the order of the transport length. Arguably
most applications of structural color lie within this regime, including
paints, coatings, and sensors. Materials that strongly multiply scatter
light look white, whereas materials that have little multiple scattering
look translucent. Other models---for example, the diffusion
approximation in the case of strong multiple scattering and
single-scattering theory in the case of very weak multiple
scattering---may yield better predictions in these two regimes.
Our model can be extended to more complex geometries and other
illumination conditions. Here we have focused on films of spheres in a
matrix, but the boundary conditions can be changed to model so-called
``photonic balls''~\cite{yi2003generation, moon2004electrospray,
vogel2015color} or even films of photonic balls. Also, because the
model accounts for dispersion and wavelength-dependent absorption, it
can be used to design materials with infrared or ultraviolet reflection
peaks, so long as the transport length remains large compared to the
wavelength.
Finally, the model can be used to predict the angle-dependence of the
color. Although the term ``angle-independent'' is used to describe the
color, in reality there is a weak variation of color with the angle
between the source and detector. This variation arises because
the constructive interference condition is not completely independent of the
scattering wavevector~\cite{magkiriadou_absence_2014}. In the results
shown above, we have simulated the diffuse reflectance spectrum, which
can be used to determine how the sample would look in ambient light. But
one can also simulate the reflection spectrum as a function of the
incident and detected angles. This approach could help determine the
fundamental physical limits of angle-dependence, chroma, and hue---as
well as the tradeoffs between them---in structurally colored materials.
\matmethods{
Materials and methods can be found in SI Appendix.
\subsection*{Code Availability}
The source code of the model can be found in Ref.~\citenum{structcol}.
}
\showmatmethods{}
\acknow{We thank Rupa Darji, Keith Task, Jerome Fung, Melissa Franklin,
Mark Schroeder, Bernhard von Vacano, and Rupert Konradi for helpful
discussions. We also thank Jin-Gyu Park for providing the polystyrene
nanoparticles used in this work. This work is supported by BASF
Corporation and the BASF Northeast Research Alliance; by the Harvard
MRSEC under National Science Foundation (NSF) grant no.\ DMR-2011754;
and by the NSF Graduate Research Fellowship under grant no.\
DGE-1144152. It was performed in part at the Harvard Center for
Nanoscale Systems, supported by NSF grant no.\ 1541959. We thank
Jeremiah Trimble and Kate Eldridge for assistance with the bird
feathers and specimen, which were borrowed from the Ornithology
Department of the Museum of Comparative Zoology, Harvard University.}
\showacknow{}
\section{Introduction}
A palindromic number, or numeral palindrome, is a number that remains the same when its digits are reversed; 16461, for example, is ``symmetrical''. We study arithmetic progressions (APs) and geometric progressions (GPs) whose terms are all palindromes.
In both cases we start with an initial term which is a palindrome and, supposing that the entire sequence consists of only palindromic numbers, arrive at a contradiction; this shows that such a sequence of palindromes eventually terminates. Hence in both cases we give a bound for the maximum number of terms that the given progression can possibly contain. We are concerned only with decimal representations. Throughout we assume that $a$ and $r$, the first term and common ratio of the GP respectively, are coprime to each of $2$, $3$, $5$ and $11$.
\section{Arithmetic Progression}
\subsection{Non-Existence of an Arithmetic Progression whose Every Term is a Palindrome}
Suppose there is an AP all of whose terms are palindromes, and let the first term and the common difference be $a$ and $d$ respectively. We will denote the general $n^{th}$ term by $T_{n}$. The central idea is to show that the gap between two consecutive palindromes can be arbitrarily large, so an infinite AP with a constant common difference is impossible.\\
We start by getting hold of an algorithm that produces the palindrome exactly succeeding a given one. We simply state the procedure here; details and explanations can be found in [2].
We begin with an example from which the procedure will be clear; the procedure depends on the number of digits of the palindrome.\\
Example:\\
Start with a $5$-digit (odd-length) palindrome, say $17371$. The middle digit $3$ acts as a pivot; replacing it by $4$ gives the smallest palindrome exceeding $17371$, namely $17471$. The reader may notice that, in general, for a palindrome with an odd number of digits whose pivot is not $9$, we keep all the digits of the first half (those before the pivot) the same
and increase the pivot digit by $1$; this ensures that the result is the
smallest palindrome exceeding the given one. The pivot-$9$ case has to be treated differently.
In case the given palindrome has pivot $9$, say for example $3459543$, we observe that the next palindrome has to be at least $3460000$ (because, to keep it smallest, the first two digits should remain $34$), and since it is a palindrome it is the reflection of its first half across the pivot. Increasing $3459543$ by the least possible amount is achieved by changing the $9$ to $0$, which produces a carry of $1$ into the neighbouring digit on each side of the pivot. So the palindrome right after $3459543$ is $3460643$. The same generalizes to any palindrome with an odd number of digits and pivot $9$. In this paper we do not need the algorithm for palindromes with an even number of digits; the reason will be discussed later. A Python sketch of this procedure follows.\\
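The sketch below is ours and purely illustrative. It assumes the input has at least three digits, and it returns the next palindrome with an \emph{odd} number of digits (even-length palindromes are skipped, as they are not needed here).
\begin{verbatim}
def next_odd_digit_palindrome(p):
    # p: a palindrome with an odd number of digits (at least three)
    s = str(p)
    assert len(s) % 2 == 1 and s == s[::-1]
    half, pivot = s[:len(s) // 2], s[len(s) // 2]
    if pivot != '9':
        # keep the first half, increase the pivot by one
        pivot = str(int(pivot) + 1)
    else:
        # pivot 9: it wraps to 0 and carries 1 into the first half
        half, pivot = str(int(half) + 1), '0'
    return int(half + pivot + half[::-1])

assert next_odd_digit_palindrome(17371) == 17471
assert next_odd_digit_palindrome(3459543) == 3460643
# gaps grow with the digit count: 10^2 at 5 digits, 10^3 at 7 digits
assert next_odd_digit_palindrome(10301) - 10301 == 10 ** 2
assert next_odd_digit_palindrome(1003001) - 1003001 == 10 ** 3
\end{verbatim}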
\begin{prop}
There exists no AP whose every term is a palindrome.
\end{prop}
\begin{proof} From the above discussion it can readily be seen that if $(a_{1}\ldots a_{n}a_{0}a_{n}\ldots a_{1})$ is a $(2n+1)$-digit palindrome with pivot $a_{0}$, then the palindrome next to it agrees with it in the first and last $n-1$ digits (in case $a_{0}$ were $9$, the digit $a_{n}$ increases by $1$). So the difference between these two is at least $10^{n}$. Hence\\
$\lim_{k\to\infty} (z_{k+1}-z_{k}) = \infty$, where $z_{k}$ and $z_{k+1}$ are the $k^{th}$ and $(k+1)^{th}$ palindromes with an odd number of digits.\\
Fact: Given an AP with first term $a$ and common difference $d$, there does not exist $M$ such that $T_{n}$ has an even number of digits for all $n\geq M$. This is because the blocks $[10^{2p}, 10^{2p+1})$ of integers with an odd number of digits have lengths $9\cdot 10^{2p}$, which grow unboundedly while $d$ stays constant; once $10^{2p} > d$, an AP with difference $d$ cannot step over such a block.\\
So the given AP is forced to take values with an odd number of digits infinitely often. Since $d$ remains constant, this contradicts the fact that the difference between two consecutive palindromes with an odd number of digits can be arbitrarily large.\\
\end{proof}
\subsection{Estimation of the largest AP with palindromes given its first and last terms}
Given the first and last terms $a$ and $l$ respectively, in order to attain the largest AP we need to choose the common difference $d$ suitably; in particular, we need a lower bound for $d$. Let ...
\section{Geometric Progression}
\subsection{Non-Existence of a Geometric Progression whose Every Term is a Palindrome}
Suppose we are given a geometric progression consisting only of palindromes, whose first term $a$ is a palindrome and whose common ratio $r$ is rational.\\
Claim: $r$ must be an integer.
\begin{proof}
Write $r = u/v$ in lowest terms, and suppose $v > 1$ with distinct prime factors $q_{1},\ldots,q_{m}$. Since $\gcd(u,v)=1$, the term $ar^{s} = au^{s}/v^{s}$ is an integer only when $v^{s}$ divides $a$; as the power of $r$ (and hence of $v^{s}$) goes on increasing, $ar^{s}$ does not remain an integer for $s \geq S$, for some $S \in \mathbb{N}$. So $r$ must be an integer.
\end{proof}
Assume the number of digits of $a$ is $L$ and the number of digits of $r$ is $R$. Introduce a parameter $\lambda \in \mathbb{N}$. We would like to know the smallest term of the GP which has at least $\lambda L$ digits. If $t_{k}$ denotes the $k^{th}$ term of this GP, we seek the smallest $k$ such that $t_{k} = ar^{k} \geq 10^{\lambda L}$, since an $L$-digit palindrome has size $\sim 10^{L}$\\
$\implies k \geq \dfrac{\lambda L}{\log a + \log r} \sim \dfrac{\lambda L}{L+ R -2}$, since $x$ has $1 + \floor*{\log{x}}$ digits\\
\\
\begin{prop}
Given $R > L$, there exists no GP whose every term is a palindrome.
\end{prop}
\begin{proof}
Let us denote by $\mathcal{P}_{L}$ the set of all $L$-digit palindromes, and set\\
$\mathcal{P}_{L}(q):= \{ n \in \mathcal{P}_{L} : n \equiv 0 \pmod{q} \}$.\\
Proposition 4.2 of [1] asserts that if $(q, g(g^{2} - 1)) = 1$, where $g$ is the base ($g=10$ in our case), then for $L \geq 10 + 2q^{2}\log q$ the following asymptotic formula holds:\\
\\
$\|\mathcal{P}_{L}(q)\| = \dfrac{\|\mathcal{P}_{L}\|}{q} + O\!\left(\dfrac{\|\mathcal{P}_{L}\|}{q} \exp\!\left(-\dfrac{L}{2q^{2}}\right)\right)$\\
This gives a nontrivial bound on $\|\mathcal{P}_{L}(q)\|$ without any restrictions on the size or the arithmetic structure of $q$.\\
So in our context\\
$\mathcal{P}_{\lambda L}(t_{k}):= \{ n \in \mathcal{P}_{\lambda L} : n \equiv 0 \pmod{ar^{k}} \}$.\\
Using the above result,\\
\\
$\|\mathcal{P}_{\lambda L}(t_{k})\| \leq \dfrac{C\cdot 10^{\frac{\lambda L}{2}}}{a r^{\frac{\lambda L}{L + R-2}}}$\\
\\
$\implies \|\mathcal{P}_{\lambda L}(t_{k})\| \leq \dfrac{C}{a} \cdot {\dfrac{10^{\frac{\lambda L}{2}}}{r^{\frac{\lambda L}{L + R-2}}}}$\\
\\
$\implies \|\mathcal{P}_{\lambda L}(t_{k})\| \leq \dfrac{C}{a} \left({\dfrac{10^{\frac{L}{2}}}{r^{\frac{L}{L + R-2}}}}\right)^{\lambda}$\\
\\
Since $R > L$, we have $R-1 > \dfrac{L+R-2}{2}$,\\
\\
and since $r$ has $R$ digits, $r \geq 10^{R-1} > 10^{\frac{L+R-2}{2}}$.\\
$\implies {\dfrac{10^{\frac{L}{2}}}{r^{\frac{L}{L + R-2}}}}$ ($= \alpha$, say) $< 1$ (as can be verified by taking $L^{th}$ roots)\\
\\
Now given any $\epsilon > 0$, since the parameter $\lambda$ can be arbitrarily large, $\exists \lambda_{0}$ such that $\dfrac{C}{a}\cdot \alpha^{\lambda} < \epsilon$ for all $\lambda \geq \lambda_{0}$.\\
\\
So\\
$\|\mathcal{P}_{\lambda_{0} L}(t_{k})\| \leq \epsilon$.\\
Contradiction.
\end{proof}
\subsection{There exists no GP whose every term is a palindrome}
\begin{proof}
Suppose we are given a GP whose first term $a$ is a palindrome and whose common ratio $r$ is an integer (as before, we assume $a$ and $r$ are coprime to each of $2,3,5,11$). We can choose an integer $B$ such that $r^{B}$ has more digits than $a$. We therefore look at the sub-progression\\
$a, ar^{B}, ar^{2B}, ar^{3B},\ldots$, which is a GP with first term $a$ and common ratio $r^{B}$ and hence obeys every condition of Proposition 2.\\
Hence this sub-progression cannot consist of only palindromes, and since its terms all lie in the original GP, the original GP cannot consist of only palindromes either; in particular, it cannot contain infinitely many palindromes.
\end{proof}
\subsection{Given the initial term and common ratio, the size of the largest GP whose every term is a palindrome}
\section*{Acknowledgement}
The authors thank Prof. Anirban Mukhopadhyay for raising the question investigated here, and are grateful to W. Banks, D. Hart and M. Sakata for Proposition 4.2, which is taken from their paper [1].
\section{Introduction}
\subsection{Background and statement of the result}
Write $Q_d$ for the $d$-dimensional Hamming cube (the graph whose
vertex set is $\{0,1\}^d$ and in which two vertices are joined by
an edge if they differ in exactly one coordinate).
Set
$$
{\cal F}=\{f \colon V(Q_d)\rightarrow {\bf Z} \colon
f(\underline{0})=0 \mbox{ and } u \sim v \Rightarrow
|f(u)-f(v)|=1\}.
$$
(That is, ${\cal F}$ is the set of graph homomorphisms from $Q_d$
to ${\bf Z}$, normalized to vanish at $\underline{0}$.)
In \cite{BenjaminiHaggstromMossel}, this set of functions is studied
from a probabilistic point of view, a motivating idea being that a
typical element of ${\cal F}$ should exhibit stronger concentration
behavior than an arbitrary element. Put uniform probability measure
on ${\cal F}$, and define the function $R$ on ${\cal F}$ by
$R(f)=\{f(v)\colon v\in V(Q_d)\}$ ($R$ is the {\bf range} of $f$).
In \cite{BenjaminiHaggstromMossel} the following conjecture is made
about the concentration of $|R|$:
\begin{conj}
For each $t>0$, ${\bf P}(|R|>td) \rightarrow 0$ as $d\rightarrow
\infty$.
\end{conj}
In \cite{Kahn}, something stronger is proved, and something
stronger still conjectured:
\begin{thm} \label{rangeconstant}
There is a constant $b$ such that ${\bf P}(|R|>b)=e^{-\Omega(d)}$.
\end{thm}
\begin{conj} \label{conj5isright}
${\bf P}(|R|>5)=e^{-\Omega(d)}$ and ${\bf P}(|R|=5)=\Omega(1)$.
\end{conj}
In this paper we prove Conjecture \ref{conj5isright} by
(asymptotically) counting the number of homomorphisms with various
ranges.
Specifically, if we set
$$
{\cal F}_i = \{f\in {\cal F}\colon |R(f)|=i\},
$$
we prove
\begin{thm} \label{ourresult}
\begin{eqnarray}
|{\cal F}| & = & (2e \pm e^{-\Omega(d)})2^{2^{d-1}} \nonumber \\
|{\cal F}_3| & = & (2 \pm e^{-\Omega(d)})2^{2^{d-1}} \nonumber \\
|{\cal F}_4| & = & (4\sqrt{e}-4 \pm e^{-\Omega(d)})2^{2^{d-1}} \nonumber \\
|{\cal F}_5| & = & (2e - 4\sqrt{e} +2 \pm e^{-\Omega(d)})2^{2^{d-1}}, \nonumber
\end{eqnarray}
\end{thm}
\noindent which gives Conjecture \ref{conj5isright}.
Setting
${\cal F}_{\leq 5} = \cup_{i \leq 5} {\cal F}_i$, we see that
Theorem \ref{ourresult} has the following weaker but more
elegantly formulated consequence:
\begin{cor} \label{nicecor}
$|{\cal F}| \sim |{\cal F}_{\leq 5}| \sim 2e2^{2^{d-1}}$.
\end{cor}
Corollary \ref{nicecor} makes sense: a little thought suggests
that a typical member of ${\cal F}$ should be constant on either
even or odd vertices of the cube, except for a small set of
``blemishes'' on which it takes values $2$ away from the
predominant value, and take just two values on
vertices of the other parity.
The problem under discussion is equivalent to the question of the
number of rank functions on the Boolean lattice $2^{[d]}$ (here
$[d]=\{1, \ldots, d\}$). A {\bf rank function} is an $f\colon
2^{[d]} \longrightarrow {\bf N}$ satisfying $f(\emptyset)=0$ and
$f(A) \leq f(A \cup x) \leq f(A)+1$ for all $A \in 2^{[d]}$ and $x
\in [d]$. An easy lower bound on the number of rank functions is
$2^{2^{d-1}}$ (consider those functions which take the value $k/2$
on each element of the $k$th level of the Boolean lattice for each
even $k$). Athanasiadis \cite{Athanasiadis} conjectured that the
total number of rank functions is $2^{2^{d-1}(1+o(1))}$. This
conjecture is proved in \cite{KahnLawrenz}, where it is further
conjectured that the number is in fact $O(2^{2^{d-1}})$. Theorem
\ref{ourresult} answers this conjecture in the affirmative; for, as
observed by Mossel (see \cite{Kahn}), there is a bijection from the
set of rank functions to ${\cal F}$: identifying a subset $A$ of
$[d]$ with a vertex of $Q_d$ in the natural way, the bijection is
given by $g \longrightarrow f$ where $f(A)=2g(A)-|A|$.
Theorem \ref{ourresult} also provides information about the number
of proper $3$-colourings of $Q_d$. A {\bf proper $3$-colouring} of a
graph $G$ with vertex set $V$ and edge set $E$ is a function
$\chi\colon V \longrightarrow \{0,1,2\}$ satisfying $(x,y) \in E
\Rightarrow \chi(x) \neq \chi (y)$. Theorem \ref{ourresult} implies
that the number of proper $3$-colourings of $Q_d$ is asymptotic to
$6e2^{2^{d-1}}$; for, as observed by Randall \cite{Randall2}, there
is a bijection from ${\cal F}$ to the set of proper $3$-colourings
of $Q_d$ with $\chi(\underline{0}) = 0$: the bijection is given by
$f \longrightarrow \chi$ where $\chi(v)=i$ iff $f(v) \equiv i$ (mod
$3$).
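Neither bijection plays a role in our proof, but both, and the
predominance of small ranges, can be checked by brute force for very
small $d$. The following Python sketch (ours and purely illustrative;
feasible only for $d \leq 4$ or so) enumerates ${\cal F}$ directly and
tabulates range sizes:
\begin{verbatim}
def count_by_range(d):
    # enumerate all f: V(Q_d) -> Z with f(0) = 0 and |f(u)-f(v)| = 1
    # on edges, recording the size of the range of each one
    n = 2 ** d
    nbrs = [[v ^ (1 << i) for i in range(d)] for v in range(n)]
    order = sorted(range(n), key=lambda v: bin(v).count("1"))
    counts, f = {}, [None] * n

    def extend(i):
        if i == n:
            r = len(set(f))
            counts[r] = counts.get(r, 0) + 1
            return
        v = order[i]
        u = v & (v - 1)   # a neighbour of v with one fewer 1-bit,
                          # already assigned since order sorts by popcount
        for val in (f[u] - 1, f[u] + 1):
            if all(f[w] is None or abs(val - f[w]) == 1 for w in nbrs[v]):
                f[v] = val
                extend(i + 1)
        f[v] = None

    f[0] = 0          # normalize at the all-zeros vertex
    extend(1)
    return counts

print(count_by_range(2))   # {3: 4, 2: 2}: six homomorphisms for d = 2
\end{verbatim}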
The main inspiration for the proof of Theorem \ref{ourresult} is the
work of A. Sapozhenko, who, in \cite{Sapozhenko2}, gave a relatively
simple derivation for the asymptotics of the number of independent
sets in $Q_d$ (earlier derived in a more involved way in
\cite{KorshunovSapozhenko}). Our Lemma \ref{mainhmsapprox} is a
modification of a lemma in \cite{Sapozhenko}, and our overall
approach is similar to \cite{Sapozhenko2}. The other key ingredient
in our proof is the main lemma from \cite{Kahn}, which was already
used by Kahn to give Theorem \ref{rangeconstant}.
In the rest of this section, we establish basic notation and
gather together the main external ingredients that will be used in
the proof of Theorem \ref{ourresult}, before giving an outline of the rest of the paper.
\subsection{Notation and conventions} \label{subsectionnotation}
For graph theory basics, see e.g. \cite{Bollobas},~\cite{Diestel}.
For basics of the combinatorics of the Hamming cube, see e.g.
\cite{Bollobas4}.
The Hamming cube $Q_d$ is a $d$-regular, bipartite graph.
Write
$V$ for the vertex set of the cube, ${\cal E}$ for the set of even
vertices (those whose $\ell_1$ distance from $\underline{0}$ is
even) and ${\cal O}$ for the set of odd vertices.
Set
$M=2^{d-1}=|{\cal E}|=|{\cal O}|$.
For $u, v \in V$ and $A,C \subseteq V$ we write $u \sim v$ if
there is an edge in $Q_d$ joining $u$ and $v$, $\nabla(A)$ for the
set of edges having exactly one end in $A$ and (when $A \cap
C=\emptyset$) $\nabla(A,C)$ for the set of edges having one end in
each of $A, C$.
Set $N(u)=\{w\in V\colon w \sim u\}$ ($N(u)$ is the {\bf
neighbourhood} of $u$), $N(A)=\cup_{w \in A} N(w)$, $N_C(u)=\{w\in
C\colon w \sim u\}$, $N_C(A)=\cup_{w \in A} N_C(w)$, and
$d_C(u)=|N_C(u)|$. Write $\rho(u,v)$ for the length of the
shortest $u$-$v$ path in $Q_d$, and set $\rho(u,A)=\min_{w \in
A}\{\rho(u,w)\}$ and $\rho(A,C)=\min_{w \in A, w' \in
C}\{\rho(w,w')\}$. Set $B(A)=\{v \in V\colon N(v) \subseteq A\}$.
We say that $A$ is {\bf $k$-linked} if for every $u,v\in A$ there
is a sequence $u=u_0, u_1, \ldots, u_l=v$ in $A$ with
$\rho(u_i,u_{i+1})\leq k$ for $i = 0, \ldots, l-1$.
Note that for
any $k$, $A$ is the disjoint union of its maximal $k$-linked
subsets --- we call these the {\bf $k$-components} of $A$.
Write
$C \prec A$ if $C$ is a $2$-component of $A$, and $c(A)$ for the
number of $2$-components of $A$.
We say that $A$ is {\bf small} if $|A|<\alpha^d$ for a certain
constant $\alpha < 2$ that will be discussed in Section
\ref{sectionreduction} (and {\bf large} otherwise), {\bf sparse}
if all the $2$-components of $A$ are singletons (and {\bf
non-sparse} otherwise), and {\bf nice} if $A$ is small, $2$-linked
and of size at least $2$.
Note that all sets $A$ that we will
consider will satisfy either $A \subseteq {\cal E}$ or $A
\subseteq {\cal O}$.
For integers $a < b$ we define $[a,b] = \{a, \ldots, b\}$.
We use ``$\ln$'' for the natural logarithm and ``$\log$'' always
means the base $2$ logarithm.
The implied constants in the $O$ and
$\Omega$ notation are absolute (independent of $d$).
We always
assume that $d$ is large enough to support our assertions.
No
attempt has been made to optimize constants.
\subsection{External ingredients}
\label{sectionext}
We list here the main results that we will be drawing on
in the rest of the paper.
\medskip
We begin with a lemma bounding the number of connected
subgraphs of a graph. The infinite $\Delta$-branching rooted tree contains precisely
${\Delta n \choose n}/((\Delta-1)n+1)$ rooted subtrees with $n$ vertices
(see e.g. Exercise 11 (p. 396) of \cite{Knuth})
and this implies that if $G$ is a graph with maximum degree $\Delta$ and vertex set $V(G)$
then the number of $n$-vertex subsets of $V(G)$ which contain a fixed vertex and induce a connected
subgraph is at most $(e\Delta)^{n}$. (This fact is rediscovered in \cite{Sapozhenko}.)
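(As a quick sanity check of this count, not needed in the sequel: for
$\Delta = 2$ the formula gives the Catalan numbers, e.g. ${6 \choose
3}/4 = 5$ rooted subtrees with $3$ vertices. The convolution
recurrence below, a sketch of ours, reproduces the formula for small
$n$.)
\begin{verbatim}
from math import comb

def subtree_counts(D, nmax):
    # T[m] = number of m-vertex rooted subtrees (containing the root)
    # of the infinite D-branching rooted tree; the root's D ordered
    # child subtrees give a D-fold convolution recurrence.
    T = [1] + [0] * nmax          # T[0] = 1 is the empty subtree
    for m in range(1, nmax + 1):
        conv = [1] + [0] * (m - 1)
        for _ in range(D):
            conv = [sum(conv[j] * T[k - j] for j in range(k + 1))
                    for k in range(m)]
        T[m] = conv[m - 1]
    return T

for D in (2, 3, 4):
    for n in range(1, 8):
        assert subtree_counts(D, 7)[n] == comb(D * n, n) // ((D - 1) * n + 1)
\end{verbatim}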
We will use the following easy corollary.
\begin{lemma} \label{Tree}
Let $\Sigma$ be a graph with vertex set $V(\Sigma)$ and
maximum degree $\Delta$. For each fixed $k$, the number of $k$-linked subsets of
$V(\Sigma)$ of size $n$ which contain a fixed
vertex is at most $2^{O(n\log \Delta)}$.
\end{lemma}
\noindent This follows from the fact that a $k$-linked subset of
$\Sigma$ is connected in a graph with all degrees
$O(\Delta^{k+1})$.
\medskip
The next lemma is a special case of a fundamental result due to
Lov\'asz \cite{Lovasz} and Stein \cite{Stein} (see also
\cite{Furedi}). For a bipartite graph $\Sigma$ with bipartition
$X\cup Y$, say $Y'\subseteq Y$ {\bf covers} $X$ if each $x\in X$
has a neighbour in $Y'$.
\begin{lemma}\label{Lcor}
If a bipartite graph $\Sigma$ with bipartition $X\cup Y$ satisfies
$d(x)\geq a$ for all $x\in X$ and $d(y)\leq b$ for all $y\in Y$,
then $X$ is covered by some $Y'\subseteq Y$ of size at most
$(|Y|/a)(1+\ln b)$.
\end{lemma}
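The bound of Lemma \ref{Lcor} is exactly what the usual greedy covering procedure delivers. Purely as an illustration (the function below is our own sketch and plays no role in the proofs):
\begin{verbatim}
def greedy_cover(X, Y, adj):
    # adj[y] = set of neighbours of y in X; assumes every x in X has a
    # neighbour in Y.  If d(x) >= a for all x in X and d(y) <= b for
    # all y in Y, each step covers at least an a/|Y| fraction of the
    # uncovered elements, and the standard analysis gives a cover of
    # size at most (|Y|/a)(1 + ln b).
    uncovered, cover = set(X), []
    while uncovered:
        y = max(Y, key=lambda y: len(adj[y] & uncovered))
        cover.append(y)
        uncovered -= adj[y]
    return cover
\end{verbatim}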
\medskip
The next lemma is from \cite{Sapozhenko}
(see Lemma 2.1); the reader should have no
difficulty supplying a proof.
\begin{lemma} \label{Lconn}
If $\Sigma$ is a graph on vertex set $V(\Sigma)$ and $A,C \subseteq V(\Sigma)$ satisfy
\medskip
\noindent {\rm (i)} $A$ is $k$-linked
\medskip
\noindent and
\medskip
\noindent {\rm (ii)} $\rho(u,C)\leq l$ for each $u \in A$ and
$\rho(v,A)\leq l$ for each $v \in C$,
\medskip
\noindent then $C$ is $(k+2l)$-linked.
\end{lemma}
The main step from the proof of Theorem \ref{rangeconstant} in
\cite{Kahn} (obtained via entropy arguments) will also be used
here. For $f \in {\cal F}$, set $C(f)=\{v \in V\colon f|_{N(v)}
\mbox{ is constant}\}$.
\begin{lemma} \label{entropylemma}
For $u \sim v$ and ${\bf f}$ drawn uniformly from ${\cal F}$,
${\bf P}(|\{u,v\}\cap C({\bf f})|=1)=1-e^{-\Omega(d)}$.
\end{lemma}
Finally, we need to know something about isoperimetry in the cube.
A {\bf Hamming ball centered at $x_0$} in $Q_d$ is any set of
vertices $B$ satisfying
$$
\{u \in V\colon \rho(u,x_0) \leq k\} \subseteq B \subset \{u \in
V\colon \rho(u,x_0) \leq k+1\}
$$
for some $k<d$. An {\bf even} (resp. {\bf odd}) {\bf Hamming ball}
is a set of vertices of the form $B \cap {\cal E}$ (resp. $B \cap
{\cal O}$) for some Hamming ball $B$. We use the following result
of K\"orner and Wei \cite{KornerWei}.
\begin{lemma} \label{kornerandwei}
For every $C \subseteq {\cal E}$ (resp. ${\cal O}$) and $D \subseteq V$, there
exists an even (resp. odd) Hamming ball $C'$ and a set $D'$ such that
$|C'|=|C|$, $|D'|=|D|$ and $\rho(C',D') \geq \rho(C,D)$.
\end{lemma}
\subsection{Outline}
The rest of the paper is organized as follows.
In Section \ref{sectionreduction} we use Lemma \ref{entropylemma}
to reduce Theorem \ref{ourresult} to the problem of counting the
number of homomorphisms which are predominantly $0$ on ${\cal E}$.
The easy lower bounds on the number of homomorphisms which take on
four and five values are given in Section
\ref{sectionlowerbounds}. In Section \ref{sectionsums} we examine
a general type of sum over small subsets of ${\cal E}$ and
establish some of its properties. In Section \ref{sectionthesum}
we write down an explicit sum of the type examined in Section
\ref{sectionsums} for the number of homomorphisms which are
predominantly $0$ on ${\cal E}$. The rest of the paper is devoted
to estimating this sum. In Section \ref{sectionisoperimetry} we
establish lower bounds on the sizes of neighbourhoods of
single-parity sets in the cube. In Section
\ref{sectionmainapproximation} we arrive at the heart of the
matter, showing that the set of nice subsets of ${\cal E}$ can be
``well-approximated'' in a precise sense by members of a ``small''
collection; this allows us to swiftly complete the proof of
Theorem \ref{ourresult} in Section \ref{sectionprovingo(1)}. We
postpone a more detailed outline of the latter portion of the
argument until the beginning of Section
\ref{sectionmainapproximation}. Finally, in Section
\ref{sectionremarks}, we make some brief remarks on the proof and
possible extensions of the techniques used.
\section{Reduction to mostly constant} \label{sectionreduction}
We begin the proof of Theorem \ref{ourresult} by using Lemma
\ref{entropylemma} to reduce the problem to that of counting
homomorphisms which mainly take a single value on ${\cal E}$.
There is an inherent odd-even symmetry in the problem; we now reformulate
slightly to make use of this. Write
$$
{\cal A}= \{f\colon V \rightarrow {\bf Z} \colon u \sim v
\Rightarrow |f(u)-f(v)|=1\}
$$
and write ${\cal B}$ for the quotient of ${\cal A}$ by the equivalence relation
$$
f \equiv g \iff f - g \mbox{ is constant on $V$}.
$$
For each $f \in {\cal A}$ write $[f]$ for the equivalence class of $f$ in ${\cal B}$.
Noting that $R$ is constant on equivalence classes, we may define
$$
{\cal B}_i = \{[f]\in {\cal B}\colon |R(f)|=i\}.
$$
Clearly $|{\cal B}_i| = |{\cal F}_i|$ for each $i$ (${\cal F}$ is a complete set
of representatives for ${\cal B}$).
For $f \in {\cal A}$, we say that $f$ is {\bf mostly constant on
${\cal E}$} if there is some $c$ such that $\{v \in {\cal E}\colon
f(v) \neq c\}$ is small (see Section \ref{subsectionnotation} for
the definition of small; the constant $\alpha$ in that definition
will be specified in the proof of Lemma \ref{mostlyconstant}), and
we define {\bf mostly constant on ${\cal O}$} analogously. These
definitions respect the equivalence relation, so we may define
$$
{\cal B}^{{\cal E}} = \{[f] \in {\cal B}\colon f \mbox{ is mostly
constant on ${\cal E}$}\}.
$$
Define ${\cal B}^{{\cal O}}$ analogously.
By symmetry, $|{\cal B}^{{\cal E}}|=
|{\cal B}^{{\cal O}}|$ (any automorphism of $Q_d$ that sends ${\cal E}$ to ${\cal O}$
induces a bijection between the two sets).
\begin{lemma} \label{negligibleintersection}
$$
|{\cal B}^{{\cal E}} \cap {\cal B}^{{\cal O}}| = e^{-\Omega(d)}|{\cal B}|.
$$
\end{lemma}
\noindent {\em Proof: }To specify an
$[f] \in {\cal B}^{{\cal E}} \cap {\cal B}^{{\cal O}}$ we first
specify the predominant values of the representative $f$ on
${\cal E}$ and ${\cal O}$. W.l.o.g. we may assume that the predominant value on ${\cal E}$
is $0$, and so the predominant value on ${\cal O}$ is one of $\pm 1$.
We then specify the small sets from ${\cal E}$
and ${\cal O}$ on which $f$ does not take the predominant values,
and finally the values of $f$ on these small sets. Noting that
once $f(v)$ has been specified for any $v \in V$ there are at most $2d+1$
values that $f$ can take on any other vertex and that $2^M$ is a
trivial lower bound on $|{\cal B}|$, we get
\begin{eqnarray}
|{\cal B}^{{\cal E}} \cap {\cal B}^{{\cal O}}| & \leq &
2\sum_{i, j \leq \alpha^d} {M \choose i}{M \choose j}(2d+1)^{i+j} \nonumber \\
& \leq & e^{-\Omega(d)}|{\cal B}|. \nonumber
\end{eqnarray}
\qed
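(A routine way to verify the final inequality, spelled out here for completeness: for $i \leq \alpha^d$ we have ${M \choose i}(2d+1)^{i} \leq \left(M(2d+1)\right)^{\alpha^d}$, whence
$$
2\sum_{i, j \leq \alpha^d} {M \choose i}{M \choose j}(2d+1)^{i+j}
\leq 2(\alpha^d+1)^2\left(M(2d+1)\right)^{2\alpha^d} = 2^{O(d\alpha^d)};
$$
since $\alpha<2$ we have $d\alpha^d = o(M)$, so the right-hand side is at most $2^{o(M)} \leq e^{-\Omega(d)}2^M \leq e^{-\Omega(d)}|{\cal B}|$.)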
\begin{lemma} \label{mostlyconstant}
$$
|{\cal B}| = (2 \pm e^{-\Omega(d)})|{\cal B}^{{\cal E}}|.
$$
\end{lemma}
\noindent {\em Proof: }For $f \in {\cal A}$, set $C(f)=\{v \in
V\colon f|_{N(v)} \mbox{ is constant}\}$ (extending the definition
given in Section \ref{sectionext}). We choose a uniform member
${\bf [f]}$ of ${\cal B}$ by choosing ${\bf f}$ uniformly from
${\cal F}$. For ${\bf [f]}$ and $u,v \in V$, let $Q_u$ be the
event $\{u \in C({\bf f})\}$, $Q_{\overline{u}}$ the complementary
event, $Q_{u\overline{v}} = Q_u \cap Q_{\overline{v}}$ and
$Q_{\overline{u}\overline{v}}= Q_{\overline{u}} \cap
Q_{\overline{v}}$. Write $K_u = K_u({\bf f})$ for the set of
vertices that can be reached from $u$ in $C({\bf f})$ via steps of
size exactly $2$, and let $Q_{uv}^*$ be the event $\{v \in K_u\}$.
(Note that if $f, g \in {\cal A}$ are equivalent then $C(f)=C(g)$,
so all these events are well defined.)
Let $u$ and $v$ be two vertices of the same parity. We claim that
$Q_{\overline{u}\overline{v}} \cup Q_{uv}^*$ occurs with
probability $1-e^{-\Omega(d)}$. For, let $ua_1a_2 \ldots a_{2k-1}v$ be a $u$-$v$ path of
length at most $d$ (the diameter of $Q_d$). Writing $a_0$ for $u$ and
$a_{2k}$ for $v$, we have
$$
Q_{\overline{u}\overline{v}} \cup
Q_{uv}^* \supseteq \cap_{i=0}^{2k-1}
(Q_{a_i\overline{a_{i+1}}} \cup Q_{\overline{a_i}a_{i+1}}).
$$
By Lemma \ref{entropylemma}, ${\bf P}(Q_{a_i\overline{a_{i+1}}}
\cup Q_{\overline{a_i}a_{i+1}}) = 1-e^{-\Omega(d)}$ for each $i$.
Hence ${\bf P}(Q_{\overline{u}\overline{v}} \cup Q_{uv}^*) \geq
1-de^{-\Omega(d)} = 1-e^{-\Omega(d)}$, as claimed.
We therefore have, for fixed $u \in V$ and any $v$ of the same parity as $u$,
${\bf P}(Q_{uv}^*|Q_u)>1-c^{-d}$, where $c>1$ is fixed. So, conditioning on $Q_u$, we have
$$
{\bf E}(|\{v\colon \rho(u,v) \mbox{ {\em even}},v \not \in K_u
\}|)\leq (2/c)^d,
$$
so that, by Markov's Inequality (with the constant $c'$ chosen so
that $2/c < c' < 2$),
\begin{equation} \label{conditioning}
{\bf P}(|K_u|<M-(c')^d | Q_u) \leq (2/(cc'))^d=e^{-\Omega(d)}.
\end{equation}
If $u \not \in C({\bf f})$, then $K_u({\bf f}) = \emptyset$, so that
${\bf P}(|K_u|<M-(c')^d | Q_{\overline{u}})=1$.
By symmetry, ${\bf P}(Q_{u \overline{v}})$ is the same for every adjacent $u$ and $v$,
and this together with Lemma \ref{entropylemma} gives
$1/2+e^{-\Omega(d)} > {\bf P}(Q_u), {\bf P}(Q_{\overline{u}}) > 1/2-e^{-\Omega(d)}$.
Combining these observations with (\ref{conditioning}), we get
$$
{\bf P}(|K_u|<M-(c')^d) \leq 1/2+e^{-\Omega(d)}.
$$
Noting that ${\bf f}$ is constant on the neighbourhood of $K_u$,
this says (taking $u$ to be any vertex in ${\cal O}$) that there
is a constant $\beta < 2$ such that
$$
{\bf P}({\bf f} \mbox{ is constant on a subset of ${\cal E}$ of size at least } M-\beta^d)
>1/2-e^{-\Omega(d)}.
$$
Taking $\alpha = \beta$ in the definition of small, this says
$$
|{\cal B}^{{\cal E}}| \geq (1/2 - e^{-\Omega(d)})|{\cal B}|.
$$
The lemma now follows from Lemma \ref{negligibleintersection}. \qed
It is now convenient to choose as a complete set of representatives for
${\cal B}^{{\cal E}}$ the collection
$$
{\cal F}^{{\cal E}} = \{f \in {\cal A}\colon {\cal E}\setminus
f^{-1}(0) \mbox{ is small}\}.
$$
Set
$$
{\cal F}^{{\cal E}}_i=\{f \in {\cal F}^{{\cal E}}\colon
|R(f)|=i\}.
$$
Noting that $|{\cal F}^{{\cal E}}_3| \geq 2^M$, we see that
Theorem \ref{ourresult} will now follow from
\begin{thm} \label{mainresult}
\begin{eqnarray}
|{\cal F}^{{\cal E}}| & \leq & (e+e^{-\Omega(d)})2^M \label{estimatingB} \\
|{\cal F}_4^{{\cal E}}| & \geq & (2\sqrt{e}-2 - e^{-\Omega(d)})2^M \label{lowerboundonf4} \\
|{\cal F}_5^{{\cal E}}| & \geq & (e-2\sqrt{e}+1-e^{-\Omega(d)})2^M \label{lowerboundonf5}.
\end{eqnarray}
\end{thm}
It is this that we proceed to prove.
\section{Lower bounds on $|{\cal F}^{{\cal E}}_4|$ and $|{\cal F}^{{\cal E}}_5|$}
\label{sectionlowerbounds}
The aim of this section is to prove (\ref{lowerboundonf4}) and (\ref{lowerboundonf5}).
With each sparse $A \subseteq {\cal E}$ of size at least $2$
we associate a subset ${\cal F}^{{\cal E}}_5(A) \subseteq {\cal F}^{{\cal E}}_5$
of size
$$
(2^{|A|}-2)2^{M-d|A|}= 2^MM^{-|A|}(1-2^{-|A|+1})
$$
consisting
of those $f \in {\cal F}^{{\cal E}}_5$ for which
$R(f)=[-2,2]$ and $f^{-1}(\{\pm 2\})=A$
(on $A$, choose values for $f$ from $\{\pm 2\}$,
choosing at least one $2$ and at least one $-2$; on ${\cal E}\setminus A$ give $f$ value 0;
and on ${\cal O}\setminus N(A)$ choose values from
$\{\pm 1\}$, all choices made independently).
Then ${\cal F}^{{\cal E}}_5(A) \cap {\cal F}^{{\cal E}}_5(B) = \emptyset$ whenever $A \neq B$.
Noting that there are at least ${M \choose k}-Md^2{M-2 \choose k-2}$
sparse subsets of ${\cal E}$ of size $k$, and that for $k \leq d$, this number is
$(1-e^{-\Omega(d)}){M \choose k}$, we can lower bound $|{\cal F}^{{\cal E}}_5|$ by
\begin{eqnarray}
\left|{\cal F}^{{\cal E}}_5 \right| & \geq &
2^M \sum_{k \geq 2} |\{A \subseteq {\cal E}\colon A \mbox{ sparse, }
|A|=k\}|M^{-k}(1-2^{-k+1}) \nonumber \\
& \geq & 2^M (1-e^{-\Omega(d)}) \sum_{k=2}^d {M \choose k}M^{-k}(1-2^{-k+1}) \nonumber \\
& \geq & 2^M (1-e^{-\Omega(d)}) \sum_{k=2}^d (1-e^{-\Omega(d)})(1/k!)(1-2^{-k+1}) \nonumber \\
& \geq & 2^M (1-e^{-\Omega(d)}) (\sum_{k=2}^d 1/k! - 2\sum_{k=2}^d 2^{-k}/k!) \nonumber \\
& \geq & 2^M (1-e^{-\Omega(d)})((e-2)-2(\sqrt{e}-3/2)) \nonumber \\
& \geq & 2^M (e-2\sqrt{e}+1-e^{-\Omega(d)}), \nonumber
\end{eqnarray}
so we have (\ref{lowerboundonf5}).
We do something
similar for (\ref{lowerboundonf4}). With each nonempty, sparse $A \subseteq {\cal E}$
we associate a subset ${\cal F}^{{\cal E}}_4(A) \subseteq {\cal F}^{{\cal E}}_4$
of size
$$
2^{1+M-d|A|} = 2^MM^{-|A|}2^{-|A|+1}
$$
consisting
of those $f \in {\cal F}^{{\cal E}}_4$ for which either
$R(f)=[-2,1]$ or $R(f)=[-1,2]$ and $f^{-1}(\{\pm 2\})=A$
(choose a value from $\pm 2$ for $f$ to take on $A$; on ${\cal E}\setminus A$ give $f$ value 0;
and choose values from
$\pm 1$ on ${\cal O}\setminus N(A)$, all choices made independently).
So we have
\begin{eqnarray}
\left|{\cal F}^{{\cal E}}_4\right| & \geq &
2^M \sum_{k \geq 1} |\{A \subseteq {\cal E}\colon A \mbox{ sparse, }
|A|=k\}|M^{-k}2^{-k+1} \nonumber \\
& \geq & 2^M (2\sqrt{e}-2-e^{-\Omega(d)}). \nonumber
\end{eqnarray}
\section{Sums over small subsets of ${\cal E}$} \label{sectionsums}
In this section, we examine a certain kind of sum that will arise when we try
to write down an explicit expression for $|{\cal F}^{{\cal E}}|$. Specifically, we prove
\begin{lemma} \label{generalsums}
Suppose that $g\colon 2^{\cal E}\rightarrow{\bf R}^+$ satisfies
\beq{sumcond1} g(A) = \prod \{ g(A_i) \colon A_i \prec A \}, \enq
\beq{sumcond2} g(\{y\}) = c2^{-d} ~\mbox{for all $y \in {\cal E}$
for some constant $c>0$} \enq and \beq{sumcond3} \sum_{\mbox{$A$
nice}} g(A) = e^{-\Omega(d)}. \enq Then for all $D \subseteq {\cal
E}$
$$
\left|\sum_{\mbox{$A \subseteq D$,~$A$ small}} g(A) -
(1+c2^{-d})^{|D|}\right| = e^{-\Omega(d)}.
$$
\end{lemma}
\noindent {\em Remark: }Because $\emptyset \prec \emptyset$, any
$g$ satisfying (\ref{sumcond1}) must also satisfy
$g(\emptyset)=1$.
\medskip
\noindent {\em Proof of Lemma \ref{generalsums}: }All summations
below are restricted to subsets of $D$. We begin by observing that
$(1+c2^{-d})^{|D|}=\sum_A c^{|A|}2^{-d|A|}$ and that if $A$ is
sparse then $g(A)=c^{|A|}2^{-d|A|}$, so that
\begin{equation} \label{whee}
\left|\sum_{\mbox{$A$ small}} g(A)-(1+c2^{-d})^{|D|}\right| \leq
\sum ~\hspace{-1.8mm}' g(A) + \sum ~\hspace{-1.8mm}'' c^{|A|}2^{-d|A|} +
\sum ~\hspace{-1.8mm}''' c^{|A|}2^{-d|A|},
\end{equation}
where $\sum'$ is over $A$ small and non-sparse, $\sum''$ is over $A$ large and
$\sum'''$ is over $A$ non-sparse.
We bound each of the terms on the right-hand side of (\ref{whee}). For the first we have
\begin{eqnarray}
\sum ~\hspace{-1.8mm}' g(A)
& \leq & \sum \left\{ g(A')g(A \setminus A')\colon
A' \mbox{ nice, }A \mbox{ small, }A' \prec A \right\} \nonumber \\
& \leq & \sum_{\mbox{$A'$ nice}} g(A') \sum_{\mbox{$A$ small}} g(A) \nonumber \\
& = & e^{-\Omega(d)}\sum_{\mbox{$A$ small}} g(A) \label{whee1}.
\end{eqnarray}
\noindent For the second we have
\begin{eqnarray}
\sum ~\hspace{-1.8mm}'' c^{|A|}2^{-d|A|} & \leq &
\sum_{|A| \geq d} c^{|A|}2^{-d|A|} \nonumber \\
& \leq & \sum_{i = d}^{|D|} {|D| \choose i} (c2^{-d})^i \nonumber \\
& \leq & \sum_{i \geq d} c^i/i! \nonumber \\
& = & e^{-\Omega(d)}. \label{whee2}
\end{eqnarray}
\noindent Finally, for the third we have
\begin{eqnarray}
\sum ~\hspace{-1.8mm}''' c^{|A|}2^{-d|A|} & \leq
& \sum_{x,x' \in D,~\rho(x,x')=2} c^22^{-2d} \sum_{A} c^{|A|}2^{-d|A|} \nonumber \\
& \leq & |D|c^2d^22^{-2d}(1+c2^{-d})^{|D|} \nonumber \\
& = & e^{-\Omega(d)}. \label{whee3}
\end{eqnarray}
Combining (\ref{whee1}), (\ref{whee2}) and
(\ref{whee3}) we get
\begin{eqnarray}
\left|\sum_{\mbox{$A$ small}} g(A)-(1+c2^{-d})^{|D|}\right| & = &
e^{-\Omega(d)} \left(\sum_{\mbox{$A$ small}} g(A)+1 \right) \label{intone} \\
& = & e^{-\Omega(d)}. \label{inttwo}
\end{eqnarray}
(We get (\ref{inttwo}) from (\ref{intone}) because the latter
implies that $\sum_{\mbox{$A$ small}} g(A)$ is bounded.) \qed
The most important $g$ that we will be considering is
$$
g(A)= 2^{-|N(A)|+|B(A)|}
$$
(recall that $B(A)=\{v\in V\colon N(v)\subseteq A\}$; for $A\subseteq{\cal E}$ this is a subset of $N(A)$). It's
easy to see that this satisfies (\ref{sumcond1}) and
(\ref{sumcond2}) (with $c=1$). It is far from obvious that it
satisfies (\ref{sumcond3}); Sections
\ref{sectionmainapproximation} and \ref{sectionprovingo(1)} are
devoted to the proof of this fact, which we state now for use in
Section \ref{sectionthesum}.
\begin{thm} \label{heartofmatter}
$$
\sum_{\mbox{$A \subseteq {\cal E}$ nice}} 2^{-|N(A)|+|B(A)|} =
e^{-\Omega(d)}.
$$
\end{thm}
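For very small $d$ the sum in Theorem \ref{heartofmatter} can be evaluated by brute force with the helper functions from the Python sketch in the notation subsection (a sanity check only; the theorem is an asymptotic statement, and the enumeration below caps ``small'' at an explicit size):
\begin{verbatim}
from itertools import combinations

def nice_sum(d, max_size):
    # sum of 2^(-|N(A)|+|B(A)|) over 2-linked subsets A of E
    # with 2 <= |A| <= max_size
    E = [v for v in range(1 << d) if bin(v).count("1") % 2 == 0]
    total = 0.0
    for k in range(2, max_size + 1):
        for A in combinations(E, k):
            if len(two_components(A, d)) == 1:   # A is 2-linked
                total += 2.0 ** (-len(N(A, d)) + len(B(A, d)))
    return total
\end{verbatim}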
\section{Proof of (\ref{estimatingB})} \label{sectionthesum}
In this section, we write an explicit sum of the type introduced in Section \ref{sectionsums}
for $|{\cal F}^{{\cal E}}|$ and use Lemma \ref{generalsums} to estimate it, modulo Theorem \ref{heartofmatter}.
This will give (\ref{estimatingB}).
For each small $A \subseteq {\cal E}$, set
$$
{\cal F}^{{\cal E}}(A) = \{f \in {\cal F}^{{\cal E}}\colon
f^{-1}(0)={\cal E}\setminus A\}.
$$
We may specify an $f \in {\cal F}^{{\cal E}}(A)$ by the following
procedure. First, noting that $f$ must be either always positive
or always negative on a $2$-component of $A$, we specify a sign
($\pm$) for each such $2$-component. Next, we specify a nested
sequence
$$
A = C_2 \supseteq C_4 \supseteq \ldots \supseteq C_{2[d/2]}.
$$
For each $i=1, \ldots, [d/2]$, $C_{2i}=\{u \in {\cal E}\colon
|f(u)| \geq 2i\}$. Because the diameter of $Q_d$ is $d$, we have
$|f(u)| \leq 2[d/2]$ for all $u \in {\cal E}$, so this second step
completes the specification of $f$ on ${\cal E}$. Note that not
every sequence of $C_{2i}$'s gives rise to a legitimate $f \in
{\cal F}^{{\cal E}}$.
To specify $f$ on ${\cal O}$, we first specify a value from $\pm
1$ on each vertex of ${\cal O} \setminus N(A)$, and then, for each
$i=1, \ldots, [d/2]$, specify a value from $2i \pm 1$ for $|f(u)|$
for each $u \in B(C_{2i}) \setminus N(C_{2i+2})$ (note that the
sign of $f(u)$ for such $u$ has been determined by the
specification of signs on $A$). To see that this completes the
specification of $f$ on ${\cal O}$, note that we have a choice for
the value of $|f|$ at $u \in N(A)$ iff $f$ is constant on $N(u)$
iff $u \in B(C_{2i}) \setminus N(C_{2i+2})$ for some $1 \leq i
\leq [d/2]$ (setting $C_{2[d/2]+2}=\emptyset$), and that in this
case we can choose from two possible values, $2i \pm 1$ (see
Figure \ref{figure1}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=12cm]{fig1.eps}
\caption{A vertex in $N(A)\setminus B(A)$ has neighbours in
both ${\cal E}\setminus A$ and $A$, and a vertex in $N(C_4)\setminus
B(C_4)$ has neighbours in both $A\setminus C_4$ and $C_4$, but
a vertex in $B(A)\setminus N(C_4)$ only has neighbours in
$A\setminus C_4$.}
\label{figure1}
\end{center}
\end{figure}
So, noting that $N(C_{2i+2}) \subseteq B(C_{2i})$ for each $i=1, \ldots, [d/2]$, we have
$$
|{\cal F}^{{\cal E}}(A)| = 2^{c(A)+ M-|N(A)|+|B(A)|} \sum \prod_{i=2}^{[d/2]} 2^{-|N(C_{2i})|+|B(C_{2i})|}
$$
where the sum --- here and in the next line --- is
over all legitimate choices of $C_2 \supseteq \ldots \supseteq C_{2[d/2]}$. Setting
$$
h(A)=2^{c(A)-|N(A)|+|B(A)|} \sum \prod_{i=2}^{[d/2]} 2^{-|N(C_{2i})|+|B(C_{2i})|}
$$
we get
$$
|{\cal F}^{{\cal E}}|=2^M \sum_{\mbox{$A \subseteq {\cal E}$ small}} h(A).
$$
We claim that $h$ satisfies all the conditions of Lemma
\ref{generalsums}. For $A=\{y\}$ we have $B(A)=\emptyset$, and so
$h(A)=2^{1-d}$; this gives (\ref{sumcond2}) (with $c=2$). To see
that $h$ satisfies (\ref{sumcond3}), note that for each $A
\subseteq {\cal E}$ small, each $C_{2i}$ is a small subset of $A$,
and so we can crudely upper bound $h(A)$ by
\begin{eqnarray}
h(A) & \leq & 2^{c(A)-|N(A)|+|B(A)|}
\left( \sum_{\mbox{$C \subseteq A$ small}} 2^{-|N(C)|+|B(C)|} \right)^{[d/2]}\nonumber \\
& \leq & 2^{c(A)-|N(A)|+|B(A)|}
\left( (1+2^{-d})^{\alpha^d} + e^{-\Omega(d)} \right)^{[d/2]} \label{applyingsumsthm} \\
& \leq & \left(1+o(1)\right)2^{c(A)-|N(A)|+|B(A)|}. \nonumber
\end{eqnarray}
The inequality in (\ref{applyingsumsthm}) is obtained by applying
Lemma \ref{generalsums} and Theorem \ref{heartofmatter}, and
(\ref{sumcond3}) for $h$ now follows directly from Theorem
\ref{heartofmatter}. Finally, to establish (\ref{sumcond1}) for
$h$, note that $C_2 \supseteq C_4 \supseteq \ldots \supseteq
C_{2[d/2]}$ is a legitimate sequence of $C$'s for $A$ iff $C_2\cap
A_i \supseteq C_4\cap A_i \supseteq \ldots \supseteq
C_{2[d/2]}\cap A_i$ is a legitimate sequence for $A_i$ for each
$2$-component $A_i$ of $A$, from which the claimed factorization
of $h(A)$ follows.
We can now easily establish (\ref{estimatingB}), thus completing
the proofs of Theorems \ref{mainresult} and \ref{ourresult}.
Applying Lemma \ref{generalsums}, we have
\begin{eqnarray}
\left||{\cal F}^{{\cal E}}| -e2^M\right| & \leq & 2^M\left(\left|
\sum ~\hspace{-1.8mm}' h(A)
-(1+2^{-d+1})^{|{\cal E}|}\right|
+\left|(1+2^{-d+1})^{|{\cal E}|}-e\right|\right) \nonumber \\
& = & e^{-\Omega(d)}2^{M}, \nonumber
\end{eqnarray}
\noindent where $\sum'$ is over $A \subseteq {\cal E}$ small.
\section{Isoperimetry in the cube} \label{sectionisoperimetry}
The aim of this section is to put some lower bounds on the
neighbourhood size of a small set in $Q_d$. We begin with
\begin{lemma} \label{boundsondelta}
For all $A \subseteq {\cal E}$ or $A \subseteq {\cal O}$ small, $|A|\leq
(1-\Omega(1))|N(A)|$.
\end{lemma}
\noindent {\em Proof: }By symmetry, we need only prove this when $A \subseteq {\cal E}$.
Let small $A \subseteq {\cal E}$ be given. Applying
Lemma \ref{kornerandwei} with $C=A$ and $D=V \setminus (A
\cup N(A))$, we find that there exists an even Hamming ball $A'$
with $|A'|=|A|$ and $|N(A)| \geq |N(A')|$. So we may assume that
$A$ is a small even Hamming ball.
We consider only the case where $A$ is centered at an even vertex,
w.l.o.g. $\underline{0}$, the other case being similar. In this
case,
$$
\{v \in {\cal E}\colon \rho(v,\underline{0}) \leq k\} \subseteq A
\subset \{v \in {\cal E}\colon \rho(v,\underline{0}) \leq k+2\}
$$
for some even $k \leq d/2-\Omega(d)$ (the bound on $k$ coming from
the fact that $A$ is small). For each $0 \leq i \leq (k+2)/2$, set
$B_i=A \cap \{v\colon \rho(v,\underline{0})=2i\}$, and
$N^+(B_i)=N(B_i)\cap\{u\colon \rho(u,\underline{0})=2i+1\}$. It's
clear that $N(A)=\cup_{0 \leq i \leq (k+2)/2} N^+(B_i)$ and that
for $i=0,\ldots,(k+2)/2$
\begin{eqnarray}
\frac{|B_i|}{|N^+(B_i)|} & \leq & \frac{2i+1}{d-2i} \label{LYM} \\
& = & 1-\Omega(1), \label{usingk}
\end{eqnarray}
\noindent from which the lemma follows. The inequality in
(\ref{usingk}) comes from the bound on $k$. The inequality in
(\ref{LYM}) is actually an equality except when $i=(k+2)/2$, in
which case it follows from the observation that each vertex in
$B_{k+2}$ has exactly $d-(k+2)$ neighbours in $N^+(B_{k+2})$, and
each vertex in $N^+(B_{k+2})$ has at most $(k+2)+1$ neighbours in
$B_{k+2}$. \qed
Lemma \ref{boundsondelta} is true for all small $A$, but can be strengthened
considerably when we impose stronger bounds on $|A|$. In this direction, we only need
the simple
\begin{lemma} \label{boundsondeltaAsmall}
If $|A|<d^{O(1)}$, then $|A| \leq O(1/d)|N(A)|$, and if $|A| \leq d/2$,
then $|N(A)| \geq d|A|-2|A|(|A|-1)$.
\end{lemma}
\noindent {\em Remark: }Note that the second statement is true for
all $A$, but vacuously so for $|A|>d/2$.
\medskip
\noindent {\em Proof of Lemma \ref{boundsondeltaAsmall}: }If $|A|<d^{O(1)}$,
then we have $k =O(1)$ in the notation of
Lemma \ref{boundsondelta}, and repeating the argument of that lemma
we get $|A| \leq O(1/d)|N(A)|$.
For the second part, note that each $u \in A$ has $d$ neighbours,
of which at least $d-2(|A|-1)$ must be unique to it, since a pair
of vertices in the cube can have at most two common neighbours.
\qed
From here on, the only properties of the cube that we will use are
the isoperimetric bounds of Lemmas \ref{boundsondelta} and
\ref{boundsondeltaAsmall}.
\section{The main approximation} \label{sectionmainapproximation}
We now begin the proof of Theorem \ref{heartofmatter}. The
approach will be to partition the set of $A$'s over which we are
summing according to the sizes of $A$, $N(A)$, $B(A)$ and
$N(B(A))$ (note that the summand in Theorem \ref{heartofmatter} is
constant on each partition class). The bulk of the work will be in
bounding the sizes of the partition classes.
Given $A \subseteq {\cal E}$, set $G=G(A)=N(A)$, $B=B(A)$ and
$H=H(A)=N(B)$. In what follows, $G$, $B$ and $H$ are always
understood to be $G(A)$, $B(A)$ and $H(A)$ for whatever $A$ is
under discussion. Note that $B \subseteq G$ and $H \subseteq A$.
Given $a$, $g$, $b$ and $h$, set
$$
{\cal H}(a, g, b, h)=\left\{A \subseteq {\cal E}~\mbox{ $2$-linked
$\colon |A|=a, |G|=g, |B|=b$ and $|H|=h$} \right\}.
$$
The aim of this section is to prove
\begin{lemma} \label{mainapprox}
For each $a$, $g$, $b$ and $h$ with $a \leq \alpha^d$,
$$
|{\cal H}(a,g,b,h)| < M 2^{g-b-\Omega(g/\log d)},
$$
\end{lemma}
from which we will
easily derive Theorem \ref{heartofmatter} in Section \ref{sectionprovingo(1)}.
\medskip
From now until the beginning of Section \ref{sectionprovingo(1)},
$a, g, b$ and $h$ are fixed, and we write ${\cal H}$ for ${\cal
H}(a,g,b,h)$. The proof of Lemma \ref{mainapprox} involves the
idea of ``approximation''. We begin with an informal outline. To
bound $|{\cal H}|$, we produce a small set ${\cal U}$ with the
properties that each $A \in {\cal H}$ is ``approximated'' (in an
appropriate sense) by some $U \in {\cal U}$, and for each $U \in
{\cal U}$, the number of $A \in {\cal H}$ that could possibly be
``approximated'' by $U$ is small. (Each $U \in {\cal U}$ will
consist of four parts; one each approximating $G$, $A$, $H$ and
$B$.) The product of the bound on $|{\cal U}|$ and the bound on
the number of $A \in {\cal H}$ that may be approximated by any $U$
is then a bound on $|{\cal H}|$. Another way of saying this is
that we produce a set ${\cal U}$ and a map $app\colon {\cal H}
\rightarrow {\cal U}$; we then bound $|{\cal H}|$ by
$$
|{\cal H}| \leq |{\cal U}|\max_{U \in {\cal U}}|app^{-1}(U)|.
$$
The set ${\cal U}$ is itself produced by an approximation process
--- we first produce a small set ${\cal V}$ with the property that
each $A \in {\cal H}$ is ``weakly approximated'' (in an
appropriate sense) by some $V \in {\cal V}$, and then show that
for each $V$ there is a small set ${\cal W}(V)$ with the property
that for each $A \in {\cal H}$ that is ``weakly approximated'' by
$V$, there is a $W \in {\cal W}(V)$ which approximates $A$; we
then take ${\cal U}=\cup_{V \in {\cal V}} {\cal W}(V)$. (Each $V
\in {\cal V}$ will consist of two parts; one each approximating
$G$ and $H$.)
We now begin the formal discussion of Lemma \ref{mainapprox} by
introducing the two notions of approximation that we will use,
beginning with the weaker notion. A {\bf covering approximation}
for $A \subseteq {\cal E}$ is a pair $(F',P') \in 2^{\cal O}
\times 2^{\cal E}$ satisfying
\begin{equation} \label{cover1}
F' \subseteq G,~N(F') \supseteq A
\end{equation}
and
$$
P' \subseteq H,~N(P') \supseteq B
$$
\noindent (see Figure \ref{figure2}). An {\bf approximating
quadruple} for $A \subseteq {\cal E}$ is a quadruple $(F,S,P,Q)
\in 2^{\cal O} \times 2^{\cal E} \times 2^{\cal E} \times 2^{\cal
O}$ satisfying \beq{quad1} F \subseteq G,~S \supseteq A, \enq
\beq{quad2} d_F(u)>d-\sqrt{d}~~\mbox{for all $u \in S$} \enq
\beq{quad3} d_{{\cal E} \setminus S}(v) > d-\sqrt{d}~~\mbox{for
all $v \in {\cal O} \setminus F$} \enq \beq{quad4} P \subseteq
H,~Q \supseteq B, \enq \beq{quad5} d_P(u)>d-\sqrt{d}~~\mbox{for
all $u \in Q$} \enq and \beq{quad6} d_{{\cal O} \setminus Q}(v) >
d-\sqrt{d}~~\mbox{for all $v \in {\cal E} \setminus P$} \enq
\noindent (see Figure \ref{figure3}). Note that if $x$ is in $A$
then all of its neighbours are in $G$, and if $y$ is in ${\cal O}
\setminus G$ then all of its neighbours are in ${\cal E} \setminus
A$. If we think of $S$ as ``approximate $A$'' and $F$ as
``approximate $G$'', (\ref{quad2}) says that if $x \in {\cal E}$
is in ``approximate $A$'' then almost all of its neighbours are in
``approximate $G$'', while (\ref{quad3}) says that if $y\in {\cal
O}$ is not in ``approximate $G$'' then almost all of its
neighbours are not in ``approximate $A$'', and there are similar
interpretations for (\ref{quad5}) and (\ref{quad6}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=12cm]{fig2.eps}
\caption{$F'$ satisfies both the conditions of (\ref{cover1}).}
\label{figure2}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=12cm]{fig3.eps}
\caption{The pair $(F,S)$ satisfies (\ref{quad1}). To satisfy
(\ref{quad2}) and (\ref{quad3}), each vertex $u \in S$ should have
most (all but $\sqrt{d}$) of its neighbours in $F$, and each vertex $v \in {\cal O}\setminus F$
should have most of its neighbours in ${\cal E}\setminus S$.}
\label{figure3}
\end{center}
\end{figure}
\medskip
There are two parts to the proof of Lemma \ref{mainapprox}; the
``approximation'' step (Lemma \ref{mainhmsapprox}) and the
``reconstruction'' step (Lemma \ref{countingas0}). We now state
these two lemmas (from which Lemma \ref{mainapprox} follows
immediately).
\begin{lemma} \label{mainhmsapprox}
There is a family
$$
{\cal U}={\cal U}(a, g, b, h) \subseteq
2^{\cal O} \times 2^{\cal E} \times 2^{\cal E} \times 2^{\cal O}
$$
with
$$
|{\cal U}| \leq M2^{O(g\log d/\sqrt{d})}
$$
such that every $A \in {\cal H}$ has an approximating quadruple in
${\cal U}$.
\end{lemma}
\begin{lemma} \label{countingas0}
For each $(F,S,P,Q) \in 2^{{\cal O}} \times 2^{{\cal E}} \times
2^{{\cal E}} \times 2^{{\cal O}}$ satisfying (\ref{quad2}),
(\ref{quad3}), (\ref{quad5}) and (\ref{quad6}), there are at most
$2^{g - b - \Omega(g/\log d)}$ $A$'s in ${\cal H}$ satisfying
(\ref{quad1}) and (\ref{quad4}).
\end{lemma}
Lemma \ref{mainhmsapprox} follows directly from the next two
lemmas.
\begin{lemma} \label{firsthmsapprox}
There is a family
$$
{\cal V}={\cal V}(a,g,b,h) \subseteq 2^{\cal O} \times 2^{\cal E}
$$
with
$$
|{\cal V}| \leq M2^{O(g \log^2 d/d)}
$$
such that each $A \in {\cal H}$ has a covering approximation in
${\cal V}$.
\end{lemma}
\begin{lemma} \label{secondhmsapprox}
For each $(F',P') \in 2^{\cal O} \times 2^{\cal E}$ there is a
family
$$
{\cal W}={\cal W}(F',P',a,g,b,h) \subseteq 2^{\cal O}
\times 2^{\cal E} \times 2^{\cal E} \times 2^{\cal O}
$$
with
$$
|{\cal W}| \leq 2^{O(g \log d / \sqrt{d})}
$$
such that any $A \in {\cal H}$ for which $(F',P')$ is a covering
approximation has an approximating quadruple in ${\cal W}$.
\end{lemma}
We prove Lemmas \ref{firsthmsapprox} and \ref{secondhmsapprox} in
Section \ref{subsectionapproxlemmas}. We then prove Lemma
\ref{countingas0} in Section \ref{subsectionprovingmainapprox}.
The main point in the proof of Lemma \ref{secondhmsapprox} is an
algorithm which produces approximating quadruples from covering
approximations; the idea for this algorithm is from
\cite{Sapozhenko}.
\subsection{Proofs of Lemmas \ref{firsthmsapprox} and \ref{secondhmsapprox}: Approximations}
\label{subsectionapproxlemmas}
We begin with a simple observation about sums of binomial
coefficients which we will draw on repeatedly (and usually without
comment) in this section and the next. If $k=o(n)$, we have
\begin{eqnarray}
\sum_{i \leq k} {n \choose i} & \leq & (1+O(k/n)){n \choose k} \nonumber \\
& \leq & (1+O(k/n))(en/k)^k \nonumber \\
& \leq & 2^{(1+o(1))k\log(n/k)}. \label{binomial}
\end{eqnarray}
\noindent {\em Proof of Lemma \ref{firsthmsapprox}: }For each $A
\in {\cal H}$ we obtain a covering approximation for $A$ by taking
$F'(A) \subseteq G$ to be a cover of minimum size of $A$ in the
graph induced by $G \cup A$ and $P'(A) \subseteq H$ to be a cover
of minimum size of $B$ in the graph induced by $H \cup B$. Note
that $P'(A) \subseteq N(F'(A))$.
By Lemma \ref{Lconn}, $F'(A)$ is $4$-linked ($A$ is $2$-linked,
$\rho(x, F'(A)) = 1$ for each $x \in A$ and $\rho(y, A) = 1$ for
each $y \in F'(A)$). By Lemma \ref{Lcor}, $|F'(A)| \leq g(1+\ln
d)/d = O(g \log d / d)$ and $|P'(A)| \leq |H|(1+\ln d)/d = O(g
\log d / d)$ (noting that $h \leq g$).
We may therefore take ${\cal V}$ to be the set of all pairs
$(F',P') \in 2^{\cal O} \times 2^{\cal E}$ with $F'$ $4$-linked
and $P' \subseteq N(F')$, and $F', P'$ both of size at most $O(g
\log d / d)$. By Lemma \ref{Tree}, there are at most
$$
M \sum_{i \leq O(g \log d / d)} 2^{O(i \log d)} = M 2^{O(g \log^2
d / d)}
$$
possibilities for $F'$ (the factor of $M$ is for the choice of a
fixed vertex in $F'$), and, given $F'$, a further
$$
\sum_{i \leq O(g \log d / d)}{|N(F')| \choose i} = 2^{O(g \log^2 d / d)}
$$
choices for $P'$ (here we are using (\ref{binomial}) and the fact
that $|N(F')| \leq dg$). The lemma follows. \qed
\medskip
\noindent {\em Proof of Lemma \ref{secondhmsapprox}: }Fix $A
\subseteq {\cal E}$. We give an algorithm which, for input
$(F',S') \in 2^{\cal O} \times 2^{\cal E}$ satisfying $F'
\subseteq G$ and $S' \supseteq A$ produces an output $(F,S) \in
2^{\cal O} \times 2^{\cal E}$ satisfying (\ref{quad1}),
(\ref{quad2}) and (\ref{quad3}).
\medskip
Fix a linear ordering $\ll$ of $V$.
\medskip
\noindent {\bf Step $1$: }If $\{u \in A\colon d_{G \setminus
F'}(u)\geq \sqrt{d} \}
\neq \emptyset$, pick the smallest (with respect to $\ll$) $u$ in this
set and update $F'$ by $F' \longleftarrow F' \cup N(u)$. Repeat
this until $\{u \in A\colon d_{G \setminus F'}(u)\geq \sqrt{d} \}
= \emptyset$. Then set $F''=F'$ and $S''=S' \setminus \{u \in
{\cal E}\colon d_{{\cal O}\setminus F''}(u) \geq \sqrt{d}\}$ and
go to Step $2$.
\medskip
\noindent {\bf Step $2$: }If $\{w \in {\cal O} \setminus G\colon
d_{S''}(w) \geq \sqrt{d} \} \neq \emptyset$, pick the smallest
(with respect to $\ll$) $w$ in this set and update $S''$ by $S''
\longleftarrow S'' \setminus N(w)$. Repeat this until $\{w \in
{\cal O} \setminus G\colon d_{S''}(w) \geq \sqrt{d} \} =
\emptyset$. Then set $S=S''$ and $F=F''\cup \{w \in {\cal O}
\colon d_S(w) \geq \sqrt{d} \}$ and stop.
\medskip
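For concreteness, here is a direct Python transcription of the two steps (illustrative only; vertices are encoded as $d$-bit integers, the order $\ll$ is taken to be the numeric order, and the helper names are ours):
\begin{verbatim}
import math

def nbhd(u, d):
    return {u ^ (1 << i) for i in range(d)}

def deg_in(u, X, d):
    # number of neighbours of u lying in the set X
    return sum((u ^ (1 << i)) in X for i in range(d))

def approx_pair(A, Fp, Sp, d):
    # input: Fp a subset of G = N(A), Sp a superset of A (A on the even side)
    G = set().union(*(nbhd(u, d) for u in A))
    O = {v for v in range(1 << d) if bin(v).count("1") % 2 == 1}
    t = math.isqrt(d)                    # threshold, roughly sqrt(d)
    F, S = set(Fp), set(Sp)
    # Step 1
    while True:
        bad = sorted(u for u in A if deg_in(u, G - F, d) >= t)
        if not bad:
            break
        F |= nbhd(bad[0], d)
    S = {u for u in S if deg_in(u, O - F, d) < t}
    # Step 2
    while True:
        bad = sorted(w for w in O - G if deg_in(w, S, d) >= t)
        if not bad:
            break
        S -= nbhd(bad[0], d)
    F |= {w for w in O if deg_in(w, S, d) >= t}
    return F, S
\end{verbatim}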
\begin{claim} \label{algoanalysisoutput}
The output of this algorithm satisfies (\ref{quad1}),
(\ref{quad2}) and (\ref{quad3}).
\end{claim}
\noindent {\em Proof: }To see that $F \subseteq G$ and $S
\supseteq A$, first observe that $F'' \subseteq G$ (since $F'
\subseteq G$, and the vertices added to $F'$ in Step $1$ are all
in $G$) and that $S'' \supseteq A$ (or Step $1$ would not have
terminated). We then have $S \supseteq A$ since Step $2$ deletes
from $S''$ only neighbours of ${\cal O} \setminus G$, and $F
\subseteq G$ since the vertices added to $F''$ at the end of Step
$2$ are all in $G$ (or Step $2$ would not have terminated).
To verify (\ref{quad2}) and (\ref{quad3}), note that
$d_{F''}(u)>d-\sqrt{d}$ for all $u \in S''$ by definition,
$S\subseteq S''$, and $F \supseteq F''$, so that
$d_{F}(u)>d-\sqrt{d}$ for all $u \in S$; and if $w \in {\cal O}
\setminus F$ then $d_S(w) < \sqrt{d}$ (again by definition), so
that $d_{{\cal E} \setminus S}(w)
> d-\sqrt{d}$ for all $w \in {\cal O} \setminus F$. \qed
The proof of Lemma \ref{secondhmsapprox} involves a two-stage
procedure. Stage $1$ runs the algorithm described above with
$(F',{\cal E})$ as input. Stage $2$ runs it with $(P',{\cal O})$
as input and with the roles of ${\cal E}$ and ${\cal O}$ reversed.
By Claim \ref{algoanalysisoutput}, the quadruple $(F,S,P,Q)$,
where $(F,S)$ is the output of Stage $1$ and $(P,Q)$ the output of
Stage $2$, is an approximating quadruple for $A$.
\begin{claim} \label{algoanalysisnum}
The procedure described above has at most $2^{O(g \log d /
\sqrt{d})}$ outputs as the input runs over those $A \in {\cal H}$
for which $(F',P')$ is a covering approximation.
\end{claim}
\noindent Taking ${\cal W}$ to be the set of all possible outputs of the
algorithm, Lemma \ref{secondhmsapprox} follows.
\medskip
\noindent {\em Proof of Claim \ref{algoanalysisnum}: } The output
of Stage $1$ of the algorithm is determined by the set of $u$'s
whose neighbourhoods are added to $F'$ in Step $1$, and the set of
$w$'s whose neighbourhoods are removed from $S''$ in Step $2$.
Each iteration in Step $1$ removes at least $\sqrt{d}$ vertices
from $G$, so there are at most $g /\sqrt{d}$ iterations. The $u$'s
in Step $1$ are all drawn from $A$ and hence $N(F')$, a set of
size at most $d g$. So the total number of outputs for Step $1$ is
at most
$$
\sum_{i \leq g/\sqrt{d}} {d g
\choose i} = 2^{O(g \log d / \sqrt{d})}.
$$
We perform a similar analysis on Step $2$. Each $u \in
S''\setminus A$ contributes more than $d-\sqrt{d}$ edges to
$\nabla(G)$, so initially $|S''\setminus A|\leq g d/(d-\sqrt{d}) =
O(g)$. Each $w$ used in Step $2$ reduces this by at least
$\sqrt{d}$, so there are at most $O(g/\sqrt{d})$ iterations. Each
$w$ is drawn from $N(S'')$, a set which is contained in the fourth
neighbourhood of $F'$ ($S'' \subseteq N(G)$ by construction of
$S''$, $G=N(A)$ and $A \subseteq N(F')$) and so has size at most
$d^4 g$. So as with Step $1$, the total number of outputs for Step
$2$, and hence for Stage $1$, is $2^{O(g \log d / \sqrt{d})}$.
Noting that $h \leq g$, a similar analysis applied to Stage $2$
gives that that stage also has at most $2^{O(g \log d /
\sqrt{d})}$ outputs, and the claim follows. \qed
\subsection{Proof of Lemma \ref{countingas0}: Reconstruction}
\label{subsectionprovingmainapprox}
We first note an important property of approximating quadruples.
\begin{lemma} \label{sleqfqleqp}
If $(F,S,P,Q)$ is an approximating quadruple for $A \in {\cal H}$
then
\begin{eqnarray}
|S| & \leq & |F| + O(g/\sqrt{d}) \label{boundingsbyfspecific} \\
|Q| & \leq & |P| + O(h/\sqrt{d}) \label{boundingqbypspecific}.
\end{eqnarray}
\end{lemma}
\noindent {\em Proof: } Observe that $|\nabla(S,G)|$ is bounded
above by $d|F| + \sqrt{d}|G \setminus F|$ and below by $d|A| +
(d-\sqrt{d})|S \setminus A| = d|S| - \sqrt{d}|S \setminus A|$,
giving
$$
|S| \leq |F| + |(G \setminus F) \cup (S \setminus A)|/\sqrt{d},
$$
and that each $u \in (G \setminus F) \cup (S \setminus A)$
contributes more than $d-\sqrt{d}$ edges to $\nabla(G)$, a set of
size $g d$, giving
$$
|(G \setminus F) \cup (S \setminus A)| \leq 2 g d/(d-\sqrt{d}) = O(g).
$$
These two observations together give (\ref{boundingsbyfspecific}).
The proof of (\ref{boundingqbypspecific}) is similar. \qed
Lemma \ref{countingas0} now follows from
\begin{lemma} \label{countingas}
For each $(F,S,P,Q) \in 2^{{\cal O}} \times 2^{{\cal E}} \times
2^{{\cal E}} \times 2^{{\cal O}}$ satisfying
(\ref{boundingsbyfspecific}) and (\ref{boundingqbypspecific}),
there are at most $2^{g - b - \Omega(g/\log d)}$ $A$'s in ${\cal
H}$ satisfying (\ref{quad1}) and (\ref{quad4}).
\end{lemma}
\noindent {\em Proof: } For $A \in {\cal H}$, write
$$
[A]=\{u \in {\cal E}\colon N(u)\subseteq N(A)\},
$$
and write $a'$ for $|[A]|$. Note
that although $G$ does not determine $A$, it does determine $[A]$.
By Lemma \ref{boundsondelta}, there is an absolute constant
$\gamma>0$ (independent of $a', g, b$ and $h$) such that
\beq{finallyusingdelta} g-a' > \gamma g ~~~~~ \mbox{and} ~~~~~ h-b
> \gamma h. \enq
Say that $Q$ is {\bf tight} if $|Q| < b + \gamma h/\log d$, and
{\bf slack} otherwise, and that $S$ is {\bf tight} if $|S| < g -
\gamma g/(4\log d)$ and {\bf slack} otherwise.
We now describe a procedure which, for input $(F,S,P,Q)$, produces
an output $A$ which satisfies (\ref{quad1}) and (\ref{quad4}). The
procedure involves a sequence of choices, the nature of the
choices depending on whether $S$ and $Q$ are tight or slack.
We begin by identifying a subset $D$ of $A$ which can be specified
relatively ``cheaply'': if $Q$ is tight, we pick $B \subseteq Q$
with $|B|=b$ and take $D=N(B)$; if $Q$ is slack, we simply take
$D=P$ (recalling that $P \subseteq H \subseteq A$).
If $S$ is tight, we complete the specification of $A$ by choosing
$A \setminus D \subseteq S \setminus D$. If $S$ is slack, we first
complete the specification of $G$ by choosing $G \setminus F
\subseteq N(S) \setminus F$. Note that in this case,
(\ref{boundingsbyfspecific}) implies \beq{glessf} |G \setminus F|
< \gamma g /(3\log d). \enq We then complete the specification of
$A$ by choosing $A \setminus D \subseteq [A] \setminus D$ (noting
that we do know $[A] \setminus D$ at this point).
This procedure produces all possible $A \in {\cal H}$ satisfying
(\ref{quad1}) and (\ref{quad4}) (and more). Before bounding the
number of outputs, we gather together some useful observations.
From (\ref{boundingsbyfspecific}) and (\ref{boundingqbypspecific}) we have
\beq{sq}
|S| = O(g) ~~~~~ \mbox{and} ~~~~~ |Q| = O(h).
\enq
If $Q$ is tight then there are at most
\begin{eqnarray}
\sum_{i \leq \gamma h /\log d}{|Q| \choose |Q|-i} & \leq &
\sum_{i \leq \gamma h /\log d}{O(h) \choose i} \nonumber \\
& \leq &
2^{O(\gamma h /\log d)\log(O(\log d/\gamma))} \nonumber \\
& \leq &
2^{\gamma h/2} \label{choicesfordqsmall}
\end{eqnarray}
possibilities for $D$, and in this case $|D|=h$; while if $Q$ is
slack there is just one possibility for $D$, and in this case
(using (\ref{boundingqbypspecific}))
\begin{eqnarray}
|D|=|P| & > & |Q|-\Omega(h/ \sqrt{d}) \nonumber \\
& > & b+\gamma h /\log d -\Omega(h/\sqrt{d}) \nonumber \\
& \geq & b+\gamma h /(2\log d). \label{qlarge}
\end{eqnarray}
If $S$ is slack then (since $|N(S)\setminus F| \leq d|S| \leq
O(dg)$; see (\ref{sq})) the number of possibilities for $G
\setminus F$ is at most
\begin{eqnarray}
\sum_{i<\gamma g /(3\log d)}{O(gd) \choose i} & \leq &
2^{(1+o(1))(\gamma g /(3\log d))\log(O(d\log d/\gamma))} \nonumber \\
& \leq & 2^{\gamma g/2}. \label{choicesforglessfslarge}
\end{eqnarray}
We now bound the number of outputs of the procedure, considering
separately the four cases determined by whether $S$ and $Q$ are
slack or tight.
If $S$ and $Q$ are both tight then the number of possibilities for
$A$ is at most \beq{ssmallqsmall} 2^{[\gamma h/2]+[g - \gamma
g/(4\log d)-h]}
< 2^{g - \gamma g/(4\log d)-b-\gamma h/2}.
\enq (The first term in the exponent on the left-hand side
corresponds to the choice of $D$ (using
(\ref{choicesfordqsmall})), and the second to the choice of
$A\setminus D$ (note that since $S$ and $Q$ are both tight,
$|S\setminus D| \leq g - \gamma g/(4\log d)-h$). To get the
right-hand side, we use the second part of
(\ref{finallyusingdelta}).)
If $S$ is tight and $Q$ is slack then the total is at most
\beq{ssmallqlarge} 2^{[g - \gamma g/(4\log d)-b-\gamma h/(2\log
d)]}. \enq (Here there is no choice for $D$, and the exponent
corresponds to the choice of $A\setminus D$ (using
(\ref{qlarge})).)
If $Q$ is tight then $|[A]\setminus D| = a'-h$, so that if $S$ is
slack (and $Q$ tight) then the number of possibilities for $A$ is
at most
\begin{equation} \label{slargeqsmall}
2^{[\gamma h/2]+[\gamma g/2]+[a'-h]}<2^{g-\gamma g/2-b-\gamma
h/2}.
\end{equation}
(The first term on the left-hand side corresponds to the choice of
$D$ (using (\ref{choicesfordqsmall})), the second to the choice of
$G\setminus F$ (using (\ref{choicesforglessfslarge})) and the
third to the choice of $A\setminus D$. On the right-hand side, we
use both parts of (\ref{finallyusingdelta}).)
Finally, if $Q$ is slack then $|[A]\setminus D| \leq a'- b-\gamma
h /(2\log d)$ (see (\ref{qlarge})), so that if $S$ and $Q$ are
both slack the number of possibilities for $A$ is at most
\begin{equation} \label{slargeqlarge}
2^{[\gamma g/2]+[a'-b-\gamma h/(2\log d)]}
< 2^{g-\gamma g/2-b-\gamma h/(2\log d)}.
\end{equation}
(The first term on the left-hand side corresponds to the choice of
$G\setminus F$ and the second to the choice of $A\setminus D$. The
right-hand side uses the first part of (\ref{finallyusingdelta}).)
Noting that $h \leq g$, the lemma follows from (\ref{ssmallqsmall}), (\ref{ssmallqlarge}),
(\ref{slargeqsmall}) and (\ref{slargeqlarge}). \qed
\section{Proof of Theorem \ref{heartofmatter}} \label{sectionprovingo(1)}
We say that a nice $A \subseteq {\cal E}$ is {\bf of type I} if
$|A|<d/2$, {\bf of type II} if $d/2 \leq |A|< d^{2}$ and {\bf of
type III} otherwise. We consider the portions of the sum in
Theorem \ref{heartofmatter} corresponding to type I, II and III
$A$'s separately.
If $A$ is of type I, then by Lemma \ref{boundsondeltaAsmall},
$|N(A)| \geq d|A| - 2|A|(|A|-1)$. Note also that in this case,
$B(A)=\emptyset$. By Lemma \ref{Tree}, for each $2\leq i < d/2$,
there are at most $M2^{O(i\log d)} < 2^{d+O(i\log d)}$ $2$-linked
subsets of ${\cal E}$ of size $i$. So
\begin{eqnarray}
\sum_{\mbox{$A$ of type I}} 2^{-|N(A)|+|B(A)|} & \leq &
\sum_{i=2}^{d/2} 2^{d+O(i\log d)-di+2i(i-1)} \nonumber \\
& = & e^{-\Omega(d)}. \label{typeIA}
\end{eqnarray}
We do something similar if $A$ is of type II. Here Lemma
\ref{boundsondeltaAsmall} gives $|N(A)| \geq \Omega(d)|A|$ and $|B(A)| \leq O(1/d)|A|$
(recalling that $N(B) \subseteq A$), and so
\begin{eqnarray}
\sum_{\mbox{$A$ of type II}} 2^{-|N(A)|+|B(A)|} & \leq &
\sum_{i=d/2}^{d^{2}} 2^{d+O(i\log d)-\Omega(d)i+O(1/d)i} \nonumber \\
& = & e^{-\Omega(d)}. \label{typeIIA}
\end{eqnarray}
We partition the set of $A$'s of type III according to the sizes
of $A$, $N(A), B(A)$ and $H(A)~(=N(B(A)))$ and use Lemma
\ref{mainapprox} to bound the sizes of the partition classes. In
this case we have $|N(A)| \geq d^{2}$. So (summing only over those
values of $a$, $g$, $b$ and $h$ for which ${\cal H}(a,g,b,h) \neq
\emptyset$ and $g \geq d^2$, and with the inequalities justified
below)
\begin{eqnarray}
\sum_{\mbox{$A$ of type III}} 2^{-|N(A)|+|B(A)|} & = & \sum_{a, g, b, h} |{\cal H}(a,g,b,h)| 2^{-g+b} \nonumber \\
& \leq & M \sum_{a, g, b, h} 2^{-\Omega(g/\log d)} \label{usingmainapprox} \\
& < & M^4 \sum_{g \geq d^{2}} 2^{-\Omega(g/ \log d)} \label{msquared} \\
& \leq & \left(M^4/(1-2^{-\Omega(1/\log d)})^2\right) 2^{-\Omega(d^{2}/\log d)} \nonumber \\
& = & e^{-\Omega(d)}. \label{typeIIIAgpart}
\end{eqnarray}
Here (\ref{usingmainapprox}) is from Lemma \ref{mainapprox} and in (\ref{msquared}) we use the fact that there
are fewer than $M$ choices for each of $a$, $b$ and $h$.
Combining (\ref{typeIA}), (\ref{typeIIA}) and
(\ref{typeIIIAgpart}), we have Theorem \ref{heartofmatter}. \qed
\section{Remarks} \label{sectionremarks}
The point of departure for our proof of Theorem \ref{ourresult} is
Lemma \ref{entropylemma}, which allows us to focus immediately on
those homomorphisms which are predominantly single-valued on one
side of the cube. The proof of this lemma given in \cite{Kahn}
relies heavily on the structure of the cube (in particular on the
fact that the neighbourhoods of adjacent vertices induce a perfect
matching), and it does not seem obvious at the moment how to get
beyond this and generalize Theorem \ref{ourresult} to a larger
class of graphs.
On the other hand, the proofs of Theorem \ref{heartofmatter} and
Lemma \ref{mainapprox} are much less dependent on the specific
structure of the cube, using only the isoperimetric bounds of
Section \ref{sectionisoperimetry}. As such, it should be possible
to extend these results considerably. To illustrate this, it is
worth comparing Lemma \ref{mainapprox} with the main lemma of
\cite{Sapozhenko}. To state that, we need some notation. Let $G$
be a $d$-regular bipartite graph with bounded co-degree (i.e.,
every pair of vertices has a bounded number of common neighbours).
Write $X$ and $Y$ for the bipartition classes of $G$. For any $a'$
and $g$, set
$$
{\cal G}(a',g)=\{A \subseteq X \colon \mbox{$A$ $2$-linked,
$|N(A)|=g, |[A]|\leq a'$}\},
$$
(recall that $[A]=\{x\in X \colon N(x)\subseteq N(A)\}$), and set
$\delta=(g-a')/g$. Using slightly more versatile notions of
approximation than those introduced in Section
\ref{sectionmainapproximation}, the following is proved in
\cite{Sapozhenko}:
\begin{thm} \label{Sapslemma}
For $d$ sufficiently large, and for any $a'$ and $g$ satisfying
$1> \delta >\log^9 d/d^2$,
$$
|{\cal G}(a',g)| \leq |X|2^{g(1-\delta/(6\log d))}.
$$
\end{thm}
Notice that (by the results of Section \ref{sectionisoperimetry})
the sum in Theorem \ref{heartofmatter} extends only over sets
$A$ which satisfy $(|N(A)|-|A|)/|N(A)| \geq \Omega(1)$, a much
stronger condition than that imposed in Theorem \ref{Sapslemma}.
By slightly modifying our notions of approximation, we may extend
the validity of Lemma \ref{mainapprox} to cover a similar range as
Theorem \ref{Sapslemma}. However, the analysis is considerably
more involved, and we do not do so here.
\bigskip
\noindent {\bf Acknowledgement: }The author thanks Jeff Kahn for numerous helpful discussions.
\section{Introduction}
In heavy-ion collisions heavy quarks are produced in the initial hard scattering processes and probe the high-energy-density medium created in such collisions. Heavy-flavour mesons can be studied via their hadronic or semi-leptonic decay channels. A detailed understanding of the interaction of heavy quarks with the medium can be achieved by studying the azimuthal angular correlations of heavy-flavour mesons. Correlation studies can be used to obtain information on in-medium partonic energy loss and on the modification of the fragmentation function. In pp collisions, heavy-flavour correlations can be used to disentangle charm and beauty and to test the predictions of perturbative Quantum Chromodynamics (pQCD) calculations. The measurement in pp collisions is also a necessary baseline for the Pb-Pb studies.
The azimuthal angular correlations between heavy-flavour decay electrons and charged hadrons and between $\mathrm{D}^{*}$ mesons and charged hadrons are presented here. The shape of the azimuthal angular correlation of heavy-flavour decay electrons and charged hadrons is used to determine the relative beauty contribution. Due to the different decay kinematics of $\mathrm B$ and $\mathrm D$ mesons, the width of the near-side correlation distribution is larger for $\mathrm B$ mesons than for $\mathrm D$ mesons. Using the heavy-flavour electron cross section and the relative beauty contribution to the heavy-flavour electron yield, the charm and beauty production cross sections can be derived separately.
\section{Azimuthal angular correlation between heavy-flavour decay electrons and charged hadrons}
\subsection{Data sample and trigger selection}
The analysis is performed using pp collision data at a centre-of-mass energy of 2.76 TeV, collected in March 2011 with the ALICE experiment~\cite{aliceDet}. The detectors used in the analysis are the Inner Tracking System (ITS) ($|\eta|<0.9$, $0<\phi<360^{\circ}$), the Time Projection Chamber (TPC) ($|\eta|<0.9$, $0<\phi<360^{\circ}$) and the Electromagnetic Calorimeter (EMCal) ($|\eta|<0.7$, $80<\phi<180^{\circ}$). The events which pass the EMCal L0 trigger conditions are used. The L0 trigger is based on a 2$\times$2 tower patch with a cluster energy threshold of 3 GeV.
\subsection{Data Analysis}
Electrons are identified using information from the TPC and EMCal detectors. Particle identification in the TPC is based on the measurement of the specific ionization energy loss in the detector gas. In the EMCal, electron candidates are required to have $E/p$ between 0.8 and 1.2. The non-heavy-flavour electrons (Non-HFE) are mainly produced by internal and in-material photon conversions and are identified using the invariant mass method, where the partner electron is selected with loose electron identification criteria. Electron pairs which have an invariant mass of less than 50 MeV/$c^{2}$ are tagged as Non-HFE. Above an electron $p_{\rm T}$ of 2 GeV/$c$, the Non-HFE finding efficiency is $\approx 50\%$, as estimated from Monte Carlo (MC) simulations. The remaining Non-HFE contamination in the heavy-flavour electron (HFE) sample is corrected for using this efficiency.
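A minimal sketch of the pairing step (illustrative helper functions with assumed inputs, not the ALICE analysis code):
\begin{verbatim}
import numpy as np

M_E = 0.000511  # electron mass in GeV/c^2

def invariant_mass(p1, p2):
    # p1, p2: (px, py, pz) of the two electron candidates in GeV/c
    E1 = np.sqrt(np.dot(p1, p1) + M_E**2)
    E2 = np.sqrt(np.dot(p2, p2) + M_E**2)
    p = np.add(p1, p2)
    m2 = (E1 + E2)**2 - np.dot(p, p)
    return np.sqrt(max(m2, 0.0))

def is_non_hfe_pair(p1, p2, cut=0.050):
    # tag pairs with invariant mass below 50 MeV/c^2 as Non-HFE
    return invariant_mass(p1, p2) < cut
\end{verbatim}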
\begin{SCfigure}[1.2][h]
\centering
\includegraphics[height=6cm, width=6.5cm]{MCSingleEleDPhiNew1PtBin.pdf}
\label{fig:deltaphifit}
\hspace{0.3cm}
\caption{Azimuthal angular correlation between heavy-flavour decay electrons and charged hadrons in pp collisions at 2.76 TeV. The MC distribution for electrons from charm decay is shown in red, while the MC distribution for electrons from beauty decay is shown in blue and the full green curve is the fit to the data points.}
\end{SCfigure}
\begin{figure}[h]
\centering
\subfigure{
\includegraphics[height=6cm,width=7.5cm]{Bratio.pdf}
}
\subfigure{
\includegraphics[height=6cm,width=7.5cm]{Bratio-STAR.pdf}
}
\caption[]{Relative beauty contribution to the heavy-flavour decay electron yield in pp collisions at $\sqrt{\textrm{s}} =$ 2.76 TeV compared with FONLL calculations~\cite{FONLLcurve} (left) and previous RHIC measurements~\cite{BottomStar,eh-Bcs-Phenix} (right). }
\label{fig:bratio}
\end{figure}
The azimuthal angular correlation between heavy-flavour decay electrons and charged hadrons is constructed using high-quality tracks. In order to determine the fraction of electrons from beauty decays, $r_{\mathrm B}$, the measured correlation distribution is fitted with the function
\begin{equation}
\Delta\phi_{\mathrm {e-h}}^{\mathrm {HF}} = \mathrm{const} + r_{\mathrm {B}} \Delta\phi_{\mathrm {e-h}}^{\mathrm {B}} + (1-r_{\mathrm {B}}) \Delta\phi_{\mathrm{e-h}}^{\mathrm D},
\end{equation}
where $r_{\mathrm B}$ = $\frac {\mathrm {e_{B}}} {\mathrm{e_{B}} + \mathrm{e_{D}}}$ is the ratio of the electron yield from B-meson decays to the total heavy-flavour electron yield, and $\Delta\phi_{\mathrm{e-h}}^{\mathrm{D}}$ ($\Delta\phi_{\mathrm{e-h}}^{\mathrm{B}}$) is the azimuthal angular correlation between electrons from D (B) meson decays and charged hadrons, taken from MC simulations (PYTHIA 6.4 with the Perugia-0 tune~\cite{Pythiatune}) including the detector response. The constant term describes the uncorrelated background.
The fitting range used is $-1.5 < \Delta\phi < 1.5$ rad. The correlation distribution and the fit are shown in Figure \ref{fig:deltaphifit}. The beauty fraction extracted from the fit as a function of $p_{\rm T}$ is shown in Figure \ref{fig:bratio}. The fraction $r_{\mathrm{B}}$ increases with $p_{\rm T}$ and reaches $\approx 0.5$ ($\frac{\mathrm{e_{B}}}{\mathrm{e_{D}}} \approx 1$) at around 5 GeV/$c$. This measurement is consistent, within uncertainties, with perturbative Quantum Chromodynamics Fixed Order plus Next-to-Leading Logarithms (pQCD FONLL) calculations~\cite{FONLLcurve} and with results from RHIC in pp collisions at $\sqrt{\textrm{s}} =$ 200 GeV~\cite{BottomStar,eh-Bcs-Phenix}.
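As an illustration of the fitting procedure (a minimal sketch with assumed inputs, not the analysis code): once the MC templates are fixed, the model is linear in the parameters $\mathrm{const}$ and $r_{\mathrm B}$, so the fit can be written as a weighted least-squares problem:
\begin{verbatim}
import numpy as np

def fit_r_B(data, sigma, tmpl_B, tmpl_D):
    # model: data = const + r_B*tmpl_B + (1 - r_B)*tmpl_D, i.e.
    #        data - tmpl_D = const*1 + r_B*(tmpl_B - tmpl_D);
    # all arguments are 1-D arrays over the Delta-phi bins
    A = np.column_stack([np.ones_like(data), tmpl_B - tmpl_D])
    Aw = A / sigma[:, None]              # weight rows by the bin errors
    bw = (data - tmpl_D) / sigma
    (const, r_B), *_ = np.linalg.lstsq(Aw, bw, rcond=None)
    return const, r_B
\end{verbatim}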
\section{Beauty and charm decay electron cross section}
The beauty and charm decay electron cross sections are computed using the heavy-flavour decay electron cross section measured in ALICE and the relative beauty contribution. The heavy-flavour decay electron cross section is measured using the same data sample as the correlation analysis and the procedure described in~\cite{7TeVHFE}. Electrons are measured using the TPC and EMCal detectors. The cross section is obtained after applying various corrections, such as tracking efficiency and unfolding, particle identification efficiency and purity, and the EMCal trigger efficiency. The Non-HFE background is identified and subtracted using a cocktail method, where the measured $\eta$ and $\pi^{0}$ cross sections are used as input and all the known sources of background electrons are taken into account~\cite{7TeVHFE}.
\begin{figure}[h]
\centering
\subfigure{
\includegraphics[scale=0.25]{2012-Jul-30-poster1a.pdf}
\label{fig:hfe crosssection}
}
\subfigure{
\includegraphics[scale=0.25]{2012-Jul-29-BcrossSection.pdf}
\label{fig:b cross section}
}
\subfigure{
\includegraphics[scale=0.25]{2012-Jul-29-CcrossSection.pdf}
\label{fig:c cross section}
}
\caption[]{Heavy-flavour decay electron (left), beauty decay electron (middle) and charm decay electron (right) cross sections in pp collisions at $\sqrt{\textrm{s}} =$ 2.76 TeV compared to FONLL calculations.}
\label{fig:Cross-section}
\end{figure}
The beauty and charm decay electron cross sections are computed as
\begin{equation}
\left(\frac{\textrm{d}\sigma}{\textrm{d}p_{\rm T}}\right)_{\mathrm{b}\rightarrow \mathrm{e}} = r_{\mathrm{B}} \times \left(\frac{\textrm{d}\sigma}{\textrm{d}p_{\rm T}}\right)_{\mathrm{b+c}\rightarrow \mathrm{e}}
\end{equation}
\begin{equation}
\left(\frac{\textrm{d}\sigma}{\textrm{d}p_{\rm T}}\right)_{\mathrm{c}\rightarrow \mathrm{e}} = \left(\frac{\textrm{d}\sigma}{\textrm{d}p_{\rm T}}\right)_{\mathrm{b+c}\rightarrow \mathrm{e}} - \left(\frac{\textrm{d}\sigma}{\textrm{d}p_{\rm T}}\right)_{\mathrm{b}\rightarrow \mathrm{e}}
\end{equation}
Since $r_{\mathrm{B}}$ is not measured in the full $p_{\rm T}$ range of the HFE cross section, the beauty and charm decay electron cross sections are determined from 3 to 9 $\textrm{GeV}/c$. The measured heavy-flavour electron, beauty decay electron and charm decay electron cross sections are shown in Figure \ref{fig:Cross-section} together with FONLL pQCD calculations~\cite{FONLLcurve}. The cross sections are consistent with the FONLL calculations.
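In code, the two relations above amount to the following (a sketch assuming uncorrelated uncertainties, which simplifies the full error treatment):
\begin{verbatim}
import numpy as np

def b_and_c_cross_sections(hfe, hfe_err, r_B, r_B_err):
    # per-p_T-bin arrays: (b+c)->e cross section and beauty fraction
    b = r_B * hfe
    c = hfe - b                          # = (1 - r_B) * hfe
    b_err = np.hypot(r_B * hfe_err, hfe * r_B_err)
    c_err = np.hypot((1 - r_B) * hfe_err, hfe * r_B_err)
    return b, b_err, c, c_err
\end{verbatim}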
\section{Azimuthal angular correlation between $\mathrm{D}^{*}$ mesons and charged hadrons}
The correlation between $\mathrm{D}^{*}$ mesons and charged hadrons is measured in pp collisions at $\sqrt{\textrm s}=$ 7 TeV using data collected in 2010. $\mathrm{D}^{*}$ mesons are reconstructed via the decay channel $\mathrm{D}^{*}\rightarrow\mathrm{D}^{0}(K\pi)\pi$ using the invariant mass method, see Figure \ref{fig:D*} (left). The $\mathrm{D}^{*}$ background is estimated using $\mathrm{D}^{0}$ background candidates selected in the invariant mass sidebands $4\sigma<|\mathrm{M}(K\pi)-\mathrm{M}(\mathrm{D}^{0})|<10\sigma$ and combining these fake $\mathrm{D}^{0}$ candidates with pion tracks. The $\mathrm{D}^{*}$ mesons are correlated in azimuth with hadrons which pass quality track selection criteria. The invariant mass distribution of the reconstructed $\mathrm{D}^{*}$ mesons and the azimuthal correlation of $\mathrm{D}^{*}$ mesons with charged hadrons are shown in Figure \ref{fig:D*} (left and right, respectively). The red distribution gives the azimuthal angular correlation for all $\mathrm{D}^{*}$ candidates and the blue distribution corresponds to the sideband background candidates. The distribution can be fitted with a Gaussian to extract the correlation parameters (yield and width).
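A minimal sketch of this sideband-subtraction step (variable names are ours; the yields are taken from the invariant-mass fit):
\begin{verbatim}
def sideband_subtracted_corr(corr_peak, corr_side, n_sig, n_bkg_peak, n_side):
    # corr_peak: Delta-phi histogram for candidates in the peak region
    # corr_side: Delta-phi histogram for candidates in the sidebands
    # n_sig, n_bkg_peak: signal and background yields under the peak
    # n_side: background yield in the sidebands
    scale = n_bkg_peak / n_side          # scale the sidebands to the peak
    return [(p - scale * s) / n_sig
            for p, s in zip(corr_peak, corr_side)]
\end{verbatim}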
\begin{figure}[h]
\centering
\subfigure{
\includegraphics[scale=0.27]{2012-Jul-27-invmass.pdf}
\label{fig:D*reco}
}
\subfigure{
\includegraphics[scale=0.24]{2012-Jun-06-dataptint.pdf}
\label{fig:D*Correl}
}
\caption[]{(Left) Invariant mass distribution of $\mathrm{D}^{*}$ mesons. (Right) Azimuthal angular correlation between $\mathrm{D}^{*}$ and charged hadrons in pp collisions at $\sqrt{\textrm s} =$ 7 TeV.}
\label{fig:D*}
\end{figure}
\section{Results}
The relative beauty contribution to the heavy-flavour decay electron yield was measured in pp collisions at $\sqrt{\textrm s} =$ 2.76 TeV with the ALICE detector and compared with pQCD calculations and RHIC measurements. Using this beauty fraction, the beauty-decay and charm-decay electron cross sections were derived and found to be consistent with FONLL pQCD calculations.
\section{Introduction}
Isogeometric analysis (IgA) was introduced in \cite{Hughes:2005} with the aim of unifying computer aided design (CAD) and finite element analysis (FEA). It has rapidly become a mainstream analysis methodology within computational engineering and also stimulated new research in geometric design.
The core idea in IgA is to use the same discretization and representation tools for the design as well as for the analysis (in an isoparametric environment), providing a true design-through-analysis methodology~\cite{Cottrell:09,Hughes:2005}.
The isogeometric approach based on B-splines/NURBS shows important advantages over classical $C^0$ FEA. In particular, the inherently high smoothness of B-splines and NURBS allows for a higher accuracy per degree of freedom. This behavior has been numerically observed in a wide range of applications, and recently a mathematical explanation has been given thanks to error estimates in spline spaces with constants that are explicit in the polynomial degree $p$ and the global smoothness $C^k$, the parameters defining the spline spaces; see \cite{BeiraoDaVeiga:2012,Bressan:2019,Sande:2019,Sande:2020}.
Spectral analysis can be used to study the error in each eigenvalue and eigenfunction of a numerical discretization of an eigenvalue problem.
For a large class of boundary and initial-value problems the total discretization error on a given mesh can be recovered from its spectral error \cite{Hughes:2014,Hughes:2008}. It is argued in \cite{Garoni:symbol} that this is of primary interest in engineering applications since practical computations are not performed in the limit of mesh refinement. Usually the computation is performed on a few, or even just a single mesh, and then the asymptotic information deduced from classical error analysis is insufficient. It is more relevant to understand which eigenvalues/eigenfunctions are well approximated for a given mesh size.
Starting with \cite{Cottrell:2006}, the isogeometric approach for eigenvalue
problems has been investigated in several papers; we refer the reader to \cite{Garoni:symbol,Hughes:2014,Hughes:2008} and references therein. Extensive and accurate comparisons of the spectral approximation properties of classical finite
elements against isogeometric methods have been performed. It turns out that
the isogeometric elements improve the accuracy of the spectral approximation significantly in the entire discrete spectrum.
In FEA, the upper part of the discrete spectrum is inaccurate: high-order $p$ elements produce so-called \emph{optical branches}, which cause deteriorating accuracy of the higher modes. By contrast, it has been observed that maximally smooth spline discretizations of degree $p$ on uniform grids remove the optical branches from the discrete
spectrum, which almost entirely converges for increasing $p$. More generally, the spectral discretization using B-splines of degree $p$ and smoothness $C^k$, $0\leq k\leq p-1$, presents roughly $p-k$ branches and only a single branch (the so-called \emph{acoustical branch}) converges to the true spectrum \cite{Garoni:symbol,Hughes:2014}.
Convergence in $p$ for the $L^2$-projection of the eigenfunctions onto spline spaces has been analyzed in \cite{Sande:2019}. For general smoothness $C^k$ and for a fixed number of degrees of freedom, the error estimates in \cite{Sande:2019,Sande:2020} ensure convergence of the $L^2$-projection for increasing $p$ only for a fraction of the eigenfunctions. The amount of this fraction decreases as the maximum grid spacing increases (for a fixed number of degrees of freedom), in complete agreement with the numerical observations.
Maximally smooth spline spaces on uniform grids are hence an excellent choice for addressing eigenvalue problems. Yet, they still
present a flaw: a very small portion of the eigenfunctions are poorly approximated and the corresponding computed eigenvalues are much larger than the exact values. These spurious numerical values for the frequencies are usually referred to as \emph{outliers} \cite{Cottrell:2006}.
The number of outliers increases with the degree $p$. However, for fixed $p$, it is independent of the degrees of freedom for univariate problems, while a ``thin layer'' of outliers is observed in the multivariate setting; see \cite{Cottrell:09,Garoni:symbol} and references therein. Outliers persist, in the same amount although with mitigated amplitude, when considering isogeometric methods based on problem-dependent spline spaces like generalized B-splines which allow exact representations of some trigonometric functions \cite{Cardinali:2021,Roman:2017}.
Outlier-free discretizations are appealing, not only for their superior description of the spectrum of the continuous operator, but also for their beneficial effects in various contexts, such as an efficient selection of time-steps in (explicit) dynamics and robust treatment of wave propagation. For a fixed degree, the challenge is to remove outliers without loss of accuracy in the approximation of all eigenfunctions.
We also note that there is a widespread awareness that outliers are related to the treatment of boundary conditions; these introduce small-rank perturbations in the matrices arising in the considered discretization process \cite{Garoni:2020,Garoni:symbol}.
Eigenfunction approximation in case of the Laplacian with periodic boundary conditions in the space of periodic splines has been theoretically addressed in \cite[Section~4]{Sande:2019}. It has been proved that
for maximal smoothness $C^{p-1}$ and uniform grid spacing, the Galerkin approximations, in the periodic $n$-dimensional spline space of degree $p$, converge in $p$ to the first $n$ or $n-1$ eigenfunctions (depending on the parity of $n$) of the Laplacian with periodic boundary conditions. This implies that the periodic case incurs at most one outlier.
It was actually conjectured in \cite{Sande:2019} that no outliers can appear in such a context.
In this paper we complete the theory started in \cite{Sande:2019} and show that for the optimal spline spaces described in \cite{Floater:2017,Floater:2018}, as suggested in \cite{Sande:2019}, there are no outliers in isogeometric Galerkin discretizations of the eigenvalue problem related to the Laplace operator with classical boundary conditions (Dirichlet/Neumann/mixed) in the univariate and in the multivariate tensor-product case. Roughly speaking, these optimal spline spaces are obtained from the standard spline space by imposing specific homogeneous boundary conditions and using certain uniform knot sequences.
There are few empirical proposals for outlier removal in the recent literature. In \cite{Cottrell:2006} it was observed that a suitable non-linear parameterization of the domain, obtained through a uniform
distribution of the control points, eliminates outliers
for any $p$. This is actually related to a special treatment of the boundary because a uniform spacing of the control points (i.e., of the Greville abscissae) in the context of open knots implies large boundary knot intervals.
More recently, interesting contributions have been presented in \cite{Deng:2021,Hiemstra:2021}, where they exploit the imposition of suitable additional boundary conditions in a similar manner to the optimal spline spaces from \cite{Floater:2017,Floater:2018,Sande:2019}.
In \cite{Deng:2021} a penalization of high-order derivatives near the
boundary is proposed to remove the outliers
from the isogeometric approximation of the spectrum of the Laplace
operator. This penalization approach strongly mitigates the spurious frequencies. In \cite{Hiemstra:2021} the same high-order derivatives at the boundary are strongly set equal to zero, by using suitable spline subspaces as trial spaces. The approach is also tested for fourth-order operators. There is clear numerical evidence that the strong imposition of the additional boundary conditions removes the outliers without affecting the accuracy of the approximation for the global spectrum.
Both the approaches in \cite{Deng:2021,Hiemstra:2021} rely on the observation that the exact eigenfunctions satisfy additional homogeneous boundary conditions and on the intuition that adding such features of the exact solution in the discretization (in weak or strong form) would help in fixing the outlier issue. However, a theoretical foundation of the proposed procedures and a proper analysis of the approximation properties of the used spline subspaces and so of the accuracy of the whole process are missing.
The spline subspaces used in \cite{Hiemstra:2021} as trial spaces are not new in the literature. They are the ``reduced spline spaces'' considered in \cite[Section~5.2]{Sande:2020} (see also \cite{Sande:2019,Takacs:2016} for some special cases). Moreover, depending on the parity of the degree $p$, they coincide with optimal spline spaces (in the sense of Kolmogorov $n$-widths) investigated in \cite{Floater:2018}.
The theory of Kolmogorov $n$-widths is an interesting framework to examine approximation properties. It defines and gives a characterization of optimal $n$-dimensional spaces for approximating function classes and their associated norms \cite{Babuska:2002,Kolmogorov:36,Pinkus:85}.
Kolmogorov $n$-widths and optimal subspaces with respect to the $L^2$-norm were studied in \cite{Evans:2009} with the aim of (numerically) assessing the approximation properties of smooth splines in IgA.
In a recent sequence of papers \cite{Floater:2017,Floater:per,Floater:2018}, it has been proved that subspaces of smooth splines of any degree on uniform grids, identified by suitable boundary conditions, are optimal subspaces for $L^2$ Kolmogorov $n$-width problems for certain function classes of importance in IgA and FEA.
The results in \cite{Floater:2017,Floater:2018} were then applied in \cite{Bressan:2019} to show that, for uniform grids, $k$-refined spaces in IgA provide a better accuracy per degree of freedom than $C^0$ FEA and $C^{-1}$ discontinuous Galerkin spaces in almost all cases of practical~interest.
The theory of Kolmogorov $n$-widths and optimal subspaces is closely related to spectral analysis. Assume $A$ is a function class defined in terms of an integral operator $K$. Then, the space spanned by the first $n$ eigenfunctions of the self-adjoint operator $KK^*$ is an optimal subspace for $A$. This is naturally connected to a differential operator through the kernel of $KK^*$ being a Green's function.
By using this general framework, in \cite[Section~7]{Sande:2019} we analyzed how well the eigenfunctions of a given differential operator are approximated in $n$-dimensional optimal subspaces. In particular, for fixed dimension $n$,
we identified the optimal spline subspaces that converge in $L^2$-norm to spaces spanned by the first $n$ eigenfunctions of the Laplacian subject to different types of boundary conditions, as their degree $p$ increases.
Error estimates in $L^2$-norm for approximation in such (optimal) spline subspaces of functions belonging to $H^1$ and $H^1_0$ are provided in \cite{Sande:2020}.
In this paper, based on the results in \cite{Sande:2019,Sande:2020}, we propose a strategy for accurate outlier-free isogeometric Galerkin discretizations for the spectrum of the Laplacian with Dirichlet/Neumann/mixed boundary conditions in the univariate and in the multivariate tensor-product case; we also provide a theoretical analysis of its performance. More precisely,
\begin{itemize}
\item we discretize the eigenvalue problem in optimal spline subspaces identified in terms of vanishing high-order derivatives at the boundary, as suggested in \cite{Sande:2019,Sande:2020};
\item we provide error estimates for Ritz projectors in such optimal spline subspaces;
\item we exploit the above estimates to show that the considered Galerkin discretizations are outlier-free, without any loss of accuracy for the whole spectrum;
\item we produce explicit expressions of B-spline-like bases for the spline subspaces of interest to be used in practical simulations.
\end{itemize}
It turns out that our strategy is very similar (and actually identical in several cases) to the one proposed in \cite{Hiemstra:2021}. However, our path towards it is more theoretical
and allows us to equip the numerical process with a solid mathematical foundation.
A main question is to what extent the proposed outlier-free discretizations can be fruitfully used for addressing general problems with non-homogeneous boundary behavior. As stated above, the outlier-free discretizations are based on strong imposition of additional homogeneous boundary conditions for some derivatives up to a certain order which depends on the degree $p$. While these additional boundary conditions are intrinsically satisfied by the exact eigenfunctions we are dealing with, it is clear that this is not the case when considering the exact solution of a general (second-order) problem. A plain discretization in outlier-free spline subspaces in general leads to a substantial loss of approximation power compared with the corresponding full spline space and this discrepancy worsens as the spline degree increases.
To overcome this issue, for problems identified by sufficiently smooth data, we propose a suitable data-correction process for the missing boundary derivatives analogous to the classical reduction from non-homogeneous to homogeneous Dirichlet boundary conditions. When coupled with this boundary data correction, the discretization in outlier-free spline subspaces achieves full approximation order both in the univariate and in the multivariate case.
The remainder of the paper is divided into seven sections and is organized as follows. In Section~\ref{sec:eigenvalue-problem} we summarize the necessary notation and preliminaries on the eigenvalue problems of interest and their Galerkin discretizations. Section~\ref{sec:counting-outliers} provides theoretical upper bounds, as a function of the degree $p$, for the number of outliers when the discretization process is performed in the usual (full) spline spaces; these upper bounds almost perfectly match the number of outliers numerically observed.
In Section~\ref{sec:outlier-free} we present suitable spline subspaces and we prove that they ensure outlier-free discretizations while enjoying full accuracy. It turns out that these outlier-free subspaces are optimal spline spaces in the sense of the $L^2$ Kolmogorov $n$-widths.
Section~\ref{sec:general-BC} explains how to compensate the homogeneous high-order boundary derivatives --- which characterize the outlier-free spline subspaces --- for general problems identified by smooth data in order to maintain full accuracy in the complete discretization process.
An explicit construction of a B-spline-like basis for the considered outlier-free subspaces is described in Section~\ref{sec:bsplines} by exploiting the properties of cardinal B-splines.
Numerical tests validating the theoretical proposals are collected in Section~\ref{sec:numerics}. We conclude in Section~\ref{sec:conclusion} with some final remarks.
For a smoother reading of the paper, the technical details and the proofs of the error estimates used in Section~\ref{sec:outlier-free} are postponed to Appendix~\ref{Appendix:A}. The proofs of the error estimates used in Section~\ref{sec:general-BC} are postponed to Appendix~\ref{Appendix:B}.
Throughout the paper, for real-valued functions $f$ and $g$ we denote the norm and inner product on $L^2:=L^2(a,b)$ by
$$ \| f\|^2 := (f,f), \quad (f,g) := \int_a^b f(x) g(x)\,\mathrm{d} x, $$
and we consider the Sobolev spaces
$$ H^r:= H^r(a,b)=\{u\in L^2 : \, u^{(\alpha)} \in L^2(a,b),\, \alpha=1,\ldots,r\}. $$
For the sake of simplicity, we set $(a,b)=(0,1)$.
\section {Second-order eigenvalue problems}\label{sec:eigenvalue-problem}
We consider the second-order eigenvalue problem related to the Laplace operator in the univariate and in the multivariate tensor-product case.
\subsection{Univariate case}\label{sec:eigenvalue-problem-1D}
We consider the second-order equation
\begin{equation}
\label{eq:eigenvalue-equation-1D}
-u''= \omega^2 u, \quad \text{in } (0,1),
\end{equation}
and the following standard boundary conditions:
\begin{itemize}
\item Dirichlet boundary conditions (also referred to as fixed or type 0 boundary conditions),
\begin{equation}
\label{type-0-BC}
u(0)=u(1)=0;
\end{equation}
\item Neumann boundary conditions (also referred to as free or natural or type 1 boundary conditions),
\begin{equation}
\label{type-1-BC}
u'(0)=u'(1)=0;
\end{equation}
\item a combination of the previous ones (also referred to as mixed or type 2 boundary conditions),
\begin{equation}
\label{type-2-BC}
u(0)=u'(1)=0.
\end{equation}
\end{itemize}
The non-trivial exact solutions of \eqref{eq:eigenvalue-equation-1D} subject to one of the boundary conditions \eqref{type-0-BC}--\eqref{type-2-BC} form a countable family of trigonometric functions, respectively,
\begin{alignat}{3}
\label{eq:eig-Laplace-type-0-BC}
u_l(x) &:= \sin(\omega_l x),\quad &\omega_l &:= l \pi, \quad &l &= 1,2,\ldots \\
\label{eq:eig-Laplace-type-1-BC}
u_l(x) &:= \cos(\omega_l x),\quad &\omega_l &:= l \pi, \quad &l &= 0,1,2,\ldots \\
\label{eq:eig-Laplace-type-2-BC}
u_l(x) &:= \sin(\omega_l x),\quad &\omega_l &:= (l-1/2)\pi, \quad &l &=1,2,\ldots
\end{alignat}
For the sake of brevity and simplicity of presentation, in the following we will focus on Dirichlet boundary conditions; the treatment of the other cases is completely analogous. Therefore, we will focus on the problem
\begin{equation}\label{eq:prob-eigenv-1D}
\left\{ \begin{aligned}
-u'' &= \omega^2 u, \quad \text{in } (0,1), \\
u(0) &=0, \quad u(1) = 0,
\end{aligned} \right.
\end{equation}
whose non-trivial exact solutions are given in \eqref{eq:eig-Laplace-type-0-BC}.
The weak form of problem \eqref{eq:prob-eigenv-1D} reads as follows: find non-trivial $u\in H^1_0(0,1)$ and $\omega^2\in \mathbb{R}$ such that
$$ (u',v') = \omega^2 (u,v), \quad \forall v \in H^1_0(0,1). $$
According to the Galerkin approach, we choose a finite dimensional subspace $\mathbb{V}_h$ of $H^1_0(0,1)$ spanned by the basis $\{\varphi_1,\ldots,\varphi_{n_h}\}$ and
we find approximate values $\omega_h$ to $\omega$ by solving
\begin{equation} \label{eq:eig-problem-galerkin}
S_h \textbf{u}_h = (\omega_h)^2 M_h \textbf{u}_h,
\end{equation}
where the matrices $S_h$ and $M_h$ consist of the elements
$$
S_{h,i,j}:= \int_0^1 \varphi'_j(s)\varphi'_i(s)\,\mathrm{d} s,\quad
M_{h,i,j} := \int_0^1 \varphi_j(s) \varphi_i(s)\,\mathrm{d} s,\quad
i,j=1,\ldots,n_h.
$$
This means that each $(\omega_h)^2$ is an eigenvalue of the matrix $ M_h^{-1} S_h$.
Then, for $l=1, \ldots, {n_h}$, an approximation of the frequency $\omega_l$ is obtained by considering
the square root of the $l$-th eigenvalue of $M_h^{-1} S_h$, denoted by $\omega_{h,l}$. Here we assume that those eigenvalues are given in increasing order.
Similarly, for $l=1, \ldots, {n_h}$, an approximation of the eigenfunction $u_l$ is obtained by considering
\begin{equation}
\label{eq:approx-eigenfun}
u_{h,l}:=\sum_{i=1}^{n_h} u_{h,l,i}\varphi_i,
\end{equation}
where
$\textbf{u}_{h,l}:=(u_{h,l,1},\ldots, u_{h,l,{n_h}})$ is the $l$-th eigenvector of $M_h^{-1} S_h$.
Of course, a proper normalization is needed.
More information on this eigenvalue problem can be found in \cite{Boffi:2010}.
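To make the matrix formulation above concrete, the following minimal Python sketch (ours, for illustration only) assembles the exact stiffness and mass matrices for the lowest-degree case $p=1$, i.e., hat functions on a uniform grid with Dirichlet boundary conditions, and solves the generalized eigenproblem \eqref{eq:eig-problem-galerkin}; higher-degree B-spline matrices would additionally require numerical quadrature.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

n_el = 100                      # uniform elements, h = 1/n_el
h    = 1.0 / n_el
n_h  = n_el - 1                 # dim of the p = 1 Dirichlet spline space

# Exact stiffness and mass matrices for hat functions.
S = (1.0/h)*(2*np.eye(n_h) - np.eye(n_h, k=1) - np.eye(n_h, k=-1))
M = (h/6.0)*(4*np.eye(n_h) + np.eye(n_h, k=1) + np.eye(n_h, k=-1))

lam, U = eigh(S, M)             # S u = (omega_h)^2 M u, ascending order
omega_h = np.sqrt(lam)
omega   = np.pi*np.arange(1, n_h + 1)
print(omega_h[:3] / omega[:3])  # acoustical branch: ratios close to 1
print(omega_h[-1] / omega[-1])  # upper end of the discrete spectrum
\end{verbatim}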
We are interested in selecting the discretization spaces such that all the first $n_h$ eigenfunctions and eigenvalues are approximated well.
In this perspective we will choose the approximation space as a proper subspace of maximally smooth spline spaces on the unit interval.
\subsection{Multivariate tensor-product case}
We now consider the eigenvalue problem related to the Laplace operator in the unit cube of $\mathbb{R}^d$, namely
\begin{equation}
\label{eq:eigenvalue-equation}
-\Delta u= \omega^2 u, \quad \text{in } (0,1)^d,
\end{equation}
subject to homogeneous Dirichlet/Neumann/mixed boundary conditions similar to the ones discussed in the univariate case.
It is easy to verify that the $d$-variate eigenvalues and eigenfunctions are given by, respectively, the sum and the product of the corresponding univariate ones.
Again, for the sake of brevity and simplicity of presentation, in the following we will focus on Dirichlet boundary conditions and $d=2$; the treatment of the other cases is completely analogous. Therefore, we will focus on the problem
\begin{equation}\label{eq:prob-eigenv}
\left\{ \begin{aligned}
-\Delta u&= \omega^2 u, \quad \text{in } (0,1)^2, \\
u_{|\partial \Omega} &= 0,
\end{aligned} \right.
\end{equation}
whose non-trivial exact solutions are given by
\begin{equation}\label{eq:eig-Laplace2D-type-0-BC}
u_{l_1,l_2}(x_1,x_2):=\sin(l_1\pi x_1)\sin(l_2\pi x_2), \quad (\omega_{l_1,l_2})^2:=(l_1\pi)^2+(l_2\pi)^2, \quad l_1,l_2=1,2, \ldots
\end{equation}
Note that
\begin{equation}
\label{eq:eig-Laplace2D}
u_{l_1,l_2}(x_1,x_2)=u_{l_1}(x_1)u_{l_2}(x_2), \quad
(\omega_{l_1,l_2})^2=(\omega_{l_1})^2+(\omega_{l_2})^2, \quad l_1,l_2=1,2, \ldots,
\end{equation}
where $u_{l_1},u_{l_2}$ and $\omega_{l_1},\omega_{l_2}$ are the univariate counterparts defined in \eqref{eq:eig-Laplace-type-0-BC}.
In order to discretize problem \eqref{eq:prob-eigenv}, it is natural to consider a finite-dimensional tensor-product discretization space
$\mathbb{V}_{h_1,h_2}:=\mathbb{V}_{h_1}\otimes\mathbb{V}_{h_2}$.
Then, the Galerkin method amounts to solving the following problem: find $\textbf{u}_{h_1,h_2}$ and $\lambda_{h_1,h_2}$ such that
$$
(S_{h_1}\otimes M_{h_2}+M_{h_1}\otimes S_{h_2})\textbf{u}_{h_1,h_2}=\lambda_{h_1,h_2} (M_{h_1}\otimes M_{h_2})\textbf{u}_{h_1,h_2},
$$
where $S_h$ and $M_h$ are univariate stiffness and mass matrices, respectively, defined as in \eqref{eq:eig-problem-galerkin}.
It is known that, using the so-called fast diagonalization method \cite{Horn:2013,Sangalli:2016},
the corresponding eigenfunctions and eigenvalues can be represented as
$$
U_{h_1}\otimes U_{h_2},
\quad
\Lambda_{h_1}\otimes I_{h_2} + I_{h_1}\otimes \Lambda_{h_2},
$$
respectively, where $U_h$ is the matrix whose columns are the eigenvectors associated with the univariate discretization \eqref{eq:eig-problem-galerkin}, $\Lambda_h$ is the diagonal matrix of the corresponding eigenvalues, and $I_h$ is the identity matrix of the same size as $\Lambda_h$. Thus, our approximations take the form
\begin{equation}\label{eq:approx-eig-Laplace2D}
u_{h_1,h_2,l_1,l_2}(x_1,x_2) = u_{h_1,l_1}(x_1)u_{h_2,l_2}(x_2),
\quad
(\omega_{h_1,h_2,l_1,l_2})^2=(\omega_{h_1,l_1})^2+(\omega_{h_2,l_2})^2,
\end{equation}
where $u_{h_1,l_1},u_{h_2,l_2}$ and $\omega_{h_1,l_1},\omega_{h_2,l_2}$ are the univariate counterparts described in Section~\ref{sec:eigenvalue-problem-1D}, very similar to \eqref{eq:eig-Laplace2D}.
The above decomposition not only reduces the computational cost of computing the discrete spectrum but also provides a natural and safe way to match the approximate eigenfunctions/eigenvalues with the exact ones. This was also observed in \cite{Hiemstra:2021}.
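As an illustration of the fast diagonalization idea, the sketch below (Python, again with the $p=1$ matrices used for brevity) assembles the two-dimensional spectrum from the univariate eigendecompositions alone, without ever forming the Kronecker-product matrices.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def univariate_eigs(n_el):
    h, n = 1.0/n_el, n_el - 1
    S = (1.0/h)*(2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    M = (h/6.0)*(4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    return eigh(S, M)                      # Lambda_h (ascending), U_h

lam1, U1 = univariate_eigs(40)
lam2, U2 = univariate_eigs(40)

# Fast diagonalization: eigenvalues are all sums lam1[l1] + lam2[l2];
# the eigenvector for (l1, l2) is kron(U1[:, l1], U2[:, l2]).
lam2d = (lam1[:, None] + lam2[None, :]).ravel()
idx   = np.argsort(lam2d)                  # match pairs (l1, l2) to the spectrum
print(np.sqrt(lam2d[idx][:4]) / np.pi)     # approx sqrt(l1^2 + l2^2)
\end{verbatim}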
\section{Maximally smooth spline approximations and outliers}
\label{sec:counting-outliers}
Suppose ${\boldsymbol \tau} := (\tau_0,\ldots,\tau_{{n_\mathrm{el}}})$ is a sequence of (break) points that partition the interval $[0,1]$ in ${n_\mathrm{el}}$ elements, i.e.,
\begin{equation*}
0=:\tau_0 < \tau_1 < \cdots < \tau_{{n_\mathrm{el}}-1} < \tau_{{n_\mathrm{el}}}:= 1,
\end{equation*}
and let
$I_j := [\tau_{j-1},\tau_{j})$,
$j=1,\ldots,{n_\mathrm{el}}-1$, and $I_{n_\mathrm{el}} := [\tau_{{n_\mathrm{el}}-1},\tau_{{n_\mathrm{el}}}]$.
Let $\mathbb{P}_p$ be the space of polynomials of
degree at most $p$. For $0\leq k\leq p-1$, we define the space $\mathbb{S}^k_{p,{\boldsymbol \tau}}$ of splines of degree $p$ and smoothness $k$ by
$$ \mathbb{S}^k_{p,{\boldsymbol \tau} } := \{s \in C^{k}[0,1] : s|_{I_j} \in \mathbb{P}_p,\, j=1,\ldots,{n_\mathrm{el}} \}, $$
and we set
\begin{equation}
\label{eq:spline-max-smooth}
\mathbb{S}_{p,{\boldsymbol \tau}} := \mathbb{S}^{p-1}_{p,{\boldsymbol \tau}}.
\end{equation}
From classical spline approximation theory we know that for any $u\in H^{r}$ and any ${\boldsymbol \tau}$ there exists $s_p\in \mathbb{S}^k_{p,{\boldsymbol \tau}}$
such that
\begin{equation} \label{eq:classical-err-est}
\|(u-s_p)^{(\ell)} \|\leq C(p,k,\ell,r)h^{r-\ell} \| u^{(r)} \|, \quad 0\leq \ell \leq r\leq p+1, \quad \ell\leq k+1\leq p,
\end{equation}
where
\begin{equation*}
h:=\max_{j=1,\ldots,{n_\mathrm{el}}} h_j, \quad h_j:=\tau_{j}-\tau_{j-1}.
\end{equation*}
The above estimates can be generalized to any $L^q$-norm; see, e.g., \cite{Lyche:18,Schumaker2007}.
In the important case of maximally smooth splines ($k=p-1$), the constant in \eqref{eq:classical-err-est} admits simple explicit expressions. In particular, it has been proved in \cite[Theorems~1.1 and~3.1]{Sande:2019} that
for any $u\in H^{r}$ and any ${\boldsymbol \tau}$ there exists $s_p\in \mathbb{S}_{p,{\boldsymbol \tau}}$ such that
\begin{align*}
\| u-s_p \|&\leq \left(\frac {h}{\pi}\right)^{r} \| u^{(r)} \|,
\\
\|(u-s_p)' \|&\leq \left(\frac {h}{\pi}\right)^{r-1} \|u^{(r)} \|,
\end{align*}
for all $p\geq \max\{r-1,1\}$.
More generally, the results in \cite[Theorem~3 and Lemma~2]{Sande:2021} ensure that the above inequalities still hold if we want to approximate $u$ by a spline enjoying the same boundary conditions as $u$. This is summarized in the following theorem.
\begin{theorem}\label{thm:Qspline}
Let $u\in H^r(0,1)$ be given.
For any ${\boldsymbol \tau}$ and $q=0,\ldots,\min\{p,r\}$ there exists a projector $Q_p^{q}$ onto $\mathbb{S}_{p,{\boldsymbol \tau}}$ such that
\begin{equation*}
((Q_p^qu)^{(q)}, v^{(q)})=(u^{(q)}, v^{(q)}), \quad \forall v\in \mathbb{S}_{p,{\boldsymbol \tau}},
\end{equation*}
\begin{equation*}
(Q_p^qu)^{(\ell)}(0)= u^{(\ell)}(0), \quad (Q_p^qu)^{(\ell)}(1)= u^{(\ell)}(1), \quad \ell=0,\ldots,q-1,
\end{equation*}
and
\begin{equation*}
\|(u-Q^{q}_pu)^{(\ell)}\| \leq \left(\frac {h}{\pi}\right)^{r-\ell}\|u^{(r)}\|, \quad \ell=0,\ldots,q,
\end{equation*}
for all $p\geq \max\{r-1,2q-1\}$.
\end{theorem}
In our context we are interested in the case $q=1$ in the previous theorem. In this case, from \cite[Section~3]{Sande:2021} we also know the stability estimate
\begin{equation}
\label{eq:stab}
\|(Q_p^1u)'\|\leq \|u'\|, \quad u\in H^1(0,1).
\end{equation}
Natural discretization spaces for \eqref{eq:prob-eigenv-1D} are the subspaces
of $H_0^1(0,1)$ given by maximally smooth splines vanishing at the two ends of the interval, i.e.,
\begin{equation}
\label{eq:space-BC}
\mathbb{S}_{p,{\boldsymbol \tau},0}^0:=\{s\in \mathbb{S}_{p,{\boldsymbol \tau}}: s(0)=s(1)=0 \},
\end{equation}
for $p\geq1$ and ${n_\mathrm{el}}>2-p$.
For such space we have
$$n_h=\dim(\mathbb{S}_{p,{\boldsymbol \tau},0}^0)={n_\mathrm{el}}+p-2.$$
From \cite[Section~8]{Boffi:2010} or \cite[Chapter~6]{Strang:2008} we know that the error between the eigenfunction $u_l$ in \eqref{eq:eig-Laplace-type-0-BC} and its approximation $u_{h,l}$ given by the Galerkin method in \eqref{eq:approx-eigenfun} can be bounded by the error between $u_l$ and $Q_p^1u_l$ defined in Theorem~\ref{thm:Qspline}. Since $u_l\in C^\infty$, from Theorem~\ref{thm:Qspline} with $r=p+1$ we deduce
\begin{equation*}
\|u_l- u_{h,l}\| \leq C_l \left(\frac {h}{\pi}\right)^{p+1}\|u_l^{(p+1)}\|
=C_l (hl)^{p+1},
\end{equation*}
for some constant $C_l$.
Therefore, convergence in $p$ of the approximate eigenfunction $u_{h,l}$ is ensured whenever
$$
hl<1.
$$
For fixed dimension of the approximation space, i.e., for fixed ${n_\mathrm{el}}$ and $p$, the value of $h$ is minimized when the break points are uniformly distributed,
\begin{equation}\label{eq:knots-uniform}
\tau_i=\frac{i}{{n_\mathrm{el}}}, \quad i=0,\ldots,{n_\mathrm{el}},
\end{equation}
and so
$$h=\frac{1}{{n_\mathrm{el}}}.$$
Thus, under the assumption of uniform grid spacing, convergence in $p$ of the approximate eigenfunction $u_{h,l}$ to the exact eigenfunction $u_l$ is ensured for
$$ l=1, \ldots, {n_\mathrm{el}}-1.$$
The arguments in \cite[Section~8]{Boffi:2010}, see also Remark~\ref{rmk:stab} (in Appendix~\ref{Appendix:A}) and \eqref{eq:stab}, show that convergence is ensured also for the corresponding eigenvalues.
A similar discussion can be carried out for natural and mixed boundary conditions. For the former the natural discretization space is the full spline space $\mathbb{S}_{p,{\boldsymbol \tau}}$,
while for the latter the space
\begin{equation}
\label{eq:space-left}
\{s\in \mathbb{S}_{p,{\boldsymbol \tau}}: s(0)=0 \}
\end{equation}
has to be considered. Taking into account that the dimension of such spaces amounts to ${n_\mathrm{el}}+p$ and ${n_\mathrm{el}}+p-1$, respectively, and keeping in mind the expressions of the eigenfunctions in \eqref{eq:eig-Laplace-type-1-BC} and \eqref{eq:eig-Laplace-type-2-BC}, we can summarize the above discussion as follows.
\medskip
\begin{tcolorbox}
Consider problem \eqref{eq:eigenvalue-equation-1D} with boundary conditions \eqref{type-0-BC}, \eqref{type-1-BC} or \eqref{type-2-BC}.
Let $\mathbb{V}_{h}$ be the spline space \eqref{eq:space-BC}, \eqref{eq:spline-max-smooth} or \eqref{eq:space-left}, respectively, with ${\boldsymbol \tau}$ defined as in \eqref{eq:knots-uniform}.
The approximations of the eigenvalues obtained by
finding $u\in \mathbb{V}_{h}$ and $\omega^2\in\mathbb{R}$ such that
\begin{equation*}
( u', v') = \omega^2 (u, v), \quad \forall v\in \mathbb{V}_{h},
\end{equation*}
incur at most the following number of outliers:
\begin{itemize}
\item $p-1$ for Dirichlet boundary conditions;
\item $p$ for Neumann boundary conditions;
\item $p-1$ for mixed boundary conditions.
\end{itemize}
\end{tcolorbox}
\begin{remark}
According to \cite[Table~3]{Hiemstra:2021} the number of outliers observed numerically is
\begin{itemize}
\item $2\lfloor \frac {p-1}{2}\rfloor$ for Dirichlet boundary conditions;
\item $2\lfloor \frac {p}{2}\rfloor$ for Neumann boundary conditions;
\item $ p-1$ for mixed boundary conditions.
\end{itemize}
Our theoretical upper bounds on the number of outliers are closely related to the numerically observed ones and exhibit an exact match in the majority of the cases.
\end{remark}
Outlier-free discretizations can be achieved by identifying spline subspaces of dimension $n_h$ that ensure $L^2$ and $H^1$ convergence in $p$ for the approximations of the first $n_h$ eigenfunctions of the problem we are dealing with.
In the next section we propose a solution to this problem based on optimal spline spaces.
\section{Optimal spline spaces have no outliers}
\label{sec:outlier-free}
In this section we extend the results of \cite[Section 7]{Sande:2019} to prove that there are no outliers in the Galerkin eigenvalue approximation for the Laplacian with various boundary conditions when using spline subspaces that are optimal with respect to the Kolmogorov $n$-width in $L^2$-norm.
We denote by $X$ the $L^2$-projector onto a finite dimensional subspace $\mathbb{X}$ of $L^2$.
For a subset $A$ of $L^2$, let
$$ E(A, \mathbb{X}) := \sup_{u \in A} \|u-Xu\| $$
be the distance to $A$ from $\mathbb{X}$ relative to the $L^2$-norm.
Then, the Kolmogorov $L^2$ $n$-width
of $A$ is defined by
$$ d_n(A) := \inf_{\substack{\mathbb{X}\subset L^2\\ \dim \mathbb{X}=n}} E(A, \mathbb{X}). $$
If $\mathbb{X}$ has dimension at most $n$ and satisfies
\begin{equation*}
d_n(A) = E(A, \mathbb{X}),
\end{equation*}
then we call $\mathbb{X}$ an \emph{optimal} subspace for $d_n(A)$.
\begin{example}
Let $A=\{u\in H^r : \|u^{(r)}\|\leq 1\}$.
Then, by considering $u/\|u^{(r)}\|$ for functions $u\in H^r$, we have, for any subspace $\mathbb{X}$ of $L^2$, the sharp estimate
\begin{equation*}
\|u-Xu\|\leq E(A, \mathbb{X})\|u^{(r)}\|.
\end{equation*}
Here $E(A, \mathbb{X})$ is the smallest possible constant for the subspace $\mathbb{X}$.
Moreover, if $\mathbb{X}$ is optimal for the $n$-width of $A$, then
\begin{equation}\label{ineq:optimal}
\|u-Xu\|\leq d_n(A)\|u^{(r)}\|,
\end{equation}
and $d_n(A)$ is the smallest possible constant over all $n$-dimensional subspaces $\mathbb{X}$.
\end{example}
For all $p\geq r-1$, let us consider the function classes
\begin{equation}\label{eq:Hr}
\begin{aligned}
H^r_0&:=\{u\in H^r :\, u^{(\alpha)}(0)=u^{(\alpha)}(1)=0,\ \ 0\leq \alpha<r,\ \ \alpha \text{ even}\},
\\
H^r_1&:=\{u\in H^r :\, u^{(\alpha)}(0)=u^{(\alpha)}(1)=0,\ \ 0\leq \alpha<r,\ \ \alpha \text{ odd}\},
\\
H^r_2&:=\{u\in H^r :\, u^{(\alpha_0)}(0)=u^{(\alpha_1)}(1)=0,\ \ 0\leq \alpha_0,\alpha_1<r,\ \
\alpha_0 \text{ even}, \ \ \alpha_1 \text{ odd}\},
\end{aligned}
\end{equation}
and
\begin{equation*}
\begin{aligned}
A^r_0&:=\{u\in H^r_0:\, \|u^{(r)}\|\leq 1\},
\\
A^r_1&:=\{u\in H^r_1:\, \|u^{(r)}\|\leq 1\},
\\
A^r_2&:=\{u\in H^r_2:\, \|u^{(r)}\|\leq 1\}.
\end{aligned}
\end{equation*}
By using the representation of these function classes in terms of repeated applications of suitable integral operators, it has been shown \cite{Pinkus:85} that the $n$-dimensional space
spanned by the first $n$ eigenfunctions of the Laplacian satisfying the boundary conditions \eqref{type-0-BC}--\eqref{type-2-BC} (i.e., the first $n$ functions in each of the sequences \eqref{eq:eig-Laplace-type-0-BC}--\eqref{eq:eig-Laplace-type-2-BC}) is optimal for $A^r_i$, $i=0,1,2$, respectively.
Moreover,
\begin{equation} \label{eq:n-width}
d_n(A^r_0)=\left(\frac{1}{(n+1)\pi}\right)^r,
\quad
d_n(A^r_1)=\left(\frac{1}{n\pi}\right)^r,
\quad
d_n(A^r_2)=\left(\frac{2}{(2n+1)\pi}\right)^r.
\end{equation}
For $0\leq \ell\leq p$ let us now consider
the following subspaces of $\mathbb{S}_{p,{\boldsymbol \tau}}$ identified by certain derivatives vanishing at the boundary:
\begin{equation} \label{eq:allS}
\begin{aligned}
\mathbb{S}_{p,{\boldsymbol \tau},0}^\ell &:= \{s\in \mathbb{S}_{p,{\boldsymbol \tau}} :\, s^{(\alpha)}(0)=s^{(\alpha)}(1)=0,\ \ 0\leq \alpha\leq \ell,\ \ \alpha \text{ even}\}, \\
\mathbb{S}_{p,{\boldsymbol \tau},1}^\ell &:= \{s\in \mathbb{S}_{p,{\boldsymbol \tau}} :\, s^{(\alpha)}(0)=s^{(\alpha)}(1)=0,\ \ 0\leq \alpha\leq\ell,\ \ \alpha \text{ odd}\}, \\
\mathbb{S}_{p,{\boldsymbol \tau},2}^\ell&:= \{s\in \mathbb{S}_{p,{\boldsymbol \tau}} :\, s^{(\alpha_0)}(0)= s^{(\alpha_1)}(1)=0,\ \ 0\leq \alpha_0,\alpha_1\leq \ell, \ \ \alpha_0 \text{ even}, \ \ \alpha_1 \text{ odd}\}.
\end{aligned}
\end{equation}
With the aim of constructing optimal spline spaces we focus on the special (degree-dependent) sequences of break points ${\boldsymbol \tau}_{p,n,i}^\mathrm{opt}$, $i=0,1,2$, where
\begin{equation*}
\begin{aligned}
{\boldsymbol \tau}_{p,n,0}^\mathrm{opt} &:= \begin{cases}
\left(0,\frac{1\vphantom{1/2}}{n+1},\frac{2}{n+1},\ldots,\frac{n}{n+1},1\right),\quad\ &p \text{ odd},\\[0.2cm]
\left(0,\frac{1/2}{n+1},\frac{3/2}{n+1},\ldots,\frac{n+1/2}{n+1},1\right),\quad\ &p \text{ even},
\end{cases}\\[0.2cm]
{\boldsymbol \tau}_{p,n,1}^\mathrm{opt} &:= \begin{cases}
\left(0,\frac{1/2}{n},\frac{3/2}{n},\ldots,\frac{n-1/2}{n},1\right),\qquad &p \text{ odd},\\[0.2cm]
\left(0,\frac{1\vphantom{1/2}}{n},\frac{2}{n},\ldots,\frac{n-1}{n},1\right),\qquad &p \text{ even},
\end{cases}\\[0.2cm]
{\boldsymbol \tau}_{p,n,2}^\mathrm{opt} &:= \begin{cases}
\left(0,\frac{2\vphantom{1/2}}{2n+1},\frac{4}{2n+1},\ldots,\frac{2n}{2n+1},1\right),\quad &p \text{ odd},\\[0.2cm]
\left(0,\frac{1\vphantom{1/2}}{2n+1},\frac{3}{2n+1},\ldots,\frac{2n-1}{2n+1},1\right),\quad &p \text{ even},
\end{cases}
\end{aligned}
\end{equation*}
and the spaces
\begin{equation}
\label{eq:opt-spaces}
\mathbb{S}_{p,n,i}^\mathrm{opt}:=\mathbb{S}_{p,{\boldsymbol \tau}_{p,n,i}^\mathrm{opt},i}^p, \quad i=0,1,2.
\end{equation}
We also define the corresponding maximum grid spacings
$$
h_{p,n,0}^\mathrm{opt}:=\frac{1}{n+1}, \quad h_{p,n,1}^\mathrm{opt}:=\frac{1}{n}, \quad h_{p,n,2}^\mathrm{opt}:=\frac{2}{2n+1}.
$$
Note that all the spaces $\mathbb{S}_{p,n,i}^\mathrm{opt}$, $i=0,1,2$ have dimension $n$ and from \eqref{eq:n-width} we get
\begin{equation}
\label{eq:h-nwidths}
d_n(A_i^r)=\left(\frac{ h_{p,n,i}^\mathrm{opt}}{\pi}\right)^r, \quad i=0,1,2.
\end{equation}
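For the reader's convenience, the break-point sequences above translate directly into code; the following Python helper (ours) returns ${\boldsymbol \tau}_{p,n,i}^\mathrm{opt}$ for the three types of boundary conditions, where only the parity of $p$ matters.
\begin{verbatim}
import numpy as np

def tau_opt(p, n, i):
    """Break points tau_{p,n,i}^opt, i = 0 (Dirichlet), 1 (Neumann), 2 (mixed)."""
    odd = (p % 2 == 1)
    if i == 0:
        inner = (np.arange(1, n+1) if odd else np.arange(n+1) + 0.5) / (n + 1)
    elif i == 1:
        inner = ((np.arange(n) + 0.5) if odd else np.arange(1, n)) / n
    else:
        inner = (2*np.arange(1, n+1) - (0 if odd else 1)) / (2*n + 1)
    return np.concatenate(([0.0], inner, [1.0]))

print(tau_opt(3, 4, 0))   # (0, 1/5, 2/5, 3/5, 4/5, 1)
print(tau_opt(2, 4, 0))   # (0, 0.1, 0.3, 0.5, 0.7, 0.9, 1)
\end{verbatim}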
Actually, it was shown in \cite[Theorem~2]{Floater:2018} that for all $p\geq r-1$ and $r\geq 1$ the spline spaces
$\mathbb{S}_{p,n,i}^\mathrm{opt}$ are optimal for the function classes $ A^r_i$, $ i=0,1,2$, respectively.
Therefore, \eqref{ineq:optimal} and \eqref{eq:h-nwidths} immediately give
the following result.
\begin{theorem}
\label{thm:L2-error}
Let $S_{p,n,i}^\mathrm{opt}$ be the $L^2$-projector onto
$\mathbb{S}_{p,n,i}^\mathrm{opt}$, $i=0,1,2$. Then, for $u\in H^r_i$ and $p\geq r-1$ we have
\begin{equation*}
\|u-S_{p,n,i}^\mathrm{opt} u\| \leq \left(\frac{h_{p,n,i}^\mathrm{opt}}{\pi}\right)^{r}\|u^{(r)}\|.
\end{equation*}
\end{theorem}
In particular, we have convergence in $p$ in $L^2$-norm of the $L^2$-projections of the functions
\begin{equation*}
\begin{gathered}
\{\sin(\pi x), \sin(2\pi x), \ldots,\sin(n\pi x)\},\\
\{1,\cos(\pi x), \ldots,\cos((n-1)\pi x)\},\\
\{\sin((1/2)\pi x), \sin((3/2)\pi x), \ldots, \sin((n-1/2)\pi x)\}
\end{gathered}
\end{equation*}
onto $\mathbb{S}_{p,n,i}^\mathrm{opt}$, $i=0,1,2$, respectively; see also \cite{Sande:2019}.\pagebreak
To establish convergence of the standard Galerkin approximation we need the following theorem about error estimates for derivatives, analogous to Theorem~\ref{thm:L2-error}. It generalizes the results proved in \cite[Theorem~4.1]{Sande:2019} for the less technical periodic case to the subspaces $ \mathbb{S}_{p,n,i}^\mathrm{opt}$, $i=0,1,2$.
\begin{theorem}
\label{thm:error-der}
Let $u\in H^r_i$, $i=0,1,2$, with $r\geq 1$, be given. Then, for all $p\geq \max\{r-1,1\}$ there exists $R_{p,n,i}^\mathrm{opt} u\in\mathbb{S}_{p,n,i}^\mathrm{opt}$ such that
\begin{equation*}
\|(u-R_{p,n,i}^\mathrm{opt} u)^{(\ell)}\| \leq \left(\frac{h_{p,n,i}^\mathrm{opt}}{\pi}\right)^{r-\ell}\|u^{(r)}\|, \quad \ell=0,1.
\end{equation*}
\end{theorem}
\begin{proof}
The result follows from Example~\ref{ex:our-classes} and Proposition~\ref{prop:err} (in Appendix~\ref{Appendix:A}) with
\begin{equation*}
\begin{aligned}
R_{p,n,0}^\mathrm{opt} :=&R_{\mathbb{Y}_p}, &&\mathbb{S}_{p,n,0}^\mathrm{opt} =\mathbb{Y}_p,
\\
R_{p,n,1}^\mathrm{opt} :=&P_0\oplus R_{\mathbb{X}_p},\ \ &&\mathbb{S}_{p,n,1}^\mathrm{opt}=\mathbb{P}_0\oplus\mathbb{X}_p,
\\
R_{p,n,2}^\mathrm{opt} :=&R_{\mathbb{X}_p}, &&\mathbb{S}_{p,n,2}^\mathrm{opt}=\mathbb{X}_p,
\end{aligned}
\end{equation*}
taking into account \eqref{eq:h-nwidths} and \eqref{eq:nwidth-r}.
Here $P_0$ stands for the $L^2$-projector onto $\mathbb{P}_0$.
\end{proof}
Then, Corollaries~\ref{cor:Strang} and \ref{cor:galerkin-A} (in Appendix~\ref{Appendix:A}) imply the following result.
\begin{proposition}\label{pro:error-eigvals}
For any $l=1,\dots,n$, let $u_l,\omega_l$ be an exact solution of problem \eqref{eq:eigenvalue-equation-1D} with boundary conditions \eqref{type-0-BC}, \eqref{type-1-BC} or \eqref{type-2-BC},
and let $u_{h,l},\omega_{h,l}$ be their approximation
obtained by
finding $u\in \mathbb{S}_{p,n,i}^\mathrm{opt}$ and $\omega^2\in\mathbb{R}$ such that
\begin{equation*}
( u', v') = \omega^2 (u, v), \quad \forall v\in \mathbb{S}_{p,n,i}^\mathrm{opt},
\end{equation*}
for $i=0,1,2$, respectively. Then,
\begin{equation}\label{ineq:eigvals}
\omega_l \leq \omega_{h,l} \leq \frac{\omega_l}{1-\left(\frac{\omega_l}{\omega_{n+1}}\right)^{p+1}}.
\end{equation}
Moreover,
\begin{equation*}
\frac{\|u_l-u_{h,l}\|}{\|u_l\|} \leq 2(1+\rho_l) \left(\frac{\omega_l}{\omega_{n+1}}\right)^{p+1},
\end{equation*}
where $\|u_l\|=\|u_{h,l}\|$, $(u_l,u_{h,l})>0$ and
\begin{equation*}
\rho_l := \max_{\substack{i=1,\ldots,n \\ i\neq l}} \frac{\omega^2_l}{|\omega^2_l - \omega^2_{h,i}|}
\end{equation*}
is the $l$-th separation constant.
\end{proposition}
Observe that the error estimate for the approximation of the exact frequencies $\omega_l$ by the $\omega_{h,l}$ in \eqref{ineq:eigvals} only depends on the value of the exact frequencies $\omega_l$ and $\omega_{n+1}$. Since $\omega_n<\omega_{n+1}$, we have that $\omega_{h,l}\to\omega_l$ for all $l =1,\ldots,n$ as the degree $p$ of the optimal spline spaces increases. Moreover, $\omega_{n+1}<\omega_{n+2}<\dots$, so for a fixed index $l$ we have that $\omega_{h,l}\to\omega_l$ as the maximum grid spacing $h$ decreases, since $h\to 0$ is equivalent to $n\to\infty$.
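Since the bound \eqref{ineq:eigvals} is fully explicit, it can be evaluated directly. The short sketch below (Python) tabulates, for Dirichlet boundary conditions where $\omega_l=l\pi$ and $\omega_{n+1}=(n+1)\pi$, the worst-case factor $(1-(l/(n+1))^{p+1})^{-1}$ attained at the last mode $l=n$; its decay towards $1$ as $p$ grows reflects the convergence of the whole discrete spectrum.
\begin{verbatim}
import numpy as np

n = 20
l = np.arange(1, n+1)
for p in (5, 10, 20, 40):
    bound = 1.0/(1.0 - (l/(n+1))**(p+1))   # omega_{h,l}/omega_l <= bound
    print("p=%2d  worst mode (l=n): %.3f" % (p, bound[-1]))
\end{verbatim}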
The error of the eigenfunctions in $H^1$-seminorm can be deduced from Proposition~\ref{pro:error-eigvals} and the so-called Pythagorean eigenvalue error theorem \cite[page~233]{Strang:2008}:
\begin{equation*}
\frac{\|(u_l-u_{h,l})'\|^2}{\| u'_l\|^2}= \frac{\|u_l-u_{h,l}\|^2}{\|u_l\|^2} + \frac{\omega^2_{h,l}-\omega^2_l}{\omega^2_{l}},
\end{equation*}
where $\|u_l\|=\|u_{h,l}\|$ and $(u_l,u_{h,l})>0$.\pagebreak
We can summarize the above results as follows.
\medskip
\begin{tcolorbox}
Consider problem \eqref{eq:eigenvalue-equation-1D} with boundary conditions \eqref{type-0-BC}, \eqref{type-1-BC} or \eqref{type-2-BC}.
The approximations of the eigenvalues obtained by finding $u\in \mathbb{S}_{p,n,i}^\mathrm{opt}$ and $\omega^2\in\mathbb{R}$ such that
\begin{equation*}
( u', v') = \omega^2 (u, v), \quad \forall v\in \mathbb{S}_{p,n,i}^\mathrm{opt},
\end{equation*}
for $i=0,1,2$, respectively, have no outliers.
\end{tcolorbox}
\begin{remark}\label{rmk:other-reduced-space}
The subspaces $\mathbb{S}_{p,{\boldsymbol \tau},i}^{p-1}$, $i=0,1$, introduced for uniform knot sequences in \cite{Sogn:2018,Takacs:2016} and further analyzed in \cite[Section~5.2]{Sande:2020}, were considered for outlier removal in \cite{Hiemstra:2021}. In Section~\ref{sec:numerics} we will also numerically illustrate their performance with respect to outliers.
Observe that $\mathbb{S}_{p,{\boldsymbol \tau},0}^p \subseteq {\mathbb{S}}_{p,{\boldsymbol \tau},0}^{p-1}$ where equality holds for $p$ odd and that $\mathbb{S}_{p,{\boldsymbol \tau},1}^p \subseteq {\mathbb{S}}_{p,{\boldsymbol \tau},1}^{p-1}$ where equality holds for $p$ even; see their definitions in \eqref{eq:allS}.
\end{remark}
We now extend the univariate results towards higher dimensions.
Let us consider tensor-product spline spaces of the form
\begin{equation}\label{eq:spline-tensor}
\mathbb{S}_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}:=\mathbb{S}_{p_1,n_1,i_1}^\mathrm{opt}\otimes\mathbb{S}_{p_2,n_2,i_2}^\mathrm{opt} \otimes\cdots\otimes\mathbb{S}_{p_d,n_d,i_d}^\mathrm{opt}, \quad i_1,\ldots,i_d=0,1,2,
\end{equation}
according to the type of boundary conditions. Thanks to the decomposition of the approximate eigenfunctions/eigenvalues like in \eqref{eq:approx-eig-Laplace2D} and the exact ones like in \eqref{eq:eig-Laplace2D}, both in terms of their univariate counterparts, we immediately arrive at the following multivariate result.
\medskip
\begin{tcolorbox}
Consider problem \eqref{eq:eigenvalue-equation} with homogeneous Dirichlet/Neumann/mixed boundary conditions.
The approximations of the eigenvalues obtained by
finding $u\in \mathbb{S}_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}$ and $\omega^2\in\mathbb{R}$ such that
\begin{equation*}
( \nabla u, \nabla v) = \omega^2 (u, v), \quad \forall v\in \mathbb{S}_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt},
\end{equation*}
for appropriate choices of $i_1,\ldots,i_d$ according to the type of boundary conditions, have no outliers.
\end{tcolorbox}
\section{Approximations with non-homogeneous boundary}
\label{sec:general-BC}
The optimal spline subspaces in \eqref{eq:opt-spaces}
provide outlier-free approximations for the eigenfunctions of the corresponding eigenvalue problems still maintaining the accuracy of the full spline space because the additional boundary conditions identifying such subspaces are satisfied by the exact eigenfunctions. However, it is clear that their direct use will result in a loss of accuracy for approximating solutions that do not satisfy those boundary conditions; see also Examples~\ref{ex:convergence1D-boundary} and \ref{ex:convergence2D-boundary}.
Let us focus on the problem
\begin{equation}
\label{eq:Laplace}
\left\{ \begin{aligned}
-\Delta u &= f, \quad \text{in } \Omega:=(0,1)^d, \\
u_{|\partial \Omega} &= 0,
\end{aligned} \right.
\end{equation}
and on the approximations of its solution obtained by the Galerkin discretization using (tensor products of) the outlier-free subspace $\mathbb{S}_{p,n,0}^\mathrm{opt}$. In this section we propose a possible strategy for smooth data $f$ to maintain the full accuracy of the usual spline subspace of $H^1_0(\Omega)$, i.e., (tensor products of) $\mathbb{S}_{p,{\boldsymbol \tau},0}^0$ for some ${\boldsymbol \tau}$.
The treatment of other types of boundary conditions is similar.
\subsection{Univariate case}\label{sec:general-BC-1D}
We first consider the case $d=1$.
We recall that the optimal spline space $\mathbb{S}_{p,n,i}^\mathrm{opt}$ defined in \eqref{eq:opt-spaces} has full approximation power for functions taken from the space $H^r_i$ for $i=0,1,2$; see Theorem~\ref{thm:error-der} (and also Theorem~\ref{thm:L2-error}). In the following we are addressing functions $u$ that do not satisfy the boundary conditions on (even) derivatives of the space $H^r_0$. As mentioned before, the other types of boundary conditions can be treated in a similar way.
For sufficiently smooth $f$,
let $s_u\in \mathbb{S}_{p,{\boldsymbol \tau},0}^0$ be such that for even values of $\alpha$, $2\leq \alpha\leq p,$
\begin{align*}
(s_u)^{(\alpha)}(0) &= u^{(\alpha)}(0)=-f^{(\alpha-2)}(0),
\\
(s_u)^{(\alpha)}(1) &= u^{(\alpha)}(1)=-f^{(\alpha-2)}(1).
\end{align*}
We can then write the solution of \eqref{eq:Laplace} as
$$
u=u_0+s_u,
$$
where $u_0$ solves the problem
\begin{equation*}
\left\{ \begin{aligned}
- u''_0 &= f+s''_u, \quad \text{in } (0,1), \\
u_{0} (0) &= u_0(1)=0,
\end{aligned} \right.
\end{equation*}
and
$$
(u_0)^{(\alpha)}(0)=(u_0)^{(\alpha)}(1)=0, \quad 0\leq \alpha\leq p,\ \ \alpha \text{ even}.
$$
Thus, it suffices to construct the Galerkin approximation of $u_0$ in $\mathbb{S}_{p,n,0}^\mathrm{opt}$. The full approximation power of the space $\mathbb{S}_{p,n,0}^\mathrm{opt}$ (see Theorem~\ref{thm:error-der}) ensures no loss of accuracy with respect to the Galerkin approximation of $u$ in the usual space $\mathbb{S}_{p,{\boldsymbol \tau},0}^0$.
For sequences of break points with ${n_\mathrm{el}}> p+1$, the construction of $s_u$ is straightforward. As an example, it can be obtained by solving the following Hermite interpolation problem: find
$$
s_u=\sum_{i=-p}^0 c_iN_{i,{\boldsymbol \xi}}^p+\sum_{i={n_\mathrm{el}}-p-1}^{{n_\mathrm{el}}-1} c_iN_{i,{\boldsymbol \xi}}^p,
$$
such that
\begin{alignat}{3}
\label{eq:su-even}
(s_u)^{(\alpha)}(z)&=u^{(\alpha)}(z), \quad &\alpha&=0,2,4,\ldots, 2\lfloor \tfrac {p}{2}\rfloor, \quad &z&=0,1,
\\
\label{eq:su-odd}
(s_u)^{(\alpha)}(z)&=0,\quad &\alpha&=1,3,5,\ldots, 2\lfloor \tfrac {p-1}{2}\rfloor+1,\quad &z&=0,1,
\end{alignat}
where the B-splines $N_{i,{\boldsymbol \xi}}^p$ are defined in Section~\ref{sec:basis-full}. Due to the support property of B-splines, for ${n_\mathrm{el}}> p+1$ the above problem decouples into two independent linear systems of size $p+1$ each. These two linear systems are unisolvent because each of them corresponds to Taylor interpolation in the polynomial space $\mathbb{P}_p$. Moreover, when using an open knot sequence ${\boldsymbol \xi}$, the above systems are triangular. Although of limited practical interest, we remark that a function $s_u\in \mathbb{S}_{p,{\boldsymbol \tau}}$ satisfying \eqref{eq:su-even} can also be obtained in the case ${n_\mathrm{el}}\leq p+1$ by removing some of the conditions on the odd derivatives in order to match the dimension of the space.
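In practice the two triangular systems are solved once per boundary. As a minimal stand-in, the sketch below (Python; the function name is ours) builds, in the monomial basis, the unique polynomial of degree at most $p$ with the prescribed even derivatives and vanishing odd derivatives at $x=0$, which is precisely the Taylor interpolation problem encoded by the left system.
\begin{verbatim}
import math
import numpy as np

def left_correction(even_derivs, p):
    """Coefficients c_0..c_p of the polynomial s with
    s^(2k)(0) = even_derivs[k] and all odd derivatives zero at 0."""
    c = np.zeros(p + 1)
    for k, d in enumerate(even_derivs):
        if 2*k <= p:
            c[2*k] = d / math.factorial(2*k)   # Taylor coefficients
    return c            # s(x) = sum_a c[a] * x**a near x = 0

# u(x) = sin(pi x) already lies in H^r_0 (all even derivatives vanish
# at 0), so its correction is zero; a generic u is nonzero.
print(left_correction([0.0, 0.0, 0.0], 5))
print(left_correction([1.0, -2.0, 0.5], 5))
\end{verbatim}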
\subsection{Multivariate tensor-product case}\label{sec:general-BC-2D}
In the multivariate setting, we start by showing that the tensor-product spline space $\mathbb{S}_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}$ defined in \eqref{eq:spline-tensor} has full approximation power for functions taken from the tensor-product space
\begin{equation}\label{eq:H-tensor}
H^r_{{\boldsymbol i}}:=H^r_{i_1} \otimes H^r_{i_2}\otimes\cdots\otimes H^r_{i_d}.
\end{equation}
We let $\Omega:=(0,1)^d$ and denote the $L^2$-norm on $\Omega$ by $\|\cdot\|_{\Omega}$. We define the maximal grid spacing as
$$
h_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}:=\max\{h_{p_1,n_1,i_1}^\mathrm{opt},h_{p_2,n_2,i_2}^\mathrm{opt},\ldots,h_{p_d,n_d,i_d}^\mathrm{opt}\}.
$$
For simplicity we state the following theorem in the case $d=2$.
From Proposition~\ref{prop:tensorRitz} (in Appendix~\ref{Appendix:B}) we can deduce the following generalization of Theorem~\ref{thm:error-der} to the tensor-product case.
\begin{theorem}\label{thm:error-der-2d}
Let $u\in H^r_{i_1}\otimes H^r_{i_2}$, $i_1,i_2=0,1,2$, with $r\geq 1$, be given. Then, for all $p_1,p_2\geq \max\{r-1,1\}$ there exists $R_{p_1,n_1,i_1}^\mathrm{opt}\otimes R_{p_2,n_2,i_2}^\mathrm{opt} u\in\mathbb{S}_{p_1,n_1,i_1}^\mathrm{opt}\otimes \mathbb{S}_{p_2,n_2,i_2}^\mathrm{opt}$ such that
\begin{equation*}
\|u-R_{p_1,n_1,i_1}^\mathrm{opt}\otimes R_{p_2,n_2,i_2}^\mathrm{opt} u\|_{\Omega} \leq \left(\frac{h_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}}{\pi}\right)^{r}\left(\|\partial_1^ru\|_{\Omega}+\|\partial_2^ru\|_{\Omega}
+\min\left\{\|\partial_1\partial_2^{r-1}u\|_{\Omega}, \,\|\partial_1^{r-1}\partial_2u\|_{\Omega}\right\}\right),
\end{equation*}
and
\begin{equation*
\begin{aligned}
\|\partial_1(u-R_{p_1,n_1,i_1}^\mathrm{opt}\otimes R_{p_2,n_2,i_2}^\mathrm{opt} u)\|_{\Omega}
&\leq \left(\frac{h_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}}{\pi}\right)^{r-1}\left(\|\partial_1^ru\|_{\Omega}+\|\partial_1\partial_2^{r-1}u\|_{\Omega}\right),
\\
\|\partial_2(u-R_{p_1,n_1,i_1}^\mathrm{opt}\otimes R_{p_2,n_2,i_2}^\mathrm{opt} u)\|_{\Omega}
&\leq \left(\frac{h_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}}{\pi}\right)^{r-1}\left(\|\partial_1^{r-1}\partial_2u\|_{\Omega}+\|\partial_2^ru\|_{\Omega}\right).
\end{aligned}
\end{equation*}
\end{theorem}
\begin{proof}
For simplicity of notation, we will only consider the case $p_1=p_2=p$, $n_1=n_2=n$ and $i_1=i_2=i$.
In this case the result follows from Example~\ref{ex:our-classes} (in Appendix~\ref{Appendix:A}) and Proposition~\ref{prop:tensorRitz} (in Appendix~\ref{Appendix:B}) with
\begin{equation*}
\begin{aligned}
R_{p,n,0}^\mathrm{opt} :=&R_{\mathbb{Y}_p}, &&\mathbb{S}_{p,n,0}^\mathrm{opt} =\mathbb{Y}_p,
\\
R_{p,n,1}^\mathrm{opt} :=&P_0\oplus R_{\mathbb{X}_p},\quad &&\mathbb{S}_{p,n,1}^\mathrm{opt}=\mathbb{P}_0\oplus\mathbb{X}_p,
\\
R_{p,n,2}^\mathrm{opt} :=&R_{\mathbb{X}_p}, &&\mathbb{S}_{p,n,2}^\mathrm{opt}=\mathbb{X}_p,
\end{aligned}
\end{equation*}
taking into account \eqref{eq:h-nwidths} and \eqref{eq:nwidth-r}.
\end{proof}
Using standard arguments we can conclude that the Galerkin method applied to problem \eqref{eq:Laplace} in the reduced spline space \eqref{eq:spline-tensor} has full approximation order for any solution belonging to the space \eqref{eq:H-tensor}.
Note that the Ritz projection $R u\in\mathbb{S}_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt}$ of $u\in H^1_{{\boldsymbol i}}$ related to the Laplace operator, i.e., given by
\begin{equation*}
(\nabla R u,\nabla v) = (\nabla u, \nabla v), \quad \forall v\in \mathbb{S}_{{\boldsymbol p},{\boldsymbol n},{\boldsymbol i}}^\mathrm{opt},
\end{equation*}
is the best approximation to $u$ in the $H^1$-seminorm and
potentially different from the one considered in Theorem~\ref{thm:error-der-2d}.
The strategy in Section \ref{sec:general-BC-1D} can now be extended to the tensor-product setting. Let us outline the strategy in the case $d=2$ and Dirichlet boundary conditions. In this case we are interested in discretizing problem \eqref{eq:Laplace} in $\mathbb{S}_{p_1,n_1,0}^\mathrm{opt}\otimes \mathbb{S}_{p_2,n_2,0}^\mathrm{opt}$. For $s\in \mathbb{S}_{p_1,n_1,0}^\mathrm{opt}\otimes \mathbb{S}_{p_2,n_2,0}^\mathrm{opt}$ we have
\begin{align*}
\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}s(0,x_2) &= \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}s(1,x_2)=0, \quad 0\leq \alpha_1\leq p_1, \ \ \alpha_1 \ \text{even}, \quad 0\leq \alpha_2\leq p_2,
\\
\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}s(x_1,0) &= \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}s(x_1,1)=0, \quad 0\leq \alpha_2\leq p_2, \ \ \alpha_2 \ \text{even}, \quad 0\leq \alpha_1\leq p_1.
\end{align*}
We can compensate such behavior by approximating $u$ by $u_{0,h}+s_u$ where $u_{0,h}$ is the Galerkin approximation in $\mathbb{S}_{p_1,n_1,0}^\mathrm{opt}\otimes \mathbb{S}_{p_2,n_2,0}^\mathrm{opt}$ of the solution of
\begin{equation*}
\left\{ \begin{aligned}
-\Delta u_0 &= f+\Delta s_u, \quad \text{in } \Omega:=(0,1)^2, \\
u_{0|\partial \Omega} &= 0,
\end{aligned} \right.
\end{equation*}
and $s_u$ is a function, possibly belonging to
$\mathbb{S}_{p_1,{\boldsymbol \tau}_{p_1,n_1,0}^\mathrm{opt}}\otimes \mathbb{S}_{p_2,{\boldsymbol \tau}_{p_2,n_2,0}^\mathrm{opt}}$,
that approximates the corresponding boundary derivatives of $u$ with the accuracy of the space, i.e.,
\begin{align*}
\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}u(0,x_2),\quad & \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}u(1,x_2), \quad 0\leq \alpha_1\leq p_1, \ \ \alpha_1 \ \text{even}, \quad 0\leq \alpha_2\leq p_2,
\\
\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}u(x_1,0),\quad & \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}u(x_1,1), \quad 0\leq \alpha_2\leq p_2, \ \ \alpha_2 \ \text{even}, \quad 0\leq \alpha_1\leq p_1.
\end{align*}
For smooth data $f$, the above derivatives can be easily deduced from \eqref{eq:Laplace} by repeated differentiation. As an example, for $0\leq \alpha_1\leq p_1$, $\alpha_1 \text{ even}$, $0\leq \alpha_2\leq p_2$, we have
$$
\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}u(z,x_2)=\sum_{r=1}^{\alpha_1/2}(-1)^r\partial_{x_1}^{\alpha_1-2r}\partial_{x_2}^{\alpha_2+2(r-1)} f(z,x_2)+(-1)^{\alpha_1/2}\partial_{x_2}^{\alpha_1+\alpha_2}u(z,x_2), \quad z=0,1.
$$
The last term in the above expression is zero due to the imposed homogeneous Dirichlet boundary conditions in \eqref{eq:Laplace}; this is not the case for general non-homogeneous Dirichlet boundary conditions.
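The identity above can be checked symbolically. A quick verification (Python/SymPy, ours) for a sample $u$ vanishing on the boundary, with $f:=-\Delta u$ and the arbitrarily chosen orders $\alpha_1=4$, $\alpha_2=1$:
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.sin(sp.pi*x1)*sp.sin(2*sp.pi*x2)       # u = 0 on the boundary of (0,1)^2
f = -(sp.diff(u, x1, 2) + sp.diff(u, x2, 2))  # -Laplace(u) = f

a1, a2 = 4, 1                                 # a1 even
lhs = sp.diff(u, x1, a1, x2, a2)
rhs = sum((-1)**r * sp.diff(f, x1, a1 - 2*r, x2, a2 + 2*(r - 1))
          for r in range(1, a1//2 + 1)) \
      + (-1)**(a1//2) * sp.diff(u, x2, a1 + a2)
print(sp.simplify(lhs - rhs))                 # 0
\end{verbatim}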
Once the necessary derivatives of $u$ on the boundary of $\Omega$ are computed, a discrete least squares approach in $\mathbb{S}_{p_1,{\boldsymbol \tau}_{p_1,n_1,0}^\mathrm{opt}}\otimes \mathbb{S}_{p_2,{\boldsymbol \tau}_{p_2,n_2,0}^\mathrm{opt}}$ can be used to obtain $s_u$.
Note that only the derivatives of the form
$$
\partial_{x_1}^{\alpha_1}u(z,x_2), \quad \partial_{x_2}^{\alpha_2}u(x_1,z), \quad z=0,1,\quad 0\leq\alpha_1\leq p_1,\quad 0\leq\alpha_2\leq p_2, \quad \alpha_1,\alpha_2\ \text{even},
$$
have to be deduced from \eqref{eq:Laplace} and used to identify $s_u$ because they determine all the remaining ones by direct differentiation.
The error estimates for (reduced) tensor-product spline spaces ensure no loss of accuracy with respect to the Galerkin approximation of $u$ in the non-reduced space.
\section{B-spline-like bases for outlier-free spline spaces}
\label{sec:bsplines}
In this section we construct a B-spline-like basis for the outlier-free spline spaces we are interested in. For the sake of brevity, we just focus on the subspaces $\mathbb{S}_{p,{\boldsymbol \tau},0}^p$ and
$ \mathbb{S}_{p,{\boldsymbol \tau},0}^{p-1}$ where the interior break points are equally spaced; the other types of subspaces can be treated similarly.
\subsection{B-spline bases for full spline spaces}\label{sec:basis-full}
The full spline space $\mathbb{S}_{p,{\boldsymbol \tau}}$ in \eqref{eq:spline-max-smooth} is usually represented in terms of the classical B-spline basis which is defined through a knot sequence.
For $p\ge0$ and ${n_\mathrm{el}}\ge1$, consider the knot sequence
\begin{equation}\label{eq:knots}
{\boldsymbol \xi}:=\{\xi_{-p}\leq\cdots\leq\xi_{0}<\xi_{1}<\cdots<\xi_{{n_\mathrm{el}}-1}<\xi_{{n_\mathrm{el}}}\leq\cdots\leq\xi_{{n_\mathrm{el}}+p}\},
\end{equation}
such that $\xi_{0}\leq 0<\xi_{1}$ and $\xi_{{n_\mathrm{el}}-1}<1\leq \xi_{{n_\mathrm{el}}}$.
This knot sequence allows us to define ${n_\mathrm{el}}+p$ B-splines of degree $p$,
\begin{equation}
\label{eq:B-splines}
N_{i,{\boldsymbol \xi}}^p:\mathbb{R}\rightarrow \mathbb{R}, \quad i=-p,\ldots,{n_\mathrm{el}}-1,
\end{equation}
which are defined recursively as follows: for $-p \le i\le {n_\mathrm{el}}+p-1$,
\begin{equation*}
N_{i,{\boldsymbol \xi}}^0(x):=\begin{cases}
1, & x \in [\xi_i,\xi_{i+1}), \\
0, & \text{otherwise};
\end{cases}
\end{equation*}
for $1\le k\le p$ and $-p\le i\le {n_\mathrm{el}}+p-1-k$,
\begin{equation*}
N_{i,{\boldsymbol \xi}}^k(x):=\frac{x-\xi_i}{\xi_{i+k}-\xi_i}N_{i,{\boldsymbol \xi}}^{k-1}(x)+\frac{\xi_{i+k+1}-x}{\xi_{i+k+1}-\xi_{i+1}}N_{i+1,{\boldsymbol \xi}}^{k-1}(x),
\end{equation*}
where a fraction with zero denominator is assumed to be zero.
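The recursion above translates directly into code. The following minimal Python sketch (our own illustration; the names are not taken from any referenced implementation) evaluates $N_{i,{\boldsymbol \xi}}^p$ and checks the partition-of-unity property listed below for a uniform knot sequence.
\begin{verbatim}
# Minimal sketch (ours) of the Cox--de Boor recursion for N_{i,xi}^p.
# `knots` is a Python list indexed from 0, so the paper's index
# i in {-p, ..., n_el - 1} corresponds to list index i + p.
def bspline(i, p, knots, x):
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    # a fraction with zero denominator is assumed to be zero
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline(i + 1, p - 1, knots, x)
    return left + right

# partition of unity on [0, 1) for p = 3, n_el = 5, uniform knots
p, nel = 3, 5
knots = [j / nel for j in range(-p, nel + p + 1)]
assert abs(sum(bspline(i, p, knots, 0.37)
               for i in range(nel + p)) - 1.0) < 1e-12
\end{verbatim}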
It is well known that the B-splines $N_{i,{\boldsymbol \xi}}^p$, $i=-p,\ldots,{n_\mathrm{el}}-1$, are linearly independent and they enjoy the following properties (see, e.g.,~\cite{deBoor2001,Lyche:18}).
\begin{itemize}
\item Local support:
\begin{equation*}
\text{supp}(N_{i,{\boldsymbol \xi}}^p)=[\xi_i,\xi_{i+p+1}], \quad i=-p,\ldots,{n_\mathrm{el}}-1.
\end{equation*}
\item Smoothness:
\begin{equation*}
N_{i,{\boldsymbol \xi}}^p \in C^{p-1}(0,1), \quad i=-p,\ldots,{n_\mathrm{el}}-1.
\end{equation*}
\item Differentiation:
\begin{equation*}
\left(N_{i,{\boldsymbol \xi}}^p(x)\right)' = p\left(\frac{N_{i,{\boldsymbol \xi}}^{p-1}(x)}{\xi_{i+p}-\xi_i}-
\frac{N_{i+1,{\boldsymbol \xi}}^{p-1}(x)}{\xi_{i+p+1}-\xi_{i+1}}\right), \quad i=-p,\ldots,{n_\mathrm{el}}-1, \quad p \geq 1.
\end{equation*}
\item Non-negative partition of unity:
\begin{equation*}
N_{i,{\boldsymbol \xi}}^p(x)\ge0, \quad i=-p,\ldots,{n_\mathrm{el}}-1, \qquad \sum_{i=-p}^{{n_\mathrm{el}}-1}N_{i,{\boldsymbol \xi}}^p(x)=1,\quad x\in[0,1).
\end{equation*}
\end{itemize}
Usually, open knots are employed to identify the B-splines in \eqref{eq:B-splines}, i.e.,
$$\xi_{-p}=\dots=\xi_0=0,\quad 1=\xi_{n_\mathrm{el}}=\dots=\xi_{{n_\mathrm{el}}+p},$$
because with such a configuration it is straightforward to identify a basis for the subspace $\mathbb{S}_{p,{\boldsymbol \tau},0}^0$. However, in our context only the case of uniform grid spacing is of interest because it minimizes the number of outliers, as discussed in Section~\ref{sec:counting-outliers}. It is therefore natural to consider B-splines with uniform knots; in this case the B-splines $N_{i,{\boldsymbol \xi}}^p$, $i=-p,\ldots,{n_\mathrm{el}}-1$, are the restrictions to the interval $[0,1]$ of uniformly shifted and scaled versions of a single shape function, the so-called cardinal B-spline ${\cal N}_p:\mathbb{R}\rightarrow \mathbb{R}$, where
\begin{equation*}
{\cal N}_0(t) := \begin{cases}
1, & t \in [0, 1), \\
0, & \text{otherwise},
\end{cases}
\end{equation*}
and
\begin{equation*}
{\cal N}_p (t) := \frac{t}{p} {\cal N}_{p-1}(t) + \frac{p+1-t}{p} {\cal N}_{p-1}(t-1), \quad p \geq 1.
\end{equation*}
The cardinal B-spline ${\cal N}_p$ belongs to $C^{p-1}(\mathbb{R})$ and is supported on the interval $[0,p+1]$. It is a symmetric function with respect to the midpoint of its support, i.e.,
\begin{equation}\label{eq:symmetry}
{\cal N}_p\left(\frac{p+1}{2}+t\right)={\cal N}_p\left(\frac{p+1}{2}-t\right).
\end{equation}
For other common properties of cardinal B-splines, we refer the reader to \cite{Lyche:18}.
From now on, we consider the knot sequence \eqref{eq:knots} equally spaced, so that we have
\begin{equation}\label{eq:BcardB}
N^{p}_{i,{\boldsymbol \xi}}(x) ={\cal N}_{p}\left(\frac{x-\xi_i}{\xi_1-\xi_0}\right), \quad i=-p,\ldots,{n_\mathrm{el}}-1,
\end{equation}
and we explicitly construct a basis for the spaces $\mathbb{S}_{p,{\boldsymbol \tau},0}^p$ and $\mathbb{S}_{p,{\boldsymbol \tau},0}^{p-1}$ in case their interior break points are equally spaced. The basis elements are expressed as linear combinations of the B-splines in \eqref{eq:BcardB}.
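Relation \eqref{eq:BcardB} can be verified numerically with the recursion sketch above (reusing \texttt{bspline}, \texttt{knots}, \texttt{p} and \texttt{nel} from there); the cardinal B-spline is obtained from the same recursion on the integer knots $0,1,\ldots,p+1$.
\begin{verbatim}
# numerical check of N^p_{i,xi}(x) = N_p((x - xi_i)/h), h the knot spacing
def cardinal(p, t):
    return bspline(0, p, list(range(p + 2)), t)

h, i, x = 1.0 / nel, 2, 0.57   # paper index i = 2 is list index i + p
assert abs(bspline(i + p, p, knots, x)
           - cardinal(p, (x - knots[i + p]) / h)) < 1e-12
\end{verbatim}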
\subsection{B-spline-like bases for optimal spline spaces} \label{sec:basis-optimal}
In this subsection we construct a basis for optimal spline spaces of the form $\mathbb{S}_{p,{\boldsymbol \tau},0}^p$ defined in \eqref{eq:opt-spaces}.
Let us first address the case $p$ odd. We note that in this case $\mathbb{S}_{p,{\boldsymbol \tau},0}^p=\mathbb{S}_{p,{\boldsymbol \tau},0}^{p-1}$ and $\dim(\mathbb{S}_{p,{\boldsymbol \tau},0}^p)={n_\mathrm{el}}-1$.
We select the knots in \eqref{eq:knots} as
\begin{equation}
\label{eq:knots:0:odd}
\xi_i=\frac{i}{{n_\mathrm{el}}}, \quad i=-p,\ldots, {n_\mathrm{el}}+p,
\end{equation}
and observe that with such a choice we have
$$
{\boldsymbol \tau}={\boldsymbol \tau}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt},
$$
so that $\mathbb{S}_{p,{\boldsymbol \tau},0}^p=\mathbb{S}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt}$.
Then, we consider the set of B-spline-like functions
\begin{equation}
\label{eq:B-spline-0-odd}
\{ N^{p}_{i,{\boldsymbol \xi},0}, \ i=1,\ldots, {n_\mathrm{el}}-1\},
\end{equation}
defined by
\begin{equation}
\label{eq:basis-0-odd}
\begin{bmatrix}
N^{p}_{1,{\boldsymbol \xi},0}\\
N^{p}_{2,{\boldsymbol \xi},0} \\
\vdots \\
N^{p}_{{n_\mathrm{el}}-1,{\boldsymbol \xi},0}
\end{bmatrix}:=
\begin{bmatrix}
\underbrace{\overbrace{\cdots\bigg|\,L_{{n_\mathrm{el}}-1}\,\bigg|\,L_{{n_\mathrm{el}}-1}\,\bigg|}^{\frac{p+1}{2}}I_{{n_\mathrm{el}}-1}\overbrace{\bigg|\,R_{{n_\mathrm{el}}-1}\,\bigg|\,R_{{n_\mathrm{el}}-1}\,\bigg|\cdots}^{\frac{p+1}{2}}}_{{n_\mathrm{el}}+p}
\end{bmatrix}
\begin{bmatrix}
N^{p}_{-p,{\boldsymbol \xi}}\\
\vdots \\
N^{p}_{0,{\boldsymbol \xi}}\\
N^{p}_{1,{\boldsymbol \xi}}\\
\vdots \\
N^{p}_{{n_\mathrm{el}}-1,{\boldsymbol \xi}}
\end{bmatrix},
\end{equation}
where
\begin{equation}
\label{eq:LR-odd=even}
L_m:=
\begin{bmatrix}
\,I_m\,\bigg|\,\boldsymbol{0}_m\,\bigg|\,-J_m\,\bigg|\,\boldsymbol{0}_m\,
\end{bmatrix},
\quad
R_m:=
\begin{bmatrix}
\boldsymbol{0}_m\,\bigg|\,-J_m\,\bigg|\,\boldsymbol{0}_m\,\bigg|\,I_m\,
\end{bmatrix}.
\end{equation}
Here
$I_m$ denotes the identity matrix of size $m$, $J_m$ denotes the exchange matrix of size $m$, i.e., the matrix with $1$ along the anti-diagonal, and $\boldsymbol{0}_m$ is the zero (column) vector of length $m$.
The notation in the matrix in \eqref{eq:basis-0-odd} has to be interpreted as follows (see also Example~\ref{ex:basis-0-odd} and the sketch after the list):
\begin{itemize}
\item take the identity matrix of size ${n_\mathrm{el}}-1$;
\item construct the matrices $L_{{n_\mathrm{el}}-1}$ and $R_{{n_\mathrm{el}}-1}$ as in \eqref{eq:LR-odd=even};
\item add copies of the matrix $L_{{n_\mathrm{el}}-1}$ to the left of the central identity matrix and keep the $\frac{p+1}{2}$ columns immediately to the left of the identity matrix;
\item add copies of the matrix $R_{{n_\mathrm{el}}-1}$ to the right of the central identity matrix and keep the $\frac{p+1}{2}$ columns immediately to the right of the identity matrix.
\end{itemize}
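The construction is easy to implement. The following Python sketch (ours; the helper names are illustrative) builds the matrix in \eqref{eq:basis-0-odd} and is checked against the first matrix of Example~\ref{ex:basis-0-odd} below.
\begin{verbatim}
import numpy as np

def L_block(m):    # L_m = [ I_m | 0_m | -J_m | 0_m ]
    J, z = np.fliplr(np.eye(m)), np.zeros((m, 1))
    return np.hstack([np.eye(m), z, -J, z])

def R_block(m):    # R_m = [ 0_m | -J_m | 0_m | I_m ]
    J, z = np.fliplr(np.eye(m)), np.zeros((m, 1))
    return np.hstack([z, -J, z, np.eye(m)])

def tile_keep(block, ncols, side):
    # tile copies of `block` and keep the ncols columns adjacent to
    # the central identity (ncols >= 1 assumed)
    tiled = np.hstack([block] * (-(-ncols // block.shape[1])))
    return tiled[:, -ncols:] if side == 'left' else tiled[:, :ncols]

def extraction_odd(p, nel):
    # rows: coefficients of N^p_{i,xi,0} w.r.t. the B-splines
    # N^p_{i,xi}, i = -p, ..., nel - 1 (p odd)
    m, nl = nel - 1, (p + 1) // 2
    return np.hstack([tile_keep(L_block(m), nl, 'left'),
                      np.eye(m),
                      tile_keep(R_block(m), nl, 'right')])

# reproduces the matrix for p = 3, n_el = 5 of the example below
assert np.array_equal(extraction_odd(3, 5),
    np.array([[-1, 0, 1, 0, 0, 0, 0, 0],
              [ 0, 0, 0, 1, 0, 0, 0, 0],
              [ 0, 0, 0, 0, 1, 0, 0, 0],
              [ 0, 0, 0, 0, 0, 1, 0, -1]]))
\end{verbatim}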
\begin{figure}[t!]
\centering
\subfigure[B-spline-like functions]{\includegraphics[height=4.1cm]{ex_basis_opt_n=5_p=3_l=0}}\hspace*{0.1cm}
\subfigure[second derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=5_p=3_l=2}}
\caption{Example~\ref{ex:basis-0-odd}: B-spline-like functions and their second derivatives for the space $\mathbb{S}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt}$ with $p=3$ and ${n_\mathrm{el}}=5$.} \label{fig:basis.0.odd:a}
\bigskip
\centering
\subfigure[B-spline-like functions]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=9_l=0}}\hspace*{0.1cm}
\subfigure[second derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=9_l=2}}\hspace*{0.1cm}
\subfigure[fourth derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=9_l=4}}\\
\subfigure[sixth derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=9_l=6}}\hspace*{0.1cm}
\subfigure[eighth derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=9_l=8}}
\caption{Example~\ref{ex:basis-0-odd}: B-spline-like functions and their even order derivatives for the space $\mathbb{S}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt}$ with $p=9$ and ${n_\mathrm{el}}=3$.} \label{fig:basis.0.odd:b}
\end{figure}
\begin{example}
\label{ex:basis-0-odd}
For $p=3$ and ${n_\mathrm{el}}=5$, the matrix in \eqref{eq:basis-0-odd} has $4$ rows and $8$ columns, and it takes the form
\begin{equation}
\label{eq:ex-matrix-1}
\begin{bmatrix}
-1&0&1&0&0&0&0&0
\\
0&0&0&1&0&0&0&0
\\
0&0&0&0&1&0&0&0
\\
0&0&0&0&0&1&0&-1
\end{bmatrix}.
\end{equation}
For $p=9$ and ${n_\mathrm{el}}=3$, the matrix in \eqref{eq:basis-0-odd} has $2$ rows and $12$ columns, and it takes the form
\begin{equation}
\label{eq:ex-matrix-2}
\begin{bmatrix}
0&0&0&-1&0&1&0&0&0&-1&0&1
\\
1&0&-1&0&0&0&1&0&-1&0&0&0
\end{bmatrix}.
\end{equation}
The graphs of the corresponding B-spline-like functions and their even order derivatives are depicted in Figures~\ref{fig:basis.0.odd:a} and~\ref{fig:basis.0.odd:b}. One clearly notices that the functions satisfy the boundary conditions of the space $\mathbb{S}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt}$.
\end{example}
We now address the case $p$ even. We note that here $\dim(\mathbb{S}_{p,{\boldsymbol \tau},0}^p)={n_\mathrm{el}}-2$.
We select the knots in \eqref{eq:knots} as
\begin{equation}
\label{eq:knots:0:even}
\xi_i=\frac{i-1/2}{{n_\mathrm{el}}-1}, \quad i=-p,\ldots,{n_\mathrm{el}}+p,
\end{equation}
and observe that with such a choice we have
$$
{\boldsymbol \tau}={\boldsymbol \tau}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt},
$$
so that $\mathbb{S}_{p,{\boldsymbol \tau},0}^p=\mathbb{S}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt}$.
Then, we consider the set of B-spline-like functions
\begin{equation}
\label{eq:B-spline-0-even}
\{ N^{p}_{i,{\boldsymbol \xi},0}, \ i=1,\ldots, {n_\mathrm{el}}-2\},
\end{equation}
defined by
\begin{equation}
\label{eq:basis-0-even}
\begin{bmatrix}
N^{p}_{1,{\boldsymbol \xi},0}\\
N^{p}_{2,{\boldsymbol \xi},0} \\
\vdots \\
N^{p}_{{n_\mathrm{el}}-2,{\boldsymbol \xi},0}
\end{bmatrix}:=
\begin{bmatrix}
\underbrace{\overbrace{\cdots\,\bigg|\,L_{{n_\mathrm{el}}-2}\,\bigg|\,L_{{n_\mathrm{el}}-2}\,\bigg|}^{\frac{p}{2}+1} I_{{n_\mathrm{el}}-2}\overbrace{\bigg|\,R_{{n_\mathrm{el}}-2}\bigg|\,R_{{n_\mathrm{el}}-2}\,\bigg|\,\dots}^{\frac{p}{2}+1}}_{{n_\mathrm{el}}+p}
\end{bmatrix}
\begin{bmatrix}
N^{p}_{-p,{\boldsymbol \xi}}\\
\vdots \\
N^{p}_{0,{\boldsymbol \xi}}\\
N^{p}_{1,{\boldsymbol \xi}}\\
\vdots \\
N^{p}_{{n_\mathrm{el}}-1,{\boldsymbol \xi}}
\end{bmatrix}.
\end{equation}
Further details for the above constructions can be found in \cite{DiVona:2019}.
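The even-degree matrix in \eqref{eq:basis-0-even} admits the same implementation; the short sketch below (ours) reuses the helpers \texttt{L\_block}, \texttt{R\_block} and \texttt{tile\_keep} defined above.
\begin{verbatim}
def extraction_even(p, nel):
    # identity of size nel - 2 with p/2 + 1 adjacent columns of
    # L_{nel-2} and R_{nel-2} kept on each side (p even)
    m, nl = nel - 2, p // 2 + 1
    return np.hstack([tile_keep(L_block(m), nl, 'left'),
                      np.eye(m),
                      tile_keep(R_block(m), nl, 'right')])

# consistent with the example below: the p = 2, n_el = 6 matrix
# equals the p = 3, n_el = 5 one
assert np.array_equal(extraction_even(2, 6), extraction_odd(3, 5))
\end{verbatim}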
\begin{figure}[t!]
\centering
\subfigure[B-spline-like functions]{\includegraphics[height=4.1cm]{ex_basis_opt_n=5_p=2_l=0}}\hspace*{0.1cm}
\subfigure[second derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=5_p=2_l=2}}
\caption{Example~\ref{ex:basis-0-even}: B-spline-like functions and their second derivatives for the space $\mathbb{S}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt}$ with $p=2$ and ${n_\mathrm{el}}=6$.} \label{fig:basis.0.even:a}
\bigskip
\centering
\subfigure[B-spline-like functions]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=8_l=0}}\hspace*{0.1cm}
\subfigure[second derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=8_l=2}}\hspace*{0.1cm}
\subfigure[fourth derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=8_l=4}}\\
\subfigure[sixth derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=8_l=6}}\hspace*{0.1cm}
\subfigure[eighth derivatives]{\includegraphics[height=4.1cm]{ex_basis_opt_n=3_p=8_l=8}}
\caption{Example~\ref{ex:basis-0-even}: B-spline-like functions and their even order derivatives for the space $\mathbb{S}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt}$ with $p=8$ and ${n_\mathrm{el}}=4$.} \label{fig:basis.0.even:b}
\end{figure}
\begin{example}
\label{ex:basis-0-even}
For $p=2$ and ${n_\mathrm{el}}=6$ the matrix in \eqref{eq:basis-0-even} has $4$ rows and $8$ columns, and it is equal to the matrix \eqref{eq:ex-matrix-1}.
For $p=8$ and ${n_\mathrm{el}}=4$ the matrix in \eqref{eq:basis-0-even} has $2$ rows and $12$ columns, and it is equal to the matrix \eqref{eq:ex-matrix-2}.
The graphs of the corresponding B-spline-like functions and their even order derivatives are depicted in Figures~\ref{fig:basis.0.even:a} and~\ref{fig:basis.0.even:b}. One clearly notices that the functions satisfy the boundary conditions of the space $\mathbb{S}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt}$.
\end{example}
Finally, we show that the above sets of B-spline-like functions form a basis of our optimal spline spaces.
\begin{proposition}
\label{prop:basis-0}
The functions defined in \eqref{eq:B-spline-0-odd} are a basis of the optimal space $\mathbb{S}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt}$ for $p$ odd.
Likewise, the functions defined in \eqref{eq:B-spline-0-even} are a basis of the optimal space $\mathbb{S}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt}$ for $p$ even.
\end{proposition}
\begin{proof}
Let us start by recalling that $\mathbb{S}_{p,{\boldsymbol \tau},0}^p=\mathbb{S}_{p,{n_\mathrm{el}}-1,0}^\mathrm{opt}$ for $p$ odd and $\mathbb{S}_{p,{\boldsymbol \tau},0}^p=\mathbb{S}_{p,{n_\mathrm{el}}-2,0}^\mathrm{opt}$ for $p$ even,
where
${\boldsymbol \tau}$ is such that
$$
\tau_i=\xi_i,\quad i=1,\ldots,{n_\mathrm{el}}-1,
$$
and ${\boldsymbol \xi}$ is defined in \eqref{eq:knots:0:odd} and \eqref{eq:knots:0:even}, respectively.
The functions in \eqref{eq:B-spline-0-odd} and \eqref{eq:B-spline-0-even} clearly belong to $\mathbb{S}_{p,{\boldsymbol \tau}}$ for the considered sequences of break points. From the symmetry property of cardinal B-splines, see \eqref{eq:symmetry}, a direct check shows that the functions $ \{ N^{p}_{i,{\boldsymbol \xi},0}\}$ satisfy the additional boundary conditions identifying the subspace $\mathbb{S}_{p,{\boldsymbol \tau},0}^p$. The considered functions are linearly independent because the translates of a cardinal B-spline are linearly independent and the matrices in \eqref{eq:basis-0-odd} and \eqref{eq:basis-0-even} have maximum rank. Therefore, the considered functions are a basis of $\mathbb{S}_{p,{\boldsymbol \tau},0}^p$ because their number equals the dimension of the space.
\end{proof}
\subsection{B-spline-like bases for other reduced spline spaces}
We now construct a basis for the space $\mathbb{S}_{p,{\boldsymbol \tau},0}^{p-1}$ defined by uniformly spaced break points ${\boldsymbol \tau}$, i.e.,
\begin{equation*}
\tau_i=\frac{i}{{n_\mathrm{el}}}, \quad i=0,\ldots, {n_\mathrm{el}}.
\end{equation*}
It has been numerically observed in \cite{Hiemstra:2021} that these spaces are outlier-free; see also Remark~\ref{rmk:other-reduced-space}.
The only case to be treated is $p$ even, since for odd degree the space coincides with the one considered in Section~\ref{sec:basis-optimal}. In the even degree case we have $\dim(\mathbb{S}_{p,{\boldsymbol \tau},0}^{p-1})={n_\mathrm{el}}$, and we denote the corresponding space by $\overline{\mathbb{S}}_{p,{n_\mathrm{el}},0}$.
We select the knots in \eqref{eq:knots} as
\begin{equation*}
\xi_i=\frac{i}{{n_\mathrm{el}}}, \quad i=-p,\ldots, {n_\mathrm{el}}+p.
\end{equation*}
Then, we consider the set of B-spline-like functions
\begin{equation}
\label{eq:B-spline-uniform-even}
\{ \overline{N}^{p}_{i,{\boldsymbol \xi},0}, \ i=1,\ldots, {n_\mathrm{el}}\},
\end{equation}
defined by
\begin{equation}
\label{eq:basis-uniform-even}
\begin{bmatrix}
\overline{N}^{p}_{1,{\boldsymbol \xi},0}\\
\overline{N}^{p}_{2,{\boldsymbol \xi},0} \\
\vdots \\
\overline{N}^{p}_{{n_\mathrm{el}},{\boldsymbol \xi},0}
\end{bmatrix}:=
\begin{bmatrix}
\underbrace{\overbrace{\cdots\,\bigg|\,\overline{L}_{{n_\mathrm{el}}}\,\bigg|\,\overline{L}_{{n_\mathrm{el}}}\,\bigg|}^{\frac{p}{2}} I_{{n_\mathrm{el}}}\overbrace{\bigg|\,\overline{R}_{{n_\mathrm{el}}}\,\bigg|\,\overline{R}_{{n_\mathrm{el}}}\,\bigg|\,\dots}^{\frac{p}{2}}}_{{n_\mathrm{el}}+p}
\end{bmatrix}
\begin{bmatrix}
N^{p}_{-p,{\boldsymbol \xi}}\\
\vdots \\
N^{p}_{0,{\boldsymbol \xi}}\\
N^{p}_{1,{\boldsymbol \xi}}\\
\vdots \\
N^{p}_{{n_\mathrm{el}}-1,{\boldsymbol \xi}}
\end{bmatrix},
\end{equation}
where
\begin{equation*}
\overline{L}_m:=
\begin{bmatrix}
\,I_m\,\big|\,-J_m\,
\end{bmatrix},
\quad
\overline{R}_m:=
\begin{bmatrix}
-J_m\,\big|\,I_m\,
\end{bmatrix}.
\end{equation*}
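As before, the construction can be sketched in a few lines of Python (ours, reusing \texttt{tile\_keep} from the sketch in Section~\ref{sec:basis-optimal}):
\begin{verbatim}
def Lbar_block(m):   # Lbar_m = [ I_m | -J_m ]
    return np.hstack([np.eye(m), -np.fliplr(np.eye(m))])

def Rbar_block(m):   # Rbar_m = [ -J_m | I_m ]
    return np.hstack([-np.fliplr(np.eye(m)), np.eye(m)])

def extraction_bar(p, nel):
    # identity of size nel with p/2 adjacent columns of Lbar_{nel}
    # and Rbar_{nel} kept on each side (p even)
    nl = p // 2
    return np.hstack([tile_keep(Lbar_block(nel), nl, 'left'),
                      np.eye(nel),
                      tile_keep(Rbar_block(nel), nl, 'right')])
\end{verbatim}
For $p=2$ and ${n_\mathrm{el}}=6$ this reproduces the first matrix given in Example~\ref{ex:basis-uniform-even} below.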
\begin{figure}[t!]
\centering
\subfigure[B-spline-like functions]{\includegraphics[height=4.1cm]{ex_basis_n=6_p=2_l=0}}\hspace*{0.1cm}
\subfigure[second derivatives]{\includegraphics[height=4.1cm]{ex_basis_n=6_p=2_l=2}}
\caption{Example~\ref{ex:basis-uniform-even}: B-spline-like functions and their second derivatives for the space $\overline{\mathbb{S}}_{p,{n_\mathrm{el}},0}$ with $p=2$ and ${n_\mathrm{el}}=6$.} \label{fig:basis.uniform.even:a}
\bigskip
\centering
\subfigure[B-spline-like functions]{\includegraphics[height=4.1cm]{ex_basis_n=2_p=8_l=0}}\hspace*{0.1cm}
\subfigure[second derivatives]{\includegraphics[height=4.1cm]{ex_basis_n=2_p=8_l=2}}\hspace*{0.1cm}
\subfigure[fourth derivatives]{\includegraphics[height=4.1cm]{ex_basis_n=2_p=8_l=4}}\\
\subfigure[sixth derivatives]{\includegraphics[height=4.1cm]{ex_basis_n=2_p=8_l=6}}\hspace*{0.1cm}
\subfigure[eighth derivatives]{\includegraphics[height=4.1cm]{ex_basis_n=2_p=8_l=8}}
\caption{Example~\ref{ex:basis-uniform-even}: B-spline-like functions and their even order derivatives for the space $\overline{\mathbb{S}}_{p,{n_\mathrm{el}},0}$ with $p=8$ and ${n_\mathrm{el}}=2$.} \label{fig:basis.uniform.even:b}
\end{figure}
\begin{example}
\label{ex:basis-uniform-even}
For $p=2$ and ${n_\mathrm{el}}=6$ the matrix in \eqref{eq:basis-uniform-even} has $6$ rows and $8$ columns, and it takes the form
\begin{equation*}
\begin{bmatrix}
-1&1&0&0&0&0&0&0
\\
0&0&1&0&0&0&0&0
\\
0&0&0&1&0&0&0&0
\\
0&0&0&0&1&0&0&0
\\
0&0&0&0&0&1&0&0
\\
0&0&0&0&0&0&1&-1
\end{bmatrix}.
\end{equation*}
For $p=8$ and ${n_\mathrm{el}}=2$ the matrix in \eqref{eq:basis-uniform-even} has $2$ rows and $10$ columns, and it takes the form
\begin{equation*}
\begin{bmatrix}
1&0&0&-1&1&0&0&-1&1&0
\\
0&1&-1&0&0&1&-1&0&0&1
\end{bmatrix}.
\end{equation*}
The graphs of the corresponding B-spline-like functions and their even order derivatives are depicted in Figures~\ref{fig:basis.uniform.even:a} and~\ref{fig:basis.uniform.even:b}. One clearly notices that the functions satisfy the boundary conditions of the space $\overline{\mathbb{S}}_{p,{n_\mathrm{el}},0}$.
\end{example}
With the same line of arguments as in the proof of Proposition~\ref{prop:basis-0}, we arrive at the following result.
\begin{proposition}
\label{prp:basis-uniform}
The functions defined in \eqref{eq:B-spline-uniform-even} are a basis of the space $\overline{\mathbb{S}}_{p,{n_\mathrm{el}},0}$ for $p$ even.
\end{proposition}
\begin{remark}
The basis functions in \eqref{eq:B-spline-0-odd}, \eqref{eq:B-spline-0-even} and \eqref{eq:B-spline-uniform-even} are defined for any number of elements ${n_\mathrm{el}}>2$ and any degree $p$. While the case $p\gg {n_\mathrm{el}}$ is of theoretical interest for analyzing the convergence in $p$, the most interesting practical case is ${n_\mathrm{el}}\gg p$. When ${n_\mathrm{el}}$ is large with respect to $p$, the boundary constraints at the two ends of the interval involve disjoint sets of (scaled cardinal) B-splines and the construction of the bases in \eqref{eq:B-spline-0-odd}, \eqref{eq:B-spline-0-even} and \eqref{eq:B-spline-uniform-even} is particularly easy. Only a few basis functions near the ends have to be modified, and the matrices in \eqref{eq:basis-0-odd}, \eqref{eq:basis-0-even} and \eqref{eq:basis-uniform-even} are essentially identity matrices with the addition of very few columns to the left and to the right. The added columns have either zero entries or come from exchange matrices; see Examples~\ref{ex:basis-0-odd}--\ref{ex:basis-uniform-even}.
\end{remark}
\begin{remark}
A similar construction of B-spline-like bases was proposed in \cite{Hiemstra:2021} for the reduced spline spaces considered in that paper. It is also an extraction procedure, but in terms of open-knot B-splines instead of cardinal B-splines. As a consequence, the corresponding extraction matrices are not known in explicit form and need to be computed algorithmically (in the spirit of the MDB-spline construction \cite{Speleers:2019,Toshniwal:2020}). There is also a restriction on the minimum number of elements so that the boundary constraints at the two ends of the interval are well separated.
\end{remark}
\section{Numerical examples}
\label{sec:numerics}
In this section we consider some numerical tests to show the potential of outlier-free spline Galerkin discretizations. For the sake of brevity, we will just focus on problems with Dirichlet boundary conditions; the remaining cases are completely analogous.
\subsection{Univariate problems}\label{sec:numerics-1D}
We first show the numerical performance of the presented strategies in the univariate setting. We consider both the eigenvalue problem \eqref{eq:prob-eigenv-1D} and second-order problems of the form
\begin{equation}
\label{eq:second-order-prob}
\left\{ \begin{aligned}
- u'' &= f, \quad \text{in } (0,1), \\
u(0)&=u(1)=0,
\end{aligned} \right.
\end{equation}
and we approximately solve them by means of Galerkin discretizations in the outlier-free optimal spline space $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and the alternative reduced spline space $\overline{\mathbb{S}}_{p,n,0}$.
We also compare them with the full spline space $\mathbb{S}_{p,{\boldsymbol \tau},0}^0$
of the same dimension $n$, defined by
$$
\tau_i:=\frac{i}{{n_\mathrm{el}}}, \quad i=0,\ldots, {n_\mathrm{el}}, \quad {n_\mathrm{el}}:=n-p+2,
$$
and denote this space by $\mathbb{S}_{p,n,0}$.
Observe that the maximal grid spacing in the considered discretization spaces of dimension $n$ is different, ranging from
$\frac{1}{n+1}$ to $\frac{1}{n-p+2}$.
\begin{figure}[t!]
\centering
\subfigure[$e_{\omega,l}$ in $\mathbb{S}_{p,200,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_odd_1d}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l}$ in $\mathbb{S}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_freqs_full_odd_1d_zoom}}\hspace*{0.1cm}
\subfigure[zoom out for $\mathbb{S}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_freqs_full_odd_1d}}\\
\subfigure[$e_{u,l}$ in $\mathbb{S}_{p,200,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_odd_1d}}\hspace*{0.1cm}
\subfigure[$e_{u,l}$ in $\mathbb{S}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_eigfuns_full_odd_1d}}
\caption{Example~\ref{ex:eigenvalues1D}: Relative frequency errors $e_{\omega,l}$ in \eqref{eq:error-eigenvalues1D} and $L^2$ relative eigenfunction errors $e_{u,l}$ in \eqref{eq:error-eigenfunctions1D} corresponding to the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\mathbb{S}_{p,n,0}$ for odd degrees $p$ and $n=200$. The frequencies are ordered according to increasing values. No outliers are observed for $\mathbb{S}_{p,n,0}^\mathrm{opt}$. Some outliers of $\mathbb{S}_{p,n,0}$ are outside the visible range in (b) as illustrated in (c).} \label{fig:eigenvalues1D.odd}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[$e_{\omega,l}$ in $\mathbb{S}_{p,200,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_even_1d}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l}$ in $\overline{\mathbb{S}}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_freqs_unif_even_1d}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l}$ in $\mathbb{S}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_freqs_full_even_1d_zoom}}\\
\subfigure[$e_{u,l}$ in $\mathbb{S}_{p,200,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_even_1d}}\hspace*{0.1cm}
\subfigure[$e_{u,l}$ in $\overline{\mathbb{S}}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_eigfuns_unif_even_1d}}\hspace*{0.1cm}
\subfigure[$e_{u,l}$ in $\mathbb{S}_{p,200,0}$]{\includegraphics[height=4.1cm]{ex_eigfuns_full_even_1d}}
\caption{Example~\ref{ex:eigenvalues1D}: Relative frequency errors $e_{\omega,l}$ in \eqref{eq:error-eigenvalues1D} and $L^2$ relative eigenfunction errors $e_{u,l}$ in \eqref{eq:error-eigenfunctions1D} corresponding to the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}$ for even degrees $p$ and $n=200$. The frequencies are ordered according to increasing values. No outliers are observed for $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}$. Some outliers of $\mathbb{S}_{p,n,0}$ are outside the visible range in (c); they are not shown for clarity of the figure.} \label{fig:eigenvalues1D.even}
\end{figure}
\begin{example}\label{ex:eigenvalues1D}
In this example we show the performance of the discretizations for approximating the eigenvalue problem \eqref{eq:prob-eigenv-1D}. Let $(\omega_{h,l})^2$ be the approximate value of the eigenvalue $(\omega_l)^2$; see \eqref{eq:eig-Laplace-type-0-BC}. In Figures~\ref{fig:eigenvalues1D.odd} and~\ref{fig:eigenvalues1D.even} we depict the relative frequency errors
\begin{equation}\label{eq:error-eigenvalues1D}
e_{\omega,l}:=\frac{\omega_{h,l}-\omega_l}{\omega_l}, \quad l=1,\ldots,n,
\end{equation}
obtained by the Galerkin approximations in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}$ for various degrees and $n=200$. We clearly notice that the optimal spline space $\mathbb{S}_{p,n,0}^\mathrm{opt}$ captures all the eigenvalues without any outlier, while still maintaining the accuracy of the full spline space. A similar behavior is observed for the reduced spline space $\overline{\mathbb{S}}_{p,n,0}$.
In the same figures we also report the corresponding $L^2$ relative errors for the eigenfunction approximations
\begin{equation}\label{eq:error-eigenfunctions1D}
e_{u,l}:=\frac{\|u_l-u_{h,l}\|}{\|u_l\|}, \quad l=1,\ldots,n.
\end{equation}
\end{example}
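The computation underlying this example can be reproduced with a compact Python sketch (ours; it assumes SciPy and reuses \texttt{extraction\_odd} from the sketches in Section~\ref{sec:bsplines}): assemble the mass and stiffness matrices of the full uniform B-spline basis by element-wise Gauss--Legendre quadrature, extract the reduced basis, and solve the generalized eigenproblem $K_0 v = \omega_h^2 M_0 v$.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.interpolate import BSpline
from scipy.linalg import eigh

def full_matrices(p, knots):
    # mass/stiffness matrices of all B-splines on [0, 1], assembled
    # element by element with p + 1 Gauss--Legendre points (exact up
    # to polynomial degree 2p + 1)
    t = np.asarray(knots)
    nb = len(t) - p - 1
    splines = []
    for i in range(nb):
        c = np.zeros(nb); c[i] = 1.0
        b = BSpline(t, c, p)
        splines.append((b, b.derivative()))
    M, K = np.zeros((nb, nb)), np.zeros((nb, nb))
    xq, wq = leggauss(p + 1)
    breaks = [s for s in np.unique(t) if 0.0 <= s <= 1.0]
    for a, b2 in zip(breaks[:-1], breaks[1:]):
        x = 0.5 * (b2 - a) * xq + 0.5 * (a + b2)
        w = 0.5 * (b2 - a) * wq
        V = np.array([f(x) for f, _ in splines])
        D = np.array([g(x) for _, g in splines])
        M += (V * w) @ V.T
        K += (D * w) @ D.T
    return M, K

p, n = 3, 50                       # odd degree, so n_el = n + 1
nel = n + 1
knots = [j / nel for j in range(-p, nel + p + 1)]
M, K = full_matrices(p, knots)
C = extraction_odd(p, nel)         # basis of the optimal subspace
M0, K0 = C @ M @ C.T, C @ K @ C.T
omega_h = np.sqrt(eigh(K0, M0, eigvals_only=True))
l = np.arange(1, n + 1)
e_omega = (omega_h - l * np.pi) / (l * np.pi)   # errors as defined above
\end{verbatim}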
\begin{figure}[t!]
\centering
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_conv_opt_odd_1d_hom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_full_odd_1d_hom}}
\caption{Example~\ref{ex:convergence1D-sin}: $L^2$ and $H^1$ error convergence in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\mathbb{S}_{p,n,0}$ in terms of $n$, for odd degrees $p$. The reference convergence order in $n$ is indicated by black triangles.} \label{ex:convergence1D-sin:a}
\bigskip
\centering
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_conv_opt_even_1d_hom}}\hspace*{0.1cm}
\subfigure[$\overline{\mathbb{S}}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_unif_even_1d_hom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_full_even_1d_hom}}
\caption{Example~\ref{ex:convergence1D-sin}: $L^2$ and $H^1$ error convergence in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}$ in terms of $n$, for even degrees $p$. The reference convergence order in $n$ is indicated by black triangles.} \label{ex:convergence1D-sin:b}
\end{figure}
\begin{example}\label{ex:convergence1D-sin}
To test the approximation properties of the reduced spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}$ we consider problem \eqref{eq:second-order-prob} with the manufactured solution
$$u(x)=\sin(2\pi x).$$
The exact solution satisfies all the additional conditions on high-order derivatives defining both the spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}$. In Figures~\ref{ex:convergence1D-sin:a} and~\ref{ex:convergence1D-sin:b} we depict the $L^2$ and $H^1$ errors of the approximate solutions in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}$ in terms of $n$, for various values of $p$. For fixed $p$, the three spaces achieve exactly the same error convergence orders, as expected: $p+1$ in the $L^2$-norm and $p$ in the $H^1$-norm. Note that in this example, per degree of freedom, the error obtained in $\mathbb{S}_{p,n,0}^\mathrm{opt}$ is noticeably smaller than in $\mathbb{S}_{p,n,0}$ and also slightly smaller than in $\overline{\mathbb{S}}_{p,n,0}$ ($p$ even), especially for smaller values of $n$. This can be (partially) attributed to the finer grid spacing.
\end{example}
\begin{figure}[t!]
\centering
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_conv_opt_odd_1d_nonhom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}$ with correction]{\includegraphics[height=4.1cm]{ex_conv_corr_opt_odd_1d_nonhom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_full_odd_1d_nonhom}}
\caption{Example~\ref{ex:convergence1D-boundary}: $L^2$ and $H^1$ error convergence in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\mathbb{S}_{p,n,0}$ in terms of $n$, for odd degrees $p$. Both without and with boundary data correction are considered for the space $\mathbb{S}_{p,n,0}^\mathrm{opt}$. The reference convergence order in $n$ is indicated by black triangles.} \label{ex:convergence1D-boundary:a}
\bigskip
\centering
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_conv_opt_even_1d_nonhom}}\hspace*{0.1cm}
\subfigure[$\overline{\mathbb{S}}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_unif_even_1d_nonhom}} \\
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}$ with correction]{\includegraphics[height=4.1cm]{ex_conv_corr_opt_even_1d_nonhom}}\hspace*{0.1cm}
\subfigure[$\overline{\mathbb{S}}_{p,n,0}$ with correction]{\includegraphics[height=4.1cm]{ex_conv_corr_unif_even_1d_nonhom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_full_even_1d_nonhom}}
\caption{Example~\ref{ex:convergence1D-boundary}: $L^2$ and $H^1$ error convergence in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}$ in terms of $n$, for even degrees $p$. Both without and with boundary data correction are considered for the spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}$. The reference convergence order in $n$ is indicated by black triangles.} \label{ex:convergence1D-boundary:b}
\end{figure}
\begin{example}\label{ex:convergence1D-boundary}
As a test for the strategy presented in Section~\ref{sec:general-BC-1D}, we consider problem \eqref{eq:second-order-prob} with the manufactured solution
$$u(x)=1-\frac{15}{16}x-\frac{1}{(x+1)^4}.$$
The exact solution does not satisfy the additional conditions on high-order derivatives defining the spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}$. In Figures~\ref{ex:convergence1D-boundary:a} and~\ref{ex:convergence1D-boundary:b} we depict the convergence of the approximate solutions in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}$ in terms of $n$, for various values of $p$. There is a substantial loss of accuracy for $p>2$ when approximating the solution in the reduced spline spaces. However, the full convergence order ($p+1$ in the $L^2$-norm and $p$ in the $H^1$-norm) is recovered by applying the boundary data correction described in Section~\ref{sec:general-BC-1D}. Note that in this example, per degree of freedom, the error obtained in $\mathbb{S}_{p,n,0}$ is slightly worse than in the corrected reduced spaces for $p$ even, but better for $p$ odd.
\end{example}
\subsection{Multivariate problems}
We now test the numerical performance of the presented strategies in the bivariate setting. We consider both the eigenvalue problem \eqref{eq:prob-eigenv} and second-order problems of the form \eqref{eq:Laplace} with $d=2$.
We approximately solve them by means of Galerkin discretizations in the reduced tensor-product spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$. Just like in Section~\ref{sec:numerics-1D}, we also compare them with the full tensor-product spline space $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$. All these spaces have the same dimension~$n^2$.
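We remark that, owing to the tensor-product structure, no genuinely bivariate assembly is needed: with $M_0$ and $K_0$ the univariate mass and stiffness matrices of any of the spaces above, the bivariate matrices are $M_0\otimes M_0$ and $K_0\otimes M_0+M_0\otimes K_0$, and the discrete eigenvalues are pairwise sums of the univariate ones. The following short sketch (ours, reusing $M_0$ and $K_0$ from the univariate sketch in Section~\ref{sec:numerics-1D}) exploits this.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# bivariate Galerkin matrices (shown for completeness; they are not
# needed explicitly to obtain the spectrum):
M2 = np.kron(M0, M0)
K2 = np.kron(K0, M0) + np.kron(M0, K0)

# the 2D spectrum follows directly from the 1D one
lam = eigh(K0, M0, eigvals_only=True)            # (omega_{h,l})^2
omega_2d = np.sqrt(lam[:, None] + lam[None, :])  # omega_{h1,h2,l1,l2}
l = np.arange(1, len(lam) + 1)
omega_ex = np.pi * np.sqrt((l**2)[:, None] + (l**2)[None, :])
e_omega_2d = (omega_2d - omega_ex) / omega_ex
\end{verbatim}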
\begin{figure}[t!]
\centering
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{p,50,0}^\mathrm{opt}\otimes\mathbb{S}_{p,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_odd_2d_lin}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{p,50,0}\otimes\mathbb{S}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_freqs_full_odd_2d_lin_zoom}}\hspace*{0.1cm}
\subfigure[zoom out for $\mathbb{S}_{p,50,0}\otimes\mathbb{S}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_freqs_full_odd_2d_lin}}\\
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{p,50,0}^\mathrm{opt}\otimes\mathbb{S}_{p,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_odd_2d_lin}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{p,50,0}\otimes\mathbb{S}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_eigfuns_full_odd_2d_lin}}
\caption{Example~\ref{ex:eigenvalues2D}: Relative frequency errors $e_{\omega,l_1,l_2}$ in \eqref{eq:error-eigenvalues2D} and $L^2$ relative eigenfunction errors $e_{u,l_1,l_2}$ in \eqref{eq:error-eigenfunctions2D} corresponding to the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ for odd degrees $p$ and $n=50$. The frequencies are ordered according to increasing values. No outliers are observed for $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$. Several outliers of $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ are outside the visible range in (b) as illustrated in (c).} \label{fig:eigenvalues2D.odd}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{p,50,0}^\mathrm{opt}\otimes\mathbb{S}_{p,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_even_2d_lin}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\overline{\mathbb{S}}_{p,50,0}\otimes\overline{\mathbb{S}}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_freqs_unif_even_2d_lin}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{p,50,0}\otimes\mathbb{S}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_freqs_full_even_2d_lin_zoom}}\\
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{p,50,0}^\mathrm{opt}\otimes\mathbb{S}_{p,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_even_2d_lin}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\overline{\mathbb{S}}_{p,50,0}\otimes\overline{\mathbb{S}}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_eigfuns_unif_even_2d_lin}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{p,50,0}\otimes\mathbb{S}_{p,50,0}$]{\includegraphics[height=4.1cm]{ex_eigfuns_full_even_2d_lin}}
\caption{Example~\ref{ex:eigenvalues2D}: Relative frequency errors $e_{\omega,l_1,l_2}$ in \eqref{eq:error-eigenvalues2D} and $L^2$ relative eigenfunction errors $e_{u,l_1,l_2}$ in \eqref{eq:error-eigenfunctions2D} corresponding to the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}\otimes\overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ for even degrees $p$ and $n=50$. The frequencies are ordered according to increasing values. No outliers are observed for $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}\otimes\overline{\mathbb{S}}_{p,n,0}$. Several outliers of $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ are outside the visible range in (c); they are not shown for clarity of the figure.} \label{fig:eigenvalues2D.even}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{1,50,0}^\mathrm{opt}\otimes\mathbb{S}_{1,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_odd_2d_p=1}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{3,50,0}^\mathrm{opt}\otimes\mathbb{S}_{3,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_odd_2d_p=3}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{5,50,0}^\mathrm{opt}\otimes\mathbb{S}_{5,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_odd_2d_p=5}}\\
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{1,50,0}^\mathrm{opt}\otimes\mathbb{S}_{1,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_odd_2d_p=1}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{3,50,0}^\mathrm{opt}\otimes\mathbb{S}_{3,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_odd_2d_p=3}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{5,50,0}^\mathrm{opt}\otimes\mathbb{S}_{5,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_odd_2d_p=5}}
\caption{Example~\ref{ex:eigenvalues2D}: Relative frequency errors $e_{\omega,l_1,l_2}$ in \eqref{eq:error-eigenvalues2D} and $L^2$ relative eigenfunction errors $e_{u,l_1,l_2}$ in \eqref{eq:error-eigenfunctions2D} corresponding to the spline space $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ for odd degrees $p$ and $n=50$. The frequencies are ordered according to increasing values of their univariate counterparts in \eqref{eq:eig-Laplace2D}. No outliers are observed.} \label{fig:eigenvalues2D.odd.surface}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{2,50,0}^\mathrm{opt}\otimes\mathbb{S}_{2,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_even_2d_p=2}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{4,50,0}^\mathrm{opt}\otimes\mathbb{S}_{4,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_even_2d_p=4}}\hspace*{0.1cm}
\subfigure[$e_{\omega,l_1,l_2}$ in $\mathbb{S}_{6,50,0}^\mathrm{opt}\otimes\mathbb{S}_{6,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_freqs_opt_even_2d_p=6}}\\
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{2,50,0}^\mathrm{opt}\otimes\mathbb{S}_{2,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_even_2d_p=2}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{4,50,0}^\mathrm{opt}\otimes\mathbb{S}_{4,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_even_2d_p=4}}\hspace*{0.1cm}
\subfigure[$e_{u,l_1,l_2}$ in $\mathbb{S}_{6,50,0}^\mathrm{opt}\otimes\mathbb{S}_{6,50,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_eigfuns_opt_even_2d_p=6}}
\caption{Example~\ref{ex:eigenvalues2D}: Relative frequency errors $e_{\omega,l_1,l_2}$ in \eqref{eq:error-eigenvalues2D} and $L^2$ relative eigenfunction errors $e_{u,l_1,l_2}$ in \eqref{eq:error-eigenfunctions2D} corresponding to the spline space $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ for even degrees $p$ and $n=50$. The frequencies are ordered according to increasing values of their univariate counterparts in \eqref{eq:eig-Laplace2D}. No outliers are observed.} \label{fig:eigenvalues2D.even.surface}
\end{figure}
\begin{example}\label{ex:eigenvalues2D}
In this example we show the performance of the discretizations for approximating the eigenvalue problem \eqref{eq:prob-eigenv}.
Let $(\omega_{h_1,h_2,l_1,l_2})^2$ be the approximate value of the eigenvalue $(\omega_{l_1,l_2})^2$; see \eqref{eq:eig-Laplace2D-type-0-BC}. In Figures~\ref{fig:eigenvalues2D.odd} and~\ref{fig:eigenvalues2D.even} we depict the relative frequency errors
\begin{equation}\label{eq:error-eigenvalues2D}
e_{\omega,l_1,l_2}:=
\frac{\omega_{h_1,h_2,l_1,l_2}-\omega_{l_1,l_2}}{\omega_{l_1,l_2}}, \quad l_1,l_2=1,\ldots,n,
\end{equation}
obtained by the Galerkin approximations in $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ for various degrees and $n=50$.
We clearly notice that the reduced spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$ capture all the eigenvalues without any outlier, while still maintaining the accuracy of the full tensor-product spline space.
In the same figures we also report the corresponding $L^2$ relative errors for the eigenfunction approximations
\begin{equation}\label{eq:error-eigenfunctions2D}
e_{u,l_1,l_2}:=\frac{\|u_{l_1,l_2}-u_{h_1,h_2,l_1,l_2}\|}{\|u_{l_1,l_2}\|}, \quad l_1,l_2=1,\ldots,n.
\end{equation}
Finally, we remark that the frequency errors and eigenfunction errors seem to approximately lie on several subcurves instead of a single curve as in the univariate case (see Example~\ref{ex:eigenvalues1D}). This can be simply explained by the fact that they actually lie on a bivariate surface due to the bivariate nature of the problem; see \eqref{eq:eig-Laplace2D} and \eqref{eq:approx-eig-Laplace2D}. This is illustrated in Figures~\ref{fig:eigenvalues2D.odd.surface} and~\ref{fig:eigenvalues2D.even.surface} for the space $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ considering various degrees.
\end{example}
\begin{figure}[t!]
\centering
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_conv_opt_odd_2d_nonhom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ with correction]{\includegraphics[height=4.1cm]{ex_conv_corr_opt_odd_2d_nonhom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_full_odd_2d_nonhom}}
\caption{Example~\ref{ex:convergence2D-boundary}: $L^2$ and $H^1$ error convergence in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ in terms of $n$, for odd degrees $p$. Both without and with boundary data correction are considered for the space $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$. The reference convergence order in $n$ is indicated by black triangles.} \label{ex:convergence2D-boundary:a}
\bigskip
\centering
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$]{\includegraphics[height=4.1cm]{ex_conv_opt_even_2d_nonhom}}\hspace*{0.1cm}
\subfigure[$\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_unif_even_2d_nonhom}} \\
\subfigure[$\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ with correction]{\includegraphics[height=4.1cm]{ex_conv_corr_opt_even_2d_nonhom}}\hspace*{0.1cm}
\subfigure[$\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$ with correction]{\includegraphics[height=4.1cm]{ex_conv_corr_unif_even_2d_nonhom}}\hspace*{0.1cm}
\subfigure[$\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$]{\includegraphics[height=4.1cm]{ex_conv_full_even_2d_nonhom}}
\caption{Example~\ref{ex:convergence2D-boundary}: $L^2$ and $H^1$ error convergence in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ in terms of $n$, for even degrees $p$. Both without and with boundary data correction are considered for the spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$. The reference convergence order in $n$ is indicated by black triangles.} \label{ex:convergence2D-boundary:b}
\end{figure}
\begin{example}\label{ex:convergence2D-boundary}
As a test for the strategy presented in Section~\ref{sec:general-BC-2D} in the bivariate setting, we consider problem \eqref{eq:Laplace} with the manufactured solution
$$u(x_1,x_2)=x_1(1-\cos(2\pi x_1))(1-e^{x_2})(1-e^{1-x_2}).$$
The exact solution does not satisfy the additional conditions on high-order derivatives defining the spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$ and $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$. In Figures~\ref{ex:convergence2D-boundary:a} and~\ref{ex:convergence2D-boundary:b} we depict the convergence of the approximate solutions in the spline spaces $\mathbb{S}_{p,n,0}^\mathrm{opt}\otimes\mathbb{S}_{p,n,0}^\mathrm{opt}$, $\overline{\mathbb{S}}_{p,n,0}\otimes \overline{\mathbb{S}}_{p,n,0}$ and $\mathbb{S}_{p,n,0}\otimes\mathbb{S}_{p,n,0}$ in terms of $n$, for various values of $p$. Like in the univariate setting, we see a substantial loss of accuracy for $p>2$ when approximating the solution in the reduced spline spaces. However, the full convergence order is recovered by applying the boundary data correction described in Section~\ref{sec:general-BC-2D}.
\end{example}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have presented and theoretically analyzed the use of optimal spline subspaces in isogeometric Galerkin discretizations of eigenvalue problems related to the Laplace operator subject to standard homogeneous boundary conditions (Dirichlet/Neumann/mixed) in the univariate and in the multivariate tensor-product case.
By completing the theory started in \cite{Sande:2019} for periodic boundary conditions, we have proved that in the optimal spline subspaces described in \cite{Floater:2017,Floater:2018}, as suggested in \cite{Sande:2019,Sande:2020}, the whole spectrum is well approximated and no spurious numerical values can appear, thus resulting in accurate outlier-free discretizations.
The main contributions of the paper are twofold:
\begin{itemize}
\item we have provided explicit error estimates for Ritz projectors in the considered optimal spline subspaces;
\item by exploiting the above estimates, we have derived explicit error estimates for the approximated eigenfunctions and frequencies in terms of the exact ones.
\end{itemize}
The first item shows that the optimal spline subspaces possess full approximation power, while the second item implies that, for a fixed number of degrees of freedom, there is convergence in $p$ of the whole discrete spectrum (thus no outliers).
The optimal spaces we are dealing with are subspaces of the standard (maximally smooth) spline space defined on certain uniform knot sequences (whose structure depends on the parity of the degree $p$). The subspaces are identified and simply described by mimicking the behavior of vanishing derivatives (up to order $p$) of the exact eigenfunctions at the boundary.
It turns out that the optimal spline subspaces fully analyzed here are very similar (and actually identical in several cases) to those proposed as trial spaces for outlier removal in \cite{Hiemstra:2021}; see also \cite{Deng:2021}.
However, for a fixed type of boundary condition (Dirichlet/Neumann/mixed) the subspaces in \cite{Hiemstra:2021} can slightly differ from ours in the partition and in the maximum order of vanishing derivatives at the boundary, depending on the parity of the degree $p$; see Section~\ref{sec:outlier-free}.
The subspaces in \cite{Hiemstra:2021} were previously introduced for uniform knot sequences in \cite{Sogn:2018,Takacs:2016}, and further analyzed in \cite[Section~5.2]{Sande:2020}. A complete error analysis for such subspaces, when they differ from the optimal ones addressed in the present paper, is a worthwhile subject for future research. Nevertheless, their strong similarity with the optimal subspaces discussed here suggests that they are ``almost optimal'' and motivates their effectiveness for outlier removal, which was already clearly demonstrated numerically in \cite{Hiemstra:2021} and also in Section~\ref{sec:numerics}.
As a side result, we have also illustrated that the outlier-free optimal spline subspaces can be exploited to obtain discretization schemes for general (non-homogeneous) problems with full approximation power by providing a suitable compensation of the boundary conditions.
Moreover, the discussed optimal spline subspaces can be equipped with B-spline-like bases. We have provided an explicit expression of these bases in terms of cardinal B-splines and almost trivial extraction operators; see Section~\ref{sec:bsplines}. This makes the optimal subspaces completely equivalent to the full spline space from the computational point of view.
Summarizing, the results of this paper fully uncover and theoretically explain the relation between optimal spline subspaces and eigenvalue/eigenfunction convergence in isogeometric Galerkin discretizations, and fix the outlier issue both from the theoretical and the computational point of view.
Finally, we remark that the paper focuses on Galerkin discretizations but similar results can be expected when dealing with collocation methods.
\section*{Acknowledgements}
This work was supported
by the Beyond Borders Programme of the University of Rome Tor Vergata through the project ASTRID (CUP E84I19002250005)
and
by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006).
The authors are members of Gruppo Nazionale per il Calcolo Scientifico, Istituto Nazionale di Alta Matematica.
\section{\label{I}Introduction}
The injection of thermal and kinetic energy by stellar winds and supernova (SN)
explosions drives transonic turbulence in the interstellar medium (ISM) and
produces an inhomogeneous, multiphase system
\citep{Elmegreen:2004,Scalo:2004,MacLow:2004}.
The outer scale of turbulent motions in the ISM consistently suggested by
observations, theory and simulations is of order $10\text{--}100\,{\rm pc}$, and the
turbulent scales extend to a fraction of a parsec \citep{Armstrong:1995}.
Understanding the properties and nature of a turbulent flow requires the
separation of mean and fluctuating quantities. Such a separation is well
understood for statistically homogeneous random flows where a number of
averaging procedures are available. Volume or area averaging are most important
in astronomy, while numerical simulations provide a further opportunity to
average over time. Under favourable conditions (defined by ergodic theorems
and hypotheses), the resulting averages are equivalent to the statistical ensemble
averages employed in theory
\citep[e.g.,][]{Monin:1975,Panchev:1971,Tennekes:1972}. The
ensemble averages are rarely accessible in applications,
as their calculation requires the availability of a large number
of statistically independent realizations of the random processes.
Space and time averaging procedures are consistent with ensemble averaging
provided they satisfy the Reynolds rules of averaging, such as
$\mean{f+g}=\mean{f}+\mean{g}$, $\mean{\mean{f}g}=\mean{f}\mean{g}$
and $\mean{\mean{f}}=\mean{f}$,
where $f$ and $g$ are random functions and angular brackets denote averaging
\citep[e.g., Sect.~3.1 in][]{Monin:1975}. Volume and time averaging only satisfy
the latter Reynolds rule in an approximate manner when the scales of
variations of the mean quantities and the fluctuations differ significantly (the
requirement of scale separation between the averaged quantities and the
fluctuations) and the averaging scale is large in comparison with the scale of
the fluctuations and small in comparison with that of the mean quantities. In
practice, the mean quantities need to be homogeneous or time-independent for
the ensemble and volume (or time) averages to be consistent with each other.
The outer scale of the interstellar turbulence is comparable to the scale
height of the gas density distribution in spiral galaxies (about $0.1\,{\rm kpc}$ and
$0.5\,{\rm kpc}$ for the cold and warm diffuse \ion{H}{i}, respectively). Therefore,
the interstellar turbulent flow cannot be considered statistically homogeneous
apart from along the horizontal directions. However, numerical simulations of
the supernova-driven, multi-phase ISM have relatively small horizontal domains
of order $1\,{\rm kpc}\times1\,{\rm kpc}$ or less
\citep[e.g.,][]{Korpi:1999b, Joung:2006, deAvillez:2007, deAvillez:2012a,
deAvillez:2012b, Gressel:2008, Federrath:2010, Hill:2012, Gent:2013b, Gent:2013a,
Gressel:2013, Bendre:2015, Walch:2015, Girichidis:2016a, Girichidis:2016b}.
Meanwhile, the ISM has a wide range of density and velocity structures (e.g.,
those related to gas clouds, galactic outflows and spiral patterns) that cover
continuously the range of scales from $1 \,{\rm pc}$ to $10\,{\rm kpc}$. Therefore, scale
separation between the random and large-scale ISM flows is questionable at best.
This poses difficulties for the interpretation of numerical simulations. Similar
difficulties arise in the interpretation of observations, but numerical simulations
have exposed the problems especially clearly.
The division of the Navier--Stokes and magnetohydrodynamic (MHD) equations into
evolution equations for the mean flow and the fluctuations has been explored for
both ensemble averaging and filtering of the fluctuations (also known as
coarse-graining); i.e., volume averaging via convolution with a compact kernel.
The Reynolds rules of averaging are not satisfied for this procedure but this is
not an obstacle to developing a mathematically sound formalism that leads to
evolution equations for averaged quantities and their moments \citep{Germano:1992}.
The most widely known application of this technique is to subgrid models for large
eddy simulations of turbulent flows \citep{Meneveau:2012}. \citet{Eyink:2018} and
\citet{Aluie:2017} provide details and a review of this approach to hydrodynamic
and MHD turbulence \citep[see also][]{Eyink:1995,Eyink:2015}.
An important advantage of the filtering approach is that, together with
ensemble averaging, it does not require scale separation between the mean
fields and their fluctuations \citep[e.g.,][]{Aluie:2017}.
The separation of the mean and fluctuating quantities in a random flow is of
crucial significance in the theory of mean-field turbulent dynamos, and the
problem of averaging has been exposed in this area earlier than in other
applications. The mean-field dynamo theory is based on ensemble averaging but
numerical simulations rely on various volume and time averaging procedures. For
example, the separation of the magnetic field into mean and fluctuating
components often involves averaging over the whole computational volume or, in
systems stratified along the $z$-direction due to gravity, averaging in the
$(x,y)$-planes \citep[horizontal averaging; see][]{Brandenburg:2005}. The
resulting mean magnetic field is either perfectly uniform or only dependent on
$z$. However, these constraints on the form of the mean magnetic field are
artificial and unphysical. An inhomogeneous system, such as the ISM, is
expected to produce a spatially complex mean field, which is ignored in the
simple volume or horizontal averaging techniques described above. A further
complication with horizontal averaging is the requirement that
$\langle B_z \rangle$ vanishes when periodic boundary conditions are used in $x$
and $y$; otherwise the solenoidality of the mean magnetic field cannot be guaranteed
\citep[e.g.,][]{Gent:2013a}. Furthermore, the kinematic mean-field dynamo
action, with homogeneous transport coefficients $\alpha$ and $\beta$, in infinite
space produces an inhomogeneous mean magnetic field that varies at all
wave-numbers below $\alpha/\beta$, with the dominant mode having the
wave-number $\alpha/(2\beta)$ \citep[e.g.,][]{Sokoloff:1983}. \emph{The spatial
structure of any mean field is controlled by the physical properties of the system
rather than by the size of the computational domain.} The only advantage of
horizontal averaging is that it obeys the Reynolds rules, but this is often
achieved at the expense of physical validity. Another option, consistent
with the Reynolds rules, is to use azimuthal averaging to obtain an axially
symmetric mean magnetic field in global simulations of dynamo action in a
rotating spherical object \citep[see][for a review]{Simard:2016}. This approach is
easier to justify but still it excludes physically admissible azimuthal
variations of the mean field.
We discuss an alternative approach to averaging based on Gaussian smoothing
as suggested by \citet{Germano:1992}, and employ it to obtain
the mean fields in simulations of the multi-phase, supernova-driven ISM. Averaging
with a Gaussian (or another) kernel is inherent in astronomical observations,
where such smoothing is applied either during data reduction or stems from the
finite width of a telescope beam. This approach has been applied by
\citet{Gent:2013a} to the simulated magnetic field; here we extend it to the
velocity and density fields and, importantly, energy densities, which represent
higher-order statistical moments. In particular, kinetic energy density in a
compressible flow represents a third-order statistical moment and requires
special attention.
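To fix ideas, the decomposition can be written in a few lines of Python (an illustration only, with an arbitrary smoothing length and a reduced grid; this is not the analysis pipeline used for the results below):
\begin{verbatim}
# illustration (ours) of mean/fluctuation separation by Gaussian
# smoothing: <f>(x) = (G_ell * f)(x), f' = f - <f>
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
dx = 4.0      # grid spacing in pc, as in the simulations
ell = 50.0    # smoothing length in pc (illustrative value only)
f = rng.standard_normal((64, 64, 136))   # stand-in for, e.g., rho or B_x

# 'wrap' mimics the horizontal periodicity; the true vertical
# boundary conditions differ from this simplification
f_mean = gaussian_filter(f, sigma=ell / dx, mode='wrap')
f_fluc = f - f_mean                      # f = <f> + f'

# kernel averaging violates the Reynolds rule <<f>> = <f>:
f_mean2 = gaussian_filter(f_mean, sigma=ell / dx, mode='wrap')
print(np.max(np.abs(f_mean2 - f_mean)))  # nonzero: not idempotent
\end{verbatim}
The choice of the smoothing length is discussed in Section~\ref{sect:power_spectra}.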
A summary of the ISM simulations is presented in
Section~\ref{sect:model_summary}, and Section~\ref{sect:mean_fields} introduces
averaging based on Gaussian smoothing. Various approaches to the
selection of the smoothing length are discussed in
Section~\ref{sect:power_spectra}. Section~\ref{sect:energy_densities} analyses
the behaviour of magnetic and kinetic energy densities.
Section~\ref{sect:dynamo} details the effects of the amplified mean field on
the magnetic and kinetic energies.
\section{Simulations of the multiphase ISM}\label{sect:model_summary}
We use our earlier simulations of the ISM, described in detail by
\citet{Gent:2013a,Gent:2013b}. The simulations involve solving the full,
compressible, non-ideal MHD equations with parameters generally typical of the
Solar neighbourhood in a three-dimensional Cartesian, shearing box with
radial ($x$) and azimuthal ($y$) extents of $L_x=L_y=1.024\,{\rm kpc}$ and vertical
($z$) extent $L_z=1.086\,{\rm kpc}$ on either side of the mid-plane at $z=0$.
Our numerical resolution is $\Delta=\Delta x = \Delta y = \Delta z = 4 \,{\rm pc}$,
using $256$ grid points in $x$ and $y$ and $544$ in $z$. \citet{Gent:2013a}
demonstrate that this resolution is sufficient to reproduce the known solutions
for expanding SN remnants in the Sedov--Taylor and momentum-conserving phases.
Details of the numerical implementation and its comparison with some other
similar simulations can be found in Appendix~\ref{sect:parameters}.
The mass conservation, Navier--Stokes, heat and induction equations are solved for
mass density $\rho$, velocity $\vect{u}$, specific entropy $s$, and magnetic
vector potential $\vect{A}$ (such that $\vect{B}=\nabla\times\vect{A}$).
The Navier--Stokes equation includes a fixed vertical gravity force with
contributions from the stellar disk and dark halo. The initial state is an
approximate hydrostatic equilibrium. The Galactic differential rotation is
modelled by a background shear flow $\vect{U} = (0,-q \Omega x, 0)$, where $q$
is the shear parameter and $\Omega$ is the Galactic angular velocity.
Here we use $q=1$, as in a flat rotation curve, and $\Omega=50\,{\rm km\,s^{-1}}\,{\rm kpc}^{-1}$,
twice that of the Solar neighbourhood in order to enhance the mean-field dynamo
action and thus reduce the computational time.
The velocity $\vect{u}$ is the perturbation velocity in the rotating
frame, which remains after the subtraction of the background shear flow from the
total velocity. However, it still contains a large-scale vertical component due
to an outflow driven by the SN activity.
Both Type II and Type I supernova explosions (SNe) are included in the simulations.
These differ only in their vertical distribution and frequency. The frequencies used
correspond to those in the Solar neighbourhood. We introduce Type II SNe at a mean
rate per surface area of $\nu_{\rm II} = 25 \,{\rm kpc}^{-2} \,{\rm Myr}^{-1}$.
Type I SNe have a mean rate per surface area
of $\nu_{\rm I} = 4 \,{\rm kpc}^{-2} \,{\rm Myr}^{-1}$.
The SN sites are distributed randomly in the horizontal planes. Their vertical
positions have Gaussian distributions with scale heights $h_{\rm II}=0.09\,{\rm kpc}$
and $h_{\rm I} = 0.325 \,{\rm kpc}$ for SN\,II and SN\,I, respectively.
No spatial clustering of the SNe is included since the size of superbubbles produced
by SN clustering is comparable to the horizontal size of the computational domain.
Simulations in a domain of significantly larger size are required to capture the
effects of the SN clustering. \citet{deAvillez:2007} include SN clustering in their
simulations and obtain a correlation scale of the random flows of $75\,{\rm pc}$,
comparable to that obtained from the correlation analysis of this model
\citep[see][]{Hollins:2017}.
Each SN is initialised as an injection of $0.5\times10^{51}\,{\rm erg}$ of thermal energy
and a variable amount of kinetic energy that depends on the local gas density and
has the mean value $0.4\times10^{51}\,{\rm erg}$.
We include optically thin radiative cooling with a parametrised cooling
function. For $T<10^5\,{\rm K}$, we adopt a power-law fit to the `standard
equilibrium' pressure--density curve of \citet[][]{Wolfire:1995}, as given in
\citet[][]{Sanchez-Salcedo:2002}. For $T>10^5\,{\rm K}$, we use the cooling function
of \citet[][]{Sarazin:1987}. Photoelectric heating is also included as in
\citet[][]{Wolfire:1995}; it decreases with $|z|$ on a length scale comparable
to the gas scale height near the Sun. The system exhibits distinct hot, warm
and cold gas phases identifiable as peaks in the joint probability distribution
of the gas density and temperature.
Shock-capturing kinetic, thermal and magnetic diffusivities (in addition to
background diffusivities) are included to resolve shock discontinuities and
maintain numerical stability in the Navier--Stokes, heat and induction
equations. Periodic boundary conditions are used in $y$, and sheared-periodic
boundary conditions in $x$. Open boundary conditions, permitting outflow and
inflow, are used at the vertical boundaries at $z=\pm L_z$.
\citet{Gent:2013b,Gent:2013a} provide further details on the boundary
conditions used and on the other implementations briefly described above.
Starting with a weak azimuthal magnetic field at the mid-plane,
the system is susceptible to the dynamo instability. Dynamo action can be
identified with exponential growth of the magnetic field, saturating after about
$1.4\,{\rm Gyr}$ at a level of $2.5\,\mu{\rm G}$, comparable to observational estimates for
the solar neighbourhood \citep{Gent:2013a}. The magnetic field has energy at a
scale comparable to the size of the computational domain, suggesting a
mean-field dynamo action \citep{Gent:2013b}.
We analyse snapshots in the range $0.8 \leq t \leq 1.725\,{\rm Gyr}$. Regarding the
magnetic field and dynamo action, three distinct periods can be identified. For
$0.8 \leq t < 1.1\,{\rm Gyr}$, the magnetic energy is low compared to the thermal and
kinetic energies and the mean-field dynamo is in its kinematic stage. The dynamo
adjusts itself to a non-linear stage at $1.1 \leq t < 1.45\,{\rm Gyr}$ as the magnetic
energy reaches approximate equipartition with kinetic energy of the random flow.
Finally, at $1.45 \leq t \leq 1.725\,{\rm Gyr}$, the mean-field dynamo saturates and the
magnetic energy slightly exceeds the kinetic energy \citep[see][]{Gent:2013b}.
Since the evolution of the magnetic field is expected to significantly affect
the structure of the gas density and velocity, each period is considered
independently. The results are illustrated in the figures shown below using the
snapshot at $t=1.6\,{\rm Gyr}$.
\section{Mean fields and fluctuations in a compressible random flow}
\label{sect:mean_fields}
Averaging procedures can be used to represent a physical variable $f$
as a superposition of a mean $\langle f \rangle$ and fluctuating $f'$ parts,
$f=\langle f \rangle + f'$. Ensemble averaging is used in
most theoretical contexts. Ensemble-averaged quantities do not need to be
independent of any spatial or temporal variable. However, volume and
time averaging are often the only options available in simulations and
observations. For example, the average over a volume $V$,
\begin{align}\label{eq:reynolds}
\langle f \rangle_{V} = \frac{1}{V}\int_{V} f(\vect{x}') \, \dd^{3}\vect{x'}\,,
\end{align}
satisfies the Reynolds rules of averaging, including
\begin{align}
\langle f \langle g \rangle_{V} \rangle_{V}
&= \langle f \rangle_{V} \langle g \rangle_{V}\,,
\qquad
\langle \langle f \rangle_{V} \rangle_{V} = \langle f \rangle_{V}\,,
\label{eq:Reynolds_rules}\\
\intertext{leading to}
\langle f' \rangle_{V} &= 0\,, \qquad
\langle \langle f \rangle_{V} \, g' \rangle_{V} = 0\,,
\label{eq:conditions_1}
\end{align}
for the random variables $f$ and $g$.
This allows evolutionary equations for the central moments
$\langle f' g' \rangle_{V}$, $\langle f' g' h' \rangle_{V}$
(with another random variable $h$), etc.,
to be derived by averaging the governing equations using relations such as
\citep[e.g.,][]{Monin:1975},
\begin{align}
\langle u'_{i} u'_{j} \rangle_{V}& = \langle u_{i} u_{j} \rangle_{V} -
\langle u_{i} \rangle_{V} \langle u_{j} \rangle_{V}\,, \notag \\
\langle u'_{i} u'_{j} u'_{k} \rangle_{V} &=
\langle u_{i} u_{j} u_{k} \rangle_{V}
-\langle u_i\rangle_V \langle u'_j u'_k\rangle_V
-\langle u_j \rangle_V \langle u'_k u'_i\rangle_V \notag \\
&- \langle u_k \rangle_V \langle u'_i u'_j \rangle_V
- \langle u_i \rangle_V \langle u_j \rangle_V \langle u_k\rangle_V\,,
\label{eq:central_moments_equations}
\end{align}
in the case of the velocity field $\vect{u}$.
In numerical simulations, $V$ is often the whole computational domain,
or some significant part of it, or a (thin) slice parallel to one
of the coordinate planes, as in averages over a horizontal plane $(x,y)$.
Another widespread averaging procedure is azimuthal averaging, appropriate
when the mean quantities are axially symmetric. Such averages
are restricted to be partially or fully independent of position, in all three
directions in the case of volume averages, in two dimensions for horizontal
averages and in the azimuth for axial averages. As we discuss in
Section~\ref{I}, these constraints may be --- and often are --- unreasonably
restrictive. Moreover, any observational data obtained with a finite resolution
represent a convolution of the quantity observed with the telescope beam and
are free to vary with position. It is therefore desirable to apply to numerical
results an averaging procedure compatible with the observational procedures,
in a manner that does not impose unjustifiable constraints on the averaged quantities.
This is the goal of this paper.
\begin{table*}
\centering
\caption{\label{tab:Table1}Notation for the total (T), mean (M) and fluctuating
(F) fields, their Fourier spectra, energy densities and volume-integrated
energies.
}
\begin{tabular}{l ccc c ccc c ccc c ccc}%
\hline
&&& \multicolumn{3}{c}{Spectrum}
&&\multicolumn{3}{c}{Energy density} &&\multicolumn{3}{c}{Energy}\\
\cline{5-7} \cline{9-11} \cline{13-15}\noalign{\vskip 3pt}
&T &M &F &T &M &F &&T &M &F &&T &M &F \\
\hline
Magnetic field
&$\vect{B}$ & $\vect{B}_\ell$ & $\vect{b}$
&$S_B(k)$ & $S_{B_\ell}(k)$ & $S_b(k)$ &
&$\langle e_B\rangle_\ell$ & $e_{B_\ell}$ & $e_b$ &
&${\cal E}_B$ & ${\cal E}_{B_\ell}$ & ${\cal E}_b$ \\
Gas density
& $\rho$ & $\rho_\ell$ & $\rho'$
&$S_\rho(k)$ & $S_{\rho_\ell}(k)$ & $S_{\rho'}(k)$ &
&$\text{---}$ & $\text{---}$ & $\text{---}$ &
&$\text{---}$ & $\text{---}$ & $\text{---}$ \\
Gas velocity
& $\vect{u}$ & $\vect{u}_\ell$ & $\vect{u}'$
&$S_u(k)$ & $S_{u_\ell}(k)$ & $S_{u'}(k)$ &
&$\langle e_{\rm k}\rangle_\ell$ & $e_{\rm s}$ & $e_{\rm st}, \, e_{\rm t}$ &
&${\cal E}_{\rm k}$ & ${\cal E}_{\rm s}$ & ${\cal E}_{\rm st}, \, {\cal E}_{\rm t}$ \\
\hline
\end{tabular}
\end{table*}
\subsection{The filtering approach to averaging}\label{sect:filtering}
A local mean part of a random field $f(\vect{x})$, denoted $\langle
f\rangle_\ell$, is obtained by spatial smoothing (filtering) of its
fluctuations at scales $l<\ell$, with a certain \textit{smoothing length}
$\ell$, using a smoothing kernel $\mathcal{G}_\ell$:
\begin{equation}\label{eq:filter}
\langle f(\vect{x}) \rangle_\ell
= \int_V f(\vect{x}')\mathcal{G}_\ell(\vect{x}-\vect{x}') \,\dd^{3}\vect{x}'\,,
\end{equation}
where integration extends over the whole volume where $f(\vect{x})$ is defined.
The filtering kernel is normalized,
\begin{equation}\nonumber
\int_V \mathcal{G}_\ell(\vect{x}-\vect{x}')\,\dd^3\vect{x}' = 1\,,
\end{equation}
and assumed to be symmetric,
\begin{equation}\label{Gsym}
\int_{V} \vect{x}\, \mathcal{G}_\ell(\vect{x}) \,\dd^3\vect{x} = 0\,.
\end{equation}
To ensure that fluctuations in kinetic energy density are positive definite,
the kernel must be positive for all $\vect{x}$ \citep[][and references
therein]{Aluie:2017}.
The fluctuation field is obtained as
\begin{equation}\label{fluct}
f'(\vect{x}) = f(\vect{x}) -\langle f(\vect{x}) \rangle_{\ell}\,,
\end{equation}
(with the link between the prime and the scale $\ell$ being understood).
This procedure retains the spatial structure of both the mean field and the
fluctuations. We discuss below physically motivated choices for the smoothing
length $\ell$.
Thus defined, the averaging procedure does not satisfy the Reynolds rules
outlined in equations~\eqref{eq:Reynolds_rules} and \eqref{eq:conditions_1}.
In particular, the mean of the fluctuations does not vanish, repeated averaging
affects the mean field $\langle f(\vect{x})\rangle_\ell$, and the mean and
fluctuating fields are not uncorrelated:
\begin{equation}\label{eq:conditions_filtering}
\langle f' \rangle_{\ell} \neq 0\,, \qquad
\langle \langle f \rangle_\ell \rangle_\ell \neq \langle f \rangle_\ell\,,
\qquad
\langle \langle f \rangle_\ell f' \rangle_\ell \neq 0\,.
\end{equation}
As a consequence, the standard relations between statistical moments of total
fields and their fluctuations, shown in
equation~\eqref{eq:central_moments_equations}, are no longer valid.
To address these complications, \citet{Germano:1992} introduced generalised
statistical moments $\mu(f, g)$, $\mu(f, g, h)$, \ldots, of random fields
$f(\vect{x})$, $g(\vect{x})$ and $h(\vect{x})$ to ensure that the mathematical
soundness and simplicity of the averaged governing equations are regained for
both the mean fields and their statistical moments. In fact, relations between
the statistical moments are quite similar to the standard ones of
equation~\eqref{eq:central_moments_equations}. For example, the generalised
statistical moments of the velocity field $\vect{u}(\vect{x})$ are defined as
\begin{align}\nonumber
\mu(u_i,u_j)&= \langle u_iu_j\rangle_\ell
-\langle u_i \rangle_\ell \langle u_j \rangle_\ell\,,\\\label{eq:generalised_moments_equations}
\mu(u_i, u_j, u_k)&= \langle u_i u_j u_k \rangle_\ell
- \langle u_i \rangle_\ell \mu (u_j, u_k)
- \langle u_j \rangle_\ell \mu (u_k, u_i) \notag \\
&\mbox{}\quad- \langle u_k \rangle_\ell \mu (u_i, u_j)
- \langle u_i \rangle_\ell \langle u_j \rangle_\ell
\langle u_k \rangle_\ell\,.
\end{align}
Statistical moments of the fluctuations are obtained from the moments of the
total fields and their averages as, for example,
\begin{align}\label{eq:second_order_moment_direct}
\langle u'_{i} u'_{j} \rangle_\ell &=
\langle(u_i-\langle u_i\rangle_\ell)
(u_j-\langle u_j\rangle_\ell)\rangle_\ell \notag \\
&= \langle u_i u_j-\langle u_i\rangle_\ell u_j-u_i\langle u_j \rangle_\ell
+\langle u_i\rangle_\ell \langle u_j \rangle_\ell \rangle_\ell\notag \\
&= \langle u_i u_j - \langle u_i\rangle_\ell u'_j-u'_i\langle u_j \rangle_\ell
- \langle u_i\rangle_\ell \langle u_j\rangle_\ell\rangle_\ell\notag \\
&= \langle u_i u_j \rangle_\ell
- \langle\langle u_i\rangle_\ell u'_j \rangle_\ell
- \langle u'_i \langle u_j\rangle_\ell\rangle_\ell
-\langle\langle u_i\rangle_\ell\langle u_j\rangle_\ell \rangle_\ell\,.
\end{align}
As in equation~\eqref{eq:conditions_filtering}, we have
$\langle \langle u_i\rangle_\ell u'_j \rangle_\ell \neq 0$
and
$\langle u'_i\langle u_j\rangle_\ell \rangle_\ell \neq 0$.
In addition,
$\langle\langle u_i \rangle_\ell \langle u_j \rangle_\ell\rangle_\ell
\neq \langle u_i\rangle_\ell \langle u_j \rangle_\ell$ since
$\langle \langle u_i \rangle_\ell \rangle_\ell \neq \langle u_i \rangle_\ell$.
As a consequence,
$\langle u'_i u'_j\rangle_\ell\neq\langle u_iu_j\rangle_\ell
- \langle u_i\rangle_\ell \langle u_j\rangle_\ell= \mu(u_i,u_j)$.
Replacing statistical moments of the fluctuations such as
$\langle u'_i u'_j\rangle_\ell$ wherever they appear with generalised
central moments such as $\mu(u_i, u_j)$, leads to governing equations for the
fluctuations in a mathematically simple form practically identical to that
obtained under ensemble averaging \citep[see][for the case of MHD
equations]{Aluie:2017}.
The algebraic structure of the closure is the same, regardless of the choice of
the filter $\mathcal{G}_\ell$. Such a property is called the averaging invariance
of the turbulent equations \citep[see][]{Germano:1992}.
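These properties are straightforward to verify numerically. The following is a
minimal sketch (not the simulation analysis code), assuming numpy/scipy and a
periodic box, with a Gaussian filter playing the role of
$\langle\cdot\rangle_\ell$; all array names and parameter values are
illustrative. It demonstrates the failure of the Reynolds rules,
equation~\eqref{eq:conditions_filtering}, and the construction of the
generalised second moment $\mu(f,g)$.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
f = rng.standard_normal((64, 64, 64))
g = rng.standard_normal((64, 64, 64))
sigma = 4.0  # smoothing length ell in grid units (illustrative)

def mean_l(x):
    # <x>_ell: Gaussian filtering with periodic ('wrap') boundaries
    return gaussian_filter(x, sigma=sigma, mode="wrap")

f_p = f - mean_l(f)   # fluctuation f'
g_p = g - mean_l(g)   # fluctuation g'

# The Reynolds rules fail under filtering:
print(np.abs(mean_l(f_p)).max())                    # <f'>_ell != 0
print(np.abs(mean_l(mean_l(f)) - mean_l(f)).max())  # repeated averaging differs

# The generalised second moment mu(f, g) ...
mu_fg = mean_l(f * g) - mean_l(f) * mean_l(g)
# ... is not equal to the naive moment of the fluctuations:
print(np.abs(mean_l(f_p * g_p) - mu_fg).max())      # != 0
\end{verbatim}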
\subsection{Gaussian smoothing}\label{sect:smoothing}
In application to the ISM simulations described in
Section~\ref{sect:model_summary}, we consider the decomposition of the physical
fields into mean and fluctuating components with the mean fields obtained via
filtering with a Gaussian kernel,
\begin{equation}\label{eq:gauss}
G_\ell(\vect{x}) = (2 \pi \ell^2)^{-3/2}\exp\left[-\vect{x}^2/(2\ell^2)\right],
\end{equation}
where $\ell$ is the smoothing length.
We perform this analysis for magnetic field $\vect{B}$, gas density $\rho$
and velocity $\vect{u}$. All averages are denoted with the subscript $\ell$ and
fluctuations with the prime, with the
exception of magnetic field fluctuations denoted $\vect{b}$:
\begin{align}
\vect{B} &= \vect{B}_\ell + \vect{b}\,,
&\vect{B}_{\ell} &= \langle \vect{B} \rangle_\ell\,,
&\vect{b} &= \vect{B} - \vect{B}_\ell\,, \nonumber\\
\rho &= \rho_\ell + \rho'\,,
&\rho_\ell &= \langle \rho \rangle_\ell\,,
&\rho'& = \rho - \rho_\ell\,, \nonumber \\
\vect{u} &= \vect{u}_\ell + \vect{u}'\,,
&\vect{u}_\ell &= \langle \vect{u} \rangle_\ell\,,
&\vect{u}' &= \vect{u} - \vect{u}_\ell\,.
\label{eq:decomp}
\end{align}
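In practice, the convolution~\eqref{eq:filter} with the kernel~\eqref{eq:gauss}
can be evaluated with a standard Gaussian filter. The sketch below decomposes
synthetic fields according to equation~\eqref{eq:decomp}; the array names and
sizes are hypothetical, and a fully periodic box is assumed for simplicity
(the simulations are periodic only horizontally, so the vertical boundaries
require separate treatment).
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

dx = 4.0    # grid spacing in pc, as in the simulations
ell = 75.0  # smoothing length in pc (the value adopted below)

def mean_l(x):
    # <x>_ell: Gaussian kernel of width ell, applied per component
    return gaussian_filter(x, sigma=ell / dx, mode="wrap")

rng = np.random.default_rng(0)
nz, ny, nx = 64, 64, 64                  # illustrative grid
B = rng.standard_normal((3, nz, ny, nx))
rho = np.exp(0.5 * rng.standard_normal((nz, ny, nx)))  # positive density
u = rng.standard_normal((3, nz, ny, nx))

B_l = np.stack([mean_l(Bi) for Bi in B]);   b = B - B_l
rho_l = mean_l(rho);                        rho_p = rho - rho_l
u_l = np.stack([mean_l(ui) for ui in u]);   u_p = u - u_l
\end{verbatim}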
\section{The smoothing scale and Fourier spectra}\label{sect:power_spectra}
The challenge in applying the filtering approach in our context is to
determine an appropriate smoothing length $\ell$ or its admissible range. We
note that the mean and fluctuating parts of different variables, e.g.,
$\vect{B}$, $\rho$ and $\vect{u}$, can have different spatial properties and,
hence, different smoothing lengths may be required to separate the
fluctuations in different variables. For example, \citet{Hollins:2017} find
that the correlation lengths of the three variables are different in the
simulations discussed here. Unlike applications to subgrid turbulence models,
where $\ell$ is identified with the spatial resolution of a simulation, the
choice of $\ell$ in the present context is motivated by physical considerations.
Following \citet{Gent:2013b}, we select $\ell$ using the spectral structure of
each variable as discussed below.
Scale separation between the mean and fluctuating fields is required neither by
theory based on ensemble averages nor by the filtering technique. Nevertheless,
it is natural to expect some difference in scales between the two. For example,
the scale of the mean field in a turbulent dynamo is controlled by deviations
of the random flow from mirror symmetry and mean velocity shear, whereas
turbulent scales depend on the nature of the driving forces. Given the
fundamental difference between the two groups of physical effects, it is
unlikely that the two parts of magnetic field have similar scales. Since
deviations from mirror symmetry are usually weak, the scale of the mean field
is expected to be correspondingly large and to exceed the turbulent scale.
Arguments of this kind are used to justify the two-scale approach in
mean-field magnetohydrodynamics
\citep{Moffatt:1978,Krause:1980,Zeldovich:1983}. However, numerical
simulations of dynamo systems (including those discussed here) are performed in
domains that are only moderately larger than the integral scale of the
simulated random flow \citep[][and references therein]{Brandenburg:2005} which
precludes any strong scale separation between the simulated mean and fluctuating
fields. Nevertheless, evidence for such separation is
usually sought, in the form of a pronounced minimum in the Fourier spectra
at a scale between the presumed integral scale of the fluctuations (often,
the scale at which the random flow is driven by an explicit force) and the
domain size. In application to the magnetic field, \citet{Gent:2013b} demonstrate
that the situation can be more subtle and, despite a pronounced difference of
the two scales (by a factor of two), the Fourier spectrum of the total magnetic
field may not have a noticeable minimum between them.
\begin{figure*}
\centering
\includegraphics[width=0.46\textwidth]{fig1a.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig1b.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig1c.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig1d.pdf}
\caption{Fourier spectra of the total magnetic field
$S_B(k)$ (solid, blue), its mean part $S_{B_\ell}(k)$ (dash-dotted, green) and the
fluctuations $S_b(k)$ (dashed, red) at $t=1.6\,{\rm Gyr}$
for various values of the smoothing length $\ell$:
\textbf{(a)}~$\ell=50\,{\rm pc}$,
\textbf{(b)}~$\ell=20\,{\rm pc}$ and
\textbf{(c)}~$\ell=140\,{\rm pc}$. The vertical dotted lines indicate (from left to
right) the wave-numbers corresponding to the scale of the mean field
$L_{B_\ell}$, its fluctuations $L_b$, the smoothing length $\ell$ and the
resolution of the simulations $\Delta$.
\textbf{(d)}: Ratio of the integral scales $L_b$ and $L_{B_\ell}$
as a function of the smoothing length $\ell$ in the three periods of magnetic
field evolution, kinematic $0.8 \leq t < 1.1\,{\rm Gyr}$ (solid, blue), transitional
$1.1 \leq t < 1.45\,{\rm Gyr}$ (dash-dotted, green) and non-linear
$1.45 \leq t \leq 1.725\,{\rm Gyr}$ (dashed, red).
\label{fig:Figure1}
}
\end{figure*}
The Fourier spectrum of the total magnetic field $\vect{B}$ is given by
\begin{equation}\label{eq:S(k)}
S_B(k)=k^2\langle|\widehat{\vect{B}}(\vect{k})|^2\rangle_{k}\,,
\end{equation}
where $\widehat{\vect{B}}(\vect{k})=
\int_V \vect{B}(\vect{x})\exp(-2\pi\ii\vect{k}\cdot\vect{x})\,\dd^3\vect{x}$ is
the Fourier transform of $\vect{B}$ and $\langle\cdots\rangle_k$ denotes the
average value within a spherical shell of thickness $\delta k$ with radius
$k=|\vect{k}|$. The power spectra for the mean and random fields,
$S_{B_\ell}(k)$ and $S_b(k)$, are similarly defined in terms of
$\widehat{\vect{B}}_\ell(\vect{k})$ and $\widehat{\vect{b}}(\vect{k})$, the
Fourier transforms of $\vect{B}_{\ell}$ and $\vect{b}$:
\begin{equation}\label{eq:S_mean(k)_S_turb(k)}
S_{B_\ell}(k) = k^2\langle| \widehat{\vect{B}_\ell}(\vect{k})|^2\rangle_{k}\,,
\qquad
S_b(k) = k^2\langle|\widehat{\vect{b}}(\vect{k})|^2\rangle_k\,.
\end{equation}
We also consider the integral scale of each field
\citep[Sect.~12.1 in][]{Monin2:1975},
\begin{equation}\label{eq:int_scales}
L = \frac{\pi}{2}
\frac{\int_{2\pi/D}^{\pi/\Delta} k^{-1} S(k)\,\dd k}{\int_{2\pi/D}^{\pi/
\Delta} S(k)\,\dd k} \, ,
\end{equation}
calculated using the appropriate power spectrum $S(k)$,
where $\Delta$ is the grid spacing and $D$ the size of the computational domain.
Since both the mean field and the fluctuations are inhomogeneous,
equation~\eqref{eq:int_scales} can be used to derive the characteristic scales
of both the mean and fluctuating fields:
e.g.\ $L_{B_\ell}$ for the mean magnetic field,
such that $L_{B_\ell}^2 \simeq |\vect{B}_\ell|/|\nabla^{2}\vect{B}_{\ell}|$.
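For reference, a shell-averaged spectrum and the integral scale defined by
equation~\eqref{eq:int_scales} can be computed along the following lines. This
is a sketch for a scalar field on a cubic periodic grid (the analysis of the
simulation data must, in addition, handle the vector fields component-wise and
the non-periodic vertical direction); all names and values are illustrative.
\begin{verbatim}
import numpy as np

def shell_spectrum(field, D):
    # S(k) = k^2 <|F(k)|^2>_k, averaged over spherical shells
    n = field.shape[0]
    power = np.abs(np.fft.fftn(field))**2
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=D / n)
    KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
    dk = 2 * np.pi / D                       # shell thickness
    shell = (kmag / dk).round().astype(int).ravel()
    S_sum = np.bincount(shell, weights=power.ravel())
    counts = np.bincount(shell)
    k = dk * np.arange(S_sum.size)
    return k, k**2 * S_sum / np.maximum(counts, 1)

def integral_scale(k, S, D, Delta):
    # L = (pi/2) int k^{-1} S dk / int S dk over 2 pi/D <= k <= pi/Delta
    sel = (k >= 2 * np.pi / D) & (k <= np.pi / Delta) & (S > 0)
    return 0.5 * np.pi * np.sum(S[sel] / k[sel]) / np.sum(S[sel])

rng = np.random.default_rng(3)
k, S = shell_spectrum(rng.standard_normal((64, 64, 64)), D=1.024)
print(integral_scale(k, S, D=1.024, Delta=1.024 / 64))
\end{verbatim}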
As discussed in Sections~\ref{sect:magnetic_field_ps},
\ref{sect:density_ps} and \ref{sect:velocity_ps}, none of the Fourier
spectra of $\vect{B}$, $\rho$ and $\vect{u}$ has a local minimum.
Nonetheless, each variable has distinct, well separated length scales for the mean
and fluctuating fields. The optimal smoothing scale $\ell$ for each variable is
obtained in Section~\ref{sect:magnetic_field_ps} from the requirements that:
(i)~the major maxima in the Fourier spectra of the mean field and fluctuations
in each variable occur on different sides, along the wave-number axis, of the
wave-number where they intersect; and (ii)~the ratio of the integral
scales of the mean fields and the fluctuations is (approximately) maximized.
The spectra and length scales for $\rho$, $\vect{u}$ and their respective
means and fluctuations are defined in a similar manner and denoted
$S_\rho(k)$, $S_{\rho_\ell}(k)$, $S_{\rho'}(k)$,
$S_u(k)$, $S_{u_\ell}(k)$ and $S_{u'}(k)$, with the corresponding length scales
$L_\rho$, $L_{\rho_\ell}$, $L_{\rho'}$, $L_u$, $L_{u_\ell}$ and $L_{u'}$. The
notation is summarized in Table~\ref{tab:Table1}.
The power spectrum $S_B(k)$ is equivalent, up to a constant factor of
$1/(8\pi)$, to the magnetic energy spectrum $M(k)=S_B(k)/(8\pi)$, and
the total magnetic energy can be obtained as an integral over the
relevant wave-number range, $E_B=\int_{k} M(k)\,\dd k$. However, unlike the
case of incompressible flows, the power spectrum of the velocity field cannot
be directly equated to the kinetic energy density because of the contribution
from the gas density fluctuations.
Calculation of energy densities due to the mean fields and fluctuations should
be done with care. To illustrate the general approach, consider magnetic
energy.
Although total magnetic energies can be obtained via wavenumber integrals
of the power spectra, these do not always, within the filtering approach,
correspond to the energies of the mean and fluctuating parts,
or the sum of the latter energies.
The total magnetic energy $E_B$ satisfies
$E_B=1/(8\pi) \int_k S_B(k)\,\dd k=\int_{V} e_B \, \dd V$,
where $e_{B}=B^2/(8\pi)$ and $\dd V=\dd^{3}\vect{x}$.
The wavenumber integral for the mean field,
$E_{B_\ell}=1/(8\pi) \int S_{B_\ell}(k)\,\dd k$
does equal the volume integral of the relevant energy density,
${\cal E}_{B_\ell} = \int_V e_{B_\ell} \, \dd V$,
where
$e_{B_{\ell}}=B_{\ell}^2/(8 \pi)$.
But the corresponding quantities for the fluctuating field,
$E_{b}=1/(8\pi) \int S_b(k)\,\dd k$,
and ${\cal E}_{b}= \int_{V} e_b \, \dd V$,
are not equal: $E_{b}\neq{\cal E}_{b}$.
As defined, $E_b=1/(8\pi) \int_{V} b^2 \, \dd V$;
but as explained in more detail in Section~\ref{sect:energy_densities},
$e_{b}$ must be defined in terms of the generalised second moments with $i=j$
(from equation~\eqref{eq:generalised_moments_equations}, with summation over $i$):
$e_b=\mu(B_i,B_i)/(8\pi)$, so that $e_b\neq b^2/(8\pi)$.
Furthermore, the energies defined above do not sum:
$E_B\neq E_{B_\ell}+E_b$.
In the filtering approach, the energy densities sum as required from
equation~\eqref{eq:generalised_moments_equations}, with the definitions above:
$\langle e_{B} \rangle_{\ell} = e_{B_\ell} + e_b$.
We introduce a distinct notation for the volume integrals of these energy densities
--- ${\cal E}_{B} = \int_{V} \langle e_{B} \rangle_{\ell} \, {\rm d}V$,
${\cal E}_{B_{\ell}} = \int_{V} e_{B_{\ell}} \, {\rm d}V$,
${\cal E}_{b} = \int_{V} e_{b} \, {\rm d}V$ ---
so that we can also write ${\cal E}_{B}={\cal E}_{B_{\ell}}+{\cal E}_b$.
These energy densities and their volume integrals are summarised
in Table~\ref{tab:Table1},
and discussed further in Section~\ref{sect:energy_densities}.
But note that, while ${\cal E}_{B_{\ell}}=E_{B_{\ell}}$,
${\cal E}_{B}\neq E_{B}$, and ${\cal E}_{b}\neq E_{b}$.
We analyse the Fourier spectra of the basic physical variables in
Sections~\ref{sect:magnetic_field_ps}--\ref{sect:velocity_ps} to identify
appropriate smoothing lengths $\ell$, which can be different for different
variables, and then use the filtering approach to derive and discuss the
corresponding energy densities in Section~\ref{sect:energy_densities}.
\begin{figure*}
\centering
\includegraphics[width=0.46\textwidth]{fig2a.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig2b.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig2c.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig2d.pdf}
\caption{
As for Fig.~\ref{fig:Figure1} but for the gas density $\rho$ (in $\,{\rm g} \,{\rm cm}^{-3}$)
with \textbf{(a)}~$\ell=50\,{\rm pc}$, \textbf{(b)}~$\ell=20\,{\rm pc}$ and
\textbf{(c)}~$\ell=140\,{\rm pc}$.
\label{fig:Figure2}
}
\end{figure*}
\subsection{Magnetic field}\label{sect:magnetic_field_ps}
Figs.~\ref{fig:Figure1}b and~\ref{fig:Figure1}c show the power
spectra of the total magnetic field and its mean and fluctuating parts obtained
using $\ell=20\,{\rm pc}$ and $\ell=140\,{\rm pc}$, respectively. When $\ell=20\,{\rm pc}$, the integral
scales of the mean field and the fluctuations are $L_{B_\ell}=0.49\,{\rm kpc}$ and
$L_b=0.17\,{\rm kpc}$, but the scale $\lambda=0.09\,{\rm kpc}$ where the two power spectra
intersect, $S_{B_\ell}(\lambda)=S_b(\lambda)$, is smaller than the integral scale
of the fluctuations, $\lambda<L_b$. This is physically inconsistent. When
$\ell=140\,{\rm pc}$, the opposite and equally unsatisfactory situation follows with
$L_{B_\ell}=0.92\,{\rm kpc}<\lambda=1.09\,{\rm kpc}$, $L_{b}=0.39 \,{\rm kpc}$.
A more satisfactory picture emerges for $\ell=50\,{\rm pc}$, shown in
Fig.~\ref{fig:Figure1}a, resulting in $L_{B_\ell}=0.65\,{\rm kpc}$, $L_b=0.27\,{\rm kpc}$ and
$\lambda=0.3 \,{\rm kpc}$, so that $L_b<\lambda<L_{B_\ell}$.
Thus, $\ell=50\,{\rm pc}$ can be
adopted as an appropriate smoothing length for the magnetic field: then the mean
field dominates at scales around $L_{B_\ell}$ whereas the fluctuations contribute
most of the power at scales around $L_b$.
The ratio of the integral scales $L_b$ and $L_{B_\ell}$ as a function of $\ell$ is shown in
Fig.~\ref{fig:Figure1}d for the three periods of the magnetic field
evolution. When the magnetic field is still weak, there is a pronounced maximum at
$\ell=65\,{\rm pc}$, which becomes less prominent as the magnetic field growth
saturates. Thus, the requirement that $L_b< \lambda<L_{B_\ell}$ is compatible
with the maximum scale separation between the mean field and the fluctuations.
The ratio reaches an asymptotic value in the range 0.3--0.4 at $\ell\approx90\,{\rm pc}$.
\subsection{Gas density}\label{sect:density_ps}
Using the same arguments as for the magnetic field, we conclude that $\ell=50\,{\rm pc}$ is
a suitable smoothing length for the density distribution, as also shown in
Fig.~\ref{fig:Figure2}. Indeed, when $\ell=50\,{\rm pc}$, we obtain
$L_{\rho_\ell}=0.62\,{\rm kpc}$ and $L_{\rho'}=0.27\,{\rm kpc}$, with $\lambda=0.31\,{\rm kpc}$.
In contrast, $L_{\rho_\ell}=0.47\,{\rm kpc}$ and $L_{\rho'}=0.17\,{\rm kpc}>\lambda=0.11\,{\rm kpc}$
for $\ell=20\,{\rm pc}$, and $L_{\rho_\ell}=0.91\,{\rm kpc}<\lambda=0.95\,{\rm kpc}$ and
$L_{\rho'}=0.37\,{\rm kpc}$ for $\ell=140\,{\rm pc}$.
The ratio of $L_{\rho_\ell}$ and $L_{\rho'}$ as a function of $\ell$ is shown
in Fig.~\ref{fig:Figure2}d. Its maximum is reached at values of $\ell$
increasing from $65\,{\rm pc}$ to $75\,{\rm pc}$ as the magnetic field saturates, suggesting a
suitable smoothing length of approximately $70\,{\rm pc}$.
\subsection{Gas velocity}\label{sect:velocity_ps}
Figs.~\ref{fig:Figure3}a--d illustrate similar arguments for the velocity
field $\vect{u}$ (we recall that $\vect{u}$ represents deviations from the
overall shearing flow and contains a systematic vertical outflow velocity).
When $\ell = 50 \,{\rm pc}$, $L_{u_\ell}=0.66\,{\rm kpc}$ and $L_{u'}=0.27\,{\rm kpc}$,
with $\lambda=0.3\,{\rm kpc}$. Conversely, $L_{u_\ell}=0.50\,{\rm kpc}$, $L_{u'}=0.16\,{\rm kpc}$
and $\lambda=0.12\,{\rm kpc}<L_{u'}$ for $\ell = 20 \,{\rm pc}$, whilst for $\ell = 140 \,{\rm pc}$
we have $L_{u_\ell}=0.92\,{\rm kpc}$, $L_{u'}=0.39\,{\rm kpc}$ and $\lambda=1.12\,{\rm kpc}>L_{u_\ell}$.
However, the ratio of length scales in Fig.~\ref{fig:Figure3}d does not have any
pronounced maxima, as it increases monotonically with $\ell$ for $t < 1.45 \,{\rm Gyr}$,
and has a very broad maximum at $\ell=90\text{--}100\,{\rm pc}$ for $t>1.45 \,{\rm Gyr}$.
It is clear from each of Figs.~\ref{fig:Figure1}, \ref{fig:Figure2}
and \ref{fig:Figure3} that the spectral properties of each of these fields
are distinct. In addition, the properties of each field vary in time. The
simulation times considered here, $0.8 \leq t \leq 1.725 \,{\rm Gyr}$, are all much later
than the time, about $400 \,{\rm Myr}$, at which the SN-driven hydrodynamics reaches a
statistical steady state. Thus, we are confident that any changes in time result
from the evolution of the mean-field dynamo, which evolves over a time-scale of
order $1\,{\rm Gyr}$.
It would therefore seem most appropriate to select different smoothing lengths
to obtain the fluctuations, depending on both the variable considered and the
simulation time. However, complications would then arise with the interpretation
of results obtained from such choices. The sensitivity of the results to any change
in smoothing length would have to be considered.
Theories based on a filtering approach to the MHD equations require a
consistent filter as the averaging operator. Hence, applying different smoothing
lengths for each variable would introduce new difficulties when trying to interpret
the mean fields and moments of the fluctuating fields as solutions of the filtered
equations.
In addition, complications could arise when selecting a smoothing scale for moments
computed from multiple basic variables, such as the kinetic energy density
$\tfrac12\rho \vect{u}^{2}$.
A time-dependent smoothing length could, in principle, be used, analogous to a
change in the grid scale of a simulation.
We shall attempt to identify an appropriate value of $\ell$
that can be used as a smoothing length for all three variables throughout the
times considered. We adopt $\ell = 75 \,{\rm pc}$ as the smoothing length for
magnetic field, gas density and gas velocity, since for magnetic field and gas
density the local maxima in the ratios of the mean and fluctuating length scales
occur close to $75 \,{\rm pc}$. For the gas velocity, the value of this ratio at $75 \,{\rm pc}$
is above $90 \%$ of the asymptotic value in each period, whilst the value at $75 \,{\rm pc}$
in the saturated period is very similar to the value at the broad local maximum.
\begin{figure*}
\centering
\includegraphics[width=0.46\textwidth]{fig3a.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig3b.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig3c.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig3d.pdf}
\caption{
As for Fig.~\ref{fig:Figure1} but for the gas velocity (in $\,{\rm km\,s^{-1}}$) with
\textbf{(a)}~$\ell=50\,{\rm pc}$, \textbf{(b)}~$\ell=20\,{\rm pc}$ and \textbf{(c)}~$\ell=140\,{\rm pc}$.
\label{fig:Figure3}
}
\end{figure*}
\section{Energy densities}\label{sect:energy_densities}
Magnetic and kinetic energy densities have to be derived using the generalised
central moments, as discussed in Section~\ref{sect:filtering}. The required
moments are derived in Appendix~\ref{sect:Integrals}. Since the mean
and fluctuating fields are sensitive to the choice of smoothing length, the
resultant energies will also depend on $\ell$. The maximum admissible value of
$\ell$ is half the horizontal extent of the simulation domain. We derive the
energy densities obtained with various smoothing lengths in the range
$0<\ell<0.5\,{\rm kpc}$ and discuss the results in this section.
As previously, we consider the three periods of the mean-field dynamo
independently and present results averaged over the snapshots within each
period.
\subsection{Magnetic energy}
The total magnetic energy density is given by
\begin{equation*}
e_B = |\vect{B}|^2/(8\pi)\,,
\end{equation*}
with the energy density of the fluctuating magnetic field obtained as
\begin{equation}
e_{b} = \frac{1}{8 \pi} \int_{V} |\vect{B}(\vect{x}') -
\vect{B}_{\ell}(\vect{x})|^{2} \,
G_{\ell}(\vect{x}-\vect{x}') \; \mathrm{d}^{3} \vect{x}'\, .
\label{eq:localmag}
\end{equation}
This ensures that the energy densities of the mean and fluctuating magnetic fields
sum to the (filtered) total magnetic energy density, i.e.
\[
\langle e_{B} \rangle_{\ell} = e_{B_{\ell}} + e_{b} \, ,
\]
where $e_{B_\ell}=|\vect{B}_\ell|^2/(8\pi)$ is the energy density of the mean
magnetic field. We note that $e_b\neq|\vect{b}|^2/(8\pi)$, but it can be shown,
by expanding $\vect{B}(\vect{x}')$ in a Taylor series around $\vect{x}$,
that $e_b=|\vect{b}|^2/(8\pi)+\mathcal{O}(\ell^2/L_{B_\ell}^2)$.
Thus, the difference between the volume and filtering averages decreases as
$\ell/L_{B_\ell}\to0$.
This fact, also true for any other variable, suggests that one consideration
for the choice of $\ell$ might be to maximise the ratio $L_{B_\ell}/\ell$.
In practice, however, this would simply lead to $\ell\rightarrow0$, i.e.\
all the signal would be in the mean field, with effectively no decomposition.
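The identity $\langle e_B \rangle_\ell = e_{B_\ell} + e_b$ and the inequality
$e_b \neq |\vect{b}|^2/(8\pi)$ are easily checked numerically. The sketch below
is illustrative (a synthetic field in a periodic box), with $e_b$ computed from
the generalised second moments as above.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 64, 64, 64))  # synthetic 'magnetic field'
sigma = 6.0                               # ell in grid units (illustrative)

def mean_l(x):
    return gaussian_filter(x, sigma=sigma, mode="wrap")

B_l = np.stack([mean_l(Bi) for Bi in B])
b = B - B_l

B2_l = mean_l((B**2).sum(axis=0))          # <B^2>_ell
eB_l = B2_l / (8 * np.pi)                  # <e_B>_ell
e_Bl = (B_l**2).sum(axis=0) / (8 * np.pi)  # e_{B_ell}
e_b = (B2_l - (B_l**2).sum(axis=0)) / (8 * np.pi)  # from mu(B_i, B_i)

print(np.abs(eB_l - (e_Bl + e_b)).max())   # ~ 0: the densities sum
print(np.abs(e_b - (b**2).sum(axis=0) / (8 * np.pi)).max())  # clearly nonzero
\end{verbatim}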
\begin{figure*}
\centering
\includegraphics[width=0.46\textwidth]{fig4a.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig4b.pdf}
\caption{a) Volume averages of the mean magnetic energy density
$\langle e_{B_{\ell}} \rangle_{V}$ at times $0.8 \leq t < 1.1\,{\rm Gyr}$ (green,
dash-dotted), $1.1 \leq t < 1.45\,{\rm Gyr}$ (black, solid) and $t \geq 1.45\,{\rm Gyr}$ (red,
dotted); also the fluctuating magnetic energy density $\langle e_{b} \rangle_{V}$
at $1.1 \leq t < 1.45\,{\rm Gyr}$ (blue, dashed), as functions of the smoothing length
$\ell$. These are normalised by the volume average of the smoothed magnetic energy
density, $\langle \langle e_{B} \rangle_{\ell} \rangle_{V}$, with the volume
averaging over the region $|z| < 0.5\,{\rm kpc}$.
b) Derivatives of $\langle e_{b} \rangle_{V}$, normalised by
$\langle \langle e_{B} \rangle_{\ell} \rangle_{V}$, with respect to $\ell$ at
$0.8 \leq t < 1.1\,{\rm Gyr}$ (green, dash-dotted), $1.1 \leq t < 1.45\,{\rm Gyr}$ (blue, dashed)
and $t \geq 1.45\,{\rm Gyr}$ (red, dotted).
}
\label{fig:Figure4}
\end{figure*}
The larger $\ell$ is, the smaller the part of the total field that is deemed to be
the mean field: $\langle e_{B_\ell} \rangle_{V}$ decreases monotonically with
$\ell$ whilst $\langle e_b \rangle_{V}$ increases monotonically, as shown in Fig.~\ref{fig:Figure4}a.
The rate of variation of
$\langle e_b\rangle_V/\langle\langle
e_B\rangle_\ell\rangle_V$ with $\ell$, shown in Fig.~\ref{fig:Figure4}b
--- and also of $\langle e_{B_\ell}\rangle_V/\langle\langle
e_B\rangle_\ell\rangle_V$, not shown ---
becomes relatively small when $\ell>50\,{\rm pc}$. This confirms that the appropriate choice
for the smoothing length is $\ell > 50\,{\rm pc}$. (The difference between
Fig.~\ref{fig:Figure4}a and fig.~2a of \citet{Gent:2013a} is caused by a
downsampling to a grid $\Delta x=8$\,pc used in the Fourier transform for that
calculation in \citet{Gent:2013a}.)
The mean magnetic energy grows with time due to dynamo action, and the value of
$\ell$ for which the two energies are equal to each other increases. At late
times, the mean magnetic field is energetically dominant over the fluctuating
magnetic field for all $\ell$.
\subsection{Kinetic energy}
In a compressible flow, the mean kinetic energy density is represented by a
third-order moment involving the density and velocity fields. Under ensemble
(or volume) averaging, the mean kinetic energy density is conveniently --- and
physically meaningfully --- represented \citep[see Section~6.4 in][]{Monin:1975} as
\begin{align}\label{eq:compressible_monin_yaglom}
\langle e_\text{k}\rangle& = \tfrac12\langle\rho u_iu_i\rangle\nonumber\\
&= \tfrac12\langle\rho\rangle \langle u_i\rangle \langle u_i\rangle
+\langle u_i \rangle \langle \rho' u'_i \rangle
+\tfrac12 \langle \rho u'_i u'_i \rangle \nonumber \\
&\equiv e_{\rm s} + e_{\rm st} + e_{\rm t}\,,
\end{align}
where $e_{\rm s}$ is the energy density of the mean flow, $e_{\rm t}$ is
the energy density of the fluctuations and $e_{\rm st}$ represents the
transport of momentum $\langle \rho'u'_i\rangle$ by the mean flow
(summation over repeated indices is understood here and below). An equivalent
decomposition is appropriate under the filtering approach as well:
\begin{equation}\label{eq:tau_monin_yaglom}
\begin{split}
\langle e_{\rm k}\rangle_\ell&=\tfrac12\langle\rho u_iu_i\rangle_{\ell}
=e_{\rm s} + e_{\rm st} + e_{\rm t} \,,\\
e_{\rm s} &=\tfrac12\langle\rho\rangle_\ell \langle u_i\rangle_\ell
\langle u_i\rangle_\ell\,,\\
e_{\rm st} &= \langle u_i\rangle_\ell \mu(\rho, u_i)\,, \\
e_{\rm t} &= \langle e_{\rm k}\rangle_\ell - e_{\rm s} - e_{\rm st}\\
&= \tfrac12\langle\rho\rangle_\ell \mu(u_i, u_i)+\tfrac12\mu(\rho,u_i,u_i)\,,
\end{split}
\end{equation}
where the moments involved are derived in Appendix~\ref{sect:Integrals} in
explicit integral forms:
\begin{align}
e_\text{st} = &\int_V \vect{u}(\vect{x}') G_\ell(\vect{x}-\vect{x}')\,
\dd^3 \vect{x}' \nonumber \\
&\mbox{}\quad\times\int_V \Delta\rho_\ell(\vect{x},\vect{x}')
\Delta\vect{u}_\ell(\vect{x},\vect{x}')
G_\ell(\vect{x}-\vect{x}')\,\dd^3\vect{x}'\,,\nonumber \\
e_\text{t} = &\tfrac12 \int_V \rho(\vect{x}') G_\ell(\vect{x}-\vect{x}')
\,\dd^{3} \vect{x}'\nonumber\\
&\mbox{\quad}\times \int_V |\Delta\vect{u}_\ell(\vect{x},\vect{x}')|^{2}
G_\ell(\vect{x}-\vect{x}')\,\dd^3 \vect{x'}\nonumber \\\label{eq:localkinetic}
&+\tfrac12\int_V \Delta\rho_\ell(\vect{x},\vect{x}')
|\Delta\vect{u}_\ell(\vect{x},\vect{x}')|^2
G_\ell(\vect{x}-\vect{x}')\,\dd^3\vect{x}'\,,
\end{align}
where $\Delta\rho_\ell(\vect{x},\vect{x}')=\rho(\vect{x}') -
\rho_\ell(\vect{x})$ and
$\Delta\vect{u}_\ell(\vect{x},\vect{x}')=\vect{u}(\vect{x}') -
\vect{u_\ell}(\vect{x})$.
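The decomposition~\eqref{eq:tau_monin_yaglom} can likewise be evaluated
directly from filtered moments, with $e_{\rm t}$ obtained as the residual. The
sketch below is again illustrative (synthetic fields, periodic box); the final
line prints volume-averaged fractions analogous to those plotted in
Fig.~\ref{fig:Figure5}a.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
shape = (64, 64, 64)
rho = np.exp(0.5 * rng.standard_normal(shape))  # synthetic positive density
u = rng.standard_normal((3,) + shape)           # synthetic velocity
sigma = 6.0                                     # ell in grid units

def mean_l(x):
    return gaussian_filter(x, sigma=sigma, mode="wrap")

rho_l = mean_l(rho)
u_l = np.stack([mean_l(ui) for ui in u])

e_k_l = 0.5 * mean_l(rho * (u**2).sum(axis=0))  # <e_k>_ell
e_s = 0.5 * rho_l * (u_l**2).sum(axis=0)        # mean-flow energy density
mu_rho_u = np.stack([mean_l(rho * ui) for ui in u]) - rho_l * u_l
e_st = (u_l * mu_rho_u).sum(axis=0)             # 'intermediate' term
e_t = e_k_l - e_s - e_st                        # fluctuation energy density

print(e_s.mean() / e_k_l.mean(),
      e_st.mean() / e_k_l.mean(),
      e_t.mean() / e_k_l.mean())
\end{verbatim}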
Fig.~\ref{fig:Figure5} shows how various parts of the kinetic energy density
depend on the smoothing length $\ell$.
The behaviour of the volume averages of these contributions to the kinetic
energies is much less straightforward than for the magnetic energy, except
at $t>1.45 \,{\rm Gyr}$, where a similar monotonic dependence on $\ell$ is observed.
Additionally, for both $0.8 \leq t < 1.1\,{\rm Gyr}$ and $1.1 \leq t < 1.45\,{\rm Gyr}$,
the fluctuating kinetic energy $e_{\rm st} + e_{\rm t}$ is equal to
zero within errors for $50 \leq \ell \leq 100 \,{\rm pc}$. This results from cancellation
between $\langle e_\text{st} \rangle_{V}$ and $\langle e_\text{t} \rangle_{V}$,
with $\langle e_\text{st} \rangle_{V}$ significantly negative, as confirmed by
Fig.~\ref{fig:Figure7}.
The quantity $e_\text{st}=\langle u_i \rangle_\ell \mu(\rho,u_i)$ is dominated
by the contribution of the $z$-component of the velocity field ($i=3$) since
$\langle u_z \rangle_{\ell}$ is much larger than the $x$- and $y$-components
because of a systematic gas outflow from the mid-plane.
\begin{figure*}
\centering
\includegraphics[width=0.46\textwidth]{fig5a.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{fig5b.pdf}
\caption{\textbf{(a)}~As for Fig.~\ref{fig:Figure4}a but for the volume average of
the mean kinetic energy density $\langle e_\text{s}\rangle_V$ at
$0.8 \leq t < 1.1\,{\rm Gyr}$ (green, dash-dotted), $1.1 \leq t < 1.45\,{\rm Gyr}$ (black, solid)
and $t \geq 1.45\,{\rm Gyr}$ (red, dotted); with the volume average of the fluctuating
kinetic energy density $\langle e_\text{st} + e_\text{t}\rangle_V$ at
$1.1 \leq t < 1.45\,{\rm Gyr}$ (blue, dashed). These are normalised by the volume average
of the smoothed kinetic energy $\langle \langle e_\text{k} \rangle_\ell \rangle_V$.
\textbf{(b)}~As for Fig.~\ref{fig:Figure4}b but for the derivative of
$\langle e_\text{st}+e_\text{t}\rangle_V$, with respect to $\ell$ (normalised by
$\langle \langle e_\text{k} \rangle_\ell\rangle_V$); at $0.8 \leq t < 1.1\,{\rm Gyr}$
(green, dash-dotted), $1.1 \leq t < 1.45\,{\rm Gyr}$ (blue, dashed) and $t \geq 1.45\,{\rm Gyr}$
(red, dotted).
}
\label{fig:Figure5}
\end{figure*}
Regions around SNe have large values of $\langle u_{z} \rangle_{\ell}$, and the gas
involved in the outflow is hotter and less dense than average, leading to large
negative values of $-\langle\rho\rangle_\ell \langle u_z\rangle_\ell$ for $z>0$
and, hence, of $\langle u_z\rangle_\ell \, \mu(\rho,u_z)=\langle u_z\rangle_\ell
\, (\langle\rho u_z\rangle_\ell-\langle\rho\rangle_\ell \langle u_z\rangle_\ell)$
(the dominant component of $e_{\rm st}$).
For $z < 0$, the mean vertical velocity in the outflow regions,
$\langle u_{z} \rangle_{\ell}$, is large and negative, resulting in large, positive
values for $\mu(\rho, \, u_{z}) = \langle\rho u_z\rangle_\ell-
\langle\rho\rangle_\ell\langle u_z\rangle_\ell$.
Thus, the opposite signs of $\langle u_{z} \rangle_{\ell}$ and
$\mu(\rho, \, u_{z})$ result in large, negative values of $e_{\rm st}$ for
negative $z$.
These large, negative values for $e_{\rm st}$ appear to dominate the kinetic energy
statistics during earlier snapshots. This is discussed in more detail below.
The variation with $\ell$ of the fluctuating kinetic energy produces a more
complicated pattern than for fluctuating magnetic energy, see
Fig.~\ref{fig:Figure5}.
The values of $\ell$ for which the variation is weak are $\ell > 300 \,{\rm pc}$.
Such a smoothing length is much larger than any estimate of the correlation
scale of the random motions, and the optimal smoothing lengths of both $\rho$ or
$\vect{u}$. As a result, the criterion that the variation of the fluctuating
kinetic energy must be weak is not an appropriate method for choosing suitable
smoothing lengths for either $\rho$ or $\vect{u}$.
\section{Influence of the mean-field dynamo}
\label{sect:dynamo}
Figs.~\ref{fig:Figure4} and \ref{fig:Figure5} both suggest that the structure of
the magnetic and kinetic energies varies with the state of the mean-field dynamo. We first
examine the vertical structure of both energies, comparing the three time ranges
discussed previously, to demonstrate the changes in structure caused by the dynamo.
At early times, when the fluctuating magnetic field dominates the mean field, the
magnetic field is strongest at $|z| = 0.3 \,{\rm kpc}$ where the kinetic energy is
maximal, see Figs.~\ref{fig:Figure6}a and~\ref{fig:Figure7}a.
As the mean-field dynamo saturates, the mean magnetic field comes to dominate
over the fluctuating field. The vertical profile of the smoothed total magnetic energy
corresponds to the mean magnetic energy. The peaks of the vertical profiles remain
at $|z| = 0.3 \,{\rm kpc}$, see Figs.~\ref{fig:Figure6}b,c.
The increasing mean magnetic field significantly alters the vertical profile of
the kinetic energy, as shown in Figs.~\ref{fig:Figure7}b,c. All the components in
the decomposition of the kinetic energy become concentrated towards the mid-plane,
and the maximum value of $\langle e_{\rm k} \rangle_{\ell}$ decreases.
Strong mean magnetic fields generated via dynamo action in
the same ISM simulations have been shown to suppress outflows of hot gas
\citep[see][]{Evirgen:2017}, which
are associated with high values of kinetic energy. This would lead to a vertical
profile of kinetic energy with the characteristics present in
Fig.~\ref{fig:Figure7}c.
The most dramatic change is the effect on the `intermediate scale' component of the
kinetic energy, $e_{\rm st}$. As the magnetic field strength increases, the
horizontal average of $e_{\rm st}$ decreases significantly, becoming almost equal to
zero except near to the mid-plane. As a result, the kinetic energy is approximately
split between the mean and small-scale energies $e_{\rm s}$ and $e_{\rm t}$.
As this change appears to be the most significant, we focus on horizontal planes
from the snapshots $t = 0.8 \,{\rm Gyr}$ and $t = 1.6 \,{\rm Gyr}$, at which the vertical
profiles of $e_{\rm st}$ in Fig.~\ref{fig:Figure7} differ most profoundly.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{fig6a.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig6b.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig6c.pdf}
\caption{Vertical profiles of the horizontal averages of the smoothed total
magnetic energy, $\langle \langle e_{B} \rangle_{\ell} \rangle_{xy}$ (blue, solid),
mean magnetic energy, $\langle e_{B_{\ell}} \rangle_{xy}$ (green, dashed), and
fluctuating magnetic energy, $\langle e_{b} \rangle_{xy}$ (red, dash-dotted); at
times a) $0.8 \leq t < 1.1\,{\rm Gyr}$, b) $1.1 \leq t < 1.45\,{\rm Gyr}$ and
c) $t \geq 1.45\,{\rm Gyr}$. The smoothing length applied for each snapshot is
$\ell = 75 \,{\rm pc}$.
}
\label{fig:Figure6}
\end{figure}
In the kinematic stage of the mean-field dynamo, there are regions in which
$e_{\rm st}$ is significantly non-zero, whilst $\langle e_{\rm k} \rangle_{\ell}$ is
comparatively uniform (see Fig.~\ref{fig:Figure8}).
The mean and turbulent kinetic energies, $e_{\rm s}$ and $e_{\rm t}$ respectively, also
deviate strongly from zero in the same regions as $e_{\rm st}$, although this is
not shown here.
The contribution of the $z$-component to $e_{\rm st}$,
$\langle u_z\rangle_\ell \, \mu(\rho,u_z)$, comprises a large fraction of the
total quantity (about $80\%$), so the vertical behaviour is dominant for
$e_{\rm st}$ at this stage.
The regions where $e_{\rm st}$ is largest coincide with regions of large
positive $\langle u_{z} \rangle_{\ell}$, which are the regions of hot gas outflows.
Thus, at the kinematic stage of the mean-field dynamo, $e_{\rm st}$ is strongly
correlated with the outflows of hot gas.
In this model, the mean magnetic field is absent from the regions of hot gas,
as demonstrated by \citet{Evirgen:2017}.
Thus, the mean magnetic field also avoids regions in which $e_{\rm st}$ is strongly
non-zero.
The action of the amplified mean magnetic field on the kinetic energies is
demonstrated in Fig.~\ref{fig:Figure9}. The values of $e_{\rm st}$
are reduced significantly and $e_{\rm st}$ appears more uniform.
By contrast, $\langle e_{\rm k} \rangle_{\ell}$ is now more significant and its
non-uniform structure is much clearer.
The vertical contribution to $e_{\rm st}$ is also dramatically reduced and is no
longer the dominant contribution.
The mean vertical velocity is reduced both in maximum value and in the size of
the regions in which $\langle u_z\rangle_\ell$ deviates strongly from zero,
indicative of the reduction of hot gas outflows.
Thus, the partial suppression of the hot gas outflows by the mean magnetic field
both significantly reduces the value of $e_{\rm st}$ and renders the behaviour of
the overall kinetic energy independent of that of $e_{\rm st}$.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{fig7a.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig7b.pdf}
\\
\includegraphics[width=0.46\textwidth]{fig7c.pdf}
\caption{As for Fig.~\ref{fig:Figure6} but for the smoothed total
kinetic energy density $\langle \langle e_{\rm k} \rangle_{\ell} \rangle_{xy}$
(blue, solid), mean kinetic energy density $\langle e_{\rm s} \rangle_{xy}$ (green,
dashed), `intermediate scale' kinetic energy density
$\langle e_{\rm st} \rangle_{xy}$ (cyan, dash-dot-dotted), fluctuating kinetic
energy density $\langle e_{\rm t} \rangle_{xy}$ (red, dot-dashed), and the sum,
$\langle e_{\rm st} + e_{\rm t} \rangle_{xy}$ (purple, dotted); at times a)
$0.8 \leq t < 1.1\,{\rm Gyr}$, b) $1.1 \leq t < 1.45\,{\rm Gyr}$ and c) $t \geq 1.45\,{\rm Gyr}$.
As for Fig.~\ref{fig:Figure6}, the smoothing length applied
is $\ell = 75 \,{\rm pc}$.
}
\label{fig:Figure7}
\end{figure}
\section{Discussion}
\label{sect:discussion}
We have applied Gaussian smoothing to obtain mean fields for magnetic field,
density and velocity. The optimal smoothing lengths were obtained by spectral
analysis of each field independently.
We find that $\ell = 75 \,{\rm pc}$ is an appropriate smoothing length for all of
these fields.
The differing spectral behaviour of the magnetic, density and velocity fields
is unsurprising, since their structures are controlled by different physical
processes, even though they do not evolve independently.
The structures of the fluctuating fields, obtained
using Gaussian smoothing with a filtering length of $50 \,{\rm pc}$,
are distinct, as shown by \citet{Hollins:2017}.
We examine the mean and fluctuating magnetic and kinetic energies, using the
generalised central moments of \citet{Germano:1992} for our definitions
of the fluctuating energies, and consider how the energies depend on $\ell$
and on the state of the mean-field dynamo.
Amplification of the mean magnetic field by dynamo action has a significant impact
on the subdivisions of the magnetic and kinetic energies. As the dynamo saturates,
the energy of the mean field comes to dominate over that of the fluctuating field.
Throughout the run, the magnetic field is strongest at $|z| = 300 \,{\rm pc}$.
The increasing mean magnetic field shifts the location of the maximum of the vertical
profile of the kinetic energy from $|z| = 300 \,{\rm pc}$ to the mid-plane. The
intermediate-scale kinetic energy $e_{\rm st}$ is closely correlated
with outflows, which are partly suppressed by the growing mean magnetic field.
This results in a dramatic reduction in $e_{\rm st}$ at late times in the
simulation, when the kinetic energy is largely split between the large-scale kinetic
energy, $e_{\rm s}$, and the small-scale kinetic energy, $e_{\rm t}$.
In the simulations considered here,
the intermediate kinetic energy density $e_{\rm st}$
is therefore a useful diagnostic for the presence of outflows;
it clearly isolates an energy transfer of interest,
allowing insight into some important physical processes within the system.
It will therefore be of great interest to consider similar decompositions
of the energy densities in other contexts, where similar insights may be
possible.
\begin{figure*}
\centering
\includegraphics[width=0.92\textwidth]{fig8.pdf}
\caption{Horizontal slices of the smoothed total kinetic energy density
$\langle e_{\rm k} \rangle_{\ell}$ (top-left panel), the `intermediate-scale'
kinetic energy density $e_{\rm st}$ (top-right panel),
the vertical contribution $\langle u_{z} \rangle_{\ell} \, \mu(\rho, u_{z})$ to
$e_{\rm st}$ (bottom-left panel), all in units of $10^{-13} \,{\rm erg} \,{\rm cm}^{-3}$, and
the mean vertical velocity $\langle u_{z} \rangle_{\ell}$ (bottom-right panel)
in $\,{\rm km\,s^{-1}}$; at $z=290\,{\rm pc}$ from the snapshot $t=0.8\,{\rm Gyr}$. The smoothing length used
is $\ell=75\,{\rm pc}$.}
\label{fig:Figure8}
\end{figure*}
\bibliographystyle{mnras}
\section{Introduction \label{sec.intro}}
An {\em RNA sequence} is a string composed of four types of nucleotides, namely $A, C, G$, and $U$. Given an RNA sequence, the goal of the {\em RNA folding} problem is to find a maximum cardinality set of crossing-free pairs of nucleotides, where all the pairs are either $\{A,U\}$ or $\{C,G\}$. The problem is central in bioinformatics and has found applications in many areas of molecular biology. For a more comprehensive exposition of the topic, the reader is referred to e.g. \cite{S15}.
It is well-known that the problem can be solved in cubic time using a simple dynamic programming method \cite{DEKM98}. Due to the importance of RNA folding in practice, there has been a long line of research on improving the cubic time algorithm (see e.g. \cite{A99,FG10,PTZZ11,PZTZ13,S15,VGF13}). Currently the best upper bound is $\mathcal{O}\left(\frac{n^3}{\log^2 (n)}\right)$ \cite{PZTZ13,S15}, and this can be obtained via the Four-Russians method or fast min-plus multiplication (based on ideas from Valiant's CFG parser \cite{V75}).
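For reference, the following is a minimal sketch of this cubic-time recursion (a Nussinov-style dynamic program; for simplicity it omits the minimum loop-length constraint used in practical folders).
\begin{verbatim}
def rna_fold(s):
    # maximum number of crossing-free A-U / C-G pairs in s, O(n^3) time
    pair = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}
    n = len(s)
    dp = [[0] * n for _ in range(n)]   # dp[i][j]: optimum for s[i..j]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best = dp[i + 1][j]        # s[i] left unpaired
            for k in range(i + 1, j + 1):
                if (s[i], s[k]) in pair:   # s[i] paired with s[k]
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(rna_fold("ACGU"))  # 2: the pairs {A,U} and {C,G} are nested
\end{verbatim}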
Whether the RNA folding problem can be solved in $\mathcal{O}(n^{3-\epsilon})$ time for some $\epsilon > 0$ is still a major open problem. Apart from attempting to improve the upper bound, we can also approach the problem from the opposite direction, i.e.\ by showing a lower bound or arguing why the problem is hard.
A popular way to show hardness of a problem is to demonstrate a lower bound conditioned on some widely accepted hypothesis.
\begin{conjecture} [Strongly Exponential Time Hypothesis (SETH)] \label{c-1}
There exists no $\epsilon, k_0 > 0$ such that $k$-SAT with $n$ variables can be solved in time $\mathcal{O}(2^{(1-\epsilon)n})$ for all $k > k_0$.
\end{conjecture}
\begin{conjecture} \label{c-2}
There exists no $\epsilon, k_0 > 0 $ such that $k$-clique on graphs with $n$ nodes can be solved in time $\tilde{\mathcal{O}}\left(n^{(\omega - \epsilon) k/3}\right)$ for all $k > k_0$, where $\omega < 2.373$ is the matrix multiplication exponent.
\end{conjecture}
Assuming that SETH (Conjecture~\ref{c-1}) holds, the following bounds are unattainable for any $\epsilon > 0$:
\begin{itemize}
\item an $\mathcal{O}(n^{k-\epsilon})$ algorithm for the $k$-dominating set problem \cite{PR10},
\item an $\mathcal{O}(n^{2-\epsilon})$ algorithm for dynamic time warping, longest common subsequence, and edit distance \cite{ABV15*,BI14,BK15},
\item an $\mathcal{O}(m^{2-\epsilon})$ algorithm for ($3/2 - \epsilon$)-approximating the diameter of a graph with $m$ edges \cite{RV13}.
\end{itemize}
As remarked in \cite{ABV15}, it is easy to reduce the longest common subsequence problem on binary strings to the RNA folding problem as follows: Given two binary strings $X, Y$, we let $\hat{X} \in {\{A,C\}}^{|X|}$ be the string such that $\hat{X}[i] = A$ if $X[i] = 0$, $\hat{X}[i] = C$ if $X[i] = 1$, and we let $\hat{Y} \in {\{G,U\}}^{|Y|}$ be the string such that $\hat{Y}[i] = U$ if $Y[i] = 0$, $\hat{Y}[i] = G$ if $Y[i] = 1$. Then we have a 1-1 correspondence between RNA foldings of $\hat{X} \circ \hat{Y}^R$ (i.e. the concatenation of $\hat{X}$ and the reversal of $\hat{Y}$) and common subsequences of $X$ and $Y$. It has been shown in \cite{BK15} that there is no $\mathcal{O}(n^{2-\epsilon})$ algorithm for the longest common subsequence problem on binary strings conditioned on SETH, and we immediately get the same conditional lower bound for RNA folding from this simple reduction!
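This reduction is immediate to implement; the sketch below (in the same illustrative style as above) builds the sequence $\hat{X} \circ \hat{Y}^R$ from binary strings $X$ and $Y$.
\begin{verbatim}
def lcs_to_rna(x, y):
    # map binary strings X, Y to an RNA sequence whose optimal folding
    # size equals the length of a longest common subsequence of X and Y
    xhat = "".join("A" if c == "0" else "C" for c in x)
    yhat = "".join("U" if c == "0" else "G" for c in y)
    return xhat + yhat[::-1]   # X-part can pair only with the Y-part

s = lcs_to_rna("0110", "1010")
print(s)  # ACCAUGUG; its optimal folding has 3 pairs, matching LCS = 3
\end{verbatim}
Since $\hat{X}$ uses only $\{A,C\}$ and $\hat{Y}$ only $\{G,U\}$, no pairs can form within either half, and crossing-free cross pairs correspond exactly to common subsequences.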
Very recently, based on the conjectured hardness of the $k$-clique problem (Conjecture~\ref{c-2}), a higher conditional lower bound was proved for a generalized version of the RNA folding problem (which coincides with the RNA folding problem when the alphabet size is 4) \cite{ABV15}:
\begin{theorem} [\cite{ABV15}] \label{thm-1}
If the generalized RNA folding problem on sequences of length $n$ with alphabet size 36 can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 2} \log(n) \right)\right)$ time.
\end{theorem}
Therefore, an $\mathcal{O}(n^{\omega - \epsilon})$ time algorithm for generalized RNA folding with alphabet size at least 36 would disprove Conjecture~\ref{c-2}, yielding a breakthrough in the parameterized complexity of the clique problem.
However, the above theorem does not apply to the RNA folding problem that occurs in real life (which has alphabet size 4). It is unknown whether generalized RNA folding for alphabet size $4$ admits a faster algorithm than the case of alphabet size $> 4$. In fact, there are examples of string problems whose running time scales with the alphabet size (e.g. string matching with mismatches \cite{AL91} and jumbled indexing \cite{ACLL14,CL15}). We also note that when the alphabet size is 2, generalized RNA folding can be trivially solved in linear time.
In this paper, we improve upon Theorem~\ref{thm-1} by showing the same conditional lower bound for the RNA folding problem:
\begin{theorem} \label{thm-2}
If the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time.
\end{theorem}
Note that we also get an $\mathcal{O}(n)$ factor improvement inside $T(\cdot)$, though it does not affect the conditional lower bound.
The current state-of-the-art algorithm for $k$-clique, which takes $\tilde{\mathcal{O}}\left(n^{\omega k/3}\right)$ time, requires the use of fast matrix multiplication \cite{EG04}, which does not perform very efficiently in practice. Among combinatorial, non-algebraic algorithms for $k$-clique, the current best one runs in $\tilde{\mathcal{O}}\left(\frac{n^k}{\log^k (n)}\right)$ time \cite{V09}, which is only slightly better than the trivial approach. As a result, by Theorem~\ref{thm-2}, even an $\mathcal{O}(n^{3- \epsilon})$ time combinatorial algorithm for RNA folding would lead to an improvement in combinatorial algorithms for $k$-clique!
In the proof of Theorem~\ref{thm-1} in \cite{ABV15}, given a graph $G=(V,E)$, a sequence $S$ of length $\mathcal{O}(n^{k+2} \log(n))$ is constructed in such a way that one can decide whether $G$ has a $3k$-clique according to the number of pairs in an optimal generalized RNA folding of $S$.
Such a construction requires many different types of letters in order to build various ``walls'' which prevent undesired pairings between different parts of the sequence. Hence extending their approach to handle the case where the alphabet size is 4 may not be easy without aid from other techniques and ideas.
\bigskip
\noindent{\bf Overview of our approach.} At a high level, our reduction (from $3k$-clique problem to RNA folding problem) follows the approach in \cite{ABV15}: We enumerate all $k$-cliques, and each of them is encoded as some gadgets. All the gadgets are then put together to form an RNA sequence. The goal is to ensure that an optimal RNA folding corresponds to choosing three $k$-cliques that form a $3k$-clique, given that the underlying graph admits a $3k$-clique.
To achieve this goal without using extra types of letters to force the gadgets to match in the desired manner, we construct the gadgets via a key lemma in \cite{BK15}, whose original purpose was to prove that the longest common subsequence and other edit distance problems are SETH-hard even on binary strings. We will treat it as a black box and apply it multiple times during the construction. This powerful tool allows us to test whether two $k$-cliques form a $2k$-clique via the longest common subsequence of the two strings representing them.
In the final RNA sequence, all clique gadgets are well-separated by some carefully designed sequences whose purpose is to ``trap'' all the clique gadgets except three of them. Since we know that these three clique gadgets are guaranteed to match well if the graph has a $3k$-clique, we can infer whether the graph has a $3k$-clique from the optimal RNA folding of the RNA sequence.
\bigskip
\noindent{\bf Dyck Edit Distance.}
Another way to formulate the RNA folding problem is as follows: delete the minimum number of letters from a given string so as to transform it into a string in the language defined by the grammar $\mathbf{S} \rightarrow \mathbf{SS}, A\mathbf{S}U, U\mathbf{S}A, C\mathbf{S}G, G\mathbf{S}C, \epsilon$ (empty string); for a string of length $n$, this minimum number of deletions equals $n$ minus twice the maximum number of pairs in an RNA folding, since the surviving letters are exactly the paired ones. The {\em Dyck edit distance problem} \cite{S14,S15*}, which asks for the minimum number of edits needed to transform a given string into a well-balanced string of parentheses of $s$ different types, has a similar formulation. Due to this similarity, the same conditional lower bound as in Theorem~\ref{thm-1} was also shown for the Dyck edit distance problem (with alphabet size $\geq 48$) in \cite{ABV15}.
In this paper, we improve and simplify their result by demonstrating a simple reduction from RNA folding to the Dyck edit distance problem:
\begin{theorem}\label{thm-3}
If Dyck edit distance problem on sequences of length $n$ with alphabet size 10 can be solved in $T(n)$ time, then the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $\mathcal{O}(T(n))$ time.
\end{theorem}
Combining Theorems~\ref{thm-2} and~\ref{thm-3}, we get the following corollary:
\begin{corollary} \label{cor-1}
If the Dyck edit distance problem on sequences of length $n$ with alphabet size 10 can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time.
\end{corollary}
\section{Preliminaries \label{sec.prelim}}
Given a set of letters $\Sigma$, the set $\Sigma'$ is defined as $\{x' | x \in \Sigma\}$. We require that $\Sigma \cap \Sigma' = \emptyset$, and $\forall x, y \in \Sigma, (x \neq y) \rightarrow (x' \neq y')$. Therefore, we have $|\Sigma'| = |\Sigma|$ and $|\Sigma \cup \Sigma'| = 2|\Sigma|$.
For any $X = (x_1, \ldots, x_k) \in \Sigma^k$, we write $p(X)$ to denote $(x_1', \ldots, x_k')$ (the letter $p$ stands for the prime symbol). We denote the reversal of the sequence $X$ as $X^R$. The concatenation of two sequences $X, Y$ is denoted as $X \circ Y$ (or simply $XY$). We write {\em substring} to denote a contiguous subsequence.
\medskip
Two pairs of indices $(i_1,j_1)$, $(i_2,j_2)$, with $i_1 < j_1$ and $i_2 < j_2$, form a {\em crossing pair} iff
$$\left( \{i_1, j_1\} \cap \{i_2,j_2\} \neq \emptyset \right) \vee
\left( i_1 < i_2 < j_1 < j_2 \right) \vee
\left( i_2 < i_1 < j_2 < j_1\right).$$
\bigskip
\noindent{\bf Generalized RNA Folding.}
Given $S \in (\Sigma \cup \Sigma')^n$, the goal of the generalized RNA folding problem is to find a maximum cardinality set $A \subseteq \{(i,j) | 1 \leq i < j \leq n \}$ among all sets meeting the following conditions:
\begin{itemize}
\item $A$ does not contain any crossing pair.
\item For any $(i,j) \in A$, either (i) $S[i] \in \Sigma$ and $S[j] = S[i]'$, or (ii) $S[j] \in \Sigma$ and $S[i] = S[j]'$.
\end{itemize}
We write $\text{RNA}(S)$ to denote the cardinality of such a maximum set.
Any set meeting the above conditions is called an {\em RNA folding} of $S$. If its cardinality equals $\text{RNA}(S)$, then it is said to be {\em optimal}.
In this paper we focus only on the generalized RNA folding problem with four types of letters, i.e. $\Sigma = \{0,1\}, \Sigma' = \{0',1'\}$, which coincides with the RNA folding problem for the alphabet $\{A,C,G,U\}$.
With a slight abuse of notation, sometimes we will write $(S[i], S[j])$ to denote a pair $(i,j) \in A$. The notation $\{\cdot,\cdot\}$ is used to indicate an unordered pair.
\bigskip
\noindent{\bf Longest Common Subsequence (LCS).} Given $X \in \Sigma^n$ and $Y \in \Sigma^m$, we define $\delta_{\text{LCS}} (X,Y) = n+m - 2k$, where $k$ is the length of the longest common subsequence of $X$ and $Y$. It is easy to observe that $\text{RNA} (X \circ p(Y^R))$ equals the length of the LCS, i.e. $(n+m - \delta_{\text{LCS}} (X,Y))/2$. In this sense, we can view an LCS instance as an RNA folding instance with a structural constraint on the sequence.
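The following sketch verifies this observation on a small input, using a straightforward cubic-time dynamic program for generalized RNA folding over $\Sigma \cup \Sigma'$ (a primed letter is represented as a tuple; all names are our own illustrative choices).
\begin{verbatim}
# Check on small inputs that RNA(X \circ p(Y^R)) equals the LCS length
# (n + m - delta_LCS(X, Y)) / 2.  A letter is a pair (symbol, primed).
def gen_rna(S):
    n = len(S)
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best = dp[i + 1][j]
            for k in range(i + 1, j + 1):
                # complementary: same symbol, exactly one of them primed
                if S[i][0] == S[k][0] and S[i][1] != S[k][1]:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, left + 1 + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

def lcs(X, Y):
    dp = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else \
                               max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

X, Y = "0110", "0101"
S = [(c, False) for c in X] + [(c, True) for c in reversed(Y)]
assert gen_rna(S) == lcs(X, Y) == 3
\end{verbatim}
No pair can form inside the unprimed half or inside the primed half, so every pair crosses the two halves and an optimal folding is exactly a longest common subsequence.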
\bigskip
In \cite{BK15}, a conditional lower bound for the LCS problem with $|\Sigma| = 2$ based on SETH was presented. A key technique in their approach is a function that transforms an instance of an alignment problem between two sets of sequences into an instance of the LCS problem.
\bigskip
\noindent{\bf Alignments of two sets of sequences.} Let $\mathbf{X} = (X_1, \ldots, X_n)$ and $\mathbf{Y} = (Y_1, \ldots, Y_m)$ be two linearly ordered sets of sequences over an alphabet $\Sigma$. We assume that $n \geq m$. An {\em alignment} is a set $A = \{(i_1, j_1), (i_2, j_2), \ldots$, $(i_{|A|}, j_{|A|})\}$ with $1 \leq i_1 < i_2 < \ldots < i_{|A|} \leq n$ and $1 \leq j_1 < j_2 < \ldots < j_{|A|} \leq m$. An alignment $A$ is called {\em structural} iff $|A| = m$ and $i_{m} = i_1 + m - 1$. That is, all sequences in $\mathbf{Y}$ are matched, and the matched positions in $\mathbf{X}$ are contiguous. The set of all alignments is denoted as $\mathcal{A}_{n,m}$, and the set of all structural alignments is denoted as $\mathcal{S}_{n,m}$.
The {\em cost} of an alignment $A$ (with respect to $\mathbf{X}$ and $\mathbf{Y}$) is defined as:
$$ \delta(A) = \sum_{(i,j) \in A} \delta_{\text{LCS}}(X_i, Y_j) + (m - |A|) \max_{i,j} \delta_{\text{LCS}}(X_i, Y_j). $$
That is, unaligned parts of $\mathbf{Y}$ are penalized by $\max_{i,j} \delta_{\text{LCS}}(X_i, Y_j)$.
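To make the definitions of $\mathcal{A}_{n,m}$, $\mathcal{S}_{n,m}$, and $\delta(A)$ concrete, here is a brute-force reference sketch intended only for tiny inputs (it enumerates all alignments in exponential time; all names are our own).
\begin{verbatim}
# Brute force over alignments A_{n,m} (and structural ones S_{n,m}),
# only to make the cost definition concrete on tiny inputs.
from itertools import combinations

def delta_lcs(X, Y):
    dp = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else \
                               max(dp[i][j + 1], dp[i + 1][j])
    return len(X) + len(Y) - 2 * dp[-1][-1]

def min_cost(Xs, Ys, structural=False):
    n, m = len(Xs), len(Ys)
    penalty = max(delta_lcs(x, y) for x in Xs for y in Ys)
    best = float('inf') if structural else m * penalty  # empty alignment
    for size in range(1, m + 1):
        for rows in combinations(range(n), size):
            # structural: all of Y matched, contiguous rows of X
            if structural and not (size == m and rows[-1] == rows[0] + m - 1):
                continue
            for cols in combinations(range(m), size):
                cost = sum(delta_lcs(Xs[i], Ys[j])
                           for i, j in zip(rows, cols))
                best = min(best, cost + (m - size) * penalty)
    return best

print(min_cost(["01", "10"], ["10"]))                   # 0
print(min_cost(["01", "10"], ["10"], structural=True))  # 0
\end{verbatim}
Note that when $m = 1$, every non-empty alignment is structural; this simple fact is used later in Section~\ref{sec.reduction}.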
\medskip
Given a sequence $X$, the {\em type} of $X$ is defined as $(|X|, \sum_i X[i])$, where each letter is treated as a number. Note that when the alphabet is $\{0, 1\}$, $\sum_i X[i]$ is just the number of occurrences of $1$ in $X$.
The following key lemma was proved in \cite{BK15} (Lemma 4.3 of \cite{BK15}):
\begin{lemma} [\cite{BK15}] \label{lem-1}
Let $\mathbf{X} = (X_1, \ldots, X_n)$ and $\mathbf{Y} = (Y_1, \ldots, Y_m)$ be two linearly ordered sets of binary strings such that $n \geq m$, all $X_i$ are of type $\mathcal{T}_{X} = (\ell_{X}, s_{X})$, and all $Y_i$ are of type $\mathcal{T}_Y = (\ell_{Y}, s_{Y})$. There are two binary strings $S_X = \mathrm{GA}_{{X}}^{m, \mathcal{T}_{Y}}(X_1, \ldots, X_n)$, $S_Y = \mathrm{GA}_{{Y}}^{n, \mathcal{T}_{X}}(Y_1, \ldots, Y_m)$ and an integer $C$ meeting the following requirements:
\begin{itemize}
\item $\min_{A \in \mathcal{A}_{n,m}} \delta(A) \leq \delta_{\text{LCS}}(S_X, S_Y) - C \leq \min_{A \in \mathcal{S}_{n,m}} \delta(A)$.
\item The types of $S_X, S_Y$ and the integer $C$ only depend on $n, m, \mathcal{T}_{X}, \mathcal{T}_{Y}$.
\item $S_X, S_Y$, and $C$ can be calculated in time $\mathcal{O}((n+m)(\ell_X +\ell_Y))$ (hence $|S_X|$ and $|S_Y|$ are both $\mathcal{O}((n+m)(\ell_{X} +\ell_{Y}))$ ).
\end{itemize}
\end{lemma}
Note that the term $\mathrm{GA}$ comes from the word gadget.
Intuitively, computing an optimal alignment (or an optimal structural alignment) of two sets of sequences is at least as hard as computing a longest common subsequence. The above lemma gives a reduction from the computation of a number $s$ with $\min_{A \in \mathcal{A}_{n,m}} \delta(A) \leq s \leq \min_{A \in \mathcal{S}_{n,m}} \delta(A)$ (which can be regarded as an approximation of optimal alignments) to a single LCS instance.
We will use the above lemma as a black box to devise two encodings, the clique node gadget $\text{CNG}(t)$ and the clique list gadget $\text{CLG}(t)$, for a $k$-clique $t$ in a graph, in such a way that we can decide whether two $k$-cliques $t_1, t_2$ form a $2k$-clique according to the value of $\delta_{\text{LCS}} (\text{CNG}(t_1), \text{CLG}(t_2))$.
When invoking the lemma, $\mathbf{X}$, $\mathbf{Y}$ are designed in such a way that we can test whether a condition is met (e.g. whether two given $k$-cliques form a $2k$-clique) by the value of $\min_{A \in \mathcal{A}_{n,m}} \delta(A)$. We will show that $\min_{A \in \mathcal{A}_{n,m}} \delta(A) = \min_{A \in \mathcal{S}_{n,m}} \delta(A)$ for the case we are interested in. Therefore, we can infer whether the condition we are interested in is met from the value of $\delta_{\text{LCS}}(S_X, S_Y)$.
\section{From Cliques to RNA Folding \label{sec.reduction}}
The goal of this section is to prove Theorem~\ref{thm-2}.
Let $G = (V,E)$ be a graph, and let $n = |V|$. We write $\mathcal{C}_k$ to denote the set of $k$-cliques in $G$. We fix $\Sigma = \{0, 1\}$. As in \cite{ABV15}, we will construct a sequence $S_G \in (\Sigma \cup \Sigma')^\ast$ such that we can decide whether $G$ has a $3k$-clique according to the value of $\text{RNA}(S_G)$.
As our framework for the construction of $S_G$ is similar to the one in \cite{ABV15}, we will give the building blocks (for constructing $S_G$) the same names as their analogues in \cite{ABV15}, even though they may have different lower-level implementations.
\medskip
The high-level plan is described as follows:
In Section~\ref{ss-1} we describe two encodings $\text{CNG}(t), \text{CLG}(t)$ for a $k$-clique $t$ based on the black box described in Lemma~\ref{lem-1}. In Section~\ref{ss-2}, using these encodings as building blocks, we present the definition of the sequence $S_G$. We will give a lower bound on $\text{RNA}(S_G)$ by demonstrating an RNA folding of $S_G$, and the bound will depend on whether $G$ has a $3k$-clique.
The goal of the next two subsections is to show that the bound given in Section~\ref{ss-2} is actually the exact value of $\text{RNA}(S_G)$. In Section~\ref{ss-3}, we show that there exists an optimal RNA folding of $S_G$ meeting several constraints. These constraints will simplify the calculation of $\text{RNA}(S_G)$, and we will work out the exact calculation in Section~\ref{ss-4}.
\subsection{Testing $2k$-cliques via LCS \label{ss-1}}
We associate with each vertex $v \in V$ a distinct integer in $\{0, 1, \ldots, n-1\}$. Let $s_v$ be the binary encoding of this integer with $|s_v| = \lceil \log (n) \rceil$. We define $\bar{v}$ to be the binary string obtained by replacing each 0 in $s_v$ with 01 and each 1 in $s_v$ with 10. It is clear that for each $v \in V$, $\bar{v}$ is of type $\mathcal{T}_0 = (2\lceil \log (n) \rceil, \lceil \log (n) \rceil)$, and $\delta_{\text{LCS}} (\bar{u}, \bar{v}) = 0$ iff $u = v$.
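A small sketch of this encoding (our own illustrative code; \texttt{bar} is a hypothetical name):
\begin{verbatim}
# Sketch of the vertex encoding: write v in binary on ceil(log n) bits,
# then expand 0 -> 01 and 1 -> 10.  All codewords then share the type
# (2*ceil(log n), ceil(log n)), and two equal-length strings have
# delta_LCS = 0 iff they are identical.
import math

def bar(v, n):
    width = max(1, math.ceil(math.log2(n)))
    bits = format(v, 'b').zfill(width)
    return ''.join('01' if b == '0' else '10' for b in bits)

codes = [bar(v, 8) for v in range(8)]
assert len(set(codes)) == 8                                   # distinct
assert all(len(c) == 6 and c.count('1') == 3 for c in codes)  # same type
\end{verbatim}
The 01/10 expansion is what forces all codewords to have the same type, which is required when the codewords are fed into the black box of Lemma~\ref{lem-1}.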
In this subsection we present two encodings $\text{CNG}(t), \text{CLG}(t)$ for a $k$-clique $t$ such that we can infer whether two $k$-cliques $t_1, t_2$ form a $2k$-clique from the value of $\delta_{\text{LCS}} (\text{CNG}(t_1), \text{CLG}(t_2))$.
\medskip
For each $v \in V$, the {\em list gadget} $\textrm{LG}(v)$ and the {\em node gadget} $\textrm{NG}(v)$ are defined as follows:
\begin{itemize}
\item $\textrm{LG}(v) = \mathrm{GA}_{{X}}^{1, \mathcal{T}_0}(\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_{|N(v)|}, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}, \ldots, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil} )$,
where $N(v) = \{{u_1}, {u_2},$ $\ldots, {u_{|N(v)|}}\}$, and the number of occurrences of $1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}$ is $n - |N(v)|$.
\item $\textrm{NG}(v) = \mathrm{GA}_{{Y}}^{n, \mathcal{T}_0}(\bar{v})$.
\end{itemize}
\begin{lemma}\label{lem-2}
There is a constant $c_0$, depending only on $n$, such that for any $v_1, v_2 \in V$, we have $\{v_1, v_2\} \in E$ iff $\delta_{\text{LCS}}(\textrm{LG}(v_1), \textrm{NG}(v_2)) = c_0 = \min_{v_1', v_2' \in V} \delta_{\text{LCS}} (\textrm{LG}(v_1'), \textrm{NG}(v_2'))$.
\end{lemma}
\begin{proof}
We let $N(v_1) = \{{u_1, u_2, \ldots, u_{|N(v_1)|}}\}$.
Let $\mathbf{X} = (\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_{|N(v_1)|}, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}, \ldots, 1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil})$, where the number of occurrences of $1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil}$ is $n - |N(v_1)|$, and let $\mathbf{Y} = (\bar{v}_2)$.
In view of Lemma~\ref{lem-1}, we have $\min_{A \in \mathcal{A}_{n,1}} \delta(A) \leq \delta_{\text{LCS}}(\textrm{LG}(v_1)$, $\textrm{NG}(v_2)) - C \leq \min_{A \in \mathcal{S}_{n,1}} \delta(A)$, for some $C$ whose value depends on $|\mathbf{X}|, |\mathbf{Y}|$, and $\mathcal{T}_0$. As these parameters depend solely on $n$, the number $C$ only depends on $n$.
Since $|\mathbf{Y}|=1$, any non-empty alignment between $\mathbf{X}$ and $\mathbf{Y}$ is structural. This implies that $\delta_{\text{LCS}}(\textrm{LG}(v_1)$, $\textrm{NG}(v_2)) - C = \min_{A \in \mathcal{A}_{n,1}} \delta(A) = \min_{A \in \mathcal{S}_{n,1}} \delta(A)$.
When $\{v_1, v_2\} \in E$, since $\bar{v}_2$ is contained in $\mathbf{X}$, clearly $\min_{A \in \mathcal{S}_{n,1}} \delta(A) = 0$. When $\{v_1, v_2\} \not\in E$, $\bar{v}_2$ does not appear in $\mathbf{X}$, so $\min_{A \in \mathcal{S}_{n,1}} \delta(A) > 0$. Note that $1^{\lceil \log (n) \rceil}0^{\lceil \log (n) \rceil} \neq \bar{v}$ for any $v \in V$.
As a result, $\{v_1, v_2\} \in E$ iff $\delta_{\text{LCS}}(\textrm{LG}(v_1)$, $\textrm{NG}(v_2)) = C = \min_{v_1', v_2' \in V} \delta_{\text{LCS}} (\textrm{LG}(v_1'), \textrm{NG}(v_2'))$. Hence setting $c_0 = C$ suffices.
\qed
\end{proof}
We let $\mathcal{T}_X$ be the type of the list gadgets, and we let $\mathcal{T}_Y$ be the type of the node gadgets. For each $k$-clique $t = \{u_1, u_2, \ldots, u_k\}$, we define the {\em clique node gadget} $\textrm{CNG}(t)$ and the {\em clique list gadget} $\textrm{CLG}(t)$ as follows:
\begin{itemize}
\item $\textrm{CLG}(t) = \mathrm{GA}_{{X}}^{k^2, \mathcal{T}_Y}(\textrm{LG}(u_1), \ldots, \textrm{LG}(u_1), \textrm{LG}(u_2), \ldots, \textrm{LG}(u_2), \ldots, \textrm{LG}(u_k), \ldots, \textrm{LG}(u_k))$, where the number of occurrences of each $\textrm{LG}(u_i)$ is $k$.
\item $\textrm{CNG}(t) = \mathrm{GA}_{{Y}}^{k^2, \mathcal{T}_X}(
\textrm{NG}(u_1), \textrm{NG}(u_2), \ldots, \textrm{NG}(u_k),
\textrm{NG}(u_1), \textrm{NG}(u_2), \ldots, \textrm{NG}(u_k),
\ldots$,
$\textrm{NG}(u_1)$,
$\textrm{NG}(u_2), \ldots$, $\textrm{NG}(u_k)
)$, where the number of occurrences of each $\textrm{NG}(u_i)$ is $k$.
\end{itemize}
We are ready to prove the main lemma of this subsection:
\begin{lemma}\label{lem-3}
There is a constant $c_1$, depending only on $n, k$, such that for any $t_1, t_2 \in \mathcal{C}_k $ , $t_1 \cup t_2$ is a $2k$-clique iff $\delta_{\text{LCS}} (\textrm{CLG}(t_1), \textrm{CNG}(t_2)) = c_1 = \min_{t_1', t_2' \in \mathcal{C}_k} \delta_{\text{LCS}} (\textrm{CLG}(t_1'), \textrm{CNG}(t_2'))$.
\end{lemma}
\begin{proof}
Let $t_1 = \{u_1, u_2, \ldots, u_k\}$, and let $t_2 = \{v_1, v_2, \ldots, v_k\}$.
Let $\mathbf{X} = (\textrm{LG}(u_1), \ldots, \textrm{LG}(u_1), \textrm{LG}(u_2), \ldots, \textrm{LG}(u_2), \ldots, \textrm{LG}(u_k), \ldots, \textrm{LG}(u_k))$, where each $\textrm{LG}(u_i)$ appears $k$ times, and let $\mathbf{Y} = (
\textrm{NG}(v_1), \textrm{NG}(v_2), \ldots, \textrm{NG}(v_k),
\textrm{NG}(v_1)$, $\textrm{NG}(v_2)$, $\ldots$, $\textrm{NG}(v_k)$,
$\ldots$,
$\textrm{NG}(v_1)$, $\textrm{NG}(v_2)$, $\ldots$, $\textrm{NG}(v_k)
)$, where each $\textrm{NG}(v_i)$ appears $k$ times.
In view of Lemma~\ref{lem-2}, we have $\min_{{w_1}, {w_2} \in V} \delta_{\text{LCS}} (\textrm{LG}({w_1}), \textrm{NG}({w_2})) \geq c_0$, so we can lower bound $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A)$ by $k^2 c_0$.
If $\max_{i,j} \delta_{\text{LCS}}(X_i, Y_j) = c_0$, any alignment has cost $k^2 c_0$. When $\max_{i,j} \delta_{\text{LCS}}(X_i, Y_j) > c_0$, it is easy to observe that in order to achieve $\delta(A) = k^2 c_0$, all sequences in $\mathbf{Y}$ must be aligned (as the cost for any unaligned sequence in $\mathbf{Y}$ is now $> c_0$). Therefore, any alignment $A$ with $\delta(A) = k^2 c_0$ must be $A = \{(i, i)| i \in \{1, 2, \ldots, k^2\}\}$ with $\delta_{\text{LCS}}(X_i, Y_i) = c_0$, for all $i \in \{1, 2, \ldots, k^2\}$.
In view of the above, $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$ iff $\delta_{\text{LCS}}(X_i, Y_i) = c_0$ for all $i \in \{1, 2, \ldots, k^2\}$.
Since $A = \{(i, i)| i \in \{1, 2, \ldots, k^2\}\}$ is structural, $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$ iff $\min_{A \in \mathcal{S}_{k^2,k^2}} \delta(A) = k^2 c_0$. Therefore, in view of Lemma~\ref{lem-1}, there exists a constant $C$ such that:
\begin{itemize}
\item If $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$, then $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) = k^2 c_0 + C$.
\item If $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) > k^2 c_0$, then $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) > k^2 c_0 + C$.
\end{itemize}
Moreover, the value of $C$ depends only on $|\mathbf{X}|, |\mathbf{Y}|$, $\mathcal{T}_X, \mathcal{T}_Y$. As these parameters depend solely on $n, k$, the number $C$ only depends on $n, k$.
When $t_1 \cup t_2$ is a $2k$-clique, all vertices in $t_1$ are adjacent to all vertices in $t_2$. In view of Lemma~\ref{lem-2}, $\forall_{i,j} \delta_{\text{LCS}}(X_i, Y_j) = c_0$. Hence $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) = k^2 c_0$, implying that $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) = k^2 c_0 + C$.
When $t_1 \cup t_2$ is not a $2k$-clique, there exist $u_i \in t_1, v_j \in t_2$ such that $\{u_i,v_j\} \not\in E$. According to our definition of $\mathbf{X}$ and $\mathbf{Y}$, we have $X_{j+k(i-1)} = \textrm{LG}(u_i)$, $Y_{j+k(i-1)} = \textrm{NG}(v_j)$, and hence $\delta_{\text{LCS}}(X_{j+k(i-1)}, Y_{j+k(i-1)}) > c_0$. This implies that $\min_{A \in \mathcal{A}_{k^2,k^2}} \delta(A) > k^2 c_0$, which leads to $\delta_{\text{LCS}} (\textrm{CLG}({t_1})$, $\textrm{CNG}({t_2})) > k^2 c_0 + C$.
As a result, $t_1 \cup t_2$ is a $2k$-clique iff $\delta_{\text{LCS}} (\textrm{CLG}(t_1)$, $\textrm{CNG}(t_2)) = k^2 c_0 + C = \min_{t_1', t_2' \in \mathcal{C}_k} \delta_{\text{LCS}} ($ $\textrm{CLG}(t_1'), \textrm{CNG}(t_2'))$. Setting $c_1 = k^2 c_0 + C$ suffices.
\qed
\end{proof}
The following lemma is a simple consequence of Lemma~\ref{lem-1}:
\begin{lemma}\label{lem-length}
There exist four integers $\ell_{\textrm{CNG}, 0}$, $\ell_{\textrm{CNG}, 1}$, $\ell_{\textrm{CLG}, 0}$, $\ell_{\textrm{CLG}, 1} \in \mathcal{O}(k^2 n \log (n))$, such that for any $t \in \mathcal{C}_k$,
\begin{itemize}
\item $\ell_{\textrm{CNG}, b} = $ the number of occurrences of $b$ in $\textrm{CNG}(t)$, $b \in \{0,1\}$.
\item $\ell_{\textrm{CLG}, b} = $ the number of occurrences of $b$ in $\textrm{CLG}(t)$, $b \in \{0,1\}$.
\end{itemize}
\end{lemma}
\begin{proof}
As a consequence of Lemma~\ref{lem-1}, all $\textrm{CNG}(t)$ have the same type, and all $\textrm{CLG}(t)$ have the same type. Therefore, the existence of these four integers is guaranteed.
In view of Lemma~\ref{lem-1}, for all $v \in V$, both $\textrm{LG}(v)$ and $\textrm{NG}(v)$ have length at most $(n+1) \cdot (2 \lceil \log (n) \rceil + 2 \lceil \log (n) \rceil ) = \mathcal{O} (n \log (n))$. Applying Lemma~\ref{lem-1} again, the length of both $\textrm{CNG}(t)$ and $\textrm{CLG}(t)$ for all $t \in \mathcal{C}_k$ is $(k^2 + k^2)(\mathcal{O} (n \log (n)) + \mathcal{O} (n \log (n))) = \mathcal{O}(k^2 n \log (n))$.
As a result, the four integers can be bounded by $\mathcal{O}(k^2 n \log (n))$.
\qed
\end{proof}
\subsection{The RNA sequence $S_G$ \label{ss-2}}
Based on the parameters in Lemma \ref{lem-length}, we define $\ell_0 = \ell_{\textrm{CNG}, 0} + \ell_{\textrm{CNG}, 1} +\ell_{\textrm{CLG}, 0} +\ell_{\textrm{CLG}, 1} = \mathcal{O}(k^2 n \log (n))$; for $i \in \{1,2,3\}$, we set $\ell_i = 100 \ell_{i-1}$; and $\ell_4 = 100 |\mathcal{C}_k| \ell_3 = \mathcal{O}(k^2 n^{k+1} \log (n)) $.
\medskip
The RNA sequence $S_G$ is then defined as following:
$$S_G = 0^{\ell_4} \left[{0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_\alpha(t){0'}^{\ell_3} \right) \right]
0^{\ell_4} \left[{0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_\beta(t) {0'}^{\ell_3} \right)\right]
0^{\ell_4} \left[{0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_\gamma(t) {0'}^{\ell_3} \right)\right],$$
where
\begin{align*}
\textrm{CG}_\alpha(t) &= {1'}^{2 \ell_2} p({\textrm{CLG}(t)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t) {1}^{\ell_2},\\
\textrm{CG}_\beta(t) &= {1'}^{\ell_2} p({\textrm{CLG}(t)}^R) {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1} p(\textrm{CNG}(t)) {1'}^{\ell_2},\\
\textrm{CG}_\gamma(t) &= {1}^{\ell_2} {\textrm{CLG}(t)}^R {0}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t) {1}^{2\ell_2}.
\end{align*}
For any $t \in \mathcal{C}_k$, $x \in \{\alpha, \beta, \gamma\}$, the string $\textrm{CG}_x(t)$ is called a {\em clique gadget}.
Note that $\textrm{CG}_\alpha(t) \in {(\Sigma \cup \Sigma')}^\ast$, $\textrm{CG}_\beta(t) \in {\Sigma'}^\ast$, and $\textrm{CG}_\gamma(t) \in {\Sigma}^\ast$.
It is obvious that $|S_G| = \mathcal{O}(|\mathcal{C}_k| \ell_0) = \mathcal{O}(k^2 n^{k+1} \log (n) )$.
\medskip
Before proceeding further, we explain some intuition behind the definition of $S_G$, and we give a simple lower bound on $\text{RNA}(S_G)$ by constructing an RNA folding as follows:
\begin{itemize}
\item The pairings between letters in some ${0'}^{\ell_3}$ and some ${0}^{\ell_4}$ sometimes make a clique gadget unable to participate in the RNA folding with other clique gadgets. In this sense, a clique gadget is said to be ``blocked'' if the letters within the clique gadget only pair up with other letters within the same clique gadget or some $0$ in a ${0}^{\ell_4}$.
Let's try linking all the $0'$ in all ${0'}^{\ell_3}$ to some $0$ in some ${0}^{\ell_4}$ in such a way that all clique gadgets are blocked except $\textrm{CG}_\alpha(t_\alpha)$, $\textrm{CG}_\beta(t_\beta)$, and $\textrm{CG}_\gamma(t_\gamma)$. This gives us $3(|\mathcal{C}_k|+1) \ell_3$ pairs. See Fig.~\ref{fig-1}.
\item For a clique gadget that is ``blocked'', our design of $S_G$ ensures that the number of pairs involving letters in the clique gadget (in certain optimal RNA foldings) does not depend on its corresponding $k$-clique (we will prove this later):
\begin{itemize}
\item For a blocked $\textrm{CG}_\alpha(t)$, since $\ell_2$ is significantly larger than $\ell_1, \ell_0$, an optimal way to pair up the letters is to match as many $\{1', 1\}$ as possible. This gives us $2\ell_2 + \min(\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})$ pairs.
\item For a blocked $\textrm{CG}_\beta(t)$, since we do not have any 1 here, the best we can do is to match all $0'$ to some ${0}^{\ell_4}$. This gives us $2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0}$ pairs.
\item For a blocked $\textrm{CG}_\gamma(t)$, no matching can be made.
\end{itemize}
Therefore, the total number of pairs involving blocked clique gadgets is $(|\mathcal{C}_k|-1)( 2 \ell_1 + 2 \ell_2
+ \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})
+ \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} )$. See Fig.~\ref{fig-2} for an illustration.
\item For the three clique gadgets that are not blocked, we will later see that (in certain optimal RNA foldings) $\textrm{CG}_\alpha(t_\alpha), \textrm{CG}_\beta(t_\beta), \textrm{CG}_\gamma(t_\gamma)$ correspond to a $3k$-clique if the graph has one. It is a simple exercise to construct an RNA folding for $\textrm{CG}_\alpha(t_\alpha) \circ \textrm{CG}_\beta(t_\beta) \circ \textrm{CG}_\gamma(t_\gamma)$ that uses up all the ${1'}^{2 \ell_2}, {1}^{2 \ell_2}, {1'}^{\ell_2}, {1}^{\ell_2}, {0'}^{ \ell_1}, {0}^{\ell_1}$ and has cardinality $6 \ell_2 + 3 \ell_1 +
\frac{1}{2} \left( \ell_0 - \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) \right) +
\frac{1}{2} \left( \ell_0 - \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) \right) +
\frac{1}{2} \left( \ell_0 - \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma)) \right)$.
Recall that $\frac{1}{2} ( \ell_0 - \delta_{\text{LCS}} ($ $\textrm{CLG}(t_x), \textrm{CNG}(t_y)) )$ is the length of the LCS between $\textrm{CLG}(t_x)$ and $\textrm{CNG}(t_y)$. See Fig.~\ref{fig-3} for an illustration.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{fig-1}
\end{center}
\caption{The three selected clique gadgets and the matchings between ${0'}^{\ell_3}$ and ${0}^{\ell_4}$.
} \label{fig-1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{fig-2}
\end{center}
\caption{The matchings between a blocked clique gadget and ${0}^{\ell_4}$.
} \label{fig-2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{fig-3}
\end{center}
\caption{The matchings within the three selected clique gadgets.
} \label{fig-3}
\end{figure}
In light of the above discussion, we define:
\begin{itemize}
\item $m_1 = 3(|\mathcal{C}_k| +1) \ell_3 + (|\mathcal{C}_k|-1) ( 2 \ell_1 + 2 \ell_2
+ \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})
+ \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} )$,
\item $m_2 = 6 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0
- \min_{t_\alpha, t_\beta, t_\gamma \in \mathcal{C}_k} \frac{1}{2}(
\delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) +
\delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) +
\delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma))
)$.
\end{itemize}
The next lemma, which gives a lower bound on $\text{RNA}(S_G)$, is then implied instantly by the above discussion.
\begin{lemma}\label{lem-5}
$\text{RNA}(S_G) \geq m_1 + m_2$.
\end{lemma}
Ultimately we will show that $\text{RNA}(S_G) = m_1 + m_2$, and this offers enough information to decide whether $G$ has a $3k$-clique: by Lemma~\ref{lem-3}, each term $\delta_{\text{LCS}} (\textrm{CLG}(\cdot), \textrm{CNG}(\cdot))$ in the definition of $m_2$ is at least $c_1$, with equality iff the two corresponding $k$-cliques form a $2k$-clique, so the minimum in the definition of $m_2$ equals $3 c_1$ iff $G$ has a $3k$-clique.
The following lemma calculates $\text{RNA}(\cdot)$ for several sequences; these values will be useful in the subsequent discussion.
\begin{lemma}\label{lem-4}
The following statements are true for any $t,t' \in \mathcal{C}_k$:
\begin{enumerate}
\item $\text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) ) = 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) $
\item $\text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) ) = 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} $
\item $\text{RNA}( 0^{\ell_4} \textrm{CG}_\gamma(t) ) = 0 $
\item $\text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) 0^{\ell_4} \textrm{CG}_\beta(t') ) \leq 3.1 \ell_1 + 2 \ell_2$
\item $\text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) 0^{\ell_4} \textrm{CG}_\gamma(t') ) \leq 1.1 \ell_1 + 2 \ell_2$
\item $\text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) 0^{\ell_4} \textrm{CG}_\gamma(t') ) \leq 1.1 \ell_1 + 4 \ell_2 $
\end{enumerate}
\end{lemma}
\begin{proof}
The values of $\text{RNA}(\cdot)$ for the six sequences are calculated as follows:
\begin{enumerate}
\item Linking as many $1$ to $1'$ as possible gives a matching of size $m = 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})$. To see that this is optimal, it suffices to show that neither $(0',0)$ nor $(0,0')$ can appear in an optimal RNA folding:
\begin{itemize}
\item If the RNA folding contains $(0,0')$, then no $1'$ can participate in the RNA folding. As the total number of $0'$ is $\ell_1 + \ell_{\textrm{CLG}, 0}$, the size of the RNA folding is at most $\ell_1 + \ell_{\textrm{CLG}, 0} < m$.
\item If the RNA folding contains $(0',0)$, then at most $\ell_{\textrm{CLG}, 1}$ letters within the middle ${1}^{\ell_2}$ (the one between ${0'}^{\ell_1}$ and ${0}^{\ell_1}$) can participate in the RNA folding. This implies that the number of $(1',1)$ pairs in the RNA folding is at most $\ell_{\textrm{CLG}, 1} + \ell_2$. Hence the size of the RNA folding can be upper bounded by $(\ell_1 + \ell_{\textrm{CLG}, 0}) + (\ell_{\textrm{CLG}, 1} + \ell_2) < m$.
\end{itemize}
\item Since there is no $1$, the equation follows from the fact that there are $ 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0}$ occurrences of $0'$, all of which can be matched to some $0$ without crossing.
\item No pair can be made, since there is no $0'$ or $1'$.
\item The value of $\text{RNA}(\cdot)$ can be upper bounded by the number of $1$ and $0'$. This is $(2 \ell_2 + \ell_{\textrm{CNG}, 1}) + (3 \ell_1 + 2\ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0}) \leq 3.1 \ell_1 + 2 \ell_2$.
\item The value of $\text{RNA}(\cdot)$ can be upper bounded by the number of $1'$ and $0'$. This is $(2 \ell_2 + \ell_{\textrm{CLG}, 1}) + (\ell_1 + \ell_{\textrm{CLG}, 0} ) \leq 1.1 \ell_1 + 2 \ell_2$.
\item We define $S = 0^{\ell_4} \circ \left( {1'}^{\ell_2} {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1}{1'}^{\ell_2} \right) \circ 0^{\ell_4} \circ \left( {1}^{\ell_2} {0}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} {1}^{2\ell_2} \right)$, which is the result of removing the clique node gadgets and the clique list gadgets from $0^{\ell_4} \textrm{CG}_\beta(t) 0^{\ell_4} \textrm{CG}_\gamma(t')$. It is clear that $\text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) 0^{\ell_4} \textrm{CG}_\gamma(t') ) \leq 0.1 \ell_1 + \text{RNA}(S)$, as the total length of the removed substrings can be upper bounded by $0.1 \ell_1$. Therefore, it suffices to show that $\text{RNA}(S) \leq \ell_1 + 4 \ell_2$.
Let $A$ be any RNA folding of $S$:
\begin{itemize}
\item Case: there are some $(0,0')\in A$ where the $0'$ comes from the first ${0'}^{\ell_1}$ in $S$. Clearly, the first substring $ {1'}^{\ell_2}$ cannot participate in any pairing. Therefore, $|A| \leq |{0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1}{1'}^{\ell_2}| = 2 \ell_1 + 3 \ell_2 < \ell_1 + 4 \ell_2$.
\item Case: there are some $(0',0)\in A$ where the $0'$ comes from the first ${0'}^{\ell_1}$ in $S$. In this situation, at most half of the ${1}^{2\ell_2}$ can participate in the RNA folding, since only the first ${1'}^{\ell_2}$ in $S$ is reachable from ${1}^{2\ell_2}$ without crossing a pair $(0',0)$. Therefore, $|A|$ is at most the total number of $0'$ and $1$ in $S$ minus $\ell_2$, i.e. $|A| \leq 2 \ell_1 + 3 \ell_2 < \ell_1 + 4 \ell_2$ .
\item Case: the first ${0'}^{\ell_1}$ in $S$ does not participate in the RNA folding. Then, $|A| \leq |{1'}^{\ell_2} {1'}^{2\ell_2} {0'}^{\ell_1}{1'}^{\ell_2}|$ $= \ell_1 + 4 \ell_2$.
\end{itemize}
\end{enumerate}
\qed
\end{proof}
Note that (1), (2), (3) in Lemma~\ref{lem-4} imply that the RNA folding for blocked clique gadgets described in Fig.~\ref{fig-2} is optimal, and that the optimal number of pairings does not depend on the corresponding $k$-clique.
\subsection{Optimal RNA foldings of $S_G$ \label{ss-3}}
In the previous subsection, we described an RNA folding of $S_G$ containing $m_1 + m_2$ pairs. The two key properties of this RNA folding are:
\bigskip
\noindent {\em Property 1.} All $0'$ in all ${0'}^{\ell_3}$ are paired up with some $0$ in some $0^{\ell_4}$.
\bigskip
\noindent {\em Property 2.} All clique gadgets are ``blocked'' by the pairings between ${0'}^{\ell_3}$ and $0^{\ell_4}$, except the three clique gadgets: $\textrm{CG}_\alpha(t_\alpha), \textrm{CG}_\beta(t_\beta), \textrm{CG}_\gamma(t_\gamma)$, for some $t_\alpha,t_\beta,t_\gamma \in \mathcal{C}_k$.
\bigskip
The goal of this subsection is to show that there is an optimal RNA folding having the above two properties; this will facilitate the calculation of $\text{RNA}(S_G)$ in the next subsection.
\begin{lemma}\label{lem-6}
For any RNA folding $A$ of $S_G$, if there exists a pair linking a $0'$ in a specific ${0'}^{\ell_3}$ (denoted as $S_1$) to a $0$ in a specific $0^{\ell_4}$ (denoted as $S_2$), then there exists another RNA folding $A'$ with $|A'| \geq |A|$ where all letters in $S_1$ are linked to some letters in $S_2$.
\end{lemma}
\begin{proof}
This follows immediately from the fact that $\ell_4$ is greater than the total number of $0'$ in $S_G$, which makes it possible to rematch all the letters in $S_1$ to letters in $S_2$.
\qed
\end{proof}
Lemma~\ref{lem-7} ensures that there is an optimal RNA folding having Property 1:
\begin{lemma}\label{lem-7}
There is an optimal RNA folding $A$ of $S_G$ having Property 1.
\end{lemma}
\begin{proof}
Let's choose any RNA folding $A$ of $S_G$ with $|A| = \text{RNA}(S_G)$. In view of Lemma~\ref{lem-6}, we can assume that for each ${0'}^{\ell_3}$ in $S_G$, either all of its letters are matched to $0$s in the same $0^{\ell_4}$, or none of its letters is matched to any $0$ in any $0^{\ell_4}$. Let $z$ denote the number of ${0'}^{\ell_3}$ none of whose letters is matched to any $0$ in any $0^{\ell_4}$.
For some $t \in \mathcal{C}_k$, and for some $x \in \{\alpha, \beta, \gamma\}$, $\textrm{CG}_x(t)$ is said to be ``trapped'' in $A$ if all letters within $\textrm{CG}_x(t)$ are either unmatched, matched to letters within $\textrm{CG}_x(t)$, or matched to letters in some $0^{\ell_4}$.
We note that a sufficient condition for $\textrm{CG}_x(t)$ to be trapped is that the letters in its two neighboring ${0'}^{\ell_3}$ are all matched to the same $0^{\ell_4}$. The cases where this condition is violated are enumerated as follows:
\begin{enumerate}
\item The two neighboring ${0'}^{\ell_3}$ of $\textrm{CG}_x(t)$ are matched to different $0^{\ell_4}$, and this occurs at most $2|\{\alpha, \beta, \gamma\}| = 6$ times (i.e. at most two times per $x \in \{\alpha, \beta, \gamma\}$).
\item A neighboring ${0'}^{\ell_3}$ of $\textrm{CG}_x(t)$ is not matched to any $0^{\ell_4}$, and this occurs at most $2z$ times.
\end{enumerate}
Therefore, the number of clique gadgets that are not trapped in $A$ is at most $6 + 2z$.
Using this information, we can derive an upper bound on $|A|$:
\begin{align*}
|A| & \leq (3(|\mathcal{C}_k|+ 1 ) - z) \ell_3 & (\text{matched }{0'}^{\ell_3}) \\
&\hspace{0.4cm}+
|\mathcal{C}_k| \bigg(
\max_{t \in \mathcal{C}_k } \text{RNA}( 0^{\ell_4} \textrm{CG}_\alpha(t) )
+ \max_{t\in \mathcal{C}_k} \text{RNA}( 0^{\ell_4} \textrm{CG}_\beta(t) ) & (\text{trapped clique gadgets}) \\
& \hspace{1.4cm} + \max_{t\in \mathcal{C}_k} \text{RNA}( 0^{\ell_4} \textrm{CG}_\gamma(t) ) \bigg)\\
&\hspace{0.4cm} + (6 + 2z) \max_{t\in \mathcal{C}_k , x \in \{\alpha, \beta, \gamma\}} |\textrm{CG}_x(t)|. & (\text{remaining clique gadgets})
\end{align*}
In view of the calculation in Lemma~\ref{lem-4}, $|A|$ is at most
$$m_1 - z \ell_3 +
\left( 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) + 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} \right) +
(6 +2z) \max_{t, x}|\textrm{CG}_x(t)|.$$
Since $ 2 \ell_2 + \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1}) + 2 \ell_1 + \ell_{\textrm{CLG}, 0} + \ell_{\textrm{CNG}, 0} < 0.1\ell_3$, and since the length of a clique gadget $< 0.1\ell_3$, we have:
$$|A| < m_1 - 0.8 z \ell_3 + 0.7 \ell_3.$$
Therefore, $|A| < m_1 < \text{RNA}(S_G)$ if $z > 0$. Hence we must have $z = 0$, i.e. all $0'$ in all ${0'}^{\ell_3}$ are paired up with some $0$ in some $0^{\ell_4}$.
\qed
\end{proof}
To proceed further, some terminology is needed to formally state Property 2:
\begin{definition}
Let $A$ be an RNA folding of a sequence where $S_1, S_2$ are two substrings (subsequences of consecutive elements). We write $S_1 \overset{A}\longleftrightarrow S_2$ iff
\begin{itemize}
\item there exists $\{x_1, x_2\} \in A$ with $x_1 \in S_1, x_2 \in S_2$.
\item $S_1, S_2$ are disjoint substrings.
\end{itemize}
\end{definition}
For example, ``$\textrm{CG}_x(t_{1})$ is blocked in $A$'' is equivalent to ``there exist no $y \in \{\alpha, \beta, \gamma\}$, $t_2 \in \mathcal{C}_k$ such that $\textrm{CG}_x(t_{1}) \overset{A}\longleftrightarrow \textrm{CG}_y(t_{2})$''.
\begin{definition}
$\mathcal{M}_\alpha$ is defined as the set of RNA foldings of $S_G$ such that $A \in \mathcal{M}_\alpha$ iff
\begin{itemize}
\item $A$ has Property 1, and
\item
there exist $t_{\alpha,1}, t_{\alpha,2}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ such that $t_{\alpha,1} \neq t_{\alpha,2}$, and for any $t_1, t_2 \in \mathcal{C}_k$, $\{u_1, u_2\} \subseteq \{\alpha, \beta, \gamma\}$, $\textrm{CG}_{u_1}(t_1) \overset{A}\longleftrightarrow \textrm{CG}_{u_2}(t_2)$ implies that $\{(u_1, t_1), (u_2,t_2)\} \in \{ \{(\alpha,t_{\alpha,1}),(\beta, t_\beta)\},\{(\alpha,t_{\alpha,2}),(\gamma, t_\gamma)\} \}$.
\end{itemize}
$\mathcal{M}_\beta$ and $\mathcal{M}_\gamma$ are defined analogously.
\end{definition}
\begin{definition}
$\mathcal{M}_{\alpha, \beta, \gamma}$ is defined as the set of RNA foldings of $S_G$ such that $A \in \mathcal{M}_{\alpha, \beta, \gamma}$ iff
\begin{itemize}
\item $A$ has Property 1, and
\item
there exist $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ such that for any $t_1, t_2 \in \mathcal{C}_k$, $\{u_1, u_2\} \subseteq \{\alpha, \beta, \gamma\}$, $\textrm{CG}_{u_1}(t_1) \overset{A}\longleftrightarrow \textrm{CG}_{u_2}(t_2)$ implies that $\{(u_1, t_1), (u_2,t_2)\} \subseteq \{(\alpha,t_{\alpha}),(\beta, t_\beta), (\gamma, t_\gamma)\}$.
\end{itemize}
\end{definition}
Using the above notions, it is clear that $A \in \mathcal{M}_{\alpha, \beta, \gamma}$ iff $A$ has both Property 1 and Property 2. In the remainder of the subsection, we will prove that there exists an optimal RNA folding of $S_G$ that belongs to $\mathcal{M}_{\alpha, \beta, \gamma}$.
To ease the notation, for each $x \in \{\alpha, \beta, \gamma\}$, we call $\textrm{CG}_x(t)$ an ``$x$ clique gadget'', for all $t \in \mathcal{C}_k$; we write ``$C_1$ and $C_2$ are {\em linked} (in $A$)'' to denote $C_1 \overset{A}\longleftrightarrow C_2$.
\begin{lemma}\label{lem-8}
Let $A$ be an optimal RNA folding of $S_G$ having Property 1. For any $x \in \{\alpha, \beta, \gamma\}$, there do not exist two $x$ clique gadgets $C_1, C_2$ such that $C_1 \overset{A}\longleftrightarrow C_2$.
\end{lemma}
\begin{proof}
There is a substring ${0'}^{\ell_3}$ located between $C_1$ and $C_2$. The existence of a pair in $A$ linking a letter in $C_1$ and a letter in $C_2$ makes it impossible for any letter in this ${0'}^{\ell_3}$ to be matched to any letter in any $0^{\ell_4}$, a contradiction.
\qed
\end{proof}
\begin{lemma}\label{lem-9}
Let $A$ be an optimal RNA folding of $S_G$ having Property 1. For any $\{x,y\} \in \{ \{\alpha, \beta\}$, $\{\alpha, \gamma\}, \{\beta, \gamma\}\}$, there do not exist two distinct $x$ clique gadgets $C_1, C_2$ and two (not necessarily distinct) $y$ clique gadgets $C_3, C_4$ such that $C_1 \overset{A}\longleftrightarrow C_3$ and $C_2 \overset{A}\longleftrightarrow C_4$.
\end{lemma}
\begin{proof}
Clearly there must be a substring ${0'}^{\ell_3}$ located between $C_1$ and $C_2$. However, since $C_1 \overset{A}\longleftrightarrow C_3$ and $C_2 \overset{A}\longleftrightarrow C_4$, letters in the substring ${0'}^{\ell_3}$ can only be matched to letters in $C_1, C_2, C_3, C_4$, letters between $C_1, C_2$, and letters between $C_3, C_4$. This contradicts Property 1.
\qed
\end{proof}
\begin{lemma}\label{lem-10}
Let $A$ be an optimal RNA folding of $S_G$ having Property 1.
For any $x \in \{\alpha, \beta, \gamma\}$, suppose that there exist two distinct $x$ clique gadgets $C_1, C_2$ such that $C_1 \overset{A}\longleftrightarrow C_3$ and $C_2 \overset{A}\longleftrightarrow C_4$ for some clique gadgets $C_3, C_4$.
Then no other pair of clique gadgets is linked in $A$.
\end{lemma}
\begin{proof}
Let $y, z \in \{\alpha, \beta, \gamma\}$ such that $C_3$ is a $y$ clique gadget, and $C_4$ is a $z$ clique gadget. By Lemma~\ref{lem-8} and Lemma~\ref{lem-9}, $x, y, z$ must be distinct.
Suppose that there exist two clique gadgets $C_5, C_6$ that are linked in $A$ such that $\{C_5, C_6\} \not\in \{\{C_1, C_3\},\{C_2, C_4\}\}$. We show that this leads to a contradiction.
First of all, none of $C_5, C_6$ can be an $x$ clique gadget. Suppose that $C_5$ is an $x$ clique gadget. Then by Lemma~\ref{lem-8}, $C_6$ is either a $y$ clique gadget or a $z$ clique gadget. In any case, Lemma~\ref{lem-9} is violated.
Therefore, we can (without loss of generality) assume that $C_5$ is a $y$ clique gadget, and $C_6$ is a $z$ clique gadget.
Since $C_1, C_2$ are distinct, there must be a substring ${0'}^{\ell_3}$ located between $C_1$ and $C_2$. Since $C_1$ is linked to a $y$ clique gadget, and since $C_2$ is linked to a $z$ clique gadget, letters in this substring ${0'}^{\ell_3}$ can only be paired up with letters in the substring $0^{\ell_4}$ bordering both ${0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_y(t) {0'}^{\ell_3} \right)$ and ${0'}^{\ell_3} \underset{{t \in \mathcal{C}_k}}{\bigcirc} \left( \textrm{CG}_z(t) {0'}^{\ell_3} \right)$ (imagining $S_G$ as a circular string). However, the existence of a pair linking a letter in $C_5$ (a $y$ clique gadget) and a letter in $C_6$ (a $z$ clique gadget) implies that this $0^{\ell_4}$ cannot be reached from the ${0'}^{\ell_3}$ without a crossing, a contradiction.
\qed
\end{proof}
The following lemma directly follows from Lemma \ref{lem-7} and Lemma \ref{lem-10}.
\begin{lemma}\label{lem-11}
There exists an optimal RNA folding of $S_G$ that belongs to $\mathcal{M}_\alpha \cup \mathcal{M}_\beta \cup \mathcal{M}_\gamma \cup \mathcal{M}_{\alpha, \beta, \gamma}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem-7}, we can restrict our consideration to optimal RNA foldings having Property 1.
Let $A$ be any such optimal RNA folding:
{\bf Case 1:} For each $x \in \{\alpha, \beta, \gamma\}$, there is at most one $x$ clique gadget that is linked to other clique gadgets. Then $A \in \mathcal{M}_{\alpha, \beta, \gamma}$.
{\bf Case 2:} For some $x \in \{\alpha, \beta, \gamma\}$, there are two distinct $x$ clique gadgets that are linked to other clique gadgets. By Lemma~\ref{lem-10}, $ A \in \mathcal{M}_x$.
\qed
\end{proof}
We are now in a position to prove the main lemma in the subsection:
\begin{lemma}\label{lem-12}
There exists an optimal RNA folding of $S_G$ that belongs to $\mathcal{M}_{\alpha, \beta, \gamma}$.
\end{lemma}
\begin{proof}
In view of Lemma~\ref{lem-11}, it suffices to show that for any $A \in \mathcal{M}_\alpha \cup \mathcal{M}_\beta \cup \mathcal{M}_\gamma$, we have $|A| < \text{RNA}(S_G)$.
Let $A \in \mathcal{M}_x$ for some $x \in \{\alpha, \beta, \gamma\}$, let $\{y,z\} = \{\alpha, \beta, \gamma\} \setminus \{x\}$, and let $t_{x,1}, t_{x,2}, t_{y}, t_{z} \in \mathcal{C}_k$ be as in the definition of $\mathcal{M}_x$. Each pair in $A$ falls into one of the following categories:
\begin{itemize}
\item The ones linking a $0'$ in some ${0'}^{\ell_3}$ to a $0$ in some $0^{\ell_4}$. There are exactly $3(|\mathcal{C}_k|+1) \ell_3$ such pairs.
\item The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \not\in \{(x,t_{x,1}),(x,t_{x,2}), (y, t_y), (z,t_z)\}$. Any letter in such a $\textrm{CG}_{u}(t)$ can only be matched to letters within $\textrm{CG}_{u}(t)$ or within some $0^{\ell_4}$. The number of such pairs can be upper bounded by
$(|\mathcal{C}_k| - 2) \max\limits_{t \in \mathcal{C}_k}\text{RNA}\left(0^{\ell_4} \textrm{CG}_{x}(t)\right)
+ (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{y}(t)\right)
+ (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{z}(t)\right)$.
\item The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \in \{(x,t_{x,1}),(x,t_{x,2}), (y, t_y), (z,t_z)\}$. The number of such pairs can be upper bounded by
$\max\limits_{t, t' \in \mathcal{C}_k} \text{RNA}\left(0^{\ell_4} \textrm{CG}_{x}(t) 0^{\ell_4} \textrm{CG}_{y}(t')\right)
+ \max\limits_{t, t' \in \mathcal{C}_k} \text{RNA}$ $\left(0^{\ell_4} \textrm{CG}_{x}(t) 0^{\ell_4} \textrm{CG}_{z}(t')\right) $.
\end{itemize}
Therefore, using Lemma~\ref{lem-4}, we can upper bound $|A|$ as follows:
\begin{itemize}
\item When $x = \alpha$, $|A| \leq m_1 + 2 \ell_2 + 4.2 \ell_1 - \min (\ell_{\textrm{CLG}, 1}, \ell_{\textrm{CNG}, 1})$.
\item When $x = \beta$, $|A| \leq m_1 + 6 \ell_2 + 2.2 \ell_1 - \ell_{\textrm{CLG}, 0} - \ell_{\textrm{CNG}, 0}$.
\item When $x = \gamma$, $|A| \leq m_1 + 6 \ell_2 + 2.2 \ell_1 $.
\end{itemize}
By Lemma~\ref{lem-5}, we always have $|A| < m_1 + m_2 \leq \text{RNA}(S_G)$ (recall that $m_2 \geq 6 \ell_2 + 3 \ell_1$).
\qed
\end{proof}
\subsection{Calculating $\text{RNA}(S_G)$ \label{ss-4}}
In this subsection, we will prove that $\text{RNA}(S_G) = m_1 + m_2$ and finish the proof of Theorem~\ref{thm-2}.
In view of Lemma~\ref{lem-12}, when calculating $\text{RNA}(S_G)$, we can restrict our attention to RNA foldings of $S_G$ in $\mathcal{M}_{\alpha, \beta, \gamma}$. Based on the structural property of RNA foldings in $\mathcal{M}_{\alpha, \beta, \gamma}$, we first reduce the calculation of $\text{RNA}(S_G)$ to the calculation of optimal RNA foldings of much simpler sequences.
\begin{lemma}\label{lem-13}
$\text{RNA}(S_G) \leq m_1 + \max\limits_{t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k} \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$.
\end{lemma}
\begin{proof}
In view of Lemma~\ref{lem-12}, there is an optimal RNA folding of $S_G$ in $\mathcal{M}_{\alpha, \beta, \gamma}$.
For any $A \in \mathcal{M}_{\alpha, \beta, \gamma}$, let $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ be the ones in the definition of $\mathcal{M}_{\alpha, \beta, \gamma}$. Then, each pair in $A$ falls into one of the following categories:
\begin{itemize}
\item The ones linking a $0'$ in some ${0'}^{\ell_3}$ to a $0$ in some $0^{\ell_4}$. There are exactly $3(|\mathcal{C}_k|+1) \ell_3$ such pairs.
\item The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \not\in \{(\alpha,t_{\alpha}),(\beta,t_{\beta}), (\gamma, t_\gamma)\}$. Any letter in such a $\textrm{CG}_{u}(t)$ can only be matched to letters within $\textrm{CG}_{u}(t)$ or within some $0^{\ell_4}$. The number of such pairs can be upper bounded by
$(|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k}\text{RNA}(0^{\ell_4} \textrm{CG}_{\alpha}(t))
+ (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}(0^{\ell_4} \textrm{CG}_{\beta}(t))
+ (|\mathcal{C}_k| - 1) \max\limits_{t \in \mathcal{C}_k} \text{RNA}(0^{\ell_4} \textrm{CG}_{\gamma}(t))$.
\item The ones involving a letter in some $\textrm{CG}_{u}(t)$, where $(u,t) \in \{(\alpha,t_{\alpha}),(\beta,t_{\beta}), (\gamma, t_\gamma)\}$. The number of such pairs can be upper bounded by
$\text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$.
\end{itemize}
In view of Lemma~\ref{lem-4}, $|A| \leq m_1 + \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$. Hence we conclude the proof.
\qed
\end{proof}
The following auxiliary lemma is useful in the later discussion.
\begin{lemma}\label{lem-14}
Let $S = S_1 \circ S_2 \circ S_3 \in \{0,1,0',1'\}^{\ast}$, where $S_2$ is either $1 1'$ or $1' 1$. Then $\text{RNA}(S) = \text{RNA}(S_1 \circ S_3) + 1$.
\end{lemma}
\begin{proof}
It suffices to show that there exists an optimal RNA folding of $S$ such that the $1$ and the $1'$ in $S_2$ are matched.
We first choose any optimal RNA folding $A$ of $S$, and then we show that we can modify $A$ in such a way that the $1$ and the $1'$ in $S_2$ are matched without changing the number of matched pairs.
\begin{itemize}
\item Case: the $1$ and the $1'$ in $S_2$ are already matched. We are done.
\item Case: exactly one of the $1$ and the $1'$ in $S_2$ is matched. We first unmatch it, and then pair up the $1$ and the $1'$. Doing so does not change the number of matched pairs.
\item Case: both the $1$ and the $1'$ in $S_2$ are matched to some other letters. Let the $1$ be matched with $x$, and let the $1'$ be matched with $y$. Removing these two pairs from $A$ and adding $\{x,y\}$ and $\{1,1'\}$ to $A$ does not change the number of matched pairs.
\end{itemize}
\qed
\end{proof}
For any choices of three $k$-cliques $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$, we define:
$$S_{t_{\alpha}, t_{\beta}, t_{\gamma}} = {1}^{\ell_2} \circ S_{t_{\gamma}, t_{\alpha}} \circ {1}^{\ell_2} \circ S_{t_{\alpha}, t_{\beta}} \circ {1'}^{2 \ell_2} \circ S_{t_{\beta}, t_{\gamma}},$$
where
\begin{align*}
S_{t_{\gamma}, t_{\alpha}} &= {0}^{\ell_1} \textrm{CNG}(t_\gamma) p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1},\\
S_{t_{\alpha}, t_{\beta}} &= {0}^{\ell_1} \textrm{CNG}(t_\alpha) p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1},\\
S_{t_{\beta}, t_{\gamma}} &= {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1}.
\end{align*}
$S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$ is simply a cyclic shift of the concatenation of $\textrm{CG}_\alpha(t_\alpha)$, $\textrm{CG}_\beta(t_\beta)$, and $\textrm{CG}_\gamma(t_\gamma)$ after removing the runs of $1$s and $1'$s at the beginning and end of these clique gadgets.
The next lemma (together with Lemma~\ref{lem-13}) reduces the calculation of $\text{RNA}(S_G)$ to the calculation of $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$.
\begin{lemma}\label{lem-15}
$\text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)) = 4 \ell_2 + \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$.
\end{lemma}
\begin{proof}
First of all, we state a few easy observations that will be applied in the proof:
\begin{itemize}
\item By simply matching only the letters in $\textrm{CG}_\alpha(t_\alpha), \textrm{CG}_\beta(t_\beta)$, and $\textrm{CG}_\gamma(t_\gamma)$ (as described in Fig.~\ref{fig-3}), we can infer that $\text{RNA}(0^{\ell_4}\textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)) \geq 6 \ell_2 + 3 \ell_1$.
\item
The total number of $0'$ and $1$ in $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$ is at most $ 6 \ell_2 + 3.1 \ell_1$.
\item The difference between the number of $1$ and the number of $1'$ in $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$ is at most $0.1 \ell_1$.
\end{itemize}
We claim that in any optimal RNA folding $A$ of $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$, no letter within any $0^{\ell_4}$ is matched:
\begin{itemize}
\item Claim: there is no $0'$ within $\textrm{CG}_\beta(t_\beta)$ matched to any $0$ in the two $0^{\ell_4}$ preceding and following $\textrm{CG}_\beta(t_\beta)$. Recall that $\textrm{CG}_\beta(t_\beta) = {1'}^{\ell_2} p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1} {1'}^{2\ell_2} {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {1'}^{\ell_2}$. If there is such a pair, then at least $\ell_2$ of the $1'$ cannot participate in the RNA folding. Therefore, $|A| \leq ( 6 \ell_2 + 3.1 \ell_1) - (\ell_2 - 0.1\ell_1) < \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$.
\item Claim: there is no $0'$ within $\textrm{CG}_\beta(t_\beta)$ matched to any $0$ in the $0^{\ell_4}$ at the beginning of the sequence. Suppose that there is such a pair. Then the $3\ell_2$ occurrences of $1'$ within ${1'}^{2\ell_2}$ in $\textrm{CG}_\alpha(t_\alpha)$ and within the first ${1'}^{\ell_2}$ in $\textrm{CG}_\beta(t_\beta)$ can only be matched to letters in $\textrm{CG}_\alpha(t_\alpha)$. However, the number of $1$ in $\textrm{CG}_\alpha(t_\alpha)$ is at most $2.1 \ell_2$, so at least $0.9 \ell_2$ of the $1'$ are not matched. Therefore, $|A| \leq ( 6 \ell_2 + 3.1 \ell_1) - 0.9 \ell_2 < \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$.
\item Claim: there is no $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ matched to any $0$ in any $0^{\ell_4}$. Suppose that there is such a pair. We show below that at least $\ell_2$ of the $1'$ cannot participate in the RNA folding, so $|A| \leq ( 6 \ell_2 + 3.1 \ell_1) - \ell_2 < \text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))$.
\begin{itemize}
\item Case: a $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ is matched to a $0$ in the first $0^{\ell_4}$. Then the ${1'}^{2 \ell_2}$ in the beginning of $\textrm{CG}_\alpha(t_\alpha)$ cannot participate in the RNA folding.
\item Case: a $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ is matched to a $0$ in the second $0^{\ell_4}$. Then letters in the two ${1}^{\ell_2}$ in $\textrm{CG}_\alpha(t_\alpha)$ can only be matched to letters within $p({\textrm{CLG}(t_\alpha)}^R)$. Hence at least $2 \ell_2 - 0.1 \ell_1$ of the $1$ are unmatched. Since the difference between the number of $1$ and the number of $1'$ in $0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma)$ is at most $0.1 \ell_1$, at least $2 \ell_2 - 0.2 \ell_1 > \ell_2$ of the $1'$ cannot participate in the RNA folding.
\item Case: a $0'$ within $\textrm{CG}_\alpha(t_\alpha)$ is matched to a $0$ in the third $0^{\ell_4}$. Then all $1'$ within $\textrm{CG}_\beta(t_\beta)$ can only be matched to $1$ within $\textrm{CG}_\alpha(t_\alpha)$. It is obvious that the number of $1'$ within $\textrm{CG}_\beta(t_\beta)$ exceeds the number of $1$ within $\textrm{CG}_\alpha(t_\alpha)$ by at least $\ell_2$, so at least $\ell_2$ of the $1'$ cannot participate in the RNA folding.
\end{itemize}
\end{itemize}
Therefore,
\begin{align*}
&\text{RNA}(0^{\ell_4} \textrm{CG}_\alpha(t_\alpha) 0^{\ell_4} \textrm{CG}_\beta(t_\beta) 0^{\ell_4} \textrm{CG}_\gamma(t_\gamma))\\
&\hspace{0.7cm} = \text{RNA}(\textrm{CG}_\alpha(t_\alpha)\textrm{CG}_\beta(t_\beta) \textrm{CG}_\gamma(t_\gamma))\\
&\hspace{0.7cm} = \text{RNA}(
{1'}^{2 \ell_2} p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\alpha) {1}^{\ell_2}
{1'}^{\ell_2} p({\textrm{CLG}(t_\beta)}^R){0'}^{\ell_1} {1'}^{2\ell_2} & (\text{by definition})\\
&\hspace{1.2cm} {0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {1'}^{\ell_2}
{1}^{\ell_2} {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) {1}^{2\ell_2}
)\\
&\hspace{0.7cm} = \text{RNA}(
{1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) {1}^{2\ell_2}
{1'}^{2 \ell_2} p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\alpha){1}^{\ell_2}
{1'}^{\ell_2} & (\text{cyclic shift})\\
&\hspace{1.2cm} p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1} {1'}^{2\ell_2}
{0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {1'}^{\ell_2}
{1}^{\ell_2} {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1}
)\\
&\hspace{0.7cm} = 4 \ell_2 + \text{RNA}(
{1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) p({\textrm{CLG}(t_\alpha)}^R) {0'}^{\ell_1} {1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\alpha)p({\textrm{CLG}(t_\beta)}^R) {0'}^{\ell_1} & (\text{Lemma~\ref{lem-14}})\\
&\hspace{1.2cm} {1'}^{2\ell_2}
{0'}^{\ell_1} p(\textrm{CNG}(t_\beta)) {\textrm{CLG}(t_\gamma)}^R {0}^{\ell_1}
)\\
&\hspace{0.7cm} = 4 \ell_2 + \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}).
\end{align*}
For the third equality, we just move ${1}^{\ell_2} {0}^{\ell_1} \textrm{CNG}(t_\gamma) {1}^{2\ell_2}$ from the end of the sequence to the beginning.
The fourth equality follows by applying Lemma~\ref{lem-14} iteratively (which removes ${1}^{2\ell_2} {1'}^{2\ell_2}$, ${1}^{\ell_2} {1'}^{\ell_2}$, and ${1'}^{\ell_2} {1}^{\ell_2}$).
\qed
\end{proof}
By calculating the exact value of $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$, together with several previous lemmas, the next lemma shows that $\text{RNA}(S_G) = m_1 + m_2$.
\begin{lemma}\label{lem-16}
$\text{RNA}(S_G) = m_1 + m_2$.
\end{lemma}
\begin{proof}
In view of Lemmas~\ref{lem-5}, \ref{lem-13}, and \ref{lem-15}, it suffices to show that
\begin{align*}
\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}) \ =\ 2 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0
- \frac{1}{2}\big(
&\delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) +
\delta_{\text{LCS}}(\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma))\\
&+ \delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma))
\big).
\end{align*}
First of all, it is easy to observe that $\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}}) \geq 2 \ell_2 + 3 \ell_1$, so for any optimal RNA folding $A$ (of $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$), we must have $|A| \geq 2 \ell_2 + 3 \ell_1$.
We claim that in any optimal RNA folding $A$ of $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$, the following two statements are true:
\begin{itemize}
\item For each of the two ${1}^{\ell_2}$, there is a $1$ that is matched to a $1'$ in the ${1'}^{2\ell_2}$.
\item For each of the $S_{t_{\gamma}, t_{\alpha}}, S_{t_{\alpha}, t_{\beta}}, S_{t_{\beta}, t_{\gamma}}$, there is a pair linking a $0'$ in its ${0'}^{\ell_1}$ and a $0$ in its ${0}^{\ell_1}$.
\end{itemize}
For the first statement, suppose that one ${1}^{\ell_2}$ does not have any letter matched to a $1'$ in the ${1'}^{2\ell_2}$. It is easy to observe that the number of $1'$ in $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$ that do not belong to ${1'}^{2\ell_2}$ is at most $0.1 \ell_1$. Therefore, $|A|$ is at most the total number of $0'$ plus the total number of $1$ minus $(\ell_2 - 0.1 \ell_1)$. By a simple calculation, $|A| \leq 3.1\ell_1 + (2 \ell_2 + 0.1 \ell_1) - (\ell_2 - 0.1 \ell_1) = \ell_2 + 3.3 \ell_1 < \text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})$. Therefore, we conclude the first statement.
For the second statement, suppose that there is an $S \in \{S_{t_{\gamma}, t_{\alpha}}, S_{t_{\alpha}, t_{\beta}}, S_{t_{\beta}, t_{\gamma}}\}$ that has no pair linking a $0'$ in its ${0'}^{\ell_1}$ and a $0$ in its ${0}^{\ell_1}$. Due to the first statement, any pairing involving ${0'}^{\ell_1}$ and ${0}^{\ell_1}$ is confined to be within $S$. Therefore, the number of pairs involving letters in $S$ is at most $|S| - 2\ell_1 \leq 0.1 \ell_1$. This is certainly not optimal, since simply matching all $0'$ in ${0'}^{\ell_1}$ to all $0$ in ${0}^{\ell_1}$ gives us $\ell_1$ pairs. Therefore, we conclude the second statement.
\medskip
We can infer from the above two statements that for each $S \in \{S_{t_{\gamma}, t_{\alpha}}, S_{t_{\alpha}, t_{\beta}}, S_{t_{\beta}, t_{\gamma}}\}$, letters within $S$ are only matched to letters within $S$ in any optimal RNA folding of $S_{t_{\alpha}, t_{\beta}, t_{\gamma}}$.
As a result,
\begin{align*}
\text{RNA}(S_{t_{\alpha}, t_{\beta}, t_{\gamma}})
& = \text{RNA}({1}^{\ell_2} \circ {1}^{\ell_2} \circ {1'}^{2\ell_2}) + \text{RNA}(S_{t_{\gamma}, t_{\alpha}}) + \text{RNA}(S_{t_{\alpha}, t_{\beta}}) +\text{RNA}(S_{t_{\beta}, t_{\gamma}} )\\
&= 2 \ell_2 + 3 \ell_1 + \text{RNA}(\textrm{CNG}(t_\gamma) p({\textrm{CLG}(t_\alpha)}^R))+ \text{RNA}(\textrm{CNG}(t_\alpha) p({\textrm{CLG}(t_\beta)}^R)) \\
& \hspace{0.4cm} + \text{RNA}(p(\textrm{CNG}(t_\beta)) {\textrm{CLG}(t_\gamma)}^R)\\
&=2 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0
- \frac{1}{2} \big(
\delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) +
\delta_{\text{LCS}}(\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma))\\
& \hspace{0.4cm} +
\delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma))
\big).
\end{align*}
\qed
\end{proof}
We are ready to prove the main theorem of the paper:
\bigskip
\noindent{\bf Remainder of Theorem~\ref{thm-2}.}
{\em If the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time.}
\begin{proof}
Given a graph $G$, we construct the string $S_G$. According to Lemma~\ref{lem-1}, \ref{lem-length}, the length of $S_G$ is $\mathcal{O}(k^2 n^{k+1} \log (n))$, and $S_G$ can be constructed in time $\mathcal{O}\left(k^2 n^{k+1} \log (n)\right)$.
We let $t_{\alpha}, t_{\beta}, t_{\gamma} \in \mathcal{C}_k$ be chosen such that $Q = \delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\beta)) +
\delta_{\text{LCS}} (\textrm{CLG}(t_\alpha), \textrm{CNG}(t_\gamma)) +
\delta_{\text{LCS}} (\textrm{CLG}(t_\beta), \textrm{CNG}(t_\gamma))$ is minimized. By Lemma~\ref{lem-3}, there exists a number $c_1$ such that:
\begin{itemize}
\item the number $c_1$ depends only on $n,k$, and $Q \geq 3c_1$.
\item If $Q = 3c_1$, then each of $t_{\alpha} \cup t_{\beta}$, $t_{\alpha} \cup t_{\gamma}$, $t_{\beta} \cup t_{\gamma}$ is a $2k$-clique, which in turn is equivalent to ``$t_{\alpha} \cup t_{\beta} \cup t_{\gamma}$ is a $3k$-clique''.
\item If $Q > 3c_1$, then the graph has no $3k$-clique.
\end{itemize}
According to Lemma~\ref{lem-16}, $\text{RNA}(S_G) = m_1 + m_2$. By its definition, $m_1$ only depends on $n,k$, and $m_2 = 6 \ell_2 + 3 \ell_1 + \frac{3}{2} \ell_0 - \frac{Q}{2}$. Hence we are able to decide whether $G$ has a $3k$-clique from the value of $\text{RNA}(S_G)$, which can be calculated in time $T\left(\mathcal{O}\left(k^2 n^{k+1} \log (n)\right)\right) = \mathcal{O}\left(T\left(k^2 n^{k+1} \log (n)\right)\right)$.
Note that $k$ is treated as a constant instead of an input parameter.
\qed
\end{proof}
\section{Hardness of Dyck Edit Distance Problem}
In this section, we shift our focus to the Dyck edit distance problem. We present a simple reduction from the RNA folding problem (with alphabet size 4) to the Dyck edit distance problem (with alphabet size 10). This leads to a much simplified and improved proof of a conditional lower bound for Dyck edit distance based on the conjectured hardness of $k$-clique (the previous proof, presented in \cite{ABV15}, requires 48 symbols).
\bigskip
\noindent{\bf Dyck Edit Distance.}
Given $S \in (\Sigma \cup \Sigma')^n$, the goal of the Dyck edit distance problem is to find a minimum number of edit operations (insertion, deletion, and substitution) that transform $S$ into a string in the Dyck context free language.
Given $\Sigma$ and its corresponding $\Sigma'$, the Dyck context free language is defined by the grammar with the following production rules: $\mathbf{S} \rightarrow \mathbf{SS}$, $\forall x \in \Sigma, \mathbf{S} \rightarrow x\mathbf{S}x'$, and $\mathbf{S} \rightarrow \epsilon$ (the empty string).
An alternative definition of the Dyck edit distance problem is described as follows:
\medskip
\noindent Given a sequence $S \in (\Sigma \cup \Sigma')^n$, find a minimum cost set $A \subseteq \{(i,j) | 1 \leq i < j \leq n \}$ satisfying the following conditions:
\begin{itemize}
\item $A = A_M \uplus A_S$ has no crossing pair.
\item $A_M$ contains only pairs of the form $(x,x')$, $x \in \Sigma$ (i.e. for all $(i,j)\in A_M$, we have $S[i]=x$, $S[j]=x'$, for some $x \in \Sigma$). $A_M$ corresponds to the set of matched pairs.
\item $A_S$ does not contain any pair of the form $(y',x)$, $x,y \in \Sigma$ (i.e. for all $(i,j)\in A_S$ we have either $S[i] \in \Sigma$ or $S[j] \in \Sigma'$). $A_S$ corresponds to the set of pairs that can be fixed by one substitution operation per each pair.
\item Let $D$ be the set of letters in $S$ that do not belong to any pair in $A$. Each letter in $D$ requires one deletion/insertion operation to fix.
\end{itemize}
The cost of $A$ is then defined as $|A_S|+|D|$, and the Dyck edit distance of the string $S$ is the cost of a minimum cost set meeting the above conditions.
\bigskip
The Dyck edit distance problem can be thought of as an asymmetric version of the RNA folding problem (in RNA folding, the aligned pair is allowed to be either $(x,x')$ or $(x',x)$, $x \in \Sigma$) that also handles substitution (in addition to deletion and insertion). Though these two problems look similar, they can behave quite differently. For example, in Section~\ref{sec.intro} we describe a simple reduction from LCS to RNA folding; since LCS is the edit distance problem without substitution, one may hope that the same reduction reduces the edit distance problem to the Dyck edit distance problem. However, this is not true, due to the following counterexample: both strings $ababa$ and $abbaa$ require at least 4 edit operations to transform into the string $caaac$; but the Dyck edit distance of $ababac'a'a'a'c'$ is 4 (by deleting all $b,c'$), while the Dyck edit distance of $abbaac'a'a'a'c'$ is 3 (by deleting all $c'$ and substituting the second $b$ with $b'$).
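The alternative definition above translates directly into an interval dynamic program over substrings: a block $S[i..j]$ is fixed either by leaving $S[i]$ unpaired, by pairing the two endpoints (at cost $0$ for a matched pair, $1$ for a substitutable pair, and not at all for the forbidden form $(y',x)$), or by splitting into two independent blocks. The following is a minimal Python sketch of this cubic-time recursion; it is our illustration, not code from the literature, and the encoding (lowercase letters for $\Sigma$, uppercase for $\Sigma'$) is an assumption. Run on the two counterexample strings above, it reproduces the stated distances of $4$ and $3$.
\begin{verbatim}
from functools import lru_cache

def dyck_edit_distance(s: str) -> int:
    # Lowercase letters stand for Sigma, uppercase for Sigma';
    # e.g. a' is written 'A'.
    def pair_cost(x, y):
        if x.islower() and y == x.upper():
            return 0        # matched pair (x, x'): goes into A_M
        if x.islower() or y.isupper():
            return 1        # substitutable pair: goes into A_S
        return None         # the forbidden form (y', x)

    @lru_cache(maxsize=None)
    def d(i, j):
        # Minimum cost |A_S| + |D| for the substring s[i..j].
        if i > j:
            return 0
        if i == j:
            return 1        # a lone letter must go into D
        best = 1 + d(i + 1, j)                       # s[i] unpaired
        c = pair_cost(s[i], s[j])
        if c is not None:
            best = min(best, c + d(i + 1, j - 1))    # pair the endpoints
        for k in range(i, j):                        # split into two blocks
            best = min(best, d(i, k) + d(k + 1, j))
        return best

    return d(0, len(s) - 1)

print(dyck_edit_distance("ababaCAAAC"))  # 4, as stated above
print(dyck_edit_distance("abbaaCAAAC"))  # 3, as stated above
\end{verbatim}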
Intuitively, the substitution operation makes Dyck edit distance more complicated than RNA folding. Indeed, the proof of the same conditional lower bound as Theorem~\ref{thm-1} for the Dyck edit distance problem in \cite{ABV15} requires a bigger alphabet size (48 instead of 36) and is longer.
Next, we prove Theorem~\ref{thm-3} by demonstrating a simple reduction from the RNA folding problem to the Dyck edit distance problem with alphabet size 10. This improves upon the hardness result in \cite{ABV15}, and justifies the intuition that Dyck edit distance is a harder problem than RNA folding.
\bigskip
\noindent{\bf Proof of Theorem~\ref{thm-3}.} {\em If Dyck edit distance problem on sequences of length $n$ with alphabet size 10 can be solved in $T(n)$ time, then the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $\mathcal{O}(T(n))$ time.}
\begin{proof}
For notational simplicity, we let the alphabet for the RNA folding problem be $\Sigma \cup \Sigma'= \{0,0',1,1'\}$ (instead of $\{A,C,G,U\}$). Let $S$ be any string in ${(\Sigma \cup \Sigma')^n}$. We define the string $S_{\text{Dyck}}$ as the result of applying the following operations on $S$:
\begin{itemize}
\item Replace each letter $0$ with the sequence $S_{0} = aeb'aeb'$.
\item Replace each letter $0'$ with the sequence $S_{0'} = bba'a'$.
\item Replace each letter $1$ with the sequence $S_{1} = ced'ced'$.
\item Replace each letter $1'$ with the sequence $S_{1'} = ddc'c'$.
\end{itemize}
It is clear that $S_{\text{Dyck}}$ is a sequence of length at most $6n$ on the alphabet $\{a,b,c,d,e\} \cup \{a',b',c',d',e'\}$, though the letter $e'$ is not used. We claim that the Dyck edit distance of $S_{\text{Dyck}}$ is $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$.
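Before proving the claim, we note that it can be checked mechanically on small inputs. The sketch below (again illustrative Python, with $0,0',1,1'$ encoded as \texttt{0}, \texttt{O}, \texttt{1}, \texttt{I}) implements the substitution together with a Nussinov-style dynamic program for $\text{RNA}(S)$, and compares $|S_{\text{Dyck}}|/2 - 2\text{RNA}(S)$ against the \texttt{dyck\_edit\_distance} function from the previous sketch on small random strings.
\begin{verbatim}
import random
from functools import lru_cache

SUB = {'0': "aeBaeB", 'O': "bbAA",     # S_0 and S_0'
       '1': "ceDceD", 'I': "ddCC"}    # S_1 and S_1'

def to_dyck(s):
    # The substitution S -> S_Dyck from the proof (primes uppercase).
    return "".join(SUB[ch] for ch in s)

def rna(s):
    # Maximum non-crossing matching with pairs {x, x'} in either order.
    comp = {('0', 'O'), ('O', '0'), ('1', 'I'), ('I', '1')}

    @lru_cache(maxsize=None)
    def f(i, j):
        if i >= j:
            return 0
        best = f(i + 1, j)                   # s[i] unmatched
        for k in range(i + 1, j + 1):        # s[i] matched with s[k]
            if (s[i], s[k]) in comp:
                best = max(best, 1 + f(i + 1, k - 1) + f(k + 1, j))
        return best

    return f(0, len(s) - 1)

# Requires dyck_edit_distance from the earlier sketch.
for _ in range(100):
    s = "".join(random.choice("0O1I") for _ in range(random.randint(1, 8)))
    sd = to_dyck(s)
    assert dyck_edit_distance(sd) == len(sd) // 2 - 2 * rna(s)
\end{verbatim}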
\medskip
First, we show that the Dyck edit distance of $S_{\text{Dyck}}$ is at most $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$. Given an optimal RNA folding of $S$, we construct a crossing-free matching $A$ with cost $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$ as follows:
\medskip
\noindent {For matched pairs in the RNA folding of $S$:}
\begin{itemize}
\item For each matched pair $(0,0')$ in the RNA folding of $S$, we add two pairs $(a,a'),(a,a')$ to $A_M$, and add three pairs $(e,b'),(e,b'),(b,b)$ to $A_S$ in its corresponding pair of substrings $(S_0 = \mathbf{a}(eb')\mathbf{a}(eb'), S_{0'} = (bb)\mathbf{a'a'})$ in $S_{\text{Dyck}}$.
\item For each matched pair $(0',0)$ in the RNA folding of $S$, we add two pairs $(b,b'),(b,b')$ to $A_M$, and add three pairs $(a',a'),(a,e),(a,e)$ to $A_S$ in its corresponding pair of substrings $(S_{0'} = \mathbf{bb}(a'a'), S_{0} = (ae)\mathbf{b'}(ae)\mathbf{b'})$ in $S_{\text{Dyck}}$.
\item Similarly, for each matched pair $(1,1'),(1',1)$ in the RNA folding of $S$, we can add two pairs to $A_M$ and three pairs to $A_S$.
\end{itemize}
\noindent {For unmatched letters in $S$:}
\begin{itemize}
\item For each unmatched letter $0$ in $S$, we add three pairs $(a,b'),(e,b'),(a,e)$ to $A_S$ in its corresponding substring $S_0 = (a(eb')(ae)b')$. Similarly, for each unmatched letter $1$, we can add three pairs to $A_S$.
\item For each unmatched letter $0'$ in $S$, we add two pairs $(b,b),(a',a')$ to $A_S$ in its corresponding substring $S_{0'} = (bb)(a'a')$. Similarly, for each unmatched letter $1'$, we can add two pairs to $A_S$.
\end{itemize}
The set $A_M$ has size $2\text{RNA}(S)$, the set $A_S$ has size $\frac{|S_{\text{Dyck}}| - 4\text{RNA}(S)}{2}$, and $D$ is an empty set. Therefore, the cost of $A$ is $\frac{|S_{\text{Dyck}}| - 4\text{RNA}(S)}{2} = \frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$.
\medskip
Second, we show that the Dyck edit distance of $S_{\text{Dyck}}$ is at least $\frac{|S_{\text{Dyck}}|}{2} - 2\text{RNA}(S)$. Given a crossing-free matching $A$ (on the string $S_{\text{Dyck}}$) of cost $C$, we recover an RNA folding of $S$ that has $\geq \frac{|S_{\text{Dyck}}|}{4} - \frac{C}{2}$ number of matched pairs.
We build a multi-graph $G=(V,E)$ such that $V$ is the set of all substrings $S_0, S_{0'}, S_1, S_{1'}$ that constitute $S_{\text{Dyck}}$, and the number of edges between two substrings in $V$ is the number of pairs in $A_M$ linking letters between these two substrings. Note that $|V|=n$ and $|E|=|A_M|$. It is clear that $C \geq \frac{|S_{\text{Dyck}}| - 2|E|}{2}$ (since $|A_S|+|D| \geq \frac{|S_{\text{Dyck}}| - 2|A_M|}{2} = \frac{|S_{\text{Dyck}}| - 2|E|}{2}$). Therefore, we are done if we can recover an RNA folding of size $\geq \frac{|E|}{2}$, since $\frac{|E|}{2} \geq \frac{|S_{\text{Dyck}}|}{4} - \frac{C}{2}$.
We observe the following:
\begin{itemize}
\item $G$ has degree at most 2 (due to our definition of $S_0, S_{0'}, S_1, S_{1'}$, at most two letters in such a substring can participate in pairings of the form $(x,x')$, $x \in \{a,b,c,d\}$, without crossing).
\item In the graph $G$, any edge must either link an $S_0$ with an $S_{0'}$ or link an $S_1$ with an $S_{1'}$ (due to our definition of $S_0, S_{0'}, S_1, S_{1'}$, any pairing of the form $(x,x')$, $x \in \{a,b,c,d\}$, must be made between $S_0, S_{0'}$ or between $S_1, S_{1'}$).
\item $G$ does not contain any cycle of odd length (due to the above observation).
\end{itemize}
In view of the above (second) observation, a (graph-theoretic) matching $M \subseteq E$ of $G$ naturally corresponds to a (size $|M|$) RNA folding of $S$: for each edge (a pair of substrings in $S_{\text{Dyck}}$) in $M$, we add its corresponding pair of letters in $S$ to the RNA folding. Since a maximum matching has size $\geq \frac{|E|}{2}$ in a graph of maximum degree 2 without odd cycles, we conclude the proof.
\qed
\end{proof}
We note that in the case where substitution is not allowed, the letter $e$ in the above proof is not needed, and this lowers the required alphabet size to 8.
The reason that the letter $e$ is essential for the above proof to work is explained as follows: Suppose that $e$ is removed. For each matched pair $(0,0')$ in the RNA folding of $S$, after adding two pairs $(a,a'),(a,a')$ to $A_M$, the letter $b'$ between two $a$ in $S_0=ab'ab'$ cannot participate in any matching anymore. Hence some letters will be in $D$ according to our construction of the crossing-free matching $A$. This indicates that our construction may not be optimal. Indeed, for the string $(0 0' 0')_{\text{Dyck}} = ab'ab'bba'a'bba'a'$ (after removing $e$), if we insist on matching the two pairs $(a,a'),(a,a')$ in $\mathbf{a}b'\mathbf{a}b'bb\mathbf{a'a'}bba'a'$, then the cost will be at least $5$ (three substitutions and two deletions are needed). However, there is a solution that uses only 4 substitutions: $\mathbf{a}(b'\mathbf{a}(b'(bb)a')\mathbf{a'}(bb)a')\mathbf{a'}$.
\section{Conclusion and Future Directions}
In this paper we presented a hardness result for the RNA folding problem with alphabet size 4 and demonstrated a reduction from the RNA folding problem to the Dyck edit distance problem. A few open problems remain:
\begin{itemize}
\item There are still a few cases where the state-of-the-art conditional lower bound requires a certain alphabet size to work (e.g. Theorem~\ref{thm-3}, Corollary~\ref{cor-1}, and the hardness result for dynamic time warping in \cite{BK15}). Is it possible to improve them using our technique or other ideas?
\item Is it possible to reduce Dyck edit distance problem to RNA folding problem?
\item Besides the classic RNA folding problem, several problems in bioinformatics admit similar formulations (see e.g. \cite{ABHL12,FG12}). It would be interesting to see whether the techniques presented in this paper (and in \cite{ABV15,BK15}) can be adapted to give meaningful lower bounds for other problems.
\end{itemize}
\bigskip
\noindent{\bf Acknowledgements.} The author would like to thank Seth Pettie for helpful discussions and comments.
\section{Introduction}
The Super $\tau$-Charm facility (STCF) is a future electron-positron collider, operating in the center-of-mass energy region of 2 to 7\,GeV with a luminosity greater than $0.5 \times 10^{35} \rm \, cm^{-2}s^{-1}$.
The physics goals of the STCF are to explore the matter-antimatter asymmetry (charge-parity violation), the internal structure of hadrons, the nature of non-perturbative strong interactions, exotic particles, and physics beyond the Standard Model.
Physics at the STCF places demanding requirements on Particle Identification (PID), in particular on hadron PID at momenta up to $p=2\,\rm {GeV}/c$.
The Ring Imaging Cherenkov (RICH) detector provides excellent PID capability over a wide momentum range, and the concept has been widely used in high-energy experiments such as CLEOc\cite{bib:CLEOc}, DELPHI\cite{bib:DELPHI}, ALICE\cite{bib:AliceTDR}, and COMPASS\cite{bib:CompassRich-1}.
For the STCF barrel PID, the detector is required to operate in a high-luminosity environment, with a material budget of less than $20\%$ and a detector thickness of less than $20\rm\,cm$.
In order to meet these requirements, a RICH detector based on a hybrid micro-pattern gaseous detector (MPGD)\cite{bib:MPGD} with an approximate focusing design is adopted as the baseline.
The STCF RICH detector consists of a layer of CsI coated thick gas electron multipliers (THGEM)\cite{bib:THGEM} and a resistive MicroMegas (MM)\cite{bib:MM}
on a pad segmented anode.
The MPGD detector has been demonstrated to operate at a high counting rate with low ion back-flow while maintaining a high effective gain\cite{bib:MPGD, bib:MPGD-paper2}.
The CsI coated THGEM acts as a reflective photo-cathode.
The photoelectrons are pre-amplified by THGEM and transferred to MicroMegas.
The total gain of the prototype RICH PID detector needs to reach $\approx10^5$ to obtain a sufficient single-photon detection efficiency.
The ions generated in the multiplication are accelerated by the electric fields and eventually bombard the photo-cathode; this is called Ion Back-Flow (IBF) and may cause severe aging problems.
This hybrid design has been demonstrated to show an excellent IBF suppression \cite{bib:IBFsuppress}, since the majority of the ions obtained from the multiplication are trapped by the MicroMegas stage and further suppressed by the THGEM layer.
For the momenta of interest at the STCF, liquid perfluorohexane C$_6$F$_{14}$ held in a quartz container is chosen as the Cherenkov radiator.
In order to evaluate the performance of the RICH PID detector, a prototype with quartz as the radiator was built.
This work presents the performance of the hybrid MPGD prototype targeting the RICH PID detector.
The prototype and its performance are described in Section 2, and the beam test setup in Section 3. Section 4 presents the reconstruction and analysis of the system performance, including the MC simulation, and the conclusions are given in Section 5.
\section{The hybrid MPGD based RICH prototype}
To fulfill the PID requirements of the future STCF experiment, a similar RICH prototype has been designed and built, except that the radiator is pure quartz instead of liquid perfluorohexane.
The active area of the RICH prototype is about $16\, \rm cm \times 16\,cm$.
Fig.\ref{RICH} shows the prototype and the internal structure sketch:
\begin{itemize}
\item A $10 \, \rm mm$-thick quartz radiator, both sides of which are polished; the transparency is about $60\%$ at a wavelength of $180\rm \, nm$. As illustrated in the sketch, part of the Cherenkov radiation may be totally internally reflected inside the quartz; the incident angle thus needs to be larger than $30^{\circ}$ so that the remaining Cherenkov radiation can reach the CsI photo-cathode plane.
\item A light propagation region of $88.2 \, \rm mm$ bounded by the quartz radiator and the drift mesh. The drift mesh is made of $75\,\mu$m diameter gold-plated tungsten wires, placed $4.8\,\rm mm$ away from the CsI coated THGEM, and biased to a suitable voltage to maximize the extraction and collection efficiency of the converted photo-electrons\cite{bib:THGEM_proper,bib:THGEM_LHB}.
\item A CsI coated THGEM layer for UV photon conversion. The THGEM is a standard Printed Circuit Board (PCB), manufactured with mechanically drilled hole patterns. The THGEM selected for this prototype is $0.4$\,mm thick, with $0.4$\,mm diameter holes at $0.8$\,mm pitch and a $50\,\mu$m rim\cite{bib:THGEM_MiniRim}.
\item A MicroMegas layer built on a pad-segmented anode. The MicroMegas is produced using thermal bonding technology. It is made of a stainless steel mesh of 400 lines per inch (LPI) with $19\,\mu$m wires, stretched over the segmented readout anode. The gap of the MicroMegas is $100\,\mu$m\cite{bib:MM_ThermalBonding}.
\item A germanium coated PCB as the readout anode. The anode is segmented into 1024 square pads of $5\times5$\,mm$^2$ with $4.8$\,mm pitch, coated with a $400\,$nm germanium layer. The sheet resistivity is about $100\,$M$\Omega$/sq\cite{bib:MM_GeAnode}.
\end{itemize}
\begin{figure}[]
\centering
\includegraphics[height=5cm,width=12cm]{RICH_2.pdf}
\caption{Overview of the RICH prototype. Left: the assembled RICH prototype. Right: sketch of the prototype.}
\label{RICH}
\end{figure}
The photo-electron generated at the CsI coated THGEM surface is guided into one of the holes, pre-amplified by a small factor of $\approx10$, and drifted across the $2\,$mm gap to the MicroMegas, where the main multiplication occurs. Fig.~\ref{fig:gain} shows the typical gain achieved in Ar:CO$_2$=$97$:$3$ with a single MicroMegas (in red) and with the THGEM combined with the MicroMegas (in blue). For the latter setup, the MicroMegas voltage was fixed at $530\rm \, V$, which provides an effective gain of about $10^4$, while the THGEM voltage $\Delta$V$_{\rm{THGEM}}$ was varied from $800$\,V to $1000$\,V. An effective gain of up to $10^5$ can be achieved with stable operation for the hybrid combination, and the ion back-flow of this setup has been demonstrated to be less than $0.1\%$\cite{bib:MM}.
\begin{figure}[h]
\centering
\includegraphics[height=7cm,width=7cm]{gain-eps-converted-to.pdf}
\caption{Effective gain of the MicroMegas and of the hybrid MicroMegas+THGEM versus high voltage. For the hybrid setup, the MicroMegas voltage was fixed at $530$\,V and the THGEM voltage was varied.}
\label{fig:gain}
\end{figure}
\section{The Experiment Setup}
The experiments were carried out at the DESY beam line, where an electron beam with energy up to $6.3\,$GeV crosses the primary target.
The primary targets are $7\,\mu$m thick carbon fibers, at which Bremsstrahlung photons are produced.
These photons travel towards a secondary target, also called the conversion target, consisting of metal sheets of a few mm thickness, and produce electron/positron pairs.
A dipole magnet placed behind the conversion target then selects the particle type and the momentum.
During the test, the momentum of the electron beam was set to $5$\,GeV/c and the beam was collimated within a $1\times1$\,cm$^2$ area.
\begin{figure}[]
\centering
\includegraphics[width=\linewidth,height=9cm]{RICH_DESY.pdf}
\caption{Schematic view of the beam line. From left to right: the two scintillators used for the trigger, the three MicroMegas composed the tracker system, and the RICH prototype rotated with an incident angle $\theta$.}
\label{beam}
\end{figure}
Fig.~\ref{beam} shows the experimental setup. The trigger signal was generated by the two-fold time coincidence of plastic scintillator paddles with photo-multiplier tube readout, placed upstream in the beam with a sensitive area of $1\times2$\,cm$^2$.
The trigger rate was about $20\,$Hz during the run and was distributed to each system by the Trigger Logic Unit (TLU)\cite{bib:tlu}.
The particle tracking system comprised three ``back-to-back'' MicroMegas detectors. The total sensitive area of each detector was $15\times15$\,cm$^2$; only the central area of $5\times5$\,cm$^2$ was read out, with $400$\,$\mu$m pitch X/Y strips.
The trackers were operated with a non-flammable gas mixture of Ar/CO$_2$=97:3, and the average gain was close to $10^4$.
After installation, the onsite laser system was used to measure the global alignment of the tracking detectors.
Then the inter-module alignment was extracted using fitted tracks.
The RICH prototype chamber was placed along the beam line between the tracker elements.
The prototype could be rotated with respect to the direction of the electron beam to scan different incident angles.
During the experiments, a scan from $30^\circ$ to $45^\circ$ in $5^\circ$ steps was performed.
For each scan, the prototype was translated to ensure that both the ionization signal and the Cherenkov photo-electron signal were recorded.
The prototype was flushed with Ar/CO$_2$=97:3.
The oxygen and water contamination rates of the flowing gas were monitored by an oxygen analyzer and a dew-point hygrometer.
The RICH signals, as well as those from the tracker system, were read out by the AGET based Front-End Electronics (FEE)\cite{bib:AGET_1} and Data Collection Modules (DCMs)\cite{bib:DCM}.
The data acquisition was triggered by the scintillators coincidence signal.
For each triggered event, the waveform (sampled in $512$ bins of $25\,\mu$s each) of each channel was stored.
This occurred only for signals crossing a threshold which was set individually for each channel.
The online data acquisition software was used to store the synchronized data of each system on a PC for further analysis.
The AGET chip, originally designed for the GEM-based time projection chambers (TPCs) used in nuclear physics experiments\cite{bib:TPC}, has a wide dynamic range and provides high-precision time and charge measurements.
It integrates a charge-sensitive amplifier, a filter shaping circuit, a discriminator, and a switched capacitor array (SCA).
The prototype output signal comprised a fast component from the drift of the avalanche electrons and a slow one arising from the motion of the avalanche ions.
The latter contributed the most, and the signal trailing edge lasted about $100\,$ns.
Therefore, the FEE was operated with a $120\,$fC dynamic range, a $1\,\mu$s shaping time and a $100\,$MHz sampling frequency during the test.
Figure~\ref{fig:MMSig} shows the output waveforms recorded from the incident beam ionization (a) and from a Cherenkov photo-electron (b).
\begin{figure}[]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{IonSig-eps-converted-to.pdf}
\label{fig:TotSig_beam}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{Ph2Sig-eps-converted-to.pdf}
\label{fig:TotSig_MC}
\end{minipage}
}
\caption{\label{fig:MMSig} The recorded waveform from (a) incident electron beam (b) Cherenkov photo-electron.}
\end{figure}
The DCMs contained multiple $1\,$Gbps optical fiber serial link interfaces, which allowed $1024$ channels to be read out; the system can be scaled up by configuring one DCM as a master and the other DCMs in slave mode.
Two DCMs were used during the experiments.
One was in master mode for the RICH prototype with $1024$ channels read out, and the other was in slave mode for the tracker system with $3\times256$ channels read out.
Figure~\ref{fig:RMS} shows the equivalent input noise charge, typically around $0.4\,$fC and $0.2\,$fC RMS (root mean square) for the tracker and the RICH, respectively.
\begin{figure}[]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{Tra_RMS-eps-converted-to.pdf}
\label{fig:Tra_RMS}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{RICH_RMS-eps-converted-to.pdf}
\label{fig:RICH_RMS}
\end{minipage}
}
\caption{\label{fig:RMS} RMS of noise charge for (a) tracker and (b) RICH.}
\end{figure}
The HV was provided by three CAEN NDT$1471$H power supply modules. The HV was monitored throughout data-taking and recorded every minute.
The trip currents were set to $2\,\mu$A.
The supplied voltage typically varied by $\pm$2 V and each channel was calibrated separately.
The voltage of the tracker MicroMegas was set to $550$\,V.
For the RICH prototype, the voltages of the drift mesh, the THGEM top and bottom layers, and the MicroMegas were $1990\,$V, $1995\,$V, $920\,$V and $570\,$V, respectively.
\section{Reconstruction and Analysis}
\subsection{Track Reconstruction and Extrapolation}
The track reconstruction proceeds in the following stages: (i) hit finding, in which hits are grouped into track candidates for each tracker; (ii) estimating the hit position with the charge centroid method; (iii) fitting the track candidates with a linear function; and (iv) extrapolating the optimal track to the RICH prototype.
Figure~\ref{fig:position} shows the distribution of the residuals between the hit position and the extrapolated position for the trackers and the RICH (rotated to the $30^\circ$ position). After reconstruction, the spatial resolutions of the first two trackers were about $140 \, \mu$m, while the resolution of the last tracker was about $1.3\,$mm, mainly because the electron beam was scattered by the RICH detector.
For the RICH, a cluster is treated as the ionization signal if it lies within 2 pads ($\approx 9$\,mm) of the extrapolated position and has the largest accumulated charge. The resolution $\sigma_y$ of the RICH is improved by a factor of $\sqrt{3}/2$ due to the rotation.
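As an illustration of stages (ii) to (iv), the following Python sketch (our own; the strip positions, charges and $z$ coordinates are invented for illustration and are not measured values) computes charge-centroid hit positions, fits a straight line through the tracker hits, and extrapolates it to an assumed RICH plane.
\begin{verbatim}
import numpy as np

def charge_centroid(strip_pos, strip_charge):
    # Stage (ii): charge-weighted mean of the strip positions.
    q = np.asarray(strip_charge, dtype=float)
    return float(np.dot(strip_pos, q) / q.sum())

def fit_and_extrapolate(z_trackers, x_hits, z_rich):
    # Stages (iii)-(iv): least-squares line x = a*z + b, then x(z_rich).
    a, b = np.polyfit(z_trackers, x_hits, deg=1)
    return a * z_rich + b

# Illustrative numbers only: three trackers at z = 0, 200, 400 mm,
# with the RICH plane assumed at z = 300 mm.
x_hits = [charge_centroid([9.8, 10.2, 10.6], [120, 300, 90]),
          charge_centroid([11.0, 11.4], [210, 180]),
          charge_centroid([12.2, 12.6, 13.0], [80, 260, 110])]
print(fit_and_extrapolate([0.0, 200.0, 400.0], x_hits, 300.0))
\end{verbatim}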
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{SpaRmsTot-eps-converted-to.pdf}
\caption{Distribution of the residuals between the hit position and the extrapolated position.}
\label{fig:position}
\end{figure}
\subsection{Cherenkov Angle Reconstruction Algorithm}
An analytical Cherenkov angle reconstruction algorithm was developed. It is an extension of the $\beta$-method\cite{bib:AliceTDR} of ALICE HMPID, with the following hypotheses: the refractive index of each optical component is taken at the average photon wavelength of $181\,$nm ($6.85\,$eV), and the photon emission point is taken at the centre of the track path inside the Cherenkov radiator.
Fig.\ref{coordinate} illustrates the geometry of the angle reconstruction.
The reconstructed Cherenkov angle is obtained by fitting the photon distribution map.
\begin{figure}[]
\includegraphics[width=\linewidth]{Coordinate.pdf}
\caption{The coordinate system, set up at the center of the radiator. The blue line is the track of the incident electron and the orange line is a Cherenkov photon produced by the electron.
}
\label{coordinate}
\end{figure}
Using the law of cosines and trigonometric identities, we obtain:
\begin{equation}
a=\frac{-\sqrt{X_{ep}^2\cos^2\theta_c\sec^4\theta_0(1+\cos2\theta_0-2\cos2\phi\sin^2\theta_0)}+2X_{ep}\sin\phi\tan\theta_0}{2\cos^2\theta_c\sec^2\theta_0-2\sin^2\phi\tan^2\theta_0}
\label{eqa}
\end{equation}
where $X_{ep}$ is the starting point of the Cherenkov radiation, $\theta_0$ is the angle between the electron beam and the normal of the detector, $\theta_c$ is the Cherenkov angle, equal to $\arccos(1/(n\beta))$, $n$ is the refractive index of quartz, $\beta$ is the velocity of the electron in units of $c$, and $\phi$ is the polar angle in our coordinate system.
Considering refraction process:
\begin{equation}
R=a+\left|T_g\right|\tan(\arcsin(\frac{n_1}{n_2}\frac{a}{\sqrt{a^2+X_{ep}^2}}))
\end{equation}
where $R$ denotes the polar radius, in a polar coordinate system established in the plane of the anode, of the photon hit position after refraction, and $T_g$ is the thickness of the gas gap.
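A direct numerical transcription of the two formulas above may be useful. The Python sketch below is purely illustrative: the parameter values, and the refractive index $n\approx1.55$ assumed for quartz at $181$\,nm, are our assumptions, not measured inputs. The sign of $a$ follows the polar-axis convention of the coordinate system above, and the function returns \texttt{None} when the photon is totally internally reflected at the quartz/gas interface.
\begin{verbatim}
import math

def ring_radius(theta0, theta_c, phi, x_ep, t_g, n1, n2):
    # Numerical transcription of Eqs. (1)-(2) above.
    # Angles in radians; lengths in a common unit (e.g. mm).
    sec0 = 1.0 / math.cos(theta0)
    num = (-math.sqrt(x_ep**2 * math.cos(theta_c)**2 * sec0**4
                      * (1 + math.cos(2*theta0)
                         - 2*math.cos(2*phi)*math.sin(theta0)**2))
           + 2*x_ep*math.sin(phi)*math.tan(theta0))
    den = (2*math.cos(theta_c)**2 * sec0**2
           - 2*math.sin(phi)**2 * math.tan(theta0)**2)
    a = num / den
    sin_r = (n1 / n2) * a / math.hypot(a, x_ep)
    if abs(sin_r) >= 1.0:
        return None          # totally internally reflected
    return a + abs(t_g) * math.tan(math.asin(sin_r))

# Illustrative values: theta0 = 30 deg, n(quartz) ~ 1.55 at 181 nm
# (so theta_c = arccos(1/(n*beta)) with beta ~ 1), 5 mm emission
# depth, 88.2 mm propagation gap, gas with n2 ~ 1.
theta_c = math.acos(1.0 / 1.55)
print(ring_radius(math.radians(30), theta_c, math.radians(90),
                  5.0, 88.2, 1.55, 1.0))
\end{verbatim}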
\subsection{Analysis results}
Typical images of superimposed events and of single events are shown in Fig.\ref{fig:TotSig}.
\begin{figure}[]
\centering
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{ExperimentTotSignal-eps-converted-to.pdf}
\label{fig:ExperimentTotSignal}
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{ExperimentSingleSignal-eps-converted-to.pdf}
\label{fig:ExperimentSingleSignal}
\end{minipage}
\caption{\label{fig:TotSig} Superimposed events including the beam region and single events.}
\end{figure}
The angular resolution for a single photo-electron (SPE) is affected by the following systematic errors:
(1) The $chromatic \ error \ \sigma_n$, related to the chromatic dispersion of the radiator refractive index $n$ over the detectable energy range of the Cherenkov photons. The photon wavelength range is about $160\sim210$\,nm, determined by the convolution of the CsI photo-cathode quantum efficiency with the transmission of the media traversed by the Cherenkov photons inside the detector, and the refractive index of quartz varies by $\approx 7\%$ over this region.
(2) The $geometric \ error \ \sigma_{geo}$, related to the uncertainty on the position of the photon emission point. This error is proportional to the track path inside the radiator, $\frac{L/\cos\theta}{\sqrt{12}}$.
(3) The $localization \ error \ \sigma_{local}$, related to the spatial resolution of the RICH detector. It is determined by detector characteristics such as the pad size. Note that for Cherenkov photon detection, a large fraction of the pad clusters consists of a single pad hit, so the centroid evaluation is not applicable. This error is estimated as $L_{pad}/\sqrt{12}$.
(4) The $multiple \ scattering \ error \ \sigma_{ms}$, related to the multiple scattering of the electron beam in the radiator and its container. This error is estimated by taking $\sigma_{ms}\sim\delta\theta_{ms}$, where $\delta\theta_{ms}\propto(1/p)\sqrt{L/X_0}$.
(5) The $track \ incident \ angle \ error \ \sigma_{\theta_0}$, related to the particle incident angle $\theta_0$ and to the precision of the tracking detectors.
Figure \ref{fig:syserror} shows the distribution of the calculated contributions to the total angular resolution as a function of the azimuthal angle $\phi$ for the incident angle $\theta_0 = 30^\circ$. The total is dominated by the chromatic error, which is intrinsic and contributes about $14\,$mrad at an azimuthal angle of $90^\circ$. The total calculated error is about $20\,$mrad.
\begin{figure}[]
\centering
\centering
\includegraphics[width=0.7\textwidth]{AnaCal-eps-converted-to.pdf}
\caption{\label{fig:syserror} Contributions to the Cherenkov angle resolution from each systematic error component, for $\theta_0=30^\circ$.}
\end{figure}
The analytical reconstruction studies for SPE have been carried out with the test-beam data in single-particle events.
As shown in Fig.~\ref{fig:SPE_beam}, the angular resolution for SPE is about $24.07$\,mrad at $\theta_0 = 30^\circ$.
A Monte Carlo simulation based on Geant4\cite{bib:geant4} has been developed, aiming at a better understanding of the experimental data.
Figure \ref{fig:SPE_MC} shows the MC simulated angular resolution for the same configuration. The resolution is about $21.45$\,mrad, consistent with the analytical calculation. The difference between MC and test beam data arises from the non-uniformity of the quantum efficiency, the THGEM and MicroMegas detection efficiencies, and other effects.
\begin{figure}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{SPE_BeamTest-eps-converted-to.pdf}
\label{fig:SPE_beam}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.9\textwidth]{SPE_MC-eps-converted-to.pdf}
\label{fig:SPE_MC}
\end{minipage}
}
\caption{SPE Cherenkov angle resolution from test beam data and from MC simulation.}
\label{fig:SPE}
\end{figure}
The angular resolution has also been measured for tracks with different incident angles with respect to the normal direction. Fig.~\ref{fig:sig_angle} shows the results for $30^\circ$ to $45^\circ$ in $5^\circ$ steps. The angular resolution becomes worse as the incident angle increases, and the Monte Carlo simulation shows a similar behavior. This can be seen from Eq.\,\ref{eqa}: as the incident angle $\theta_0$ gets closer to the Cherenkov angle $\theta_c$, the spatial resolution dominates the determination of the value $a$, and the angular resolution therefore degrades.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{sigma_angle-eps-converted-to.pdf}
\caption{Angular resolution as a function of the track incident angle.}
\label{fig:sig_angle}
\end{figure}
\section{Conclusion}
The STCF is a new-generation electron-positron collider. An excellent PID system is vital, especially for hadrons with momenta up to $2$\,GeV/c. The hybrid THGEM-MicroMegas detector with an approximately focusing radiator design has been adopted as the baseline for the barrel PID detector.
A prototype with a design similar to that of the STCF PID detector has been built and tested with the DESY electron beam.
The homemade THGEM and MicroMegas hybrid detector has been demonstrated to be capable of single-photon detection, with high gains of the order of $10^5$.
Additionally, the AGET front-end chips showed good performance during the beam test.
A new analytical reconstruction algorithm, extended from the ALICE HMPID $\beta$-method, has been developed. The most relevant parameters of the reconstruction algorithm have been studied using both simulations and beam-test measurements. This algorithm is not valid at large incident angles, and a new reconstruction algorithm based on the maximum likelihood method is being developed for this purpose.
The optimization of the detector response and the engineering issues related to a larger prototype with a liquid C$_6$F$_{14}$ radiator are presently being investigated.
\section*{Acknowledgement}
This work was supported by the National Natural Science Foundation of China (Grant Nos. U1732266, 12175241, 12027803), Key Research Program of Frontier Sciences of CAS (Grant No. QYZDB-SSW-SLH039), international partnership program of CAS(Grant No. 211134KYSB20200057). The authors thank Hefei Comprehensive National Science Center for their strong support and the Double First-Class university project foundation of USTC. The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).
\section{Introduction and Main Results}
\subsection{Introduction}
Given a finite set $A\subset \mathbb{R}\backslash \{0\}$, define
\begin{align*}&A\cdot A \ =\ \{a_i\cdot a_j\,|\, a_i,a_j\in A\},\\
&A/A \ =\ \{a_i/a_j\,|\,a_i,a_j\in A\}.\end{align*}
The set $A$ is said to be MPTQ (more product than quotient) if $|A\cdot A|>|A/A|$, quotient-dominated if $|A\cdot A|<|A/A|$, and balanced if $|A\cdot A| = |A/A|$. Also, define \begin{align*}A + A \ =\ \{a_i + a_j\,|\, a_i,a_j\in A\},\\A - A \ =\ \{a_i - a_j\,|\,a_i,a_j\in A\}.\end{align*}
The set $A$ is said to be MSTD (more sum than difference) if $|A + A|>|A - A|$.
We consider MPTQ and MSTD subsets of $\mathbb{R}$ (instead of $\mathbb{N}$ as in previous work) because this extension allows us to define the log transformation and the exponential transformation, which are crucial in describing the relationship between the two types of sets. Since multiplication and addition are commutative while division and subtraction are not, it is natural to think that MPTQ and MSTD sets are very rare. However, they do exist. It is believed that Conway (1969) was the first to give an example of an MSTD set,
$$\{0,2,3,4,7,11,12,14\},$$
whose sum set and difference set have $26$ elements and $25$ elements, respectively. The number of MSTD subsets of $\{1,2,\ldots,n\}$ grows quite quickly as $n$ grows. On the other hand, it is harder to find MPTQ subsets of $\{1,2,\ldots, n\}$ because $\{1,2,\ldots,36\}$ contains no MPTQ subsets. Hence, we instead look for MPTQ subsets of $\{2^n\cdot 3^m\,|\,0\le n,m\le 6\}$. Below are several sets we found:
\begin{align*}
&\{12,27,36,96,108,144,162,243,648,864,1944\},\\
&\{8,18,32,36,48,216,324,432,486,864,1944\},\\
&\{4,9,12,32,36,48,54,81,216,288,648\},\\
&\{1,6,8,9,24,72,108,288,324,432,2592\},\\
&\{3,18,24,27,72,108,324,864,972,1296,7776\}.
\end{align*}
Surprisingly, in 2007, Martin and O'Bryant \cite{MO} proved that as $n\rightarrow \infty$, the proportion of MSTD subsets of $\{1,2,\ldots,n\}$ is bounded below by a positive constant. Since then, research on sum-dominant sets has made considerable progress: see \cite{FP, Ma, Na2, Ru1, Ru2, Ru3} for
history and overview, \cite{He,MOS,MS,Na3,Zh1} for explicit constructions, \cite{CLMS2, MO, Zh3} for positive lower bound for the percentage of
sum-dominant sets, and \cite{CI1,CLMS1,MV,Zh2} for extensions to other settings. However, much less is known about MPTQ sets. Fortunately, many results on MSTD sets hold for MPTQ sets with the use of the log transformation and exponentiation of sets. The goal of this paper is to provide an understanding of MPTQ sets through both what we know about MSTD sets and unique properties of MPTQ sets themselves. Furthermore, properties of MPTQ sets also shed light on new results about MSTD sets. We focus on four topics: \textit{how to search for MPTQ subsets of $\{1,2,\ldots,n\}$ more efficiently}, \textit{the probability measure of MPTQ subsets of $\{1,2,\ldots,n\}$}, \textit{when sets are not MPTQ}, and \textit{what sequences do not contain MPTQ subsets}.
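Since checking whether a given set is MPTQ only involves counting exact products and quotients, examples are easy to verify by machine. The following short Python sketch (ours; it uses exact rational arithmetic via \texttt{Fraction}, which suffices for integer sets such as those listed above) classifies a set as MPTQ, quotient-dominated, or balanced.
\begin{verbatim}
from fractions import Fraction

def product_set_size(a):
    return len({x * y for x in a for y in a})

def quotient_set_size(a):
    # Exact rational arithmetic; assumes integer (or Fraction) inputs.
    return len({Fraction(x, y) for x in a for y in a})

def classify(a):
    p, q = product_set_size(a), quotient_set_size(a)
    if p > q:
        return "MPTQ"
    return "quotient-dominated" if p < q else "balanced"

# The first set from the list above:
A = [12, 27, 36, 96, 108, 144, 162, 243, 648, 864, 1944]
print(product_set_size(A), quotient_set_size(A), classify(A))
\end{verbatim}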
\subsection{Notation}
We first introduce some notation.
\begin{enumerate}
\item For $n\in\mathbb{N}$ and $r\in \mathbb{R}\backslash\{0,\pm 1\}$, define $G_{n,r} = \{1,r,r^2,\ldots,r^{n-1}\}$.
\item For $(a_i)_{i=1}^\ell$ and a set $A$, we write $(a_i)_{i=1}^\ell\rightarrow A$ to mean the introduction of $\ell$ numbers $(a_i)_{i=1}^\ell$ into the set $A$ to form $A\cup\{a_i \mid 1\le i\le \ell\}$. (We assume that $a_i\notin A$ for all $1\le i\le \ell$.)
\item Given a set $A$ of positive real numbers and $1\neq r>0$, define
$$\log_r A \ =\ \{\log_r a_i\,|\,a_i\in A\}.$$
Because $A$ contains only positive numbers, $\log_r A$ is well-defined and $|\log_r A| = |A|$. We call this the \textit{$r$-log transformation} of $A$.
\item Given a set $B$ of real numbers and $1\neq r > 0$, define
$$r^B \ =\ \{r^{b_i}\,|\, b_i\in B\}.$$
Because $1\neq r>0$, $|r^B| = |B|$. We call this the \textit{$r$-exponential transformation} of $B$.
\item Let $A = \{a_1,a_2,\ldots,a_n\}$, where $|a_1| \le |a_2| \le \cdots \le |a_n|$. We write $A$ in the following form
$$A \ =\ (a_1\,|\,a_2/a_1, a_3/a_2,\ldots, a_n/a_{n-1}).$$ All information about the set $A$ is preserved in this notation. Call $$a_2/a_1,a_3/a_2,\ldots,a_n/a_{n-1}$$ \textit{a multiplier sequence}. Note that the absolute value of each quotient in a multiplier sequence is at least $1$, and one set may have more than one multiplier sequence, as shown in Example \ref{>1sq}.
\begin{exa} \label{>1sq} \normalfont
Let $A = \{5,1280,-10,-40,40,2560,160,320\}$. We can write $$A \ =\ (5\,|\,-2,4,-1,4,2,4,2)$$ or
$$A \ =\ (5\,|\,-2,-4,-1,-4,2,4,2).$$
\end{exa}
\end{enumerate}
\subsection{Main results}
\begin{thm}\label{esearch}
Let $n\in \mathbb{N}_{\ge 4}$. If we want to find all MPTQ subsets of $\{1,2,\ldots,2n\}$, we need to check at most $2^{2n-t}$ subsets, where $t$ is the number of primes strictly between $n$ and $2n$.
\end{thm}
With a simple program, the author found no MPTQ subsets of $\{1,2,\ldots,36\}$, and the program reported a memory error when $\{1,2,\ldots,38\}$ was attempted. Recall that $\{1,2,\ldots, 15\}$ already contains several MSTD sets, so MPTQ sets appear much later than MSTD sets. Along with \cite[Theorem 1]{MO}, the following theorem shows that MPTQ sets are rare compared to MSTD sets.
\begin{thm}\label{goto0}
As $n\rightarrow \infty$, the proportion of MPTQ subsets of $\{1,2,\ldots,n\}$ approaches $0$; that is, as $n\rightarrow \infty$, almost all subsets of $\{1,2,\ldots,n\}$ are not MPTQ.
\end{thm}
Our next result concerns the smallest cardinality of MPTQ sets, analogous to \cite[Theorem 1]{He} by Hegarty.
\begin{thm}\label{smallestcar}
Let $A$ be an MPTQ set of real numbers. The following claims are true.
\begin{enumerate}
\item If $A$ contains only positive numbers, then $|A|\ge 8$.
\item If $A$ contains negative numbers, then $|A|\ge 5$.
\end{enumerate}
\end{thm}
When we allow negative numbers to be included, the proof for the smallest cardinality becomes considerably more complicated.
\begin{que}\label{quesmallest}\normalfont
What is the smallest cardinality among MPTQ sets of real numbers?
\end{que}
To prove \cite[Theorem 1]{He}, Hegarty (2007) used a nontrivial algorithm to reduce the problem to a finite computation; the Mathematica program was reported to run for about 15 hours. However, because it takes less memory and computation power for computers to do addition and subtraction than multiplication and division, Question \ref{quesmallest} is quite challenging.
Lastly, we find sequences that do not contain MPTQ subsets.
\begin{thm}\label{logprimenoMSTD} Let $P$ be the set of all primes. The following are true.
\begin{enumerate}
\item The set $P$ contains no MPTQ subsets.
\item Fix $1\neq r > 0$. Consider $P_r = \log_r(P)$. Then $P_r$ contains no MSTD subsets.
\end{enumerate}
\end{thm}
\begin{thm}\label{thm:gen} Let $A = \{a_k\}_{k=1}^\infty$ be an increasing sequence in absolute value of real numbers. If there exists a positive integer $r$ such that
\begin{enumerate}
\item $|a_k| > |a_{k-1}\cdot a_{k-r}|$ for all $k\ge r+1$, and
\item $A$ does not contain any MPTQ set $S$ with $|S| \le 2r-1$,
\end{enumerate}
then $A$ contains no MPTQ set.
\end{thm}
Theorem \ref{thm:gen} is comparable to \cite[Theorem 1]{CI1} by H. V. Chu et al. but allows more flexibility in the sense that our sequence needs only to be increasing in absolute value.
\begin{exa}\label{mulFi}\normalfont
Define the Fibonacci sequence to be $F_1 = 1$, $F_2 = 2$, and $F_{n} = F_{n-1} + F_{n-2}$ for $n\ge 3$.
Let $A = \{a_k\}_{k=1}^\infty$ with $a_k = 2^{F_k}$. Because for $k\ge 4$ we have $a_k = a_{k-1}a_{k-2} > a_{k-1}a_{k-3}$, and there are no MPTQ sets of size at most $5$ by Theorem \ref{smallestcar} item 1, $A$ has no MPTQ subsets.
\end{exa}
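The hypothesis of Theorem \ref{thm:gen} for this example can also be checked numerically over any finite range, as in the following small sketch (using the paper's convention $F_1 = 1$, $F_2 = 2$):
\begin{verbatim}
# Check F_k > F_{k-1} + F_{k-3} (equivalently a_k > a_{k-1} a_{k-3}
# for a_k = 2^{F_k}) over a finite range; F[k-1] stores F_k.
F = [1, 2]
while len(F) < 50:
    F.append(F[-1] + F[-2])
print(all(F[k-1] > F[k-2] + F[k-4] for k in range(4, 50)))  # True
\end{verbatim}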
\begin{exa}\label{mulFine}\normalfont
Let $A = \{a_k\}_{k=1}^\infty$ with $a_k = \pm k^{F_k}$ (we may choose the sign for each $a_k$ arbitrarily). Because for $k\ge 3$, $$|a_k|\ =\ k^{F_k} \ =\ k^{F_{k-1}}\cdot k^{F_{k-2}}\ >\ (k-1)^{F_{k-1}}\cdot (k-2)^{F_{k-2}} \ =\ |a_{k-1}a_{k-2}|,$$ and there are no MPTQ sets of size at most $3$ by Theorem \ref{smallestcar} item 2, $A$ has no MPTQ subsets.
\end{exa}
\begin{rek}\normalfont
It is interesting to see that while the set of prime numbers contains infinitely many MSTD subsets \cite[Theorem 5]{CI1}, it contains no MPTQ subsets. On the other hand, an example of a set containing infinitely many MPTQ subsets while no MSTD subsets is $\{1,2,2^2,2^3,\ldots\}$.\footnote{The reason that $\{1,2,2^2,2^3,\ldots\}$ has no MSTD subsets is due to \cite[Corollary 8]{CI1}.} Finally, we also have sets that contain neither MSTD nor MPTQ subsets. An example is the sequence in Example \ref{mulFi}.
\end{rek}
\section{Search for MPTQ subsets more efficiently and\\ Probability measure for MPTQ subsets}
\begin{defi}\normalfont
For every MPTQ set $A$, let $k$ be the largest positive integer (if any) such that
$$|A\cdot A|-|A/A| \ge k|A|+\frac{k(k-3)}{2}+1.$$
Then $A$ is said to be $k$-special MPTQ.
\end{defi}
\begin{proof}[Proof of Theorem \ref{esearch}]
Fix $n\ge 4$. Let $t$ be the number of primes strictly between $n$ and $2n$. By Bertrand's postulate, we know that $t\ge 1$. Let $p$ be such a prime and $A$ be a subset of $\{1,2,\ldots,2n\}$ not containing $p$. We claim that $p\rightarrow A$ gives $|A|+1$ new products and $2|A|$ new quotients. We proceed by proving the claim.
Write $A = \{a_1,a_2,\ldots, a_j\}$, where $a_1<a_2<\cdots<a_j$. Consider the following products
$$pa_1, pa_2, \ldots, pa_j, p^2.$$
They are all new products from $p\rightarrow A$. Indeed, suppose that there exists $1\le k,\ell,m\le j$ such that either
$a_ka_\ell = pa_m$ or $a_ka_\ell = p^2$. In both cases, either $p|a_k$ or $p|a_\ell$; since every element of $A$ is at most $2n$ and $n<p<2n$, this forces $a_k = p$ or $a_\ell = p$, contradicting $p\notin A$. So, the number of new products is exactly $j+1 = |A| + 1$.
Consider the following quotients
$$\frac{p}{a_1}, \frac{p}{a_2},\ldots, \frac{p}{a_j}.$$
They are all new quotients from $p\rightarrow A$. Indeed, suppose that there exist $1\le k,\ell, m\le j$ such that $\frac{p}{a_k} = \frac{a_\ell}{a_m}$. Then $pa_m = a_ka_\ell$ and so either $p|a_k$ or $p|a_\ell$; since $p\notin A$, any multiple of $p$ in $A$ is at least $2p$. Hence, $$\max \{a_\ell, a_k\}\ \ge\ 2p \ >\ 2n,$$
which contradicts that $A\subseteq \{1,2,\ldots, 2n\}$. Therefore, all the above quotients and their reciprocals are new. So, the number of new quotients is exactly $2j = 2|A|$.
We have proved that $p\rightarrow A$ gives $|A|+1$ new products and $2|A|$ new quotients. For any $|A|\ge 1$, $2|A| \ge |A|+1$. So, given an MPTQ set containing some primes strictly between $n$ and $2n$, we know that by excluding these primes from the set, we still have an MPTQ set.
Let $S$ be an MPTQ subset of $\{1,2,\ldots, 2n\}$ containing no prime strictly between $n$ and $2n$, and let $k$ be the maximum number of primes strictly between $n$ and $2n$ that can be added to $S$ while keeping an MPTQ set $S'$. Applying our above claim repeatedly, we have
\begin{align*}|S'\cdot S'| &\ =\ |S\cdot S|+\sum_{i=1}^{k} (|S|+i)\\
|S'/S'|&\ =\ |S/S| + \sum_{i=0}^{k-1}2(|S|+i).\end{align*}
Because $|S'\cdot S'| - |S'/S'|\ge 1$, we have
$$|S\cdot S| - |S/S|\ \ge\ k|S| + \frac{k(k-3)}{2} + 1.$$
So, $S$ is $k$-special MPTQ.
We now outline the steps in finding all MPTQ subsets of $\{1,2,\ldots, 2n\}$.
\begin{itemize}
\item[(1)] Search for all MPTQ subsets of $\{1,2,\ldots, 2n\}$ that contain no primes strictly between $n$ and $2n$.
\item[(2)] For each MPTQ subset $S$, find the largest positive integer $k$ such that $$|S\cdot S| - |S/S|\ \ge\ k|S| + \frac{k(k-3)}{2} + 1;$$
in other words, classify all MPTQ subsets found in step (1) by their $k$-special MPTQ property. This can be done since from step (1), we already know $|S\cdot S|$, $|S/S|$ and $|S|$ of each MPTQ set $S$.
\item [(3)] Given a $k$-special MPTQ set, we can add at most $k$ primes strictly between $n$ and $2n$ to it and still have a MPTQ set.
\end{itemize}
Following these steps, we will have all MPTQ subsets of $\{1,2,\ldots, 2n\}$. Therefore, the number of subsets we need to check is reduced by a factor of $2$ for each such prime $p$. Because there are $t$ primes strictly between $n$ and $2n$, this
method helps reduce the number of subsets to be checked by a factor of $2^t$. \footnote{There are many improved versions of Bertrand's postulate, which may reduce the number of subsets to be checked further as our $n$ grows. For example, Nagura \cite{Nagura} proved that for $n\ge 25$, there is always a prime between $n$ and $6n/5$. Therefore, between $n$ and $2n$, there are at least $2$ primes. This reduces the number of subsets to be checked by a factor of $4$. }
\end{proof}
\begin{exa}\normalfont
If we want to find all MPTQ subsets of $\{1,2,\ldots,36\}$, we can instead find all MPTQ subsets of $\{1,2,\ldots,36\}\backslash \{19,23,29,31\}$.
\end{exa}
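As a toy illustration of these steps (at a size far below the range where MPTQ sets can occur), the following Python sketch enumerates only the subsets of $\{1,\ldots,2n\}$ avoiding the primes strictly between $n$ and $2n$, here for $2n = 10$; since $\{1,2,\ldots,36\}$ contains no MPTQ subsets, it reports none.
\begin{verbatim}
from fractions import Fraction
from itertools import combinations

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def is_mptq(a):
    return (len({x * y for x in a for y in a})
            > len({Fraction(x, y) for x in a for y in a}))

n = 5
skip = {p for p in range(n + 1, 2 * n) if is_prime(p)}   # {7}
universe = [m for m in range(1, 2 * n + 1) if m not in skip]
found = [c for r in range(1, len(universe) + 1)
         for c in combinations(universe, r) if is_mptq(c)]
print(skip, len(found))   # no MPTQ subsets here, so 0
\end{verbatim}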
\begin{proof}[Proof of Theorem \ref{goto0}]
Due to \cite[Corollary 1.1]{sa} by Cilleruelo and Guijarro-Ordonez, almost all sets $A\subseteq \{1,2,\ldots,n\}$ have $|A/A| \sim Cn^2$ for some constant $C>0$. On the other hand, Erd\H{o}s \cite{Er} proved that as $n\rightarrow \infty$, $$|A\cdot A| \ \le\ |\{1,2,\ldots,n\}\cdot \{1,2,\ldots, n\}| \ =\ \frac{n^2}{(\log n)^{\delta+o(1)}} \ =\ o(n^2),$$
where $\delta = 1- \frac{1+\log\log 2}{\log 2}$. Therefore, as $n\rightarrow \infty$, almost all subsets of $\{1,2,\ldots,n\}$ are not MPTQ.
\end{proof}
\section{Preliminaries}
We now mention some important properties of MPTQ sets and the relationship between MSTD and MPTQ sets.
\begin{defi}\normalfont
A set $A$ is said to be symmetric if there exists $a\in \mathbb{R}\backslash \{0\}$ such that $a/A = A$; in this case, $A$ is symmetric with respect to $a$.
\end{defi}
\begin{exa}\normalfont
The set $S_1 = \{3,4,6,8,9,27,48,144,162,216,324,432\}$ is symmetric with respect to $1296$ because
$$S_1 \ =\ \bigg\{\frac{1296}{3},\frac{1296}{4},\frac{1296}{6},\frac{1296}{8},\frac{1296}{9},\frac{1296}{27},\frac{1296}{48},\frac{1296}{144},\frac{1296}{162},\frac{1296}{216},\frac{1296}{324},\frac{1296}{432}\bigg\}.$$
\end{exa}
\begin{lem}
A symmetric set is balanced.
\end{lem}
\begin{proof}
Let $A$ be a symmetric set with respect to $a$. We have $$|A\cdot A| \ =\ |(a/A)\cdot A| \ =\ |a\cdot (A/A)| \ =\ |A/A|.$$
Therefore, $A$ is balanced.
\end{proof}
\begin{rek}\normalfont
Let $A = \{a_1,\ldots,a_n\}$ be an MPTQ set and let $A^p$ be the nonempty subset of $A$ consisting of the elements divisible by a prime $p$. Let $q$ be a prime that does not divide any number in $A$. For each number in $A^p$, replace the power of $p$ in its prime factorization by the same power of $q$, and let $(A^p)'$ denote the resulting set. Then $(A\backslash A^p)\cup (A^p)'$ is MPTQ, because this process changes neither the size of the product set nor the size of the quotient set. MSTD sets do not enjoy this property. We call this the \textit{$(p,q)$-prime switch} of $A$.
\end{rek}
\begin{exa}\normalfont
The set
$$S_2 \ =\ \{3,4,6,8,9,27,48,72,144,162,216,324,432\}$$
is MPTQ. By the $(2,5)$-prime switch, we have the new set
$$S_3 \ =\ \{3,25,15,125,9,27,1875,1125,5625,405,3375,2025,16875\},$$
which is also MPTQ.
\end{exa}
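A sketch of the $(p,q)$-prime switch (ours, in Python) makes the invariance concrete: replacing the full power of $2$ by the same power of $5$ in each element of $S_2$ reproduces $S_3$ and leaves the sizes of the product and quotient sets unchanged.
\begin{verbatim}
from fractions import Fraction

def prime_switch(a_set, p, q):
    # Replace the full power of p in each element by the same power
    # of q; assumes q divides no element of the set.
    out = []
    for m in a_set:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        out.append(m * q**e)
    return out

def sizes(a):
    return (len({x * y for x in a for y in a}),
            len({Fraction(x, y) for x in a for y in a}))

S2 = [3, 4, 6, 8, 9, 27, 48, 72, 144, 162, 216, 324, 432]
S3 = prime_switch(S2, 2, 5)
print(sorted(S3))            # the set S_3 above
print(sizes(S2), sizes(S3))  # equal sizes; both sets are MPTQ
\end{verbatim}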
\begin{defi}\normalfont
Let $A\subset \mathbb{R}\backslash \{0\}$. For $a_i,a_j\in A$, we have $a_i/a_i = a_j/a_j = 1$. We call the pair $(a_i, a_i)$, $(a_j,a_j)$ a trivial pair of equal quotients.
\end{defi}
\begin{prop}\label{trivialbounds}
For a finite set $A\subset\mathbb{R}\backslash\{0\}$, we have the following trivial bounds
\begin{align}
|A\cdot A|&\ \le\ \frac{|A|(|A|+1)}{2},\label{eq1}\\
|A/A| &\ \le\ |A|(|A|-1)+1\label{eq2}.
\end{align}
The equality in (\ref{eq1}) is achieved if every pair of numbers gives a distinct product, and the equality in (\ref{eq2}) if every pair of distinct numbers gives a distinct quotient.
\end{prop}
\begin{rek} \label{countquot}\normalfont
Given a set $A\subset\mathbb{R}\backslash\{0\}$, for each $q\in A/A$, define
$$(A/A)_q \ =\ \{\{a_i,a_j\}\,|\, a_i/a_j = q\mbox{ and } a_i,a_j\in A\}.$$
Then
\begin{align}\label{numberof=quo}\frac{1}{2}(|A|(|A|-1)+1-|A/A|) \ =\ \sum_{q\in A/A, q\neq 1, |q|\ge 1}(|(A/A)_q| - 1).\end{align}
The part $|A|(|A|-1)+1$ comes from Inequality (\ref{eq2}).
\end{rek}
We provide an example to help understand (\ref{numberof=quo}).
\begin{exa}\normalfont
Let $A = \{1,2,3,6,9\}$. We have
$$A/A = \bigg\{1,2,3,\frac{3}{2},\frac{9}{2},6,9,\frac{1}{2},\frac{1}{3},\frac{1}{9},\frac{1}{6},\frac{2}{3},\frac{2}{9}\bigg\}$$
and so, $|A/A|=13$. The left side of (\ref{numberof=quo}) is $4$. Consider the right side of (\ref{numberof=quo}). We have
\begin{align*}
&(A/A)_2 \ =\ \{\{2,1\}, \{6,3\}\},\\
&(A/A)_3 \ =\ \{\{3,1\}, \{6,2\}, \{9,3\}\},(A/A)_{3/2} \ =\ \{\{3,2\}, \{9,6\}\},\\
&(A/A)_{9/2}\ =\ \{\{9,2\}\}, (A/A)_{6} \ =\{\{6,1\}\}, (A/A)_9 \ =\ \{\{9,1\}\}.
\end{align*}
The right side is $\sum_{q\in A/A, q > 1} (|(A/A)_q|-1) = 4$, as desired.
\end{exa}
\begin{rek} \label{countprod}\normalfont
Given a set $A\subset \mathbb{R}\backslash \{0\}$, for each $p\in A\cdot A$, define
$$(A\cdot A)_p \ =\ \{\{a_i,a_j\}\,|\, a_ia_j = p\mbox{ and } a_i,a_j \in A\}.$$
Then
\begin{align}\frac{1}{2}|A|(|A|+1) - |A\cdot A| \ =\ \sum_{p\in A\cdot A}(|(A\cdot A)_p|-1).\label{numberof=pro}\end{align}
The part $\frac{1}{2}|A|(|A|+1)$ comes from Inequality (\ref{eq1}).
\end{rek}
\begin{exa}\normalfont
Let $A = \{1,2,3,6,9\}$. We have
\begin{align*}
A\cdot A \ =\ \{1,2,3,4,6,9,12,18,27,36,54,81\}
\end{align*}
and so, $|A\cdot A| = 12$. The left side of (\ref{numberof=pro}) is $3$. Consider the right side of (\ref{numberof=pro}). We have
\begin{align*}
&(A\cdot A)_1 = \{\{1,1\}\}, (A\cdot A)_2 = \{\{1,2\}\}, (A\cdot A)_3 = \{\{1,3\}\}, (A\cdot A)_4 = \{\{2,2\}\},\\
&(A\cdot A)_6 = \{\{1,6\}, \{2,3\}\}, (A\cdot A)_9 = \{\{1,9\},\{3,3\}\}, (A\cdot A)_{12} = \{\{2,6\}\},\\
&(A\cdot A)_{18} = \{\{2,9\},\{3,6\}\}, (A\cdot A)_{27} = \{\{3,9\}\},(A\cdot A)_{36} = \{\{6,6\}\},\\
&(A\cdot A)_{54} = \{\{6,9\}\}, (A\cdot A)_{81} = \{\{9,9\}\}.
\end{align*}
So, the right side is $3$, as desired. \end{exa}
\begin{rek}\label{deepex}\normalfont
Let $A\subset \mathbb{R}\backslash \{0\}$. Loosely speaking, Remark \ref{countquot} and Remark \ref{countprod} show how pairs of equal products and nontrivial pairs of equal quotients reduce $|A\cdot A|$ and $|A/A|$, respectively. Some care is needed when counting this reduction.
For example, if we have $a_i \cdot a_j = a_m \cdot a_n = a_p\cdot a_q$ for some pairwise different $a_i, a_j, a_m, a_n, a_p, a_q\in A$, then $|A\cdot A|$ is reduced by $2$, not $3$, even though $\{a_i,a_j\}, \{a_m,a_n\},\{a_p,a_q\}\in (A\cdot A)_{a_ia_j}$. This is why we need to subtract $1$ from each summand in (\ref{numberof=pro}). The same reasoning applies to $A/A$. Now, we investigate the relationship between the number of nontrivial pairs of equal quotients and the number of pairs of equal products. Consider two cases.
\begin{enumerate}
\item Case 1: we do not have $a_i \cdot a_j = a_m \cdot a_n = a_p\cdot a_q$ for any pairwise different $a_i, a_j, a_m, a_n, a_p, a_q\in A$. In other words, for all $p\in A\cdot A$, $1\le |(A\cdot A)_p|\le 2$. In this case, we have a very useful inequality. Let $a_i,a_j,a_m,a_n\in A$, where $a_j/a_i = a_n/a_m\neq 1$ and $|a_i|\le |a_j|\le |a_m|\le |a_n|$.
\begin{itemize}
\item If $a_j\neq a_m$, we have another nontrivial pair of equal quotients whose absolute values are at least $1$: $a_m/a_i = a_n/a_j$.
\item If $a_j = a_m$, then we do not have another pair.
\end{itemize}
In both cases, we have $a_j\cdot a_m = a_i\cdot a_n$, a pair of equal products. So, a nontrivial pair of equal quotients whose absolute values are at least $1$ increases the right side of (\ref{numberof=quo}) by at most $2$, while its corresponding pair of equal products increases the right side of (\ref{numberof=pro}) by exactly $1$. Hence, if $$k = \sum_{q\in A/A, q\neq 1, |q|\ge 1}(|(A/A)_q| - 1),$$ then
\begin{align}\label{www}\sum_{p\in A\cdot A}(|(A\cdot A)_p|-1)\ \ge\ k/2.\end{align}
\item Case 2: $a_i \cdot a_j = a_m \cdot a_n = a_p\cdot a_q$ for some pairwise different $a_i, a_j, a_m, a_n, a_p, a_q\in A$; that is, $|(A\cdot A)_p|\ge 3$ for some $p\in A\cdot A$. Then we do not have (\ref{www}) anymore. To see why, suppose that $\{1,4,5,8,10,40\}\subseteq A$. Then the following pairs of equal quotients
\begin{align*}
\frac{4}{1} \ =\ \frac{40}{10}, \frac{10}{1} \ =\ \frac{40}{4}, \frac{40}{8} \ =\ \frac{5}{1},\frac{40}{5} \ =\ \frac{8}{1}, \frac{5}{4} \ =\ \frac{10}{8},\frac{10}{5} \ =\ \frac{8}{4}
\end{align*}
increase the right side of (\ref{numberof=quo}) by $6$.
These six pairs of equal quotients correspond to three pairs of equal products:
$$4\cdot 10 \ =\ 1\cdot 40, \quad 1\cdot 40 \ =\ 5\cdot 8, \quad 4\cdot 10 \ =\ 5\cdot 8.$$
As mentioned above, the right side of (\ref{numberof=pro}) only accounts for $2$ (not $3$) out of these three pairs of equal products since $4\cdot 10 = 1\cdot 40 = 5\cdot 8$. Because $6/2 = 3>2$, we do not have Inequality (\ref{www}).
\end{enumerate}
\end{rek}
\begin{lem}\label{expo}
Let $A$ be a MSTD set. Then for every $r>0$ with $r\neq 1$, the set $B = r^A$ is MPTQ.
\end{lem}
\begin{proof}
We will prove that $|B/ B| = |A-A|$ and $|B\cdot B| = |A+A|$. Given a difference $a_i-a_j$ for some $a_i, a_j\in A$, we have the corresponding quotient $r^{a_i}/r^{a_j}$. Let $a'_i, a'_j\in A$. Because $r\notin \{0,\pm 1\}$, $a_i - a_j = a'_i - a'_j$ if and only if $r^{a_i-a_j} = r^{a'_i-a'_j}$. Therefore, $|B/B| = |A-A|$. Similarly, given a sum $a_p + a_q$ for some $a_p, a_q\in A$, we have the corresponding product $r^{a_p}r^{a_q}$. Let $a'_p, a'_q \in A$. Because $r\notin \{0,\pm 1\}$, $a_p+a_q = a'_p+a'_q$ if and only if $r^{a_p+a_q} = r^{a'_p+a'_q}$. Therefore, $|B\cdot B| = |A+A|$. Since $A$ is MSTD, $|A+A| > |A-A|$, and hence $|B\cdot B| > |B/B|$; that is, $B$ is MPTQ.
\end{proof}
\begin{lem}\label{logtrans}
Let $A$ be a MPTQ set of positive numbers and fix $r>0$ with $r\neq 1$. Then $B = \log_r A$ is MSTD.
\end{lem}
\begin{proof}
We will prove that $|B+B| = |A\cdot A|$ and $|B-B| = |A/A|$. Given a product $a_ia_j$ for some $a_i, a_j\in A$, we have the corresponding sum $\log_r a_i+\log_r a_j$ in $B+B$. Let $a'_i, a'_j\in A$. We have $a_ia_j = a'_ia'_j$ if and only if $\log_r a_i+\log_ra_j = \log_r a'_i + \log_r a'_j$. Hence, $|B+B| = |A\cdot A|$. Similarly, given a quotient $a_p/a_q$ for some $a_p, a_q\in A$, we have the corresponding difference $\log_r a_p - \log_r a_q$ in $B-B$. Let $a'_p, a'_q\in A$. We have $a_p/a_q = a'_p/a'_q$ if and only if $\log_r a_p - \log_r a_q = \log_r a'_p - \log_r a'_q$. Hence, $|B-B| = |A/A|$. Since $A$ is MPTQ, $|A\cdot A| > |A/A|$, and hence $|B+B| > |B-B|$; that is, $B$ is MSTD.
\end{proof}
\subsection*{Application: construction of an infinite family of MPTQ sets}
We can generate an infinite family of MSTD sets from a given MSTD set through the base expansion method. Let $A$ be a MSTD set, and let $$A_{k,m}=\bigg\{\sum_{i=1}^{k}a_im^{i-1}:a_i\in A\bigg\}.$$ If $m$ is sufficiently large, then $|A_{k,m}\pm A_{k,m}| = |A\pm
A|^k$ and $|A_{k,m}|=|A|^k$. The method is a very powerful tool and has been used extensively in the literature, including \cite{He, ILMZ1, ILMZ2}. However, the base expansion method turns out to be inefficient in terms of the cardinality of the MSTD sets it produces. Due to Lemma \ref{expo} and Lemma \ref{logtrans}, we can use the base expansion method to generate an infinite family of MPTQ sets from a given MPTQ set.
Let $A$ be a MPTQ set of positive real numbers. By Lemma \ref{logtrans}, we know that $\log_2 A$ is a MSTD set. Now, apply the base expansion method to generate an infinite family of MSTD sets from $\log_2 A$. Due to Lemma \ref{expo}, if $B$ is a MSTD set in the family, we know that $2^B$ is a MPTQ set.
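To make the construction concrete, here is a small illustrative sketch; it assumes only facts stated elsewhere in this paper, namely that $\{0,2,3,4,7,11,12,14\}$ is MSTD (see the next section) and Lemma \ref{expo}.
\begin{exa}\normalfont
Let $B_0 = \{0,2,3,4,7,11,12,14\}$, which is MSTD. For sufficiently large $m$, each base expansion $$B_{k,m}\ =\ \bigg\{\sum_{i=1}^{k}b_im^{i-1}: b_i\in B_0\bigg\}$$ is MSTD with $|B_{k,m}\pm B_{k,m}| = |B_0\pm B_0|^k$. By Lemma \ref{expo}, each $2^{B_{k,m}}$ is then MPTQ, so letting $k = 1,2,3,\ldots$ yields an infinite family of MPTQ sets.
\end{exa}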
\section{The smallest MPTQ set}
\begin{proof}[Proof of Theorem \ref{smallestcar} item 1]
We prove by contradiction. Let $A$ be a MPTQ set of positive numbers with $|A|\le 7$. By Lemma \ref{logtrans}, $B = \log_2 A$ is MSTD and $|B| = |A| \le 7$. This contradicts \cite[Theorem 6]{Na1}. So, $|A|\ge 8$, as desired.
\end{proof}
\begin{exa}\normalfont
An example of a MPTQ set with cardinality $8$ is
$$S_4 \ = \ \{2^0, 2^2, 2^3, 2^4, 2^7, 2^{11}, 2^{12}, 2^{14}\}.$$
This set is the $2$-exponential transformation of the MSTD set $\{0,2,3,4,7,11,12,14\}$. Lemma \ref{expo} guarantees that $S_4$ is MPTQ.
\end{exa}
The restriction in Theorem \ref{smallestcar} item 1 is that our MPTQ set contains only positive numbers. Next, we relax this condition to prove Theorem \ref{smallestcar} item 2.
We employ the same technique used by the author in \cite{CI2}, with a nontrivial modification of the proof for the product/quotient case. The proof is more complicated than the proof of \cite[Theorem 1]{CI2} because of interactions between negative and positive numbers. The next lemma follows from \cite[Proposition 7]{CI2} and the proof of Lemma \ref{expo}.
\begin{lem}\label{notfar}
Let $n\in \mathbb{N}$ and $r\in \mathbb{R}\backslash \{0,\pm 1\}$. Set $a = r^{(n-1)+k}$ for some $1\le k\le n-1$. Then $a\rightarrow G_{n,r}$ gives $k+1$ new products and $2k$ new quotients.
\end{lem}
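As a quick sanity check of Lemma \ref{notfar}, recalling that $G_{n,r} = \{1,r,\ldots,r^{n-1}\}$, consider the following small instance.
\begin{exa}\normalfont
Take $n = 3$, $r = 2$ and $k = 2$, so that $G_{3,2} = \{1,2,4\}$ and $a = r^{(n-1)+k} = 16$. Since $G_{3,2}\cdot G_{3,2} = \{1,2,4,8,16\}$, the move $a\rightarrow G_{3,2}$ gives exactly $k+1 = 3$ new products $32, 64, 256$ (note $16 = 4\cdot 4$ is not new). Since $G_{3,2}/G_{3,2} = \{1,2,4,1/2,1/4\}$, it gives exactly $2k = 4$ new quotients $16, 8, 1/8, 1/16$ (note $16/4 = 4$ is not new).
\end{exa}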
\begin{thm}\label{1joingeo}
Let $n\in\mathbb{N}$ and $r\in\mathbb{R}\backslash \{0,\pm 1\}$. For all $a\in \mathbb{R}\backslash \{0\}$, the set $G_{n,r}\cup \{a\}$ is not MPTQ.
\end{thm}
\begin{proof}
If $a\in G_{n,r}$, then we are done since $G_{n,r}$ is symmetric with respect to $r^{n-1}$ and thus, not MPTQ. For $n = 1$, we have $G_{1,r}\cup\{a\} =\{1,a\}$, which is symmetric with respect to $a$ and thus, not MPTQ. We assume that $a\notin G_{n,r}$ and $n\ge 2$. The number of new products as a result of $a\rightarrow G_{n,r}$ is at most $n+1$. We consider the following three cases.
\noindent \textbf{Case 1:} $a = r^\ell$ for some $\ell\in \mathbb{N}_{>n-1}$. If $\ell = n$, we have $G_{n,r}\cup \{a\} = G_{n+1,r}$, which is not MPTQ. Consider $\ell\ge n+1$. Write $\ell = (n-1)+k$ for some $k\ge 2$.
\begin{itemize}
\item If $2\le k\le n-1$, then by Lemma \ref{notfar}, we have $k+1$ new products but $2k$ new quotients. Since $2k > k+1$ for $k\ge 2$, our new set is not MPTQ.
\item If $k> n-1$, then we have $2n$ new quotients. Since we have at most $n+1$ new products, our new set is not MPTQ.
\end{itemize}
\noindent \textbf{Case 2:} $a = r^{\ell}$ for some $\ell\in \mathbb{Z}_{<0}$. Due to symmetry, this is similar to Case 1.
\noindent \textbf{Case 3:} $a \neq r^\ell$ for all $\ell\in \mathbb{Z}$. Our set of new quotients contains
$$K \ =\ \bigg\{a,\frac{a}{r},\ldots,\frac{a}{r^{n-1}}\bigg\}.$$
\begin{itemize}
\item If $1/a\in K$, then $a^2 \in G_{n,r}$. So, the number of new products is at most $n$. Because $|K| = n$, we know that our new set is not MPTQ.
\item If $1/a\notin K$, then we have at least $n+1$ new quotients. Again, our new set is not MPTQ.
\end{itemize}
We have completed the proof.
\end{proof}
\begin{cor}\label{geoprog}
A finite set consisting of a geometric progression together with one arbitrary additional number is not MPTQ.
\end{cor}
\begin{proof}
Let our set be $A = \{a, ar, ar^2, \ldots, ar^{n-1},b\}$, where $n\in\mathbb{N}, ab\neq 0$, $r\notin \{0,\pm 1\}$. Then, $A/a = \{1, r, r^2, \ldots, r^{n-1}, b/a\} = G_{n,r}\cup \{b/a\}$, which is not MPTQ by Theorem \ref{1joingeo}. Hence, $A$ is not MPTQ.
\end{proof}
\begin{proof}[Proof of Theorem \ref{smallestcar} item 2]
Let $A$ be our finite set of nonzero real numbers. We analyze four cases corresponding to the cardinality of $A$.
\noindent Case 1: $|A| = 1$. Write $A = \{a_1\}$ for some $a_1\in \mathbb{R}\backslash \{0\}$. Because $A$ is symmetric with respect to $a_1^2$, $A$ is not MPTQ.
\noindent Case 2: $|A| = 2$. Write $A = \{a_1,a_2\}$ for some $a_1,a_2\in\mathbb{R}\backslash \{0\}$. Because $A$ is symmetric with respect to $a_1a_2$, $A$ is not MPTQ.
\noindent Case 3: $|A| = 3$. Write $A = \{a_1,a_2,a_3\}$ for some $a_1,a_2,a_3\in \mathbb{R}\backslash \{0\}$. Consider $A/a_1 = \{1,a_2/a_1,a_3/a_1\}$. Either $a_2/a_1\neq -1$ or $a_3/a_1\neq -1$. Without loss of generality, assume that $a_2/a_1\neq -1$. Because $\{1, a_2/a_1\} = G_{2, a_2/a_1}$, Theorem \ref{1joingeo} says that $A/a_1 = G_{2,a_2/a_1} \cup \{a_3/a_1\}$ is not MPTQ. Hence, $A$ is not MPTQ.
\noindent Case 4: $|A| = 4$. Write $A = \{a_1, a_2, a_3, a_4\}$ for some $0<|a_1|\le |a_2|\le |a_3|\le |a_4|$. By Proposition \ref{trivialbounds}, we know that $\max |A\cdot A| = 10$, while $\max |A/A| = 13$. Since we have only $4$ numbers, we cannot have $a_i \cdot a_j = a_m \cdot a_n = a_p\cdot a_q$ for pairwise different $a_i, a_j, a_m, a_n, a_p, a_q\in A$. Let $$k=\sum_{q\in A/A, q\neq 1, |q|\ge 1}(|(A/A)_q| - 1),$$
then we can apply Remark \ref{deepex} Case 1 to have $$\sum_{p\in A\cdot A}(|(A\cdot A)_p|-1)\ \ge\ k/2.$$ In order that $A$ is MPTQ, it must be that
\begin{align}
13 - 2k \ <\ 10 - k/2 \label{case4}.
\end{align}
Solving for $k$, we have $k\ge 3$. Therefore, $|A/A| \le 13 - 6 = 7$. For $1\le i\le 3$, set $m_i = a_{i+1}/a_i$. Note that $|m_i| \ge 1$ and $m_i \neq 1$. Then
$$A = (a_1\,|\,m_1,m_2,m_3).$$
We have $6$ distinct quotients
$$K \ =\ \{1, m_1, m_1m_2, m_1m_2m_3, (m_1m_2)^{-1}, (m_1m_2m_3)^{-1}\}.$$
Subcase 4.1: $m_1\neq -1$. Then $(m_1)^{-1}$ is another distinct quotient. Because $|A/A|\le 7$, we have $m_2\in K\cup\{(m_1)^{-1}\}$. The only possible option is that $m_2 = m_1$. Then $\{a_1,a_2,a_3\}$ is a geometric progression. By Corollary \ref{geoprog}, $A$ is not MPTQ.
Subcase 4.2: $m_1 = -1$. Then $m_2\neq m_1$, because otherwise $m_1m_2 = 1$, i.e., $a_1 = a_3$, a contradiction. Either $m_2\notin K$ or $m_2 \in \{m_1m_2m_3, (m_1m_2m_3)^{-1}\}$.
\begin{itemize}
\item Subcase 4.2.1: $m_2\notin K$. Then $(m_2)^{-1}\in K\cup\{m_2\}$. The only option is $(m_2)^{-1} \in\{ (m_1m_2m_3)^{-1}, m_1m_2m_3\}$. So, $m_3 = -1$. Our set is then $$A\ =\ \{a_1,-a_1,-a_1m_2,a_1m_2\},$$ which is symmetric with respect to $a_1^2m_2$ and thus, not MPTQ.
\item Subcase 4.2.2: $m_2\in K$. The only option is $m_2 \in \{(m_1m_2m_3)^{-1},m_1m_2m_3\}$, or equivalently, $m_1m_3 = 1$. Again, we have $m_3 = -1$. According to Subcase 4.2.1, our set is not MPTQ.
\end{itemize}
This completes the proof that $|A|\ge 5$.
\end{proof}
\section{Sequences with no MPTQ subsets}
\begin{proof}[Proof of Theorem \ref{thm:gen}]
Let $S = \{s_1, s_2, \ldots, s_k\} = \{a_{g(1)}, a_{g(2)}, \ldots, a_{g(k)}\}$ be a finite subset of $A$, where $g: \mathbb{Z}^+\rightarrow\mathbb{Z}^+$ is a strictly increasing function. We show that $S$ is not MPTQ
by strong induction on $g(k)$.
For the base case, we know that all MPTQ sets have at least $5$ elements due to Theorem \ref{smallestcar} item 2,
so any subset $S$ of $A$ with exactly $k$ elements is not a MPTQ set if $k \le 4$; in particular,
$S$ is not a MPTQ set if $g(k)\le 4$. Thus we may assume for $g(k) \ge 5$ that all $S'$ of the
form $\{s_1, \ldots, s_{k-1}\}$ with $|s_{k-1}| \le |a_{g(k)}|$ are not MPTQ sets. The proof is completed by
showing that $S = S' \cup \{a_{g(k)}\} = \{s_1, \ldots, s_{k-1}, a_{g(k)}\}$ is not a MPTQ set for any $a_{g(k)}$.
For the inductive step, $S'$ is not a MPTQ set by the inductive assumption. If $k \le 2r-1$ then $|S| \le 2r-1$ and $S$ is not a MPTQ set by the second assumption of the theorem.
If $k\ge 2r$, consider the number of new products and quotients obtained by adding $a_{g(k)}$. As we have at most
$k$ new products, we are done if there are at least $k$ new quotients.
Since $k \ge 2r$, we have $k - \floor{\frac{k+1}{2}} \ge r$. Let $t = \floor{\frac{k+1}{2}}$. Then $t \le k - r$, which implies $|s_{t}| \le |s_{k-r}|$. The largest quotient in absolute value between elements in $S'$ is $|s_{k-1}/s_1|$ and the smallest is $|s_1/s_{k-1}|$; we now show that we have added at least $k$ distinct quotients whose absolute values are either greater than $|s_{k-1}/s_1|$ or smaller than $|s_1/s_{k-1}|$, which will complete the proof. We have
\begin{align*}
|a_{g(k)}/s_{t}| &\ \ge \ |a_{g(k)}/s_{k-r}| \ = \ |a_{g(k)}/a_{g(k-r)}| \nonumber\\
&\ \ge \ |a_{g(k)}/a_{g(k)-r}| \nonumber\\
&\ > \ |a_{g(k)-1}/a_1| & \text{(by the first assumption on $\{a_n\}$)} \nonumber\\
&\ \ge \ |s_{k-1}/a_{1}| \ = \ |s_{k-1}/s_{1}|.
\end{align*}
Since $|a_{g(k)}/s_t| > |s_{k-1}/s_1|$, we know that
$$a_{g(k)}/s_t, \ldots, a_{g(k)}/s_2, a_{g(k)}/s_1$$
are $t$ quotients whose absolute values are greater than $|s_{k-1}/s_1|$. As we can do division in the opposite order, we have $t$ quotients whose absolute values are smaller than $|s_1/s_{k-1}|$. Therefore, the total number of new quotients is at least
$$2t \ =\ 2\bigg\lfloor\frac{k+1}{2}\bigg\rfloor\ \ge \ k.$$
This completes our proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{logprimenoMSTD}]
We first prove item 1. Consider $A = \{a_1, a_2,\ldots, a_n\}\subset P$ for some $n\in \mathbb{N}$ and $a_1 < a_2< \cdots < a_n$. Due to Theorem \ref{smallestcar} item 1, it suffices to prove the following claim: if $A\backslash\{a_n\}$ is not MPTQ, then $A$ is not MPTQ. In particular, we will prove that $a_{n}\rightarrow A\backslash\{a_n\}$ gives more new quotients than new products. Clearly, $a_{n}\rightarrow A\backslash\{a_n\}$ gives at most $n$ new products. The following are new quotients
$$\frac{a_{n}}{a_1},\frac{a_{n}}{a_2},\ldots, \frac{a_{n}}{a_{n-1}}.$$
Indeed, suppose that $a_{n}/a_j = a_m/a_k$ for some $1\le m,k,j\le n-1$. Then $a_{n}a_k = a_ma_j$; since all elements of $A$ are distinct primes, unique factorization forces $\{a_n,a_k\} = \{a_m,a_j\}$, which is impossible because $a_m, a_j\le a_{n-1} < a_n$. Hence, we have $n-1$ new quotients greater than $1$. Their reciprocals must also be new. Therefore, we have $2(n-1)$ new quotients. For $n\ge 8$, $2(n-1)>n$, and so, $A$ is not MPTQ. Again, the reason we need only consider $n\ge 8$ is Theorem \ref{smallestcar} item 1.
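(For a small illustration of this counting, assume $\{2,3,5,7,11\}\subseteq P$ and adjoin $a_5 = 11$ to $\{2,3,5,7\}$: the quotients $11/2$, $11/3$, $11/5$, $11/7$ together with their reciprocals are $2(n-1) = 8$ new quotients, while the new products $22$, $33$, $55$, $77$, $121$ number at most $n = 5$.)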
We proceed to prove item 2. Fix $r>0$ and $r\neq 1$. We prove by contradiction. Suppose that $P_r$ contains a MSTD subset $A$. By Lemma \ref{expo}, $r^{A}\subset P$ is MPTQ, implying that $P$ contains a MPTQ subset. This contradicts item 1 above.
\end{proof}
\section{Questions}
We end with a list of questions for future research.
\begin{itemize}
\item The diameter of a set is defined to be the difference between the maximum and the minimum. What is the smallest diameter of a MPTQ set? What is the smallest $n$ such that $\{1,2,\ldots,n\}$ has a MPTQ subset?
\item Can we construct MPTQ sets explicitly without using MSTD sets and Lemma \ref{expo}? A conventional method of constructing MSTD sets is to fix a fringe pair $(L,R)$ of two sets containing elements to be used in the fringe of the interval and argue that all the middle elements will appear. The fringe pair ensures that some of the largest and smallest differences are missed and that our set is MSTD. For MSTD sets, one can use the fringes because it is possible to manage the interaction (addition and subtraction) of numbers in the middle. For example, $(n-9) + 4 = (n-10) + 5$. However, for multiplication, $4(n-9)$ is not necessarily equal to $5(n-10)$. Because it is hard to control the middle of MPTQ sets in this way, it is not clear how to use the fringes to construct them.
\item Is there a set that is both MSTD and MPTQ?
\end{itemize}
\subsection*{Acknowledgement}
The author would like to thank Carlo Sanna for providing the proof of Theorem \ref{goto0}. Many thanks to the anonymous referee for useful comments that helped improve this paper.
\section{A physically consistent economic framework\label{sec:Economic-framework}}
An earlier article introduced a simple macroeconomic
growth model that treats civilization in a manner consistent with physical conservation laws \citep{GarrettCO2_2009}. As illustrated in Fig.~\ref{fig:thermschematic}, all material within civilization is treated as being in local thermodynamic equilibrium with the same specific potential energy per unit matter; effectively, it is treated as a surface defined by constant temperature and pressure, constant density, or constant specific entropy. Accordingly, no distinction is made between the internal components of civilization. Unlike traditional economic models, no explicit account is made for labor, capital, households, firms, governments or banks, nor the flows to and
from these components. Rather, civilization is considered only as a whole, or at a sufficiently low resolution that the only resolved
distinction is between civilization and known primary energy reservoirs
(e.g.~coal, oil, uranium, etc.).
Flows to civilization can be viewed as a special case within the more general thermodynamic model shown in Figure ~\ref{fig:thermschematic}, a perspective that bears some similarities with the thermodynamic model used by \citet{AnnilaSalthe2009} to represent economic flows. Energy reservoirs lie along a higher potential surface than the system of interest. The interface that separates these two surfaces is defined by a fixed step in specific potential energy $\Delta\mu$ (units potential energy per material unit) and a number of material units defining the length of the interface $\breve{n}$. The total potential difference that is available to drive downward flows is the product of these two quantities, i.e., ${\Delta}G$\,=\,$\breve{n}\Delta\mu$. The flow redistributes the overall balance of potential energy towards the lower potential surface. Total material is conserved as required by the First Law of Thermodynamics, and the flow is downhill as required by the Second Law of Thermodynamics. The flow represents a ``heating'' of the lower potential system. The heating sustains this open system against a nearly equal dissipative flow due to the loss of energy to the system's surroundings.
For civilization, the heating is equivalent to the rate~$a$ (units energy per time) at which civilization consumes the potential energy in primary energy resources. The flow rate of energy is proportional to the material length of the interface $\breve{n}$ through
\begin{equation}
a~=~\alpha~{\Delta}G~=~\alpha~\breve{n}\Delta\mu\label{eq:aalphaG}
\end{equation}
where $\alpha$ is a constant rate coefficient with units inverse time (effectively a diffusivity). This consumption of potential energy is more precisely defined as a material flux. For civilization, coal and oil are examples of the agents that carry the potential energy we consume. However, civilization is not made of coal and oil, but rather of raw materials such as iron and copper. Potential energy consumption enables these raw materials to be carried downward along with the energetic flow to add to civilization's material bulk and sustain it against decay.
If civilization's economic activities are part of the physical universe, then perhaps there might be a fiscal representation for the physical flows that sustain it. A hypothesis can be proposed that the size of civilization is expressible thermodynamically by the potential difference $\Delta G$ driving flows, or equivalently the heating of civilization at rate $a=\alpha\Delta G$. Since heating sustains all of civilization's activities against its ultimate dissipative loss to its surroundings, the heating rate might conceivably be what civilization intrinsically values, and therefore it might be related to a very general expression of civilization's real, or inflation-adjusted economic wealth through\begin{equation}
a~\equiv~{\lambda}C\label{eq:alamC}
\end{equation}
where the rate of
consumption of the potential energy in primary energy resources $a$~ (units energy
per time) is related through a constant parameter $\lambda$ to a
fiscal representation of global economic wealth~$C$ (units inflation-adjusted
currency). If there is no energy consumption, then civilization is worthless because the activities that sustain civilization's value cannot be maintained against civilization's energy loss through decay. Effectively civilization becomes indistinguishable from its surroundings because the interface $\breve{n}$ and the gradient $\Delta{G}$ shrink to zero. We eat to sustain ourselves against radiative heat loss. If we don't eat, eventually we die.
Here, wealth~$C$ is defined as the historical accumulation of
gross world economic real production~$P$ (units inflation-adjusted currency
per time). A comparison of this definition with more traditional approaches is presented in Section 4. Here, real \mbox{production~$P$} is an instantaneous quantity that is related
to the familiar gross world product (GWP) through
\begin{equation}
{\rm GWP}~=~{P}\Delta{t}\label{eq:GWP}
\end{equation}
where, for the sake of economic
statistics, ${\Delta}t$ is normally equal to one year. Total economic wealth
is distinct from production in that it is not a differential but an integral
quantity (units inflation-adjusted currency). As wealth is defined here, it
is represented by the historical accumulation of production
\begin{equation}
C~\equiv~\int_{0}^{t}~P~\left(t'\right)~dt'~\simeq~\sum_{i}~{\rm GWP}~\left(i\right)\label{eq:C}
\end{equation}
where $i$~is an index covering the full historical record for GWP. Equivalently, economic production is a consequence of a convergence of the material and energetic flows associated with wealth
\begin{equation}
\frac{dC}{dt}~\equiv~P\label{eq:dCdtP}
\end{equation}
or, expressed thermodynamically, from Eqs. \ref{eq:aalphaG} and \ref{eq:alamC}
\begin{equation}
P~=~\frac{1}{\lambda}~\frac{da}{dt}~=~\Delta\mu\frac{\alpha}{\lambda}\frac{d\breve{n}}{dt}\label{eq:Palphaw}
\end{equation}
Effectively, economic production~$P$ is a fiscal representation of the growth rate
of energy consumption \textit{da/dt} through an expansion of civilization's material interface $\breve{n}$ into the primary energy reservoirs that it consumes. Combining Eqs.~(\ref{eq:C}) and~(\ref{eq:Palphaw}), global wealth arises from an accumulation of a net material convergence over time:
\begin{equation}
C~\equiv~\frac{1}{\lambda}~\int_{0}^{t}~\frac{da}{dt}~dt'~=~\Delta\mu\frac{\alpha}{\lambda}~\int_{0}^{t}~\frac{d\breve{n}}{dt}~dt'\label{eq:Cw}
\end{equation}
Eqs.~(\ref{eq:aalphaG}) and~(\ref{eq:alamC}) imply a direct proportionality between wealth~$C$, rates of primary energy consumption~$a$, and the size of the interface driving flows $\Delta{G}=\breve{n}\Delta\mu$. In this case, there is a rate of return $\eta$ that applies equally to each:
\begin{equation}
\eta~\equiv~\frac{d~\ln~{\Delta}{G}}{dt}~=~\frac{d~\ln~\breve{n}}{dt}~=~\frac{d~\ln~a}{dt}~=~\frac{d~\ln~C}{dt}\label{eq:rateofreturn}
\end{equation}
Positive values of $\eta$ allow for exponential growth associated with interface expansion. Civilization wealth and energy consumption are in exponential decay if the interface $\breve{n}$ shrinks.
Thus, from Eqs.~(\ref{eq:dCdtP}) and~(\ref{eq:rateofreturn}), the economic production function for this framework is
\begin{equation}
P~\equiv~\frac{dC}{dt}~=~{\eta}C\label{eq:PetaC}
\end{equation}
The rate of return $\eta$ (units inverse time) is a time-varying quantity that relates the past accumulation of wealth $C$ to the current production of new wealth $P$. Finally, by taking the time derivative of Eq.~(\ref{eq:PetaC}), the GWP growth rate is given by
\begin{equation}
\frac{d~\ln~P}{dt}~=~\eta~+~\frac{d~\ln~\eta}{dt}\label{eq:dlogPdt}
\end{equation}
Thus, what is normally termed ``economic growth'' (i.e.~$d{\ln}P/dt$) is the sum of the growth rate of energy consumption $\eta$ and the \textit{acceleration} of growth in energy consumption $d\ln\eta /dt$. The economic growth rate stalls if growth in energy consumption decelerates sufficiently, i.e., if $d\ln\eta/dt$ falls to $-\eta$.
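For illustration only: if energy consumption grows at $\eta$\,=\,2.0\,\% per year and this rate is itself rising at $d\ln\eta/dt$\,=\,0.5\,\% per year, then Eq.~(\ref{eq:dlogPdt}) gives GWP growth of 2.5\,\% per year; if instead $\eta$ is decaying at 2.0\,\% per year, GWP growth is zero even though energy consumption itself continues to rise.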
\section{Model validation}
The above discussion rests on an assumed constancy of the parameter $\lambda$, as it is defined through Eqs.~(\ref{eq:alamC})
and~(\ref{eq:C}) by
\begin{equation}
\lambda~\equiv~\frac{a\left(t\right)}{\int_{0}^{t}~P\left(t'\right)~dt'}~\simeq~\frac{a~\left(t\right)}{\sum_{i}~{\rm GWP}~\left(i\right)}
\label{eq:lambda}\end{equation}
To evaluate the validity of a hypothetical constancy of $\lambda$ in Eq.~(\ref{eq:lambda}), I employed statistics for world
GWP spanning more than 2000~years \citep{Maddison2003,UNstats} to calculate
wealth~$C$ from Eq.~(\ref{eq:C}). Values of~$C$ were compared to nearly four
decades of statistics for energy consumption rates~$a$ \citep{AER2009}.
Details are described in Appendix C of \citet{GarrettCO2_2009}. As illustrated in Table~\ref{tab:Measured-values}, the comparison supports
the hypothesis that the value of $\lambda$, as defined by Eq.~(\ref{eq:lambda}),
is indeed a constant that is independent of time: energy
consumption rates~$a$ and wealth $C$\,=\,$\int_{0}^{t}Pdt'$ both approximately
doubled in tandem between 1970 and 2008. On a millennial scale this time
interval is short, but it covers a tripling of GWP and more than half of
total civilization growth. The full yearly time series indicates that, during
this period, $\lambda$ maintained a mean value, with associated 95\% confidence uncertainty
in the mean, of 9.7\,$\pm$\,0.3 milliwatts per 1990\,US dollar
\citep{GarrettCO2_2009}.
A theoretically equivalent approach to calculating $\lambda$ is to take the respective derivatives of $a$ and $C$ in order to compare the inter-annual change in energy consumption rates $da/dt$ to the real GWP $P$ (Eq. \ref{eq:Palphaw}). Derivatives of measured quantities are always more noisy than their integrals. For example, the magnitude of $d\ln{a}/dt$ is only about a couple of percent per year, where $a$ itself is subject to measurement uncertainties that, while unpublished, are plausibly of a similar magnitude. Nonetheless, the calculated mean value of $P/(da/dt)$ for the 1970 to 2008 period is 11.6\,$\pm$\,4.1 milliwatts per 1990\,US dollar, which is statistically consistent with the derived value for $\lambda\equiv a/C$ of 9.7\,$\pm$\,0.3 milliwatts per 1990\,US dollar.
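As a minimal numerical sketch of this bookkeeping, the two estimates of $\lambda$ (Eq.~\ref{eq:lambda} and Eq.~\ref{eq:Palphaw}) can be computed as follows. The series below are illustrative placeholders rather than the published statistics, and all production accumulated before the sample is lumped into a single assumed constant:
\begin{verbatim}
import numpy as np

# Illustrative placeholder series, one entry per year (the published
# calculation uses GWP statistics spanning two millennia and energy
# statistics for 1970-2008):
P = np.array([30.0, 31.0, 32.2, 33.5, 34.8])  # real GWP, trillion 1990 US$/yr
a = np.array([15.2, 15.5, 15.9, 16.1, 16.4])  # energy consumption, TW

# Wealth is the accumulation of all past real production, C = int P dt';
# production accumulated before the sample is lumped into one constant.
C_pre = 1600.0                                # trillion 1990 US$, assumed
C = C_pre + np.cumsum(P)

lam_integral = np.mean(a / C)                 # lambda = a / C
lam_derivative = np.mean(np.diff(a) / P[1:])  # lambda = (da/dt) / P

# Both estimates are in TW per trillion 1990 US$, i.e. W per 1990 US$;
# multiplying by 1000 expresses them in mW per 1990 US$.
print(1000.0 * lam_integral, 1000.0 * lam_derivative)
\end{verbatim}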
This combination of theoretical and observational support for there being a fixed relationship between $C$ and $a$ is the key result supporting this study.
It serves as the basis for assuming that civilization is financially
well-mixed and that wealth is derived most fundamentally from a capacity to
enable a flow of potential energy from primary energy reserves. If it is generally correct, it enables an enormous simplification of what is required to accurately model the global
economy and its waste products. At least at a global scale, a sophisticated
IAM approach that explicitly considers people and their lifestyles is not
necessary in order to forecast future rates of energy consumption. People do
not need to be explicitly resolved in order to calculate global scale
consumptive flows.
As a note, the constancy of $\lambda$ should not be expected to hold at national scales. One country could easily be relatively wealthy compared to its current rate of energy consumption, provided that other countries are relatively poor. The value of $\lambda$ is constant only as a global quantity, where $C$ and $a$ subsume all countries that are connected through international trade.
\section{Comparison with traditional economic growth models}
The model presented here is unlike traditional models in several regards, but it also has key similarities (see also Appendix~B in \citet{GarrettCO2_2009}). Wealth~$C$ is analogous to the term ``capital'' used in traditional economic growth
frameworks in the sense that it has units of currency, rather than currency
per unit time. However, it is much more general. As shown in
Figure~\ref{fig:thermschematic}, civilization is defined as a whole, and no
distinction is made between the human and non-human elements of the global
economic system. Economic elements are not independent. Rather, all economic elements in civilization form a generalized capital that works in concert to consume primary energy reserves and enable the ``downhill'' flows of material in a field of potential energy.
Effectively, treating civilization as a whole means that it is assumed to be
internally at local thermodynamic equilibrium, homogeneous, or ``well-mixed''. This does not mean that all economic elements are equal in value (they are not), only that the speed of financial interactions between all civilization elements is sufficiently rapid compared to the
timescales of global economic growth that internal gradients can be ignored.
A consequence of treating civilization as a whole is that human labor is part of a more general expression of capital $C$. Traditional economic models
separate ``{}physical'' capital from labor as distinct motive forces of economic production
\citep{Solow1956}, sometimes including supplies of energy and raw materials in an appeal to thermodynamic constraints \citep{Saunders2000,WarrAyres2006}. Labor, capital and energy inputs are all set to exponents that are tuned to provide agreement with observed sectoral or national production statistics. Capital grows only due to ``investments'' that are separated from household and government ``consumption''. Household consumption never adds to capital. For one, people are not normally considered to be part of capital. For another, value that is consumed is presumed to be gone forever, so consumption must be subtracted from production to obtain the true capital investment.
Here, however, humans are subsumed into capital so that the production function, given by $P = \eta C$ (Eq. \ref{eq:PetaC}), is determined only by the general expression of capital used here and a variable rate of return $\eta$ that might be analogous to the ``{}total factor productivity'' employed by \citet{Solow1956}. Consequently, human consumption cannot be selectively subtracted from the production of new capital because humans are part of the whole. The component of economic production that is traditionally termed consumption is in fact an investment in sustaining and growing humanity.
That said, physically it makes most sense to refer to consumption as something that is much more extensive than what is directly tallied in economic accounts. In Figure~\ref{fig:thermschematic}, consumption is proportional to the global scale flow of primary energy resources as it passes \emph{through} civilization. This consumptive flow of matter and potential energy is downhill from high to low potential at right angles to the constant potential surface along which civilization lies. Economic production is proportional to the expansion of this potential surface. Thus, consumption and production cannot be differenced because the two quantities are mathematically orthogonal. Consumption is not a component of production, but rather production is the \emph{convergence} in thermodynamic consumption. Only if civilization as a whole consumes more energy than it dissipates can the interface expand and net economic value be produced.
An added advantage of subsuming labor into capital, where capital is fundamentally assumed to be an implicit representation of energy consumption through $a\equiv\lambda C$, is that, unlike traditional models, there is no need for any tuning of non-integer exponents in a production function. Tuning to prior data can be a useful tool of last resort. But, it has its problems because there is little guarantee that a model tuned to the past will not need retuning to be consistent with data in the future. While the physical approach discussed here may be highly unorthodox by mainstream economic standards, it does have the advantage that its absence of a tuning requirement allows it to rest on a testable, falsifiable hypothesis -- falsifiability is one of the key hallmarks of science. Either there exists a constant coefficient $\lambda$, or there doesn't. Of course, as discussed above, the constancy in $\lambda$ does appear to hold. But the point is that if this constancy ever fails, then the model presented here can be safely dismissed. Retuning is not an option.
\section{Jevons' Paradox and why efficiency gains accelerate global CO$_2$ emission rates}
Certainly, it might seem obvious that technological advances that increase energy efficiency or energy productivity (defined as $P/a$) should lead to a decrease in CO$_{2}$ emissions. Less energy is required to accomplish the same economic task. Even so, there is recognition among many economists of the existence of a {}``rebound
effect'', whereby increasing energy productivity spurs greater emissions
further down the road \citep{Dimitropoulos2007,Herring2007,Sorrell_UKERC2007}.
Two types of rebound have been identified, although in essence they
both address the issue of what happens to whatever money is saved
when energy productivity goes up. The {}``direct'' rebound effect
is limited to a particular energy service. For example, people may
choose to drive more often if a vehicle is fuel efficient, because
driving is useful or pleasurable and now more affordable. There are
also {}``indirect rebound effects'', which extend response to differing
economic sectors. Less money spent on fuel for efficient vehicles
might enable more money to be spent on fuel for home heating.
A few studies even point to an extreme form of rebound termed {}``backfire'': gains in energy efficiency lead ultimately to more rather than less energy consumption \citep{Saunders2000, Alcott2005,Owen2010, Alcott2010}. First discussion of the
principle came from an 1865 exposition on energy economics by William
Stanley Jevons \citep{Jevons1865}. Jevons was emphatic that the
introduction of energy efficient steam engines had accelerated Britain's consumption of coal. The cost of steam-powered coal extraction became cheaper and, because coal was very useful, more attractive.
While the topic has received revived attention politically \citep{HoL2006},
a general consensus on the total magnitude of the effect has proved
elusive \citep{Sorrell_UKERC2007}. One reason is that calculating
the knock-on effects from an efficiency improvement in one sector
as they propagate through the entire global economy is daunting if
not impossible. Suppose that efficient
vehicles enable added household heating through a savings in transportation costs. Then, by raising home comfort, workers sleep better so that they are better able to facilitate
resource extraction at their companies. With higher profits, the companies
then reward the workers with raises, who in turn spend the money on
goods produced overseas with coal-generated electricity. So, in this
fashion, the ramifications of any given efficiency action can multiply
indefinitely, spreading at a variety of rates throughout the global
economy. Barring global analysis of rebound effects over long time
scales, conclusions may be quantitative but uncertain, and dependent
on the time and spatial scales considered.
An easy way to address this problem is to not resolve economic flows within the global economy, but rather to take the more general approach shown in Fig. \ref{fig:thermschematic}. In this case, energy efficiency is defined only with respect to the economic capacity of civilization, taken as a whole, to grow by doing work on its surroundings, allowing it to expand into the reserves of primary energy that sustain it. The amount of net or real work that civilization does to grow itself depends on a balance between civilization's consumptive and dissipative flows. If civilization is efficient, there is a net material and energetic convergence that allows civilization to do net positive work to ``stretch'' outward its interface with respect to its primary energy supplies. If energy efficiency increases, this accelerates civilization expansion, allowing civilization to consume energy at an ever faster rate.
Expressed in analytical terms, consumption of primary energy resources at rate~$a$ enables work to be done at rate~$w$ in order to extend the material size $\breve{n}$ of the interface that drives consumptive flows. From Eq. \ref{eq:aalphaG}, work is done at rate
\begin{equation}
w~=~{\Delta}\mu\frac{d\breve{n}}{dt}~=~{\epsilon}a\label{eq:ddeltaGdt}
\end{equation}
where $\epsilon$\,=\,$w/a$ is the efficiency for the conversion of heat transfer to
work. Unlike the normal conception, where work is done only to raise the
potential of some outside agency, here work is more self-referential. Work is done by civilization to increase the size and consumptive capacity of civilization itself.
If net work is positive, then there is exponential growth in the rate of primary energy
consumption $a$. Interface expansion into new energy reservoirs creates a positive feedback loop by bootstrapping civilization into an ever more consumptive state. Combining Eqs.~(\ref{eq:aalphaG}) and~(\ref{eq:ddeltaGdt}), the rate of increase in energy consumption is related to the work done to expand the interface through
\begin{equation}
\frac{da}{dt}~=~\alpha~{\Delta}\mu\frac{d\breve{n}}{dt}~=~{\alpha}w\label{eq:dadt}
\end{equation}
where, as before, $\alpha$ is an unknown constant. Since $w={\epsilon}a$, dividing by $a$ provides an expression for the ``rate of return'' on consumption $\eta$, as defined previously in Eq. \ref{eq:rateofreturn}, that is directly proportional to energy efficiency through
\begin{equation}
\eta = \frac{1}{a}\frac{da}{dt}~=~\alpha\frac{ w}{a} = {\alpha}\epsilon\label{eq:etaefficiency}
\end{equation}
Thus, global scale increases in the energy efficiency $\epsilon$
lead to a higher rate of return $\eta$ and accelerated growth of energy consumption rates~$a$. Treated as a whole, an efficient system grows faster and consumes more.
That said, increasing energy efficiency does translate to higher prosperity. Economic production is related to the rate of return through $P=\eta C$ (Eq. \ref{eq:PetaC}), where wealth $C$ is tied to energy consumption through $a = \lambda C$ (Eq. \ref{eq:alamC}), $\lambda$ being an empirically measurable constant. It follows that, at global scales, the energy productivity $P/a$ is tied to energy efficiency $\epsilon$ through
\begin{equation}
\frac{P}{a}~=~\frac{\eta}{\lambda}~=~\frac{\alpha}{\lambda}\epsilon\label{eq:productionefficiency}
\end{equation}
The implication is that, at least for global economic systems, changes in energy efficiency and energy productivity are equivalent. Through Eq. \ref{eq:dlogPdt}, both accelerate GWP growth even if they do not in fact lead to a decrease in overall energy consumption, as is commonly assumed \citep{PacalaSocolow2004,Raupach2007}. At global scales, Jevons' Paradox holds.
The analogy here might be to a growing child, who uses the material nutrients and
potential energy in food not only to produce waste but also to grow. As the child
grows, it eats more food, accelerating its growth until it reaches adulthood
and growth stabilizes (in which case $\eta$\,$\simeq$\,0). A healthy, energy
efficient child will grow faster than one who is sick and inefficient. A
diseased child may even die (in which case $\eta$\,$<$\,0).
These conclusions have direct bearing on global scale emissions of CO$_{2}$. Just as civilization can be treated as
being well-mixed over timescales relevant to economic growth, atmospheric
concentrations of CO$_{2}$ are also well-mixed over timescales relevant to
global warming forecasts. Thus, for the purpose of relating the economy to
atmospheric CO$_{2}$ concentrations, what matters is only how fast
civilization as a whole is emitting CO$_{2}$.
CO$_{2}$ emissions are primarily a by-product of energy combustion. The emission rate $E$ is determined by the product of the global rate of energy consumption~$a$, and the carbonization of the fuel supply defined by
\begin{equation}
c~\equiv~\frac{E}{a}\label{eq:Eca}
\end{equation}
where $E$ and $a$ are
measured quantities. It follows from
Eq.~(\ref{eq:alamC}) that current rates of CO$_{2}$ emissions $E$ are fundamentally
coupled to wealth~$C$, or past economic production, through
\begin{equation}
E~=~{\lambda}cC~=~{\lambda}c~{\int_{0}^{t}~P\left(t'\right)~dt'}\label{eq:ElamcC}
\end{equation}
Drawing from statistics for CO$_{2}$ emissions from the Carbon Dioxide Information
Analysis Center \citep{Marland2007}, Table~\ref{tab:Measured-values} shows
that, like $a$ and $C$, CO$_{2}$ emissions $E$ have approximately doubled
between 1970 and 2008. Meanwhile, the value ${\lambda}c$\,=\,$E/C$ has stayed
approximately constant. Its mean value (and uncertainty in the mean) taken
from the entire yearly time series is 2.42\,$\pm$\,0.02\,ppmv\,atmospheric equivalent CO$_{2}$ per year, per thousand trillion 1990\,US dollars of global wealth.
Note that, unlike $\lambda$, the carbonization~$c$ is not a fundamental
property of the economic system within this framework. At least in principle,
it could be more variable in the future than it has been in the recent past.
Combining Eqs.~(\ref{eq:rateofreturn}) and~(\ref{eq:ElamcC}), emission rates grow
at rate that is determined by the growth rate of wealth and the rate of
change of carbonization
\begin{equation}
\frac{d~\ln~E}{dt}~=~\frac{d~\ln~C}{dt}~+~\frac{d~\ln~c}{dt}~=~\eta~+~\frac{d~\ln~c}{dt}\label{eq:dlnEdt}
\end{equation}
The implication is that, if technological changes allow energy productivity or energy efficiency to increase, then the rate of return $\eta$ increases and CO$_2$ emissions accelerate. This is unless decarbonization is as rapid as the rate of growth of wealth $\eta$. If so, then emission rates~$E$ can be stabilized. If, however, the carbonization~$c$
stays approximately constant, then CO$_{2}$ emission rates~$E$ will remain
fundamentally linked to global economic wealth~$C$ through the constant value of 2.42\,$\pm$\,0.02\,ppmv of CO$_{2}$ emitted per year, per thousand trillion 1990\,US dollars of current wealth. It can only be through an economic collapse that CO$_{2}$ emission rates will decline.
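As a rough consistency check on these numbers: with global wealth of about 1.66 thousand trillion 1990\,US dollars in 2008, Eq.~(\ref{eq:ElamcC}) implies $E\simeq 2.42\times 1.66\simeq 4.0$\,ppmv equivalent CO$_{2}$ per year or, at 2.13\,Pg of carbon per ppmv, roughly 8.5\,Pg of emitted carbon per year, consistent in magnitude with reported global emission rates for that year.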
\section{Environmentally driven economic decay}
\subsection{Accounting of inflation and decay\label{economicdecay}}
The broadest available measure of the inflation rate is the so-called GDP deflator, which is calculated from the year-on-year increase in the prices of a very broad basket of consumer and industrial goods. Effectively, the gross domestic product becomes devalued
by some inflationary fraction~$i$ that makes the ``real'',
inflation-adjusted GDP less than its ``nominal'' value. Expressed for the world as a whole
\begin{equation}
i~=~\frac{{\rm nominal}~-~{\rm real}}{\rm nominal}~=~\frac{\hat{\rm GWP}~-~{\rm GWP}}{\hat{\rm GWP}}\label{eq:delta}
\end{equation}
While there have been a wide variety of theoretical explanations for
what drives inflation, the field is fluid and none have been solidly rejected \citep{Parkin2008}. Price inflation is traditionally viewed as a simple imbalance between the monetary supply and economic output, and therefore mostly a matter for central bank control. What is less clear is why high inflation appears to have a negative effect on inflation-adjusted economic growth \citep{Sarel1996}. There are also external forces that can create the initial inflationary pressure, such as an external shock to primary energy supplies \citep{Bernanke1997}, and even climate change, which drives up food prices through adverse effects on crop production \citep{Lobell2011}.
From the perspective of the model presented here, inflationary pressures can arise from either decreasing energy availability or increasing environmental disasters. This can be assessed because the real value or wealth of civilization is fixed to its current capacity to consume primary energy resources through the constant coefficient $\lambda$, which has a value of 9.7$\pm$0.3 milliwatts per inflation adjusted 1990 dollar: in 2008, 16.4 TW of power supported 1656 trillion 1990 US dollars of civilization worth. For interpreting inflation, this coefficient provides an anchor for assessing real economic worth, at least for civilization as a whole.
Supposing that natural disasters destroy the capacity of life and property to consume energy, civilization's real value declines while plausibly keeping the availability of currency largely intact. Alternatively, while banks do not actively destroy civilization's capacity to consume energy, they might be excessively loose with currency. If so, the real currency value attached to the existing capacity to consume energy becomes diluted by an excessive availability of new currency, while real wealth stays fixed. Whether banks or climate extremes initiate the action, in either case, inflation should be expected to follow as a consequence of an introduced imbalance between real and nominal value. The availability of currency becomes out of proportion with the true capacity of civilization to consume primary energy supplies.
Real, inflation-adjusted wealth has been defined here by $C$\,=\,$\int_{0}^{t}Pdt'$
(Eq.~\ref{eq:C}) or equivalently, by the instantaneous function \textit{dC/dt}\,$\equiv$\,$P$
(Eq.~\ref{eq:dCdtP}), where $P$ is the inflation-adjusted production.
Here, in effect, all production is a differential addition to a
generalized measure of wealth, provided it is adjusted for inflation. This adjustment to the nominal (non-inflation-adjusted) production of
wealth $\hat{P}$ can be expressed as a sink of existing wealth
${\gamma}C$, where $\gamma$ represents the rate at which existing wealth is being destroyed or lost due to natural decay
\citep{GarrettCO2_2009}
\begin{equation}
\frac{dC}{dt}~\equiv~P~=~\hat{P}~-~{\gamma}C\label{eq:sinksource}
\end{equation}
Thus, the rate of decay is simply
\begin{equation}
\gamma~\equiv~\frac{\hat{P}~-~P}{C}~=~\frac{\hat{P}~-~P}{\int_{0}^{t}Pdt'}\label{eq:gamma}
\end{equation}
Similarly, the rate $\beta$ at which wealth~$C$ leads to nominal production
$\hat{P}$ can be defined by
\begin{equation}
\beta~\equiv~\frac{\hat{P}}{C}~=~\frac{\hat{P}}{\int_{0}^{t}Pdt'}\label{eq:beta}
\end{equation}
In this case, from Eq.~(\ref{eq:sinksource}), the growth of wealth can be
expressed as a balance between a source and a sink of wealth
\begin{equation}
\frac{dC}{dt}~=~\left(\beta~-~\gamma\right)~C\label{eq:dCdt}
\end{equation}
This is just an alternative expression for Eq.~(\ref{eq:PetaC}) with the rate of return
on wealth $\eta$ replaced by the difference between the coefficient of
nominal production $\beta$ and the coefficient of decay $\gamma$
\begin{equation}
\eta~=~\beta~-~\gamma\label{eq:eta}
\end{equation}
The advantage of applying this treatment is that it leads to a very simple expression for
an inflationary pressure~$i$ in Eq.~(\ref{eq:delta})
\begin{equation}
i~=~\frac{\int_{t}^{t+{\Delta}t}~\left(\hat{P}~-~P\right)~dt'}{\int_{t}^{t+\Delta
t}\hat{P}~dt'}~=~\frac{\int_{t}^{t+{\Delta}t}~\gamma~Cdt'}{\int_{t}^{t+\Delta{t}}~\beta~Cdt'}
~=~\frac{\left\langle\gamma\right\rangle}{\left\langle\beta\right\rangle}\label{eq:inflation}
\end{equation}
where brackets imply a mean over the time interval of calculation
${\Delta}t$, which is normally one year. Inflation is determined by
the balance between the coefficients $\beta$ and $\gamma$ of production and decay.\footnote{In practice, statistics for nominal and real GWP are normally
provided in current and fixed-year currency, respectively, and therefore are
in different units. Thus, for a given time period ${\Delta}t$ (say one year),
$\gamma$ can be calculated from differences in the logarithmic rate of
expansion for $\hat{P}$ and $P$, noting that $\ln\left(1+x\right)$\,$\simeq$\,$x$
\[
\gamma~=~\frac{\hat{P}~-~P}{C}~\simeq~\frac{P}{C}~\left[\frac{1}{P}~\frac{d\left(\hat{P}~-~P\right)}{dt}\right]~{\Delta}t~\simeq~\frac{P}{C}~\frac{d~\ln~\left(\hat{P}/P\right)}{dt}~{\Delta}t
\]
Effectively $\left[d~\ln~\left(\hat{P}/P\right)/dt\right]{\Delta}t$ is the
fractional inflation~$i$ over period ${\Delta}t$. Then, since
$\eta$\,=\,$P/C$, it follows that $\gamma$\,=\,$i\eta$ and $\beta$\,=\,$\eta$\,+\,$\gamma$\,=\,$\left(1+i\right)\eta$.}
If ${\Delta}t$ is one year, then the quantity $i{\Delta}t$ represents the difference between nominal
and real GWP.
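By way of illustration, with round numbers chosen purely for arithmetic convenience: if $\left\langle\beta\right\rangle$\,=\,2.5\,\% per year and $\left\langle\gamma\right\rangle$\,=\,0.25\,\% per year, then Eq.~(\ref{eq:inflation}) implies an inflationary pressure of $i$\,=\,10\,\%, while the rate of return is $\eta$\,=\,$\beta-\gamma$\,=\,2.25\,\% per year.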
If the coefficient of decay becomes greater than the coefficient of
production, such that $\gamma$\,$>$\,$\beta$, then from Eq.~(\ref{eq:inflation}),
\textit{nominal} production $\hat{P}$ may be positive, but \textit{real}
production~$P$ is negative. Discussing negative real production would seem
unusual (or impossible) from a traditional economic perspective that is geared towards modeling growth. From the
more physical framework described here, it is simply a consequence of
environmentally driven decay being so large that there are economic hyper-inflationary pressures associated with a rate $i$\,=\,$\gamma/\beta$ that is greater than 100\%. Historically, and on more regional levels, this is in fact
a fairly common scenario. From Eq.~(\ref{eq:sinksource}), \textit{dC/dt}\,$<$\,0, and total wealth is in a state of collapse.
As discussed in Appendix A, hyper-inflation and collapse can be viewed thermodynamically as an interface between civilization and its energy reserves that is shrinking inwards rather than growing outwards. This means that the nominal work ${\int_{t}^{t+\Delta
t}\hat{w}~dt'}$ that is done to grow civilization is overwhelmed by external work done on civilization through decay. Real or net work done to grow civilization $\int_{t}^{t+\Delta{t}}{w}~dt'$ turns negative and civilization enjoys no return on its energetic investment. As a whole, civilization becomes less wealthy as it becomes less able to consume primary energy reserves.
A related concept is termed Energy Returned on Energy Invested (or EROI), and is becoming increasingly popular as a metric of society's capacity to exploit primary energy reserves for economic benefit \citep{Murphy2010}. Evidence suggests that the value of EROI is declining, presumably as new energy reserves become increasingly difficult to access. In Appendix A it is shown that a direct link can be drawn between the EROI concept and inflation (or the GDP deflator) discussed here. At global scales, the value of EROI is equal to the inverse of the inflation rate.
\subsection{Inflationary pressures and civilization resilience}
The IPCC Working Group~II
\citep{IPCC_WG22007} has identified potential societal damages due to climate ``extremes'', such as droughts and floods, and ``means'', such as
sea-level rise. These will exert a negative feedback on civilization wealth
such that, at some point, wealth and atmospheric CO$_{2}$ become
intrinsically coupled because civilization is no longer able to consume and emit as it has in the past.
Based on the above arguments, it is easy to see how natural disasters are
often expected to be inflationary since they represent an increase in the work done by the environment \emph{on} civilization. If the decay coefficient $\gamma$ suddenly
rises, then from Eq.~(\ref{eq:sinksource}), this expands the difference between
nominal and real production. From Eq.~(\ref{eq:inflation}), the shock
leads to inflation and less capacity to consume energy and emit CO$_{2}$.
An important point here is that, for inflationary pressures to take
hold, there must be an increase not just in total damages ${\gamma}C$, but in
the \textit{coefficient} of decay $\gamma$. Hurricane
damages along the Atlantic seaboard have risen over the past century, but not because of a long-term increase in hurricane intensity or frequency (i.e., $\gamma$). Rather, economic wealth $C$ has become increasingly concentrated at the coasts \citep{Pielke_hurr2008}.
What seems reasonable is to expect that the decay rate $\gamma$ will in fact
change over coming decades due to the increasingly harmful effects of
global warming. Impacts will be regionally specific, but extremely difficult
to predict. In light of this, the approach taken here is to simplify
representation of the global economic impacts of climate change by defining a
global economic ``resilience'' to a logarithmic increase in atmospheric
CO$_{2}$ concentrations
\begin{equation}
\rho~=~1/\left(d\gamma/d~\ln~\left[\mathcal{\rm CO}_{2}\right]\right)\label{eq:resilience}
\end{equation}
If civilization's resilience is high, then the coefficient of decay $\gamma$
responds weakly to logarithmically changing CO$_{2}$ levels.\footnote{The
logarithm of CO$_{2}$ concentrations is considered because the primary
insulating gases responsible for climate warming, namely CO$_{2}$ and water
vapor, have a longwave absorptance that varies roughly as the square root of
their concentrations \citep{Liou-book}.}
There have been estimates of the regional societal and economic impacts from extremes in
climate \citep{Patz2005,Leckebusch2007,Hsiang2011}. Unfortunately, it is not entirely obvious how to appropriately scale these impacts to civilization as a whole when many of the
effects of climate change will be sustained, global, and largely unprecedented. Recent
statistics do not yet provide meaningful guidance either. Figures~\ref{fig:etabetagamma}
and~\ref{fig:etabetagammaC} show no clear trends in the decay coefficient $\gamma$ that can easily be attributed to accelerating climate change. Up until this point, the dominant signature in $\gamma$ is only its inter-annual variability. A recent meta-analysis of disaster losses has arrived at a similar conclusion \citep{Bouwer2011}.
The hypothesis that is proposed here is that the effect on society of
elevated levels of atmospheric CO$_{2}$ will be akin to a prolonged natural
disaster. From the standpoint of the economic model discussed above, the
effect will be to steadily increase the coefficient of decay $\gamma$ without
changing the coefficient of nominal production $\beta$. From Eq.~(\ref{eq:inflation}),
this will appear economically as an inflationary pressure
that impedes the growth in wealth $C$, as described by Eq.~(\ref{eq:dCdt}). In
a phase space of $\left[{\rm CO}_{2}\right]$ and $P$, the trajectory of
civilization will depend on the resilience $\rho$ of civilization to elevated
carbon dioxide levels: it is our resilience that will determine the strength
of climate's negative feedback on economic growth.
\section{The Climate and Thermodynamics Economic Response Model~(CThERM)}
To explore the coupling between civilization and the atmosphere, the following section introduces a very simple framework for forecasting the
evolution of civilization in a phase space of $\left[{\rm CO}_{2}\right]$
and $P$, for a variety of assumed values of resilience $\rho$. The Climate and
Thermodynamics Economic Response Model (CThERM) couples a prognostic economic
model to atmospheric CO$_{2}$ concentrations, as illustrated in Fig.~\ref{fig:EEaSM}.
The prognostic economic module has just three coupled
dynamic equations for wealth~$C$, atmospheric CO$_{2}$ concentrations
{[}CO$_{2}${]}, and the rate of return $\eta$. From Eq.~(\ref{eq:rateofreturn}), wealth grows at rate
\begin{equation}
\frac{dC}{dt}~=~{\eta}C\label{eq:dlnCdt}
\end{equation}
The balance between anthropogenic emissions $E$\,=\,${\lambda}cC$ (Eq.~\ref{eq:ElamcC})
and natural sinks is
\begin{equation}
\frac{d~\left[{\rm CO}_{2}\right]}{dt}~=~E~-~\sigma~\Delta~\left[{\rm CO}_{2}\right]\label{eq:ddCO2dt}
\end{equation}
where $\sigma$ is an assumed linear
sink rate on perturbations $\Delta\left[{\rm CO}_{2}\right]$\,=\,$\left[{\rm CO}_{2}\right]-\left[{\rm CO}_{2}\right]_{0}$ above some
preindustrial baseline. For convenience, here it is assumed that the CO$_{2}$
emissions are instantly diluted in the total atmospheric mass
\citep{Trenberth1981} such that 1\,ppmv\,atmospheric CO$_{2}$\,=\,2.13\,Pg emitted carbon.
Thus $c$ is expressed in units of ppmv atmospheric CO$_{2}$ emitted by
civilization per Joule of energy consumption.
The modeling approach taken here is deliberately the simplest possible.
In reality, the carbon cycle is much more complicated
than can be easily justified by a linear sink model
\citep{Cox2000,Canadell2007}. That said, even the current magnitude of the
CO$_{2}$ sink is not well constrained \citep{LeQuere2003}. Given current
uncertainties, assuming a linear sink that is in line with current
observations appears to provide long-range forecasts of [CO$_{2}$]
that are in good agreement with far more sophisticated models. More detailed
discussion is presented in Sect.~\ref{sec:SRES} and Appendix~C.
From Eqs.~(\ref{eq:eta}) and~(\ref{eq:resilience}), the rate of return $\eta$
changes at a rate given by
\begin{equation}
\frac{d\eta}{dt}~=~\frac{d\beta}{dt}~-~\frac{1}{\rho}~\frac{d~\ln~\left[{\rm CO}_{2}\right]}{dt}\label{eq:detadt}
\end{equation}
Model trajectories in wealth~$C$ and atmospheric carbon dioxide concentration
evolve subject to initial conditions in {[}CO$_{2}${]}, $C$, $\beta$
and $\gamma$. Note that global production~$P$ is a diagnostic quantity
given by Eq.~(\ref{eq:PetaC}).
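For readers who wish to experiment with the model, the three prognostic equations can
be integrated with a simple forward-Euler step. The following minimal Python sketch
assumes fixed, illustrative parameter values; they are placeholders, not the calibrated
inputs behind the figures:
\begin{verbatim}
# Minimal forward-Euler sketch of the three CThERM prognostic
# equations; all numerical values are illustrative placeholders.
lam   = 9.7e-3    # W per 1990 US dollar (lambda)
c     = 7.8e-21   # ppmv CO2 emitted per Joule consumed
sigma = 0.0155    # linear CO2 sink rate, 1/yr
rho   = 100.0     # resilience, yr (illustrative)
dbeta = 1.7e-4    # assumed trend d(beta)/dt, 1/yr per year
CO2_0 = 275.0     # preindustrial baseline, ppmv
YEAR  = 3.156e7   # seconds per year

C, CO2, eta = 1.65e15, 385.0, 0.022  # wealth ($), ppmv, 1/yr
dt = 0.25                            # time step, years
for _ in range(int(92 / dt)):        # roughly 2008 -> 2100
    E    = lam * c * C * YEAR        # emissions, ppmv/yr
    dCO2 = E - sigma * (CO2 - CO2_0)
    deta = dbeta - (dCO2 / CO2) / rho
    C   += eta * C * dt
    CO2 += dCO2 * dt
    eta += deta * dt

print(CO2, eta * C)  # [CO2] (ppmv) and diagnostic P = eta*C
\end{verbatim}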
The prognostic CThERM model expressed by Eqs.~(\ref{eq:dlnCdt})
to~(\ref{eq:detadt}) is incomplete because it lacks prognostic equations for the
carbonization of the world's wealth $c$\,=\,$E/\left({\lambda}C\right)$
(Eq.~\ref{eq:ElamcC}) and the coefficient of nominal production $\beta$\,=\,$\hat{P}/C$
(Eq.~\ref{eq:beta}). A more sophisticated model will need to address the
evolution of these terms.\footnote{In principle, the evolution of $\beta$ is
governed by two factors, as illustrated in Fig.~\ref{fig:thermschematic}. As
civilization or any other system grows, it depletes known available energy
reservoirs; at the same time, it expands into new reservoirs that were
previously unavailable or unknown. Past bursts in growth in
$\eta$\,=\,$\beta-\gamma$ are seen to have occurred around 1880 and 1950, perhaps
due to a sudden increase in availability of important new oil reservoirs
\citep{GarrettCO2_2009}. Presumably the future availability of energy
reservoirs will influence the value of~$c$ as well \citep{Sorrell2010}.}
A hindcast simulation that illustrates the accuracy of the model framework
is shown in Fig.~\ref{fig:hindcast}. The hindcast is initialized
in 1985 and, based on results shown in Fig.~\ref{fig:etabetagamma},
it is assumed that $d\gamma$/\textit{dt}\,=\,0 and that $d\beta$/\textit{dt} evolves on
a linear trajectory that is consistent with what is observed for the
period between 1970 and 1984. A linear fit for $d\beta$/\textit{dt} during this initialization time period is 0.017\%\,yr$^{-1}$ per year with a 95\% confidence limit of
$\pm$0.01\%\,yr$^{-1}$ per year. A second source of uncertainty is associated
with the CO$_{2}$ sink coefficient $\sigma$, which is estimated
to have a value of 1.55\,$\pm$\,0.75\,\%\,yr$^{-1}$ (Appendix~B).
Figure~\ref{fig:hindcast} shows that, with these assumptions, the mid-range of
hindcasts over a 23~year period between 1985 and 2008 faithfully reproduces
both the timing and magnitude of observed changes in atmospheric CO$_{2}$
concentrations and global economic production~$P$. The implication is that,
even though the model that is used is extremely simple, it is nonetheless
able to make accurate multi-decadal forecasts for the coupled growth of the global
economy and atmospheric composition. Furthermore, it suggests some ability of
the model to explore thermodynamically constrained forecasts in a space of
$P$ and {[}CO$_{2}${]} for a range of hypothetical values of civilization
resilience $\rho$ and decarbonization rates $-d~\ln~c$/\textit{dt}.
As discussed previously, there is no good guidance yet for what a suitable
choice for the resilience $\rho$ might be, and no prognostic model is
included here for forecasting the evolution of either carbonization~$c$ or
the nominal productivity coefficient $\beta$. Thus, while the CThERM model is
thermodynamically constrained, it can still only provide forecasts for a
range of hypothetical scenarios in these parameters. In what follows, two
main categories of scenarios are considered.
\subsection{Forecast scenario~A: no decarbonization}
The first scenario that is considered here is simply to assume that for the
remainder of this century, there will be no further decarbonization, and that the
coefficient of nominal production will remain stagnant (i.e., $dc$/\textit{dt}\,=\,0 and
$d\beta$/\textit{dt}\,=\,0). Figure~\ref{fig:trajectories} shows examples of forecasts for
these conditions for the years between 2009 and 2100. Also shown for
historical reference are past measurements between 1\,AD and 2008 (Appendix~C).
For this scenario, a range of resilience sub-scenarios can be considered. If
civilization is so resilient that it is unaffected by elevated CO$_{2}$
levels, then the world economy $P$ sustains recent growth rates of 2.2\% per
year. By 2100, it increases by nearly an order of magnitude to a value of
nearly 300~trillion 1990\,dollars per year. The accumulated production of
wealth $C$\,$\equiv$\,$\int_{0}^{2100}Pdt'$ corresponds to an increase in rates of
energy consumption $a$\,=\,${\lambda}C$ from 16\,TW in 2008 to 126\,TW in year 2100.
Absent any decarbonization, the accumulated and accelerating emissions push
CO$_{2}$ levels above 1100\,ppmv.
If, however, civilization has an extremely low resilience to elevated
CO$_{2}$ levels, then the decay coefficient $\gamma$ increases by 5\%\,yr$^{-1}$
per CO$_{2}$ doubling. Eventually, the decay coefficient exceeds
the coefficient of nominal production $\beta$. In this case, economic
production shrinks due to the impacts of climate change. Well before the year
2100, the inflationary pressure exceeds 100\%: real GDP is negative and
civilization is in a phase of collapse. However, even in this scenario,
energy consumption rates peak at 89\,TW in 2056 and although they fall to 21\,TW
in year 2100, they still exceed current levels. Because rates of energy
consumption remain high, even with rapid and immediate civilization collapse,
CO$_{2}$ levels still continue their rise to approximately 600\,ppmv by year
2100.
What is perhaps most striking when looking at these forecasts is that we can
expect some extraordinarily rapid near-term changes in the global economy
and atmospheric composition. For any plausible resilience condition,
atmospheric CO$_{2}$ concentrations and civilization GWP will change by as much
in the next 40~years as they have in the past two thousand.
\subsection{Forecast scenario~B: rapid decarbonization}
Although there is no evidence that civilization is in fact decarbonizing
\citep{Raupach2007}, one can imagine for the sake of illustration a second
forecast scenario shown in Fig.~\ref{fig:trajectories_decarbon} in which
$\beta$ stays constant, but the carbonization of civilization $c$ drops
extremely rapidly. Supposing that carbonization $c$ halves in just 50 years,
the value of~$c$ ends up 73\% lower in 2100 than it is at present. This is
highly idealized, of course. If nothing else, no consideration is made here
of the costs of decarbonizing that would be involved. These would presumably
act to lower $\beta$ and be an inflationary pressure themselves (Eq.~\ref{eq:inflation}).
However, it is worth considering because, for one, it
illustrates how extremely rapid decarbonization would need to be to lower
CO$_{2}$ concentrations to something that only moderately exceeds the
450\,ppmv levels that might be considered to be ``dangerous''
\citep{HansenDangerous2007}. If civilization's resilience to climate change
is extremely low, then only a combination of rapid civilization collapse and
high decarbonization comes close to achieving a 450\,ppmv goal. Otherwise, if
civilization's resilience to climate change is extremely high, then emissions
increase from 3.95\,ppmv equivalent per year in 2008 to 8.64\,ppmv per year in
2100.
The reason why even rapid decarbonization still corresponds to increasing
emission rates is that it has the side benefit of aiding economic
health and growth. By slowing growth in CO$_{2}$ concentrations,
the worst impacts of future climate change are constrained. Energy
consumption is fundamentally linked to the size of civilization through
the constant $\lambda$ (Eq.~\ref{eq:lambda}). Thus, any improvement
to economic wealth corresponds to increased energy consumption and
more rapid growth in CO$_{2}$ emissions (Eq.~\ref{eq:dlnEdt}).
It is counter-intuitive, but comparing two scenarios with very low resilience
to climate change, energy consumption rates rise about twice as fast with
rapid decarbonization as with no decarbonization. The reason is that
decarbonization aids society health by limiting global warming. Better health means greater energy
consumption, which then leads to a partial offset of any environmental gains that came
from decarbonizing in the first place.
\vspace*{2mm}
\subsection{Comparison with SRES scenarios\label{sec:SRES}}
\vspace*{2mm}
Figures~\ref{fig:trajectories} and~\ref{fig:trajectories_decarbon} include for
comparison's sake the phase space of $P$ and CO$_{2}$ concentrations that
are employed in several well-known IPCC Special Report on Emissions Scenarios
(SRES) illustrative marker scenarios \citep{IPCC_WG12007, Manning2010}. These scenarios
provide statistics through 2100 for global GWP in 1990\,MER~US dollars along
with global CO$_{2}$ emission rates from fossil fuel combustion. For the sake
of consistency with CThERM calculations, atmospheric CO$_{2}$ concentrations
are calculated from the second CThERM equation given by Eq.~(\ref{eq:ddCO2dt}).
Across the scenarios, calculated trajectories in CO$_{2}$ concentration
perturbations are lower than those presented in the IPCC Third Report for the
same emission rates, but calculated using the sophisticated ``Bern'' carbon
cycle model \citep{Joos1996}. Part of this discrepancy may be because no
consideration is made for the small additional perturbations in anthropogenic
CO$_{2}$ emissions that come from future non-fossil fuel sources. But also, no
account is made for possible future saturation of CO$_{2}$ sinks
\citep{LeQuere2007}. Regardless, the agreement is still sufficiently
favorable to support using the extremely simple CO$_{2}$ sink model in
Eq.~(\ref{eq:ddCO2dt}) as an accessible, if conservative, substitute for the more
sophisticated approaches used by the IPCC.
The comparisons between the CThERM and SRES scenarios are grouped according
to whether or not decarbonization is included in the forecasts. CThERM
trajectories in Fig.~\ref{fig:trajectories} include no decarbonization, and
are paired with the A1FI~and A2~scenarios. These two SRES storylines are both
based on a fossil-fuel reliant economy, while A1FI has faster economic
growth. For contrast, the CThERM trajectories in Fig.~\ref{fig:trajectories_decarbon}
do include decarbonization, and are paired
with the A1T, B1~and B2~scenarios. These storylines all include a switch to
less carbon intensive fuels, but with a range of speeds of economic
development.
Regardless of the precise scenario that is considered, there is a basic
difference between the CThERM forecasts and the SRES scenarios. Each SRES
scenario greatly underestimates how much atmospheric CO$_{2}$ concentrations
will rise for a given level of global GWP. Or, put another way, SRES
scenarios produce an unphysical overestimate of the wealth society can have
while maintaining CO$_{2}$ levels below some nominal threshold. For example,
the ``environmentally sustainable'' B1~scenario suggests that a CO$_{2}$
level below 500\,ppmv is plausible by the end of this century, while
maintaining a GWP of 360\,trillion 1990\,US dollars per year. The CThERM
results suggest that this combination simply cannot happen because, even with
rapid decarbonization, sustaining this level of economic activity would
require too much energy consumption. It is only with rapid decarbonization
and civilization collapse that such CO$_{2}$ concentrations can be attained.
Perhaps the basic reason that there is a mismatch between the CThERM and SRES
scenarios is that the SRES scenarios are based on an assumption that
increases in energy efficiency will lower the amount of CO$_{2}$ emitted for a
given amount of economic activity. The thermodynamic and observational
analysis described here and in \citet{GarrettCO2_2009} indicates that the opposite should be expected to hold. From Eq.~(\ref{eq:etaefficiency}),
gains in efficiency $\epsilon$ accelerate CO$_{2}$ emissions by accelerating
civilization's capacity to access primary energy reservoirs. While increasing efficiency may also lead to a higher GWP (Eq.~\ref{eq:productionefficiency}), feedbacks in the economic system make it impossible to decouple energy consumption from economic well-being.
\conclusions
This study builds on a key result presented in a prior article
\citep{GarrettCO2_2009}, that civilization wealth and global rates of primary
energy consumption are tied through a constant value of $\lambda$\,=\,9.7\,$\pm$\,0.3\,mW
per 1990\,US dollar. On this basis, a very simple prognostic
model (CThERM) is introduced for forecasting the coupled evolution of the
economy and atmospheric CO$_{2}$ concentrations. While the model in its basic
form has just three prognostic equations, it nonetheless provides accurate
multi-decadal hindcasts for global economic production and atmospheric
concentrations of CO$_{2}$.
The much more sophisticated formulations commonly used in Integrated
Assessment Models can have hundreds of equations. In part this is required to
forecast regional variations of specific societal indicators such as
population or standard of living. The argument made here and in
\citet{GarrettCO2_2009} is that, at the global scales relevant to atmospheric
composition, such complexity is largely unnecessary. Both the global economy
and atmospheric CO$_{2}$ can be considered to be ``well-mixed'', and they
both are constrained by the global rate of primary energy
consumption.
One implication of this result is that global warming should
be expected to manifest itself economically as a growing gap between the nominal and inflation-adjusted GWP. Environmental pressures erode a material interface
that enables civilization to consume the primary energy resources it
requires. Normally, this erosion is more than offset by increasing access to
primary energy reservoirs; in fact, it is an increasing access to energy
supplies that has enabled a positive (and growing) inflation-adjusted gross
world product, and has led to the generally high standard of living we enjoy
today. However, in a global warming scenario, it can be expected that
environmental pressures will increase, and these will act to slow growth
in energy consumption. Fiscally, this will appear as an inflationary drag on
the growth of economic wealth. Ultimately, it is conceivable that it will push civilization towards
an accelerating decline.
Another implication is that the commonly used IPCC SRES
scenarios make unphysical underestimates of how much energy will need to be consumed, and CO$_{2}$ emitted, to sustain prosperity growth. At the globally relevant scales, energy efficiency gains accelerate rather than reduce growth in energy consumption. They do this by promoting civilization health and its economic capacity to expand into the energy reserves that sustain it.
Reductions in CO$_2$ emissions can be achieved by decarbonizing civilization's sources of fuel, but with an important caveat: decarbonization does not slow CO$_{2}$ accumulation by as much as might be anticipated, precisely because it alleviates the potential rise in atmospheric CO$_{2}$ concentrations. If decarbonization leads to fewer climate extremes, then
economic wealth is supported, and because wealth is tied to energy consumption through a constant, consumptive growth partly offsets the anticipated CO$_{2}$ emission reductions. Ultimately, civilization appears
to be in a double-bind with no obvious way out. Only a combination of
extremely rapid decarbonization and civilization collapse will enable
CO$_{2}$ concentrations to be stabilized below the 450\,ppmv
level that might be considered as ``dangerous'' \citep{HansenDangerous2007}.
\section{ Introduction }\label{introd}
The manipulation of small scale systems is a key feature of quantum technologies
and their quantum behavior is an incontestable mark of the success of quantum mechanics.
Such control is an important tool to reveal the potential of quantum concepts
from a practical point of view as well.
In this context,
one may mention interacting nanoelectromechanical systems \cite{buks,eisert}, in which
the harmonic motion of the elements can be used to harness the power of continuous
variables (position, momentum, etc.) for quantum information purposes.
Another important example of a scalable system for exploration of quantum dynamics is
the one consisting of trapped ions whose positions are coupled through dipole-dipole
interactions \cite{bermudez}.
In both cases, one can end up with a practical implementation of a network of
interacting harmonic oscillators that are encompassed in the object of study
of this paper.
Advances in experimental implementations of oscillator networks in the context of
optomechanics \cite{lin, massel,shkarin} may also be mentioned as potential candidates
for implementation of the general results discussed here.
Due to their prominent role in physics in general, and in quantum technologies
in particular, networks of coupled harmonic oscillators are a
timely topic of interest.
In the context of entanglement, for instance, the characterization of
equilibrium states was studied in \cite{audenaert}.
On the other hand, entanglement dynamics was the subject treated in \cite{plenio2004}.
Concerning favorable conditions for entanglement propagation,
in \cite{plenio2005}, a clever scheme of minimal adjustments of
frequencies and coupling constants is developed,
which enables highly efficient transfer of entanglement through a linear
chain (first neighbor interactions) of coupled harmonic oscillators.
Starting from a purely numerical study,
they also found a simplified Hamiltonian model (no thermal baths) which allowed
analytical progress in the understanding of the high efficiency.
This was achieved by using the rationale of the rotating wave approximation (RWA), {\it i.e.},
the elimination of fast oscillating terms in the interaction picture Hamiltonian when
they do not contribute appreciably to the dynamics.
Our goal is to expand such an idea of frequency and coupling constants adjustments in
many different directions, not explored in \cite{plenio2005}.
We present here conditions for obtaining simplified reduced models for general
configurations or topologies of quadratically coupled systems
and, more importantly, we work within the formalism of open quantum systems, allowing us
to include the presence of surrounding environments.
The contribution here allows one to envision applications of our general results to the
study of thermal properties
\cite{assadian,martinez,freitas,nicacio2015},
non-classical properties \cite{leandro}
or non-equilibrium thermodynamics \cite{oscillators_th} in harmonic systems;
all of them examples of timely topics of research.
In this work, we show that the indirect or dynamical coupling between two distinct
oscillators mediated by a general network, the latter with an arbitrary number of degrees
of freedom and topology, can be effectively described by a simplified model containing
only a few degrees of freedom, provided the RWA rationale used in \cite{plenio2005} is
generalized.
The first step of the method comprises the diagonalization of the Hamiltonian
of the network, which reveals its normal modes.
It is here that the symplectic formalism is needed %
\cite{nicacio2010,kyoko,ozorio,littlejohn,gosson}.
Using this tool, we can provide a case-independent diagonalization, {\it i.e.},
a general procedure without specification {\it{a priori}} of the topology of the network.
The next critical point is to carefully understand how the distinct external
oscillators couple to the normal modes of the network, and this is highly dependent
on resonances between normal mode frequencies of the network and the natural frequencies
of the external oscillators.
When dealing here with topologies which are more complex than the linear chain
treated in \cite{plenio2005}, we must take into account possible degeneracies in the
frequencies of the normal modes in order to find the correct effective simplified model.
Adding to that, another distinctive feature of our work is the inclusion of an environment
in the dynamics.
In this respect, we
extend the unitary description in \cite{plenio2005} to a non-unitary open system
treatment of the dynamics following a Lindblad master equation (LME).
Obtaining an effective description in terms of just a few degrees of freedom when
thermal baths are present is not a trivial task. However, under certain structural
conditions, we were able to successfully perform such a simplification as discussed
in this work.
The designed methodology is suitable for the study of communication and transport across
the network and, as an illustration or application, we explore here the phenomenon of
energy transfer between the external oscillators mediated by the network.
The paper is organized as follows.
In section \ref{tsid}, we describe the system of interest, namely two external
oscillators coupled to a general network of oscillators.
We add also the presence of thermal baths.
Notation in the scope of the formalism of continuous variables and the dynamical
equations for the open system dynamics are also presented in this section.
The development of the method to obtain simplified models for the dynamics of the
external oscillators is presented in Sec.~\ref{ed}.
There,
we present the mathematical tools of the symplectic
formalism needed to perform the diagonalization of the network Hamiltonian
and to obtain its normal modes.
We also present conditions under which an RWA for the open system can be performed,
which enable us to drastically simplify the system dynamics description to a picture
with just a few degrees of freedom only.
Sec.~\ref{elc} is devoted to an example of the previous simplified description,
a linear chain.
In the context of quantum technologies and the usefulness of the simplified descriptions,
we will study the propagation of energy from a quantum system to another
through a quantum bus in Sec.~\ref{et}, where we start with the case of
a network (linear chain) with non-degenerate normal modes and then proceed to an
interesting example with degeneracy.
We end this section with a consideration of a network not obeying Hooke's law.
We then conclude with final remarks and some perspectives
of future work in Sec.~{\ref{fr}}.
Additionally, two appendices are dedicated to details about the RWA,
which is a fundamental tool used in the work, and to long length analytic expressions.
\section{ The System and Its Dynamics }\label{tsid}
The system we have in mind is depicted in Fig.~\ref{fig1system1}.
Its temporal evolution will be governed by a LME
for the density operator $\hat \rho$:
\begin{equation} \label{lindblad}
\frac{d \hat \rho }{d t} =
\frac{i}{\hbar} [ \hat \rho , \hat H ] + \mathcal{L}(\hat\rho),
\end{equation}
where $\hat H$ is the Hamiltonian of the system and $\mathcal{L}(\hat\rho)$ is the
non-unitary part of the dynamics accounting for the environment-system interaction.
In the Lindblad scenario, the effect of the coupling between the system
and the reservoir appears in Eq.~(\ref{lindblad}) by means of
\begin{eqnarray} \label{liouv}
\mathcal {L}(\hat \rho) =
- \frac{1}{2\hbar} \! \sum_{k} \!\!
\left(
\{ \hat{L}_{(k)}^\dag \hat{L}_{(k)} , \hat \rho \}
- 2 \hat{L}_{(k)} \hat \rho \hat{L}_{(k)}^\dag \right),
\end{eqnarray}
where the different $\hat{L}_{(k)}$ are known as the Lindblad operators,
and $\{\hat A,\hat B\}$ denotes the anticommutator
between general operators $\hat A$ and $\hat B$.
The index $k$ of the sum is arbitrary at this point;
it records the number of Lindblad operators needed to represent the reservoir
interaction with the system.
\begin{figure}[!hbt]
\includegraphics[width=8cm]{fig1system1.pdf}
\caption{%
Schematic representation of the system of interest.
It consists of a general network of $N$ oscillators
where the $\alpha^\text{th}$ and $\beta^\text{th}$ members are
coupled to two distinct external oscillators denoted,
respectively, as $a$ and $b$. The coupling constant is
$\epsilon$. At this level, we leave the coupling constants inside
the network completely arbitrary.} \label{fig1system1}
\end{figure}
Let us define the collective operator
$\hat X := ( \hat{\sf x}^\dagger, \hat x^\dagger)^\dagger$ as a column
vector corresponding to the positions and the canonical conjugate
momenta of the oscillators of the system.
In this notation, and with respect to the system depicted in Fig.~\ref{fig1system1},
the operator
$\hat x := (\hat q_1,...,\hat q_N, \hat p_1,...,\hat p_N)^\dagger $
accounts for the elements of the network, while
$\hat{\sf x} :=
(\hat {\sf q}_a,\hat {\sf q}_b,
\hat {\sf p}_a,\hat {\sf p}_b )^\dagger $
represents the external oscillators.
The canonical commutation relations among coordinates and momenta
are expressed compactly as
\begin{equation} \label{comm}
[\hat x_j , \hat x_k ] = i \hbar \, \mathsf J^{[N]}_{jk} , \,\,\,
[\hat {\sf x}_j , \hat {\sf x}_k ] = i \hbar \, \mathsf J^{[2]}_{jk} ,
\end{equation}
where
\begin{equation}
\mathsf J^{[n]} :=
\left( \!\! \begin{array}{rc}
{\bf 0}_{n} & \mathsf I_n \\
-\mathsf I_n & {\bf 0}_n
\end{array}
\!\! \right)
\end{equation}
is the fundamental $2n \times 2n$ symplectic matrix and the blocks
$\mathsf I_n$ and ${\bf 0}_n$ are, respectively,
the $n$ dimensional identity and zero matrices.
Even more compactly, one can write
\begin{equation} \label{comm2}
[\hat X_j , \hat X_k ] = i \hbar \, \mathsf J_{jk}
\,\,\, \text {with} \,\,\, \mathsf J = \mathsf J^{[2]} \oplus \mathsf J^{[N]}.
\end{equation}
The label $[n]$ of the above matrices is the number of degrees of freedom and will
be omitted if clear in the context.
The Hamiltonian $\hat H$ of the global system in (\ref{lindblad})
contains two parts $\hat H = \hat H_0 + \hat H_I$,
where
\begin{equation} \label{hamfree}
\hat H_0 = \tfrac{1}{2} \hat X^{\dagger} {\bf H}_0 \hat X =
\tfrac{1}{2} \hat{\sf x}^{\dagger} {\mathbf H}_{\rm e} \hat {\sf x} +
\tfrac{1}{2} \hat x^{\dagger} {\mathbf H}_{\rm N} \hat x
\end{equation}
is the sum of the free Hamiltonians of network and external oscillators,
and $\hat H_I$ describes the interaction of these subsystems. Assuming the
standard Hooke's law prescription (springs), one can write
\begin{equation} \label{hamint}
\hat H_I = \frac{\epsilon}{4} \left(\hat q_\alpha - \hat {\sf q}_a \right)^2 +
\frac{\epsilon}{4} \left(\hat q_\beta - \hat {\sf q}_b \right)^2.
\end{equation}
As a remark about the system Hamiltonian, note that in our notation, we have
${\bf H}_0 = {\mathbf H}_{\rm e} \oplus {\mathbf H}_{\rm N}$.
Finally, given the forms of (\ref{hamfree}) and (\ref{hamint}),
the system Hamiltonian $\hat H$ is quadratic in $\hat X$,
that is
\begin{equation} \label{hamtot}
\hat H = \tfrac{1}{2} \hat X^\dagger {\bf H} \hat X ,
\end{equation}
with $\bf H$ being the Hessian of $\hat H$.
It is worth mentioning that,
despite the specificity of the coupling Hamiltonian (\ref{hamint}),
both ${\mathbf H}_{\rm e}$ and ${\mathbf H}_{\rm N}$ in (\ref{hamfree})
are completely arbitrary until this point.
For the non-unitary part of (\ref{lindblad}), let us assume that every
$\hat{L}_{(k)}$ in (\ref{liouv}) is a linear function of position and momentum,
{\it i.e.},
\begin{equation} \label{lindef}
\hat{L}_{(k)} = \lambda_{(k)}^{\!\top} \mathsf J \hat{X},
\end{equation}
where $\lambda_{(k)} \in \mathbb C^{2N+4}$ is a column vector and
$\mathsf J$ the matrix defined in (\ref{comm2}).
Of particular importance for continuous variable systems,
one defines the covariance matrix (CM) of the system state as
\begin{equation} \label{cmdef}
\mathbf V_{\! jk} (t) =
\tfrac{1}{2} {\rm Tr}\left[
\left\{ \hat X_j - \langle \hat X_j \rangle_t ,
\hat X_k - \langle \hat X_k \rangle_t \right\}
\hat \rho(t)
\right],
\end{equation}
where $\hat X_j $ is the $j^\text{th}$ component of $\hat X$
and
$\langle \hat X_j \rangle_t : = {\rm Tr}[ \hat X_j \hat \rho(t) ]$
is its mean value.
Given its importance, we will be focusing on the time evolution of the CM in this paper.
With the help of (\ref{hamtot}) and (\ref{lindef}), and by defining
\begin{equation} \label{decmatdef}
{\bf \Upsilon} := \sum_{k } \lambda_{(k)} \lambda_{(k)}^\dagger,
\end{equation}
it is possible to show that the CM equation of motion reads \cite{nicacio2010,kyoko}
\begin{equation} \label{cmev}
\frac{d}{d t} \mathbf V =
{ \bf \Gamma }\mathbf V + \mathbf V {\bf \Gamma}^\top + {\bf D} ,
\end{equation}
with
\begin{eqnarray} \label{dynmatdef}
{\bf \Gamma } := \mathsf J \mathbf H - {\rm Im} {\bf \Upsilon} \mathsf J ,
\,\,\,\,
{\bf D} := \hbar \, {\rm Re}{\bf \Upsilon} .
\end{eqnarray}
Since $\bf H$ and $\bf \Upsilon$ are {\it time independent},
the solution for (\ref{cmev}) is
\begin{equation} \label{cmsol}
{\bf V}(t) = {\rm e}^{{\bf \Gamma} t} \, {\bf V}\!_{0} \,
{ \rm e}^{ {\bf \Gamma}^{\! \top} t } +
\int_0^t \! dt^\prime \,
{\rm e}^{{\bf \Gamma} t^\prime} \,
{\bf D} \,
{\rm e}^{{\bf \Gamma}^{\! \top} t^\prime} \, ,
\end{equation}
where ${\bf V}\!_0$ is the CM of the initial state.
As a final remark, for initial Gaussian states,
the qua\-dra\-tic Hamiltonians and linear Lindbladians considered here will
dynamically preserve Gaussian states and, for this case, the CM and the mean values
embody all information about the system.
However, even in cases where the initial state is not Gaussian,
(\ref{cmsol}) is still correct.
In such cases, the knowledge of the CM and of the mean values
will not contain all information about the system state.
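As a concrete illustration of (\ref{cmev}), the following minimal Python sketch
integrates the CM of a single oscillator, with drift and diffusion matrices chosen to
correspond to a thermal bath (an assumption made here for definiteness; $\hbar=1$ and
all parameter values are illustrative):
\begin{verbatim}
import numpy as np

# Forward-Euler integration of dV/dt = Gamma V + V Gamma^T + D
# for one damped oscillator; parameter values are illustrative.
Omega, zeta, nth = 1.0, 0.05, 0.5

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # symplectic matrix, n = 1
H = Omega * np.eye(2)                    # Hessian of the oscillator
Gamma = J @ H - 0.5 * zeta * np.eye(2)   # J H - Im(Upsilon) J, thermal bath
D = zeta * (nth + 0.5) * np.eye(2)       # hbar Re(Upsilon)

V = np.diag([2.0, 0.5])                  # CM of an initial squeezed state
dt = 1e-3
for _ in range(200_000):
    V = V + dt * (Gamma @ V + V @ Gamma.T + D)

print(V)  # approaches the thermal CM (nth + 1/2) * I
\end{verbatim}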
\section{ Effective Dynamics }\label{ed}
In this section, we will expand and generalize the results in \cite{plenio2005}
in order to treat arbitrary networks
possessing a quadratic and positive definite Hamiltonian. Besides, we include the
non-unitary contribution to the dynamics via a Lindblad
equation. This last point adds interest to our generalizations due to the multitude of
physical systems where dissipation can not be neglected in the description.
The method consists of three steps. First, we
diagonalize the free part of the system Hamiltonian. Then, we move the LME to an
interaction picture where we can perform the RWA and a structural simplification
to end up with an effective description for the dynamics of $a$, $b$,
and a few normal modes of the network.
\subsection{ Symplectic Formalism } \label{sf}
The development of our results is based on mathematical tools related to the symplectic
formalism \cite{nicacio2010,kyoko,ozorio,littlejohn,gosson}.
For the sake of simplicity, the basics will be illustrated using the $N$
oscillators of the network, but everything is readily transposed to
systems with an arbitrary number of members.
In this formalism, one is interested in transformations
$\hat x' = \mathsf S \hat x$ of
$\hat x=(\hat q_1,...,\hat q_N,\hat p_1,...,\hat p_N)^\dag$
such that the transformed operators satisfy
\begin{equation} \label{comm3}
[\hat x'_j , \hat x'_k ] = i \hbar \, \mathsf J^{[N]}_{jk}.
\end{equation}
One can show that this is guaranteed provided
$\mathsf S^{\!\top} \! \! \mathsf J \mathsf S = \mathsf J$.
The set of elements $\mathsf S$ satisfying such a statement forms the real
symplectic group $\mathsf S \in {\rm Sp}(2N, \mathbb R)$.
A central result for us here is the so called
Williamson theorem \cite{williamson}.
It states that a positive definite $2N \times 2N$ symmetric matrix $\mathbf M $, {\it i.e.},
$\mathbf M = \mathbf M^\top > 0$,
can be diagonalized by a symplectic congruence. In other words, there exists
$\mathsf S \in {\rm Sp}(2N, \mathbb R)$ such that
\begin{equation} \label{tw1}
\mathsf S \mathbf M \mathsf S^\top
= \Lambda_\mathbf{M}, \,\,\,
\Lambda_\mathbf{M} :=
{\rm Diag}(s_1,...,s_N,s_1,...,s_N)
\end{equation}
with
$ 0 < s_j \le s_k \,\,\, \text{for} \,\,\, j \le k. $
The double-paired ordered set (or the diagonal matrix) $\Lambda_\mathbf{M}$
is called {\it symplectic spectrum} of $\mathbf M$, and $s_k$ are
its symplectic eigenvalues (SE).
These can also be found from the (Euclidean) eigenvalues of
$\mathsf J \mathbf M$ \cite{gosson}, which turn out to be
\begin{equation} \label{tw2}
{\rm Spec_{\mathbb C}}(\mathsf J \mathbf M) =
{\rm Diag}(is_1, ...,is_N,-is_1,...,-is_N).
\end{equation}
The matrix $\mathsf S$ that diagonalizes $\bf M$ admits a suitable decomposition as
\begin{equation} \label{tw2a}
\mathsf S = \Lambda_{\bf M}^{\tfrac{1}{2}} O \mathbf{M}^{-\tfrac{1}{2}}
\end{equation}
with $O \in {\rm O}(2N)$, {\it i.e.}, an orthogonal matrix.
From the symplectic condition on $\mathsf S$, one can see that $O$ must obey
\begin{equation}\label{must}
O{\bf M}^{\tfrac{1}{2}} \mathsf J {\bf M}^{\tfrac{1}{2}} O^\top =
\Lambda_{\bf M} \mathsf J.
\end{equation}
If convenient, one can equivalently use creation/anni\-hi\-la\-tion
operators instead of position and momentum. In this case,
one can define the column vector
\begin{equation} \label{zdef}
\hat z := (\hat a_1^\dagger,..., \hat a_N^\dagger,
- i \hat a_1,..., -i \hat a_N )^\dagger ,
\end{equation}
where $\hat a_k := (\hat q_k + i \hat p_k)/\sqrt{2\hbar}$
is the annihilation operator.
This change of coordinates can be compactly represented by
\begin{equation} \label{carep}
\sqrt{\hbar} \, \hat z = {\mathsf C}_{[N]} \hat x
\end{equation}
with
\begin{equation} \label{ctrans}
{\mathsf C}_{[n]} := \frac{1}{\sqrt{2}}
\left(\begin{array}{cc}
\mathsf I_n & i \mathsf I_n \\
i \mathsf I_n & \mathsf I_n
\end{array} \right) ,
\,\,\, {\mathsf C}_{[n]}^\dag = {\mathsf C}_{[n]}^{-1}.
\end{equation}
One can show that $\mathsf C_{[n]} $ is symplectic, and this leads immediately to
\begin{equation} \label{cpcr}
[\hat z_j , \hat z_k ] = i \, \mathsf J^{[N]}_{jk}.
\end{equation}
If clear in context, the sub- or superscript $[n]$ will be omitted.
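The stated properties of $\mathsf C_{[n]}$ are easy to check numerically; the short
sketch below verifies unitarity and sympleticity:
\begin{verbatim}
import numpy as np

# Checks that C_[n] of Eq. (ctrans) is unitary and symplectic.
def Cmat(n):
    I = np.eye(n)
    return np.block([[I, 1j * I], [1j * I, I]]) / np.sqrt(2)

def Jmat(n):
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

n = 2
C = Cmat(n)
print(np.allclose(C.conj().T @ C, np.eye(2 * n)))  # C^dag = C^-1
print(np.allclose(C.T @ Jmat(n) @ C, Jmat(n)))     # C^T J C = J
\end{verbatim}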
Given $\mathsf S$ as defined in (\ref{tw2a}), one may wonder which conditions should
be imposed on $\bf M$ in order to make $\mathsf S \mathsf S^{\!\top}$ diagonal.
To answer this question, let us define the diagonal matrix
$\mathsf L := {\pmb L} \oplus {\pmb L}^{-1} \in {\rm Sp}(2N,\mathbb R)$, where
${\pmb L}:= {\rm Diag}(l_1,...,l_N)$ with $l_i > 0 \forall i$.
According to {\it theorem 5} in \cite{simon}, there exists a symplectic rotation
$\mathsf R \in {{\rm Sp}(2N,\mathbb R) \cap {\rm O}(2N)} $ such that
\begin{equation} \label{theo5}
\mathsf R {\bf M} \mathsf R^\top = \mathsf L \, \Lambda_{\bf M}\, \mathsf L^{\!\top},
\end{equation}
if and only if
\begin{equation} \label{condtheo}
\begin{aligned}
&[ {\bf M_Q},{\bf M_P} ] = {\bf M_C}^{\!\!2} - {{\bf M}_{\bf C}^{\top}}^2, \\
&{\bf M_P} {\bf M_C} - {\bf M_C^\top} {\bf M_P} =
{\bf M_C}{\bf M_Q} - {\bf M_Q}{\bf M_C^\top},
\end{aligned}
\end{equation}
where we wrote $\bf M$ in terms of $N \times N$ blocks, {\it i.e.},
\begin{equation}
\bf M = \left(
\begin{array}{lc}
\bf M_Q & \bf M_C \\
\bf M_C^\top & \bf M_P
\end{array} \right).
\end{equation}
Once conditions (\ref{condtheo}) are fulfilled,
the choice $\mathsf S := \mathsf L^{-1} \mathsf R$
leads to $\mathsf S \mathbf M \mathsf S^\top = \Lambda_{\bf M}$,
as a direct consequence of (\ref{theo5}). In this way,
\begin{equation}
\mathsf S \mathsf S^\top = \mathsf L^{-1} \mathsf R (\mathsf L^{-1} \mathsf R)^\top =
\mathsf L^{-2} = {\pmb L}^{-2} \oplus {\pmb L}^{2}
\end{equation}
which is a diagonal matrix, given the way $\pmb L$ was defined.
\subsection{ Diagonalization of \texorpdfstring{$\hat H_0$}{H0}} \label{di}
We now require the matrix ${\bf H}_0$ appearing in (\ref{hamfree})
to be positive definite, and this is the only restriction imposed on the
network of Fig.~\ref{fig1system1}.
On the basis of the Williamson theorem, we know that there is a symplectic matrix
$\mathsf S_{0} = {\mathsf S}_{\rm e} \oplus {\mathsf S}_{\rm N}$ such that
\begin{equation} \label{hamtw}
\mathsf S_{0} {\bf H}_0 \mathsf S_{0}^\top
= \Lambda_\mathrm{e} \oplus \Lambda_\mathrm{N},
\end{equation}
where
\begin{equation} \label{nettw1}
\Lambda_\mathrm{N} := {\rm Diag}(\varsigma_1,...,\varsigma_N,
\varsigma_1,...,\varsigma_N)
\end{equation}
is the symplectic spectrum of $\bf H_{\rm N}$.
For the external oscillators,
we may particularize to the case where they are identical with natural frequencies
$\Omega$ and masses $M$. In this case, their contribution to the Hamiltonian in
(\ref{hamfree}) is given by
\begin{equation} \label{hamext}
{\bf H}_{\rm e} = M\Omega^2 \, \mathsf I_{2} \oplus M^{-1} \mathsf I_{2},
\end{equation}
with symplectic spectrum $\Lambda_\mathrm{e} = \Omega \mathsf I_4$.
One interesting situation arises when ``$M\Omega = 1$'' in a given system of units,
for example, kilogram times radian.
One can see that, in this case, Hamiltonian (\ref{hamext})
will be directly expressed
in terms of normal mode coordinates with doubly degenerate frequency $\Omega$.
The normal modes $\hat Y := (\hat{\sf y}^\dag, \hat{y}^\dag)^\dag$ for the whole system,
by definition, relate to the original coordinates through
\begin{equation} \label{sdt}
\hat X = \mathsf S_0^{\top} \! \hat Y =
\left(
\begin{array}{c}
{\mathsf S}_{\rm e}^\top \hat{\sf y} \\
{\mathsf S}_{\rm N}^\top \hat{y}
\end{array}
\right) ,
\end{equation}
where $\mathsf S_0$ is the symplectic transformation that diagonalizes
the Hessian ${\bf H}_0$ in (\ref{hamtw}).
In terms of creation/annihilation operators,
\begin{equation} \label{nmcoord}
\sqrt{\hbar} \, \hat Z =
\left( {\mathsf C}_{[2]} \oplus {\mathsf C}_{[N]} \right) \hat Y,
\end{equation}
which implies by (\ref{sdt}) that
\begin{equation} \label{nmcoord2}
\hat X =
\sqrt{\hbar} \,
\left( {\mathsf S}_{\rm e}^\top {\mathsf C}_{[2]}^\dag \oplus
{\mathsf S}_{\rm N}^\top {\mathsf C}_{[N]}^\dag \right) \hat Z,
\end{equation}
with $\hat Z = (\hat{\sf z}^\dag, \hat{z}^\dag)^\dag$.
Finally, using the transformation (\ref{nmcoord2})
in the free Hamiltonian (\ref{hamfree}), one finds
\begin{equation} \label{hamfree2}
\hat H_0 = \frac{\hbar}{2} \, \hat Z^{\dagger} \!
( \Lambda_{\rm e} \oplus \Lambda_{\rm N} ) \hat Z
= \frac{\hbar\Omega}{2} \, \hat {\sf z}^{\dagger}\hat{\sf z} +
\frac{\hbar}{2} \, \hat { z}^{\dagger} \! \Lambda_{\rm N} \hat{ z},
\end{equation}
which is the free Hamiltonian (\ref{hamfree}) written in the
cre\-a\-ti\-on/annihilation representation of the normal mode coordinates.
\subsection{ Interaction Picture }
Let us now move the dynamics to the interaction picture with respect to the free
Hamiltonian as given in Eq.~(\ref{hamfree2}).
The LME in this picture acquires the following form
\begin{equation} \label{lindblad2}
\frac{d \tilde \rho }{d t} =
\frac{i}{\hbar} [ \tilde \rho , \tilde H ] + \tilde{\mathcal{L}}(\tilde\rho),
\end{equation}
with
$\tilde H = {\rm e}^{ \frac{i}{\hbar}\hat H_0 t} \, \hat H \,
{\rm e}^{-\frac{i}{\hbar}\hat H_0 t} - \hat H_0$ and
$\tilde \rho = {\rm e}^{ \frac{i}{\hbar}\hat H_0 t} \, \hat \rho \,
{\rm e}^{-\frac{i}{\hbar}\hat H_0 t} $.
Also, all operators contained in $\mathcal L$ transform in the same way as $\hat \rho$,
leading then to $\tilde{\mathcal{L}}(\tilde\rho)$. For now, let us turn our attention
to the Hamiltonian part.
When moving to the interaction picture, the position operators of the oscillators in
the chain and of the external ones,
see (\ref{nmcoord2}), transform respectively according to
\begin{equation}
\begin{aligned}
\tilde{q}_k & = \tilde x_k =
\sqrt{\hbar} \left(
\mathsf S_{\rm N}^{\top} \mathsf{C}_{[N]}^{\dag} \tilde{z}
\right)_{\! k}, \,\,\, 1 \le k \le N ; \\
\tilde {\sf q}_{k} & = \tilde {\sf x}_{k} =
\sqrt{\frac{\hbar}{2}} ( \tilde{\sf a}_k + \tilde{\sf a}_k^{\dag} ), \,\,\, k = a,b.
\end{aligned}
\end{equation}
For what comes next, it is worth noticing that
\begin{equation}
\tilde{q}_k^{2} = \tilde x_k^2 = (\tilde x \tilde x^\dag)_{kk} =
\hbar \left(
\mathsf S_{\rm N}^{\top} \mathsf{C}_{[N]}^{\dag}
\tilde{z}\tilde{z}^{\dag}
\mathsf{C}_{[N]} \mathsf S_{\rm N}
\right)_{kk}.
\end{equation}
Now, with all these in hand, we move $\hat H = \hat H_0 + \hat H_I$
to the interaction picture.
By using (\ref{hamint}) and (\ref{hamfree2}), one can see that the interaction
picture Hamiltonian reads
\begin{eqnarray} \label{hamint2}
\tilde H &=& \frac{\epsilon}{4} (\tilde{q}_\alpha - \tilde{q}_a)^{2} +
\frac{\epsilon}{4} (\tilde{q}_\beta - \tilde{q}_b)^{2} \nonumber \\
&=& \frac{\hbar\epsilon}{4}
\left[
\left( \mathsf S^{\!\top}\mathsf{C}^{\dag}
\tilde{z}\tilde{z}^{\dag}
\mathsf C \mathsf S \right)_{\alpha\alpha} +
\left( \mathsf S^{\!\top}\mathsf{C}^{\dag}
\tilde{z}\tilde{z}^{\dag}
\mathsf C \mathsf S \right)_{\beta\beta}
\right] \nonumber \\
&-&
\frac{\hbar\epsilon}{2\sqrt{2}}
\left[
\left( \mathsf S^{\!\top}\mathsf{C}^{\dag} \tilde{z}\right)_\alpha
( \tilde{\sf a}_a + \tilde{\sf a}_a^{\dag} ) +
\left( \mathsf S^{\!\top}\mathsf{C}^{\dag} \tilde{z}\right)_\beta
( \tilde{\sf a}_b + \tilde{\sf a}_b^{\dag} )
\right] \nonumber \\
&+& \frac{\hbar\epsilon}{8} ( \tilde{\sf a}_a + \tilde{\sf a}_a^{\dag} )^{2} +
\frac{\hbar\epsilon}{8} ( \tilde{\sf a}_b + \tilde{\sf a}_b^{\dag} )^{2},
\end{eqnarray}
where we dropped the indexes of $\mathsf S_{\rm N} $ and of $\mathsf{C}_{[N]}$
for notation simplicity, and
\begin{equation}
\begin{aligned} \label{zint}
\tilde{z}_k &:= {\rm e}^{\frac{i}{\hbar}\hat H_0 t} \,
\hat z_k \,
{\rm e}^{-\frac{i}{\hbar}\hat H_0 t}
= {\rm e}^{i \phi_k t} \hat z_k
\,\,\, (k = 1,...,2N), \\
\tilde{\sf a}_k &:= {\rm e}^{-i \Omega t} \hat{\sf a}_k
\,\,\, (k = a,b) ,
\end{aligned}
\end{equation}
with
\begin{equation}\label{nl}
\phi_k := \left\{
\begin{array}{r}
-\varsigma_k, \,\,\, \text{if} \,\,\, k \le N \\
\varsigma_{k-N}, \,\,\, \text{if} \,\,\, k > N
\end{array}
\right. .
\end{equation}
Notice that $\tilde z_k$ equals
$\tilde{a}_k = \hat{a}_k {\rm e}^{-i\varsigma_k t}$ provided $k \le N$,
or its Hermitian conjugate otherwise.
\subsection{ RWA and Effective Hamiltonian } \label{eh}
Under certain circumstances, fast oscillating terms in the interaction picture
Hamiltonian are negligible and dropping these terms is what is called RWA.
In the Appendix~\ref{rwa}, we quantitatively describe such conditions for a
prototype system of two coupled oscillators. This guides us in the application
of the RWA for the present system of interest.
Since the free Hamiltonian (\ref{hamfree2}) is the sum of $N+2$
non-interacting oscillators, each of its eigenvectors consists of tensor products of
Fock states of each oscillator, {\it i.e.},
\begin{equation} \label{psi1}
| \Psi \rangle = |\mathfrak{n}_a,\mathfrak{n}_b,
\mathfrak{n}_1,...,\mathfrak{n}_N \rangle,
\end{equation}
where $ \{ | \mathfrak{n}_k \rangle ; k=1,...,N \} $
are eigenstates of $ \hat{a}^{\dag}_k \hat{a}_k $ and
$ \{ |\mathfrak{n}_k\rangle ; k=a,b \} $, the eigenstates of
$ \hat{\sf a}^{\dag}_k \hat{\sf a}_k $.
All transitions between eigenstates of the free Hamiltonian
will be promoted by $\tilde{H_I}$, {\it i.e.}, by (\ref{hamint2}).
Allowed transitions $| \Psi' \rangle\leftrightarrow|\Psi \rangle$,
in the scope of first order perturbation theory, are those with
$\langle \Psi | \tilde H_I | \Psi' \rangle \ne 0 $.
First-order time-dependent perturbation theory \cite{cohen} predicts that
resonant or quasi-resonant transitions or, equivalently, those driven by
static or slowly varying time dependent terms in the interaction picture Hamiltonian,
take place with higher probability when compared to transitions driven by
the rapidly oscillating terms.
The RWA consists of discarding the highly oscillating terms in the interaction
Hamiltonian that are responsible for negligible transition amplitudes when compared
to other terms that are static or oscillate slowly in time. Appendix~\ref{rwa}
is dedicated to considerations about this approximation. Since the reasoning
behind the usefulness of the RWA is that of first-order perturbation theory,
we must ensure that $\hat H_I$ is weak compared to $\hat H_0$.
In our problem, this means that the interaction of the external oscillators
with the network is weak, and this is guaranteed provided
$\epsilon \ll \Omega,\varsigma_k$ ($k = 1,...,N$).
Let us start with the case where the spectrum (\ref{nettw1}) is {\it non-degenerate}.
Mathematically, that corresponds to $\varsigma_j = \varsigma_k$ if and only if $j = k$.
In the system considered here, an interesting scenario appears when the frequency of
the external oscillators $\Omega$ is close to the frequency of one of
the normal modes of the network, let us say the $m^{\text{th}}$ mode $(1 \le m \le N)$,
{\it i.e.}, $\Omega=\varsigma_m$.
In this case, if (\ref{zint}) is substituted in (\ref{hamint2}),
the RWA can be performed using the recipe
\begin{equation} \label{rwaap}
\begin{aligned}
&\tilde{z}_k\tilde{z}_l^{\dag} =
\hat{z}_k\hat{z}_l^{\dag} \, {\rm e}^{i(\phi_k - \phi_l)t}
\xrightarrow{ {\rm RWA} } \hat{z}_k\hat{z}_l^{\dag} \, \delta_{kl}, \\
&\tilde{\sf a}^{2}_l = \hat{\sf a}^{2}_l {\rm e}^{- 2 i \Omega t}
\xrightarrow{ {\rm RWA} } 0,\,\,\,
\tilde{\sf a}^{\dag 2}_l = \hat{\sf a}^{\dag 2}_l {\rm e}^{2 i \Omega t}
\xrightarrow{ {\rm RWA} } 0 , \\
&\tilde{z}_k \tilde{\sf a}^{\dag}_l =
\hat{z}_k\hat{\sf a}_l^{\dag} \, {\rm e}^{i(\phi_k + \Omega)t}
\xrightarrow{ {\rm RWA} } \hat{z}_k\hat{\sf a}_l^{\dag} \,
\delta_{km}, \\
&\tilde{z}_k \tilde{\sf a}_l =
\hat{z}_k\hat{\sf a}_l \, {\rm e}^{i(\phi_k - \Omega )t}
\xrightarrow{ {\rm RWA} } \hat{z}_k\hat{\sf a}_l \, \delta_{k\, m+N} .
\end{aligned}
\end{equation}
One can see that, under the resonance condition $\Omega=\varsigma_m$ and weak
interaction of external oscillators with the network,
both necessary conditions for RWA to work well, oscillators $a$ and $b$
couple essentially only with the $m^{\text{th}}$ normal mode.
Since modes other than $m$ follow free evolution,
we do not include them in the effective description of the dynamics.
Taking all this into account, we arrive at the following effective
Hamiltonian\footnote{%
It can be useful to write the elements of $\mathsf C_{[n]}$
in (\ref{ctrans}) as \\
$(\delta_{jk} + i \delta_{j k+n} + i \delta_{j k-n} )/\sqrt{2}$.
}
\begin{eqnarray} \label{hamint3a}
\tilde H_{\rm eff}^{(m)} & =& \frac{\hbar\epsilon}{4}\mathcal{C}_{m}^{\alpha \beta}
\hat a_m^\dagger\hat a_m
+ \frac{\hbar\epsilon}{4}
( \hat{\sf a}_a^{\dag}\hat{\sf a}_a + \hat{\sf a}_b^{\dag}\hat{\sf a}_b ) \\
&-& \frac{\hbar\epsilon}{4}
\left[ \mathcal{D}_{\!m}^{\alpha} \hat a_m \hat{\sf a}_a^{\dag}
+ \bar{\mathcal{D}}_{\!m}^{\alpha} \hat a_m^{\dag} \hat{\sf a}_a
+ \mathcal{D}_{\!m}^{\beta} \hat a_m\hat{\sf a}_b^{\dag}
+ \bar{\mathcal{D}}_{\!m}^{\beta} \hat a_m^{\dag} \hat{\sf a}_b \right], \nonumber
\end{eqnarray}
with
\begin{equation} \label{coefint}
\begin{aligned}
&\!\!\!\!\! %
\mathcal{C}_{m}^{\alpha \beta} :=
( \mathsf S_{ m \alpha }^2 + \mathsf S_{m+N \alpha}^2 +
\mathsf S_{ m \beta }^2 + \mathsf S_{m+N \beta }^2), \\
&\!\!\!\!\! %
\mathcal{D}_{m}^{\mu} := ( \mathsf S_{m \mu } - i \mathsf S_{ m+N \, \mu } ), \,\,\,
\bar{\mathcal{D}}_{m}^{\mu} := ( \mathsf S_{m \mu } + i \mathsf S_{ m+N \, \mu } ).
\end{aligned}
\end{equation}
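In practice, once the diagonalizing matrix $\mathsf S_{\rm N}$ of the network is known,
the coefficients (\ref{coefint}) are read off directly from its rows $m$ and $m+N$.
A minimal sketch follows; the random $S$ below is only a placeholder standing in for an
actual symplectic matrix, and the 0-based indices $m$, $\alpha$, $\beta$ are
illustrative:
\begin{verbatim}
import numpy as np

# Coefficients of the effective RWA Hamiltonian from rows m and
# m+N of S_N; S here is a random placeholder, indices 0-based.
N, m, alpha, beta = 4, 1, 0, 3
S = np.random.default_rng(1).normal(size=(2 * N, 2 * N))

C_ab = (S[m, alpha]**2 + S[m + N, alpha]**2
        + S[m, beta]**2 + S[m + N, beta]**2)
D_a = S[m, alpha] - 1j * S[m + N, alpha]
D_b = S[m, beta] - 1j * S[m + N, beta]
print(C_ab, D_a, D_b)
\end{verbatim}
A vanishing $\mathcal{D}_{m}^{\alpha}$ signals that position $\alpha$ sits at a node of
the mode, a situation discussed below.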
If the symplectic spectrum possesses some degree of degeneracy,
let us say $\varsigma_m = \varsigma_n$ for some $1\leq n,m\leq N$,
and we tune $\Omega = \varsigma_m = \varsigma_n$,
then scheme (\ref{rwaap}) is no longer valid. It must be modified to
\begin{equation} \label{rwaap3}
\begin{aligned}
&\tilde{z}_k\tilde{z}_l^{\dag}
\xrightarrow{ {\rm RWA} } \hat{z}_k\hat{z}_l^{\dag} \,
( \delta_{kl} + \delta_{km}\delta_{ln} + \delta_{kn}\delta_{lm} + \\
& \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\delta_{k\,m+N}\,\delta_{l\,n+N} + \delta_{k\,n+N}\,\delta_{l\,m+N}), \\
&\tilde{\sf a}^{2}_l = \hat{\sf a}^{2}_l {\rm e}^{- 2 i \Omega t}
\xrightarrow{ {\rm RWA} } 0,\,\,\,
\tilde{\sf a}^{\dag 2}_l = \hat{\sf a}^{\dag 2}_l {\rm e}^{2 i \Omega t}
\xrightarrow{ {\rm RWA} } 0 , \\
&\tilde{z}_k \tilde{\sf a}^{\dag}_l
\xrightarrow{ {\rm RWA} } \hat{z}_k\hat{\sf a}_l^{\dag} \,
( \delta_{km} + \delta_{kn} ), \\
&\tilde{z}_k \tilde{\sf a}_l
\xrightarrow{ {\rm RWA} } \hat{z}_k\hat{\sf a}_l \,
(\delta_{k\,m+N}+\delta_{k\, n+N}).
\end{aligned}
\end{equation}
The additional terms, in comparison with (\ref{rwaap}),
bring new elements to the dynamics of the system. Now, the situation involves
the free dynamics of the degenerate modes,
their mutual coupling, and their coupling with the external oscillators.
Following the same steps as before, we can now write an effective Hamiltonian
for the system as
\begin{equation}
\begin{aligned} \label{hameff2}
&\tilde H_{\rm eff}^{(m,n)} = \tilde H_{\rm eff}^{(m)} + \tilde H_{\rm eff}^{(n)}
- \frac{\hbar\epsilon}{4} ( \hat{\sf a}_a^{\dag}\hat{\sf a}_a +
\hat{\sf a}_b^{\dag}\hat{\sf a}_b ) \\
&+\frac{\hbar\epsilon}{4}
( \mathsf S_{m \alpha}\mathsf S_{n \alpha} +
\mathsf S_{m+N \alpha}\mathsf S_{n+N \alpha} )
(\hat a_m^\dagger\hat a_n + \hat a_n^\dagger\hat a_m) \\
&+\frac{\hbar\epsilon}{4}( \mathsf S_{m \beta}\mathsf S_{n \beta} +
\mathsf S_{m+N \beta}\mathsf S_{n+N \beta} )
(\hat a_m^\dagger\hat a_n + \hat a_n^\dagger\hat a_m) \\
&+i\frac{\hbar\epsilon}{4}
( \mathsf S_{m \alpha}\mathsf S_{n+N \alpha} -
\mathsf S_{m+N \alpha}\mathsf S_{n \alpha})
(\hat a_m^\dagger\hat a_n - \hat a_n^\dagger\hat a_m) \\
&+i\frac{\hbar\epsilon}{4}
(\mathsf S_{m \beta}\mathsf S_{n+N \beta} -
\mathsf S_{m+N \beta}\mathsf S_{n \beta} )
(\hat a_m^\dagger\hat a_n - \hat a_n^\dagger\hat a_m),
\end{aligned}
\end{equation}
where $\tilde H_{\rm eff}^{(k)}$ is given in (\ref{hamint3a}) for $k=m,n$.
One may notice that, if $\mathsf S_{ m \alpha } = \mathsf S_{ m + N \alpha } = 0 $
in (\ref{hamint3a}) or in (\ref{hameff2}), oscillator $a$ will be decoupled from the
$m^\text{th}$ normal mode of the network. Physically,
position $\alpha$ would correspond to a {\it node}
of the normal mode.
In this situation, the coupling of the external oscillator $a$ with the normal mode $m$
takes place only for higher orders in $\epsilon$. Naturally, there is an analogous
condition for oscillator $b$.
For completeness, we would also like to mention the possibility of the external
oscillators interacting with more than one member of the network.
Suppose that oscillator $b$ is coupled to both oscillators $\beta$ and $\beta'$
simultaneously, {\it i.e.}, the interaction Hamiltonian is now
\begin{equation} \label{hamint4}
\hat H'_I = \hat H_I +
\frac{\epsilon'}{4} \left(\hat q_{\beta'} - \hat {\sf q}_b \right)^2
\end{equation}
with $\hat H_I$ given in (\ref{hamint}).
All the calculations follow as before provided one now imposes $\epsilon' \ll \Omega$,
in order to fulfill the RWA requirements.
In particular, we would like to emphasize that $\mathsf S_{0}$ defined above Eq.~(\ref{hamtw})
remains the same. After the calculations, the result is
\begin{equation} \label{hamint4a}
\begin{aligned}
\tilde H_{\rm eff}^{'(m)} = & \, \tilde H_{\rm eff}^{(m)} +
\frac{\hbar\epsilon'}{4}\mathcal{E}_{m}^{\beta'} \hat a_m^\dagger\hat a_m
+ \frac{\hbar\epsilon'}{4} \hat{\sf a}_b^{\dag}\hat{\sf a}_b \\
& - \frac{\hbar\epsilon'}{4}
\left[
\mathcal{D}_{\!m}^{\beta'} \hat a_m\hat{\sf a}_b^{\dag}
+ \bar{\mathcal{D}}_{\!m}^{\beta'} \hat a_m^{\dag} \hat{\sf a}_b \right],
\end{aligned}
\end{equation}
with
$\tilde H_{\rm eff}^{(m)}$ in (\ref{hamint3a}),
$\mathcal{D}_{\!m}^{\beta'}$ in (\ref{coefint}), and
$
\mathcal{E}_{m}^{\beta'} := \mathsf S_{ m \beta' }^2 + \mathsf S_{m+N \beta' }^2.
$
Note also that if one wants to consider in (\ref{hamint}) the possibility
of distinct couplings, namely $\epsilon$ for $(a,\alpha)$ and $\epsilon'$ for $(b,\beta)$,
the use of (\ref{hamint4a}) for the part referring to $(b,\beta)$ is the way to proceed.%
\subsection{ Thermal Baths and Effective Dynamics } \label{t}
The unavoidable impossibility of perfectly isolating a system leads to the progressive
destruction of quantum coherence. This kind of dynamics is commonly modeled by including
appropriate non-unitary components in the equation of motion.
We will specialize here in the case of local and independent thermal
baths for each member of the system depicted in Fig.\ref{fig1system1}.
This physical scenario corresponds to the use of (\ref{liouv}) with the choice
\begin{eqnarray} \label{lopthermal}
\hat L_{(k)} &=& \sqrt{\hbar\zeta ({\bar n}_{\text{th}} + 1)} \, \hat A_k ,\nonumber\\
\hat L'_{(k)} &=& \sqrt{\hbar\zeta {\bar n}_{\text{th}}} \, \hat A_{k}^\dagger,
\,\,\, (k = a,b,1,2, ...,N),
\end{eqnarray}
where $\hat A_k$ is the annihilation operator associated with the $k^{\text{th}}$
oscillator in $\hat X$,
$\zeta \ge 0$ is the effective bath-oscillator coupling constant or relaxation
rate of the system, and
${\bar n}_{\text{th}}$ is a thermal occupation number, both taken here to be the same
for all reservoirs.
Note that, for the same $k$, two Lindblad operators must be simultaneously included
(primed and unprimed).
The prescribed Lindblad operators do not couple different oscillators,
and this brings the matrix (\ref{decmatdef}) to a block structure
\begin{equation} \label{decmat}
{\bf \Upsilon} = {\bf \Upsilon}_{4} \oplus {\bf \Upsilon}_{2N},
\end{equation}
with
\begin{equation} \label{decmat2}
{\bf \Upsilon}_{\! 2n} :=
\zeta({\bar n}_{\text{th}} + \tfrac{1}{2}) \mathsf I_{2n} -
\frac{i}{2} \zeta \mathsf J^{[n]},
\end{equation}
where $n$ equals $2$ or $N$.
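A small sketch assembling $\bf \Upsilon$ for these local thermal baths, together with
the resulting drift and diffusion matrices of (\ref{dynmatdef}), is given below
($\hbar=1$; $\zeta$ and ${\bar n}_{\text{th}}$ are illustrative values and the Hessian
used is a mere placeholder):
\begin{verbatim}
import numpy as np

# Upsilon for local, independent thermal baths, Eqs. (decmat) and
# (decmat2), and the resulting Gamma and D; values illustrative.
def Jmat(n):
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

def ups(n, zeta, nth):
    return (zeta * (nth + 0.5) * np.eye(2 * n)
            - 0.5j * zeta * Jmat(n))

N, zeta, nth = 3, 0.05, 0.5
Z = np.zeros((4, 2 * N))
Upsilon = np.block([[ups(2, zeta, nth), Z],
                    [Z.T, ups(N, zeta, nth)]])
J = np.block([[Jmat(2), Z], [Z.T, Jmat(N)]])

H = np.eye(2 * N + 4)             # placeholder Hessian, H > 0
Gamma = J @ H - Upsilon.imag @ J  # drift matrix
D = Upsilon.real                  # diffusion matrix (hbar = 1)
print(np.allclose(Gamma, J @ H - 0.5 * zeta * np.eye(2 * N + 4)))
\end{verbatim}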
Now, the same transformation (\ref{sdt}), used to diagonalize $\hat H_0$
in (\ref{hamtw}), must be applied to the Lindblad operators defined in (\ref{lindef}).
Consequently,
\begin{equation} \label{lindint}
\hat {L}_{(k)} =
\lambda_{(k)}^{\!\top} \mathsf J \mathsf S_0^{\!\top}\hat Y
= [\mathsf S_0^{-\top} \lambda_{(k)} ]^{\!\top}
\mathsf J \hat Y.
\end{equation}
To see that, one must apply the sympleticity of $ \mathsf S_0$, {\it i.e.},
$\mathsf S_0^{\!\top} \! \mathsf J \mathsf S_0 = \mathsf J$.
Notice that we used a compact notation $\mathsf S_0^{-\top}$ for
$(\mathsf S_0^\top)^{-1}$.
Given (\ref{decmatdef}), we see that
\begin{equation} \label{dectransf}
{\bf \Upsilon} \longrightarrow \mathsf S_0^{-\top} {\bf \Upsilon} \mathsf S_0^{-1},
\end{equation}
in such a way that (\ref{decmat}) becomes
\begin{eqnarray} \label{decmat4}
\!\!\!\!\!\!\!\!\!\!\mathsf S_0^{-\top} {\bf \Upsilon} \mathsf S_0^{-1} =
\zeta({\bar n}_{\text{th}} + \tfrac{1}{2})
(\mathsf S_0\mathsf S_0^{\top})^{-1}
- \tfrac{i}{2} \zeta \left(\mathsf J^{[2]} \oplus \mathsf J^{[N]}\right),
\end{eqnarray}
where we used the symplecticity of $\mathsf S_0$ again.
Our aim is to provide the simplest description of the dynamics of oscillators $a$ and $b$
mediated by the network. At the Hamiltonian level,
we already managed to do that when we arrived at an effective interaction involving just
these oscillators and a few resonant normal modes.
For the Lindblad operators and covariance matrices, the description in terms of normal
modes is reflected in (\ref{decmat4}). In general, the normal modes turn out to
interact through the non-unitary part of the dynamics, in spite of the fact that,
looking at the individual oscillators, they interact with independent baths.
One can say that the action of the baths, at the level of the normal modes
(which are collective operators), is non-local. The interaction of the normal modes
in the non-unitary part of the dynamics comes
from the mutual interactions of the individual oscillators in the Hamiltonian, {\it i.e.},
in the unitary part of the dynamics. Consequently, (\ref{decmat4}) might not be as
simple as (\ref{decmat2}), remembering that the latter refers to a description with local
independent baths.
From this, we see that the problem of obtaining a simple description of the open system
dynamics is much more involved than the same problem in the closed unitary dynamics.
So, together with the treatment of a general network, the simplifications of
the open-system dynamics presented next make our contribution of interest,
given that previous studies treated only closed systems with a fixed
simple topology \cite{plenio2005}.
It is possible to envision some structural conditions that make (\ref{decmat4}) simpler.
In other words, conditions that lead to local and independent baths for the collective
normal modes. This basically concerns the form of the matrix
$\mathsf S_0\mathsf S_0^{\top}$ (topology), or the form of $\lambda_{(k)}$
(system-bath interaction). Let us start with the first condition which will be
illustrated with an example in Sec.~\ref{elc}.
Direct inspection of (\ref{decmat4}) reveals that the baths for the normal modes
will naturally be local and independent provided
$\mathsf S_0\mathsf S_0^{\top}$ becomes a diagonal matrix. One possibility is
$\mathsf S_0\mathsf S_0^{\top}=\mathsf I_{4}\oplus\mathsf I_{2N}$, which would lead
precisely to a form like (\ref{decmat2}), corresponding to interaction with local
independent baths. When $\mathsf S_0\mathsf S_0^{\top}$ is diagonal but not the identity
matrix, each mode will still see a local reservoir, but it will not necessarily be thermal.
In this case, there is a weighted mix of creation and annihilation operators
characteristic of a squeezed reservoir.
Given the quadratic Hamiltonian in (\ref{hamfree}), the results in Sec.\ref{sf} will be helpful
to determine if the action of the reservoirs will be decoupled or not.
Provided the blocks of $\bf H_{\rm N}$ satisfy the conditions in (\ref{condtheo}),
it will be suitable for symplectic diagonalization by a matrix $\mathsf S_{\rm N}$ such that
$\mathsf S_{\rm N} \mathsf S_{\rm N}^\top$ is a diagonal matrix.
Considering that the same is true for the blocks of $\bf H_{\rm e}$ in (\ref{hamfree}),
then $\mathsf S_0$ in (\ref{hamtw}) will be such that
$\mathsf S_{0} \mathsf S_{0}^\top$ is diagonal.
For resonance of $a$ and $b$ with a non-degenerate normal mode $m$, we define
\begin{equation} \label{redvec}
\check x = ( \hat{\sf q}_a, \hat{\sf q}_b, \hat y_m,
\hat{\sf p}_a, \hat{\sf p}_b,\hat y_{m+N})^{\dag},
\end{equation}
from which we may write the effective Hamiltonian (\ref{hamint3a}) as
\begin{equation} \label{hesseff}
\hat H_{\rm eff}^{(m)} =
\frac{1}{2} \check x^\dag {\bf H}_{\rm eff} \check x =
\frac{\epsilon}{8} \check x^\dag
\begin{pmatrix}
{\bf H_q} & {\bf C_{\bf qp}} \\
{\bf C^{\!\top}_{\bf qp}} & {\bf H_p}
\end{pmatrix}
\check x
\end{equation}
with
\begin{equation} \label{hesseffa}
{\bf H_q} = {\bf H_p} =
\left[ \begin{array}{ccc}
1 & 0 & - \mathsf S_{m \alpha } \\
0 & 1 & - \mathsf S_{m \beta } \\
- \mathsf S_{m \alpha } & - \mathsf S_{m \beta } &
\substack{ ( \mathsf S_{m \alpha}^{2}+\mathsf S_{m \beta}^{2} + \\
\mathsf S_{m+N \, \alpha }^{2} + \mathsf S_{m+N \, \beta }^{2} ) } \\
\end{array} \right],
\end{equation}
and
\begin{equation} \label{hesseffb}
{\bf C_{\bf qp}} =
\left[ \begin{array}{ccc}
0 & 0 & - \mathsf S_{m+N \, \alpha } \\
0 & 0 & - \mathsf S_{m+N \, \beta } \\
\mathsf S_{m+N \, \alpha } & \mathsf S_{m+N \, \beta } & 0 \\
\end{array} \right] .
\end{equation}
Using (\ref{cmdef}), a CM based on $\check x$ can be built and, just like (\ref{cmev}),
it will evolve according to
\begin{equation} \label{cmeveff}
\frac{d}{d t} \check{ \mathbf V } =
\check{ \bf \Gamma }\check{ \mathbf V } + \check{ \mathbf V } \check {\bf \Gamma}^\top
+ \check{\bf D}
\end{equation}
with
\begin{equation} \label{decmateff}
\check{\bf \Gamma } :=
\mathsf J^{[6]} {\mathbf H}_{\rm eff}
- \frac{\zeta}{2} \mathsf I_{6} , \,\,\,
\check{\bf D} := \hbar \zeta({\bar n}_{\text{th}} + \tfrac{1}{2}){\mathsf D},
\end{equation}
where $\mathsf D$ will be a $6 \times 6$ diagonal matrix
since $\mathsf S_0\mathsf S_0^{\top}$ is considered to be diagonal.
Solution (\ref{cmsol}) applied to this effective description reads
\begin{equation} \label{cmsol1}
\check{\bf V}(t) = {\rm e}^{\check{\bf \Gamma} t} \, \check{\bf V}\!_{0} \,
{ \rm e}^{ \check{\bf \Gamma}^{\! \top} t } +
\int_0^t \! dt^\prime \,
{\rm e}^{\check{\bf \Gamma} t^\prime} \,
\check{\bf D} \,
{\rm e}^{\check{\bf \Gamma}^{\! \top} t^\prime} \, ,
\end{equation}
with $\exp[\check{\bf \Gamma}t] = {\rm e}^{-\zeta t/2} \mathsf E (t)$, where
\begin{equation} \label{rwssimp}
\mathsf E (t) = \exp\left[ \mathsf J
{\mathbf H}_{\rm eff}\, t \right]
\in {\rm Sp}(6,\mathbb R).
\end{equation}
This represents a huge simplification of the original problem, which is to describe
the open-system dynamics of $a$ and $b$ when they interact
with a network of $N$ oscillators. This is especially true for big networks.
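To make the gain concrete, the following minimal numerical sketch (an illustration
added here, not part of the original derivation; Python with NumPy/SciPy is used for
all such sketches) assembles the $6\times6$ Hessian of
(\ref{hesseff})--(\ref{hesseffb}) and propagates a CM according to (\ref{cmsol1}).
It assumes $\hbar=1$, the ordering of (\ref{redvec}) with $\mathsf J^{[n]}$ carrying
$+\mathsf I_n$ in its upper-right block, and, for simplicity only, a $\check{\bf D}$
proportional to the identity; all parameter values are placeholders.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def Jmat(n):
    # symplectic form J^[n] for the ordering (q_1..q_n, p_1..p_n)
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

def H_eff(eps, S_ma, S_mb, S_Na, S_Nb):
    # Hessian of (hesseff): (eps/4) [[Hq, Cqp], [Cqp^T, Hp]], with Hp = Hq
    d = S_ma**2 + S_mb**2 + S_Na**2 + S_Nb**2
    Hq = np.array([[1.0, 0.0, -S_ma],
                   [0.0, 1.0, -S_mb],
                   [-S_ma, -S_mb, d]])
    Cqp = np.array([[0.0, 0.0, -S_Na],
                    [0.0, 0.0, -S_Nb],
                    [S_Na, S_Nb, 0.0]])
    return 0.25 * eps * np.block([[Hq, Cqp], [Cqp.T, Hq]])

def evolve_cm(V0, Heff, zeta, nth, t, steps=2000):
    # solution (cmsol1); exp(Gamma t) = exp(-zeta t / 2) E(t), eq. (rwssimp)
    J6 = Jmat(3)
    D = zeta * (nth + 0.5) * np.eye(6)   # hbar = 1; D ~ identity (simplification)
    prop = lambda s: np.exp(-0.5 * zeta * s) * expm(J6 @ Heff * s)
    V = prop(t) @ V0 @ prop(t).T
    ds = t / steps                       # midpoint rule for the integral term
    for k in range(steps):
        Es = prop((k + 0.5) * ds)
        V += ds * Es @ D @ Es.T
    return V
\end{verbatim}
For $\zeta=0$ the integral term vanishes and the purely symplectic propagation is
recovered.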
For a {\it degenerate} mode frequency, it suffices to build a vector just like
(\ref{redvec}) but now containing
all the degenerate modes. From this, one can proceed as in the non-degenerate case.
For example, if the symplectic eigenvalue is two-fold degenerate, say modes $m$ and $n$,
we define
\begin{equation} \label{degvec}
\check x =
(\hat{\sf q}_a, \hat {\sf q}_b, \hat y_m,\hat y_n,
\hat {\sf p}_a, \hat {\sf p}_b, \hat y_{m+N},\hat y_{n+N})^{\dag},
\end{equation}
and ${\bf H}_{\rm eff}$, $\check{\bf \Gamma }$ and
$\check{\bf D}$ will be $8 \times 8$ matrices. An example will be given in Sec.\ref{d}.
Let us now present a second condition allowing the description of the normal modes as
subjected to local and non-interacting baths. This will happen whenever $ \lambda_{(k)}$
appearing in (\ref{lindef}) is of the special form
$\lambda_{(k)} = \mathsf S_0^\top \mu_{(k)}$, where $\mathsf S_0$
is the symplectic matrix diagonalizing the Hamiltonian (\ref{hamtw}),
and $\mu_{(k)}$ corresponds to local thermal baths. In other words, $\mu_k$
is determined from $\hat L_k = \mu_{(k)}^{\!\top} \mathsf J \hat{X} $
with $\hat L_k$ given by (\ref{lopthermal}).
Under these circumstances,
the transformed matrix $\mathsf S_0^{-\top} {\bf \Upsilon} \mathsf S_0^{-1}$
in (\ref{dectransf}) assumes the form (\ref{decmat}) as a direct
consequence of the symplectic property of $\mathsf S_0$:
\begin{equation} \label{}
\hat L_k = \lambda_{(k)}^{\!\top} \mathsf J \hat{X} =
(\mathsf S_0^\top \mu_{(k)})^{\top} \mathsf J \mathsf S_0^\top \hat Y =
\mu_{(k)}^{\top} \mathsf J \hat Y.
\end{equation}
Since $\mu_{(k)}$ corresponds to local thermal baths, we achieved our goal.
If we give up the requirement of having local baths,
there are still other possibilities to attain an effective LME involving just a
few degrees of freedom. For instance, when the structure of the network is such that
the transformed $\Upsilon$ in (\ref{decmat4}) only interconnects the resonant
oscillators, the effective dynamics will still only involve themselves,
but possibly in a non-local way.
\section{Example: Linear Chain}\label{elc}
Consider a chain of $N$ harmonic oscillators with frequency $\omega$ and
coupled by springs (Hooke's law) with coupling constant $\kappa$,
as depicted in Fig.\ref{fig2system2}.
The external oscillators $a$ and $b$ couple respectively to the
$\alpha^\text{th}$ and $\beta^\text{th}$
oscillators of the chain as in (\ref{hamint}) and have frequency $\Omega$.
\begin{figure}[htpb!]
\includegraphics[width=8cm]{fig2system2.pdf}
\caption{%
Chain of $N$ coupled harmonic oscillators as a network
where oscillators $a$ and $b$ are attached at positions
$\alpha$ and $\beta$, respectively.
Coupling constants $\kappa$ and $\epsilon$
refer to Hooke-like forces.
} \label{fig2system2}
\end{figure}
The free Hamiltonian (\ref{hamfree}) for this particular configuration is defined with
\begin{equation} \label{hesschain}
{\mathbf H}_{\rm e} = \Omega \mathsf I_4, \,\,\,
{\mathbf H}_{\rm N} = {\bf Q} \oplus {\omega \, \mathsf I_N },
\end{equation}
where ${\bf Q}$ is a $N \times N$ potential matrix whose elements are given by
\begin{equation} \label{potchain}
{\bf Q}_{\! j k} =
( \omega + \kappa ) \delta_{jk}
- \tfrac{\kappa}{2} (\delta_{j1}\delta_{1 k} + \delta_{j N}\delta_{N k} +
\delta_{j k\pm1} ).
\end{equation}
Notice that we are taking $M\Omega=1$ as discussed in Sec.\ref{di}.
The matrix ${\bf Q}$ in (\ref{potchain}) is tridiagonal and symmetric, which implies that
it can be diagonalized by an orthogonal transformation \cite{kulkarni}. In particular,
$ {\pmb O} {\bf Q} {\pmb O}^\top = {\rm Diag}(h_1,...,h_N ) $,
with
\begin{equation} \label{toep}
\begin{aligned}
{\pmb O}_{\! j k} & =
\sqrt{\frac{2-\delta_{j1}}{N} } \, \cos \tfrac{(j-1)(2k-1) \,\pi}{2 N}, \\
h_k & = (\omega + \kappa) -\kappa \cos\tfrac{(k -1) \,\pi}{ N}.
\end{aligned}
\end{equation}
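The closed-form pair (\ref{toep}) is easy to check numerically. The short sketch
below is an illustration added here (not part of the original text); it builds
${\bf Q}$ from (\ref{potchain}) and verifies
${\pmb O}{\bf Q}{\pmb O}^{\top} = {\rm Diag}(h_1,\dots,h_N)$, with the values of
$N$, $\omega$ and $\kappa$ being the illustrative ones used later in
Fig.~\ref{fig3ocn1}.
\begin{verbatim}
import numpy as np

N, omega, kappa = 10, 1.0, 20.0      # illustrative values
Q = (omega + kappa) * np.eye(N) \
    - 0.5 * kappa * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
Q[0, 0] -= 0.5 * kappa               # open-end corrections delta_j1 delta_1k
Q[-1, -1] -= 0.5 * kappa             # and delta_jN delta_Nk

j, k = np.mgrid[1:N + 1, 1:N + 1]    # row j = mode index, column k = site
O = np.sqrt((2.0 - (j == 1)) / N) * np.cos((j - 1) * (2 * k - 1) * np.pi / (2 * N))
h = (omega + kappa) - kappa * np.cos(np.arange(N) * np.pi / N)

assert np.allclose(O @ Q @ O.T, np.diag(h))   # eq. (toep)
assert np.allclose(O @ O.T, np.eye(N))        # O is orthogonal
\end{verbatim}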
Now we proceed to reveal the normal modes of the chain.
Following (\ref{tw2}), we calculate
\begin{equation}
{\rm Spec}(\mathsf J {\bf H}_{\rm N}) = (i{\varsigma}_1,...,i{\varsigma}_N,
-i{\varsigma}_1,...,-i{\varsigma}_N),
\end{equation}
with
\begin{equation} \label{sesmc}
{\varsigma}_k = \sqrt{ \omega (\omega + \kappa) -
\omega \kappa \cos\tfrac{(k - 1) \,\pi}{ N} } ,
\,\,\,
k = 1,...,N .
\end{equation}
The set (\ref{sesmc}) defines the symplectic spectrum (\ref{nettw1})
and is indeed not degenerate.
The symplectic spectrum of the external oscillators is
$\Lambda_{\rm e} = \Omega \mathsf I_4$.
With these in hand, we are able to construct $\mathsf S_0$ which is the
symplectic matrix (\ref{hamtw}) that diagonalizes ${\bf H}_{\rm e}\oplus{\bf H}_{\rm N}$.
By doing that,
the free Hamiltonian (\ref{hamfree}) with ${\bf H}_{\rm e}$ and ${\bf H}_{\rm N}$ will be
indirectly diagonalized when written in terms of the normal modes.
We start by conveniently writing $\mathsf S_0$ as
\begin{eqnarray} \label{sympdiag}
\mathsf S_0 = \mathsf I_4 \oplus \mathsf S,
\end{eqnarray}
with $\mathsf S$ the matrix that performs the symplectic diagonalization of
${\bf H}_{\rm N}$.
Considering
${\bf M} = {\bf H}_{\rm N} = {\bf Q} \oplus {\omega \, \mathsf I_N }$ in (\ref{tw2a}),
one can show that $O = {{\pmb O} \oplus {\pmb O} }$
is a solution of (\ref{must}) with $\pmb O$ defined in (\ref{toep}).
By explicitly working with (\ref{tw2a}),
it is now easy to show that
$
\mathsf S = {\pmb S} {\pmb O} \! \oplus \! {\pmb S}^{-1} {\pmb O}
$
with
\begin{equation}
{\pmb S} = {\rm Diag}\left(\sqrt[4]{ \tfrac{\omega}{ h_1 } },...,
\sqrt[4]{\tfrac{\omega}{ h_N } } \right),
\end{equation}
for $h_k$ defined in (\ref{toep}).
Also, it may be useful to note that
\begin{equation} \label{aux}
{\mathsf S}_{m \mu} =
\sqrt{ \frac{ \omega}{ \varsigma_m}} \, \pmb{O}_{ \! m \mu}, \,\,\,
{\mathsf S}_{m + N \mu} = 0 \,\,\,\,\, ( \mu = \alpha, \beta).
\end{equation}%
When the external oscillators are put in resonance with the $m^{\text{th}}$
normal mode, {\it i.e.}, $\varsigma_m = \Omega$,
the following effective Hamiltonian is obtained with application of (\ref{hamint3a})
and (\ref{aux})
\begin{equation} \label{hameff}
\begin{aligned}
&\tilde H_{\rm eff}^{(m)} =
\frac{\hbar\epsilon\omega}{4 { \varsigma_m }}
\left( \pmb{O}_{\! m \alpha}^{2} + \pmb{O}_{\! m \beta }^{2} \right)
\hat a_m^\dagger\hat a_m
+ \frac{\hbar\epsilon}{4} ( \hat{\sf a}^\dagger_a \hat{\sf a}_a +
\hat{\sf a}^\dagger_b \hat{\sf a}_b ) \\
& -\frac{\hbar\epsilon\sqrt{\omega} }{4 \sqrt{ \varsigma_m} }
\left[
\pmb{O}_{ \! m \alpha }( \hat a_m \hat{\sf a}_a^{\dag} + \hat a_m^{\dag} \hat{\sf a}_a)
+ \pmb{O}_{ \! m \beta } ( \hat a_m \hat{\sf a}_b^{\dag} + \hat a_m^{\dag} \hat{\sf a}_b)
\right].
\end{aligned}
\end{equation}
Now a few important remarks. First, one can clearly see that the resonances are not
equivalent. For example, if the resonant mode is chosen to be $m = 1$,
that is $\Omega = \varsigma_1$, the dynamics will be independent of the positions
at which the external oscillators are connected to the chain (translational invariance).
In other words, there is no dependency on $\alpha$ and $\beta$
(see Fig.~\ref{fig2system2}), and this follows from
$\pmb{O}_{\! 1 \mu } = 1/\sqrt{N},\,\, \forall \mu$, see Eq.~(\ref{toep}).
For other resonances ($\Omega = \varsigma_m$, $m \ne 1$),
translational invariance is broken and the dynamics will depend drastically
on $\alpha,\beta$.
For instance, if $\alpha = k N/(2m -2) + 1/2$ with $k \in \mathbb Z^\ast$,
then $\pmb O_{m \alpha } = 0$ and the external oscillator $a$ is effectively decoupled
from the chain. The aforementioned positions $\alpha$ are nodes (zero amplitude)
of the high-frequency modes ($m > 1$), and this results in the decoupling.
As a final remark, only when resonance is set with mode $m=1$ is the closed chain
considered in \cite{plenio2005} equivalent to the open chain
treated here, {\it i.e.}, both topologies can be described by the Hamiltonian
$\tilde H_{\rm eff}^{(1)}$.
Now we turn our attention to the interaction with the environment.
One can see from (\ref{sympdiag}) that
\begin{equation} \label{restrans}
\begin{aligned}
\mathsf S_0\mathsf S_0^{\top} &=
\mathsf I_4 \oplus {\pmb S}^{2} \oplus {\pmb S}^{-2} \\
& = \mathsf I_4 \oplus
{\rm Diag}\!\left( \tfrac{\omega}{ \varsigma_1 },...,
\tfrac{\omega}{ \varsigma_N } ,
\tfrac{\varsigma_1 }{ \omega },...,
\tfrac{\varsigma_N }{ \omega }
\right ),
\end{aligned}
\end{equation}
and this leads (\ref{decmat4}) to a special form whose physical interpretation is
that each mode will see only a single local squeezed reservoir,
as discussed in Sec.\ref{t}.
Finally, the physical situation is that of an effective dynamics comprising only the
external oscillators and the $m^\text{th}$
normal mode of the chain, these three subjected to local baths. Since modes other than $m$
follow free dissipative evolutions (decoupled from $a$, $b$, and $m$), they do not have
to be included in the description, provided our interest is in the external oscillators.
Coming back to the position/momentum representation (\ref{hesseff}), we obtain
\begin{equation} \label{hameffinal}
\tilde H_{\rm eff}^{(m)} = \tfrac{\epsilon}{8} \check x^\dag \left( \mathbf{H_q}\oplus
\mathbf{H_q} \right) \check x
\end{equation}
with
\begin{equation} \label{hesseff2}
\mathbf{H_q} =
\left( \begin{array}{ccc}
1 & 0 & - \mathsf{S}_{ \! m \alpha }\\
0 & 1 &
- \mathsf{S}_{ m \beta } \\
- \mathsf{S}_{ m \alpha } &
- \mathsf{S}_{ m \beta } &
(\mathsf{S}_{ m \alpha }^2 + \mathsf{S}_{ m \beta }^2)
\end{array} \right),
\end{equation}
whose elements are given by (\ref{aux}).
For (\ref{decmateff}), we have
\begin{equation} \label{decmateff2}
\begin{aligned}
\check{\bf \Gamma } &:=
\mathsf J^{[6]} \left( \mathbf{H_q} \oplus \mathbf{H_q} \right)
- \frac{\zeta}{2} \mathsf I_{6} , \\
\check{\bf D} & := \hbar \zeta({\bar n}_{\text{th}}+ \tfrac{1}{2})
\left( \mathsf I_2 \oplus \tfrac{\varsigma_m}{\omega} \oplus
\mathsf I_2 \oplus \tfrac{\omega}{\varsigma_m} \right),
\end{aligned}
\end{equation}
which, in association with the symplectic evolution (\ref{rwssimp}),
\begin{equation} \label{rwssimp2}
\mathsf E (t) = \exp\left[ \mathsf J( \mathbf{H_q} \oplus
\mathbf{H_q} )\, t \right]
\in {\rm Sp}(6,\mathbb R) \cap {\rm O}(6),
\end{equation}
allows one to obtain the time evolved CM according to the solution (\ref{cmsol1}).
Detailed expressions for the matrix elements constituting $\mathsf E (t)$
can be found in Appendix~\ref{ma}.
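Putting the pieces together, the sketch below (an added illustration continuing the
previous snippet) assembles $\mathbf{H_q}$ of (\ref{hesseff2}) via (\ref{aux}), the
generator and diffusion matrices of (\ref{decmateff2}), and the propagator of
(\ref{rwssimp2}); note that the factor $\epsilon/4$ of the full Hessian
(\ref{hesseff}) is kept in the exponent, consistently with the scaled time
$\tau=\epsilon t/4$ appearing later in (\ref{funF}). Parameter values are the
illustrative ones of Fig.~\ref{fig3ocn1}.
\begin{verbatim}
from scipy.linalg import expm

m, alpha, beta = 1, N, 1                  # resonant mode and attachment sites
vs = np.sqrt(omega * h)                   # symplectic spectrum, eq. (sesmc)
S_ma = np.sqrt(omega / vs[m - 1]) * O[m - 1, alpha - 1]   # eq. (aux)
S_mb = np.sqrt(omega / vs[m - 1]) * O[m - 1, beta - 1]

Hq = np.array([[1.0, 0.0, -S_ma],
               [0.0, 1.0, -S_mb],
               [-S_ma, -S_mb, S_ma**2 + S_mb**2]])        # eq. (hesseff2)
J6 = np.block([[np.zeros((3, 3)), np.eye(3)],
               [-np.eye(3), np.zeros((3, 3))]])

eps, zeta, nth, hbar = 0.03 * omega, 0.0, 0.0, 1.0
Heff = 0.25 * eps * np.block([[Hq, np.zeros((3, 3))],
                              [np.zeros((3, 3)), Hq]])
Gamma = J6 @ Heff - 0.5 * zeta * np.eye(6)                # (decmateff2)
D = hbar * zeta * (nth + 0.5) * np.diag(
    [1, 1, vs[m - 1] / omega, 1, 1, omega / vs[m - 1]])
E = lambda t: expm(J6 @ Heff * t)                         # (rwssimp2)
\end{verbatim}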
\section{ Application: Energy Transport } \label{et}
The validity of the method developed so far is now carefully studied in a
specific problem of importance for quantum technologies.
This concerns the propagation of energy from a quantum system (oscillator $b$)
to another (oscillator $a$) through a quantum bus (the network).
\subsection{Non-degenerate normal modes}
Let us
consider the propagation of energy between the oscillators $b$ and $a$
through the linear chain, as depicted in Fig.\ref{fig2system2}.
For that, all oscillators are initially prepared in a tensor product of
local vacuum states,
except for $b$ which will be considered in a thermal state (TS).
Thus, the CM (\ref{cmdef}) at $t=0$ for the global system reads
\begin{equation} \label{globalcm0}
{\bf V}_0 =
{\bf V}_{\rm T} \oplus \frac{\hbar}{2} \mathsf I_{2N},
\end{equation}
in which the CM of the subsystem $(a,b)$ is given by
\begin{equation} \label{cm0}
{\bf V}_{\rm T } = \frac{\hbar}{2}
\left( \begin{array}{cc}
1 & 0 \\
0 & 2 \bar{n}_b + 1
\end{array} \right) \oplus
\frac{\hbar}{2}
\left( \begin{array}{cc}
1 & 0 \\
0 & 2 \bar{n}_b + 1
\end{array} \right),
\end{equation}
where
$\bar n_b \ge 0$ is the average number of thermal phonons initially
in the oscillator $b$. Notice that oscillator $a$ is initially in the vacuum state.
We will be interested in the dynamics of the average occupation number of $a$,
and this can be extracted from the evolved CM as
\begin{equation} \label{mon}
{\bar n}_a :=
\langle \hat{\sf a}_a^\dag \hat{\sf a}_a \rangle_t =
\tfrac{1}{2\hbar}\left[ {\bf V}_{\! 1 1 }(t) + {\bf V}_{\! 3 3 }(t) \right]
- \tfrac{1}{2}.
\end{equation}
Let us start with the ideal case of a perfectly isolated system. In this case,
the evolution of ${\bf V}_0$ will be governed by (\ref{cmev})
with ${\bf \Upsilon}=0$ and ${\bf \Gamma } = \mathsf J \mathbf H$, {\it i.e.},
\begin{equation} \label{cmsol2}
{\bf V}(t) = {\rm e}^{ \mathsf J {\bf H} t} \, {\bf V}\!_{0} \,
{\rm e}^{- {\bf H}\mathsf J t } .
\end{equation}
Note that ${\rm e}^{ \mathsf J {\bf H} t} \in {\rm Sp(2N+4,\mathbb R)}$.
Despite the apparent simplicity of this formula, it involves the exponentiation
of $(2N+4)\times (2N+4)$ matrices, a difficult task depending on the magnitude of $N$.
However, using the method developed in Sec.~\ref{ed}, one deals instead with
exponentiation of $6\times 6$ matrices regardless of $N$:
\begin{equation} \label{cmsol3}
\check{\bf V}(t) = {\mathsf E}(t) \, \check{\bf V}\!_{0} \, {\mathsf E}^\top(t).
\end{equation}
Of course, $N$ cannot be considered arbitrarily big.
In that limit, the frequencies of the modes vary in a continuum,
and this spoils the RWA~\cite{plenio2005}.
Now, we evaluate the average occupation number of $a$ as
\begin{equation} \label{moneff}
{\check n}_a :=
\tfrac{1}{2\hbar}\left[ \check{\bf V}_{\! 1 1 }(t) + \check{\bf V}_{\! 33 }(t) \right]
- \tfrac{1}{2},
\end{equation}
which, after using the matrix elements presented in Appendix~\ref{ma}, becomes
\begin{eqnarray} \label{moneff2}
{\check n}_a =
2 \bar{n}_b \,
F({\mathsf S}_{m \alpha}^2 + {\mathsf S}_{m \beta}^2 + 1,\tfrac{\epsilon t}{4})
\end{eqnarray}
with
\begin{equation} \label{funF}
F(\chi,\!\tau) \! = \!
\tfrac{{\mathsf S}_{m \alpha}^2 {\mathsf S}_{m \beta}^2}{\chi(\chi - 1)} \!
\left[ \tfrac{\chi^{-1} - \cos[(\chi - 1)\tau]}{(\chi-1)} \!
+ \! \tfrac{ \cos (\chi\tau)}{\chi} +
(1-{ \cos \tau })\!\right]\!.
\end{equation}
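For completeness, a direct transcription of (\ref{moneff2})--(\ref{funF}) reads as
follows (an added illustration reusing the quantities of the previous snippet); note
that $F(\chi,0)=0$, so the effective occupation correctly starts at zero.
\begin{verbatim}
def F(chi, tau):
    # closed-form kernel (funF); chi = S_ma**2 + S_mb**2 + 1
    pref = S_ma**2 * S_mb**2 / (chi * (chi - 1.0))
    return pref * ((1.0 / chi - np.cos((chi - 1.0) * tau)) / (chi - 1.0)
                   + np.cos(chi * tau) / chi + (1.0 - np.cos(tau)))

def n_a_eff(nb, t):
    # effective occupation of oscillator a, eq. (moneff2)
    chi = S_ma**2 + S_mb**2 + 1.0
    return 2.0 * nb * F(chi, eps * t / 4.0)
\end{verbatim}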
It is interesting to notice that the energy or occupation number of oscillator $a$
depends linearly on ${\bar n}_b$.
In Fig.~\ref{fig3ocn1}, we compare the mean occupation number of oscillator $a$ predicted
by the exact (${\bar n}_a$) and effective (${\check n}_a$) models.
The latter involves just oscillators $a$, $b$,
and normal mode $m=1$ ($\Omega=\varsigma_1 = \omega$).
Additionally, we present the occupation number of oscillator $b$ using the
exact model to see how its energy is dynamically depleted to excite oscillator $a$.
We chose a chain of moderate length
($N=10$) in order to be able to progress computationally within the exact model.
\begin{figure}[!bh]
\includegraphics[width=8.0cm, trim = 0 20 0 0]{fig3ocn1.pdf}
\caption{%
Mean occupation number as a function of dimensionless time $\omega t$.
Solid and dashed lines are exact evolutions for oscillators
$a$ and $b$, respectively, while dots refer to oscillator $a$
using the effective model involving just $a$, $b$ and normal mode $m=1$.
The chain is composed of $N = 10$ oscillators with first-neighbor
interactions by means of Hooke forces with $\kappa/\omega = 20$.
Oscillators $a$ and $b$, with angular frequency $\Omega = 1$,
interact with the network also through Hooke forces.
They couple to network oscillators positioned at
$\alpha = N$ and $\beta = 1$, respectively (ends of the chain).
The coupling strength is $\epsilon/\omega = 0.03$.
Oscillator $b$ starts in a thermal state
with $\bar{n}_b = 1$, while all other oscillators
start in local vacuum states.
} \label{fig3ocn1}
\end{figure}
We are working in the regime $\epsilon \ll \Omega,\varsigma_k$ ($k = 1,...,N$),
and it is clear that the simple effective model produces excellent results. Of course,
as time increases, the agreement is gradually spoiled, as a consequence of the fact that
what supports the RWA is a first-order perturbation theory, which loses applicability for
arbitrarily long times. This was previously observed in \cite{plenio2005}. %
In order to deepen our understanding of the order of magnitude of corrections to the
approximate model, we look more closely at the exact dynamics
$\bar{n}_a$.
In particular, the simplified model predicts that the occupation number of
oscillator $a$ vanishes for ${\bar n}_b = 0$. What does the exact model predict?
To address this question, in Fig.~\ref{fig4ocn2} we present the time evolution
of $\bar{n}_a$ for the same physical parameters used in Fig.~\ref{fig3ocn1},
except for the initial occupation number of $b$, now taken to be ${\bar n}_b = 0$.
What we see are high-frequency, small-amplitude oscillations which contribute little
on average to $\bar{n}_a$.
These contributions, coming from small-amplitude fast oscillations, are a result of
terms discarded in the RWA.
\begin{figure}[!htbp]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig4ocn2.pdf}
\caption{%
Exact time evolution of the mean occupation number of oscillator $a$ for oscillator
$b$ initially prepared in a vacuum state ${\bar n}_b = 0$.
The remaining parameters are kept as in Fig.~\ref{fig3ocn1}.
} \label{fig4ocn2}
\end{figure}
As mentioned before, the effective Hamiltonian changes considerably depending on which
mode is in resonance with the external oscillators.
Since the dynamics of (\ref{moneff2}) is entirely determined by the function $F$,
its analysis should reveal this dependence in a clear way. One can see, for example,
that for fixed physical parameters ($\epsilon$, ${\bar n}_b$, etc.) and resonance with
mode $m=1$, the global maximum of $F$, as a function of time, majorizes all global
maxima attained when resonance is set with other modes. Besides, only when resonance
takes place with $m=1$ is the function $F$ independent of $\alpha$ and $\beta$.
This rich behavior can be explored for controlling transport
in the chain \cite{plenio2005}. In order to appreciate this dependence,
we show in Fig.~\ref{fig5focn} the dynamical behavior of $F$ for resonance with $m=2$
and for $a$ fixed to one of the ends of the chain. One can clearly see the dependence
on $\beta$, {\it i.e.}, the position in the chain to which oscillator $b$ is attached.
\begin{figure}[!b]
\includegraphics[width=8cm,trim=0 20 0 0]{fig5focn.pdf}
\caption{%
Dependence of function
$F$ defined in (\ref{funF}) on $\beta$ and
scaled time $\tau=\epsilon t/4$.
We consider $\alpha = N = 10$ and $m = 2$.
The remaining parameters are kept as in Fig.~\ref{fig3ocn1}.
} \label{fig5focn}
\end{figure}
Now, let us suppose that the energy initially in the system is not due to
oscillator $b$ alone. For example, the network might also have some initial thermal energy.
Would it be possible to theoretically separate the contributions from $b$ and from the
network to the energy absorbed by oscillator $a$? To investigate this question,
we still consider oscillator $b$ initially in a thermal state with thermal
occupation number $\bar{n}_b$, but now each oscillator in the network is initially
found in a local thermal state, all of them at the same temperature, {\it i.e.},
with the same thermal occupation $\bar{n}$. The CM (\ref{cmdef}) for the initial
global state is then
\begin{equation} \label{globalcm02}
{\bf V}'_0 =
{\bf V}_{\rm T} \oplus \tfrac{\hbar}{2} (2{\bar n} + 1) \, \mathsf I_{2N},
\end{equation}
with ${\bf V}_{\rm T}$ as in (\ref{cm0}). By using Eq.~(\ref{moneff})
and information in Appendix~\ref{ma}, it is tedious but straightforward to show that
\begin{equation} \label{moneff3}
{\check n}'_a = {\check n}_a +
4 \chi^{-2} {\mathsf S_{m \beta }^{2}
\sin^{2}\left( \tfrac{\chi \epsilon t}{8} \right) } \bar n ,
\end{equation}
where ${\check n}_a$ is given in (\ref{moneff2}) and $\chi$ is implicitly defined
in (\ref{moneff2}) and (\ref{funF}). From this, some comments are in order.
First of all, one can see that the mean occupation number of oscillator $a$ is indeed
the result of distinct contributions from oscillator $b$ and from the network.
The latter contributes the term which does not depend on the occupation number of
$b$, that is
$
4 \chi^{-2} {\mathsf S_{m \beta }^{2} \sin^{2}
\left( \tfrac{\chi \epsilon t}{8} \right) } \bar n
$.
It is worth noticing that this contribution increases with the temperature of the network
oscillators as one could expect. The separation between contributions coming from $b$ and
normal mode $m$ is possible because the effective model involves only three bodies and no
direct coupling between $a$ and $b$. Being able to extract this kind of information from
a complex system is one of the main advantages of simplified but accurate models.
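In terms of the previous snippet, the thermal-network correction (\ref{moneff3})
amounts to a single extra term (an added illustration):
\begin{verbatim}
def n_a_eff_thermal(nb, nbar, t):
    # eq. (moneff3): network initially in local thermal states with
    # occupation nbar; reduces to n_a_eff when nbar = 0
    chi = S_ma**2 + S_mb**2 + 1.0
    return (n_a_eff(nb, t)
            + 4.0 * S_mb**2 * np.sin(chi * eps * t / 8.0)**2 * nbar / chi**2)
\end{verbatim}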
Another comment concerns the relatively small flux of energy from the network to
oscillator $a$. Let us consider again the chain with $N=10$ oscillators used to produce
Fig.~\ref{fig3ocn1}. Although there was initially a total of ten thermal phonons
in the network (one for each of the ten oscillators), only $2.8\%$ of them are
absorbed by $a$. This can be seen from Fig.~\ref{fig6ocn3}, where we show the time
evolution of (\ref{moneff3}) considering oscillator $b$ in the vacuum state, while the
ten oscillators of the network are prepared in local thermal states with $\bar{n} =1$.
The physical explanation for this observation relies on the fact that the initial ten
thermal phonons are shared by all normal modes. When resonance is set to one of these
modes, the energy in the other modes becomes unavailable to $a$ or $b$.
\begin{figure}[!b]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig6ocn3.pdf}
\caption{%
Mean occupation number as a function of dimensionless time $\omega t$.
The physical parameters are that of Fig.~\ref{fig3ocn1},
except for the initial state of network oscillators which now
is the product of local thermal states with $\bar n = 1$,
and the initial state of $b$ which now is vacuum,
{\it i.e.}, $\bar n_b = 0$.
} \label{fig6ocn3}
\end{figure}
We now move to a scenario where the oscillators (external and network) are
subjected to local thermal baths whose action on the system is given by (\ref{lopthermal}).
Now, the equation of motion for the CM is given by (\ref{cmev}) with
\begin{equation} \label{dynmat1}
{\bf \Gamma} = - \frac{\zeta}{2} \mathsf I_{2N+4} + \mathsf J \mathbf H, \,\,\,
{\bf D } = \hbar \zeta({\bar n}_{\text{th}} + \tfrac{1}{2}) \mathsf I_{2N+4},
\end{equation}
and its formal solution is given by (\ref{cmsol}). It is clear that the exponentiation
of $(2N+4)\times(2N+4)$ matrices causes computational difficulties already
for moderately high $N$. Besides, it is basically impossible to progress analytically
within this many-body description.
Using the results developed here, one can give a clear and accurate description of the
dynamics of the external oscillators working with (\ref{cmsol1}) and (\ref{decmateff2})
instead. Now, the matrices to be exponentiated are just $6\times 6$, and one may show
that
\begin{equation} \label{cmsol4}
\check{\bf V}(t) = {\rm e}^{-\zeta t}
{\mathsf E}(t) \, \check{\bf V}\!_{0} \, {\mathsf E}^\top(t) +
\frac{1}{\zeta}
\left( 1 - {\rm e}^{-\zeta t} \right) \check{\bf D},
\end{equation}
with ${\mathsf E}(t)$ still given by (\ref{rwssimp2}).
It is interesting to notice that the unitary part of this evolution,
already present in (\ref{cmsol3}), is now exponentially attenuated with
characteristic time $\zeta^{-1}$ in the above equation.
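In code, the closed form (\ref{cmsol4}) is a one-liner on top of the previous
snippets (again an added sketch; it presupposes the diagonal $\check{\bf D}$ of
(\ref{decmateff2}), for which the time integral in (\ref{cmsol1}) can be carried
out exactly):
\begin{verbatim}
zeta, nth = 0.001 * omega, 1.0           # illustrative open-system values
D = hbar * zeta * (nth + 0.5) * np.diag(
    [1, 1, vs[m - 1] / omega, 1, 1, omega / vs[m - 1]])

def V_open(V0, t):
    # closed-form solution (cmsol4); requires zeta > 0
    Et = E(t)
    return (np.exp(-zeta * t) * Et @ V0 @ Et.T
            + (1.0 - np.exp(-zeta * t)) / zeta * D)

# steady state (sstate2): V_open(V0, t) -> D / zeta as t -> infinity
\end{verbatim}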
Let us then compare the predictions of the approximate effective
model developed here with the exact dynamics. In Fig.~\ref{fig7ocn4},
we present the mean occupation number for oscillator $a$ following both descriptions.
We keep the notation used in the closed-system case. Given the general agreement between
both descriptions, it is clear that our methodology works
well also for the open system case. This plot shows that the higher
the relaxation constant $\zeta$, the sooner the occupation number of
oscillator $a$ reaches that of the thermal reservoir it is interacting with,
which is ${\bar n}_{\text{th}}=1$. As time passes, the presence of
the initial state ${\mathbf V}_0$
in (\ref{cmsol4}) is progressively erased by ${\rm e}^{-\zeta t}$,
which makes the CM tend to
\begin{equation} \label{sstate2}
\lim_{t\to \infty} \check{\bf V}(t) = \frac{1}{\zeta}\check{\bf D},
\end{equation}
showing that not only $a$ thermalizes with its local bath,
but also $b$ and mode $m$ do the same.
\begin{figure}[!t]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig7ocn4.pdf}
\caption{%
Mean occupation number as a function of dimensionless time $\omega t$.
Each of the $N+2$ oscillators are attached to
thermal reservoirs with ${\bar n}_{\text{th}} = 1$.
Two values of $\zeta$ are considered: $\zeta = 0.01$ (dashed line) and
$\zeta = 0.001$ (solid line).
Dots are obtained using the effective model involving just
$a$, $b$ and normal mode $m=1$.
The initial state and the remaining parameters are kept the same as in
Fig.~\ref{fig3ocn1}.
} \label{fig7ocn4}
\end{figure}
The plot in Fig.~\ref{fig7ocn4} presents another interesting feature.
The effective model is based on the RWA,
whose validity is justified by first-order perturbation theory.
Hence, the validity of the approximation is
limited to finite times. However, we see that the simplified model
gives the correct asymptotic limit,
as seen clearly in the case $\zeta = 0.01$. It can be seen with $\zeta = 0.001$
as well, but at longer times (not shown in the plot).
The reason why the long-time regime is not spoiled is that the RWA is made only for
the Hamiltonian part of the dynamics, which becomes less and less important with
time, see (\ref{cmsol4}) and (\ref{sstate2}).
The agreement between the complete exact model and
our simplified model also shows that the decoupling mechanism in terms of local
reservoirs in the modes works perfectly well.
In summary, our model gives very accurate results for the initial cycles of the dynamics
and for the long time limit when each oscillator is coupled to a thermal bath in the
conditions discussed here.
If the system is isolated, {\it i.e.}, no local reservoirs are attached to the oscillators,
the accuracy will just slowly and gradually be spoiled with time as seen before.
Now, we want to be more quantitative in terms of the accuracy of the simplified model
developed here.
In order to do that, we will investigate the density matrix for oscillator $a$
predicted by exact and approximate models, denoted by $\hat \rho_a$
and $\check \rho_a$,
respectively. We will employ the fidelity $\mathcal{F}$ between these states as a
figure of merit \cite{scutaru}:
\begin{equation}
\mathcal{F} := \left[ {\rm Tr}
\left(\sqrt{\hat \rho_a} {\check \rho_a}
\sqrt{\hat \rho_a} \right)^{\frac{1}{2}} \right]^{2} \le 1.
\end{equation}
Since we are working with Gaussian states centered at the origin of phase space,
one can show that the above formula reduces to \cite{scutaru}
\begin{equation} \label{fid1}
\mathcal{F} = \frac{2}{ \sqrt{ \det( {\bf V_{\!a}} + \check{\bf V}_{\! \bf a} ) +
\left(\det {\bf V_{\!a}} - 1 \right)\left(\det \check{\bf V}_{\! \bf a} - 1\right) } -
\sqrt{\left(\det {\bf V_{\!a}} - 1 \right)\left(\det \check{\bf V}_{\! \bf a} - 1 \right) } },
\end{equation}
where ${\bf V_{\!a}}$ and $\check{\bf V}_{\! \bf a}$ are, respectively, the CMs of
subsystem $a$ obtained with (\ref{cmsol2}) and (\ref{cmsol3}):
\begin{equation}
{\bf V_{\!a}} =
\begin{pmatrix}
{\bf V}_{11} & {\bf V}_{13} \\
{\bf V}_{31} & {\bf V}_{33}
\end{pmatrix}, \,\,\,
\check{\bf V}_{\! \bf a} =
\begin{pmatrix}
\check{\bf V}_{11} & \check{\bf V}_{13} \\
\check{\bf V}_{31} & \check{\bf V}_{33}
\end{pmatrix}.
\end{equation}
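A compact numerical transcription of (\ref{fid1}) is given below (an added sketch;
it assumes the $2\times2$ CMs are rescaled by $2/\hbar$, so that the vacuum CM is
the identity and the terms $\det{\bf V_{\!a}}-1$ are dimensionless):
\begin{verbatim}
def fidelity_gaussian(Va, Vc):
    # single-mode, zero-mean Gaussian fidelity (fid1); CMs scaled so that
    # the vacuum CM is the identity (i.e., pass 2 V / hbar)
    delta = (np.linalg.det(Va) - 1.0) * (np.linalg.det(Vc) - 1.0)
    return 2.0 / (np.sqrt(np.linalg.det(Va + Vc) + delta) - np.sqrt(delta))

# sanity check: two vacua give unit fidelity
assert np.isclose(fidelity_gaussian(np.eye(2), np.eye(2)), 1.0)
\end{verbatim}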
\begin{figure}[!t]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig8fid1.pdf}
\caption{%
Dynamics of $1-\mathcal F$, where $\mathcal F$ is the fidelity
between evolved density operators for oscillator $a$ using the exact
and effective models. The physical parameters in the main plot are the
same as in Fig.~\ref{fig3ocn1}, but in the inset the coupling constant
with the network is slightly reduced to $\epsilon/\omega = 0.01$.
} \label{fig8fid1}
\end{figure}
In Fig.~\ref{fig8fid1}, we plot $(1-\mathcal{F})$ as a function of time for the same
physical parameters considered in Fig.~\ref{fig3ocn1}. Just like in the plots of
occupation number, here too the fidelity progressively deteriorates in time,
which corresponds to the breaking of the RWA for the closed system. However,
many oscillations are necessary for this deterioration to cause appreciable deviations.
It is interesting to see that just a slight reduction of $\epsilon$
(inset of Fig.~\ref{fig8fid1}) is enough to make the deterioration even weaker.
This is in complete agreement with the first-order time-dependent perturbation theory
justification of the RWA used in this paper.
\subsection{Degenerate normal modes} \label{d}
To emphasize the generality of effective descriptions based on the methodology developed
here, let us now consider the system depicted in Fig.~\ref{fig9system3}.
It opens up the possibility for studying the resonance between the external oscillators
and degenerate normal modes.
\begin{figure}[!htbp]
\includegraphics[width=8cm,trim=0 0 0 0]{fig9system3.pdf}
\caption{%
Triangular network of oscillators,
where $\epsilon$, $\kappa$ and $\kappa'$ are coupling constants
for oscillators coupled by springs (Hooke's forces).
} \label{fig9system3}
\end{figure}
The free Hamiltonian of the network is written as in Eq.~(\ref{hamfree})
but now with
\begin{equation} \label{hesschain2}
{\mathbf H}_{\rm N} = {\bf Q} \oplus {\omega \, \mathsf I_3 },
\end{equation}
where the $3 \times 3$ potential matrix is given by
\begin{equation} \label{potchain2}
\!{\bf Q} = \left(
\begin{array}{ccc}
\kappa + \omega & -\tfrac{1}{2}\kappa & -\tfrac{1}{2}\kappa \\
-\tfrac{1}{2} \kappa & \tfrac{1}{2}(\kappa + \kappa') + \omega & -\tfrac{1}{2}\kappa' \\
-\tfrac{1}{2}\kappa & -\tfrac{1}{2}\kappa' & \tfrac{1}{2}(\kappa + \kappa') + \omega
\end{array} \right).
\end{equation}
The symplectic spectrum (\ref{tw1}) reads now
\begin{equation}\label{update}
\begin{aligned}
\varsigma_1 &= \omega, \,\,\, \varsigma_2 = \sqrt{\omega(\omega + \kappa/2 + \kappa' )},
\,\,\, \\
\varsigma_3 &= \sqrt{\omega(\omega + 3\kappa/2 )}.
\end{aligned}
\end{equation}
From (\ref{must}), one can calculate the matrix that performs the
symplectic diagonalization of the free Hamiltonian. The same structure
as in (\ref{sympdiag}) is found here too, {\it i.e.},
$\mathsf S = {\pmb S} {\pmb O} \! \oplus \! {\pmb S}^{-1} {\pmb O}$,
but now with
\begin{equation} \label{matS}
{\pmb S} = \!
{\rm Diag}\!\left(\! \sqrt[4]{\tfrac{\omega}{\varsigma_1}},\!
\sqrt[4]{\tfrac{\omega}{\varsigma_2}},\!
\sqrt[4]{\tfrac{\omega}{\varsigma_3}} \right)\!,
{\pmb O} \!=\!\!
\left(\!\!\!\!
\begin{array}{ccc}
\frac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\
0 & - \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
-\frac{\sqrt{2}}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}
\end{array}\!
\right)\!\!,
\end{equation}
with $\pmb O$ the orthogonal matrix that performs the Euclidean diagonalization of
the potential matrix $\bf Q$.
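As before, the spectral data can be checked numerically; the following added sketch
verifies (\ref{potchain2})--(\ref{matS}) with illustrative couplings:
\begin{verbatim}
import numpy as np

w, kap, kapp = 1.0, 0.4, 0.7             # illustrative couplings
Q3 = np.array([[kap + w, -kap / 2, -kap / 2],
               [-kap / 2, (kap + kapp) / 2 + w, -kapp / 2],
               [-kap / 2, -kapp / 2, (kap + kapp) / 2 + w]])
O3 = np.array([[1 / np.sqrt(3), 1 / np.sqrt(3), 1 / np.sqrt(3)],
               [0.0, -1 / np.sqrt(2), 1 / np.sqrt(2)],
               [-np.sqrt(2.0 / 3.0), 1 / np.sqrt(6), 1 / np.sqrt(6)]])
assert np.allclose(O3 @ Q3 @ O3.T,
                   np.diag([w, w + kap / 2 + kapp, w + 1.5 * kap]))
vs3 = np.sqrt(w * np.array([w, w + kap / 2 + kapp, w + 1.5 * kap]))  # (update)
\end{verbatim}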
Considering $\kappa' \neq \kappa $, the effective Hamiltonian is the same
as in (\ref{hameffinal}) with $\mathbf{H_q}$ (\ref{hesseff2}) defined in terms of
$\mathsf S_{m \mu}$ calculated using $\mathsf S$ (\ref{matS})
with index $m = 1,2,3$ and $\mu = 2,3$.
The matrices in (\ref{decmateff2}) are also the same, provided we update the symplectic
eigenvalues to (\ref{update}).
With this replacement, results
(\ref{moneff2}), (\ref{moneff3}), (\ref{cmsol4}), and (\ref{sstate2}) stay valid.
On the other hand, if $\kappa = \kappa'$, the symplectic spectrum is degenerate since
$\varsigma_2 = \varsigma_3$. As prescribed in Sec.~\ref{eh},
if the external oscillators are set in resonance with this degenerate mode,
$\Omega = \varsigma_2= \varsigma_3$, operator (\ref{degvec}) becomes
$\check x =
(\hat{\sf q}_a, \hat {\sf q}_b, \hat q_2,\hat q_3,
\hat {\sf p}_a, \hat {\sf p}_b, \hat p_2,\hat p_3)^{\dag}$,
and the effective dynamics will be governed by (\ref{hameff2}) which,
for the present case reads
$\hat H_{\rm eff}^{(2,3)} =
\tfrac{\epsilon}{8} \check x^\dag \left( \mathbf{H_q}\oplus \mathbf{H_q} \right) \check x$
with
\begin{equation} \label{hesseff3}
\mathbf{H_q} =
\left( \!\!\!
\begin{array}{cccc}
1 & 0 & - \mathsf{S}_{ 2 2 } & - \mathsf{S}_{ 3 2} \\
0 & 1 & - \mathsf{S}_{ 2 3 } & - \mathsf{S}_{ 3 3} \\
- \mathsf{S}_{ 2 2} & - \mathsf{S}_{ 2 3} &
\mathsf{S}_{ 2 2}^2 + \mathsf{S}_{ 2 3}^2 &
\mathsf{S}_{ 2 2} \mathsf{S}_{ 3 2} + \mathsf{S}_{ 2 3} \mathsf{S}_{ 3 3} \\
- \mathsf{S}_{ 3 2} & - \mathsf{S}_{ 3 3} &
\mathsf{S}_{ 2 2} \mathsf{S}_{ 3 2} + \mathsf{S}_{ 2 3} \mathsf{S}_{ 3 3} &
\mathsf{S}_{ 3 2}^2 + \mathsf{S}_{ 3 3}^2
\end{array}\!\!\!\!\right),
\end{equation}
to be evaluated with (\ref{matS}).
Now one can calculate
$\mathsf E (t) = \exp\left[ \mathsf J {\mathbf H}_{\rm eff}\, t \right]
\in {\rm Sp}(8,\mathbb R)$
and determine the dynamics of the occupation number for oscillator $a$ which is
plotted in
Fig.~\ref{fig10ocn5}.
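The degenerate $4\times4$ block (\ref{hesseff3}) can be assembled in the same way
(an added sketch continuing the previous snippet, with $\kappa=\kappa'$ as in
Fig.~\ref{fig10ocn5}):
\begin{verbatim}
w, kap = 1.0, 1.0 / 3.0                  # kappa = kappa'
vs3 = np.array([w,
                np.sqrt(w * (w + 1.5 * kap)),
                np.sqrt(w * (w + 1.5 * kap))])      # degenerate pair
S3 = (w / vs3)[:, None] ** 0.25 * O3     # upper block S O of (matS)
S22, S23 = S3[1, 1], S3[1, 2]            # S_{m mu}: modes m = 2,3 and
S32, S33 = S3[2, 1], S3[2, 2]            # attachment sites mu = 2,3
Hq4 = np.array(
    [[1.0, 0.0, -S22, -S32],
     [0.0, 1.0, -S23, -S33],
     [-S22, -S23, S22**2 + S23**2, S22 * S32 + S23 * S33],
     [-S32, -S33, S22 * S32 + S23 * S33, S32**2 + S33**2]])   # (hesseff3)
\end{verbatim}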
\begin{figure}[!htbp]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig10ocn5.pdf}
\caption{%
Mean occupation number as a function of dimensionless time $\omega t$.
The topology depicted Fig.~\ref{fig9system3} is used with
$\kappa = \kappa'$, and resonance is taken with the resulting
degenerate modes $\Omega = \varsigma_2=\varsigma_3$.
As before, the solid line is exact and dots refer to the
approximate model.
The dashed line corresponds to the dynamics where one mistakably
and naively includes only mode $3$ in the effective model.
We consider
$\kappa/\omega = \kappa'/\omega = 1/3$ and
$\epsilon/\omega = 1/600$. The initial state of oscillator $b$
is thermal with $\bar{n}_b = 1$, and the other oscillators
are in local vacuum states.
} \label{fig10ocn5}
\end{figure}
Again, the agreement of the simplified model (now two-mode) with the exact dynamics
is remarkable. For comparison, the behavior with just one mode in the effective
description is also shown.
The reason is to draw attention to the fact that degeneracy should be carefully
taken into account through the effective description (\ref{hameff2}).
For longer times, not shown in the plot, the mean occupation number
$\bar n_a$ attains $\bar n_b = 1$ within the precision of the numerical treatment of
the original model. The effective model cannot be used at such long times, as previously
discussed.
The inclusion of thermal baths for each oscillator is made along the lines of the previous
examples (\ref{decmateff2}). Now, one should only be careful to take into account
the presence of one more
mode, {\it i.e.},
\begin{equation} \label{decmateff3}
\begin{aligned}
\check{\bf \Gamma } &:=
\mathsf J^{[8]} \left( \mathbf{H_q} \oplus \mathbf{H_q} \right)
- \frac{\zeta}{2} \mathsf I_{8} , \\
\check{\bf D} & := \hbar \zeta({\bar n}_{\text{th}} + \tfrac{1}{2})
\left( \mathsf I_2 \oplus \tfrac{\varsigma_2}{\omega}
\oplus \tfrac{\varsigma_3}{\omega} \oplus
\mathsf I_2 \oplus \tfrac{\omega}{\varsigma_2}
\oplus \tfrac{\omega}{\varsigma_3} \right).
\end{aligned}
\end{equation}
The effect is essentially the same as in Fig.~\ref{fig7ocn4} and, for this reason,
we will not add a plot for this case.
Control of errors due to the approximations made to obtain (\ref{hesseff3})
is again made through inspection of $(1-\mathcal{F})$, with $\mathcal F$ defined
in (\ref{fid1}). This is presented in Fig.~\ref{fig11fid2}. One can see that,
in agreement with what is shown in Fig.~\ref{fig10ocn5}, the fidelity is quite
high, meaning that the effective model produces accurate results even in the case of
degeneracy. As expected, the fidelity slowly degrades with time.
\begin{figure}[!htbp]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig11fid2.pdf}
\caption{%
Dynamics of $1-\mathcal F$, where $\mathcal F$ is the fidelity
between evolved density operators for oscillator $a$ using
the exact and effective models. This is a case with degeneracy
and the parameters are those considered in Fig.~\ref{fig10ocn5}.
} \label{fig11fid2}
\end{figure}
\subsection{Beyond Hooke's Law}
In previous examples, the interaction between oscillators in the network follows Hooke's
law, {\it i.e.}, spring-like couplings.
This implies that the effective Hamiltonian in (\ref{hesseff}) does not present cross
terms involving position and momentum. Mathematically, this is the same as
${\bf C_{\bf qp}}$ being null in (\ref{hesseffb}).
Since the method is applicable to any positive definite Hamiltonian,
this section considers a toy model where momentum
and position are coupled in the interaction Hamiltonian. For this purpose,
we consider now that (\ref{hamfree}) is defined with
\begin{equation} \label{hammp}
{\bf H}_{\rm N} =
\left(\begin{array}{cc}
\omega \mathsf I_{3} & {\mathbf C} \\
{\mathbf C} & \omega \mathsf I_{3}
\end{array}
\right), \,\,\,
{\bf H}_{\rm e} = \Omega \mathsf I_4,
\end{equation}
where we considered $N = 3$ oscillators in the network and
\begin{equation}
\mathbf C = \frac{\gamma}{2} \, \mathsf I_3 - \frac{1}{\sqrt{2}}
\left(\begin{array}{ccc}
0 & \kappa & 0 \\
\kappa & 0 & \kappa \\
0 & \kappa & 0
\end{array}
\right).
\end{equation}
The above matrix will lead to
$- \frac{\kappa}{\sqrt{2}} (\hat q_1\hat p_2 + \hat q_2\hat p_1 +
\hat q_3\hat p_2 + \hat q_2 \hat p_3)
+ \gamma\sum_{i=1}^{3}(\hat q_i\hat p_i + \hat p_i\hat q_i) $
in the network Hamiltonian.
Since ${\bf H}_{\rm N}$ must be positive definite,
condition $\omega > \kappa + \gamma$ has to be imposed.
The external oscillators, $a$ and $b$, interact with the network as usual,
see (\ref{hamint}).
For this example, symplectic diagonalization of ${\bf H}_{\rm N}$ results in
\begin{equation}
\begin{aligned}
\varsigma_1 & = \sqrt{\omega^2 - (\kappa+\gamma)^2},\,\,\,
\varsigma_2 = \sqrt{\omega^{2} - (\kappa-\gamma)^{2}} , \\
\varsigma_3 & = \sqrt{\omega^{2} - \gamma^{2}} ,
\end{aligned}
\end{equation}
from which one obtains the matrix that performs the
symplectic diagonalization of ${\bf H}_{\rm N}$. This is now written as
$\mathsf S_{\rm N} = ({\pmb S} \oplus {\pmb S}^{-1}) \mathsf R ({\pmb O} \oplus {\pmb O})$,
where
\begin{equation}
\begin{aligned} \label{matS2}
{\pmb S} & =
{\rm Diag}\left( \sqrt[4]{\tfrac{\omega - \kappa - \gamma}{\omega + \kappa + \gamma}},
\sqrt[4]{\tfrac{\omega + \kappa - \gamma}{\omega - \kappa + \gamma}},
\sqrt[4]{\tfrac{\omega - \gamma}{\omega + \gamma}} \right), \\
\mathsf R & = \frac{1}{\sqrt{2}}
\left(
\begin{array}{cc}
\mathsf I_3 & \mathsf I_3 \\
-\mathsf I_3 & \mathsf I_3
\end{array}
\right), \,\,\, \\
{\pmb O} & =
\left(
\begin{array}{ccc}
\frac{1}{2} & -\frac{1}{\sqrt{2}} & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{\sqrt{2}} & \frac{1}{2} \\
-\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}
\end{array}
\right),
\end{aligned}
\end{equation}
with $\pmb O$ the orthogonal matrix that performs the Euclidean diagonalization of $\bf C$.
Writing the effective Hamiltonian (\ref{hesseff}) for $\Omega = \varsigma_1$,
we can work on the time evolution of the covariance matrix (\ref{cmsol2})
to obtain the mean occupation number of the oscillator $a$,
as plotted in Fig.~\ref{fig12ocn6}.
Again, the effective model agrees quite well with the exact dynamics.
The inclusion of local thermal baths would follow just like
before, since again $\mathsf S_{\rm N}\mathsf S_{\rm N}^\top$ is a diagonal matrix.
\begin{figure}[!tp]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig12ocn6.pdf}
\caption{%
Mean occupation number of oscillator $a$ as a function of
dimensionless time $\omega t$. The Hamiltonian of the network
is given by (\ref{hammp}) with
$\kappa/\omega = 0.5$ and $\gamma/\omega = 0.2$.
The solid line is the exact time evolution,
while the dots are the result of the effective model. Oscillators $a$ and
$b$ possess frequency $\Omega = \varsigma_1$ and are coupled, respectively,
to oscillators $\alpha = 1$ and $\beta = 3$ in the network with
$\epsilon/\omega = 0.001$. %
Oscillator $b$ is initially in a thermal state
with $\bar{n}_b = 1$ and
all other oscillators are initially in local vacuum states.
} \label{fig12ocn6}
\end{figure}
Fidelity is again used to infer the quality of the approximations made to obtain the
simplified model, see Fig.~\ref{fig13fid3}.
The result shows that the accuracy of the effective description is again remarkable.
\begin{figure}[!htbp]
\includegraphics[width=8.0cm,trim=0 20 0 0]{fig13fid3.pdf}
\caption{%
Dynamics of $1-\mathcal F$, where $\mathcal F$ is the fidelity between evolved density
operators for oscillator $a$ using the exact and effective models.
This plot refers to the case considered in Fig.~\ref{fig12ocn6}.
} \label{fig13fid3}
\end{figure}
\section{ Final Remarks } \label{fr}
We have described a general method to obtain useful and accurate effective descriptions
of large open systems formed by coupled harmonic oscillators. The idea is that two
external oscillators weakly coupled to a network of harmonic oscillators may have
their dynamics
effectively described by a model with just a few coupled degrees of freedom.
This was first seen, for the linear case of first-neighbor coupled oscillators
with neither degeneracy nor thermal reservoirs, in \cite{plenio2005}.
We improve and expand this idea by considering any topology of the network and
by including environments for all elements of the system. For the unitary case,
the only restriction is that the system Hamiltonian must be positive definite.
When environments are attached to the oscillators, we show that further structural
restrictions must be imposed to guarantee the simplified descriptions. In general,
we showed that the number of effectively coupled constituents depends on the nature
of the symplectic spectrum of the Hessian of the Hamiltonian and on resonances.
As an application and illustration of the method, we consider the problem of
propagation of energy through the network. Meaningful and informative analytical
results could be obtained in the scope of the simplified model.
We also presented how fidelity between the evolved states under exact and effective
descriptions behaves, and the result shows that the accuracy of the simplified
model is quite remarkable. Different topologies are used to illustrate the
applicability of the methodology presented here.
It is worthwhile noticing that instead of coupling single harmonic oscillators
to the network, one could have networks coupled to networks and obtain simplified
models involving few coupled normal modes of different networks.
In this case, the first step is the symplectic diagonalization of each network; then,
through resonances, one ends up with an effective model following our recipe.
Our work considers finite networks as environments for the two external oscillators,
in contrast to the typical baths necessary to model a system-reservoir interaction.
In the latter, the network is formed by an infinite set of harmonic oscillators
or a continuum of modes. Our approach is of interest because it applies
to the intermediate case where the network is big enough not to be amenable to
exact analytical treatment and, at the same time, not big enough to allow
the usual approximations that follow from interaction with a bath.
These approximations are needed, in general, for ending up with a useful master equation.
It is our opinion that the present study can contribute to studies involving
transport of different physical resources in coupled harmonic systems by
allowing effective descriptions amenable to analytical progress.
This is important for the evaluation, for example, of limits on channel capacities
or even stationary heat currents, just to name a few direct applications.
During the revision of this work, we became aware of \cite{galve}, which treats a
similar system but from a different point of view, with limits of validity and aims
different from ours.
\acknowledgments
FN and FLS are supported by the CNPq ``Ci\^{e}ncia sem Fronteiras''
programme through the ``Pesquisador Visitante Especial'' initiative
(grant nr. 401265/2012-9).
FLS is a member of the Brazilian National Institute of Science and Technology of
Quantum Information (INCT-IQ) and acknowledges partial support from CNPq
(grant nr. 307774/2014-7).
\section*{ Appendix }
\section{Introduction}
The research on statistical properties of turbulent flows can be traced back to the semi-empirical theories of turbulence in the 1920's and 1930's, with seminal advances by Prandtl \citep{Prandtl1925}, von K\'arm\'an \citep{Karman1930} and Taylor \citep{Taylor1921, Taylor1935}. The goal of statistical fluid mechanics is to provide good descriptions and computational tools for understanding the distributions of the velocity random fields of turbulent fluid flows. Unlike some other unsolved problems in theoretical physics, the equations of motion for fluid dynamics, even for turbulent flows, have been known for over a century. These equations are highly non-linear and non-local partial differential equations, and it is difficult to extract information about the evolution of fluid flows in a deterministic manner. Thus, the velocity field of a turbulent flow is better considered as a random field arising from either the random initial data or a random external force, or both. To understand the statistics of turbulent flows, it is desirable to know, if possible, the evolution of some distributional characteristics of fluid flows. The distribution of a random field such as the velocity field is rather complicated, and determining the distribution of turbulent flows is a challenging task even when the initial distribution is known. In the 1950's, Hopf \citep{EHopf1952} (see also \citep{MoninYaglom1965}) derived a functional differential equation for the law of the velocity random field, but his equation involves functional derivatives.
In the past decades, the probability density function (PDF) method, based on the transport equation, a formal adjoint of the Navier-Stokes equation, has been developed into a useful tool for modelling turbulent flows. This method focuses on evaluating the one-point one-time PDF $p(u;x,t)$ of the velocity field $U(x,t)$, or equivalently of the centred field $U(x,t)-\overline{U(x,t)}$. The exact transport equation for the PDF, which involves the mean of the pressure term as well as the conditional expectation of the pressure term, has been derived by Pope and can be found in \citep{Pope1985,Pope2000}. However, only a few features can be extracted from the formal PDF transport equation for the purpose of modelling turbulent flows. Therefore, applications of PDF methods have been based on the generalised Langevin model, where the time-dependent velocity $U(t)$ of a particle at position $X(t)$ is assumed to satisfy a stochastic differential equation (SDE).
The main contribution in this paper is the derivation of the PDF partial differential equation (PDE) for the velocity random field, which is much more explicit than the formal PDF transport equation. This PDE is a generalisation of the Reynolds mean flow equations, which can be closed by introducing the Reynolds stress tensor field. Having $6$ dimensions in space $(u,x)$ and $1$ dimension in time $t$, our PDF PDE can be regarded as a parabolic-transport equation, which has a parabolic operator $\frac{1}{2}\partial_{t}+u\cdot\nabla_{x}+\nu\Delta_{x}$ in $x$ and a transport operator $\frac{1}{2}\partial_{t}+\nabla_{u}\cdot$ in $u$. However, the PDF PDE for velocity fields is a second-order partial differential equation which is in general not parabolic, due to the appearance of a mixed derivative term $\partial_{u^{i}}\partial_{x^{k}}p$. Even if this mixed derivative does not appear in the PDF PDE (which is the case for some turbulent flows, as will be explained below), the parabolic part of the PDF PDE involves only the variable $x$, and therefore even in this case the PDF PDE is highly degenerate. This feature of the PDF PDE distinguishes itself from the prevalent parabolic PDEs or other types of PDE theories in the literature.
The PDF PDE that we have obtained relies on two conditional structure functions: the conditional average increment
\[
\rho^{i}(x,y,u,t)=\E\left[U^{i}(y,t)-U^{i}(x,t)\,|\,U(x,t)=u\right]
\]
and the conditional covariance
\[
\sigma^{ij}(x,y,u,t)=\textrm{cov}\left[U^{i}(y,t),U^{j}(y,t)\;|\;U(x,t)=u\right].
\]
These conditional structure functions describe the interactions of the velocity random field at different positions; hence it is natural that they appear in the PDF PDE. The fact that the distribution of velocity random fields is characterised by the conditional first and second moments is an interesting feature revealed in this paper. These statistical characteristics are local, yet they have the capacity of determining the PDF PDE. Moreover, these local statistical characteristics can localise many concepts, such as homogeneity and isotropy, which were first introduced by Taylor and Kolmogorov \citep{K41a,K41b,Taylor1935}, allowing us to generalise such concepts to their weak versions.
We outline the main structure of this paper in the following. In Section 2, we introduce definitions related to random fields, which are the cornerstones of our main results. The evolution equation for the distribution of the velocity random field of turbulence over time will be derived, under the assumption that the random field is regular. The PDE is then applied to various types of flows in Section 3, including both the viscid and inviscid cases. We also obtain a stochastic representation formula for the solution of the PDF PDE, together with the constraint that ensures the solution is indeed a PDF for all time $t\geq0$ and position $x\in \R^3$. These theoretical results are important when we apply the PDF PDE for modelling turbulent flows. Section 4 is thus devoted to an example of modelling the PDF of turbulence, which demonstrates the change of the distribution at a fixed position $x$ over time. The paper closes with a few remarks in the last section.
\textit{Conventions on notations}. The following set of conventions is employed throughout the paper. Firstly, Einstein's convention of summation over repeated indices through their ranges is assumed, unless otherwise specified. If $A$ is a vector or a vector field (usually in the space of dimension three) dependent on some parameters, then its components are labelled with upper-script indices, so that $A \coloneqq \left(A^{i}\right) =\left(A^{1},A^{2},A^{3}\right)$. The same convention applies to coordinates too. The derivative operators $\nabla$ and $\Delta$ are labelled with subscripts to indicate the variable to which the operator is applied, such as $\Delta_{x}\coloneqq\partial_{x^{i}}\partial_{x^{i}}$ and $\nabla_{x}\cdot A\coloneqq \partial_{x^{i}}A^{i}$. Finally, the velocity vector field will be denoted by $U=\left(U^{i}\right)$,
unless otherwise specified.
\section{PDF equation of velocity fields}
In this section, we aim to introduce some fundamental concepts on random fields and to derive the evolution equation for the random velocity field $\{U(x,t)\}_{x,t}$, where $x\in\R^{3},$ $t\geq0$ and $U(x,t)$ takes values in $\R^{3}$.
\subsection{Random fields and their statistical characteristics}
Given a random field $\left\{ U(x,t)\right\} _{x,t}$ on a probability space $(\Omega,\sF,\P)$, $U(x,t)$ is, by definition, an $\mathbb{R}^{3}$-valued random variable for every $x\in\mathbb{R}^{3}$ and $t\geq0$. The law or the distribution of $U(x,t)$ for fixed $x$ and $t$ is a probability measure on the Borel $\sigma$-algebra of $\mathbb{R}^{3}$. The distribution of the random field $U$ consists of, by definition, all possible finite-dimensional marginal joint distributions of
\[
U(x_{1},t_{1}),\ldots,U(x_{n},t_{n})
\]
where $x_{i}\in\mathbb{R}^{3}$, $t_{i}\geq0$ and any positive integer $n$. For example, by saying that the random field $\{U(x,t):x\in\mathbb{R}^{3},t\geq0\}$ is Gaussian, we refer to the fact that any finite-dimensional marginal joint distribution is a Gaussian distribution, which in particular implies that the marginal distribution of $U(x,t)$ for any $(x,t)$ has a normal distribution. We remark that the converse argument is not true in general.
The most important statistic for understanding a random field is the correlation function of two random variables, which plays a dominant role in the study of turbulence \citep{Batchelor1953,MoninYaglom1965}. In this paper, however, we emphasise the use of a few statistical characteristics based on the conditional distribution. Let us introduce these statistics, which we believe are of the greatest importance.
\begin{defn}
Given a time-dependent random field $\left\{ U(x,t)\right\} _{x,t}$ on a probability space $(\Omega,\sF,\P)$, for $x,y,u\in\mathbb{R}^{3}$ and $t\geq0$, and $i\in \{1,2,3\}$,
\begin{enumerate}[label=\arabic*)]
\item the \textit{conditional average increment function} $\rho^{i}$ is defined as
\begin{equation}\label{eq: con_mean_diff}
\rho^{i}(x,y,u,t)=\E\left[U^{i}(y,t)-U^{i}(x,t)\,|\,U(x,t)=u\right],
\end{equation}
\item the \textit{conditional covariance function} $\sigma(x,y,u,t)$ is defined to be the covariance of $U(y,t)-U(x,t)$ given $U(x,t)=u$,
\begin{equation}\label{eq: con_covar}
\sigma^{ij}(x,y,u,t)=\textrm{cov}\left[\left(U^{i}(y,t),U^{j}(y,t)\right)\,|\,U(x,t)=u\right].
\end{equation}
\end{enumerate}
\end{defn}
From the definition, it is clear that for every $i$, the \textit{conditional mean function} is of the form
\begin{equation}
b^{i}(x,y,u,t)\coloneqq\E\left[U^{i}(y,t)\,|\,U(x,t)=u\right]=\rho^{i}(x,y,u,t)+u^{i}\label{eq: con_mean}
\end{equation}
and $\rho(x,x,u,t)=0$ for all $u$, $x$ and $t$. The conditional covariance function $\sigma^{ij}(x,y,u,t)$ can be treated as the conditional Reynolds stress. These statistical characteristics have explicit representations in terms of the two-point joint distribution. For our purpose, it is convenient to assume that the distribution of $U(x,t)$ for every $(x,t)$ has a probability density function (PDF) with respect to the Lebesgue measure on $\R^{3}$, denoted by $p(u;x,t)$, in the sense that
\[
\E\left[\ione_{\left\{ U(x,t)\in E\right\} }\right]=\int_{E}p(u;x,t)\rmd u\quad\textrm{ for any Borel set }E\in\mathscr{B}({\R^{3}}).
\]
Similarly, the joint distribution of $U(x,t)$ and $U(y,t)$ at two distinct points $x\neq y$ has a joint PDF, denoted by $p_{2}(u_{1},u_{2};x,y,t)$. It follows that the conditional law of $U(y,t)$ given that $U(x,t)=u$ possesses the following conditional PDF
\[
p_{2|1}(v;u,x,y,t)\coloneqq\P(U(y,t)=v\,|\:U(x,t)=u)=\frac{p_{2}(u,v;x,y,t)}{p(u;x,t)}
\]
with $p_{2|1}(v;u,x,y,t)=0$ if $p(u;x,t)=0$. In terms of the conditional law, the joint PDF of $U(x,t)$ and $U(y,t)$ may be split into a product
\begin{align*}
p_{2}(u_{1},u_{2};x,y,t) & =p(u_{1};x,t)p_{2|1}(u_{2};u_{1},x,y,t)\\
& =p(u_{2};y,t)p_{2|1}(u_{1};u_{2},y,x,t).
\end{align*}
As a result, we may represent the conditional average increment function (\ref{eq: con_mean_diff}) and the covariance function (\ref{eq: con_covar}) as integrals against the conditional density, namely
\begin{equation}
\rho^{i}(x,y,u,t)=\intr(v^{i}-u^{i})p_{2|1}(v;u,x,y,t)\rmd v\label{eq: con_mean_diff_integral}
\end{equation}
and
\[
\sigma^{ij}(x,y,u,t)=\intr\left(v^{i}-b^{i}\right)\left(v^{j}-b^{j}\right)p_{2|1}(v;u,x,y,t)\rmd v.
\]
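When $\rho$ and $\sigma$ are to be measured rather than modelled, the integral formulae above suggest a direct empirical estimator: restrict a sample of pairs $(U(x,t),U(y,t))$ to those realisations whose first component lies near $u$, then average. The following Python sketch is our own illustration (not part of the paper); the joint sampler is a stand-in for data coming from experiments or simulations.
\begin{verbatim}
# Sketch: estimate the conditional average increment rho(x,y,u,t) and the
# conditional covariance sigma(x,y,u,t) from n paired samples of
# (U(x,t), U(y,t)), by conditioning on |U(x,t) - u| < eps.
import numpy as np

rng = np.random.default_rng(0)

def sample_pairs(n):
    """Hypothetical sampler of jointly Gaussian pairs (U(x,t), U(y,t))."""
    ux = rng.standard_normal((n, 3))
    uy = 0.8 * ux + 0.6 * rng.standard_normal((n, 3))  # correlated copy
    return ux, uy

def conditional_stats(ux, uy, u, eps=0.2):
    # Approximate the conditioning {U(x,t) = u} by the slab |U(x,t)-u| < eps.
    mask = np.linalg.norm(ux - u, axis=1) < eps
    inc = uy[mask] - ux[mask]          # increments U(y,t) - U(x,t)
    rho = inc.mean(axis=0)             # estimate of rho(x,y,u,t)
    # Under exact conditioning, cov of the increment equals cov of U(y,t).
    sigma = np.cov(inc.T)              # estimate of sigma^{ij}(x,y,u,t)
    return rho, sigma

ux, uy = sample_pairs(200_000)
rho, sigma = conditional_stats(ux, uy, u=np.array([0.5, 0.0, -0.5]))
print(rho, sigma, sep="\n")
\end{verbatim}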
The use of conditioning techniques is in fact the main reason for founding statistical fluid mechanics on probability theory, rather than on an averaging procedure, as first explicitly proposed by Kolmogorov \citep{K41a,K41b}. Homogeneity and isotropy can be defined in general for random fields indexed by a space variable $x\in\mathbb{R}^{3}$; these notions were introduced into the study of turbulence by G. I. Taylor. Locally homogeneous and locally isotropic flows were introduced by Kolmogorov for formulating the K41 theory (and its improved version, the K61 theory). According to Kolmogorov \citep{K41a,K41b}, a random field $\left\{ U(x,t)\right\} _{x,t}$ is locally homogeneous if for any $x,y\in \R^3$, the conditional distribution of $U(y,t)-U(x,s)$ given $U(x,s)=u$ depends only on $y-x$ and $u$, and it is further locally isotropic if the conditional distribution depends only on $|y-x|$ and $u$. By using the conditional average and the conditional covariance functions, it is possible to generalise these terminologies to their weak versions. We are now in a position to state technical assumptions on the random field.
\begin{defn}
The random field $\{U(x,t)\}_{x,t}$ on the probability space $(\Omega,\sF,\P)$ is
\begin{enumerate}[label=\arabic*)]
\item \textit{regular} if the conditional average increment function $\rho$ has derivatives up to second order and $\rho(x,y,u,t)$ has a Taylor expansion (for every $x,u$ and $t$ fixed) about $y$:
\begin{equation}
\rho(x,y,u,t)=B_{k}(x,u,t)(y^{k}-x^{k})+\frac{1}{2}A_{jk}(x,u,t)(y^{j}-x^{j})(y^{k}-x^{k})+o\left(|y-x|^{2}\right)\label{eq: taylor_expansion}
\end{equation}
as $|y-x|\rightarrow0$, where
\begin{equation}
B_{k}(x,u,t)=\left.\frac{\partial}{\partial y^{k}}\rho(x,y,u,t)\right|_{y=x}\textrm{ and }A_{jk}(x,u,t)=\left.\frac{\partial^{2}}{\partial y^{j}\partial y^{k}}\rho(x,y,u,t)\right|_{y=x},\label{eq: BA_vectors}
\end{equation}
and $B$ and $A$ are differentiable in $x^{i},u^{i},t$ for all $i\in\{1,2,3\}$;
\item \textit{weakly homogeneous} if given $u\in\R^{3},t\geq0$, $\rho^{i}(x,y,u,t)=o(|y-x|)$ as $|y-x|\rightarrow0$ for all $i,y,x$;
\item \textit{weakly isotropic} if both $\rho(x,y,u,t)$ and $\sigma(x,y,u,t)$ depend only on $|y-x|$, $u$ and $t$, and $A_{kk}^{i}$ only depends on $u$ and $t$.
\end{enumerate}
\end{defn}
The functions $B_{k}^{i}$ and $A_{jk}^{i}$ in the Taylor expansion of the conditional average increment $\rho^{i}(x,y,u,t)$ also have the form
\[
\begin{cases}
B_{k}^{i}(x,u,t)\,=\E\left[\frac{\partial}{\partial x^{k}}U^{i}(x,t)\,\bigg|\,U(x,t)=u\right],\\
A_{jk}^{i}(x,u,t)=\E\left[\frac{\partial^{2}}{\partial x^{j}\partial x^{k}}U^{i}(x,t)\,\bigg|\,U(x,t)=u\right]
\end{cases}
\]
for $i,j,k\in\{1,2,3\}$. Moreover, if $\{U(x,t)\}_{x,t}$ is weakly homogeneous, we have the equivalent characterisation
\[
B_{k}^{i}(x,u,t)=\lim_{\varepsilon\rightarrow0}\frac{\rho^{i}(x,x+\varepsilon e^{(k)},u,t)}{\varepsilon}=0
\]
for all $i,k\in\{1,2,3\}$, where $e^{(k)}$ is the unit vector at the
$k$-th direction.
Unlike Kolmogorov's definitions of isotropic and homogeneous flows, our concept of weak isotropy has no direct relationship to weak homogeneity. Nevertheless, a regular, locally homogeneous turbulent flow in the sense of Kolmogorov satisfies the condition that $A_{kk}^{i}$ depends only on $u$, since the conditional average increment of such a flow must obey $\rho^{i}(x,y,u,t)=g^{i}(y-x,u)$ for some function $g^{i}$ and
\begin{align*}
A_{jk}^{i}(x,u,t) & =\frac{\partial^2 g^{i}}{\partial y^{j}\partial y^{k}}(0,u).
\end{align*}
Moreover, if we further assume the flow is locally isotropic, the conditional average increment function satisfies $\rho^{i}(x,y,u,t)=g^{i}(|y-x|,u)$ and
\begin{align*}
B_{k}^{i}(x,u,t)=\lim_{\varepsilon\rightarrow0^{+}}\frac{\rho^{i}(x,x+\varepsilon e^{(k)},u,t)}{\varepsilon} & =\lim_{\varepsilon\rightarrow0^{+}}\frac{-\rho^{i}(x,x-\varepsilon e^{(k)},u,t)}{\varepsilon}
\end{align*}
is well-defined if and only if $B_{k}^{i}=0$. Therefore, such a turbulent flow is both weakly homogeneous and weakly isotropic in our sense.
Apart from extending Kolmogorov's definitions of homogeneity and isotropy, the significance of introducing these terminologies is that they eliminate the mixed-derivative terms in the PDF PDE, which will be thoroughly explained in section \ref{sec: Application_to_turbulent_flows}.
\subsection{The evolution equation for the velocity distribution}
In this subsection, we derive the main theoretical result, which provides the foundation for modelling PDFs of turbulent flows based on two statistical characteristics. We consider an incompressible turbulent flow, inviscid or viscous, with kinematic viscosity constant $\nu$, which is positive for a viscous fluid and zero for an inviscid fluid. The turbulent flow is described by its velocity $U=(U^{1},U^{2},U^{3})$ and the pressure $P$, which are random fields satisfying the three-dimensional Navier-Stokes equations
\begin{align}
\frac{\partial U^{i}}{\partial t}+U^{j}\frac{\partial U^{i}}{\partial x^{j}} & =\nu\Delta_{x}U^{i}-\frac{\partial P}{\partial x^{i}},\label{eq:ns-m1}\\
U(x,0) & =U_{0}(x),\nonumber
\end{align}
where $i=1,2,3$ and $\nu\geq0$ is the viscosity constant, together
with the constraint
\begin{equation}
\frac{\partial U^{j}}{\partial x^{j}}=0.\label{eq:ns-m2}
\end{equation}
The initial condition is also treated as a random field on $\R^{3}$, and each sample $\omega\in\Omega$ corresponds to a deterministic function $U(x,t;\omega)$, which serves as a solution to equation (\ref{eq:ns-m1}) with initial data $U_{0}(x;\omega)$. For the purpose of understanding the local properties of turbulent flows, we discuss an ideal case where the region occupied by the fluid is the entire space $\mathbb{R}^{3}$. Moreover, without further qualification, the dynamical variables $U(x,t)$ and $P(x,t)$ decay to zero sufficiently fast as $|x|$ tends to infinity. In addition, to avoid technical difficulties, but without implying that these issues are unimportant, we assume that $U(x,t)$ and $P(x,t)$ are sufficiently smooth functions of $(x,t)$.
Due to the divergence-free condition (\ref{eq:ns-m2}), the pressure term satisfies the following Poisson equation
\[
\Delta_{x}P=-\frac{\partial^{2}(U^{i}U^{j})}{\partial x^{j}\partial x^{i}}.
\]
Therefore, by Green's formula, we have the integral representation
\begin{equation}
P(x,t)=\intr\frac{1}{4\pi|y-x|}\frac{\partial^{2}(U^{i}U^{j})}{\partial y^{j}\partial y^{i}}\rmd y,\label{eq: pressue}
\end{equation}
which implies in particular that the distribution of $P$ is completely determined by the distribution of the velocity random field.
We assume that $\{U(x,t)\}_{x,t}$ is a regular random field. Since $U(x,t)$ is divergence-free, as in equation (\ref{eq:ns-m2}), we have, for every $k$,
\[
\frac{\partial}{\partial y^{i}}\rho^{i}(x,y,u,t)=0,\quad B_{i}^{i}=0,\quad A_{ik}^{i}=A_{ki}^{i}=0,
\]
as well as the following integral condition for PDF of $U(x,t)$
\begin{equation}
\frac{\partial}{\partial x^{i}}\intr u^{i}p(u;x,t)\rmd u=0,\label{eq:div-PDF}
\end{equation}
which will appear as a natural constraint for the PDF PDE we will derive.
Recall that the Reynolds equation (see \cite{Reynolds1894}) is obtained by taking the average in (\ref{eq:ns-m1}), more explicitly
\begin{align*}
\frac{\partial\E\left[U^{i}\right]}{\partial t}+\E\left[U^{j}\frac{\partial U^{i}}{\partial x^{j}}\right] & =\nu\Delta_x\E\left[U^{i}\right]-\frac{\partial\E\left[P\right]}{\partial x^{i}},\\
\frac{\partial\E\left[U^{i}\right]}{\partial x^{i}} & =0.
\end{align*}
The conventional treatment for the non-linear term on the left-hand side is to write
\[
\E\left[U^{j}\frac{\partial U^{i}}{\partial x^{j}}\right]=\E\left[U^{j}\right]\frac{\partial\E\left[U^{i}\right]}{\partial x^{j}}+\frac{\partial}{\partial x^{j}}r^{ij},
\]
where
\[
r^{ij}=\E\left[(U^{i}-\E[U^{i}])(U^{j}-\E[U^{j}])\right]
\]
is the Reynolds stress. The PDF equation can be obtained by carrying out this computation for the average $\E\left[F(U)\right]$, where $F:\R^{3}\to\R$ is a smooth function with compact support, instead of choosing $F(x)=x$ for each velocity component as in the case of the Reynolds stress. We are now in a position to establish the main result of this paper:
\begin{thm}
\label{thm: PDF_PDE}Let $\left\{ U(x,t)\right\} _{(x,t)\in\R^{3}\times\R_{+}}$ be a regular random field defined on the probability space $(\Omega,\sF,\P)$, taking values in $\R^{3}$. Suppose $\left\{ U(x,t)\right\} _{x,t}$ satisfies the Navier-Stokes equation (\ref{eq:ns-m1}) and the continuity equation (\ref{eq:ns-m2}); then the PDF $p(u;x,t)$ of the velocity $U(x,t)$ satisfies the evolution equation:
\begin{align}
\begin{split}\label{eq: PDF_PDE_origin}
\frac{\partial p}{\partial t}+u^{i}\frac{\partial p}{\partial x^{i}} & =\nu\Delta_{x}p+\frac{\partial}{\partial u^{i}}\left(\nu\frac{\partial\left(pB_{k}^{i}\right)}{\partial x^{k}}-\nu pA^{i}+pQ^{i}\right),\\
p(u;x,0) & =p_{0}(u;x),
\end{split}
\end{align}
where $p_{0}(u;x)$ is the PDF of $U_{0}(x)$ in the initial random field
$\left\{ U_{0}(x)\right\} _{x}$ and
\begin{equation}
Q^{i}(x,u,t)=\intr\frac{y^{i}-x^{i}}{4\pi|y-x|^{3}}\frac{\partial^{2}\left(\sigma^{jk}+b^{j}b^{k}\right)}{\partial y^{k}\partial y^{j}}\rmd y,\label{eq:q-term}
\end{equation}
\begin{equation}
B_{k}^{i}=\left.\frac{\partial\rho^{i}(x,y,u,t)}{\partial y^{k}}\right|_{y=x},\textrm{ and }A^{i}(x,u,t)=\left.\Delta_{y}\rho^{i}(x,y,u,t)\right|_{y=x}=A_{kk}^{i}(x,u,t).\label{eq: BA_in_PDE}
\end{equation}
\end{thm}
\begin{proof}
Let $F\in D(\R^{3})$ be a test function, i.e. a smooth real-valued function with compact support. For simplicity, we denote $F_{i}(u)\coloneqq\partial_{u^{i}}F(u)\in D(\R^{3})$. Applying $\partial_{t}$ to the average $\E\left[F(U(x,t))\right]$ and exchanging differentiation and integration, we have
\[
\frac{d}{dt}\mathbb{E}\left[F(U(x,t))\right]=\int F(u)\frac{\partial}{\partial t}p(u;x,t)\rmd u.
\]
On the other hand
\begin{equation}
\frac{d}{dt}\mathbb{E}\left[F(U(x,t))\right]=\mathbb{E}\left[\frac{\partial}{\partial t}F(U(x,t))\right]\label{eq:exp-tu1}
\end{equation}
and, utilising the Navier-Stokes equations (\ref{eq:ns-m1}), we get
\[
\left(\frac{\partial}{\partial t}-\nu\Delta_{x}\right)F(U)=-\frac{\partial\left(U^{i}F(U)\right)}{\partial x^{i}}-\nu\frac{\partial F_{i}(U)}{\partial x^{k}}\frac{\partial U^{i}}{\partial x^{k}}-F_{i}(U)\frac{\partial P}{\partial x^{i}}.
\]
Substituting this into (\ref{eq:exp-tu1}), we obtain that
\[
\mathbb{E}\left[\frac{\partial}{\partial t}F(U)\right]=\nu\Delta_{x}\mathbb{E}\left[F(U)\right]-\frac{\partial}{\partial x^{i}}\mathbb{E}\left[U^{i}F(U)\right]+J_{1}+J_{2},
\]
where the first two terms can be written as
\[
\nu\Delta_{x}\mathbb{E}\left[F(U)\right]=\intr F(u)\nu\Delta_{x}p(u;x,t)\rmd u,
\]
and
\[
-\frac{\partial}{\partial x^{i}}\mathbb{E}\left[U^{i}F(U)\right]=\intr F(u)\left(-u^{i}\frac{\partial}{\partial x^{i}}p(u;x,t)\right)\rmd u.
\]
The remaining terms $J_{1}$ and $J_{2}$ are of the form
\[
J_{1}=-\nu\mathbb{E}\left[\frac{\partial F_{i}(U)}{\partial x^{k}}\frac{\partial U^{i}}{\partial x^{k}}\right],\quad J_{2}=-\mathbb{E}\left[F_{i}(U)\frac{\partial P}{\partial x^{i}}\right].
\]
The evaluation of $J_{1}$ and $J_{2}$ requires invoking the joint distribution at two points together with taking limits; we express these terms via the PDF, which allows us to perform similar computations in a general setting. The product $U^{j}\frac{\partial U^{i}}{\partial x^{j}}$ may be written as the limit
\[
U^{j}(x,t)\frac{\partial}{\partial x^{j}}U^{i}(x,t)=\lim_{h\rightarrow0}\frac{1}{h}U^{j}(x,t)\left(U^{i}(x+he^{(j)},t)-U^{i}(x,t)\right).
\]
Assuming that we may take the average under the limit, i.e. that the dominated convergence theorem applies, we can write the non-linear term as
\[
\E\left[U^{j}\frac{\partial U^{i}}{\partial x^{j}}\right]=\lim_{h\rightarrow0}\frac{1}{h}\E\left[U^{j}(x,t)\left(U^{i}(x+he^{(j)},t)-U^{i}(x,t)\right)\right]=:\lim_{h\rightarrow0}\frac{L^{i}(h)}{h}.
\]
The average appearing on the right-hand side, denoted by $L^{i}(h)$, may be evaluated in terms of the two-point joint distribution
\begin{align*}
L^{i}(h) & =\intr\intr u^{j}u_{1}^{i}p_{2}(u,u_{1};x,x+he^{(j)},t)\rmd u_{1}\rmd u-\int u^{j}u^{i}p(u;x,t)\rmd u\\
& =\int u^{j}p(u;x,t)\left[\intr u_{1}^{i}p_{2|1}(u_{1};u,x,x+he^{(j)},t)\rmd u_{1}\right]\rmd u\\
& \quad -\intr u^{j}u^{i}p(u;x,t)\rmd u\\
& =\intr u^{j}p(u;x,t)\left[\intr(u_{1}^{i}-u^{i})p_{2|1}(u_{1};u,x,x+he^{(j)},t)\rmd u_{1}\right]\rmd u\\
& =\intr u^{j}p(u;x,t)\rho^{i}(x,x+he^{(j)},u,t)\rmd u,
\end{align*}
from which it follows that
\[
\frac{\partial}{\partial x^{j}}\E\left[U^{j}U^{i}\right]=\intr u^{j}B_{j}^{i}(x,u,t)p(u;x,t)\rmd u.
\]
Now we deal with $J_{1}$. We write the space derivatives $\frac{\partial F_{i}(U)}{\partial x^{k}}\frac{\partial U^{i}}{\partial x^{k}}$
as the limit
\[
\lim_{h\rightarrow0}\frac{1}{h^{2}}\left(F_{i}(U(x+he^{(k)},t))-F_{i}(U(x,t))\right)\left(U^{i}(x+he^{(k)},t)-U^{i}(x,t)\right),
\]
where $e^{(1)}=(1,0,0)$ and so on. Taking expectation first, we obtain
that
\begin{align*}
J_{1} & =-\nu\lim_{h\rightarrow0}\frac{1}{h^{2}}\intr\intr\left(F_{i}(u_{2})-F_{i}(u_{1})\right)\left(u_{2}^{i}-u_{1}^{i}\right)p_{2}(u_{1},u_{2};x,x+he^{(k)},t)\rmd u_{1}\rmd u_{2}\\
& =\nu\intr F_{i}(u)\lim_{h\rightarrow0}\frac{I^{i,k}(h)}{h^{2}}\rmd u,
\end{align*}
where
\[
I^{i,k}(h)\coloneqq\int_{\R^{3}}(u_{1}^{i}-u^{i})p_{2}(u,u_{1};x+he^{(k)},x,t)\rmd u_{1}+\int_{\R^{3}}(u_{1}^{i}-u^{i})p_{2}(u,u_{1};x,x+he^{(k)},t)\rmd u_{1}.
\]
Using the conditional probability notation that we introduced, we
may write this as
\begin{align*}
I^{i,k}(h)\coloneqq & p(u;x+he^{(k)},t)\intr(u_{1}^{i}-u^{i})p_{2|1}(u_{1};u,x+he^{(k)},x,t)\rmd u_{1}\\
& +p(u;x,t)\intr(u_{1}^{i}-u^{i})p_{2|1}(u_{1};u,x,x+he^{(k)},t)\rmd u_{1}\\
= & p(u;x+he^{(k)},t)\left(\rho^{i}(x+he^{(k)},x,u,t)+\rho^{i}(x,x+he^{(k)},u,t)\right)\\
& -\left(p(u;x+he^{(k)},t)-p(u;x,t)\right)\rho^{i}(x,x+he^{(k)},u,t).
\end{align*}
The last equality is a result of applying (\ref{eq: con_mean_diff_integral}), which converts integrals into conditional average increments $\rho^{i}$. As a consequence of the regularity condition on the random field, we make use of (\ref{eq: taylor_expansion}) to deduce
\begin{align*}
\rho^{i}(x+he^{(k)},x,u,t)+\rho^{i}(x,x+he^{(k)},u,t) & =\left\{ B_{k}^{i}(x,u,t)h+\frac{1}{2}A_{kk}^{i}(x,u,t)h^{2}+o(h^{2})\right\} \\
 & \quad+\left\{ -B_{k}^{i}(x+he^{(k)},u,t)h+\frac{1}{2}A_{kk}^{i}(x+he^{(k)},u,t)h^{2}+o(h^{2})\right\} \\
 & =\left(B_{k}^{i}(x,u,t)-B_{k}^{i}(x+he^{(k)},u,t)\right)h\\
 & \quad+\frac{1}{2}\left(A_{kk}^{i}(x,u,t)+A_{kk}^{i}(x+he^{(k)},u,t)\right)h^{2}+o(h^{2})
\end{align*}
and therefore
\begin{align*}
\lim_{h\rightarrow0}\frac{1}{h^{2}}I^{i,k}(h) & =-p\frac{\partial}{\partial x^{k}}B_{k}^{i}+pA^{i}-\frac{\partial p}{\partial x^{k}}B_{k}^{i}\\
& =-\frac{\partial}{\partial x^{k}}(pB_{k}^{i})+pA^{i}.
\end{align*}
We perform integration by parts to derive
\[
J_{1}=\intr F(u)\left[-\frac{\partial}{\partial u^{i}}\left(\nu\left[-\frac{\partial}{\partial x^{k}}(pB_{k}^{i})+pA^{i}\right]\right)\right]\rmd u.
\]
Next we handle $J_{2}$. Applying the representation (\ref{eq: pressue}),
we arrive at
\[
\frac{\partial P}{\partial x^{i}}=\intr\frac{y^{i}-x^{i}}{4\pi|y-x|^{3}}\frac{\partial^{2}(U^{j}U^{k})}{\partial y^{k}\partial y^{j}}\rmd y,
\]
which implies
\[
J_{2}=-\mathbb{E}\left[F_{i}(U)\int_{\R^{3}}\frac{y^{i}-x^{i}}{4\pi|y-x|^{3}}\frac{\partial^{2}(U^{j}U^{k})}{\partial y^{k}\partial y^{j}}\rmd y\right].
\]
Writing the derivative in terms of
\begin{align*}
\frac{\partial^{2}(U^{j}U^{k})}{\partial y^{k}\partial y^{j}} & =\lim_{h\rightarrow0}\frac{1}{h^{2}}\bigg\{ U^{j}(y+h(e^{(k)}+e^{(j)}),t)U^{k}(y+h(e^{(k)}+e^{(j)}),t)\\
& \qquad\qquad\;-U^{j}(y+he^{(k)},t)U^{k}(y+he^{(k)},t)-U^{j}(y+he^{(j)},t)U^{k}(y+he^{(j)},t)\\
& \qquad\qquad\;+U^{j}(y,t)U^{k}(y,t)\bigg\}
\end{align*}
and using the two-point PDF, integrating and then taking the limit as $h\rightarrow0$,
leads us to
\[
J_{2}=-\int_{\R^{3}}F_{i}(u)\left[\int_{\R^{3}}\frac{y^{i}-x^{i}}{4\pi|y-x|^{3}}\frac{\partial^{2}}{\partial y^{k}\partial y^{j}}J_{2}^{jk}\rmd y\right]\rmd u,
\]
where $J_{2}^{jk}$ has the following integral form
\begin{align*}
J_{2}^{jk}: & =\int_{\R^{3}}u_{1}^{j}u_{1}^{k}p_{2}(u,u_{1};x,y,t)\rmd u_{1}\\
& =p(u;x,t)\intr u_{1}^{j}u_{1}^{k}p_{2|1}(u_{1};u,x,y,t)\rmd u_{1}\\
& =p(u;x,t)\left(\sigma^{jk}+b^{j}b^{k}\right)(x,y,u,t),
\end{align*}
by using equations (\ref{eq: con_mean_diff}) and (\ref{eq: con_covar}).
Therefore, substituting this into the equation for $J_{2}$ yields
\begin{align*}
J_{2} & =-\int_{\R^{3}}F_{i}(u)p(u;x,t)\int_{\R^{3}}\frac{y^{i}-x^{i}}{4\pi|y-x|^{3}}\frac{\partial^{2}\left(\sigma^{jk}+b^{j}b^{k}\right)}{\partial y^{k}\partial y^{j}}\rmd y\rmd u\\
& =\int_{\R^{3}}F(u)\frac{\partial}{\partial u^{i}}\left[p(u;x,t)Q^{i}(x,u,t)\right]\rmd u.
\end{align*}
Putting all terms together, we deduce that
\begin{align*}
\intr F(u) & \left(-\frac{\partial}{\partial u^{i}}\left(\nu\frac{\partial\left(pB_{k}^{i}\right)}{\partial x^{k}}(x,u,t)-\nu p(u;x,t)A^{i}(x,u,t)+p(u;x,t)Q^{i}(x,u,t)\right)\right.\\
& \quad\left.+\left(\partial_{t}+u\cdot\nabla_{x}-\nu\Delta_{x}\right)p(u;x,t)\right)\rmd u=0
\end{align*}
for all such $F\in D(\R^{3})$. Therefore, we must have (\ref{eq: PDF_PDE_origin}).
\end{proof}
We finish this section by adding several comments. The PDF PDE (\ref{eq: PDF_PDE_origin}) may be written as
\begin{equation}
\left(\frac{\partial}{\partial t}+u^{i}\frac{\partial}{\partial x^{i}}-\nu\Delta_{x}\right)p=\frac{\partial}{\partial u^{i}}\left(\nu B_{k}^{i}\frac{\partial p}{\partial x^{k}}+pC^{i}\right),\label{eq: PDF_PDE_origin_C}
\end{equation}
where for simplicity we introduce the following vector field
\begin{equation}
C^{i}=\nu\frac{\partial B_{k}^{i}}{\partial x^{k}}-\nu A^{i}+Q^{i}\label{eq:C-vector}
\end{equation}
for $i=1,2,3$. Equation (\ref{eq: PDF_PDE_origin_C}) is of mixed
parabolic--transport type. The parabolic operator in the variables
$(x,t)$
\[
\frac{\partial}{\partial t}+u^{i}\frac{\partial}{\partial x^{i}}-\nu\Delta_{x}
\]
is independent of the particular fluid flow, which is a significant feature.
Although the PDF PDE (\ref{eq: PDF_PDE_origin_C}) appears to be linear in the PDF $p(u;x,t)$, it is much more complicated than it looks. In particular, the coefficients $A,B$ and $Q$ are functionals of the conditional average and covariance functions, which are in general not determined by the PDF $p(u;x,t)$ alone. Therefore the PDF PDE (\ref{eq: PDF_PDE_origin_C}) is not a closed partial differential equation. The significance of the PDF PDE lies in the fact that if the statistics $\rho$ and $\sigma$ are considered as given, which will be the case when modelling turbulent flows, then the PDF PDE is a partial differential equation of second order, though in general of mixed parabolic--transport type.
Nevertheless, the PDE (\ref{eq: PDF_PDE_origin}) remains challenging even if $A,B$ and $Q$ are all considered as given. The function $B$ can be understood as the mean velocity gradient at $(x,t)$ conditioned on the velocity vector at $(x,t)$; it brings in the mixed derivatives $\partial_{u^{i}}\partial_{x^{k}}p$, while the corresponding diffusion matrix $(D_{ij})_{1\leq i,j\leq6}$ collecting the second-order terms is of the form
\begin{align*}
D_{ij} & =\begin{cases}
0 & 1\leq i,j\leq3,\\
\nu B_{j-3}^{i} & 1\leq i\leq3,\,4\leq j\leq6,\\
\nu B_{i-3}^{j} & 1\leq j\leq3,\,4\leq i\leq6,\\
\nu\ione_{\left\{ i=j\right\} } & 4\leq i,j\leq6,
\end{cases}
\end{align*}
if we consider $(u,x)$ as a whole. The matrix $D_{ij}$ is not necessarily symmetric, and is not non-negative definite even when it is symmetric. Developing a theory for this kind of mixed-type partial differential equation, so as to facilitate the modelling of turbulent flows based on the PDF PDE, poses a challenging mathematical problem.
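To make the indefiniteness concrete, the following small numerical check (our illustration, with made-up values of $\nu$ and $B_{k}^{i}$) assembles $D$ exactly as displayed above and computes its spectrum, which contains both signs whenever $B\neq0$.
\begin{verbatim}
# Build the 6x6 diffusion matrix D for the variables (u, x) and inspect
# its eigenvalues; nu and B are assumed sample values, not from the paper.
import numpy as np

nu = 0.1
B = np.random.default_rng(1).standard_normal((3, 3))  # stand-in for B_k^i

D = np.zeros((6, 6))
D[:3, 3:] = nu * B          # D_{ij} = nu B^{i}_{j-3},  1 <= i <= 3, 4 <= j <= 6
D[3:, :3] = nu * B.T        # D_{ij} = nu B^{j}_{i-3},  4 <= i <= 6, 1 <= j <= 3
D[3:, 3:] = nu * np.eye(3)  # D_{ij} = nu 1_{i=j},      4 <= i, j <= 6

# This particular construction is symmetric, yet the eigenvalues have
# mixed signs whenever B != 0, so the quadratic form is indefinite.
print(np.linalg.eigvalsh(D))
\end{verbatim}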
\section{Application to turbulent flows\label{sec: Application_to_turbulent_flows}}
As we have seen, our PDE (\ref{eq: PDF_PDE_origin}) does not fit into any existing category of PDE theories. However, the functionals $A,B,Q$ can be derived if the conditional statistics can be obtained or estimated through practical experiments. Therefore, tracking the PDF of the turbulent flow amounts to measuring or modelling the conditional mean and conditional variance, followed by solving the PDF PDE (\ref{eq: PDF_PDE_origin}) by numerical methods. This brings a new approach to the modelling of turbulent flows.
In this part, we establish some mathematical tools for the purpose of modelling turbulence based on the PDF PDE.
For convenience, let us introduce the following technical assumptions on a function $f$.
\begin{assumption}\label{assumption: LipschitzCondition}
For a function $f(x,u,t)$ which is uniformly continuous in $t$, there exist constants $K_{1},K_{2}\geq 0$ such that
\begin{align*}
|f(x,u,t)-f(y,w,t)| & \leq K_{1}(|x-y|+|u-w|),\\
|f(x,u,t)| & \leq K_{2}\left(1+|u|\right),
\end{align*}
for all $x,y,u,w\in\R^{3}$ and $t\geq0$.
\end{assumption}
\begin{assumption}\label{assumption: Polygrowth}
The function $f(u;x)$ is continuous and has at most polynomial growth in $(u,x)$.
\end{assumption}
\subsection{Weakly homogeneous and weakly isotropic flows}
When the viscous incompressible flow is weakly homogeneous, the mixed-derivative term disappears and the PDF PDE simplifies to
\begin{align}
\begin{split}\label{eq: PDF_PDE_homogeneous}
\left(\frac{\partial}{\partial t}+u^{i}\frac{\partial}{\partial x^{i}}-\nu\Delta_{x}\right)p & =\frac{\partial}{\partial u^{i}}\left(pC^{i}\right),\\
p(u;x,0) & =p_{0}(u;x),
\end{split}
\end{align}
where $C=Q-\nu A$, with $A$ and $Q$ given by equations (\ref{eq: BA_in_PDE}) and (\ref{eq:q-term}) respectively. By the definition of $B$, a weakly homogeneous flow has the property that the velocity gradient, conditioned on the velocity vector at the same location, is a centred random vector. The weak homogeneity allows us to state the following representation formula, which provides a useful tool when we model weakly homogeneous turbulent flows.
\begin{thm}\label{thm: weakly_homogeneous}
Given the explicit form of $C$, we suppose that $C$ satisfies Assumption \normalfont{\textbf{\ref{assumption: LipschitzCondition}}} and that $p_{0}(u;x)$ satisfies Assumption \normalfont{\textbf{\ref{assumption: Polygrowth}}}.
\begin{enumerate}[label=\arabic*)]
\item Assume $p(u;x,t)$ is a solution to equation (\ref{eq: PDF_PDE_homogeneous}) belonging to $\CC^{1,2,1}(\R^{3}\times\R^{3}\times[0,T])$ for some fixed $T>0$. Then the solution $p$ is unique and possesses the representation
\begin{equation}\label{eq: S_rep_h}
p(u;x,t)=\mathbb{E}\left[p_{0}(Y(t);X(t))q(t)\right],
\end{equation}
where for any given $t\in[0,T]$ and $(x,u)$, $(X,Y,q)$ is the unique strong solution to the system of SDEs
\[
\rd X^{i}(s)=-Y^{i}(s)\rmd s+\sqrt{2\nu}\rmd M^{i}(s),\quad X(0)=x,
\]
\[
\rd Y^{i}(s)=C^{i}(X(s),Y(s),t-s)\rmd s,\quad Y(0)=u,
\]
and
\[
\rd q(s)=q(s)\frac{\partial C^{k}}{\partial u^{k}}(X(s),Y(s),t-s)\rmd s,\quad q(0)=1,
\]
for $i=1,2,3$ and $s\in[0,t]$, where $M=(M^{1},M^{2},M^{3})$ is a Brownian motion in $\mathbb{R}^{3}$ with $M(0)=0$, defined on some probability space.
\item\label{claim: viscosity_solution} The function given by (\ref{eq: S_rep_h}) is the unique viscosity solution to the PDF PDE (\ref{eq: PDF_PDE_homogeneous}).
\end{enumerate}
\end{thm}
\begin{proof}
Since $C$ is Lipschitz, the above system of SDEs has a unique strong solution $(X,Y)$, and $q$ is given by an exponential formula; notice that $(X,Y,q)$ depends on $(x,u)$ as well. Let $\theta(s)=(Y(s);X(s),t-s)$, $\eta(s)=(X(s),Y(s),t-s)$ and denote $Z(s)=p(\theta(s))q(s)$. According to It\^o's formula,
\begin{align*}
\rd Z(s) & =q(s)\frac{\partial p}{\partial u^{i}}(\theta(s))\rd Y^{i}(s)+q(s)\frac{\partial p}{\partial x^{i}}(\theta(s))\rd X^{i}(s)+q(s)\nu\Delta_{x}p(\theta(s))\rmd s\\
& \quad-q(s)\frac{\partial p}{\partial s}(\theta(s))\rmd s+p(\theta(s))\rmd q(s)\\
& =q(s)\left(\frac{\partial(pC^{i})}{\partial u^{i}}(\eta(s))-\frac{\partial p}{\partial x^{i}}(\theta(s))Y^{i}(s)+\nu(\Delta_{x}p)(\theta(s))-\frac{\partial p}{\partial s}(\theta(s))\right)\rmd s\\
& \quad+\sqrt{2\nu}q(s)\frac{\partial p}{\partial x^{i}}(\theta(s))\rmd M^{i}(s)\\
& =\sqrt{2\nu}q(s)\frac{\partial p}{\partial x^{i}}(\theta(s))\rmd M^{i}(s),
\end{align*}
it follows that $(Z(s))_{s\in [0,t]}$ is a local martingale,
\[
Z(s)=Z(0)+\sqrt{2\nu}\int_{0}^{s}q(r)\frac{\partial p}{\partial x^{i}}(\theta(r)) \rmd M^{i}(r),
\]
localised by the increasing sequence of stopping times $\{\tau_n\}_{n\geq 0}$ given by
\begin{align*}
\tau_n=\min\left\{t,\,\inf\{s\geq 0\,:\,|(Y(s),X(s))-(u,x)|\geq n\}\right\}.
\end{align*}
After taking expectation and utilising the continuity of $p$, we apply the dominated convergence theorem to obtain
\begin{align*}
p(u;x,t)
=Z(0)
=\lim_{n\rightarrow \infty} \E\left[Z(\tau_n)\right] =\E\left[Z(t)\right],
\end{align*}
which coincides with the representation formula.
Regarding the second part \ref{claim: viscosity_solution}, we introduce functions $C^i_\varepsilon(x,u,t)$ and $p_{0,\varepsilon}(u;x)$, which are smooth (e.g. obtained by convolution with mollifiers) and converge to $C$ and $p_0$ uniformly on compact sets. We further define the following system of non-degenerate SDEs on the time interval $[0,t]$, where $W$ is a Brownian motion in $\R^{3}$ independent of $M$:
\begin{align*}
\rd X^i_\varepsilon(s) &= -Y^i_\varepsilon(s)\rmd s+\sqrt{2\nu}\rmd M^i(s), \qquad\qquad \qquad X_\varepsilon(0)=x,\\
\rd Y^i_\varepsilon(s) &=C^i_\varepsilon(X_\varepsilon(s), Y_\varepsilon(s),t-s)\rmd s+\sqrt{2\varepsilon}\rmd W^i(s), \;\, Y_\varepsilon(0)=u,
\end{align*}
and the bounded process $(q_{\varepsilon}(s))_{s\in [0,t]}$
\begin{align*}
\rd q_{\varepsilon}(s)=q_{\varepsilon}(s)\frac{\partial C_{\varepsilon}^{k}}{\partial u^{k}}(X_{\varepsilon}(s),Y_{\varepsilon}(s),t-s)\rmd s,\quad q_{\varepsilon}(0)=1.
\end{align*}
Consider the following parabolic PDE:
\begin{align*}
\left(\frac{\partial}{\partial t}+u^{i}\frac{\partial}{\partial x^{i}}-\nu\Delta_{x}-\varepsilon\Delta_u \right)p_\varepsilon & =\frac{\partial}{\partial u^{i}}\left(p_{\varepsilon} C_\varepsilon^{i}\right),\\
p_\varepsilon(u;x,0)&=p_{0,\varepsilon}(u;x).
\end{align*}
By classical PDE theory, it admits a unique classical smooth solution $p_{\varepsilon}$. Moreover, the solution possesses the representation
\begin{align*}
p_{\varepsilon}(u;x,t)=\E\left[p_{0,\varepsilon}(Y_{\varepsilon}(t),X_{\varepsilon}(t))q_{\varepsilon}(t)\right],
\end{align*}
if we make use of 1). Applying Burkholder-Davis-Gundy and Gronwall inequalities (or following routine arguments in \citep{Kloeden1992NSSDE}), we have
\begin{align*}
\E\left[\sup_{0\leq s\leq t}\left|(Y_\varepsilon(s),X_\varepsilon(s))-(Y(s),X(s))\right|^2\right]\rightarrow 0
\end{align*}
as $\varepsilon\rightarrow 0$. Therefore, at least through a subsequence, $(Y_\varepsilon(s),X_\varepsilon(s))\rightarrow (Y(s),X(s))$ almost surely, leading to
\begin{align*}
p_{\varepsilon}(u;x,t)=\E\left[p_{0,\varepsilon}(Y_{\varepsilon}(t),X_{\varepsilon}(t))q_{\varepsilon}(t)\right]\rightarrow \E\left[p_{0}(Y(t),X(t))q(t)\right]=p(u;x,t),
\end{align*}
uniformly on compact sets. Therefore equation (\ref{eq: S_rep_h}) is a viscosity solution by Proposition 5.8 in \citep{yong1999stochastic}, whereas the uniqueness follows from \citep{ishii1990viscosity}.
\end{proof}
The PDF PDE (\ref{eq: PDF_PDE_homogeneous}) boils down to a degenerate parabolic PDE once weak homogeneity is imposed. Apart from solving the PDE directly, the stochastic representation (\ref{eq: S_rep_h}) offers a route to solving it numerically. The PDE (\ref{eq: PDF_PDE_homogeneous}) has 6 dimensions in space and 1 dimension in time, which makes classical finite-difference methods challenging due to the size of the spatial grid. Instead, we can simulate the solution directly by Monte-Carlo methods, as sketched below.
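The following Python sketch is our own minimal illustration of this Monte-Carlo route; it assumes $C$, $\nabla_{u}\cdot C$ and $p_{0}$ are supplied as vectorised callables, and uses a plain Euler--Maruyama discretisation of the system in Theorem \ref{thm: weakly_homogeneous}.
\begin{verbatim}
# Monte-Carlo evaluation of p(u;x,t) = E[ p0(Y(t); X(t)) q(t) ].
import numpy as np

def pdf_mc(x, u, t, C, div_u_C, p0, nu=1.0,
           n_paths=10_000, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    ds = t / n_steps
    X = np.tile(np.asarray(x, float), (n_paths, 1))   # X(0) = x
    Y = np.tile(np.asarray(u, float), (n_paths, 1))   # Y(0) = u
    q = np.ones(n_paths)                              # q(0) = 1
    for k in range(n_steps):
        s = k * ds
        drift = C(X, Y, t - s)                        # dY = C(X,Y,t-s) ds
        q = q * (1.0 + div_u_C(X, Y, t - s) * ds)     # dq = q (div_u C) ds
        dM = rng.standard_normal((n_paths, 3)) * np.sqrt(ds)
        X = X - Y * ds + np.sqrt(2.0 * nu) * dM       # dX = -Y ds + sqrt(2 nu) dM
        Y = Y + drift * ds
    return float(np.mean(p0(Y, X) * q))

# Smoke test with C = 0, where the formula reduces to
# p(u;x,t) = E[ p0(u; x - u t + sqrt(2 nu) M_t) ].
C0 = lambda X, Y, s: np.zeros_like(Y)
divC0 = lambda X, Y, s: np.zeros(len(Y))
p0 = lambda Y, X: np.exp(-0.5 * (Y**2).sum(axis=1)) / (2 * np.pi)**1.5
print(pdf_mc(np.zeros(3), np.array([0.5, 0.0, 0.0]), 1.0, C0, divC0, p0))
\end{verbatim}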
\begin{rem}
The stochastic representation formula can moreover be extended to turbulent flows which are not weakly homogeneous. Indeed,
\begin{align*}
p(u;x,t) & =\E\left[p_{0}(Y(t);X(t))q(t)\right],
\end{align*}
is a solution to (\ref{eq: PDF_PDE_origin}) subject to the system
of SDEs
\begin{align*}
\rd Y^{i}(s) & =C^{i}(X(s),Y(s),t-s)\rmd s, \quad Y(0)=u,\\
\rd X^{i}(s) & =-Y^{i}(s)\rmd s+\sqrt{2\nu}\rmd M^{i}(s),\quad X(0)=x,
\end{align*}
with $C^{i}=\nu\frac{\partial B_{k}^{i}}{\partial x^{k}}-\nu A^{i}+Q^{i}$ for
all $i\in\{1,2,3\}$ if we impose the following condition on $p$:
\begin{align*}
\partial_{u^{i}}\left(B^{i}(x,u,t)\cdot\nabla_{x}p(u;x,t)\right) & =0.
\end{align*}
We are now in a position to verify that the solution of our PDF PDE (\ref{eq: PDF_PDE_homogeneous}) is indeed a PDF; that is, it must possess two properties, positivity and mass preservation. It turns out that, under some technical assumptions, the mass preservation property is equivalent to the divergence-free condition.
\end{rem}
\begin{lem}\label{lem: weak_homogeneous_properties}
Let $C$ be a given function which satisfies Assumption \normalfont{\textbf{\ref{assumption: LipschitzCondition}}}, while $p_0(u;x)$ satisfies Assumption \normalfont{\textbf{\ref{assumption: Polygrowth}}}. Let $p(u;x,t)$ be the solution to equation (\ref{eq: PDF_PDE_homogeneous}). Then we have the following statements:
\begin{enumerate}[label=\arabic*)]
\item If $p_{0}\geq0$ then $p(u;x,t)\geq0$.
\item \label{claim: vanishing_boundary} If we further assume that there exists
$m\geq1$ such that $p_{0}(u;x)|u|^{m}\rightarrow0$ uniformly in $x$ as $|u|\rightarrow\infty$, then $p(u;x,t)|u|^{m}\rightarrow0$
uniformly in $x$. Moreover, if $m>q+3$ for some $q\geq1$, then
$\intr|u|^{q}p(u;x,t)\rmd u<\infty$.
\item Suppose $p_{0}$ also satisfies the conditions in \ref{claim: vanishing_boundary} and $\intr p_{0}(u;x)\rmd u=1$. Then $\int_{\R^{3}}p(u;x,t)\rmd u=1$ for all $x,t$ if and only if
\begin{equation}
\frac{\partial}{\partial x^{i}}\int_{\R^{3}}u^{i}p(u;x,t)\rmd u=0.\label{eq: div-free for weak flow}
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
1) follows directly from the stochastic representation formula (\ref{eq: S_rep_h}). To deal with \ref{claim: vanishing_boundary}, let $Y(s;u)$ and $X(s;u)$ denote the solutions to the SDEs in Theorem \ref{thm: weakly_homogeneous}, where we write the dependence on the initial datum $u$ explicitly. As a result of the Lipschitz condition, we deduce that $|C(X(s;u),Y(s;u),t-s)\cdot Y(s;u)|\leq\frac{1}{2}K(1+|Y(s;u)|^{2})$ for some $K>0.$ Introduce the scalar processes $Z(s;u)\coloneqq|Y(s;u)|^{2}$, $\alpha(s;u)$ and $\beta(s;u)$:
\begin{align*}
\rd Z(s;u) & =2C(X(s;u),Y(s;u),t-s)\cdot Y(s;u)\rmd s,\\
\rd\alpha(s;u) & =-K(1+\alpha(s;u))\rmd s,\\
\rd\beta(s;u) & =K(1+\beta(s;u))\rmd s,
\end{align*}
with $\alpha(0;u)=\beta(0;u)=|Y(0;u)|^{2}=|u|^{2}$. Then $Z(s;u)$ must satisfy the inequality $\alpha(s;u)\leq Z(s;u)\leq\beta(s;u)$ by the comparison theorem (see, for example, \citep{McNabb1986}), since
\begin{align*}
-K(1+Z(s;u)) & \leq2C(X(s;u),Y(s;u),t-s)\cdot Y(s;u)\leq K(1+Z(s;u))
\end{align*}
for all $Z(s;u)\geq0$. Therefore
\[
Z(s;u)\in[\exp(-Ks)(|u|^{2}+1)-1,\exp(Ks)(|u|^{2}+1)-1].
\]
Applying the dominated convergence theorem, we get
\begin{align*}
\lim_{|u|\rightarrow\infty}|p(u;x,t)||u|^{m} & \leq\exp(Kt)\E\left[\lim_{|u|\rightarrow\infty}|p_{0}(Y(t;u);X(t;u))||u|^{m}\right]=0.
\end{align*}
Recall that $p_{0}(u;x)$ has at most polynomial growth in $(u,x)$, i.e. $p_{0}(u;x)\leq L(|u|^{l}+|x|^{l})$ for some $l,L\geq1$. Denoting $B(0,R)$ as the ball centred at the origin with radius $R>0$, the moment bound can be obtained by
\begin{align*}
\intr|u|^{q}p(u;x,t)\rmd u & \leq\int_{B(0,R)}|u|^{q}\exp\left(Kt\right)\E\left[|Y(s;u)|^{l}+|X(s;u)|^{l}\right]\rmd u\\
& \quad+\int_{\R^{3}\backslash B(0,R)}|u|^{m}p(u;x,t)\frac{1}{|u|^{m-q}}\rmd u\\
& <\infty,
\end{align*}
where $R$ is chosen large enough such that $|u|^{m}p(u;x,t)\leq K_{1}$ for all $|u|>R$ and some constant $K_{1}>0$.
In order to check 3), we consider $f(x,t)\coloneqq\int_{\R^{3}}p(u;x,t)\rmd u$. Integrating the PDF PDE with respect to the variable $u$, we obtain that
\begin{align*}
\left(\frac{\partial}{\partial t}-\nu\Delta_{x}\right)f(x,t) & =-\frac{\partial}{\partial x^{i}}\int_{\R^{3}}u^{i}p(u;x,t)\rmd u+\int_{\R^{3}}\nabla_{u}\cdot\left(pC\right)\rmd u,\\
f(x,0) & =1.
\end{align*}
Since $|pC|\rightarrow0$ as $|u|\rightarrow\infty$, by \ref{claim: vanishing_boundary} and the growth condition on $C$, the last integral vanishes and the conclusion follows immediately.
\end{proof}
\begin{rem}
There exists a (viscosity) solution to equation (\ref{eq: PDF_PDE_homogeneous}) when we assume $C$ is Lipschitz in $(x,u)$ and uniformly continuous in $t$, but the boundedness of $C^{i}$ in $x$ for all $i\in\{1,2,3\}$ is crucial to part \ref{claim: vanishing_boundary} of Lemma \ref{lem: weak_homogeneous_properties}. Assuming $C^{i}(x,u,t)=2a^{i}u^{i}+b^{i}x^{i}+c^{i}(t)$ for some $a^{i},b^{i}\in\R$ with $(a^{i})^{2}>b^{i}$ and bounded functions $c^{i}$, where no Einstein summation is applied, the solution $(Y^{i},X^{i})^{T}$ to the system of SDEs is given by
\begin{align*}
\left[\begin{array}{c} Y^{i}(s)\\
X^{i}(s)\end{array}\right]
& =\exp\left(L^{i}s\right)\left[\begin{array}{c}
u^{i}\\
x^{i}
\end{array}\right]+\int_{0}^{s}\exp\left(L^{i}(s-r)\right)\left[\begin{array}{c}
c^{i}(r)\\
0
\end{array}\right]\rmd r+\int_{0}^{s}\exp\left(L^{i}(s-r)\right)\left[\begin{array}{c}
0\\
\sqrt{2\nu}
\end{array}\right]\rmd M_{r}^{i},
\end{align*}
with $L^{i}\coloneqq\left[\begin{array}{cc} 2a^{i} & b^{i}\\-1 & 0 \end{array}\right]$. The corresponding eigenvalues of $L^{i}$ are $\lambda_{1,i}=a^{i}+\sqrt{(a^{i})^{2}-b^{i}}$ and $\lambda_{2,i}=a^{i}-\sqrt{(a^{i})^{2}-b^{i}}$, which implies
\begin{align*}
\exp\left(L^{i}t\right) & =\frac{1}{\lambda_{2,i}-\lambda_{1,i}}\left[\begin{array}{cc}
-\lambda_{1,i}\exp(\lambda_{1,i}t)+\lambda_{2,i}\exp(\lambda_{2,i}t) & b^{i}\left(-\exp(\lambda_{1,i}t)+\exp(\lambda_{2,i}t)\right)\\
\exp(\lambda_{1,i}t)-\exp(\lambda_{2,i}t) & \lambda_{2,i}\exp(\lambda_{1,i}t)-\lambda_{1,i}\exp(\lambda_{2,i}t)
\end{array}\right]
\end{align*}
and $Y^{i}(s)$ is independent of $u^{i}$ provided $s=\frac{\ln(\lambda_{2,i}/\lambda_{1,i})}{\lambda_{1,i}-\lambda_{2,i}}>0$. In particular, $a^{i}<0$ and $b^{i}>0$ with $(a^{i})^{2}>b^{i}$ lead to $s>0$, and therefore
\[
p(u;x,t)=\E\left[p_{0}(Y(t);X(t))q(t)\right]
\]
does not necessarily vanish as $|u|\rightarrow\infty$ for all $(x,t)$ if we only assume $p_{0}(u;x)\to0$ as $|u|\rightarrow\infty$; indeed, $\intr p(u;x,t)\rmd u =\infty$ may occur for some $t>0$.
\end{rem}
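This degeneracy is easy to confirm numerically. The following check uses the assumed sample values $a^{i}=-2$, $b^{i}=1$ (so that $a^{i}<0<b^{i}$ and $(a^{i})^{2}>b^{i}$) for a single component, and verifies that $s=\ln(\lambda_{2,i}/\lambda_{1,i})/(\lambda_{1,i}-\lambda_{2,i})$ is positive and that the $(1,1)$ entry of $\exp(L^{i}s)$ vanishes there, so $Y^{i}(s)$ indeed loses its dependence on $u^{i}$.
\begin{verbatim}
# Verify the remark for one component with sample parameters a = -2, b = 1.
import numpy as np
from scipy.linalg import expm

a, b = -2.0, 1.0
L = np.array([[2 * a, b], [-1.0, 0.0]])
lam1 = a + np.sqrt(a * a - b)            # eigenvalues of L
lam2 = a - np.sqrt(a * a - b)
s = np.log(lam2 / lam1) / (lam1 - lam2)  # the critical time of the remark
print(s > 0, expm(L * s)[0, 0])          # True, ~0 up to rounding
\end{verbatim}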
The following lemma gives a constraint that $C$ must satisfy when the mass-preservation property holds.
\begin{lem}
Suppose $C$, $p_{0}$ satisfy condition \ref{claim: vanishing_boundary} in Lemma \ref{lem: weak_homogeneous_properties}
with $m\geq2$ and define $\{U(x,t)\}_{x,t}$ to be the corresponding random field such that $U(x,t)$ has the PDF $p(u;x,t)$ for all $(x,t)$. If equation (\ref{eq: div-free for weak flow}) is satisfied, we have for all $(x,t)\in\R^{3}\times\R_{+}$,
\begin{align*}
\nabla_{x}\cdot\E\left[C(x,U(x,t),t)\right] & =-\partial_{x^{i}}\partial_{x^{j}}\E\left[U^{i}(x,t)U^{j}(x,t)\right].
\end{align*}
\end{lem}
\begin{proof}
We apply $u\cdot\nabla_{x}$ to both sides of the PDE (\ref{eq: PDF_PDE_homogeneous})
and integrate with respect to $u$, which results in
\begin{align*}
\int_{\R^{3}}(u\cdot\nabla_{x})\left(\partial_{t}-\nu\Delta_{x}\right)p(u;x,t)\rmd u & =\left(\partial_{t}-\nu\Delta_{x}\right)\nabla_{x}\cdot\int_{\R^{3}}up(u;x,t)\rmd u=0
\end{align*}
as well as
\begin{align*}
\int_{\R^{3}}(u\cdot\nabla_{x})u\cdot\nabla_{x}p(u;x,t)\rmd u & =\int_{\R^{3}}u^{i}u^{j}\partial_{x^{i}}\partial_{x^{j}}p(u;x,t)\rmd u\\
& =\partial_{x^{i}}\partial_{x^{j}}\E\left[U^{i}(x,t)U^{j}(x,t)\right].
\end{align*}
Eventually, the right-hand side can be written in terms of
\begin{align*}
\int_{\R^{3}}(u\cdot\nabla_{x})\nabla_{u}\cdot\left(p(u;x,t)C(x,u,t)\right)\rmd u & =\int_{\R^{3}}\partial_{x^{i}}\nabla_{u}\cdot\left(u^{i}p(u;x,t)C(x,u,t)\right)\rmd u\\
 & \quad-\int_{\R^{3}}\partial_{x^{i}}\left(p(u;x,t)C^{i}(x,u,t)\right)\rmd u\\
 & =-\nabla_{x}\cdot\E\left[C(x,U(x,t),t)\right],
\end{align*}
provided $|p(u;x,t)C(x,u,t)u^{i}|\rightarrow0$ as $|u|\rightarrow\infty$, which is guaranteed by the growth condition on $C^{i}$.
\end{proof}
If we further assume the incompressible viscous turbulent fluid flow is both weakly homogeneous and weakly isotropic, the PDF PDE is a parabolic-transport equation
\begin{align}
\left(\frac{\partial}{\partial t}+u^{i}\frac{\partial}{\partial x^{i}}-\nu\Delta_{x}\right)p & =\frac{\partial}{\partial u^{i}}\left(pC^{i}\right),\label{eq: PDF_PDE_isotropic}\\
p(u;x,0) & =p_{0}(u;x),\nonumber
\end{align}
where $C$ reduces to
\begin{align*}
C^{i}(x,u,t) & =-\nu A^{i}(u,t)+Q^{i}(x,u,t)\\
 & =-\nu A^{i}(u,t)+\int_{\R^{3}}\frac{y^{i}}{4\pi|y|^{3}}\frac{\partial^{2}\left(\sigma^{jk}+b^{j}b^{k}\right)}{\partial y^{k}\partial y^{j}}(y,u,t)\rmd y
\end{align*}
depending only on $u$ and $t$ but not on $x$.
\begin{cor}
Let $C^{i}(u,t)$ be Lipschitz continuous in $u$ and uniformly continuous in $t$, for all $i\in\{1,2,3\}$. For $t>0$ and $(x,u)\in\R^{3}\times\R^{3}$, $(X,Y,q)$ denotes the unique solution to the system of equations for all $i\in \{1,2,3\}$ and $s\in[0,t]$:
\begin{align*}
\rd X^{i}(s) & =-Y^{i}(s)\rmd s+\sqrt{2\nu}\rmd M_{s}^{i},\quad X(0)=x,\\
\rd Y^{i}(s)&=C^{i}(Y(s),t-s)\rmd s,\quad Y(0)=u,
\end{align*}
and
\[
\rd q(s)=q(s)\frac{\partial C^{k}}{\partial u^{k}}(Y(s),t-s)\rmd s,\quad q(0)=1.
\]
Suppose $p(u;x,t)$ is a $\mathcal{C}^{1,2,1}(\R^{3}\times\R^{3}\times[0,T])$ solution to (\ref{eq: PDF_PDE_isotropic}); then
\[
p(u;x,t)=q(t)\int_{\R^{3}}H(x,t,z;u)p_{0}(Y(t);z)\rmd z,
\]
where
\[
H(x,t,y;u)=\frac{1}{(4\pi\nu t)^{3/2}}\exp\left(-\frac{\left|y-x+\int_{0}^{t}Y(s)\rmd s\right|^{2}}{4\nu t} \right).
\]
Therefore, if $p_{0}(u;x)\geq0$ for every $x$, then $p(u;x,t)\geq0$ for any $(x,t)$. If $p_{0}(u;x)$ is a PDF for all $x$, then $p(u;x,t)$ is again a PDF for all $(x,t)$ if and only if the following constraint holds:
\[
\frac{\partial}{\partial x^{i}}\int_{\R^{3}}u^{i}p(u;x,t)\rmd u=0.
\]
\end{cor}
\begin{proof}
By definition, $Y$ and $q$ are deterministic processes, while $X$ is a $3$-dimensional Gaussian process such that for all $E \in \mathscr{B}(\R^3)$ and $s\in [0,t]$,
\begin{align*}
\P(X_s \in E) = \int_E H(x,s,z;u)\rmd z.
\end{align*}
As a result of Theorem \ref{thm: weakly_homogeneous},
\begin{align*}
p(u;x,t) =q(t)\mathbb{E}\left[p(Y(t);X(t),0)\right]
=q(t)\int_{\R^{3}}H(x,t,z;u)p_{0}(Y(t);z)\rmd z.
\end{align*}
The remaining part is a consequence of Lemma \ref{lem: weak_homogeneous_properties}.
\end{proof}
\subsection{Inviscid flows}
The modelling of inviscid incompressible flows is significantly simpler than that of viscous flows, since the PDF PDE can be solved without imposing the weak homogeneity or weak isotropy conditions. When $\nu=0$, the velocity $U(x,t)$ fulfils the Euler equations, while the PDF PDE becomes the following transport equation:
\begin{align}
\begin{split}\label{eq: PDF_PDE_inviscid}
\frac{\partial p}{\partial t}+u^{i}\frac{\partial p}{\partial x^{i}} & =\frac{\partial}{\partial u^{i}}\left(pQ^{i}\right),\\
p(u;x,0) & =p_{0}(u;x).
\end{split}
\end{align}
\begin{thm}
\label{thm: PDF_PDE_inviscid}Suppose that $Q^{i}(x,u,t)$ satisfies Assumption \normalfont{\textbf{\ref{assumption: LipschitzCondition}}}, that $p_0(u;x)$ satisfies Assumption \normalfont{\textbf{\ref{assumption: Polygrowth}}}, and that $p\in \mathcal{C}^{1,1,1}(\R^3\times\R^3\times[0,T])$, for some fixed $T>0$, is a solution to the transport PDE (\ref{eq: PDF_PDE_inviscid}). Then we have
\begin{equation}
p(u;x,t)=p_0(Y(t);X(t))q(t)\label{eq: rep_inviscid}
\end{equation}
for every $t>0$, $x$ and $u$, where $(X,Y,q)$ is the unique solution to the following system of ODEs:
\begin{align}
\begin{split}\label{eq: ODEsystem}
\rd X^{i}(s) & =-Y^{i}(s)\rmd s,\quad X(0)=x, \\
\rd Y^{i}(s) & =Q^{i}(X(s),Y(s),t-s)\rmd s,\quad Y(0)=u,\\
\rd q(s)&=q(s)(\nabla_{u}\cdot Q)(X(s),Y(s),t-s)\rmd s,\quad q(0)=1,
\end{split}
\end{align}
for $i=1,2,3$. Moreover, if $p(u;x,0)\geq0$ for every $x$, then $p(u;x,t)\geq0$ for any $(x,t)$. If $p(u;x,0)$ is a PDF for all $x$ and $p_{0}(u;x)|u|^{m}\rightarrow0$ uniformly in $x$ for some $m\geq1$, then $p(u;x,t)$ is again a PDF for all $(x,t)$ if and only if the following constraint is satisfied:
\[
\frac{\partial}{\partial x^{i}}\intr u^{i}p(u;x,t)\rmd u=0.
\]
\end{thm}
\begin{proof}
The system of ODEs (\ref{eq: ODEsystem}) has a unique solution pair $(X,Y)$ and
\[
q(s)=\exp\left(\int_{0}^{s}\frac{\partial Q^{i}}{\partial u^{i}}(X(r),Y(r),t-r)\rmd r\right),
\]
which is a bounded process by the Lipschitz assumption. Let $\theta(s)=(Y(s);X(s),t-s)$, $\eta(s)=(X(s),Y(s),t-s)$ and define
\[
h(s)=p(\theta(s))q(s),
\]
then $h(0)=p(u;x,t)$ and $h(t)=p(Y(t);X(t),0)q(t)$. Moreover for
$s\in[0,t]$, we have
\begin{align*}
h'(s) & =q(s)\left(\frac{\partial p}{\partial u^{i}}(\theta(s))\frac{\partial Y^{i}}{\partial s}+\frac{\partial p}{\partial x^{i}}(\theta(s))\frac{\partial X^{i}}{\partial s}-\frac{\partial p}{\partial s}(\theta(s))\right)+p(\theta(s))\frac{\partial q}{\partial s}\\
& =q(s)\left(\frac{\partial p}{\partial u^{i}}(\theta(s))Q^{i}(\eta(s))-\frac{\partial p}{\partial x^{i}}(\theta(s))Y^{i}(s)-\frac{\partial p}{\partial s}(\theta(s))+p(\theta(s))\frac{\partial Q^{i}}{\partial u^{i}}(\eta(s))\right)\\
& =0,
\end{align*}
hence $h$ is constant on $[0,t]$, and $p(u;x,t)=h(0)=h(t)$ yields (\ref{eq: rep_inviscid}). Regarding the positivity and mass preservation properties, the proof is almost the same as that of Lemma \ref{lem: weak_homogeneous_properties}, except that (\ref{eq: ODEsystem}) is not stochastic: if $p_{0}\geq0$ then $p(u;x,t)\geq0$ directly, and
\begin{align*}
\lim_{|u|\rightarrow\infty}|p(u;x,t)||u|^{m} & \leq\sup_{s\in[0,t]}|q(s)|\lim_{|u|\rightarrow\infty}|p_{0}(Y(t;u);X(t;u))||u|^{m}=0.
\end{align*}
Let $f(x,t)=\int_{\R^{3}}p(u;x,t)\rmd u$. If $p_{0}$ is a probability density, then $f(x,0)=1$ for all $x$. By integrating the equation (\ref{eq: PDF_PDE_inviscid}) with respect to $u$, we obtain
\[
\frac{\partial}{\partial t}f(x,t)-\frac{\partial}{\partial x^{i}}\int_{\R^{3}}u^{i}p(u;x,t)\rmd u=0,
\]
which leads us to the conclusion.
\end{proof}
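Since (\ref{eq: ODEsystem}) is a deterministic characteristic system, the representation (\ref{eq: rep_inviscid}) can be evaluated by a single ODE solve per point $(u,x,t)$. Below is a minimal sketch (our illustration, assuming $Q$, $\nabla_{u}\cdot Q$ and $p_{0}$ are supplied as callables, and $x,u$ are numpy arrays of shape $(3,)$).
\begin{verbatim}
# Evaluate the inviscid representation p(u;x,t) = p0(Y(t); X(t)) q(t)
# by integrating the characteristic ODE system with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def pdf_inviscid(x, u, t, Q, div_u_Q, p0):
    def rhs(s, z):
        X, Y, q = z[:3], z[3:6], z[6]
        dX = -Y                           # dX/ds = -Y
        dY = Q(X, Y, t - s)               # dY/ds = Q(X, Y, t-s)
        dq = q * div_u_Q(X, Y, t - s)     # dq/ds = q (div_u Q)
        return np.concatenate([dX, dY, [dq]])
    z0 = np.concatenate([x, u, [1.0]])
    sol = solve_ivp(rhs, (0.0, t), z0, rtol=1e-8, atol=1e-10)
    X_t, Y_t, q_t = sol.y[:3, -1], sol.y[3:6, -1], sol.y[6, -1]
    return p0(Y_t, X_t) * q_t
\end{verbatim}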
\section{Modelling the PDF: a concrete example}
On the one hand, from the modelling point of view, only those solutions to the PDF PDE which satisfy the natural mass conservation condition (\ref{eq: div-free for weak flow}) can be used as models of distributions of turbulent velocity fields. On the other hand, from the viewpoint of PDEs, the mass conservation property imposes a strong constraint on the solutions: as a matter of fact, most solutions of the PDF PDE with given $A, B, Q$ do not satisfy it. In general, solutions to the PDF PDE do not have an explicit expression, although the stochastic representations established in the previous sections may be helpful in dealing with the mass conservation. Consequently, verifying the constraint (\ref{eq: div-free for weak flow}) brings a further level of difficulty. It turns out that the natural condition $\int_{\mathbb{R}^{3}}p(u;x,t)\rmd u=1$ for a solution $p(u;x,t)$ to the PDF PDE, which is equivalent to mass conservation, is a very strict constraint, whatever the coefficients $A,B$ and $Q$ that may be modelled or measured. At least our experience demonstrates that PDF solutions to the PDF PDE are rare, and indicates that the PDF solution $p(u;x,t)$ is not very sensitive to the choices of $A,B$ and $Q$, although we are unable to prove this claim in the present paper. Therefore the PDF PDE together with the mass conservation constraint is very rigid, and hence is well suited for modelling the PDF of turbulent flows. The authors hope to see further exploration in this direction in the future.
In this section, we study an explicit example of the PDF PDE in which the mixed-derivative term vanishes, i.e. the turbulent flow is weakly homogeneous. The example may seem artificial but, as explained above, we believe it has relevance to real turbulent flows.
\subsection{Space homogeneous density with perturbation}
For simplicity, the viscosity parameter in this example is set to $\nu=1$. As the solution to the PDE (\ref{eq: PDF_PDE_homogeneous}) is determined solely by the initial data $p_{0}(u;x)$ and the function $C(x,u,t)$, we consider the simplest scenario $C=0$. Meanwhile, instead of setting up a common distribution such as a Gaussian or exponential distribution for the initial data, we introduce the following non-negative function: for every $x\in \mathbb{R}^3$, let
\begin{align}
p_{0}(u;x) =\alpha(u)+\beta(u)\gamma(x),
\label{eq: PDFexample}
\end{align}
where $\alpha(u)$ is a PDF given by
\begin{alignat*}{1}
\alpha(u) & =\frac{1}{(\sqrt{2\pi})^{3}\sqrt{\det\sigma}}\exp\left(-\frac{1}{2}u^{T}\sigma^{-1}u\right)
\end{alignat*}
with $\sigma_{ij}=(\frac{3}{2})^{i-2}\ione_{\left\{ i=j\right\} }$, corresponding to the PDF of a centred Gaussian vector with independent components. Here, $\beta$ satisfies $\int_{\R^{3}}\beta(u)\rmd u=0$, and is chosen to be the product of reciprocals
\begin{align*}
\beta(u) & =\begin{cases}
\prod_{i}\frac{1}{u^{i}}, & (u^{1},u^{2},u^{3})\in I,\\
0, & \text{otherwise},
\end{cases}
\end{align*}
and truncated to zero when $u$ leaves the region $I$, where $I$ is defined as
\begin{align*}
I\coloneqq & \left(\left[\frac{1}{4},1\right]\cup\left[-1,-\frac{1}{4}\right]\right)\times\left(\left[\frac{1}{4},2\right]\cup\left[-2,-\frac{1}{4}\right]\right)\times\left(\left[\frac{2}{7},3\right]\cup\left[-3,-\frac{2}{7}\right]\right).
\end{align*}
We further set the last function $\gamma$ as
\begin{align*}
\gamma(x) & =\frac{1}{36}\frac{1}{(\sqrt{2\pi})^{3}}\left\{ \left[\frac{30(x^{1}-1)}{30+3(x^{2})^{2}+2(x^{3})^{2}}\exp\left(-\frac{(x^{1})^{2}}{3}+\frac{1}{3}\sin\left(\sum_{i}x^{i}\right)\right)+\frac{\cos(x^{2}+x^{3})}{(|x|^{2}+1)}\right]+2\exp\left(-\frac{|x|^{2}}{200}\right)\right\} ,
\end{align*}
which vanishes as $|x|\rightarrow\infty$. We select these functions so that the positivity and mass-preserving properties are fulfilled, and so that the initial velocity is a random field whose marginal density at $x$ is given by \eqref{eq: PDFexample}.
By the stochastic representation formula, the PDF of the random fields at $(x,t)$ has the form
\begin{align}
p(u;x,t) & =\alpha(u)+\beta(u)\E\left[\gamma\left(x-ut+\sqrt{2\nu}M_{t}\right)\right].\label{eq:ExplicitPDF}
\end{align}
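For reference, the expectation in (\ref{eq:ExplicitPDF}) is straightforward to estimate by sampling $M_{t}\sim N(0,tI_{3})$. The sketch below is our own code for this example (with $\nu=1$, as set above); it reproduces the densities plotted in the figures.
\begin{verbatim}
# Monte-Carlo evaluation of
# p(u;x,t) = alpha(u) + beta(u) E[ gamma(x - u t + sqrt(2) M_t) ].
import numpy as np

rng = np.random.default_rng(0)
var = np.array([2/3, 1.0, 3/2])              # sigma_ii = (3/2)**(i-2); det = 1

def alpha(u):                                 # centred Gaussian PDF
    return np.exp(-0.5 * np.sum(u**2 / var)) / (2 * np.pi)**1.5

def beta(u):                                  # product of reciprocals on I
    lo, hi = np.array([1/4, 1/4, 2/7]), np.array([1.0, 2.0, 3.0])
    inside = np.all((np.abs(u) >= lo) & (np.abs(u) <= hi))
    return 1.0 / u.prod() if inside else 0.0

def gamma(x):                                 # x: (n, 3) array of samples
    x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]
    r2 = (x**2).sum(axis=1)
    bracket = (30 * (x1 - 1) / (30 + 3 * x2**2 + 2 * x3**2)
               * np.exp(-x1**2 / 3 + np.sin(x.sum(axis=1)) / 3)
               + np.cos(x2 + x3) / (r2 + 1))
    return (bracket + 2 * np.exp(-r2 / 200)) / (36 * (2 * np.pi)**1.5)

def p(u, x, t, n=200_000):                    # nu = 1, so sqrt(2 nu) = sqrt(2)
    M = rng.standard_normal((n, 3)) * np.sqrt(t)   # M_t ~ N(0, t I)
    return alpha(u) + beta(u) * gamma(x - u * t + np.sqrt(2.0) * M).mean()

print(p(np.array([0.5, 0.5, 0.3]), np.zeros(3), 0.5))
\end{verbatim}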
\begin{figure}
\includegraphics[scale=0.55]{images/T05.png}\includegraphics[scale=0.55]{images/T05diff.png}
\caption{$x=(0,0,0),\;t=\frac{1}{2}$\label{fig:T05}}
\end{figure}
We focus on the PDF of the random field $\{U(x,t)\}_{x,t}$ at $x=(0,0,0)$ and plot $p(u;x,t)$ against $u$ at $u_{3}=0.3$ for different times $t$. At $t=\frac{1}{2}$, $p(u;x,t)$ is discontinuous on the boundary of $I$, as shown in figure \ref{fig:T05}. If we compare the densities $p(u;x,\frac{1}{2})$ and $p(u;x,0)$ by evaluating $p(u;x,\frac{1}{2})-p(u;x,0)$, we can see the change of density in the region $I$. Meanwhile, the discontinuity becomes less apparent in the plot when we increase the time to $t=40$. From figure \ref{fig:T40}, the PDF of $U(x,t)$ is close to the density of a Gaussian random variable with density function $\alpha$, even though the discontinuity still exists. This is because $\gamma$ vanishes at infinity and $\E[\gamma(X_{t})]\rightarrow0$ as $t\rightarrow\infty$. However, the impact of $\beta$ does not disappear from the velocity field: there is a strong discontinuity near $(\frac{1}{4},\frac{1}{4},\frac{2}{7})$ in $p(u;x,t)$ when $x=(12,12,12)$ and $t=40$. The PDF at $x=(12,12,12)$ is asymmetric and has a different evolution from the density at the origin, which demonstrates that the impact of $\beta$ shifts from the origin to somewhere far away as time changes.
\subsection{Motivation for the construction}
The mass-preserving property of the PDE (\ref{eq: PDF_PDE_homogeneous}) corresponds to the divergence-free condition (\ref{eq: div-free for weak flow}), which is difficult to verify explicitly even when the stochastic representation is available. Apart from describing the motivation for choosing $\alpha,\beta$ and $\gamma$, we will demonstrate that our solution (\ref{eq:ExplicitPDF}) to the PDF PDE satisfies the divergence-free constraint.
Assuming $C=0$ does not imply that the turbulent flow associated with the solution $p(u;x,t)$ is weakly isotropic. For example, we may force the conditional average increment to satisfy
\begin{align*}
\rho^{i}(x,y,u,t) & =O(|y-x|^{3})
\end{align*}
as $|y-x|\to0$ and to vanish sufficiently fast as $|y|\rightarrow\infty$, which naturally leads to $A_{jk}^{i}=0$. In addition, we let
\begin{align*}
\rho^{i}(x,y+x,u,t) & =\rho^{i}(x,-y+x,u,t)
\end{align*}
for all $i$. If the conditional variance is of the form $\sigma^{jk}(x,y,u,t)=c+\prod_{l}f_{l,j,k}(y^{l}-x^{l},t)\lambda(x,u,t)$, we conclude that
\begin{align*}
Q^{i}(x,u,t) & =\intr\frac{y^{i}-x^{i}}{4\pi|y-x|^{3}}\frac{\partial^{2}\left(\sigma^{jk}+b^{j}b^{k}\right)}{\partial y^{k}\partial y^{j}}\rmd y\\
 & =\intr\frac{1}{4\pi|y|}\frac{\partial^{3}}{\partial y^{i}\partial y^{j}\partial y^{k}}\left(\prod_{l}f_{l,j,k}(y^{l},t)\lambda(x,u,t)+\rho^{j}(x,y+x,u,t)\rho^{k}(x,y+x,u,t)\right)\rmd y\\
 & =0,
\end{align*}
where the second equality follows from integration by parts and the translation $y\mapsto y+x$; the terms of $b^{j}b^{k}=(\rho^{j}+u^{j})(\rho^{k}+u^{k})$ which are constant or linear in $\rho$ contribute zero by the same symmetry argument. The integral vanishes provided each $f_{l,j,k}(\cdot,t)$ is an even function with $f_{l,j,k}(z,t)\rightarrow0$ sufficiently fast as $z\rightarrow0$ and $z\rightarrow\infty$: the third derivative of an even function of $y$ is odd, while the kernel is even. In particular, $f_{l,j,k}(z,t)=\frac{1}{8}z^{4}\exp\left(-\frac{z^{2}(1+t)}{2}\right)$ fits the required criteria. A pair of conditional statistics $\rho$, $\sigma$ obeying the above constraints is a reasonable choice leading to $C=0$.
\begin{figure}
\includegraphics[scale=0.55]{images/T40.png}\includegraphics[scale=0.55]{images/T40diff.png}\caption{$x=(0,0,0),\;t=40$\label{fig:T40}}
\end{figure}
Regarding the initial data $p_{0}(u;x)$ of the form (\ref{eq: PDFexample}), there is no strong restriction on $\alpha$; hence $\alpha$ may be replaced by another PDF, which need not correspond to a Gaussian vector. The crucial ingredients in $p_{0}(u;x)$ are $\beta$ and $\gamma$, which ensure that the solution $p(u;x,t)$ satisfies the mass conservation property. As a consequence of the fact that $C=0$, the right-hand side of equation (\ref{eq: PDF_PDE_isotropic}) vanishes, leaving an equation which depends solely on the derivatives of $p$ with respect to $(t,x)$. Moreover, the relevant SDEs in the stochastic representation (\ref{eq: S_rep_h}) have the explicit form $Y_{t}=u$ and $X_{t}=x-ut+\sqrt{2\nu}M_{t}$, while the divergence-free constraint reads
\begin{align*}
\nabla_{x}\cdot\int_{\R^{3}}u & p(u;x,t)\rmd u=\E\left[\int_{\R^{3}}u\cdot\nabla_{x}p(u;x+\sqrt{2\nu}M_{t}-ut,0)\rmd u\right]=0
\end{align*}
for all $x\in\R^{3},t>0$. In particular, if
\begin{align}
\int_{\R^{3}}u\cdot\nabla_{x}p(u;x-ut,0)\rmd u & =0\label{eq: suffice_condition_div_free_C0}
\end{align}
is ensured for all $(x,t)$, $\int_{\R^{3}}p(u;x,t)\rmd u=1$ is guaranteed.
For $t>0$, the left-hand side of equation (\ref{eq: suffice_condition_div_free_C0}) has the following form
\begin{align*}
\int_{\R^{3}}u\cdot\nabla_{x}p(u;x-ut,0)\rmd u & =\int_{\R^{3}}u\cdot\nabla_{x}\gamma(x-ut)\beta(u)\rmd u\\
& =-\frac{1}{t}\int_{\R^{3}}u\cdot\nabla_{u}(\gamma(x-ut))\beta(u)\rmd u\\
& =-\frac{1}{t}\int_{\R^{3}}\nabla_{u}\cdot\left(u\gamma(x-ut)\beta(u)\right)-\nabla_{u}\cdot\left(u\beta(u)\right)\gamma(x-ut)\rmd u.
\end{align*}
If $\gamma$ and $\beta$ decay sufficiently fast as $|u|\rightarrow\infty$ for all $x$, the first term in the integrand contributes zero after integration with respect to $u$. Apart from these mathematical considerations, our choice of $\gamma$ guarantees that the impact of $\beta$ vanishes as $|x|\rightarrow\infty$, while the speed of decay is slow enough that the impact of $\beta$ is still observable when $t$ is rather large. Last but not least, to keep $p_{0}$ non-negative, $\gamma$ must satisfy the constraint
\begin{align*}
||\gamma||_{\infty} & \leq\left(\sup_{u\in\R^{3}}\bigg|\frac{\beta(u)}{\alpha(u)}\bigg|\right)^{-1}.
\end{align*}
We remark that, with the current form of $\beta$, the decay of $\gamma$ as $|x|\rightarrow\infty$ is not a necessity.
\begin{figure}
\includegraphics[scale=0.55]{images/T40X12.png}\includegraphics[scale=0.55]{images/T40diffX12.png}
\caption{$x=(12,12,12),\;t=40$\label{fig: T40X12}}
\end{figure}
The remaining problem is to find the right $\beta$ such that $\int_{\R^{3}}\nabla_{u}\cdot\left(u\beta(u)\right)\gamma(x-ut)\rmd u=0$ for all $t>0$ and $\intr u\beta(u)\rmd u\cdot\nabla_{x}\gamma(x)=0$, which ensures that the initial data also satisfies the divergence-free condition (\ref{eq: div-free for weak flow}). Our choice of $\beta$ is motivated by the fact that equation (\ref{eq: suffice_condition_div_free_C0}) is satisfied provided we impose the constraint $\nabla_{u}\cdot(u\beta(u))=0$ on $\beta$; $\beta$ is truncated in order to guarantee integrability as well as the positivity of $p_{0}(u;x)$. Moreover, we can ensure $\int_{\R^{3}}\beta(u)\rmd u=0$, $\int_{\R^{3}}u^{i}\beta(u)\rmd u=0$ and $\partial_{u^{i}}(u^{i}\beta(u))=\partial_{u^{i}}(u^{j}u^{k})^{-1}=0$ for $i=1,2,3$ with $i,j,k$ distinct (no summation). Nevertheless, the symmetry of the interval $I$ and of $\beta$ only serves to simplify our example; $\beta$ can be asymmetric.
The PDF (\ref{eq:ExplicitPDF}) at $(x,t)$ is discontinuous
in the variable $u$, but it is still a strong solution to the PDE, because the term $\partial_{u^{i}}p$ drops out of the PDE (\ref{eq: PDF_PDE_isotropic}) in this circumstance.
\section{Concluding remarks}
This paper derives a new PDE which describes the evolution of the one-time one-point PDF of the velocity random field of a turbulent flow. The PDF PDE, which is highly non-linear and is determined by two conditional statistics of a turbulent flow, should be a useful tool in modelling distributions of turbulent velocity fields.
The modelling of viscous turbulence in various environments by numerically solving the PDF PDE (\ref{eq: PDF_PDE_origin_C}) with measured data, or based on the a priori determination of $A$, $B$ and $Q$, should be beneficial in understanding turbulent flows. To implement good models of PDFs for turbulent flows, we need to numerically calculate solutions of the PDF PDE, fed with data which determine the functions $A$, $B$ and $Q$. The solution has to satisfy the natural constraint that the mass must be preserved throughout the evolution of the PDF. The conservation of the total mass of the solution is an important topic in itself and is worthy of further study. Finally, we would like to point out that the coefficients $A$, $B$ and $Q$ defined in equation (\ref{eq: BA_in_PDE}), which determine the one-time one-point statistics of the turbulence, must have significant physical meaning in turbulence. These coefficients, which are considered as turbulent flow parameters, should play their roles in further research.
\section{Introduction}
Processes mediated by Flavor Changing Neutral Currents (FCNCs) are very rare in
the Standard Model (SM) due to the Glashow-Iliopoulos-Maiani (GIM) mechanism~\cite{gim}.
However, because of the extended flavor structures existing in many New Physics (NP) models, the two-body FCNC decays $t\to qX$ ($q=u/c$ and $X=g/\gamma/Z/H$) can be greatly enhanced: for example, in the Minimal Supersymmetric Standard Model~(MSSM) with branching ratio $Br(t\to cH)\sim 10^{-5}$~\cite{mssm}, in R-parity violating Supersymmetry (SUSY) with branching ratio
$Br(t\to cH)\sim 10^{-6}$~\cite{plb-2001-510}, in
2-Higgs-Doublet Models~(2HDMs) with branching ratio $Br(t\to cH)\sim 10^{-5}-10^{-3}$~\cite{2hdm}, and in the little Higgs model with T-parity and the warped extra dimensions, both with branching ratio $Br(t\to cH)\sim 10^{-5}$~\cite{lht,ED}. Thus any experimental signature of such FCNC processes
will serve as a clear signal for NP Beyond the SM (BSM)~\cite{top-NP}. Up to now, top-Higgs FCNC interactions have been studied widely via anomalous top decays or anomalous production processes of single top quarks~\cite{t1,t2,t3,t4,prd86-094014,jhep-1407-046,150908149,prd89-054011,plb703-306}.
Currently, the ATLAS and CMS collaborations have carried out searches~\cite{cms-8,atlas-8,atlas13,cms13,1805.03483} for $tqH$ interactions with 7, 8
and 13 TeV data from the LHC.
For example, using 13 TeV data, the ATLAS and the CMS experiments have studied the $tqH$ FCNC processes in top quark pair events with $H\to \gamma\gamma$ for ATLAS and $H\to b\bar{b}$ for CMS. The resulting observed (expected) limits for $Br(t\to qH)$ at $95\%$ Confidence Level (CL) have been found to be~\cite{atlas13,cms13}:
\beq
Br(t\to Hu)\leq\left\{ \begin{array}{ll}
2.4~(1.7)\times 10^{-3} & {\rm ATLAS} \\
4.7~(4.3)\times 10^{-3} & {\rm CMS} \\ \end{array} \right. \nonumber \\
Br(t\to Hc)\leq\left\{ \begin{array}{ll}
2.2~(1.6)\times 10^{-3} & {\rm ATLAS} \\
4.7~(4.4)\times 10^{-3} & {\rm CMS} \\ \end{array} \right. \label{eq:tch}
\eeq
Very recently, a search for the production of top pairs in which one top quark decays via
$t\to qH$ was reported by the ATLAS Collaboration~\cite{1805.03483}, with the subsequent Higgs boson decay to final states with at least one electron or muon. The upper limits on the branching fractions $Br(t\to Hc)< 0.16\%$ and $Br(t\to Hu)< 0.19\%$ at $95\%$ CL are obtained~(with expected limits of $0.15\%$ in
both cases).
Apart from direct collider measurements, the upper limits of $Br(t\to qH) < 5 \times 10^{-3}$ and
$Br(t\to qH) < 0.21\%$ can be obtained by bounding the $tqH$ vertex from the observed $D^{0}-\bar{D^{0}}$ mixing~\cite{prd81-077701} and $Z\to c\bar{c}$~\cite{prd72-057504}, respectively.
The upcoming HL-LHC project
is expected to collect an integrated luminosity of 3 ab$^{-1}$. Preliminary sensitivity studies for the HL-LHC by the ATLAS Collaboration suggest that the upper bound on $Br(t\to qH)$ will become about $1.5 \times 10^{-4}$ at $95\%$ CL~\cite{atlas-14-3000}. Furthermore, many model-independent phenomenological studies have been
performed in different channels~\cite{tfcnc-th,multilepton,prd-zhang,t5,t6,t7,t8,t9,t10}.
In this work, we study the prospects of probing the anomalous $tHq$ couplings by considering the processes of $tH$ associated production and $t\bar{t}$ production at the HL-LHC. We analyze two kinds of final states through leptonic top quark decays and $H\to WW^{\ast}$, one with a Same-Sign 2-Lepton~(SS2L) and the other with a 3-Lepton (3L) topology, where the Higgs boson decays into a semi-leptonic $(H\to WW^{\ast}\to \ell^{+}\nu jj)$ or fully leptonic ($H\to WW^{\ast}\to \ell^{+}\nu\ell^{-}\bar{\nu}$) mode. The advantage of these channels is that their final states, including the SS2L or 3L topologies, can be used to significantly suppress QCD
backgrounds~\cite{ss2l}; these channels have not been fully studied in the previous literature.
The organization of this paper is as follows. In Sec.~II, we discuss two kinds of final states for the processes of $tH$ associated production with the decay chain $t\to W^{+}b\to \ell^{+}\nu b$ and $H\to WW^{\ast}$ as well as $t\bar{t}$ production with the decay chain $t\to \ell^{+}\nu_{\ell}b$ and $\bar{t}\to H(\to WW^{\ast})\bar{q}$. Then we discuss the HL-LHC sensitivity to the anomalous $tHq$ couplings. We summarize in Sec.~III.
\section{Numerical calculations and discussions}
The general Lagrangian for FCNC top interactions with the Higgs boson can be written as
\begin{equation}
{\cal L}= \kappa_{tuH}\bar{t}Hu+\kappa_{tcH}\bar{t}Hc+h.c.,
\label{tqh}
\end{equation}
where the FCNC coupling parameters, $\kappa_{tuH}$ and $\kappa_{tcH}$, are real and symmetric since we do not consider CP-violating effects here.
We perform systematic Monte Carlo (MC) simulations and study the sensitivity to the anomalous $tHq$ couplings through the associated $tH$ and $t\bar{t}\to tH\bar{q}$ processes at the HL-LHC. We first extract the relevant Feynman rules via the FeynRules package~\cite{feynrules} and generate the events with MadGraph5-aMC$@$NLO~\cite{mg5}. The signal and background samples are simulated at parton level with the NN23LO1 Parton Distribution Function~(PDF) set~\cite{cteq} and then passed through PYTHIA6.4~\cite{pythia} and DELPHES 3~\cite{delphes} for parton shower and detector
simulations, with the MLM matching scheme~\cite{MLM} adopted.
Finally, event analysis is performed using MadAnalysis5~\cite{ma5}.
\subsection{Analysis of the SS2L channel}
For final states with the SS2L topology, the signals are generated through the following processes:
\beq\label{signal}
pp&\to& t(\to W^{+}b\to \ell^{+}\nu b)H(\to WW^{\ast}\to \ell^{+}\nu jj),\\
pp&\to& t(\to W^{+}b\to \ell^{+}\nu b)\bar{t}(\to Hq\to WW^{\ast}q\to \ell^{+}\nu jjq),
\eeq
where $\ell =e, \mu$. The representative Feynman diagrams are shown in Fig.~\ref{fey1}.
\begin{figure}[htb]\vspace{0.5cm}
\begin{center}
\centerline{\epsfxsize=16cm \epsffile{fig1}}
\vspace{-17cm}
\caption{Representative Feynman diagrams for the associated $tH$ process~(left) and the FCNC decay of the top pair production process~(right). Here $q=u,c$.}
\label{fey1}
\end{center}
\end{figure}
For this channel, the typical signal is exactly two same-sign leptons plus at least three jets, with at least one jet identified as a $b$-jet, and missing transverse energy. The main backgrounds are $t\bar{t}V$ ($V=W, Z$), $W^{+}W^{+}jj$ and $W^{+}Zjj$. The $t\bar{t}$ process, which has a large cross section, may also contribute to the background if a same-sign lepton pair arises from a $B$-hadron semi-leptonic decay inside the $b$-jet.
Other backgrounds from $t\bar{t}H$, $t\bar{t}t\bar{t}$, tri-boson events and $tHj$ are neglected because their cross sections are all negligible after applying the selection cuts.
The cross sections of the dominant backgrounds at Leading Order (LO) are adjusted to Next-to-LO (NLO) by means of
$K$-factors, which are 1.04 for $W^{+}W^{+}jj$~\cite{wwjj}, 1.24 for $t\bar{t}W$~\cite{nlo-ttw} and 1.39 for $t\bar{t}Z$~\cite{nlo-ttz}. The dominant $t\bar{t}$ background is normalized to the NNLO QCD cross section of
953.6 pb~\cite{1303.6254}. For the $tH$ production cross section, the $K$-factor is taken as 1.5 at the 14 TeV LHC~\cite{prd86-094014}.
The decay chain $H\to WW^{\ast}\to \ell \nu jj$ may result in soft leptons and light jets, especially when they come from an off-shell $W$ boson. To analyze the signal sensitivity, we thus employ the
following basic cuts to select the events:
\begin{itemize}
\item
Basic cuts: $p_{T}(\ell) > 10 \rm ~GeV$, $p_{T}(j, b) > 15 \rm ~GeV$, $|\eta_{\ell, j, b}|<2.5$, where $\ell=e, \mu$.
\end{itemize}
\begin{figure}[htb]
\begin{center}
\centerline{\hspace{2.0cm}\epsfxsize=11cm\epsffile{fig-2a}
\hspace{-3.0cm}\epsfxsize=11cm\epsffile{fig-2b}}
\centerline{\hspace{2.0cm}\epsfxsize=11cm\epsffile{fig-2c}
\hspace{-3.0cm}\epsfxsize=11cm\epsffile{fig-2d}}
\caption{Normalized distributions for the signals and the backgrounds.}
\label{th-2l}
\end{center}
\end{figure}
In order to choose appropriate kinematic cuts, we plot in Fig.~\ref{th-2l} examples of kinematic distributions for the signal and backgrounds. Based on these distributions, we impose a further set of cuts; a schematic implementation of the full selection is sketched after the list.
\begin{itemize}
\item
Cut-1: Exactly two same-sign leptons ($N(\ell^{+})=2$) with $p_T(\ell_{1})>20$ GeV and $p_T(\ell_{2})>10$ GeV~($\ell_{1}$ and $\ell_{2}$ denote the higher and lower $p_T$ lepton, respectively) plus exactly one $b$-tagged jet ($N(b)= 1$). To remove contamination from hadron decay chains including $\ell^{+}\ell^{-}$ pairs and the $Z$ boson, we require the dilepton invariant mass to be larger than 12 GeV and $|M_{\ell\ell}-m_Z| > 10$ GeV.
\item
Cut-2: At least two jets in the events are required to be successfully reconstructed, i.e., $N(j)\geq 2$. Among those reconstructed jets, there is at least one pair of jets which could come
from a $W$ boson, either on-shell or off-shell. Thus the invariant mass of the $W$ boson candidate is required to satisfy $M_{jj}<90$ GeV.
\item
Cut-3: The invariant mass $M_{\ell_{2}jj}$ is required to be smaller than 120 GeV.
\item
Cut-4: Since the first lepton, $\ell_{1}$, is assumed to originate from the leptonically decaying top quark, the invariant mass of the $b$-jet and the leading lepton should be $M_{b\ell_{1}}< 140$ GeV.
\item
Cut-5: The scalar sum of transverse momenta, $H_{T}$, is required to be smaller than 250 GeV.
\end{itemize}
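The full selection can be summarized in a few lines of code; the sketch below is schematic only (events are assumed to be given as dictionaries of $(E,p_x,p_y,p_z)$ four-vectors, which is an assumption of the sketch and not the actual DELPHES/MadAnalysis5 data format).
\begin{verbatim}
import itertools

def pt(p):
    """Transverse momentum of an (E, px, py, pz) four-vector."""
    return (p[1]**2 + p[2]**2) ** 0.5

def inv_mass(*parts):
    """Invariant mass of a set of (E, px, py, pz) four-vectors."""
    E, px, py, pz = (sum(p[i] for p in parts) for i in range(4))
    return max(E**2 - px**2 - py**2 - pz**2, 0.0) ** 0.5

def passes_ss2l(ev, m_Z=91.1876):
    """Apply Cut-1 ... Cut-5; 'jets' holds the light (non-b) jets."""
    lep, bjets, jets = ev['leptons'], ev['bjets'], ev['jets']
    # Cut-1: two same-sign leptons, one b-jet, pT thresholds, mass vetoes
    if len(lep) != 2 or len(bjets) != 1:
        return False
    if pt(lep[0]) <= 20 or pt(lep[1]) <= 10:
        return False
    m_ll = inv_mass(lep[0], lep[1])
    if m_ll <= 12 or abs(m_ll - m_Z) <= 10:
        return False
    # Cut-2: >= 2 jets with a (possibly off-shell) W candidate, M_jj < 90 GeV
    w_pairs = [(a, b) for a, b in itertools.combinations(jets, 2)
               if inv_mass(a, b) < 90]
    if len(jets) < 2 or not w_pairs:
        return False
    j1, j2 = w_pairs[0]  # simple choice; a real analysis would pick a best pair
    # Cut-3 and Cut-4: Higgs-side and top-side invariant masses
    if inv_mass(lep[1], j1, j2) >= 120 or inv_mass(bjets[0], lep[0]) >= 140:
        return False
    # Cut-5: scalar sum of transverse momenta
    return ev['HT'] < 250
\end{verbatim}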
\begin{table}[htb]
\centering
\caption{The cut flow of the cross sections (in fb) for the signal and SM backgrounds for the SS2L channel. The coupling parameters are taken as
$\kappa_{tuH}=0.1$ or $\kappa_{tcH}=0.1$ while fixing the other to zero. \label{cutflow-1}}
\vspace{0.2cm}
\begin{tabular}{p{2.0cm}<{\centering} p{1.5cm}<{\centering} p{1.5cm}<{\centering} p{1.8cm}<{\centering} p{0.5cm}<{\centering} p{1.5cm}<{\centering} p{1.5cm}<{\centering} p{1.5cm}<{\centering} p{1.0cm}<{\centering}}
\toprule[1.5pt]
\multirow{2}{*}{Cuts}& \multicolumn{3}{c }{Signal}&&\multicolumn{4}{c}{Backgrounds} \\ \cline{2-4} \cline{6-9}
& $ug$ & $cg$ & $t\bar{t}\to tHq$ &&$t\bar{t}V$ & $WWjj$ & $WZjj$ & $t\bar{t}$ \\ \cline{1-9} \midrule[1pt]
Basic cuts & 3.12&0.34&3.77&&6.73&6.42&20.9&61004 \\
Cut 1 & 0.48&0.056&0.69&&0.85&0.21&0.25&6.52\\
Cut 2 &0.225&0.027&0.34&&0.27&0.04&0.046&2.54\\
Cut 3 & 0.18&0.022&0.28&&0.092&0.016&0.011&1.7\\
Cut 4 & 0.15&0.019&0.24&&0.058&0.009&0.0063&1.36\\
Cut 5 &0.14&0.017&0.21&&0.048&0.007&0.005&1.16\\
\bottomrule[1.5pt]
\end{tabular}
\end{table}
The effects of the cuts on the signal and background processes are illustrated in Tab.~\ref{cutflow-1} for the SS2L channel, where the anomalous coupling parameters are taken as
$\kappa_{tuH}=0.1$ or $\kappa_{tcH}=0.1$ while fixing the other to zero.
From Tab.~\ref{cutflow-1} we can see that, after all these cuts, the $t\bar{t}$ background
for the SS2L channel, with fake leptons from heavy-flavor jets or charge mis-identifications,
can still be significant.
Obviously, the non-prompt backgrounds may also be significant; here non-prompt leptons come from heavy-flavor decays, mis-identified hadrons, muons from light-meson decays or electrons from unidentified photon conversions inside jets. Recently, the CMS collaboration searched for SS2L signatures~\cite{epjc77-578} and found that the overall non-prompt backgrounds are about 1.5 times the $t\bar{t}W$ background after all cuts. These non-prompt backgrounds are not properly modeled in our MC simulations. Therefore, for simplicity, we add to the overall background a non-prompt component equal to
1.5 times the $t\bar{t}W$ background~\cite{epjc77-578} after the selection cuts. Accounting for the theoretical and experimental systematic uncertainties on the background predictions would certainly improve the reliability of the results, yet for simplicity they are neglected in our simulation.
\subsection{Analysis of the 3L channel}
Next, we consider the final states including 3L via the following processes:
\beq\label{signal3l}
pp&\to& t(\to W^{+}b\to \ell^{+}\nu b)H(\to WW^{\ast}\to \ell^{+}\nu \ell^{-}\bar{\nu}),\\
pp&\to& t(\to W^{+}b\to \ell^{+}\nu b)\bar{t}(\to Hq\to WW^{\ast}q\to \ell^{+}\nu \ell^{-}\bar{\nu}q),
\eeq
where $\ell =e, \mu$.
The dominant SM backgrounds are $t\bar{t}V$ ($V=W, Z$), $t\bar{t}H$, $WZ+$ jets and $t\bar{t}$. The multi-jet backgrounds~(where jets can fake electrons) are not included since they are negligible in multi-lepton analyses~\cite{14067830}.
The pre-selection cuts are taken as follows:
there must exist exactly three isolated leptons ($\ell=e, \mu$) and exactly one $b$-tagged jet with $p_{T}(\ell_{1}) > 20 \rm ~GeV$, $p_{T}(\ell_{2,3}) > 10 \rm ~GeV$, $p_{T}(j, b) > 20 \rm ~GeV$, $\slashed E_T> 100 \rm ~GeV$ and $|\eta_{\ell, j, b}|<2.5$.
These cuts can strongly reduce the $t\bar{t}$ background and di-boson components.
\begin{figure}[htb]
\begin{center}
\vspace{0.5cm}
\centerline{\hspace{2.0cm}\epsfxsize=11cm\epsffile{fig-3a}\hspace{-3.0cm}\epsfxsize=11cm\epsffile{fig-3b}}
\caption{Normalized invariant mass distributions of $M_{\ell_{2}\ell_{3}}$ (left) and $M_{b\ell_{1}}$ (right).}
\label{mh3l}
\end{center}
\end{figure}
In Fig.~\ref{mh3l}, we show the invariant mass distributions of $M_{\ell_{2}\ell_{3}}$ and $M_{b\ell_{1}}$ for the signal and backgrounds at the 14 TeV LHC. To remove contamination from hadron decay chains including $\ell^{+}\ell^{-}$ pairs and resonant $Z$ bosons, we impose the invariant mass cut
\begin{itemize}
\item $12~{\rm GeV} < M_{\ell_{2}\ell_{3}}< 55~{\rm GeV}$.
\end{itemize}
Similarly, the invariant mass of the $b$-jet and the leading lepton, $M_{b\ell_{1}}$, should be smaller than 140 GeV. The effects of the cuts on the signal and background processes are illustrated in Tab.~\ref{cutflow-2} for the 3L channel. One can see that significant backgrounds also come from the top pair production process with fake leptons or charge mis-identifications.
\begin{table}[htb]
\begin{center}
\caption{The cut flow of the cross sections (in fb) for the signal and background processes for the 3L channel. \label{cutflow-2}}
\vspace{0.2cm}
\begin{tabular}{p{2.0cm}<{\centering} p{1.5cm}<{\centering} p{1.5cm}<{\centering} p{2.0cm}<{\centering} p{1.5cm}<{\centering} p{1.5cm}<{\centering} p{1.5cm}<{\centering} p{2.0cm}<{\centering}}
\toprule[1.5pt]
\multirow{2}{*}{Cuts}& \multicolumn{3}{c}{Signals}&\multicolumn{4}{c}{Backgrounds} \\ \cline{2-8}
& $ug$ & $cg$ & $t\bar{t}\to tHq$ & $t\bar{t}$& $t\bar{t}V$ & $WZjj$ & $t\bar{t}H$ \\ \cline{1-8}
\midrule[1pt]
Basic cuts & 1.39&0.17&2.05&21843&1.85&46.2&0.025\\
After cuts & 0.14&0.018&0.106&0.23&0.024&0.021&$1.7\times 10^{-5}$\\
\bottomrule[1.5pt]
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{center}
\vspace{-0.5cm}
\centerline{\epsfxsize=9cm \epsffile{fig4a}\hspace{-1.0cm}\epsfxsize=9cm \epsffile{fig4b}}
\caption{The $3\sigma$ contour plots for the signal in the ${\cal L}_{\rm int}-\kappa_{tqH}$ plane for the SS2L (left) and 3L (right) channels at the 14 TeV LHC. }
\label{ss3}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\vspace{-0.5cm}
\centerline{\epsfxsize=9cm \epsffile{fig5a}\hspace{-1.0cm}\epsfxsize=9cm \epsffile{fig5b}}
\caption{The $5\sigma$ contour plots for the signal in the ${\cal L}_{\rm int}-\kappa_{tqH}$ plane for the SS2L (left) and 3L (right) channels at the 14 TeV LHC. }
\label{ss5}
\end{center}
\end{figure}
Using the Poisson formula
$SS=\sqrt{2{\cal L}_{\rm int}[(S+B)\ln(1+S/B)-S]}$~\cite{ss} we estimate the Signal Significance ($SS$) with fixed coupling parameters $\kappa_{tqH}$ and a given integrated luminosity ${\cal L}_{\rm int}$.
In Figs.~\ref{ss3} and \ref{ss5}, we plot the contours of $SS=3$ and $SS=5$, respectively, for two channels in the plane of ${\cal L}_{\rm int}-\kappa_{tqH}$. It is clear that, for an integrated luminosity of 3000 fb$^{-1}$, the FCNC couplings $\kappa_{tuH}~(\kappa_{tcH})$ can be probed to 0.045~(0.052) and 0.035~(0.049) at $3\sigma$ statistical sensitivity for the SS2L and 3L channels, respectively.
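For reference, the significance formula and the $\kappa^{2}$ scaling of the signal can be reproduced with a short script. The sketch below uses the post-cut SS2L cross sections of Tab.~\ref{cutflow-1}; the non-prompt component is approximated as 1.5 times the $t\bar{t}V$ entry, since the table quotes $t\bar{t}V$ rather than $t\bar{t}W$ alone (an assumption of the sketch). It lands close to the quoted $3\sigma$ reach.
\begin{verbatim}
import math

def significance(lum_fb, s_fb, b_fb):
    """SS = sqrt(2 L [(S+B) ln(1+S/B) - S]), cross sections in fb."""
    s, b = lum_fb * s_fb, lum_fb * b_fb
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Post-cut SS2L cross sections (fb) for kappa_tuH = 0.1 (Tab. 1)
sig_ref, kappa_ref = 0.14 + 0.21, 0.1               # ug + ttbar -> tHq
bkg = 1.16 + 0.048 + 0.007 + 0.005 + 1.5 * 0.048    # incl. non-prompt approx.

def ss_of_kappa(kappa, lum_fb=3000.0):
    return significance(lum_fb, sig_ref * (kappa / kappa_ref)**2, bkg)

kappa3 = next(k * 1e-4 for k in range(1, 2000) if ss_of_kappa(k * 1e-4) >= 3.0)
print("3 sigma reach: kappa_tuH ~ %.3f" % kappa3)   # close to the quoted 0.045
\end{verbatim}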
After neglecting the masses of light quarks, the branching ratio of $t \to qH$ is approximately given by \cite{jhep-1407-046,nlo}
\begin{equation}
Br(t \to qH) = \frac{\kappa^{2}_{tqH}}{\sqrt{2} m^2_t G_F}\frac{(1-x^2_h)^2}{(1-x^2_W)^2 (1+2x^2_W)}\lambda_{QCD} \simeq 0.58\kappa_{tqH}^{2},
\label{br}
\end{equation}
in terms of the Fermi constant $G_F$, with $x_i=m_i/m_t~(i=W,\ h)$ and $\lambda_{QCD}$ denoting the NLO QCD correction factor~\cite{nlo}.
In our numerical calculation,
the relevant SM input parameters are taken as~\cite{pdg}:
\begin{align}
m_H&=125{\rm ~GeV}, \quad m_t=173.1{\rm ~GeV}, \quad m_W=80.379{\rm ~GeV},\\ \nonumber
m_Z&=91.1876{\rm ~GeV}, \quad \alpha_{s}(m_Z)=0.1185, \quad G_F=1.166370\times 10^{-5}\ {\rm GeV^{-2}}.
\end{align}
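As a numerical cross-check, the prefactor in eq.~(\ref{br}) can be evaluated directly from these inputs (a sketch; the numerical value of $\lambda_{QCD}$ is not quoted in the text, so an NLO QCD correction factor of about 1.1 is assumed here).
\begin{verbatim}
import math

m_H, m_t, m_W = 125.0, 173.1, 80.379   # GeV
G_F = 1.166370e-5                      # GeV^-2
lam_QCD = 1.1                          # assumed NLO QCD correction factor

x_h, x_W = m_H / m_t, m_W / m_t
prefactor = (1.0 / (math.sqrt(2.0) * m_t**2 * G_F)
             * (1.0 - x_h**2)**2
             / ((1.0 - x_W**2)**2 * (1.0 + 2.0 * x_W**2))
             * lam_QCD)
print("Br(t -> qH) ~ %.2f kappa^2" % prefactor)   # ~0.58 kappa^2
\end{verbatim}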
Using eq.~(\ref{br}), the limits can be translated into constraints on the branching fractions of rare top decays. The $3\sigma$ CL upper limits on $Br(t\to qH)$ are about $Br(t\to uH)=1.17\times 10^{-3}$ and $Br(t\to cH)=1.56\times 10^{-3}$ for the SS2L channel, and $Br(t\to uH)=7.1\times 10^{-4}$ and $Br(t\to cH)=1.39\times 10^{-3}$ for the 3L channel. The projected limits from different
channels are summarized in Tab.~\ref{SBaftercuts_1}. We can see from the table that our results are comparable with the sensitivity limits at the HL-LHC, such as $Br(t\to uH)<0.036\%$ via the $H\to \gamma\gamma$ channel~\cite{t6}, $Br(t\to uH)<0.05\%$ via the multi-lepton channel and $Br(t\to uH)<0.02\%$ via the di-photon channel~\cite{13112028}.
\begin{table}[htbp]
\renewcommand\arraystretch{0.9}
\caption{\label{SBaftercuts_1}
The projected limits on $Br(t\to qH)$ from different channels. The last two lines of the table are the results of this work.}
\vspace{-0.5cm}
\begin{center}
\scalebox{0.9}{\begin{tabular}{p{5.0cm}<{\centering} p{8.0cm}<{\centering} p{5.0cm}<{\centering} }
\toprule[1.5pt]
Channels &Data Set &Limits \\ \midrule[1pt]
$tH\to \ell\nu b\tau^+\tau^-$ & LHC, 100 fb$^{-1}$ @ 13 TeV, $95\%$ CL & $Br(t\to uH)< 0.15\%$ \cite{jhep-1407-046} \\
$tH\to \ell\nu b\ell^+\ell^-X$ & LHC, 100 fb$^{-1}$ @ 13 TeV, $95\%$ CL & $Br(t\to uH)< 0.22\%$ \cite{jhep-1407-046} \\
$t\bar{t}\to Wb + Hc\to jj b +\tau\tau c$ & LHC, 100 fb$^{-1}$ @ 13 TeV, $95\%$ CL & $Br(t\to cH)< 0.25\%$ \cite{150908149}\\
$tH\to jjb b\bar{b}$ & LHC, 100 fb$^{-1}$ @ 13 TeV, $95\%$ CL & $Br(t\to uH)< 0.36\%$ \cite{jhep-1407-046}\\
$Wt\to WHq \to \ell\nu b\gamma\gamma q$ & LHC, 3000 fb$^{-1}$ @ 14 TeV, $3\sigma$ & $Br(t\to qH)< 0.24\%$ \cite{t5} \\
$tH\to \ell\nu b\gamma\gamma q$ & LHC, 3000 fb$^{-1}$ @ 14 TeV, $3\sigma$ & $Br(t\to uH)< 0.036\%$ \cite{t6} \\
$t\bar{t}\to WbqH\to \ell\nu b\gamma\gamma q$ & LHC, 3000 fb$^{-1}$ @ 14 TeV, $3\sigma$ & $Br(t\to uH)< 0.23\%$ \cite{t7} \\
$e^{-}p\to \nu_{e}\bar{t}\to \nu_{e}H(\to b\bar{b}) \bar{q}$ & LHeC, 200 fb$^{-1}$ @ 150 GeV $\oplus$ 7 TeV, $95\%$ CL & $Br(t\to qH)< 0.013\%$ \cite{t8} \\
$t\bar{t}\to tqH\to \ell\nu bb\bar{b} q$ & ILC, 3000 fb$^{-1}$ @ 500 GeV, $95\%$ CL & $Br(t\to qH)< 0.112\%$ \cite{t9} \\
$t\bar{t}\to tqH\to \ell\nu bb\bar{b} q$ & ILC~(unpolarized), 500 fb$^{-1}$ @ 500 GeV, $3\sigma$ & $Br(t\to qH)< 0.119\%$ \cite{t10} \\
$t\bar{t}\to tqH\to \ell\nu bb\bar{b} q$ & ILC~(polarized), 500 fb$^{-1}$ @ 500 GeV, $3\sigma$ & $Br(t\to qH)< 0.088\%$ \cite{t10} \\
$t\bar{t}\to Wb + Hq\to \ell \nu b +\gamma\gamma q$ & LHC, 3000 fb$^{-1}$ @ 14 TeV, $95\%$ CL & $Br(t\to qH)< 0.02\%$ \cite{13112028}\\
$t\bar{t}\to Wb + Hq\to \ell \nu b +\ell\ell qX$ & LHC, 3000 fb$^{-1}$ @ 14 TeV, $95\%$ CL & $Br(t\to qH)< 0.05\%$ \cite{13112028}\\
\multirow{2}{*}{This work for the SS2L channel}&\multirow{2}{*}{LHC, 3000 fb$^{-1}$ @ 14 TeV, $3\sigma$ }& $Br(t\to uH)< 0.117\%$, \\
&& $Br(t\to cH)< 0.156\%$ \\
\multirow{2}{*}{This work for the 3L channel}&\multirow{2}{*}{LHC, 3000 fb$^{-1}$ @ 14 TeV, $3\sigma$ }& $Br(t\to uH)< 0.071\%$, \\
&& $Br(t\to cH)< 0.139\%$ \\
\bottomrule[1.5pt]
\end{tabular}}
\end{center}
\end{table}
\section{Conclusions}
The discovery of the 125 GeV Higgs boson opens the door to probing NP processes that involve Higgs boson associated production or decay. In this paper, we have investigated the signal of $tH$ associated production via FCNC $tqH$ couplings and of $t\bar{t}$ production with $\bar{t}\to H\bar{q}$ at the 14 TeV LHC. We focused on the final states including SS2L and 3L signals from the decay modes $t\to b\ell^{+}\nu_{\ell}$, $H\to WW^{\ast}\to \ell^{+}\nu jj $ or $H\to WW^{\ast}\to \ell^{+}\nu \ell^{-}\bar{\nu}$. We have then shown that, at the $3\sigma$ level, the branching ratios can be probed down to $Br(t\to uH) \leq 1.17\times 10^{-3}$ and $Br(t\to cH) \leq 1.56\times 10^{-3}$ for the SS2L channel, and $Br(t\to uH) \leq 7.1\times 10^{-4}$ and $Br(t\to cH) \leq 1.39\times 10^{-3}$ for the 3L channel, at the future HL-LHC.
\begin{acknowledgments}
The work of Y-B Liu is supported by the Foundation of Henan Institute of Science and Technology (Grant no. 2016ZD01) and the China Scholarship Council (201708410324). The work of SM is supported in part by the NExT Institute and the STFC CG ST/L000296/1.
\end{acknowledgments}
\section{Introduction} Quasi-cyclic low-density parity-check (QC-LDPC) codes have gained significant attention due to their strength in facilitating simple encoders and decoders. It is generally accepted~\cite{F04}\cite{LK04} that girth is one important factor (among others) affecting the performance of LDPC codes.
There have been a large number of \emph{specific} methods for constructing QC-LDPC codes with decent or large girth. However, except for a couple of methods, such as \cite{KLF01}\cite{MMY07}\cite{MYP06}, \emph{general} techniques to obtain longer codes with nondecreasing girth from shorter ones are rarely available. In this Letter, a new such method is proposed to design longer compound QC-LDPC codes from shorter base codes, such that the girth of the former is equal to or greater than that of the latter. On the basis of some shorter codes (from the greatest-common-divisor, viz. GCD, method \cite{ZSW13}) with girth eight but unsatisfactory performance, the resultant longer codes not only possess a girth of at least eight but also outperform some well-known specific classes of QC-LDPC codes. The strength of the new method lies in its simplicity (using only simple operations of partition and splicing) and flexibility (applicable to any base code).
\section{Preliminaries} An LDPC code is defined as the null space of a sparse parity-check matrix (PCM). If a PCM has a constant column (resp. row) weight of $J$ (resp. $L$), then it yields a $(J,L)$-regular code. Generally, the PCM of a QC-LDPC code is an array of circulants, which may include zero matrices (ZMs), circulant permutation matrices (CPMs) or summations of distinct CPMs.
An LDPC code with PCM $\textbf{H}$ can be represented by its associated Tanner graph, $TG(\textbf{H})$, and the length of the shortest cycle in the graph is called girth (or girth of the code/PCM). Denote by $g(\textbf{H})$ the girth of $\textbf{H}$. If $\textbf{H}$ is an array of $P\times P$ CPMs/ZMs, then it can be completely determined by $P$ and an exponent matrix $\textbf{E}$ with entries in the set $\{\infty, 0,1,\cdots,P-1\}$, where $\infty$ corresponds to a $P\times P$ ZM and any other entry (say $e$) to a $P\times P$ identity matrix with rows cyclically shifted to the right by $e~(mod~P)$ positions \cite{WZZYS}. For this case, $\textbf{H}$ and its girth can also be denoted by $\textbf{H}(\textbf{E},P)$ and $g(\textbf{E},P)$, respectively.
Let $\textbf{Z}_N=\{0,1,\cdots,N-1\}$. A Latin square of order $N$ \cite{Zhang16} is an $N\times N$ array in which each cell contains a symbol from $\textbf{Z}_N$, such that each
symbol occurs exactly once in each column and each row.
\section{A General Method from Partition and Latin-Style Splicing (PS)} The base PCM $\textbf{H}_0$ is assumed to be an $m\times n$ array of $P\times P$ matrices over $GF(2)$. Clearly, the PCM of a QC-LDPC code is a special case of $\textbf{H}_0$.
Given an integer $N\geq 2$, select $N$ masking matrices $\textbf{M}_k~(0\leq k\leq N-1)$ of size $m\times n$ defined over the integer set $\{0,1\}$, such that the ordinary summation $\sum_{k=0}^{N-1} \textbf{M}_k$ equals an $m\times n$ all-one matrix. Let $\textbf{A}=[a_{i,j}]~(0\leq i,j\leq N-1)$ be a Latin square of order $N$.
From $\textbf{H}_0$, a new $mPN\times nPN$ PCM \textbf{H} can be obtained by
\begin{equation}
\left[
\begin{array}{cccc}
\textbf{H}_0\otimes f(\textbf{M}_{a_{0,0}})& \textbf{H}_0\otimes f(\textbf{M}_{a_{0,1}}) & \cdots & \textbf{H}_0\otimes f(\textbf{M}_{a_{0,N-1}})\\
\textbf{H}_0\otimes f(\textbf{M}_{a_{1,0}})& \textbf{H}_0\otimes f(\textbf{M}_{a_{1,1}}) & \cdots & \textbf{H}_0\otimes f(\textbf{M}_{a_{1,N-1}})\\
\vdots & \vdots & \ddots & \vdots\\
\textbf{H}_0\otimes f(\textbf{M}_{a_{N-1,0}})& \textbf{H}_0\otimes f(\textbf{M}_{a_{N-1,1}}) & \cdots & \textbf{H}_0\otimes f(\textbf{M}_{a_{N-1,N-1}})\\
\end{array}
\right]
\end{equation}
where $f(\textbf{M})$ maps each 0 (resp. 1) in $\textbf{M}$ to a $P\times P$ zero (resp. all-one) matrix over $GF(2)$, and $\otimes $ is an element-by-element multiplication
defined by $x \otimes y=1$ if and only if $x=y=1$.
\emph{Theorem 1}: $g(\textbf{H})\geq g(\textbf{H}_0)$.
\emph{Proof}: According to the definitions of \textbf{M}'s and Latin square \textbf{A}, two adjacent edges within $TG(\textbf{H})$ correspond to two adjacent edges within $TG(\textbf{H}_0)$. Therefore, if there is a cycle of length $2l$ in $TG(\textbf{H})$, then there must exist a corresponding cycle with the same length in $TG(\textbf{H}_0)$. This completes the proof.
It is easily seen that the row/column weight distribution of $\textbf{H}$ is the same as that of $\textbf{H}_0$, and the designed rate for $\textbf{H}$ equals that for $\textbf{H}_0$. For $P=1$, $\textbf{H}_0$ corresponds to an LDPC code not necessarily with a special structure, that is to say, $\textbf{H}_0$ can be an arbitrary binary matrix. This suggests that the new method is applicable to any base code. If $\textbf{H}_0$ is an $m\times n$ array of $P\times P$ CPMs/ZMs, then \textbf{H} (Equ.1) can be described via its exponent matrix as
\begin{equation}
\textbf{E}=
\left[
\begin{array}{llll}
\textbf{E}_0\tilde \otimes \textbf{M}_{a_{0,0}}& \textbf{E}_0\tilde \otimes \textbf{M}_{a_{0,1}} & \cdots & \textbf{E}_0\tilde \otimes \textbf{M}_{a_{0,N-1}}\\
\textbf{E}_0\tilde \otimes \textbf{M}_{a_{1,0}}& \textbf{E}_0\tilde \otimes \textbf{M}_{a_{1,1}} & \cdots & \textbf{E}_0\tilde \otimes \textbf{M}_{a_{1,N-1}}\\
\vdots & \vdots & \ddots & \vdots\\
\textbf{E}_0\tilde \otimes \textbf{M}_{a_{N-1,0}}& \textbf{E}_0\tilde \otimes \textbf{M}_{a_{N-1,1}} & \cdots & \textbf{E}_0\tilde \otimes \textbf{M}_{a_{N-1,N-1}}\\
\end{array}
\right]
\end{equation}
where $\textbf{E}_0$ is the exponent matrix of $\textbf{H}_0$.
The notation $\tilde \otimes$ stands for an element-by-element operation defined by $x \tilde \otimes~0=\infty$, and $x \tilde \otimes~1=x$, where $x\in \{\infty, 0,1,\cdots,P-1\}$.
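The construction in Equ. (2) is straightforward to implement; the following Python/NumPy sketch encodes the $\infty$ exponent (i.e. a $P\times P$ ZM) as $-1$ and also generates the cyclic Latin square $[a_{i,j}]=[i-j~(mod~N)]$ used later in the simulations.
\begin{verbatim}
import numpy as np

INF = -1   # encodes the "infinity" exponent, i.e. a P x P zero matrix

def tilde_otimes(E0, M):
    """Element-wise: x (tilde-otimes) 0 = INF and x (tilde-otimes) 1 = x."""
    return np.where(M == 1, E0, INF)

def cyclic_latin_square(N):
    """Latin square [a_{i,j}] = [i - j (mod N)]."""
    return [[(i - j) % N for j in range(N)] for i in range(N)]

def compound_exponent_matrix(E0, masks, A):
    """Equ. (2): block (i, j) of E is E0 tilde-otimes M_{a_{i,j}}."""
    N = len(A)
    return np.vstack([np.hstack([tilde_otimes(E0, masks[A[i][j]])
                                 for j in range(N)]) for i in range(N)])
\end{verbatim}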
\emph{A Special Case}: Let $N=2$ and $n=Km$. Suppose that both $\textbf{M}_0$ and $\textbf{M}_1$ are $1\times K$ arrays of $K$ identical $m\times m$ matrices. Let $\textbf{M}_0=[\textbf{X},\cdots,\textbf{X}]$, where each element in the lower-triangle (including diagonal) of \textbf{X} is '1', and '0' elsewhere. Define $\textbf{M}_1=\textbf{1}_{m\times n}-\textbf{M}_0$. Then, $\textbf{E}$ can be obtained from $\textbf{E}_0$ by
\begin{equation}
\left[
\begin{array}{llll}
\textbf{E}_0\tilde \otimes \textbf{M}_{a_{0,0}} & \textbf{E}_0\tilde \otimes \textbf{M}_{a_{0,1}}\\
\textbf{E}_0\tilde \otimes \textbf{M}_{a_{1,0}} & \textbf{E}_0\tilde \otimes \textbf{M}_{a_{1,1}}\\
\end{array}
\right],
\textbf{A}=
\left[
\begin{array}{ll}
a_{0,0} & a_{0,1}\\
a_{1,0} & a_{1,1}\\
\end{array}
\right]=
\left[
\begin{array}{ll}
0 & 1\\
1 & 0\\
\end{array}
\right]
\end{equation}
Clearly, the above \textbf{E} is equivalent to that investigated in \cite{TLZL06} (Section VII). It is pointed out \cite{TLZL06} that $g(\textbf{E},P)$ is ensured to be at least 6, provided $g(\textbf{E}_0,P)\geq 6$. The method in this Letter, however, is more general in the sense that $N$ is not limited to 2, and $n$ and $m$ can be selected arbitrarily. Moreover, given an arbitrary $\textbf{E}_0$ with $g(\textbf{E}_0,P)=2g_0$, Theorem 1 guarantees an exponent matrix \textbf{E} with $g(\textbf{E},P)\geq 2g_0$.
\emph{Example 1}: Given an exponent matrix
\begin{equation}
\textbf{E}_0=
\left[
\begin{array}{llll}
0 & 0 & 0& 0\\
0 & 1 & 3& 4\\
0 & 2 & 6& 5\\
\end{array}
\right]
\end{equation}
it is easily checked that $g(\textbf{E}_0,P)=4$ for any $P\geq 7$, as there is an equation $(1-2)+(5-4)=0~(mod~P)$ regardless of $P$. Set $N=2$, and define
\begin{equation}
\textbf{M}_0=
\left[
\begin{array}{llll}
1 & 1 & 1& 1\\
1 & 1 & 1& 1\\
1 & 0 & 0& 1\\
\end{array}
\right],
\textbf{A}=
\left[
\begin{array}{ll}
a_{0,0} & a_{0,1}\\
a_{1,0} & a_{1,1}\\
\end{array}
\right]
=
\left[
\begin{array}{ll}
0 & 1\\
1 & 0\\
\end{array}
\right]
\end{equation}
Then, $\textbf{M}_1=\textbf{1}_{3\times 4}-\textbf{M}_0$.
Thus, the new method yields
\begin{equation}
\textbf{E}=
\left[
\begin{array}{ll}
\textbf{E}_0\tilde \otimes \textbf{M}_{0} & \textbf{E}_0\tilde \otimes \textbf{M}_{1}\\
\textbf{E}_0\tilde \otimes \textbf{M}_{1} & \textbf{E}_0\tilde \otimes \textbf{M}_{0}\\
\end{array}
\right]
=
\left[
\begin{array}{llll|llll}
0 &0 & 0 & 0 & & & & \\
0 &1 & 3& 4 & & & & \\
0 & & & 5 & & 2 & 6 & \\
\hline
& & & & 0 & 0 & 0 & 0\\
& & & & 0 & 1 & 3 & 4\\
& 2 & 6 & & 0& & & 5\\
\end{array}
\right]
\end{equation}
It is readily verified that $g(\textbf{E},P)=8$ for any $P\geq 7$.
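This can be confirmed numerically by expanding $\textbf{E}$ into its binary PCM and computing the girth of the Tanner graph by breadth-first search (a brute-force sketch reusing \texttt{compound\_exponent\_matrix} from the previous snippet; the per-root BFS scan with a minimum over all roots is exact for bipartite graphs such as Tanner graphs).
\begin{verbatim}
from collections import deque
import numpy as np

def expand(E, P):
    """Expand an exponent matrix (INF = -1 for ZMs) into its binary PCM."""
    m, n = E.shape
    H = np.zeros((m * P, n * P), dtype=np.uint8)
    I = np.eye(P, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if E[i, j] >= 0:   # CPM: identity cyclically shifted right
                H[i*P:(i+1)*P, j*P:(j+1)*P] = np.roll(I, E[i, j], axis=1)
    return H

def tanner_girth(H):
    """Shortest cycle of the Tanner graph of H, via BFS from every vertex."""
    m, n = H.shape
    adj = [[] for _ in range(m + n)]
    for i, j in zip(*np.nonzero(H)):
        adj[i].append(m + j)
        adj[m + j].append(i)
    best = float('inf')
    for s in range(m + n):
        dist, par = {s: 0}, {s: -1}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w], par[w] = dist[v] + 1, v
                    q.append(w)
                elif w != par[v]:
                    best = min(best, dist[v] + dist[w] + 1)
    return best

E0 = np.array([[0, 0, 0, 0], [0, 1, 3, 4], [0, 2, 6, 5]])
M0 = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [1, 0, 0, 1]])
E = compound_exponent_matrix(E0, [M0, 1 - M0], [[0, 1], [1, 0]])
print(tanner_girth(expand(E, 7)))   # 8, versus 4 for E0 itself
\end{verbatim}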
\section{Comparison with existing methods} There exist some general methods to construct longer compound LDPC codes with nondecreasing girth from one or several shorter base code(s). The \emph{column splitting} method \cite{KLF01} is capable of improving the flexibility of the code length and increasing the code rate by splitting each column of a PCM with large column weight into several columns. The method in \cite{MMY07} yields a longer QC-LDPC code with girth at least $2g$~($2g=6$ or $8$) from two shorter QC-LDPC codes both with girth at least $2g$. This method extends the row weight of the QC-LDPC code (hence the code rate is increased), but only ensures a girth not exceeding eight even if both base codes have a girth larger than eight. The Chinese-remainder-theorem (CRT) method \cite{MYP06} can be employed to construct longer QC-LDPC codes by combining two shorter base QC-LDPC codes. The girth of the resultant code is not smaller than the maximal girth of the two base codes. Obviously, the new method is different from all the aforementioned methods.
Besides, if the Latin square $\textbf{A}$ is selected as $[a_{i,j}]=[i-j~(mod~N)]$, then Equ. (1) reduces to the PCM of spatially coupled LDPC block codes \cite{MLC15}. The base matrices of the terminated ensembles and the tail-biting counterpart (Equ. (8) and Equ. (10) in \cite{MLC15}) are the same as the partial matrix and the whole matrix of \textbf{E}, respectively. However, since \textbf{A} generally possesses many forms different from the above choice, \textbf{E} and \textbf{H} are generally not identical to those of the spatially coupled LDPC codes.
\section{Simulations} The application of the new general method is illustrated by several examples. Set $\textbf{A}$ as $[a_{i,j}]=[i-j~(mod~N)]$. Given a matrix $\textbf{M}_0$, set $\textbf{M}_1=\textbf{1}_{m\times n}-\textbf{M}_0$, and let $\textbf{M}_k=\textbf{O}~(2\leq k\leq N-1)$ if $N\geq 3$. The matrix $\textbf{M}_0$ can be selected in different ways, of which three options are considered in this Letter. (a) For the diagonal (D) partition, let $\textbf{M}_0=[\textbf{X},\cdots, \textbf{X}]$, where $\textbf{X}$ is an $m\times m$ matrix with 0's on the diagonal and 1's elsewhere. (b) For the triangle (T) partition, $\textbf{M}_0=[\textbf{X},\cdots, \textbf{X}]$, where $\textbf{X}(i,j)=1$ for $0\leq j\leq i~(0\leq i\leq m-1)$ and 0 elsewhere; and (c) for the Hamming-like (H) partition, $\textbf{M}_0$ is a 0-1 $m\times n$ matrix in which the columns are as distinct as possible. For the simulations, a BI-AWGN channel with BPSK modulation and the sum-product algorithm (SPA) with 50 iterations are assumed.
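The D and T masks are one-liners (a sketch; $n=Km$ is assumed here, as in the examples below, while the H mask is chosen by hand).
\begin{verbatim}
import numpy as np

def mask_D(m, K):
    """Diagonal partition: X has 0's on the diagonal and 1's elsewhere."""
    X = 1 - np.eye(m, dtype=int)
    return np.hstack([X] * K)

def mask_T(m, K):
    """Triangle partition: X(i, j) = 1 for 0 <= j <= i, 0 elsewhere."""
    X = np.tril(np.ones((m, m), dtype=int))
    return np.hstack([X] * K)
\end{verbatim}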
\emph{Example 2}: From the GCD construction \cite{ZSW13}, $\textbf{E}_0=[0,1,L,L+1]^T\cdot[0,1,\cdots, L-1]~(mod~P)$ is chosen as the base exponent matrix, where $P=64$ and $L=8$. Set $N=4$. For H partition, $\textbf{M}_0$ can be selected as
\begin{equation}
\textbf{M}_0=
\left[
\begin{array}{llllllll}
1 & 0& 1& 1& 1 &0 & 0 & 0\\
1 & 1& 0& 1& 0 &1 & 0 & 0\\
1 & 1& 1& 0& 0 &0 & 1 & 0\\
0 & 1& 1& 1& 0 &0 & 0 & 1\\
\end{array}
\right]
\end{equation}
From the D, T and H partitions, three PS $(4,8)$-regular QC-LDPC codes are obtained with girth at least eight. For comparison purposes, a girth-8 GCD code and a girth-6 quad.-cong. code, both (4,8)-regular, are generated. The GCD code is obtained by setting $\textbf{E}=[0,1,L,L+1]^T\cdot[0,1,\cdots,L-1]~(mod~P)$ where $P=256$ and $L=8$, and the quad.-cong. code is randomly constructed by the method \cite{HHY08} with a prime CPM size $257$. From Fig.1, it is observed that the GCD code performs the worst, the PS-D and PS-T codes better, and the PS-H code the best, which can be partly explained by the fact that, during the simulation, codewords with weight 14, 16, 16 are found for the codes from the GCD, PS-D and PS-T methods, respectively, while no codewords with small weight occur for the PS-H code. Moreover, the PS-H code noticeably outperforms the well-known random quad.-cong. code.
\begin{figure}[htb]
\centering
\includegraphics[width=10cm,bb=85 270 480 570]{Fig2048.eps}
\caption{Performance comparison of (4,8)-regular QC-LDPC codes from PS, GCD and Quad Cong. methods}
\label{f4}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=10cm,bb=85 270 480 570]{Fig5184x.eps}
\caption{Performance comparison of (4,12)-regular QC-LDPC codes from PS, Quad Cong. and QC-PEG methods}
\label{f7x}
\end{figure}
\emph{Example 3}: According to the GCD construction \cite{ZSW13}, set $\textbf{E}_0=[0,1,L,L+1]^T\cdot[0,1,\cdots, L-1]~(mod~P)$ as the base exponent matrix, where $P=144$ and $L=12$. Let $N=3$, and for H partition set
\begin{equation}
\textbf{M}_0=
\left[
\begin{array}{llllllllllll}
1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\
1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\
1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1\\
\end{array}
\right]
\end{equation}
From the T and H partitions, two PS $(4,12)$-regular QC-LDPC codes are obtained with girth at least eight. A (4,12)-regular girth-8 GCD code is also generated by the exponent matrix $[0,1,L,L+1]^T\cdot[0,1,\cdots, L-1]~(mod~P)$, where $P=432$ and $L=12$. A (4,12)-regular quad.-cong. code is randomly generated with girth 6; besides, a (4,12)-regular QC-PEG code \cite{LK04} is obtained with girth 8, the PCM of which is a $12\times 36$ array composed of 142 CPMs, 289 ZMs and one summation of two CPMs. We observe in Fig.2 that the PS-H code performs better than the quad.-cong. code and the QC-PEG code, while the GCD and PS-T counterparts both suffer from an error floor, partly due to their relatively poor distance properties (during the simulation, codewords with weight 14 and 12 are found for the GCD code and the PS-T code, respectively, while no codewords with small weight are found for the PS-H code).
\section{Conclusion} By non-overlapping partition and Latin-square-style splicing, a general girth-preserving method (PS) is proposed to yield longer LDPC codes from shorter base ones. Numerical results show that the codes generated by combining the PS method (using the Hamming-like partition) with some poor base codes perform very well compared with the well-known QC-PEG and quad.-cong. codes. Finally, it should be pointed out that, by applying the new method to some explicitly constructed base codes with large girth (e.g. \cite{KNCS06}, \cite{GM12}), type-1 QC-LDPC codes with large girth such as those in \cite{WZZYS}\cite{ZM17} can be easily constructed.
\IEEEpeerreviewmaketitle
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work was supported by the National Natural Science Foundation of China under grant 61471294.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
Ferrofluids are among the wide variety of synthetic materials created in the twentieth century. A ferrofluid is a liquid that exhibits ferromagnetic properties, i.e. it becomes strongly magnetizable in the presence of an external magnetic field. Such a material does not exist naturally in the environment; it was created in 1963 by NASA \cite{stephen1965low} with a very specific goal: to be used as a fuel for rockets in an environment without gravity, hence the necessity for it to be pumped by applying a magnetic field. \\
Ferrofluids are colloids (mixtures in which one substance of microscopically dispersed insoluble particles is suspended throughout another substance) made of nanoscale ferromagnetic particles of a compound containing iron, suspended in a carrier fluid. They are magnetically soft, which means that they do not retain magnetization once there is no external magnetic field acting on them. \\
The versatility of such a material and its peculiar property of being controllable via a magnetic field make it suitable for a whole variety of applications: ferrofluids are for instance used in loudspeakers in order to cool the coil and damp the cone \cite{Miwa2003}, as seals in magnetic hard drives \cite{raj1982ferrofluid}, in order to reduce friction \cite{Huang2011} or to enhance heat transfer \cite{LAJVARDI20103508, Sheikholeslami2015}. We refer the interested reader to \cite{Zahn2001}, the introduction of \cite{NST2016} and references therein for a survey of potential applications of ferrofluids. \\
There are two systems of partial differential equations which are generally accepted as models for the motion of ferrofluids, each known under the name of its developer: the Shliomis model \cite{shliomis1975non} and the Rosensweig model \cite{Rosensweig}. The mathematical analysis of such systems is very recent: in \cite{AH_Shilomis_weak, AH_Shilomis_strong, AH_Rosensweing_strong} and \cite{AH_Rosensweing_weak} it is proved that both the Shliomis and the Rosensweig models admit global weak and local strong solutions in bounded, smooth subdomains of $ \mathbb{R}^3 $. The same authors then considered thermal and electrical conductivity as well as steady-state solutions of various ferrofluid systems in \cite{AH12-2, AH12, AH13, AH14, AH15, AH16} and \cite{HHl16}. In \cite{Scrobo_FF2D} and \cite{DS18} it was proved that the Rosensweig system for ferrofluids is globally well-posed in dimension two. \\
In the present work we consider the Bloch-Torrey regularization of the Shliomis system for ferrofluids in the whole three-dimensional space $ \mathbb{R}^3 $
\begin{equation}\label{eq:Shilomis1} \tag{S1}
\left\lbrace
\begin{aligned}
& \rho_0 \pare{\partial_t u + \pare{u\cdot\nabla} u}- \nu \Delta u + \nabla p = \mu_0 \pare{M\cdot \nabla}H +\frac{\mu_0}{2}\ \textnormal{curl}\pare{M\times H}, && \pare{x, t}\in \mathbb{R}^3\times \mathbb{R}_+ ,\\
& \partial_t M + \pare{u\cdot\nabla} M -\sigma \Delta M = \frac{1}{2}\ \pare{ \textnormal{curl}\ u}\times M-\frac{1}{\tau}\pare{M-\chi_0 H} -\beta \ M \times\pare{M\times H}, && \pare{x, t}\in \mathbb{R}^3\times \mathbb{R}_+ ,\\
& \div \pare{H + M}=F , && \pare{x, t}\in \mathbb{R}^3\times \mathbb{R}_+ ,\\
& \div \ u=0, \ \textnormal{curl} \ H=0 , && \pare{x, t}\in \mathbb{R}^3\times \mathbb{R}_+ ,
\end{aligned}
\right.
\end{equation}
proposed by M.~Shliomis in \cite{Shliomis2, Shliomis3}. The function $ u $ represents the linear velocity of the fluid. If we denote by $ H_{\textnormal{ext}} $ the external magnetic field acting on the fluid, $ F = -\div \ H_{\textnormal{ext}} $ will be denoted as the \textit{external magnetic force}. The external magnetic field $ H_{\textnormal{ext}} $ induces a demagnetising field $ H $ and a magnetic induction $ B= H+M $. \\
The parameter $ \sigma > 0 $ comes into play when the diffusion of the spin magnetic moment is not negligible, we refer the reader to \cite{GaspariBloch}, and it indeed has a regularizing effect since in such a regime the system \eqref{eq:Shilomis1} is purely parabolic. The constants $ \rho_0, \nu, \mu_0, \sigma, \tau, \chi_0, \beta $ are positive and have a physical meaning. For the sake of readability we will consider the following normalization
\begin{equation*}
\rho_0= \mu_0 = \beta =1.
\end{equation*}
This assumption is made only in order to simplify the readability of the paper, and does not entail qualitative changes in the behavior of the solutions of \eqref{eq:Shilomis1}.
On the other hand we will consider
\begin{equation*}
\nu, \chi_0, \sigma, \tau >0.
\end{equation*}
We already mentioned why we consider $ \sigma > 0 $, while, $ \nu $ being the kinematic viscosity of a fluid, it is natural to assume it strictly positive. Let us hence now focus our attention on the remaining two physical parameters: $ \tau $ and $ \chi_0 $. The main scope of the present paper is in fact to describe the limit regimes of the solutions of \eqref{eq:Shilomis1} when $ \tau $ and $ \chi_0 $ tend to zero. \\
\begin{itemize}
\item[$ \bf \tau $ :]
The parameter $ \tau $ is called the \textit{entropic relaxation time} of the system \eqref{eq:Shilomis1}, and roughly speaking it describes the average time required by the system \eqref{eq:Shilomis1} to recover a situation of equilibrium once it is perturbed. The average relaxation time of commercial-grade ferrofluids is of the order of
\begin{equation*}
\tau\approx 10^{-9} \ \textnormal{s},
\end{equation*}
whence, considering the smallness of such a factor, it is reasonable to ask what happens to the solutions of \eqref{eq:Shilomis1} when $ \tau \to 0 $.
Despite the number of works on ferrofluid systems mentioned above, there is, to the best of our knowledge, no systematic understanding of what this state of equilibrium might look like. On a formal level, when $ \tau $ is very small the dynamics of the term
\begin{equation*}
\frac{1}{\tau}\pare{M-\chi_0 H},
\end{equation*}
is predominant in the evolution of $ M $, whence what is generally done in the literature is to consider the approximation
\begin{equation}\label{eq:MH_balance_intro}
M\approx \chi_0 H ,
\end{equation}
which, if satisfied, compensates the magnitude of $ \frac{1}{\tau}\pare{M-\chi_0 H} $. The main goal of the present work is hence to provide a first rigorous description of the solutions of \eqref{eq:Shilomis1} in the limit regime $ \tau\to 0 $, and to understand how and to what extent small values of $ \tau $ can have stabilizing effects on the solutions of \eqref{eq:Shilomis1}. In a nutshell, we prove that when $ \tau \to 0 $
\begin{equation}\label{eq:conv_MH_intro}
\pare{M, H} \xrightarrow{\tau\to 0} \pare{\chi_0 G_F, \ G_F},
\end{equation}
where $ G_F $ is a function depending upon the external magnetic field only, while
\begin{equation*}
u\xrightarrow{\tau\to 0} U,
\end{equation*}
where $ U $ is the unique solution of the following Navier-Stokes system with hydrostatic-magnetic pressure
\begin{equation*}
\left\lbrace
\begin{aligned}
& \partial_t U + U \cdot \nabla U -\nu \Delta U = -\nabla \pare{\pi - \chi_0 P_F} , \\
& \div\ U=0,
\end{aligned}
\right.
\end{equation*}
where $ P_F $ depends only on the external magnetic force $ F $, and in particular assumes the following explicit form
\begin{equation*}
P_F = \frac{1}{ 2 \pare{1+\chi_0}^2 } \ \av{\nabla\Delta^{-1} F}^2.
\end{equation*}
The derivation of such magnetic pressure is somewhat surprising and it will be discussed in detail later in the manuscript.
\item[$ \chi_0 $ :] The dimensionless parameter $ \chi_0 $ is called the \textit{magnetic susceptibility} and indicates whether a material is attracted into or repelled out of a magnetic field. If the magnetic susceptibility is greater than zero, the substance is said to be ``paramagnetic''; the magnetization of the substance is higher than that of empty space. If the magnetic susceptibility is less than zero, the substance is ``diamagnetic''; it tends to exclude a magnetic field from its interior. Since ferrofluids are magnetically soft materials, their magnetization is higher than that of the vacuum, hence the motivation that led us to suppose $ \chi_0 >0 $. Experimental results show that for oil-based colloids $ \chi_0\in\bra{0.3 \ , \ 4.3} $, while for water-based colloids $ 0<\chi_0\ll 1 $: water-based ferrofluids are hence almost neutral to external magnetic forces.
\end{itemize}
The results provided and quickly illustrated here above formally justify the physical intuition of how the parameters $ \tau $ and $ \chi_0 $ influence the dynamics of \eqref{eq:Shilomis1}. Rigorously proving such results at a mathematical level is, though, not so immediate. The singular linear perturbation
\begin{equation*}
\frac{1}{\tau}\pare{M-\chi_0 H},
\end{equation*}
which is reminiscent of singular perturbations arising in problems in geophysical fluid mechanics (cf. \cite{LM98, monographrotating, gallagher_schochet} etc.) is in fact of a different nature; it has no definite sign and, more importantly, it depends upon the external magnetic field $ F $. This being the case, the singular perturbation $ \frac{1}{\tau}\pare{M-\chi_0 H} $ does not supply a zero $ L^2 $ energy contribution as happens for rotating fluids (\cite{CDGG2, gallagher_schochet}), compressible fluids (\cite{LM98, DanchinMach, Scrobo_Ngo}) or stratified fluids (\cite{charve1, charve2, Scrobo_Froude_FS, Scrobo_Froude_periodic, Sang_Scrobo_Froude}), whence it is not possible to construct global weak or local strong solutions uniformly in $ \tau > 0 $ by means of energy methods as is done in the examples mentioned above. \\
The way hence to construct a sequence $ \pare{U^\tau}_{\tau\in\pare{0, \tau_0}} $ of solutions of \eqref{eq:Shilomis1} passes through the understanding of the physical properties of the singular perturbation $ \frac{1}{\tau}\pare{M-\chi_0 H} $; in the geophysical fluid dynamics settings mentioned above the singular perturbation typically induces high-frequency oscillations on which it is possible to prove dispersive estimates. In the present case the singular perturbation seems to produce a \textit{damping} effect, but it is not at all clear how such damping acts on the system; the singular perturbation has in fact no definite sign in the unknowns $ u, M, H $ and hence we cannot immediately conclude in this way. \\
The problem is that the unknowns $ M $ and $ H $ are not suitable to describe the system \eqref{eq:Shilomis1} \textit{uniformly in} $ \tau $. One part of the unknown is in fact effectively damped to zero while the other converges toward a stationary state; we must hence find another set of unknowns which makes this structure explicit. If we define
\begin{align*}
\mathcal{P} = 1_{\mathbb{R}^3} - \Delta^{-1}\nabla\div , && \mathcal{Q} = \Delta^{-1}\nabla\div,
\end{align*}
it is rather easy to deduce from the magnetostatic equation $ \div\pare{M+H}=F $ that\footnote{Here we use the fact that $ \textnormal{curl} \ H=0 $. }
\begin{equation}\label{eq:intro:MH_relation}
H = -\mathcal{Q} M +\Delta^{-1}\nabla F .
\end{equation}
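In Fourier variables $ \mathcal{Q} $ is simply the $ L^2 $-orthogonal projection onto gradient vector fields, $ \widehat{\mathcal{Q}u}\pare{\xi}= \xi \pare{\xi\cdot\hat{u}\pare{\xi}}/\av{\xi}^2 $, while $ \mathcal{P}=1-\mathcal{Q} $ projects onto divergence-free fields. For the reader who wishes to experiment with this decomposition, a discrete periodic-box sketch (an illustration only, not used in the proofs) reads:
\begin{verbatim}
import numpy as np

def hodge_Q(u, L=2*np.pi):
    """Q u = grad(Delta^{-1} div u) for a periodic u of shape (3, n, n, n)."""
    n = u.shape[1]
    k = 2*np.pi * np.fft.fftfreq(n, d=L/n)             # wavenumbers
    K = np.stack(np.meshgrid(k, k, k, indexing='ij'))  # shape (3, n, n, n)
    K2 = (K**2).sum(axis=0)
    K2[0, 0, 0] = 1.0                                  # avoid division by zero
    u_hat = np.fft.fftn(u, axes=(1, 2, 3))
    # the factors i from div and grad cancel: Q^u = xi (xi . u^) / |xi|^2
    q_hat = K * (K * u_hat).sum(axis=0) / K2
    q_hat[:, 0, 0, 0] = 0.0                            # zero-mean convention
    return np.real(np.fft.ifftn(q_hat, axes=(1, 2, 3)))

# P u is then u - hodge_Q(u), and div(P u) vanishes spectrally.
\end{verbatim}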
Using the relation \eqref{eq:intro:MH_relation} we can re-write the singular perturbation $ \frac{1}{\tau}\pare{M-\chi_0 H} $ as
\begin{equation}\label{eq:singular_pert_tau}
\frac{1}{\tau}\pare{M-\chi_0 H} = \frac{1}{\tau} \pare{1+\chi_0\mathcal{Q}} M -\frac{\chi_0}{\tau}\Delta^{-1}\nabla F.
\end{equation}
This singular perturbation presents two immediate characteristics which are not present in some classical works on singular perturbation problems (\cite{GS3, GallagherSaint-Raymondinhomogeneousrotating, gallagher_schochet, CDGG2, Scrobo_primitive_horizontal_viscosity_periodic}, the list is far from being exhaustive):
\begin{itemize}
\item if the magnetic susceptibility $ \chi_0 $ is large, which is the case for oil-based ferrofluids as explained above, the operator $ \pare{1+\chi_0\mathcal{Q}} $ need not have a positive sign,
\item the singular perturbation \eqref{eq:singular_pert_tau} is \textit{linear and non-homogeneous}, a case that, to the best of our knowledge, has not yet been treated in the literature.
\end{itemize}
\noindent
Instead, we tailor a specific approach to the problem; applying the operator $ \mathcal{P} $ to the evolution equation of $ M $, and hence as well to the singular perturbation $ \frac{1}{\tau}\pare{M-\chi_0 H} $, we deduce that
\begin{equation*}
\frac{1}{\tau}\mathcal{P}\pare{M-\chi_0 H} = \frac{1}{\tau}\mathcal{P} M.
\end{equation*}
It is hence clear that $ \mathcal{P} M $, the divergence-free part of $ M $, is damped to zero in the evolution of the system \eqref{eq:Shilomis1}. The next very natural step is to compute the second term of the Hodge decomposition of $ \frac{1}{\tau}\pare{M-\chi_0 H} $ which is
\begin{equation}\label{eq:intro:1tauQM}
\frac{1}{\tau}\mathcal{Q}\pare{M-\chi_0 H} = \frac{1+\chi_0}{\tau}\pare{\mathcal{Q} M - \frac{\chi_0}{1+\chi_0}\nabla\Delta^{-1} F }.
\end{equation}
In such a setting we can hence deduce the new formal balance in the limit $ \tau\to 0 $
\begin{equation*}\label{eq:QM_balance_intro}
\mathcal{Q} M \approx \frac{\chi_0}{1+\chi_0}\nabla\Delta^{-1} F,
\end{equation*}
which is much better than the balance \eqref{eq:MH_balance_intro} since now we obtain an asymptotic which depends only on the external magnetic field $ F $ and \textit{not on another unknown}. From the relation \eqref{eq:intro:MH_relation} we can recover the formal limit asymptotic for $ H $ as well. \\
Despite a better understanding of the asymptotics as $ \tau\to 0 $, we have not yet solved the main problem of the mathematical construction of solutions uniformly in $ \tau $: the singular perturbation on the r.h.s. of \eqref{eq:intro:1tauQM} still has no definite sign, and it appears in the system \eqref{eq:Shilomis1} after applying the operator $ \mathcal{Q} $ to the evolution equation of $ M $, i.e.
\begin{equation}\label{eq:intro:equazione_QM}
\partial_t \mathcal{Q} M -\sigma\Delta \mathcal{Q} M + \frac{1+\chi_0}{\tau}\pare{\mathcal{Q} M - \frac{\chi_0}{1+\chi_0}\nabla\Delta^{-1} F } = \text{ Nonlinear terms }.
\end{equation}
We remark at this point though that $ F $ is not an unknown of the problem. We can hence subtract $ \frac{\chi_0}{1+\chi_0}\pare{\big. \partial_t -\sigma \Delta}\nabla\Delta^{-1} F $ from both sides of \eqref{eq:intro:equazione_QM} and, defining the new unknown $ r=\mathcal{Q} M - \frac{\chi_0}{1+\chi_0}\nabla\Delta^{-1} F $, we can deduce the evolution equation for $ r $
\begin{equation*}
\partial_t r -\sigma\Delta r + \frac{1+\chi_0}{\tau} \ r = \underbrace{ - \frac{\chi_0}{1+\chi_0}\pare{\big. \partial_t -\sigma \Delta}\nabla\Delta^{-1} F}_{\text{Outer force } f} + \text{ Nonlinear terms },
\end{equation*}
which is now damped and diffused, and we can close our argument. In detail, the new evolutionary system so obtained is of the form (here $ m=\mathcal{P} M $)
\begin{equation}\label{eq:intro:S2}
\begin{aligned}
& \partial_t u -\nu \Delta u && &&= \text{ Nonlinear terms }, \\
& \partial_t m -\sigma \Delta m && +\frac{1}{\tau}\ m &&= \text{ Nonlinear terms }, \\
& \partial_t r -\sigma\Delta r && + \frac{1+\chi_0}{\tau} \ r && = \text{ Nonlinear terms }&& +f.
\end{aligned}
\end{equation}
\noindent
At this point we hence expect the unknowns $ m, r $ in \eqref{eq:intro:S2} to be exponentially damped to zero at a rate $ \mathcal{O} \pare{ e^{-t/\tau}} $. There are though two immediate obstructions to such a deduction:
\begin{itemize}
\item The external force $ f $ in the evolution equation is an $ \mathcal{O}\pare{1} $ function,
\item There are terms on the r.h.s. of \eqref{eq:intro:S2} which are $ \mathcal{O}\pare{1} $ functions for $ m, r\to 0 $.
\end{itemize}
Whence, although the tendency of the evolution of $ m $ and $ r $ is to be quickly damped to zero, there are external forces in the system \eqref{eq:intro:S2} which are genuinely bigger than $ \tau $ and which induce a higher-order growth on the unknowns $ m $ and $ r $. It is in this context that slightly more involved parabolic estimates are required (see Lemma \ref{lem:linear_damping_estimate} for the exact estimates used in this work) in order to see that $ {m, r} \xrightarrow{\tau\to 0} 0 $. A downside of such an approach is that we are not able to quantify the rate of convergence \label{note:quantification} of $ m, r $ to zero as $ \tau\to 0 $, due indeed to the perturbative effects induced by the $ \mathcal{O}\pare{1} $ perturbations. \\
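A scalar toy model (not taken from the proofs below, but capturing the mechanism) makes this transparent: for $ x' + \tau^{-1} x = g $, Duhamel's formula gives
\begin{equation*}
x\pare{t} = e^{-t/\tau} x_0 + \int_0^t e^{-\pare{t-s}/\tau} g\pare{s} \, ds, \qquad \av{ \int_0^t e^{-\pare{t-s}/\tau} g\pare{s} \, ds } \leqslant \tau \pare{1-e^{-t/\tau}} \norm{g}_{L^\infty},
\end{equation*}
so that $ x $ becomes $ \mathcal{O}\pare{\tau} $ after an initial layer when $ g=\mathcal{O}\pare{1} $ is a fixed external datum; when instead $ g $ depends on the solution itself, as in \eqref{eq:intro:S2}, only the convergence to zero, and no explicit rate, survives. \\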
Let us finally mention an unexpected stabilizing effect we remarked. We already mentioned and explained in reasonable detail that the components $ m $ and $ r $ of \eqref{eq:intro:S2} are subjected to a damping in time. Let us hence now suppose that we want to construct $ L^4_{T} \dot{H}^1 $ solutions of \eqref{eq:intro:S2} in a fashion very similar to what is done for the more familiar incompressible Navier-Stokes equations. It is hence clear that if $ \tau $ is sufficiently small, hence the damping coefficient is very large, for any $ t > 0 $ the functions $ m\pare{t}, r\pare{t} $ are drawn to zero rather vigorously, so that we expect them to be ``small''. This crude intuition led us to think that we might as well expect to construct global solutions for \eqref{eq:intro:S2} imposing a smallness hypothesis on $ u_0 $, the initial data of the velocity flow, and on $ \tau $: we can in fact construct global solutions substituting a smallness hypothesis on $ m_0, r_0 $ with a smallness hypothesis on $ \tau $. Such a result is attainable only if we construct solutions in the critical space $ L^4_{T} \dot{H}^1 $ and not in, say, $ L^\infty_T \dot{H}^{ \frac{1}{2}}\cap L^2_T \dot{H}^{\frac{3}{2}} $; the damping effect has no effect on the $ L^\infty_T \dot{H}^{ \frac{1}{2}} $ norm.
\subsection{Results and organization of the paper}
The main goal of the present paper is to study the properties of the solutions of system \eqref{eq:Shilomis1} \textit{when the parameter $ \tau $ is small} or converging to zero; hence the first (and main) result of the present work is an existence result which is uniform for $ \tau $ belonging to a suitable right-neighborhood of zero, whose size depends on the magnitude of the initial data. \\
From now on given a Banach space $ X $, any $ T\in\bra{0, \infty}, \ k\in\mathbb{N} $ and $ p\in\bra{1, \infty} $ we denote as $ W^{k, p}_T X $ the space $ W^{k, p}\pare{\left[ 0, T \right)\big. ; X} $. Given any Sobolev or Lebesgue space if the domain is not specified it is implicitly assumed to be $ \mathbb{R}^3 $. Given any $ s<3/2 $ we define the \textit{homogeneous} Sobolev space $ \dot{H}^s \pare{\mathbb{R}^d} $ as the closure of $ \mathcal{S}_0\pare{\mathbb{R}^d} $ with respect to the norm
\begin{equation*}
\norm{v}_{\dot{H}^s \pare{\mathbb{R}^d}} = \pare{\int_{\mathbb{R}^d}\av{\xi}^{2s}\av{\hat{u}\pare{\xi}}^2\d \xi}^{1/2},
\end{equation*}
while for any $ s\in\mathbb{R} $ the \textit{non-homogeneous} Sobolev space $ H^s\pare{\mathbb{R}^d} $ is composed of the tempered distributions $ v $ such that $ \pare{1+\sqrt{-\Delta} \ }^s \ v \in L^2\pare{\mathbb{R}^d} $. Given any $ k\in\mathbb{N} $ and $ p \in \bra{1, \infty} $ we say that $ v\in \dot{ W}^{k, p}_T X $ if $ \partial_t^k v \in L^p_T X $ and $ v\in {W}^{k, p}_T X $ if $ \pare{ 1+\partial_t^k} v \in L^p_T X $. Given a vector field $ V : \mathbb{R}^n\times \mathbb{R}_+ \to \mathbb{R}^m $ we will write $ V\in {W}^{k, p}\pare{\bra{0, T}; \dot{H}^s\pare{\mathbb{R}^n}} $ instead than writing $ V\in \pare{ {W}^{k, p}\pare{\bra{0, T}; \dot{H}^s\pare{\mathbb{R}^n}}}^m $ in order to simplify the overall notation.
The capital letter $ C $ will always denote a positive constant, independent of any parameter of the problem, whose value may implicitly change from line to line, while $ c=\min\set{\big. \nu, \sigma} $. \\
Let us moreover suppose that the external magnetic field $ F $ belongs to the space\footnote{Remark that in this case the Sobolev space $ H^2 $ is considered to be non-homogeneous. }
\begin{equation} \label{eq:regularity_F}
F\in L^4_{\textnormal{loc}}\pare{ \mathbb{R}_+; L^2}\cap W^{1, 2}_{\textnormal{loc}} \pare{\mathbb{R}_+; H^2}.
\end{equation}
We underline that the external magnetic field is \textit{not} an unknown of the problem, hence it is in no way restrictive to assume that it is smooth and integrable.
\begin{theorem}\label{thm:main_thm}
Let $ u_0\in \dot{H}^{ \frac{1}{2}} $ and $ F\in L^4_{\textnormal{loc}}\pare{ \mathbb{R}_+; L^2}\cap W^{1, 2}_{\textnormal{loc}} \pare{\mathbb{R}_+; H^2} $. There exist $ \rho, \varrho_0>0 $, with $ \rho > 2\varrho_0 $, and a $ T=T_{\varrho_0} > 0 $ defined as
\begin{equation}\label{eq:def_T}
T=T_{\varrho_0} = \sup \set{t \geqslant 0\Big. \ \left| \ \norm{F}_{L^4\pare{\bra{0, t};L^2 \pare{\mathbb{R}^3} }}<\varrho _0 \textnormal{ and } F\in W^{1, 2} \pare{ \left[0, t\right) ; H^2 }\right. },
\end{equation}
sufficiently small so that
\begin{equation*}
\norm{F}_{L^4_TL^2}\leqslant \varrho_0 \leqslant \frac{\min\set{\min\set{\big.\nu, \sigma}^{1/2}, \ \min\set{\big.\nu, \sigma}^{3/4} }}{C},
\end{equation*}
and such that, setting
\begin{align*}
m_0 = \pare{1- \Delta^{-1}\nabla\div} M_0, \hspace{5mm} r_0= \Delta^{-1}\nabla\div \ M_0 -\frac{\chi_0}{1+\chi_0} \ \nabla\Delta^{-1}F,
\end{align*}
the following statements hold.
\begin{enumerate}[{a)}]
\item \label{enum:thm1} Let $ u_0, m_0, r_0 \in \dot{H}^{ \frac{1}{2}} $ be such that
\begin{align*}
\norm{u_0}_{\dot{H}^{ \frac{1}{2}}}\leqslant \frac{\nu^{1/4}}{C} \ \rho,
&&
\norm{\pare{m_0, r_0}}_{\dot{H}^{ \frac{1}{2}}}\leqslant\frac{\sigma^{1/4}}{C}\ \rho,
\end{align*}
and
\begin{equation}\label{eq:smallness_tau1_thm}
\tau < \frac{\pare{1+\chi_0}^{7/3}}{ C\ \chi_0^{4/3}} \pare{\norm{ F}_{L^2_T\dot{H}^{2}} + \norm{F}_{\dot{W}^{1, 2}_T L^2}}^{-4/3} \ \varrho_0^{4/3}.
\end{equation}
Then there exists a unique solution $ \pare{u, M, H} $ of \eqref{eq:Shilomis1} with initial data $ \pare{u_0, M_0} $ in the space $ \mathcal{C}_T \dot{H}^{ \frac{1}{2}} \cap L^4_{T} \dot{H}^1 $.
\item \label{enum:thm2} Let $ {U_0} = \pare{u_0, m_0, r_0} \in \dot{H}^{ \frac{1}{2}} $ be arbitrarily large and let $ \tau > 0 $ satisfy the relation \eqref{eq:smallness_tau1_thm}. Then there exists a $ T^\star = T^\star_{U_0}\in\pare{0, T} $, where $ T $ is defined in \eqref{eq:def_T}, such that the system \eqref{eq:Shilomis1} admits a unique solution with initial data $ \pare{u_0, M_0} $ in the space $ \mathcal{C}_{T^\star} \dot{H}^{ \frac{1}{2}} \cap L^4_{T^\star}\dot{H}^1 $.
\item \label{enum:thm3} Let $ u_0\in\dot{H}^{ \frac{1}{2}} $ be such that
\begin{equation}\label{eq:smallness_vel_flow_thm}
\norm{u_0}_{ \dot{H}^{ \frac{1}{2}}}\leqslant\frac{\nu^{1/4}}{C} \ \rho,
\end{equation}
and $ m_0, r_0\in\dot{H}^1 $ arbitrary. Let $ \tau $ be sufficiently small so that
\begin{equation}\label{eq:smallness_tau_thm}
\tau\leqslant \min\set{
\frac{\rho^4}{C\pare{\norm{m_0}_{\dot{H}^1}^4 + \norm{r_0}_{\dot{H}^1}^4}}
\hspace{2mm}, \hspace{2mm}
\frac{ \pare{{1+\chi_0}}^{7/3}\varrho_0^{4/3}}{C\chi_0^{4/3} \ \pare{\norm{ F}_{L^2_T\dot{H}^{2}} + \norm{F}_{\dot{W}^{1, 2}_T L^2}}^{4/3}}
}.
\end{equation}
Then there exists a unique solution $ \pare{u, M, H} $ of \eqref{eq:Shilomis1} with initial data $ \pare{u_0, M_0} $ in the space $ \mathcal{C}_T \dot{H}^{ \frac{1}{2}} \cap L^4_{T} \dot{H}^1 $.
\item \label{enum:thm4} Let $ u_0\in\dot{H}^{ \frac{1}{2}} $ be arbitrarily large and let $ \tau $ satisfy \eqref{eq:smallness_tau_thm}. Then there exists a $ T^\star\in\pare{0, T} $ such that the system \eqref{eq:Shilomis1} admits a unique solution with initial data $ \pare{u_0, M_0} $ in the space $\mathcal{C}_{T^\star} \dot{H}^{ \frac{1}{2}} \cap L^4_{T^\star}\dot{H}^1 $.
\end{enumerate}
\end{theorem}
\begin{rem}
\begin{itemize}
\item The value $ T $ defined in \eqref{eq:def_T} is well defined and strictly positive since the application
\begin{equation*}
t\mapsto \norm{F}_{L^4\pare{\bra{0, t}; L^2 \pare{\mathbb{R}^3}}},
\end{equation*}
is continuous and non-decreasing in $ \mathbb{R}_+ $ and zero when $ t=0 $.
From now on, when we write $ T $ we will always mean the value defined by \eqref{eq:def_T}. Let us remark that if $ F $ is sufficiently small in $ L^4 \pare{ \mathbb{R}_+; L^2} $ then $ T $ may well equal infinity, thus turning the results stated in points \ref{enum:thm1} and \ref{enum:thm3} into genuinely global-in-time results.
\item In the definition \eqref{eq:def_T} we must include the hypothesis $ F\in W^{1, 2}_T H^2 $ \textit{only} for the case in which $ T=\infty $. In fact, a priori it may well happen that $ \norm{F}_{L^4\pare{\mathbb{R}_+; L^2}}\leqslant \varrho_0 $ and $ F\in W^{1, 2}_{\textnormal{loc}} \pare{\mathbb{R}_+ ; H^2 } $, but $ F $ \textit{does not belong} to the space $ W^{1, 2} \pare{\mathbb{R}_+ ; H^2 } $. In such a setting we implicitly use the fact that $F\in W^{1, 2} \pare{\mathbb{R}_+ ; H^2 } $ when formulating the smallness hypotheses \eqref{eq:smallness_tau1_thm} and \eqref{eq:smallness_tau_thm}.
\item Points \ref{enum:thm1} and \ref{enum:thm2} in the statement of Theorem \ref{thm:main_thm} can be rephrased as "global" existence for small data and "local" existence for arbitrary critical initial data. Indeed, point \ref{enum:thm1} is a proper global-in-time result only if $ T=\infty $, where $ T $ is defined in \eqref{eq:def_T}: the hypothesis on $ T $, which is a smallness hypothesis on the norm of $ F $, prevents the external magnetic field $ F $ from pumping too much energy into the system. It is in fact intuitive that, if $ M, H $ have to satisfy the magnetostatic equation
\begin{equation*}
\div \pare{M+H}=F,
\end{equation*}
and $ F $ is "arbitrarily large", then the curl-free part of $ M+H $ will be arbitrarily large as well (in some appropriate, unspecified, topology). In such a scenario $ M $ and $ H $ turn out to be "large", and it is not possible to construct solutions via a fixed point theorem around a stationary state of \eqref{eq:Shilomis1}.
\item Points \ref{enum:thm3} and \ref{enum:thm4} are again a "global" and a "local" existence result. We focus now on the characteristics of point \ref{enum:thm3}. It is worth noticing that we impose a smallness hypothesis on the initial datum for the velocity field $ u_0 $ and on $ \tau $, while we let $ M_0 $ and $ H_0 $ be \textit{arbitrarily large in} $ \dot{H}^1 $; this effect is due to the term $ \frac{1}{\tau}\pare{M-\chi_0 H} $ in \eqref{eq:Shilomis1}. Roughly speaking, such a term provides a damping with damping coefficient $ \tau^{-1} $, which we exploit in order to damp the $ \dot{H}^1 $ norm of $ M_0 $ and $ H_0 $ sufficiently fast so that the overall $ L^4_{T} \dot{H}^1 $ norm turns out to be small, making it hence possible to apply a fixed point theorem. It is also for this reason that we construct solutions in the critical space $ L^4_{T} \dot{H}^1 $ instead of, say, the more natural critical energy space $ L^\infty_T \dot{H}^{\frac{1}{2}}\cap L^2_T \dot{H}^{\frac{3}{2}} $. If we start with large $ \dot{H}^{ \frac{1}{2}} $ data the damping effect does not influence the overall $ L^\infty_T \dot{H}^{\frac{1}{2}} $ norm of the solution, hence a fixed point theorem based on the smallness of the norm is not applicable in such a setting \textit{when large initial data are considered}; in fact $ M_0 $ and $ H_0 $ can even be \textit{unbounded} in $ \dot{H}^{ \frac{1}{2}} $, but they have to be finite in $ \dot{H}^1 $ in order to apply the result in Theorem \ref{thm:main_thm}, \ref{enum:thm3}.
\item Let us remark again that in point \ref{enum:thm3} of Theorem \ref{thm:main_thm} the only hypothesis assumed on $ m_0, r_0 $ is a smallness hypothesis with respect to $ \tau $ \textit{in the space $ \dot{H}^1 $}. The data $ m_0, r_0 $ can even be \textit{unbounded} in the critical space $ \dot{H}^{ \frac{1}{2}} $; we are hence able to construct a global-in-time solution for the system \eqref{eq:Shilomis1} imposing a smallness hypothesis on the initial velocity flow $ u_0 $ only.
\item Since points \ref{enum:thm3} and \ref{enum:thm4} represent an unexpected dynamical effect for the system \eqref{eq:Shilomis1}, we will prove explicitly only point \ref{enum:thm3}, the other points being simple variations of this one.
\item Even if we restrict ourselves to the more familiar setting stated in points \ref{enum:thm1} and \ref{enum:thm2}, we construct solutions in the critical space $ L^4_{T} \dot{H}^1 $ imposing initial data in $ \dot{H}^{ \frac{1}{2}} $; we hence construct potentially \textit{infinite $ L^2 $ energy solutions} for \eqref{eq:Shilomis1}. This work is, to the best of our knowledge, the first in which infinite $ L^2 $ energy solutions for ferrofluid systems are constructed. It is worth remarking that if we try to construct solutions for \eqref{eq:Shilomis1} using the natural $ L^2 $ energy of the system (see \cite{AH_Shilomis_weak, AH_Rosensweing_weak, Scrobo_FF2D, DS18}) uniformly in $ \tau $, we deduce an estimate of the form
\begin{equation*}
\mathcal{E} \pare{t} + c_{\tau} \int_0^t \mathcal{D} \pare{t'} \d t' \leqslant \frac{C}{\tau} ,
\end{equation*}
where $ \mathcal{E} $ and $ \mathcal{D} $ are the natural energy and dissipation of the system \eqref{eq:Shilomis1}. Energy methods are hence not applicable in order to construct solutions of \eqref{eq:Shilomis1} uniformly in $ \tau $, since the r.h.s. of the above estimate blows up as $ \tau\to 0 $ and does not provide uniform bounds.
\end{itemize}
\end{rem}
Theorem \ref{thm:main_thm} is hence an existence result for solutions of \eqref{eq:Shilomis1} which holds \textit{uniformly for $ \tau $ in a right neighborhood $ \pare{0, \tau_0} $ of zero}. As already explained in detail in the remark above, points \ref{enum:thm3} and \ref{enum:thm4} deal with \textit{stabilizing properties} of solutions of \eqref{eq:Shilomis1} when $ \tau $ is small. It is hence natural at this stage to ask whether solutions of \eqref{eq:Shilomis1} converge (and if they do, in which topology) to some limit flow. \\
\noindent
It turns out that the term $ \frac{1}{\tau}\pare{M-\chi_0 H} $ acts effectively as an exponential damping on the components $ M, H $; such a damping effect is, though, not immediate to prove, nor is it immediate to deduce rigorously from the structure of the equations \eqref{eq:Shilomis1}. The precise statement is the following:
\begin{theorem}\label{thm:convergence}
Let us consider the same hypotheses as in Theorem \ref{thm:main_thm}, \ref{enum:thm3}, and let us suppose moreover that $ m_0, r_0\in\dot{H}^{ \frac{1}{2}} $. Then, for any (small) $ \varepsilon\in\pare{0, T} $,
\begin{equation}\label{eq:convergence_MH}
\begin{aligned}
M \xrightarrow{\tau\to 0} \frac{\chi_0}{1+\chi_0} \nabla\Delta^{-1} F , && \text{in } L^\infty\pare{\pare{\varepsilon, T}; \dot{H}^{ \frac{1}{2}}}, \\
H \xrightarrow{\tau\to 0} \frac{1}{1+\chi_0} \nabla\Delta^{-1} F , && \text{in } L^\infty\pare{\pare{\varepsilon, T}; \dot{H}^{ \frac{1}{2}}}.
\end{aligned}
\end{equation}
Moreover the following convergences hold true:
\begin{equation}\label{eq:convergence_space}
\begin{aligned}
u & \xrightarrow{\tau\to 0} \bar{u}, & \text{in } & L^\infty\pare{\pare{\varepsilon, T}; \dot{H}^{ \frac{1}{2}}}, \\
\nabla u & \xrightarrow{\tau\to 0} \nabla\bar{u}, & \text{in } & L^2\pare{\pare{\varepsilon, T}; \dot{H}^{ \frac{1}{2}}},
\end{aligned}
\end{equation}
where $\bar{u}$ is the solution of the following incompressible Navier-Stokes system with additional magnetic pressure
\begin{equation}\label{eq:limit_system_thm}
\left\lbrace
\begin{aligned}
& \partial_t\bar{u} + \bar{u}\cdot\nabla\bar{u} -\nu \Delta\bar{u} +\nabla\bar{p} = \frac{\chi_0}{2 \pare{1+\chi_0}^2}\ \nabla\av{\nabla\Delta^{-1} F}^2 , \\
& \div\ \bar{u}=0, \\
&\left.\bar{u}\right|_{t=0} = u_0.
\end{aligned}
\right.
\end{equation}
\end{theorem}
\begin{rem}
\begin{itemize}
\item We want to underline that the convergence mentioned in Theorem \ref{thm:convergence} takes place only in the topology \eqref{eq:convergence_space}; this is justified by the fact that, when $ \tau\to 0 $, a genuine damping effect is induced, whence we \textit{cannot} have convergence up to the initial time (i.e. in $ L^\infty\pare{\pare{0, \varepsilon}; \dot{H}^{ \frac{1}{2}}} $) to the limit function.
\item Let us denote respectively with $ M^0 $ and $ H^0 $ the r.h.s. of \eqref{eq:convergence_MH}, i.e.
\begin{align*}
M^0 & = \frac{\chi_0}{1+\chi_0} \nabla \Delta^{-1}F, &
H^0 & = \frac{1}{1+\chi_0} \nabla \Delta^{-1}F .
\end{align*}
If we let $ \tau \to 0 $ in the equation for $ M $ appearing in \eqref{eq:Shilomis1} and we denote $ \bar{u} = \lim _{\tau\to 0} u $, consistently with the notation of Theorem \ref{thm:convergence}, it looks at first glance as if such a limiting process on the equation for $ M $ induces a nonlinear constraint relating $ \bar{u} $ to the limit flows $ M^0, \ H^0 $, which are uniquely determined by the external magnetic force $ F $; whence $ \bar{u} = \bar{u} \pare{F} $, which might not satisfy \eqref{eq:limit_system_thm} in $ \pare{\varepsilon, T} $, making the limit system an overdetermined problem. This is not the case: although the convergence
\begin{equation*}
M-\chi_0 H \xrightarrow{\tau\to 0} 0 ,
\end{equation*}
holds true in a sufficiently weak sense (say $ \mathcal{D}'\pare{\mathbb{R}^3\times\pare{\varepsilon, T}} $), we are unable to quantify the rate of convergence toward zero of $ M-\chi_0 H $, as already mentioned at page \pageref{note:quantification}. Whence we do not actually know to which element the term
\begin{equation*}
\frac{1}{\tau}\pare{M-\chi_0 H},
\end{equation*}
converges. This can though be deduced easily, at least in a formal way. Let us consider a $ \phi \in \mathcal{D} \pare{\mathbb{R}^3\times\pare{\varepsilon, T}} $; taking into account the convergences \eqref{eq:convergence_MH} and \eqref{eq:convergence_space}, and supposing there exists an $ f \in \mathcal{D}' \pare{\mathbb{R}^3\times\pare{\varepsilon, T}} $ such that
\begin{equation*}
\frac{1}{\tau}\pare{M-\chi_0 H}\xrightarrow[\tau\to 0]{\mathcal{D}'} - f,
\end{equation*}
then, testing the equation \eqref{eq:Shilomis1} with $ \phi $ and letting $ \tau \to 0 $, the limit equation solved by $ M^0 $ (in $ \mathcal{D}' $) is
\begin{equation*}
\partial_t M^0 + \pare{\bar{u}\cdot\nabla} M^0 -\sigma \Delta M^0 - \frac{1}{2}\ \pare{ \textnormal{curl}\ \bar{u}}\times M^0 = f ,
\end{equation*}
whence the limit problem is consistently expressed.
\end{itemize}
\hfill$\blacklozenge$
\end{rem}
The present paper is structured as follows:
\begin{itemize}
\item Section \ref{sec:preliminaries} is devoted to introducing some preliminary results which we use throughout the paper. In particular, Section \ref{sec:linear_parabolic_estimates} consists of a series of bounds for linear parabolic equations with damping which will prove very important in the application of the fixed point theorem in Section \ref{sec:fixed_poin_application}.
\item In Section \ref{sec:ref_syst} we define a new set of unknowns for the system \eqref{eq:Shilomis1}, from which we deduce a new system (see \eqref{eq:Shilomis2} for the detailed definition) which highlights and makes explicit the damping effect induced by the singular perturbation $ \tau^{-1}\pare{M-\chi_0 H} $. Such a procedure has already been outlined in the introduction; in Section \ref{sec:ref_syst} we make this argument rigorous.
\item Section \ref{sec:existence_sol} is the core of the present article: in it we prove Theorem \ref{thm:main_thm}, which is the most technical result of the present paper. The proof of Theorem \ref{thm:main_thm} consists of a fixed point argument which has to be performed carefully and, more importantly, adapted to exploit the particular properties of the system \eqref{eq:Shilomis1} (most notably the damping effects induced by the singular perturbation $ \tau^{-1}\pare{M-\chi_0 H} $).
\item Section \ref{sec:conv_tau} is devoted to the proof of Theorem \ref{thm:convergence}. Using the result proved in Section \ref{sec:existence_sol} (i.e. Theorem \ref{thm:main_thm}, an existence result uniform in $ \tau $), we first prove that part of the system is effectively damped to zero in a critical norm away from $ t=0 $; next we use such convergence in order to prove that the velocity flow converges toward the solution of the limit system \eqref{eq:limit_system_thm}.
\end{itemize}
\section{Preliminaries}\label{sec:preliminaries}
All along the present paper we will consider nonlinear interactions of (homogeneous) Sobolev functions. It is well known that, in a more general context, the product of two distributions is, a priori, not well defined, cf. \cite{Schwartz54}. In the context of Sobolev functions we can state the following elementary criterion:
\begin{lemma}\label{lem:Sob_product_rules}
Let $ \pare{s, t}\in\mathbb{R}^2 $ and $ d\in\mathbb{N}\setminus \set{0} $ be such that $ s, t < \frac{d}{2} $ and $ s+t >0 $.
The point-wise product maps continuously $ \dot{H}^s \pare{\mathbb{R}^d}\times \dot{H}^t\pare{\mathbb{R}^d} $ into $ \dot{H}^{s+t-\frac{d}{2}}\pare{\mathbb{R}^d} $, i.e. if we consider $ u \in \dot{H}^s \pare{\mathbb{R}^d}, \ v\in \dot{H}^t\pare{\mathbb{R}^d} $, there exists a $ C>0 $, depending only on the dimension $ d $, such that
\begin{equation*}
\norm{u \ v}_{\dot{H}^{s+t-\frac{d}{2}}\pare{\mathbb{R}^d} }\leqslant C \norm{u}_{\dot{H}^s \pare{\mathbb{R}^d}}\norm{v}_{\dot{H}^t\pare{\mathbb{R}^d}}.
\end{equation*}
\end{lemma}
\begin{rem}
There exists a non-homogeneous counterpart of Lemma \ref{lem:Sob_product_rules}. \hfill$\blacklozenge$
\end{rem}
Lemma \ref{lem:Sob_product_rules} belongs to the mathematical folklore, and can be stated as well for periodic vector fields, cf. \cite{gallagher_schochet}. Such a result is widely used in Navier-Stokes theory and goes under the name of \textit{product rules for Sobolev spaces}. All along the paper we will use, often implicitly, the result stated in Lemma \ref{lem:Sob_product_rules}.
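For instance, two instances of Lemma \ref{lem:Sob_product_rules} in dimension $ d=3 $, well suited to the $ L^4_{T} \dot{H}^1 $ setting of this paper ($ a, b $ being generic functions), are
\begin{align*}
\norm{a\, b}_{\dot{H}^{-\frac{1}{2}}\pare{\mathbb{R}^3}} \leqslant C \norm{a}_{\dot{H}^{1}\pare{\mathbb{R}^3}}\norm{b}_{L^{2}\pare{\mathbb{R}^3}},
&&
\norm{a\, b}_{\dot{H}^{\frac{1}{2}}\pare{\mathbb{R}^3}} \leqslant C \norm{a}_{\dot{H}^{1}\pare{\mathbb{R}^3}}\norm{b}_{\dot{H}^{1}\pare{\mathbb{R}^3}},
\end{align*}
corresponding to $ \pare{s, t} = \pare{1, 0} $ and $ \pare{s, t} = \pare{1, 1} $; in particular, for $ v\in\dot{H}^1\pare{\mathbb{R}^3} $ one obtains $ \norm{v\cdot\nabla v}_{\dot{H}^{-\frac{1}{2}}}\leqslant C \norm{v}_{\dot{H}^1}^2 $, the typical estimate behind the transport terms of \eqref{eq:Shilomis2}.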
\begin{definition}
Let $ X $ be an abstract Banach space and $ T_p : X^p\to X $ a $ p $--linear map onto $ X $. We define
\begin{equation*}
\norm{T_p} = \sup _{\phi_1, \ldots , \phi_p \in B_X\pare{0, 1}} \norm{T_p\pare{\phi_1, \ldots , \phi_p}}_X.
\end{equation*}
\end{definition}
\begin{prop}\label{prop:fixed_point}
Let $ X $ be a Banach space and let $ T_p : X^p \to X, \ p=1,2,3 $, be a $ p $-linear map. Suppose there exists an $ \eta\in\pare{0, \frac{1}{4}} $ such that
\begin{equation}\label{eq:hyp_T1}
\norm{T_1}\leqslant \eta,
\end{equation}
and a positive real number $ r $ such that
\begin{equation} \label{eq:hyp_r}
0< r < \min \set{\frac{1}{8 \norm{T_2}},\ \frac{1}{4\sqrt{\norm{T_3}}}}.
\end{equation}
For any $ y\in B_X\pare{0, \frac{r}{4}} $ there exists a unique $ x\in B_X\pare{0, r} $ such that
\begin{equation*}
x= y + T_1\pare{x} + T_2\pare{x, x} + T_3\pare{x, x, x}.
\end{equation*}
\end{prop}
\begin{rem}
Let us remark that we assume a smallness hypothesis (contractivity) on the linear operator $ T_1 $. Neglecting such a hypothesis irremediably compromises the possibility of finding a fixed point via an iterative argument. \hfill$\blacklozenge$
\end{rem}
\begin{proof}
The proof of Proposition \ref{prop:fixed_point} is rather standard. Let us define inductively the sequence
\begin{equation*}
\left\lbrace
\begin{aligned}
& x_0 =0, \\
& x_{n+1} = y + T_1 \pare{x_n} + T_2\pare{x_n, x_n} + T_3\pare{x_n, x_n, x_n}.
\end{aligned}
\right.
\end{equation*}
We deduce immediately, thanks to \eqref{eq:hyp_T1} and \eqref{eq:hyp_r}, that if $ x_n \in B_X\pare{0, r} $ then
\begin{equation*}
\norm{x_{n+1}} \leqslant \norm{y} + \eta \norm{x_n} + \norm{T_2}\norm{x_n}^2 + \norm{T_3}\norm{x_n}^3 < \frac{r}{4} + \frac{r}{4} + \frac{r}{8} + \frac{r}{16} < r.
\end{equation*}
Next we prove that the sequence $ \pare{x_n}_n $ is a Cauchy sequence in the topology of $ X $. Since
\begin{multline*}
x_{n+1}-x_n = T_1\pare{x_n-x_{n-1}} + T_2 \pare{x_n, x_n-x_{n-1} } + T_2 \pare{ x_n-x_{n-1} , x_n }\\
+ T_3 \pare{x_n, x_n, x_n-x_{n-1} } + T_3 \pare{x_n, x_n-x_{n-1} , x_n } + T_3 \pare{ x_n-x_{n-1} , x_n , x_n },
\end{multline*}
we deduce, using the hypotheses \eqref{eq:hyp_T1} and \eqref{eq:hyp_r},
\begin{align*}
\norm{x_{n+1}-x_n} & \leqslant \pare{\Big. \eta + 2r \ \norm{T_2} + 3r^2\ \norm{T_3}}\norm{x_n-x_{n-1}}, \\
& < \pare{\frac{1}{4} + \frac{1}{4} + \frac{3}{16}}\norm{x_n-x_{n-1}} \ < \ \frac{3}{4}\ \norm{x_n-x_{n-1}},
\end{align*}
which holds for any $ n\geqslant 1 $ and which indeed implies that $ \pare{x_n}_n $ is a Cauchy sequence in the Banach space $ X $; it is hence convergent. In order to prove uniqueness we suppose there exist two different $ x, z \in B_X \pare{0, r} $ such that
\begin{align*}
x & = y + T_1\pare{x} + T_2\pare{x, x} + T_3\pare{x, x, x}, \\
z & = y + T_1\pare{z} + T_2\pare{z, z} + T_3\pare{z, z, z}.
\end{align*}
Subtracting the two equations above we obtain
\begin{equation*}
x-z = T_1\pare{x-z} + T_2 \pare{x, x-z } + T_2 \pare{ x-z , z }
+ T_3 \pare{x, x, x-z } + T_3 \pare{x, x-z , z } + T_3 \pare{ x-z , z , z }.
\end{equation*}
Taking norms in the above equality, using the triangle inequality and the fact that $ \norm{x}, \norm{z} < r $, we deduce
\begin{align*}
\norm{x-z} & \leqslant \pare{\Big. \eta + 2r \ \norm{T_2} + 3r^2\ \norm{T_3}}\norm{x-z}, \\
& < \frac{3}{4}\ \norm{x-z},
\end{align*}
which is satisfied if and only if $ x=z $, concluding the proof.
\end{proof}
\subsection{Estimates for linear parabolic equations}\label{sec:linear_parabolic_estimates}
In the present section we prove some more or less well-known estimates for linear parabolic equations which will be of the utmost importance in the development of the paper. \\
Let us consider two functions $ h, g $ defined on $ \mathbb{R} $, and let us consider a $ T \in \left( 0, \infty \right] $. We denote $ h\star g = 1_{\bra{0, T}} h \ast 1_{\bra{0, T}} g $ where $ \ast $ is the standard convolution. \\
In this section we will make repeated use of the \textit{Minkowski integral inequality}: let $ \pare{S_1, \mu_1} $ and $ \pare{S_2, \mu_2} $ be two $ \sigma $--finite measure spaces, let $ f : S_1\times S_2 \to \mathbb{R} $ be measurable and let $ p\in\bra{1, \infty} $; then the following inequality holds true:
\begin{equation*}
{\displaystyle \left[\int _{S_{2}}\left|\int _{S_{1}}f(x,y)\,\mu _{1}(\mathrm {d} x)\right|^{p}\mu _{2}(\mathrm {d} y)\right]^{\frac {1}{p}}\leqslant \int _{S_{1}}\left(\int _{S_{2}}|f(x,y)|^{p}\,\mu _{2}(\mathrm {d} y)\right)^{\frac {1}{p}}\mu _{1}(\mathrm {d} x).}
\end{equation*}
As an immediate application of the Minkowski integral inequality we can deduce the following result:
\begin{lemma} \label{lem:anis_Leb}
Let $1\leqslant p \leqslant p'$ and let $f: X_1\times X_2 \to \mathbb{R}$ be a function belonging to $L^p\left( X_1; L^{p'}\left( X_2\right) \right)$, where $\left( X_1; \mu_1\right),\left(X_2;\mu_2\right)$ are measure spaces; then $f\in L^{p'}\left( X_2; L^p\left( X_1 \right)\right)$ and we have the inequality
$$
\left\|f\right\|_{L^{p'}\left( X_2; L^p\left( X_1 \right)\right)}\leqslant \left\|f\right\|_{L^p\left( X_1; L^{p'}\left( X_2\right) \right)}.
$$
\end{lemma}
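To anticipate the way in which Lemma \ref{lem:anis_Leb} will be used below: combined with the Plancherel theorem (with the unitary normalization of the Fourier transform), it allows us to exchange the time and frequency integrations; for instance, with $ p = 4/3 \leqslant p' = 2 $,
\begin{equation*}
\norm{ \norm{\hat{F}\pare{\xi, \cdot}}_{L^{4/3}\pare{\bra{0, T}}} }_{L^2_\xi} \leqslant \norm{ \norm{\hat{F}\pare{\cdot, t}}_{L^{2}_\xi} }_{L^{4/3}\pare{\bra{0, T}}} = \norm{F}_{L^{4/3}_T L^2}.
\end{equation*}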
Let us now consider the linear parabolic system with damping
\begin{equation}
\label{eq:parabolic_linear}
\left\lbrace
\begin{aligned}
& \partial_t w + \gamma w - \mu \Delta w = F, \\
& \left. w\right|_{t=0} = w_0 .
\end{aligned}
\right.
\end{equation}
\noindent The estimates that we prove in this section are focused in particular on showing \textit{quantitative} smoothing effects for the solutions of \eqref{eq:parabolic_linear} in terms of the parameters $ \gamma $ and $ \mu $. \\
The following result is classical; we refer to \cite[Lemma 5.10, p. 210]{BCD}:
\begin{lemma}\label{lem:parabolic1}
Let $ \gamma \geqslant 0 $, $ F \in L^2 \pare{ \bra{0, T} ; \dot{H}^{s-1} \pare{\mathbb{R}^d}} $ and $ w_0\in \dot{H}^s \pare{\mathbb{R}^d} $, and let $ w $ be the unique solution in $ \mathcal{C} \pare{ \bra{0, T} ; \mathcal{S}' \pare{\mathbb{R}^d}} $ of the Cauchy problem \eqref{eq:parabolic_linear}. Then for each $ t\in \bra{0, T} $
\begin{align*}
\norm{w\pare{t}}_{\dot{H}^s \pare{\mathbb{R}^d}} ^2 + \mu \int_0^t \norm{\nabla w \pare{t'}}_{\dot{H}^s\pare{\mathbb{R}^d}}^2 \d t' \leqslant \norm{w_0}_{\dot{H}^s\pare{\mathbb{R}^d}}^2 + \frac{C}{\mu} \norm{F}_{L^2_T\dot{H}^{s-1}\pare{\mathbb{R}^d}}^2, \\
\norm{w}_{L^4_T \dot{H}^{s+\frac{1}{2}}\pare{\mathbb{R}^d}} \leqslant \frac{C}{\mu ^{ {1}/{4}}} \pare{ \norm{w_0}_{\dot{H}^s\pare{\mathbb{R}^d}} + \frac{1}{\mu ^{ {1}/{2}}} \norm{F}_{L^2_T\dot{H}^{s-1}\pare{\mathbb{R}^d}} } .
\end{align*}
\end{lemma}
For our purposes we will require the bulk force $ F $ appearing in \eqref{eq:parabolic_linear} to be in $ L^{ {4}/{3}}_T L^2 $, whence Lemma \ref{lem:parabolic1} will not suffice in our context.
\begin{lemma}\label{lem:parabolic2}
Let $ q\in\bra{1, 2}, \ T\in \left( 0, \infty\right] $ and let us define \begin{equation*}
s_q = 2\pare{ 1 -\frac{1}{q}}\ \in\bra{0, 1},
\end{equation*}
and let us suppose that
$ F \in L^{ q} \pare{ \bra{0, T} ; \dot{H}^{s-s_q} \pare{\mathbb{R}^d}} $ and $ w_0\in \dot{H}^s \pare{\mathbb{R}^d}\cap \dot{H}^{s+\frac{1}{2}} \pare{\mathbb{R}^d} $.
Let us denote by $ w $ the unique solution in $ \mathcal{C} \pare{ \bra{0, T} ; \mathcal{S}' \pare{\mathbb{R}^d}} $ of the Cauchy problem \eqref{eq:parabolic_linear} with $ \gamma > 0 $.
Then
\begin{equation*}
\norm{w}_{L^4_T\dot{H}^{s+\frac{1}{2}}\pare{\mathbb{R}^d}} \leqslant C \bra{ \min \set{ \frac{\norm{w_0}_{\dot{H}^{s+\frac{1}{2}}\pare{\mathbb{R}^d}}}{\gamma^{ {1}/{4}}}, \ \frac{\norm{w_0}_{\dot{H}^s\pare{\mathbb{R}^d}}}{\mu^{1/4}} } + \frac{1}{\mu^{3/4}} \ \norm{F}_{L^{q}_T\dot{H}^{s-s_q}\pare{\mathbb{R}^d}} }.
\end{equation*}
\end{lemma}
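The instance of Lemma \ref{lem:parabolic2} which will actually be invoked below is $ q=4/3 $ and $ s=\frac{1}{2} $: in this case $ s_{4/3} = 2\pare{1-\frac{3}{4}} = \frac{1}{2} $, so that $ s - s_q = 0 $, and the lemma reads
\begin{equation*}
\norm{w}_{L^4_T\dot{H}^{1}} \leqslant C \bra{ \min \set{ \frac{\norm{w_0}_{\dot{H}^{1}}}{\gamma^{1/4}}, \ \frac{\norm{w_0}_{\dot{H}^{\frac{1}{2}}}}{\mu^{1/4}} } + \frac{1}{\mu^{3/4}} \ \norm{F}_{L^{4/3}_T L^2} },
\end{equation*}
i.e. precisely a bound for bulk forces in $ L^{4/3}_T L^2 $, as announced.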
\begin{proof}
Let us perform an $ \dot{H}^s\pare{\mathbb{R}^d} $ estimate on \eqref{eq:parabolic_linear}. We deduce the energy inequality
\begin{align*}
\frac{1}{2}\frac{\d}{\d t}\norm{w\pare{t}}_{\dot{H}^s}^2 + \gamma \norm{w\pare{t}}_{\dot{H}^s}^2 + \mu \norm{ \nabla w}_{\dot{H}^s}^2
&\leqslant \av{\ps{F\pare{t}}{w\pare{t}}_{\dot{H}^s}}, \\
&\leqslant \norm{F\pare{t}}_{\dot{H}^{s-s_q}}\norm{w\pare{t}}_{\dot{H}^{s+s_q}}.
\end{align*}
\noindent Integrating the above relation in $ \bra{0, t}, \ t\in\bra{0, T} $ we deduce the inequality
\begin{equation}\label{eq:LPE1}
\frac{1}{2}\sup _{t'\in\pare{0, t}}\set{\norm{w\pare{t'}}_{\dot{H}^s}^2} + \gamma \int_0^t \norm{w\pare{t'}}_{\dot{H}^s}^2 \d t' + \mu \int_0^t \norm{ \nabla w\pare{t'}}_{\dot{H}^s}^2 \d t' \leqslant \frac{1}{2}\norm{w_0}_{\dot{H}^s}^2 + \norm{F}_{L^{q}_t \dot{H}^{s-s_q}} \norm{w}_{L^{\frac{q}{q-1}}_t\dot{H}^{s+s_q}}.
\end{equation}
\noindent A standard interpolation of Sobolev spaces implies that
\begin{equation*}
\norm{w}_{L^{\frac{q}{q-1}}_t\dot{H}^{s+s_q}} \leqslant \norm{w}_{L^\infty_t\dot{H}^s}^{\frac{2-q}{q}}\norm{\nabla w}_{L^2_t\dot{H}^s}^{\frac{2}{q}\pare{q-1}},
\end{equation*}
whence using the inequality
\begin{align*}
\norm{F}_{L^{q}_t \dot{H}^{s-s_q}} \norm{w}_{L^{\frac{q}{q-1}}_t\dot{H}^{s+s_q}} &
\leqslant \norm{F}_{L^{q}_t \dot{H}^{s-s_q}} \norm{w}_{L^\infty_t\dot{H}^s}^{\frac{2-q}{q}}\norm{\nabla w}_{L^2_t\dot{H}^s}^{\frac{2}{q}\pare{q-1}}, \\
&\leqslant \frac{1}{4} \norm{w}_{L^\infty_t\dot{H}^s}^{2} + \frac{3\mu}{4} \norm{\nabla w}_{L^2_t\dot{H}^s}^2 +\frac{C}{\mu} \norm{F}_{L^{q}_t \dot{H}^{s-s_q}}^2,
\end{align*}
which inserted in \eqref{eq:LPE1} gives
\begin{equation*}
\frac{1}{4}\sup _{t'\in\pare{0, t}}\set{\norm{w\pare{t}}_{\dot{H}^s}^2} + {\gamma }\int_0^t \norm{w\pare{t'}}_{\dot{H}^s}^2 \d t' + \frac{\mu}{4} \int_0^t \norm{ \nabla w\pare{t'}}_{\dot{H}^s}^2 \d t'\leqslant \frac{1}{2}\norm{w_0}_{\dot{H}^s}^2 + \frac{C}{\mu} \norm{F}_{L^{q}_t \dot{H}^{s-s_q}}^2.
\end{equation*}
\noindent The above equation in particular implies that
\begin{equation}
\label{eq:interpolation1}
\begin{aligned}
\norm{w}_{L^2_t \dot{H}^{s+1}} & \leqslant \frac{C}{\mu ^{1/2}} \pare{
\norm{w_0}_{\dot{H}^s} + \frac{1}{\mu ^{1/2}} \norm{F}_{L^{q}_t \dot{H}^{s-s_q}}
}, \\
\norm{w}_{L^2_t \dot{H}^{s}} & \leqslant \frac{C}{\gamma ^{1/2}} \pare{
\norm{w_0}_{\dot{H}^s} + \frac{1}{\mu ^{1/2}} \norm{F}_{L^{q}_t \dot{H}^{s-s_q}}
}.
\end{aligned}
\end{equation}
Let us now denote
\begin{equation} \label{eq:propagator_Sgammamu}
S_{\gamma, \mu} \pare{ \partial, t} g \pare{x} = e^{-t\pare{\gamma - \mu \Delta}} g \pare{x}.
\end{equation}
Indeed the solution of equation \eqref{eq:parabolic_linear} can be expressed in terms of the evolution semigroup $ S_{\gamma, \mu} $ as
\begin{equation}\label{eq:w_mild}
\hat{w}\pare{\xi, t} = S_{\gamma, \mu} \pare{ \xi, t} \hat{w}_0 \pare{\xi} + \int_0^t S_{\gamma, \mu} \pare{ \xi, t-t'} \hat{F}\pare{\xi , t'} \d t'.
\end{equation}
\noindent An application of the H\"older inequality in time to the Duhamel formula \eqref{eq:w_mild}, together with the identity $ \norm{e^{-\cdot\, a}}_{L^{q/\pare{q-1}}\pare{\mathbb{R}_+}} = C_q\, a^{-\frac{q-1}{q}} $ applied with $ a = \gamma+\mu\av{\xi}^2 $, gives, for $ t\in\bra{0, T} $, the estimate
\begin{equation*}
\sup _{t'\in\pare{0, t}}\av{\hat{w}\pare{\xi, t'}} \leqslant \av{\hat{w}_0\pare{\xi}} + \frac{C_q}{\pare{\gamma + \mu\av{\xi}^2}^{\frac{q-1}{q}}}\ \norm{\hat{F}\pare{\xi, \cdot}}_{L^{q}\pare{\bra{0, t}}},
\end{equation*}
whence an $ L^2\pare{\mathbb{R}^d, \av{\xi}^{2s}\d \xi} $ estimate on the above inequality allows us to deduce
\begin{equation}
\label{eq:LPE2}
\begin{aligned}
V\pare{t} & \overset{\text{def}}{=} \pare{\int _{\mathbb{R}^d}\av{\xi}^{2s} \pare{\sup _{t'\in\pare{0, t}}\av{\hat{w}\pare{\xi, t'}}}^2\d \xi}^{1/2}, \\
& \leqslant \norm{w_0}_{\dot{H}^s} + \left( \int _{\mathbb{R}^d} \frac{\av{\xi}^{2s}}{\pare{\gamma + \mu\av{\xi}^2}^{s_q}}\ \norm{\hat{F}\pare{\xi, \cdot}}_{L^{q}\pare{\bra{0, t}}}^2 \d \xi \right)^{1/2}
\end{aligned}
\end{equation}
\noindent Whence we remark that
\begin{align*}
\pare{
\int _{\mathbb{R}^d} \frac{\av{\xi}^{2s}}{\pare{\gamma + \mu\av{\xi}^2}^{s_q}} \norm{\hat{F}\pare{\xi, \cdot}}^2_{L^{ q}_T} \d \xi
}^{ {1}/{2}}
&\leqslant \mu ^{-1/2}
\pare{
\int _{\mathbb{R}^d} {\av{\xi}^{2\pare{s-s_q}}} \norm{\hat{F}\pare{\xi, \cdot}}^2_{L^{ q}_T} \d \xi
}^{ {1}/{2}}\\
& = \mu ^{-1/2} \norm{\hat{F}}_{L^2\pare{\mathbb{R}^d, \ \av{\xi}^{2\pare{s-s_q }} \d \xi ; \ L^{q}\pare{\bra{0, T}}}}.
\end{align*}
\noindent We use hence Lemma \ref{lem:anis_Leb} with $ p=q $, $ \mu_2 \pare{\d \xi} = \av{\xi}^{2\pare{s-s_q}} \d \xi $ and $ p'=2 $ to deduce that
\begin{equation}
\label{eq:LPE3}
\norm{\hat{F}}_{L^2\pare{\mathbb{R}^d, \ \av{\xi}^{2\pare{s-s_q}} \d \xi ; \ L^{q}\pare{\bra{0, T}}}} \leqslant \norm{F}_{L^{q}_T \dot{H}^{s-s_q}\pare{\mathbb{R}^d} },
\end{equation}
and we use again Lemma \ref{lem:anis_Leb} in order to deduce
\begin{equation}
\label{eq:LPE4}
\norm{w}_{L^\infty_t\dot{H}^s}\leqslant V\pare{t}.
\end{equation}
\noindent
Inserting the estimates \eqref{eq:LPE3} and \eqref{eq:LPE4} in \eqref{eq:LPE2} we deduce
\begin{equation}\label{eq:interpolation2}
\norm{w}_{L^\infty_t\dot{H}^s} \leqslant C_q \left( \norm{w_0}_{\dot{H}^s} + \frac{1}{\mu ^{1/2}} \norm{F}_{L^{q}_T \dot{H}^{s-s_q}\pare{\mathbb{R}^d} } \right)
\end{equation}
Interpolating equation \eqref{eq:interpolation1} and \eqref{eq:interpolation2} we deduce that, for any $ t\in\bra{0, T} $
\begin{equation}\label{eq:interpolation3}
\begin{aligned}
\norm{w}_{L^p_t \dot{H}^{s+\frac{2}{p}}} & \leqslant \frac{C_q}{\mu ^{1/p}} \pare{
\norm{w_0}_{\dot{H}^s} + \frac{1}{\mu ^{1/2}} \norm{F}_{L^{q}_T \dot{H}^{s-s_q}\pare{\mathbb{R}^d} }
}, \\
\norm{w}_{L^p_t \dot{H}^{s}} & \leqslant \frac{C_q}{\gamma ^{1/p}} \pare{
\norm{w_0}_{\dot{H}^s} + \frac{1}{\mu ^{1/2}} \norm{F}_{L^{q}_T \dot{H}^{s-s_q}\pare{\mathbb{R}^d} }
}.
\end{aligned}
\end{equation}
\noindent Setting $ p=4 $ in the first equation of \eqref{eq:interpolation3} we almost obtain the claim; what remains to be proved is the decay effect on the initial data. Using the Minkowski integral inequality and standard computations we obtain
\begin{equation}
\label{eq:bound_w0_L4Hs}
\begin{aligned}
\norm{S_{\gamma, \mu}\pare{\partial} w_0}_{L^4_T\dot{H}^{s+\frac{1}{2}}\pare{\mathbb{R}^d}} & = \pare{ \int_0^T \pare{\int_{\mathbb{R}^d} \av{\xi}^{2s+1} e^{-2t\pare{\gamma + \mu \av{\xi}^2 }} \av{\hat{w}_0\pare{\xi}}^2 \d\xi}^2 \d t }^{ {1}/{4}} , \\
& \leqslant \pare{ \int_{\mathbb{R}^d} \av{\xi}^{2s+1} \av{\hat{w}_0\pare{\xi}}^{2}\pare{ \int_0^T
e^{-4t\pare{\gamma + \mu \av{\xi}^2 }}
}^{ {1}/{2}} \d \xi }^{ {1}/{2}}, \\
& \leqslant \pare{ \int_{\mathbb{R}^d} \frac{\av{\xi}^{2s+1}}{2\sqrt{\gamma + \mu\av{\xi}^2}} \av{\hat{w}_0\pare{\xi}}^{2} \d \xi }^{ {1}/{2}}, \\
& \leqslant C \ \min \set{ \frac{\norm{w_0}_{\dot{H}^{s+\frac{1}{2}}\pare{\mathbb{R}^d}}}{\gamma^{ {1}/{4}}}, \frac{\norm{w_0}_{\dot{H}^{s}\pare{\mathbb{R}^d}}}{\mu^{1/4}} } .
\end{aligned}
\end{equation}
\end{proof}
The next lemma describes the regularity of the solutions of \eqref{eq:parabolic_linear} in the case in which the bulk force belongs to $ L^2_T \dot{H}^s $; here we focus on the decay induced by the damping term $ \gamma w $ and we do not exploit any space-smoothing effect induced by the heat propagator:
\begin{lemma}\label{lem:smoothin_bulk_force}
Let $ w_0\in \dot{H}^s $ and let $ F\in L^2_T \dot{H}^s $; then the solution $ w $ of \eqref{eq:parabolic_linear} satisfies
\begin{equation*}
\norm{w}_{L^4_T \dot{H}^s\pare{\mathbb{R}^d}}\leqslant \ C \pare{ \frac{1}{\gamma^{ {1}/{4}}} \ \norm{w_0}_{\dot{H}^s\pare{\mathbb{R}^d}} + \frac{1}{\gamma^{3/4}} \norm{F}_{L^2_T \dot{H}^s\pare{\mathbb{R}^d}} }.
\end{equation*}
\end{lemma}
\begin{proof}
The present proof is a slight modification of the proof of Lemma \ref{lem:parabolic2}. \\
In the same way as we deduced \eqref{eq:LPE2} we can argue that (here we set $ q=2 $ and $ t\in\bra{0, T} $)
\begin{align*}
\norm{w}_{L^\infty_t \dot{H}^s}
& \leqslant \norm{w_0}_{\dot{H}^s} + \left( \int _{\mathbb{R}^d} \frac{\av{\xi}^{2s}}{{\gamma + \mu\av{\xi}^2}}\ \norm{\hat{F}\pare{\xi, \cdot}}_{L^{2}\pare{\bra{0, t}}}^2 \d \xi \right)^{1/2}, \\
& \leqslant \norm{w_0}_{\dot{H}^s} +\frac{1}{\gamma^{1/2}}\norm{F}_{L^2_T \dot{H}^s}.
\end{align*}
Performing an $ \dot{H}^s $ energy estimate on \eqref{eq:parabolic_linear} we deduce an estimate similar to \eqref{eq:interpolation1}:
\begin{equation*}
\norm{w}_{L^2_t \dot{H}^{s}} \leqslant \frac{C}{\gamma ^{1/2}} \pare{
\norm{w_0}_{\dot{H}^s} + \frac{1}{\gamma ^{1/2}} \norm{F}_{L^{2}_t \dot{H}^{s}}
}.
\end{equation*}
An interpolation now concludes the estimates.
\end{proof}
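For the reader's convenience, the concluding interpolation is simply
\begin{equation*}
\norm{w}_{L^4_t \dot{H}^s} \leqslant \norm{w}_{L^\infty_t \dot{H}^s}^{1/2} \norm{w}_{L^2_t \dot{H}^s}^{1/2} \leqslant \frac{C}{\gamma^{1/4}} \pare{ \norm{w_0}_{\dot{H}^s} + \frac{1}{\gamma^{1/2}} \norm{F}_{L^2_T \dot{H}^s} },
\end{equation*}
which follows from $ \int_0^t \norm{w\pare{t'}}_{\dot{H}^s}^4 \d t' \leqslant \norm{w}_{L^\infty_t\dot{H}^s}^2 \int_0^t \norm{w\pare{t'}}_{\dot{H}^s}^2 \d t' $ and the two bounds established above.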
Lemma \ref{lem:smoothin_bulk_force} in particular asserts that, if $ F $ is sufficiently regular, the solution $ w $ of \eqref{eq:parabolic_linear} is an $ o_\gamma\pare{1} $ function in $ L^4_T\dot{H}^s $. This is not completely surprising: in fact, supposing that $ F\in L^2_T\dot{H}^{s-1} $ (let us remark that such regularity is \textit{not} the same as the one required in the statement of Lemma \ref{lem:smoothin_bulk_force}), a standard $ \dot{H}^s $ energy estimate on the equation \eqref{eq:parabolic_linear} shows that $ w $ is in fact $ \mathcal{O}\pare{\gamma^{-1/2}} $ as $ \gamma\to\infty $ in $ L^2_T\dot{H}^s $; interpolating, we hence deduce that $ w $ is $ o_\gamma\pare{1} $ in $ L^p_T\dot{H}^s $ for $ p\in\left[2, \infty\right) $ (if $ F $ is "sufficiently regular"). This is obviously not the case when $ p=\infty $: the damping provided by the term $ \gamma w $ has no effect at $ t=0 $. We want though to quantify such damping effects for strictly positive times. \\
Let us now fix $ \alpha, \gamma, \mu > 0 $, and let us define the following function on $ \mathbb{R}^d $:
\begin{equation}\label{eq:definizione_m_gamma_mu}
m_{\gamma, \mu}^\alpha\pare{x} = \pare{\frac{\av{x}^2}{\gamma + \mu \av{x}^2}}^{\frac{\alpha}{2}}.
\end{equation}
To the function $ m_{\gamma, \mu}^\alpha $ we can associate the Fourier multiplier
\begin{equation*}
m_{\gamma, \mu}^\alpha \pare{\partial} g =
\mathcal{F}^{-1} \pare{ m_{\gamma, \mu}^\alpha \pare{\xi} \hat{g} \pare{\xi} }
= \mathcal{F}^{-1} \pare{\pare{\frac{\av{\xi}^2}{\gamma + \mu \av{\xi}^2}}^{\frac{\alpha}{2}} \hat{g} \pare{\xi}}.
\end{equation*}
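Two elementary pointwise bounds on the symbol \eqref{eq:definizione_m_gamma_mu}, both immediate from the definition, drive the proof of the following lemma:
\begin{align*}
m_{\gamma, \mu}^\alpha\pare{\xi} \leqslant \mu^{-\alpha/2} \quad \text{for every } \xi\in\mathbb{R}^d,
&&
m_{\gamma, \mu}^\alpha\pare{\xi} \leqslant \frac{\av{\xi}^{\alpha}}{\gamma^{\alpha/2}} \xrightarrow{\gamma\to\infty} 0 \quad \text{pointwise}.
\end{align*}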
\begin{lemma}\label{lem:properties_m_gamma}
Let $ g\in L^2\pare{\mathbb{R}^d} $, then
\begin{equation*}
\norm{m_{\gamma, \mu}^\alpha\pare{\partial} g}_{L^2\pare{\mathbb{R}^d}} \leqslant {\frac{C}{\gamma^{\alpha/4}} } \norm{g}_{L^2\pare{\mathbb{R}^d}} +\frac{1}{{\mu^{\alpha/2}}} \ o_{\gamma}\pare{1},
\end{equation*}
where $ o_{\gamma}\pare{1} $ is a nonnegative function which tends to zero as $ \gamma $ tends to infinity.
\end{lemma}
Lemma \ref{lem:properties_m_gamma} in particular asserts that, for fixed $ \mu, \alpha >0 $ and for every $ g\in L^2\pare{\mathbb{R}^d} $, $ m_{\gamma, \mu}^\alpha\pare{\partial} g \xrightarrow{\gamma\to \infty} 0 $ in $ L^2\pare{\mathbb{R}^d} $, i.e. $ m_{\gamma, \mu}^\alpha\pare{\partial} \to 0 $ in the strong operator topology (note that the convergence cannot hold in the operator norm, since $ \sup_\xi m_{\gamma, \mu}^\alpha\pare{\xi} = \mu^{-\alpha/2} $ for every $ \gamma $).
\begin{proof}
\begin{align*}
\norm{m_{\gamma, \mu}^\alpha\pare{\partial} g}_{L^2\pare{\mathbb{R}^d}}^2 & =
\int \pare{\frac{\av{\xi}^2}{\gamma + \mu \av{\xi}^2}}^\alpha \av{\hat{g}\pare{\xi}}^2 \d\xi, \\
& = \int_{\av{\xi}\leqslant \gamma^{1/4}} \pare{\frac{\av{\xi}^2}{\gamma + \mu \av{\xi}^2}}^\alpha \av{\hat{g}\pare{\xi}}^2 \d\xi
+ \int_{\av{\xi}> \gamma^{1/4}} \pare{\frac{\av{\xi}^2}{\gamma + \mu \av{\xi}^2}}^\alpha \av{\hat{g}\pare{\xi}}^2 \d\xi = I_\gamma + I^\gamma.
\end{align*}
Since $ g\in L^2 $ and since $ {m_{\gamma, \mu}^{2\alpha}}\leqslant \mu ^{-\alpha} $ pointwise we can assert, by dominated convergence, that
\begin{equation*}
I^\gamma \leqslant \frac{1}{\mu^\alpha} \ o_\gamma\pare{1},
\end{equation*}
while since $ {m_{\gamma, \mu}^{2\alpha}} $ is strictly increasing in $ \av{\xi} $ we can assert that
\begin{equation*}
I_\gamma \leqslant \frac{1}{{\gamma}^{\alpha/2}} \ \norm{g}_{L^2\pare{\mathbb{R}^d}}^2,
\end{equation*}
concluding.
\end{proof}
\begin{definition}
Given two Banach spaces $ X, Y $ we say that $ z\in X+Y $ if there exists a $ x\in X $ and an $ y\in Y $ so that $ z=x+y $. Moreover
\begin{equation*}
\norm{z}_{X+Y}= \inf \set{\norm{x}_X+\norm{y}_Y \ \left|\ x\in X, \ y\in Y\ \wedge \ x+y = z\Big. \right. } .
\end{equation*}
\end{definition}
Our aim is to use hence Lemma \ref{lem:properties_m_gamma} in order to study the damping properties, when $ \gamma $ is large, in $ L^\infty_T\dot{H}^s \pare{\mathbb{R}^d} $ of the solutions of \eqref{eq:parabolic_linear} when $ F $ is an $ \mathcal{O}\pare{1} $ function in some suitable space.
\begin{lemma}\label{lem:linear_damping_estimate}
Let $ w_0\in\dot{H}^{ \frac{1}{2}} $ and $ F\in L^2_T \dot{H}^{-\frac{1}{2}} + L^{4/3}_TL^2 $, i.e. $ F=F_1+F_2 $ with $ F_1\in L^2_T \dot{H}^{-\frac{1}{2}} $ and $ F_2 \in L^{4/3}_T L^2 $. Let $ w $ be the unique tempered-distribution solution of \eqref{eq:parabolic_linear}; then for each $ t\in\bra{0, T} $
\begin{equation}\label{eq:linear_damping_estimate}
\norm{w \pare{t}}_{\dot{H}^{ \frac{1}{2}}} \leqslant C\pare{ e^{-\gamma t}\norm{w_0}_{\dot{H}^{ \frac{1}{2}}} +{\frac{1}{\min\set{\gamma^{1/4}, \gamma^{1/8}}} } \norm{F}_{L^2_T \dot{H}^{-\frac{1}{2}}+ L^{4/3}_TL^2} +\frac{1}{ \min\set{\mu ^{1/2}, \mu^{1/4}} } \ o_{\gamma}\pare{1} },
\end{equation}
whence
\begin{equation}\label{eq:linear_damping_limit}
\lim _{\gamma\to \infty} \norm{w}_{L^\infty\pare{\pare{\varepsilon, T}; \dot{H}^{ \frac{1}{2}}}} = 0,
\end{equation}
for any $ \varepsilon >0 $.
\end{lemma}
\begin{rem}
Indeed the limit in \eqref{eq:linear_damping_limit} holds on the time span $ \pare{\varepsilon, T} $, as is clear from the estimate \eqref{eq:linear_damping_estimate}: at $ t=0 $ there is obviously no damping effect. \hfill$\blacklozenge$
\end{rem}
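Let us also motivate the space $ L^2_T \dot{H}^{-\frac{1}{2}} + L^{4/3}_TL^2 $: it is tailored to the nonlinearities of \eqref{eq:Shilomis2}. A sketch of the bookkeeping, for a generic $ v\in L^4_{T} \dot{H}^1\pare{\mathbb{R}^3} $ and using Lemma \ref{lem:Sob_product_rules} together with the H\"older inequality in time, is
\begin{align*}
\norm{v\cdot\nabla v}_{L^2_T \dot{H}^{-\frac{1}{2}}} \leqslant C \norm{v}_{L^4_{T} \dot{H}^1}^2,
&&
\norm{v\otimes v \otimes v}_{L^{4/3}_T L^2} \leqslant C \norm{v}_{L^4_{T} \dot{H}^1}^3,
\end{align*}
so that bilinear terms naturally fall in the first summand and trilinear terms in the second.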
\begin{proof}
By superposition we can write $ w =W + w_1 + w_2 $, where
\begin{align*}
W\pare{x, t} & = S_{\gamma, \mu} \pare{ \partial, t} w_0 \pare{x}, \\
w_1\pare{x, t} & = \int_0^t S_{\gamma, \mu} \pare{ \partial, t-t'} F_1\pare{x, t'} \d t', \\
w_2\pare{x, t} & = \int_0^t S_{\gamma, \mu} \pare{ \partial, t-t'} F_2\pare{x, t'} \d t'.
\end{align*}
Indeed the following bound is immediate
\begin{equation*}
\norm{W\pare{t}}_{\dot{H}^{ \frac{1}{2}}} = \norm{S_{\gamma, \mu} \pare{ \partial, t} w_0}_{\dot{H}^{ \frac{1}{2}}} \leqslant e^{-\gamma t}\norm{w_0}_{\dot{H}^{ \frac{1}{2}}}.
\end{equation*}
For $ w_1 $ we can argue as in \eqref{eq:interpolation1} (here we set $ q=2 $) in order to deduce
\begin{align*}
\norm{w_1}_{L^\infty_t \dot{H}^{ \frac{1}{2}}} & \leqslant \left( \int _{\mathbb{R}^d} \frac{\av{\xi}}{{\gamma + \mu\av{\xi}^2}}\ \norm{\hat{F}_1\pare{\xi, \cdot}}_{L^{2}\pare{\bra{0, t}}}^2 \d \xi \right)^{1/2}, \\
& = \norm{ m_{\gamma, \mu}^1\pare{\xi}\, \av{\xi}^{-1/2} \norm{\hat{F}_1\pare{\xi, \cdot}}_{L^2_t}}_{L^2_\xi}.
\end{align*}
We apply Lemma \ref{lem:properties_m_gamma} with $ \alpha = 1 $ in order to deduce
\begin{align*}
\norm{w_1}_{L^\infty_t \dot{H}^{ \frac{1}{2}}} & \leqslant \frac{C}{\gamma^{1/4}} \norm{F_1}_{L^2_t\dot{H}^{ -\frac{1}{2}}} + \frac{1}{\mu ^{1/2}} o_\gamma\pare{1}.
\end{align*}
\noindent
For $ w_2 $ we repeat the same procedure which led us to \eqref{eq:interpolation1}, setting $ q=4/3 $; we have
\begin{align*}
\norm{w_2}_{L^\infty_t \dot{H}^{ \frac{1}{2}}} & \leqslant \left( \int _{\mathbb{R}^d} \frac{\av{\xi}}{\pare{ \gamma + \mu\av{\xi}^2}^{1/2}}\ \norm{\hat{F}_2\pare{\xi, \cdot}}_{L^{4/3}\pare{\bra{0, t}}}^2 \d \xi \right)^{1/2}, \\
& = \norm{m_{\gamma, \mu}^{1/2}\pare{\xi}\norm{\hat{F}_2}_{L^{4/3}_t}}_{L^2_\xi}.
\end{align*}
We again use Lemma \ref{lem:properties_m_gamma} with $ \alpha=1/2 $, next Lemma \ref{lem:anis_Leb} and Plancherel theorem to deduce the final bound required
\begin{align*}
\norm{w_2}_{L^\infty_t \dot{H}^{ \frac{1}{2}}} & \leqslant \frac{C}{\gamma^{1/8}} \norm{F_2}_{L^{4/3}_t L^2} + \frac{1}{\mu ^{1/4}} o_\gamma\pare{1}.
\end{align*}
\end{proof}
\section{Reformulation of the system \eqref{eq:Shilomis1}}\label{sec:ref_syst}
As already mentioned, the main goal of the present study is to analyze the dynamics of the system \eqref{eq:Shilomis1} when $ \tau $ is small or tends to zero. Intuitively one understands that, when $ \tau\to 0 $, the term
\begin{equation*}
\frac{1}{\tau}\pare{M-\chi_0 H},
\end{equation*}
is the leading-order term (in $ \tau $) in \eqref{eq:Shilomis1}, whence we expect, when $ \tau $ is sufficiently close to zero, the asymptotic development $ M-\chi_0 H = \mathcal{O}\pare{\tau} $ in some suitable topology. Understanding this asymptotic rigorously is the major difficulty in the analysis of solutions of \eqref{eq:Shilomis1}. \\
Heuristically one expects the term $ \frac{1}{\tau}\pare{M-\chi_0 H} $ to provide a damping effect on the components $ M, H $ of the solutions of \eqref{eq:Shilomis1}. The damping effect is though not immediately visible in \eqref{eq:Shilomis1}; the aim of the present section is hence to provide a reformulation of the system \eqref{eq:Shilomis1} in new, but equivalent, unknowns which make explicit the damping effect provided by the term $ \frac{1}{\tau}\pare{M-\chi_0 H} $. \\
From the magnetostatic equation, i.e. the third equation of \eqref{eq:Shilomis1}, and since $ \textnormal{curl} \ H=0 $, we immediately deduce that
\begin{equation*}
H = -\mathcal{Q} M + \mathcal{G}_F, \hspace{1cm} \mathcal{G}_F = \nabla\Delta^{-1}F,
\end{equation*}
where $ \mathcal{Q}=\Delta^{-1}\nabla\div $.
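This identity follows by solving the magnetostatic constraint explicitly: since $ \textnormal{curl} \ H = 0 $ we may write $ H = \nabla\phi $, and
\begin{equation*}
\div \pare{M + \nabla\phi} = F \ \Longrightarrow\ \Delta\phi = F - \div M \ \Longrightarrow\ H = \nabla\Delta^{-1} F - \Delta^{-1}\nabla\div \ M = \mathcal{G}_F - \mathcal{Q}M.
\end{equation*}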
\begin{rem}
Let us remark that if $ \mathcal{G}_F = \nabla\Delta^{-1} F $ and $ F $ has the regularity stated in \eqref{eq:regularity_F} then
\begin{align}\label{eq:regularity_GF}
\mathcal{G}_F \in L^4_T \dot{H}^1 \cap L^2_T\dot{H}^3, && \partial_t \mathcal{G}_F \in L^2_T\dot{H}^{ 1}.
\end{align}
The regularity stated in \eqref{eq:regularity_GF} will be considered implicitly given from now on. \hfill$\blacklozenge$
\end{rem}
Hence, denoting by $ \mathcal{P} $ the Leray projector onto divergence-free vector fields and setting
\begin{align*}
m = \mathcal{P} M, && \tilde{m} = \mathcal{Q} M,
\end{align*}
we obtain
\begin{align*}
\frac{1}{\tau}\mathcal{P} \pare{M-\chi_0 H} = \frac{1}{\tau} \ m , &&
\frac{1}{\tau}\mathcal{Q} \pare{M-\chi_0 H} = \frac{1+\chi_0}{\tau} \bra{\tilde{m}-\frac{\chi_0}{1+\chi_0} \mathcal{G}_F}.
\end{align*}
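These identities stem from a direct computation: since $ \mathcal{G}_F $ is a pure gradient we have $ \mathcal{Q}\mathcal{G}_F = \mathcal{G}_F $ and $ \mathcal{P}\mathcal{G}_F = 0 $, while $ \mathcal{P}\mathcal{Q} = 0 $ and $ \mathcal{Q}^2 = \mathcal{Q} $; hence
\begin{equation*}
M - \chi_0 H = M + \chi_0\, \mathcal{Q}M - \chi_0\, \mathcal{G}_F
\ \Longrightarrow\
\mathcal{Q}\pare{M - \chi_0 H} = \pare{1+\chi_0}\tilde{m} - \chi_0\, \mathcal{G}_F = \pare{1+\chi_0}\bra{\tilde{m} - \frac{\chi_0}{1+\chi_0}\, \mathcal{G}_F}.
\end{equation*}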
We can hence define the new unknown
\begin{equation*}
r = \tilde{m}-\frac{\chi_0}{1+\chi_0} \mathcal{G}_F,
\end{equation*}
of which we can compute the evolution equation from \eqref{eq:Shilomis1}. The advantage of working with the variables $ m , r $ instead of $ M, H $ resides in the fact that, for these variables, the damping induced by the term $ \frac{1}{\tau} \pare{M-\chi_0 H} $ is explicit. \\
We can hence compute the evolution equations for $ \pare{u, m, r} $ from the ones of $ \pare{u, M, H} $ (and vice versa) via the following reversible change of variables:
\begin{align}\label{eq:change_unknown}
\left\lbrace
\begin{aligned}
& u=u \\
& M = m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F \\
& H = -r + \frac{1}{1+\chi_0} \mathcal{G}_F
\end{aligned}
\right. ,
&&
\left\lbrace
\begin{aligned}
&u=u \\
& m = \mathcal{P} M \\
& r = \mathcal{Q} M - \frac{\chi_0}{1+\chi_0}\mathcal{G}_F
\end{aligned}
\right. .
\end{align}
Thanks to the explicit change of unknowns given in \eqref{eq:change_unknown} it is rather simple to deduce the evolution of $ \pare{u, m, r} $ from \eqref{eq:Shilomis1}, and we obtain
\begin{equation}\tag{S2}\label{eq:Shilomis2}
\left\lbrace
\begin{aligned}
& \begin{multlined}
{\partial_t u + \pare{u\cdot\nabla} u}- \nu \Delta u + \nabla p = \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F}\cdot \nabla \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}\\
+ \frac{1}{2}\textnormal{curl} \bra{\pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}},
\end{multlined} \\[5mm]
& \begin{multlined}
\partial_t m +\frac{1}{\tau} \ m - \sigma \Delta m = -\mathcal{P} \bra{u\cdot\nabla \pare{ m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F }} + \frac{1}{2}\mathcal{P} \bra{ \pare{\textnormal{curl} \ u} \times \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} } \\
- \mathcal{P} \set{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \bra{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}} },
\end{multlined}\\[5mm]
& \begin{multlined}
\partial_t r +\frac{1+\chi_0}{\tau} \ r - \sigma \Delta r = -\mathcal{Q} \bra{u\cdot\nabla \pare{ m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F }} + \frac{1}{2}\mathcal{Q} \bra{ \pare{\textnormal{curl} \ u} \times \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} } \\
- \mathcal{Q} \set{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \bra{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}} }\\
-\frac{\chi_0}{1+\chi_0}\pare{ \partial_t \mathcal{G}_F - \sigma \Delta \mathcal{G}_F \Big.},
\end{multlined}\\%[5mm]
& \div \ u=0, \\
& \left.\pare{u, m, r}\right|_{t=0} = \pare{u_0, m_0, r_0}.
\end{aligned}
\right.
\end{equation}
From now on we will work with the system in the form \eqref{eq:Shilomis2}.
\begin{rem}\label{rem:explanation_nonlinear_terms}
We would like to remark that, although the system \eqref{eq:Shilomis2} seems at first sight much more complex than the system \eqref{eq:Shilomis1}, there is in fact no relevant new technical difficulty in \eqref{eq:Shilomis2}. \\
In fact the nonlinearities appearing on the right-hand side of \eqref{eq:Shilomis2} belong to at most six classes, which we can study without problems and which we enumerate here\footnote{Here and in the rest of the paper we use the Einstein summation convention.}:
\begin{itemize}
\item They can be of the form
\begin{equation*}
B_{\textnormal{NS}}\pare{v, v} = \pare{ v_i \ q_{i, j}^{\textnormal{NS}, \ell}\pare{\partial} v_j}_{\ell=1, 2, 3},
\end{equation*}
where $ q_{i, j}^{\textnormal{NS}, \ell} $ are homogeneous Fourier multipliers of order one.
\item They can be of the form
\begin{align*}
\mathcal{L}_1 \pare{v} = \pare{v_i \ q_{i, j}^{\pare{1}, \ell}\pare{\partial} G_j}_{\ell=1, 2, 3}, &&
\mathcal{L}_2 \pare{v} = \pare{ G_i \ q_{i, j}^{\pare{2}, \ell}\pare{\partial}v_j}_{\ell=1, 2, 3},
\end{align*}
for some function $ G $ (notably, in \eqref{eq:Shilomis2}, $ G=\mathcal{G}_F $). Here again $ q_{i, j}^{\pare{k}, \ell}, \ k =1, 2 $, are homogeneous Fourier multipliers of order one.
\item Lastly they can be $ p $-linear forms of the form
\begin{align*}
\mathcal{N}_p \pare{v} = v^{\otimes p} \otimes G^{\otimes\pare{3-p}},
\end{align*}
where we recall that, given a $ w\in\mathbb{R}^3 $, we denote by $ w^{\otimes q} $ the canonical $ q $--linear form whose components are elements of the form
\begin{equation*}
\prod_{q'=1}^q w_{j_{q'}}.
\end{equation*}
In particular hence the components of $ \mathcal{N}_p\pare{v} $ are of the form
\begin{equation*}
\prod_{q'=1}^{p} v_{j_{q'}} \prod_{q''=1}^{3-p} G_{j_{q''}}.
\end{equation*}
\end{itemize}
Whence we can assert that \eqref{eq:Shilomis2} can be studied as a special case of a system of the form
\begin{equation}\label{eq:Shilomis_generic}
\partial_t v + \mathbb{M} v - \mathcal{A}_2 \pare{\partial} v = B_{\textnormal{NS}}\pare{v, v} + \mathcal{L}_1 \pare{v} + \mathcal{L}_2 \pare{v} + \sum_{p=1}^3 \mathcal{N}_p \pare{v} + f,
\end{equation}
where $ \mathbb{M} $ is a diagonal, nonnegative matrix, $ \mathcal{A}_2 \pare{\partial} $ is an elliptic, homogeneous differential operator of order two and $ f $ is a bulk force. We will often think of \eqref{eq:Shilomis2} in the form \eqref{eq:Shilomis_generic}, since there are far fewer terms to consider, and they qualitatively describe every nonlinear term appearing in \eqref{eq:Shilomis2}. \hfill$\blacklozenge$
\end{rem}
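To illustrate the classification on a few concrete terms of \eqref{eq:Shilomis2} (a sketch, with the notation of the remark above): the contribution $ \frac{1}{2}\textnormal{curl}\bra{m\times\pare{-r}} $ to the velocity equation is of type $ B_{\textnormal{NS}} $, since componentwise
\begin{equation*}
\bra{\textnormal{curl}\pare{a\times b}}_\ell = \epsilon_{\ell i j}\, \epsilon_{j k n}\, \partial_i \pare{a_k b_n} = \epsilon_{\ell i j}\, \epsilon_{j k n} \pare{ b_n\, \partial_i a_k + a_k\, \partial_i b_n },
\end{equation*}
i.e. a sum of terms $ v_i \, q_{i, j}^{\ell}\pare{\partial} v_j $ with $ q $ homogeneous of order one; the term $ m \times \bra{m\times\pare{-r}} $ in the magnetization equation is of type $ \mathcal{N}_3 $ (cubic in $ v = \pare{u, m, r} $, with no $ \mathcal{G}_F $ factor); and the $ v $-independent contribution $ \frac{\chi_0}{\pare{1+\chi_0}^2}\, \mathcal{G}_F\cdot\nabla\mathcal{G}_F $ is part of the bulk force $ f $ of \eqref{eq:Shilomis_generic}.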
\section{Existence of a unique solution in a critical functional space uniformly in $ \tau\in\pare{0, \tau_0} $ } \label{sec:existence_sol}
In the present section we prove the main result of the paper, i.e. Theorem \ref{thm:main_thm}. The detailed result we prove is the following, which implies Theorem \ref{thm:main_thm} as explained in Remark \ref{rem:prop→thm}:
\begin{prop}\label{prop:existence_unique_solution}
Let $ u_0\in\dot{H}^{ \frac{1}{2}} $ and $ \mathcal{G}_F\in L^4_{\textnormal{loc}}\pare{\mathbb{R}_+; \dot{H}^{ 1}}\cap W^{1, 2}_{\textnormal{loc}}\pare{\mathbb{R}_+; \dot{H}^{ 1} \cap \dot{H}^3} $. There exist $ \rho, \varrho_0 >0 $, with $ \rho > 2\varrho_0 $, and a $ T=T_{\varrho_0} \in\left( 0, \infty \right] $ (see \eqref{eq:def_T}) such that
\begin{equation} \label{eq:smallness_cG}
\norm{\mathcal{G}_F}_{L^4_{T} \dot{H}^1}\leqslant \varrho_0 \leqslant \frac{\min\set{c^{1/2},\big. c^{3/4}}}{C},
\end{equation}
where $ c = \min\set{\nu, \sigma} $, and such that the following statements hold:
\begin{enumerate}[{a)}]
\item Let $ u_0, m_0, r_0 \in \dot{H}^{ \frac{1}{2}} $ be such that
\begin{align*}
\norm{u_0}_{\dot{H}^{ \frac{1}{2}}}\leqslant \frac{\nu^{1/4}}{C} \ \rho,
&&
\norm{\pare{m_0, r_0}}_{\dot{H}^{ \frac{1}{2}}}\leqslant\frac{\sigma^{1/4}}{C}\ \rho,
\end{align*}
and
\begin{equation}\label{eq:smallness_tau1}
\tau < \frac{\pare{1+\chi_0}^{7/3}}{ C\ \chi_0^{4/3}} \pare{\norm{\mathcal{G}_F}_{L^2_T\dot{H}^{3}} + \norm{\mathcal{G}_F}_{\dot{W}^{1, 2}_T \dot{H}^1}}^{-4/3} \ \varrho_0^{4/3},
\end{equation}
then there exists a unique solution $ \pare{u, m, r} $ of \eqref{eq:Shilomis2} in the ball $ B\pare{0, 4\rho} $ of the space $ L^4_{T} \dot{H}^1 $, which moreover belongs to the space $ \mathcal{C}_T\dot{H}^{ \frac{1}{2}} $.
\item Let $ u_0, m_0, r_0 \in \dot{H}^{ \frac{1}{2}} $ be arbitrarily large and let $ \tau > 0 $ satisfy the relation \eqref{eq:smallness_tau1}; then there exists a $ T^\star = T^\star _{U_0}\in\pare{0, T} $ such that the system \eqref{eq:Shilomis2} admits a unique solution in the ball $ B\pare{0, 4\rho} $ of the space $ L^4_{T^\star}\dot{H}^1 $, which moreover belongs to the space $ \mathcal{C}_{T^\star}\dot{H}^{ \frac{1}{2}} $.
\item \label{en:point_we_prove_in_the_ecistence_thm} Let $ u_0\in\dot{H}^{ \frac{1}{2}} $ be such that
\begin{equation}\label{eq:smallness_vel_flow}
\norm{u_0}_{ \dot{H}^{ \frac{1}{2}}}\leqslant\frac{\nu^{1/4}}{C} \ \rho,
\end{equation}
and $ m_0, r_0\in\dot{H}^1 $ arbitrary. Let $ \tau $ be sufficiently small so that
\begin{equation}\label{eq:smallness_tau}
\tau\leqslant \min\set{
\frac{\rho^4}{C\pare{\norm{m_0}_{\dot{H}^1}^4 + \norm{r_0}_{\dot{H}^1}^4}}
\hspace{2mm}, \hspace{2mm}
\frac{ \pare{{1+\chi_0}}^{7/3}\varrho_0^{4/3}}{C\chi_0^{4/3} \ \pare{\norm{\mathcal{G}_F}_{L^2_T\dot{H}^{3}} + \norm{\mathcal{G}_F}_{\dot{W}^{1, 2}_T \dot{H}^1}}^{4/3}}
}.
\end{equation}
Then there exists a unique solution $ \pare{u, m, r} $ of \eqref{eq:Shilomis2} in the ball $ B\pare{0, 4\rho} $ of the space $ L^4_{T} \dot{H}^1 $, which moreover belongs to the space $ \mathcal{C}_{T}\dot{H}^{ \frac{1}{2}} $.
\item Let $ u_0\in\dot{H}^{ \frac{1}{2}} $ be arbitrarily large and let $ \tau $ satisfy \eqref{eq:smallness_tau}; then there exists a $ T^\star\in\pare{0, T} $ such that the system \eqref{eq:Shilomis2} admits a unique solution in the ball $ B\pare{0, 4\rho} $ of the space $ L^4_{T^\star}\dot{H}^1 $, which moreover belongs to the space $ \mathcal{C}_{T^\star}\dot{H}^{ \frac{1}{2}} $.
\end{enumerate}
\end{prop}
\begin{rem}\label{rem:prop→thm}
Let us note that if we prove Proposition \ref{prop:existence_unique_solution} then we prove as well Theorem \ref{thm:main_thm}, via the substitution
\begin{equation*}
\pare{u, m , r}\mapsto \pare{u, M , H},
\end{equation*}
defined explicitly in \eqref{eq:change_unknown}. \hfill$\blacklozenge$
\end{rem}
\begin{rem}
We will prove only point \ref{en:point_we_prove_in_the_ecistence_thm}, since the other points are variations of the same argument which will be straightforward to the reader familiar with the construction of solutions for the Navier-Stokes equations via a fixed point theorem.~\hfill$\blacklozenge$
\end{rem}
\begin{rem}
Let us point out that if we allow $ T=\infty $ in the statement of Proposition \ref{prop:existence_unique_solution} (i.e. it suffices to consider $ \mathcal{G}_F $ "small" in $ L^4_{T} \dot{H}^1 $), points a) and \ref{en:point_we_prove_in_the_ecistence_thm} provide a \textit{global} solution of \eqref{eq:Shilomis2}; in particular, point \ref{en:point_we_prove_in_the_ecistence_thm} provides a global solution imposing a smallness hypothesis on $ u_0 $ \textit{only}, in $ \dot{H}^{ \frac{1}{2}} $, and allowing $ m_0, r_0 $ to be \textit{arbitrarily large or unbounded} in $ \dot{H}^{ \frac{1}{2}} $.~\hfill$\blacklozenge$
\end{rem}
The proof of point \ref{en:point_we_prove_in_the_ecistence_thm} of Proposition \ref{prop:existence_unique_solution} is an application of the fixed point theorem stated in Proposition \ref{prop:fixed_point}; conceptually there is no great difference with the more familiar construction of a unique solution in a critical space for the incompressible Navier-Stokes equations. There are though two main difficulties which we want to point out:
\begin{itemize}
\item Indeed the nonlinear estimates for \eqref{eq:Shilomis2} are lengthier and more complicated than those for the transport bilinear form of the incompressible Navier-Stokes equations;
\item Secondly, and more importantly in our context, we want to give a proof which provides an existence result uniform with respect to the physical parameter $ \tau \in\pare{0, \tau_0 } $, for some small $ \tau_0>0 $.
\end{itemize}
The proof is hence divided as follows:
\begin{itemize}
\item In Section \ref{sec:mild_form_Shilomis} we reformulate the system \eqref{eq:Shilomis2} in a suitable mild form. Such a passage consists mostly of computations which have to be carried out in detail due to the many nonlinearities appearing in system \eqref{eq:Shilomis2};
\item In Section \ref{sec:par_est_for_S} we provide some nonlinear parabolic estimates for the six generic classes of nonlinearities which compose all the nonlinear terms of the Shilomis system \eqref{eq:Shilomis2}, as explained in Remark \ref{rem:explanation_nonlinear_terms}. The linear parabolic estimates carried out in the introductory Section \ref{sec:linear_parabolic_estimates} will be the main tool used to prove the nonlinear estimates required;
\item In Section \ref{sec:nonlinear_bounds_Shilomis} we apply the nonlinear parabolic bounds deduced in Section \ref{sec:par_est_for_S} to the mild form of \eqref{eq:Shilomis2} deduced in Section \ref{sec:mild_form_Shilomis},
\item Finally, in Section \ref{sec:fixed_poin_application} we use the nonlinear bounds for the Shilomis system deduced in Section \ref{sec:nonlinear_bounds_Shilomis} in order to apply the fixed point theorem stated in Proposition \ref{prop:fixed_point} and deduce the existence of a unique solution of \eqref{eq:Shilomis2} in a critical space.
\end{itemize}
\begin{rem}
Since the proof of Proposition \ref{prop:existence_unique_solution} relies on a fixed point theorem, such a result generally requires a smallness hypothesis on which a perturbative argument can be built. \\
The smallness hypotheses appearing in Proposition \ref{prop:existence_unique_solution} are rather unusual, hence we would like to comment on them:
\begin{enumerate}[$ \triangleright $]
\item The smallness hypothesis on the initial velocity flow \eqref{eq:smallness_vel_flow} is rather standard in the theory of Navier-Stokes equations.
\item The smallness hypothesis \eqref{eq:smallness_cG} can look peculiar at first glance, but it is inevitable. It says in fact that the external magnetic field cannot pump too much $ L^4_{T} \dot{H}^1 $ energy into the system. This is reasonable since in the equation \eqref{eq:Shilomis2} there are terms of the form $ \mathcal{G}_F \cdot\nabla\mathcal{G}_F $; if such a term is arbitrarily large it breaks down any smallness condition on which the perturbative argument for Navier-Stokes equations is based. Relaxing \eqref{eq:smallness_cG} is hence impossible in our context.
\item As a matter of fact, in point \ref{en:point_we_prove_in_the_ecistence_thm} we consider initial data $ m_0, r_0 $ arbitrarily large in $ \dot{H}^{ \frac{1}{2}} $ and $ \dot{H}^1 $. Such a hypothesis may look unreasonable at first sight, but we point out to the reader that the smallness hypothesis \eqref{eq:smallness_tau} compensates for such a lack of smallness of the initial data. In a nutshell it says that if the damping coefficient is sufficiently large, then the $ \dot{H}^1 $ norm of $ \pare{m, r} $ is damped with sufficient vigor so that $ \pare{m, r} $ turns out to be "small" in the space $ L^4_{T} \dot{H}^1 $, hence without violating the smallness principle on which any perturbative method is based.
\end{enumerate}
\end{rem}
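Before entering the details, let us illustrate heuristically the mechanism behind the smallness hypothesis \eqref{eq:smallness_tau}; the following is only a sketch, under the assumption (consistent with the propagator defined in \eqref{eq:propagator_Sgammamu}) that $ S_{\gamma, \mu}\pare{\partial, t} = e^{-\gamma t} e^{\mu t \Delta} $. Since $ e^{\sigma t \Delta} $ is a contraction on $ \dot{H}^1 $, the free evolution of $ m_0 $ satisfies
\begin{equation*}
\norm{S_{\frac{1}{\tau}, \sigma}\pare{\partial, t} m_0}_{L^4_{T} \dot{H}^1}^4 \leqslant \int_0^\infty e^{-\frac{4 t}{\tau}} \norm{m_0}_{\dot{H}^1}^4 \d t = \frac{\tau}{4} \norm{m_0}_{\dot{H}^1}^4,
\end{equation*}
i.e. it is of size $ \tau^{1/4} \norm{m_0}_{\dot{H}^1} $ in $ L^4_{T} \dot{H}^1 $: an arbitrarily large datum is compensated by a sufficiently small $ \tau $, in accordance with Proposition \ref{prop:smallness_initial_data} below.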
\subsection{Reformulation of \eqref{eq:Shilomis2} in an appropriate mild form }\label{sec:mild_form_Shilomis}
Let us rewrite the system \eqref{eq:Shilomis2} in the mild form
\begin{equation}
\label{eq:Shilomis_mild}
\left\lbrace
\begin{aligned}
& u \pare{x, t} = S_{0, \nu}\pare{\partial, t} u_0\pare{x} + \int_0^t S_{0, \nu}\pare{\partial, t -t'} \mathcal{N}_u\pare{x, t'} \d t', \\
& m \pare{x, t} = S_{\frac{1}{\tau}, \sigma}\pare{\partial, t} m_0\pare{x} + \int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \mathcal{N}_m\pare{x, t'} \d t', \\
& r \pare{x, t} = S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t} r_0\pare{x} + \int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \mathcal{N}_r\pare{x, t'} \d t'
+ \int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} f\pare{x, t'} \d t',
\end{aligned}
\right.
\end{equation}
where
\begin{equation}
\label{eq:nonlinearities_Shilomis_mild}
\begin{aligned}
& \begin{multlined}
\mathcal{N}_u = - \mathcal{P} \pare{u\cdot\nabla u } + \mathcal{P}\bra{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F}\cdot \nabla \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}}\\
+ \frac{1}{2}\mathcal{P} \ \textnormal{curl} \bra{\pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}},
\end{multlined}\\[5mm]
& \begin{multlined}
\mathcal{N}_m = -\mathcal{P} \bra{u\cdot\nabla \pare{ m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F }} + \frac{1}{2}\mathcal{P} \bra{ \pare{\textnormal{curl} \ u} \times \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} } \\
- \mathcal{P} \set{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \bra{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}} },
\end{multlined} \\[5mm]
& \begin{multlined}
\mathcal{N}_r = -\mathcal{Q} \bra{u\cdot\nabla \pare{ m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F }} + \frac{1}{2}\mathcal{Q} \bra{ \pare{\textnormal{curl} \ u} \times \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} } \\
\hspace{8mm} - \mathcal{Q} \set{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \bra{ \pare{m+r + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r + \frac{1}{1+\chi_0} \mathcal{G}_F}} }
,\end{multlined}
\\[5mm]
&f = -\frac{\chi_0}{1+\chi_0}\pare{ \partial_t \mathcal{G}_F - \sigma \Delta \mathcal{G}_F \Big.}.
\\
\end{aligned}
\end{equation}
We will now reformulate the integral system \eqref{eq:Shilomis_mild} in an even more generic form which will be easier to study.
Denoting $ U = \pare{u, m, r} $, the system \eqref{eq:Shilomis_mild} can alternatively be written as
\begin{equation}\label{eq:Shilomis_mild2}
U\pare{x, t} = \mathcal{S} \pare{\partial, t}U_0\pare{x} + \mathcal{T}\bra{U}\pare{x, t} + g \pare{x, t},
\end{equation}
where
\begin{equation}\label{eq:cSU0}
\mathcal{S} \pare{\partial, t}U_0\pare{x} = \pare{
\begin{array}{c}
S_{0, \nu}\pare{\partial, t} u_0\pare{x} \\
S_{\frac{1}{\tau}, \sigma}\pare{\partial, t} m_0\pare{x} \\
S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t} r_0\pare{x}
\end{array}
},
\end{equation}
while
\begin{equation}\label{eq:def_cT}
\mathcal{T}\bra{U} = \left( \sum_{p=1}^3\mathcal{T}_{p}\bra{U} \right) + \mathcal{T}_{2, \textnormal{NS}} \bra{U} + \mathcal{T}_{1, \RN{1}} \bra{U} + \mathcal{T}_{1, \RN{2}} \bra{U},
\end{equation}
where
\begin{equation}\label{eq:cT2NS}
\mathcal{T}_{2, \textnormal{NS}} \bra{U} = \pare{
\begin{array}{c}
\displaystyle \int_0^t S_{0, \nu}\pare{\partial, t -t'} \set{- \mathcal{P} \pare{\big.u\cdot\nabla u } - \mathcal{P}\bra{ \big. \pare{m+r }\cdot \nabla {r }}
- \frac{1}{2}\mathcal{P} \ \textnormal{curl} \bra{\big.\pare{m+r } \times {r }}}\pare{t'} \d t' \\[5mm]
\displaystyle \int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \set{-\mathcal{P} \bra{ \big. u\cdot\nabla \pare{ m+r }} + \frac{1}{2}\mathcal{P} \bra{ \big. \pare{\textnormal{curl} \ u} \times \pare{m+r } } }\pare{t'} \d t' \\[5mm]
\displaystyle \int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \set{ -\mathcal{Q} \bra{u\cdot\nabla \pare{ m+r }} + \frac{1}{2}\mathcal{Q} \bra{ \pare{\textnormal{curl} \ u} \times \pare{m+r }}}\pare{t'}\d t'
\end{array}
},
\end{equation}
\begin{equation}\label{eq:cT1I}
\mathcal{T}_{1, \RN{1}}\bra{U} = \pare{
\begin{array}{c}
\displaystyle \frac{1}{2 \pare{ 1+\chi_0}} \int_0^t S_{0, \nu}\pare{\partial, t -t'} \set{\Big.
\mathcal{P}\bra{ \pare{m+\pare{ 1-\chi_0}r }\cdot \nabla { \mathcal{G}_F}}
+ \mathcal{P} \bra{\pare{m+\pare{1+\chi_0}r}\div\ \mathcal{G}_F}}\pare{t'} \d t' \\[5mm]
\displaystyle-\frac{\chi_0}{1+\chi_0}\int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \set{\mathcal{P}\bra{\big. u\cdot\nabla \mathcal{G}_F}}\pare{t'} \d t' \\[5mm]
\displaystyle-\frac{\chi_0}{1+\chi_0}\int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \set{\mathcal{Q} \bra{\big. u\cdot\nabla \mathcal{G}_F}}\pare{t'} \d t'
\end{array}
},
\end{equation}
and let us remark that in the operator $ \mathcal{T}_{1, \RN{1}}\bra{U} $ the derivatives act on the function $ \mathcal{G}_F $ only, while the operator $ \mathcal{T}_{1, \RN{2}}\bra{U} $ is defined as
\begin{equation}\label{eq:cT1II}
\mathcal{T}_{1, \RN{2}}\bra{U} = \pare{
\begin{array}{c}
\displaystyle \frac{1}{2 \pare{ 1+\chi_0}} \int_0^t S_{0, \nu}\pare{\partial, t -t'} \set{\Big. -\mathcal{G}_{F} \div\pare{m+\pare{1+\chi_0}r} -\mathcal{G}_{F}\cdot \nabla\pare{m+\pare{1+\chi_0}r}}\pare{t'} \d t'\\[5mm]
\displaystyle \frac{\chi_0}{2\pare{ 1+\chi_0}}\int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \set{\Big. \mathcal{P}\bra{\textnormal{curl} u \times \mathcal{G}_F}}\pare{t'} \d t' \\[5mm]
\displaystyle\frac{\chi_0}{2\pare{ 1+\chi_0}}\int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \set{\Big. \mathcal{Q}\bra{\textnormal{curl} u \times \mathcal{G}_F}}\pare{t'} \d t'
\end{array}
}.
\end{equation}
\noindent We now define the $ p $--linear operators $ \mathcal{T}_p\bra{U} $:
\begin{equation}\label{eq:cT1}
\mathcal{T}_1\bra{U} = \pare{
\begin{array}{c}
0 \\[5mm]
\displaystyle \frac{\chi_0}{\pare{1+\chi_0}^2}\int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \set{ \mathcal{P}\bra{\Big.\mathcal{G}_F \times\pare{\big. \pare{m+\pare{1+\chi_0}r}\times \mathcal{G}_F}}}\pare{t'} \d t' \\[5mm]
\displaystyle \frac{\chi_0}{\pare{1+\chi_0}^2}\int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \set{ \mathcal{Q}\bra{\Big.\mathcal{G}_F \times\pare{\big. \pare{m+\pare{1+\chi_0}r}\times \mathcal{G}_F}}}\pare{t'} \d t'
\end{array}
},
\end{equation}
\begin{equation}\label{eq:cT2}
\mathcal{T}_2\bra{U} = \pare{
\begin{array}{c}
0 \\[5mm]
\displaystyle \frac{\chi_0}{1+\chi_0} \int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \set{ \mathcal{P}\bra{\Big.\mathcal{G}_F \times\pare{\big. r\times m}} + \mathcal{P} \bra{\Big. \pare{m+r}\times\bra{\pare{m+\pare{1+\chi_0}r}\times\mathcal{G}_F}}}\pare{t'} \d t' \\[5mm]
\displaystyle\frac{\chi_0}{1+\chi_0} \int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \set{ \mathcal{Q}\bra{\Big.\mathcal{G}_F \times\pare{\big. r\times m}} + \mathcal{Q} \bra{\Big. \pare{m+r}\times\bra{\pare{m+\pare{1+\chi_0}r}\times\mathcal{G}_F}}}\pare{t'} \d t'
\end{array}
},
\end{equation}
\begin{equation}\label{eq:cT3}
\mathcal{T}_3\bra{U} = \pare{
\begin{array}{c}
0 \\[5mm]
\displaystyle \int_0^t S_{\frac{1}{\tau}, \sigma}\pare{\partial, t -t'} \set{ \mathcal{P}\bra{\pare{m+r}\times\pare{r\times {m}}\Big. }}\pare{t'} \d t' \\[5mm]
\displaystyle \int_0^t S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t -t'} \set{ \mathcal{Q}\bra{\pare{m+r}\times\pare{r\times {m}}\Big. }}\pare{t'} \d t'
\end{array}
}.
\end{equation}
\noindent
Finally, we define the outer force $ g $ as
\begin{equation}\label{eq:def_outer_g}
g = \pare{
\begin{array}{c}
\displaystyle
\frac{\chi_0}{\pare{1+\chi_0}^2} \int_0^t S_{0, \nu}\pare{\partial, t-t'}\left( \mathcal{G}_F\cdot\nabla \mathcal{G}_{F} \right)\pare{t'} \d t' \\[5mm]
0 \\[5mm]
\displaystyle -\frac{\chi_0}{1+\chi_0}\int_0^tS_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial, t - t'}\pare{ \partial_t \mathcal{G}_F - \sigma \Delta \mathcal{G}_F \Big.}\pare{t'} \d t'
\end{array}
}=\pare{\begin{array}{c}
g_1 \\ 0 \\ g_2
\end{array} }.
\end{equation}
Despite the long and tedious computations we can already understand why we decided to rewrite system \eqref{eq:Shilomis_mild} in the abstract form \eqref{eq:Shilomis_mild2}. The integral operators defined explicitly in \eqref{eq:cT2NS}--\eqref{eq:cT3} are all of the following form: a time convolution of a nonlinearity, which falls within one of the six cases explained in Remark \ref{rem:explanation_nonlinear_terms}, with an operator of the form $ S_{\mu, \gamma} \pare{\partial} $ defined in \eqref{eq:propagator_Sgammamu}.
\subsection{Parabolic estimates for generalized Shilomis-type nonlinearities}\label{sec:par_est_for_S}
It hence suffices to check that the nonlinear integral operator defined by the right hand side of \eqref{eq:Shilomis_mild2} is continuous in $ L^4_{T} \dot{H}^1 $ in order to apply Proposition \ref{prop:fixed_point} and to deduce the existence of a fixed point for the integral equation \eqref{eq:Shilomis_mild}. To prove the continuity of each term of the nonlinearity given by \eqref{eq:def_cT} directly would be lengthy and tedious. On the other hand we can exploit the observations deduced in Remark \ref{rem:explanation_nonlinear_terms}: every term appearing in \eqref{eq:nonlinearities_Shilomis_mild} belongs to one of at most six classes of nonlinearities, which significantly simplifies the process.
\begin{prop}\label{prop:nonlinear_bounds_generic}
Let $ v, v_1, v_2, v_3, G \in L^4_{T} \dot{H}^1 $, let $ \pare{\gamma, \mu} \in \left[0, \infty\right) \times \pare{0, \infty} $ and let $ B_{\textnormal{NS}}, \mathcal{L}_j, \mathcal{N}_p, \ j=1, 2, \ p=1, 2, 3 $ be as in Remark \ref{rem:explanation_nonlinear_terms}; then, denoting by $ S_{\gamma, \mu} $ the propagator defined in \eqref{eq:propagator_Sgammamu}, the following inequalities hold true
\begin{enumerate}
\item \label{enum:nonlinear_bounds_generic1} $ \displaystyle\norm{S_{\gamma, \mu}\pare{\partial}\star_t B_{\textnormal{NS}}\pare{v_1, v_2}}_{L^4_{T} \dot{H}^1}\leqslant \frac{C}{\mu ^{3/4}} \norm{v_1}_{L^4_{T} \dot{H}^1}\norm{v_2}_{L^4_{T} \dot{H}^1} $,
\item \label{enum:nonlinear_bounds_generic2} $ \displaystyle\norm{S_{\gamma, \mu}\pare{\partial}\star_t \mathcal{L}_j \pare{v}}_{L^4_{T} \dot{H}^1}\leqslant \frac{C}{\mu ^{3/4}} \norm{G}_{L^4_{T} \dot{H}^1}\norm{v}_{L^4_{T} \dot{H}^1} $,
\item \label{enum:nonlinear_bounds_generic3} $ \norm{S_{\gamma, \mu}\pare{\partial}\star_t \mathcal{N}_p \pare{v_1, \ldots , v_p}}_{L^4_{T} \dot{H}^1}\leqslant \displaystyle \frac{C}{\mu ^{1/2}} \pare{ \prod_{i=1}^p \norm{v_i}_{L^4_{T} \dot{H}^1}} \times \norm{G}^{3-p}_{L^4_{T} \dot{H}^1} $.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item Indeed $ S_{\gamma, \mu}\pare{\partial}\star_t B_{\textnormal{NS}}\pare{v_1, v_2} $ can be thought of as the unique solution of \eqref{eq:parabolic_linear} when $ w_0 = 0 $ and $ F = B_{\textnormal{NS}}\pare{v_1, v_2} $, hence applying Lemma \ref{lem:parabolic1} we deduce that $ \norm{S_{\gamma, \mu}\pare{\partial}\star_t B_{\textnormal{NS}}\pare{v_1, v_2}}_{L^4_{T} \dot{H}^1} \leqslant \frac{C}{\mu ^{3/4}} \norm{ B_{\textnormal{NS}}\pare{v_1, v_2} }_{L^2_T\dot{H}^{-1/2}} $. Moreover, since every term in $ B_{\textnormal{NS}} $ is of the form $ v_i \ q_{i, j}^{\textnormal{NS}, \ell}\pare{\partial} v_j $, where $ q_{i, j}^{\textnormal{NS}, \ell} $ is a homogeneous Fourier multiplier of order one, applying Lemma \ref{lem:Sob_product_rules} we deduce
\begin{equation*}
\begin{aligned}
\norm{ B_{\textnormal{NS}}\pare{v_1, v_2} }_{L^2_T\dot{H}^{-1/2}} & \leqslant C \pare{ \norm{v_1 \otimes \nabla v_2}_{L^2_T\dot{H}^{-1/2}} + \norm{\nabla v_1 \otimes v_2}_{L^2_T\dot{H}^{-1/2}}}, \\
&\leqslant C\norm{v_1}_{L^4_{T} \dot{H}^1}\norm{v_2}_{L^4_{T} \dot{H}^1},
\end{aligned}
\end{equation*}
proving the first inequality.
\item Similarly to the above, $ S_{\gamma, \mu}\pare{\partial}\star_t \mathcal{L}_j \pare{v} $ is the unique solution of \eqref{eq:parabolic_linear} when $ w_0=0 $ and $ F = \mathcal{L}_j \pare{v} $, whence \linebreak$ \norm{S_{\gamma, \mu}\pare{\partial}\star_t \mathcal{L}_j \pare{v}}_{L^4_{T} \dot{H}^1} \leqslant \frac{C}{\mu ^{3/4}} \norm{ \mathcal{L}_j \pare{v} }_{L^2_T\dot{H}^{-1/2}} $. Using again Lemma \ref{lem:Sob_product_rules} we deduce
\begin{equation*}
\begin{aligned}
\norm{ \mathcal{L}_j \pare{v}}_{L^2_T\dot{H}^{-1/2}} & \leqslant C \pare{ \norm{G \otimes \nabla v}_{L^2_T\dot{H}^{-1/2}} + \norm{\nabla G \otimes v}_{L^2_T\dot{H}^{-1/2}}}, \\
&\leqslant C \norm{G }_{L^4_{T} \dot{H}^1}\norm{v}_{L^4_{T} \dot{H}^1},
\end{aligned}
\end{equation*}
concluding the proof of the second inequality.
\item Similarly to the above we can deduce the estimate
\begin{equation*}
\norm{S_{\gamma, \mu}\pare{\partial}\star_t \mathcal{N}_p \pare{v_1, \ldots , v_p}}_{L^4_{T} \dot{H}^1} \leqslant \frac{C}{\mu ^{1/2}} \norm{ \mathcal{N}_p \pare{v_1, \ldots , v_p} }_{L^{4/3}_TL^2} ,
\end{equation*}
using Lemma \ref{lem:parabolic2}. Whence, since
\begin{equation*}
\mathcal{N}_p \pare{v_1, \ldots , v_p} \sim v^{\otimes p} \otimes G^{\otimes\pare{3-p}},
\end{equation*}
using repeatedly H\"older's inequality and the continuous embedding $ \dot{H}^1\hookrightarrow L^6 $ we deduce
\begin{equation*}
\norm{ \mathcal{N}_p \pare{v_1, \ldots , v_p} }_{L^{4/3}_TL^2} \leqslant C \pare{ \prod_{i=1}^p \norm{v_i}_{L^4_{T} \dot{H}^1}} \times \norm{G}^{3-p}_{L^4_{T} \dot{H}^1}.
\end{equation*}
\end{enumerate}
\end{proof}
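To fix ideas, let us spell out the last point in the case $ p=3 $; this is a sketch which uses only the boundedness of the projectors on $ L^2 $ and the classical embedding $ \dot{H}^1\pare{\mathbb{R}^3}\hookrightarrow L^6\pare{\mathbb{R}^3} $. Since $ \mathcal{N}_3 $ is trilinear, H\"older's inequality in the space variables gives, for almost every $ t $,
\begin{equation*}
\norm{\mathcal{N}_3\pare{v_1, v_2, v_3}}_{L^2} \leqslant C \norm{v_1}_{L^6}\norm{v_2}_{L^6}\norm{v_3}_{L^6} \leqslant C \prod_{i=1}^3 \norm{v_i}_{\dot{H}^1},
\end{equation*}
and a further H\"older inequality in the time variable with exponents $ \frac{3}{4} = \frac{1}{4}+\frac{1}{4}+\frac{1}{4} $ yields the $ L^{4/3}_T L^2 $ bound used above.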
\subsection{Bounds for the system \eqref{eq:Shilomis_mild2}} \label{sec:nonlinear_bounds_Shilomis}
As mentioned above, the scope of the present section is to apply the nonlinear bounds proved in Section \ref{sec:par_est_for_S} to the Shilomis system in mild form \eqref{eq:Shilomis_mild2}. Such bounds will be provided systematically in the present section. \\
At first we need to estimate the contributions provided by the initial datum:
\begin{prop}\label{prop:smallness_initial_data}
Let $ u_0\in \dot{H}^{ \frac{1}{2}}, m_0, r_0\in \dot{H}^1 $, then
\begin{enumerate}
\item $ \displaystyle\norm{S_{0, \nu}\pare{\partial} u_0}_{L^4_{T} \dot{H}^1} \leqslant\frac{C}{\nu^{1/4}} \norm{u_0}_{\dot{H}^{ \frac{1}{2}}} $,
\item $ \displaystyle\norm{S_{\frac{1}{\tau}, \sigma}\pare{\partial} m_0}_{L^4_{T} \dot{H}^1} \leqslant C\,\tau^{1/4} \norm{m_0}_{\dot{H}^{1}} $,
\item $ \displaystyle\norm{S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial} r_0}_{L^4_{T} \dot{H}^1} \leqslant \frac{C\tau^{1/4}}{\pare{1+\chi_0}^{1/4}} \norm{r_0}_{\dot{H}^{1}} $.
\end{enumerate}
\end{prop}
\begin{proof}
We apply Lemma \ref{lem:parabolic1} to deduce the first inequality and Lemma \ref{lem:parabolic2} to deduce the second and third inequalities.
\end{proof}
Next we bound the bulk force:
\begin{prop}\label{prop:smallness_outer_force}
Let $ \mathcal{G}_F\in L^4_{T} \dot{H}^1 \cap L^2_T\dot{H}^{3} \cap \dot{W}^{1, 2}_T \dot{H}^1 $, and let $ \tau $ satisfy
\begin{equation}\label{eq:hyp_smallness_tau1}
\tau < \frac{\pare{1+\chi_0}^{7/3}}{ C\ \chi_0^{4/3}} \pare{\norm{\mathcal{G}_F}_{L^2_T\dot{H}^{3}} + \norm{\mathcal{G}_F}_{\dot{W}^{1, 2}_T \dot{H}^1}}^{-4/3} \ \varrho_0^{4/3},
\end{equation}
then, considering $ g $ defined as in \eqref{eq:def_outer_g}, the following bound holds true
\begin{equation*}
\norm{g}_{L^4_{T} \dot{H}^1}\leqslant \varrho_0.
\end{equation*}
\end{prop}
\begin{proof}
Let us define
\begin{align*}
f_1 & = \frac{\chi_0}{\pare{1+\chi_0}^2} \mathcal{G}_F\cdot\nabla \mathcal{G}_{F}, \\
f_2 & = -\frac{\chi_0}{1+\chi_0}\pare{ \partial_t \mathcal{G}_F - \sigma \Delta \mathcal{G}_F \Big.}.
\end{align*}
Let $ g_1, g_2 $ be defined as in \eqref{eq:def_outer_g}, that is
\begin{align*}
g_1 & = S_{0, \nu}\pare{\partial}\star_t f_1, \\
g_2 & = S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial}\star_t f_2.
\end{align*}
Indeed $ g_1 $ is the unique solution of the following Cauchy problem
\begin{equation*}
\left\lbrace
\begin{aligned}
& \partial_t w -\nu\Delta w = f_1, \\
& \left. w\right|_{t=0}=0,
\end{aligned}
\right.
\end{equation*}
whence applying Lemma \ref{lem:parabolic1} we deduce
\begin{equation*}
\norm{g_1}_{L^4_{T} \dot{H}^1}\leqslant \frac{C \ \chi_0}{\pare{1+\chi_0}^2\ \nu^{3/4}}\norm{\mathcal{G}_F\cdot \nabla \mathcal{G}_{F}}_{L^2_T\dot{H}^{-\frac{1}{2}}}.
\end{equation*}
Lemma \ref{lem:Sob_product_rules} and the fact that $ \norm{\mathcal{G}_F}_{L^4_{T} \dot{H}^1}\leqslant \varrho_0 $ imply that
\begin{equation}\label{eq:bound_g1}
\norm{g_1}_{L^4_{T} \dot{H}^1}\leqslant \frac{C\chi_0 \varrho_0^2}{\pare{1+\chi_0}^2\nu^{3/4}}.
\end{equation}
In order to bound $ g_2 $ we apply Lemma \ref{lem:smoothin_bulk_force} with $ w_0=0 $ and $ F=f_2 $, obtaining the bound
\begin{equation*}
\norm{g_2}_{L^4_{T} \dot{H}^1} =
\norm{S_{\frac{1+\chi_0}{\tau}, \sigma}\pare{\partial} \star_t f_2}_{L^4_{T} \dot{H}^1}\leqslant \frac{C\chi_0 \tau^{3/4}}{\pare{1+\chi_0}^{7/4}} \pare{\norm{\mathcal{G}_F}_{L^2_T\dot{H}^{3}} + \norm{\mathcal{G}_F}_{\dot{W}^{1, 2}_T \dot{H}^1}},
\end{equation*}
which with the bound \eqref{eq:hyp_smallness_tau1} implies that
\begin{equation}\label{eq:bound_g2}
\norm{g_2}_{L^4_{T} \dot{H}^1}\leqslant \frac{\varrho_0}{2}.
\end{equation}
If $\displaystyle \varrho_0 \leqslant \frac{\pare{1+\chi_0}^2\nu^{3/4}}{2 C \chi_0} $ the bounds \eqref{eq:bound_g1} and \eqref{eq:bound_g2} imply that
\begin{equation*}
\norm{g}_{L^4_{T} \dot{H}^1}\leqslant {\varrho_0}.
\end{equation*}
\end{proof}
We now prove the nonlinear bounds; in order to do so we need to make explicit the time-convolution form of the nonlinearities defined in \eqref{eq:cT2NS}--\eqref{eq:cT3}. Let us hence define
\begin{equation}\label{eq:cB2NS}
B_{ \textnormal{NS}} \pare{U, U} =
\pare{\begin{array}{c}
B_{ \textnormal{NS}, u} \pare{U, U} \\
B_{ \textnormal{NS}, m} \pare{U, U} \\
B_{ \textnormal{NS}, r} \pare{U, U}
\end{array}}
=
\pare{
\begin{array}{c}
\displaystyle - \mathcal{P} \pare{\big.u\cdot\nabla u } - \mathcal{P}\bra{ \big. \pare{m+r }\cdot \nabla {r }}
- \frac{1}{2}\mathcal{P} \ \textnormal{curl} \bra{\big.\pare{m+r } \times {r }}\\[5mm]
\displaystyle -\mathcal{P} \bra{ \big. u\cdot\nabla \pare{ m+r }} + \frac{1}{2}\mathcal{P} \bra{ \big. \pare{\textnormal{curl} \ u} \times \pare{m+r } } \\[5mm]
\displaystyle -\mathcal{Q} \bra{u\cdot\nabla \pare{ m+r }} + \frac{1}{2}\mathcal{Q} \bra{ \pare{\textnormal{curl} \ u} \times \pare{m+r }}
\end{array}
},
\end{equation}
\begin{equation}
\mathcal{L}_{1}\pare{U}=
\pare{
\begin{array}{c}
\mathcal{L}_{1, u}\pare{U} \\
\mathcal{L}_{1, m}\pare{U} \\
\mathcal{L}_{1, r}\pare{U}
\end{array}
}
= \pare{
\begin{array}{c}
\displaystyle \frac{1}{2 \pare{ 1+\chi_0}} \left( \Big.
\mathcal{P}\bra{ \pare{m+\pare{ 1-\chi_0}r }\cdot \nabla { \mathcal{G}_F}}
+ \mathcal{P} \bra{\pare{m+\pare{1+\chi_0}r}\div\ \mathcal{G}_F} \right)\\[5mm]
\displaystyle-\frac{\chi_0}{1+\chi_0}\mathcal{P}\bra{\big. u\cdot\nabla \mathcal{G}_F} \\[5mm]
\displaystyle-\frac{\chi_0}{1+\chi_0}\mathcal{Q} \bra{\big. u\cdot\nabla \mathcal{G}_F}
\end{array}
},
\end{equation}
\begin{equation}
\mathcal{L}_{2}\pare{U}=
\pare{
\begin{array}{c}
\mathcal{L}_{2, u}\pare{U} \\
\mathcal{L}_{2, m}\pare{U} \\
\mathcal{L}_{2, r}\pare{U}
\end{array}
}
= \pare{
\begin{array}{c}
\displaystyle \frac{1}{2 \pare{ 1+\chi_0}} \pare{\Big. -\mathcal{G}_{F} \div\pare{m+\pare{1+\chi_0}r} -\mathcal{G}_{F}\cdot \nabla\pare{m+\pare{1+\chi_0}r}}\\[5mm]
\displaystyle \frac{\chi_0}{2\pare{ 1+\chi_0}}\Big. \mathcal{P}\bra{\textnormal{curl} u \times \mathcal{G}_F} \\[5mm]
\displaystyle\frac{\chi_0}{2\pare{ 1+\chi_0}}\Big. \mathcal{Q}\bra{\textnormal{curl} u \times \mathcal{G}_F}
\end{array}
}.
\end{equation}
\begin{equation}
\mathcal{N}_1\pare{U}
= \pare{
\begin{array}{c}
\mathcal{N}_{1, u}\pare{U}\\
\mathcal{N}_{1, m}\pare{U}\\
\mathcal{N}_{1, r}\pare{U}
\end{array}
}
= \pare{
\begin{array}{c}
0 \\[5mm]
\displaystyle \frac{\chi_0}{\pare{1+\chi_0}^2} \set{ \mathcal{P}\bra{\Big.\mathcal{G}_F \times\pare{\big. \pare{m+\pare{1+\chi_0}r}\times \mathcal{G}_F}}} \\[5mm]
\displaystyle \frac{\chi_0}{\pare{1+\chi_0}^2} \set{ \mathcal{Q}\bra{\Big.\mathcal{G}_F \times\pare{\big. \pare{m+\pare{1+\chi_0}r}\times \mathcal{G}_F}}}
\end{array}
},
\end{equation}
\begin{equation}
\mathcal{N}_2\pare{U, U}=
\pare{
\begin{array}{c}
\mathcal{N}_{2, u}\pare{U, U}\\
\mathcal{N}_{2, m}\pare{U, U}\\
\mathcal{N}_{2, r}\pare{U, U}
\end{array}
}
= \pare{
\begin{array}{c}
0 \\[5mm]
\displaystyle \frac{\chi_0}{1+\chi_0} \mathcal{P}\bra{\Big.\mathcal{G}_F \times\pare{\big. r\times m}} + \frac{\chi_0}{1+\chi_0} \mathcal{P} \bra{\Big. \pare{m+r}\times\bra{\pare{m+\pare{1+\chi_0}r}\times\mathcal{G}_F}}\\[5mm]
\displaystyle \frac{\chi_0}{1+\chi_0} \mathcal{Q}\bra{\Big.\mathcal{G}_F \times\pare{\big. r\times m}} + \frac{\chi_0}{1+\chi_0} \mathcal{Q} \bra{\Big. \pare{m+r}\times\bra{\pare{m+\pare{1+\chi_0}r}\times\mathcal{G}_F}}
\end{array}
},
\end{equation}
\begin{equation}\label{eq:cN3}
\mathcal{N}_3\pare{U, U, U}=
\pare{
\begin{array}{c}
\mathcal{N}_{3, u}\pare{U, U, U} \\
\mathcal{N}_{3, m}\pare{U, U, U} \\
\mathcal{N}_{3, r}\pare{U, U, U}
\end{array}
}
= \pare{
\begin{array}{c}
0 \\[5mm]
\displaystyle \mathcal{P}\bra{\pare{m+r}\times\pare{r\times {m}}\Big. } \\[5mm]
\displaystyle \mathcal{Q}\bra{\pare{m+r}\times\pare{r\times {m}}\Big. }
\end{array}
}.
\end{equation}
With such notation we can rewrite the operators defined in \eqref{eq:cT2NS}--\eqref{eq:cT3} in a time-convolution form
\begin{equation}
\label{eq:nonlin_as_convolutions}
\begin{aligned}
\mathcal{T}_{2, \textnormal{NS}}\bra{U}& =
\pare{\begin{array}{c}
S_{0, \nu}\pare{\partial} \star_t B_{ \textnormal{NS}, u} \pare{U, U} \\
S_{\frac{1}{\tau}, \sigma} \pare{\partial} \star_t B_{ \textnormal{NS}, m} \pare{U, U} \\
S_{\frac{1+\chi_0}{\tau}, \sigma} \pare{\partial} \star_t B_{ \textnormal{NS}, r} \pare{U, U}
\end{array}},&
\mathcal{T}_{1, \RN{1}} \bra{U} & = \pare{
\begin{array}{c}
S_{0, \nu}\pare{\partial} \star_t\mathcal{L}_{1, u}\pare{U} \\
S_{\frac{1}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{L}_{1, m}\pare{U} \\
S_{\frac{1+\chi_0}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{L}_{1, r}\pare{U}
\end{array}
}, \\
\mathcal{T}_{1, \RN{2}} \bra{U} & = \pare{
\begin{array}{c}
S_{0, \nu}\pare{\partial} \star_t\mathcal{L}_{2, u}\pare{U} \\
S_{\frac{1}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{L}_{2, m}\pare{U} \\
S_{\frac{1+\chi_0}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{L}_{2, r}\pare{U}
\end{array}
}, &
\mathcal{T}_{1} \bra{U} & =
\pare{
\begin{array}{c}
S_{0, \nu}\pare{\partial} \star_t \mathcal{N}_{1, u}\pare{U}\\
S_{\frac{1}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{N}_{1, m}\pare{U}\\
S_{\frac{1+\chi_0}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{N}_{1, r}\pare{U}
\end{array}
},\\
\mathcal{T}_{2}\bra{U} & =
\pare{
\begin{array}{c}
S_{0, \nu}\pare{\partial} \star_t \mathcal{N}_{2, u}\pare{U, U}\\
S_{\frac{1}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{N}_{2, m}\pare{U, U}\\
S_{\frac{1+\chi_0}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{N}_{2, r}\pare{U, U}
\end{array}
}, &
\mathcal{T}_{3}\bra{U} & =
\pare{
\begin{array}{c}
S_{0, \nu}\pare{\partial} \star_t \mathcal{N}_{3, u}\pare{U, U, U} \\
S_{\frac{1}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{N}_{3, m}\pare{U, U, U} \\
S_{\frac{1+\chi_0}{\tau}, \sigma} \pare{\partial} \star_t \mathcal{N}_{3, r}\pare{U, U, U}
\end{array}
} .
\end{aligned}
\end{equation}
It is hence not a coincidence that the nonlinearities in \eqref{eq:cB2NS}--\eqref{eq:cN3} have the same notation as the nonlinearities for which we provide the bounds in Section \ref{sec:par_est_for_S}: setting in fact $ G=\mathcal{G}_F $ and $ U=\pare{u, m, r} $, we can express the nonlinearity $ \mathcal{T}\bra{U} $ of \eqref{eq:Shilomis_mild2} in the form \eqref{eq:nonlin_as_convolutions} and use the results of Section \ref{sec:par_est_for_S} in order to prove the following result:
\begin{prop}\label{prop:nonlinear_bounds_Shilomis}
Let $ c = \min\set{\big. \nu, \sigma} $, and let $ \norm{\mathcal{G}_{F}}_{L^4_{T} \dot{H}^1}\leqslant \varrho_0 $, then the following bounds hold true
\begin{enumerate}
\item $ \displaystyle \norm{\mathcal{T}_{1, j }\bra{U}}_{L^4_{T} \dot{H}^1}\leqslant \frac{C}{c^{3/4}}\ \varrho_0 \norm{U}_{L^4_{T} \dot{H}^1} $ for $ j=\RN{1}, \RN{2} $,
\item $ \displaystyle \norm{\mathcal{T}_{2, \textnormal{NS}}\bra{U}}_{L^4_{T} \dot{H}^1}\leqslant\frac{C}{c^{3/4}} \norm{U}_{L^4_{T} \dot{H}^1}^2 $,
\item $ \displaystyle \norm{\mathcal{T}_{p}\bra{U}}_{L^4_{T} \dot{H}^1}\leqslant\frac{C}{c^{1/2}}\ \varrho_0^{3-p} \norm{U}_{L^4_{T} \dot{H}^1}^p $ for $ p=1,2,3 $.
\end{enumerate}
\end{prop}
\begin{proof}
Thanks to the results stated and proved in Section \ref{sec:par_est_for_S} the proof of Proposition \ref{prop:nonlinear_bounds_Shilomis} is now immediate.
\begin{enumerate}
\item We know that $ \mathcal{T}_{1, j} $ can be written in convolution form as it is done in \eqref{eq:nonlin_as_convolutions}, whence we use the estimates proved in Proposition \ref{prop:nonlinear_bounds_generic}, \ref{enum:nonlinear_bounds_generic2} to deduce the bound
\begin{equation*}
\norm{\mathcal{T}_{1, j }\bra{U}}_{L^4_{T} \dot{H}^1}\leqslant \frac{C}{c^{3/4}} \norm{\mathcal{G}_{F}}_{L^4_{T} \dot{H}^1}\norm{U}_{L^4_{T} \dot{H}^1},
\end{equation*}
but since $ \norm{\mathcal{G}_{F}}_{L^4_{T} \dot{H}^1}\leqslant \varrho_0 $ we deduce the first bound.
\item Similarly as before we exploit the convolution formulation of $ \mathcal{T}_{2, \textnormal{NS}} $ given in \eqref{eq:nonlin_as_convolutions} and we use the bound proved in Proposition \ref{prop:nonlinear_bounds_generic}, \ref{enum:nonlinear_bounds_generic1} to deduce
\begin{equation*}
\norm{\mathcal{T}_{2, \textnormal{NS}}\bra{U}}_{L^4_{T} \dot{H}^1}\leqslant\frac{C}{c^{3/4}} \norm{U}_{L^4_{T} \dot{H}^1}^2.
\end{equation*}
\item As in the first two steps, but using the bound proved in Proposition \ref{prop:nonlinear_bounds_generic}, \ref{enum:nonlinear_bounds_generic3}, we deduce that
\begin{equation*}
\norm{\mathcal{T}_{p}\bra{U}}_{L^4_{T} \dot{H}^1}\leqslant\frac{C}{c^{1/2}}\ \norm{\mathcal{G}_F}_{L^4_{T} \dot{H}^1}^{3-p} \norm{U}_{L^4_{T} \dot{H}^1}^p, \hspace{5mm}\text{for }p=1,2,3,
\end{equation*}
but again since $ \norm{\mathcal{G}_{F}}_{L^4_{T} \dot{H}^1}\leqslant \varrho_0 $ we prove the last bound.
\end{enumerate}
\end{proof}
\subsection{The fixed point theorem } \label{sec:fixed_poin_application}
We can at this point apply Proposition \ref{prop:fixed_point} to the system \eqref{eq:Shilomis_mild2}. Let us define
\begin{equation*}
y = \mathcal{S}\pare{\partial} U_0 + g,
\end{equation*}
where $ \mathcal{S}\pare{\partial} U_0 $ is defined in \eqref{eq:cSU0} and $ g $ is defined in \eqref{eq:def_outer_g}. Next let us define
\begin{equation*}
T_1\pare{U} = \mathcal{T}_{1, \RN{1}}\bra{U} + \mathcal{T}_{1, \RN{2}}\bra{U} + \mathcal{T}_{1}\bra{U},
\end{equation*}
where $ \mathcal{T}_{1, \RN{1}}, \mathcal{T}_{1, \RN{2}} $ and $ \mathcal{T}_{1} $ are respectively defined in \eqref{eq:cT1I}, \eqref{eq:cT1II} and \eqref{eq:cT1}. Next
\begin{equation*}
T_2\pare{U, U} = \mathcal{T}_{2, \textnormal{NS}}\bra{U} + \mathcal{T}_2\bra{U},
\end{equation*}
where $ \mathcal{T}_{2, \textnormal{NS}}, \mathcal{T}_2 $ are defined in \eqref{eq:cT2NS} and \eqref{eq:cT2}. Finally we define
\begin{equation*}
T_3\pare{U, U, U} = \mathcal{T}_3\bra{U}.
\end{equation*}
In order to apply Proposition \ref{prop:fixed_point} we have to check the following three conditions
\begin{enumerate}[i]
\item The element $ y=\mathcal{S}\pare{\partial} U_0 + g $ belongs to the ball $ B_{L^4_{T} \dot{H}^1}\pare{0, \rho } $ for $ \rho $ small,
\item Each $ p $--linear operator $ T_p, \ p=1, 2, 3 $ maps continuously $ \pare{L^4_{T} \dot{H}^1}^p $ to $ L^4_{T} \dot{H}^1 $,
\item The norm of $ T_1 $ as a linear operator from $ L^4_{T} \dot{H}^1 $ to itself is strictly smaller than $ 1/4 $.
\end{enumerate}
We now verify these conditions.
\begin{enumerate}[i]
\item A standard triangle inequality tells us that
\begin{equation*}
\norm{y}_{L^4_{T} \dot{H}^1} \leqslant \norm{\mathcal{S}\pare{\partial} U_0}_{L^4_{T} \dot{H}^1} + \norm{g}_{L^4_{T} \dot{H}^1},
\end{equation*}
whence, thanks to the results proved in Proposition \ref{prop:smallness_initial_data} we can argue that if
\begin{align*}
\norm{u_0}_{\dot{H}^{ \frac{1}{2}}} \leqslant \frac{\nu^{1/4}}{6C} \ \rho, &&
\tau < \frac{1+\chi_0}{6C^4\pare{\norm{m_0}_{\dot{H}^1}^4+\norm{r_0}_{\dot{H}^1}^4}}\ \rho^4,
\end{align*}
then
\begin{equation*}
\norm{\mathcal{S}\pare{\partial} U_0}_{L^4_{T} \dot{H}^1} \leqslant \frac{\rho}{2}.
\end{equation*}
Moreover, if $ \varrho_0 < \frac{\rho}{2} $, Proposition \ref{prop:smallness_outer_force} assures us that $ \norm{g}_{L^4_{T} \dot{H}^1}<\rho/2 $, proving the first claim.
\item Proposition \ref{prop:nonlinear_bounds_Shilomis} assures us that each $ p $--linear operator $ T_p, \ p=1, 2, 3 $ maps continuously $ \pare{L^4_{T} \dot{H}^1}^p $ to $ L^4_{T} \dot{H}^1 $.
\item We use again the result in Proposition \ref{prop:nonlinear_bounds_Shilomis} to deduce that
\begin{equation*}
\norm{T_1}\leqslant \frac{2C\pare{1+\varrho_0}}{\min \set{c^{1/2},\big. c^{3/4}}} \ \varrho_0,
\end{equation*}
whence we deduce that if
\begin{equation*}
\varrho_0 <\frac{\min\set{c^{1/2},\big. c^{3/4}}}{8C},
\end{equation*}
then $ \norm{T_1}<1/4 $.
\end{enumerate}
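In concrete terms (a sketch, under the assumption that Proposition \ref{prop:fixed_point} is proved via the standard Picard iteration scheme), the fixed point is obtained as the limit in $ L^4_{T} \dot{H}^1 $ of the iterates
\begin{equation*}
U^{(0)} = y, \qquad U^{(n+1)} = y + T_1\pare{U^{(n)}} + T_2\pare{U^{(n)}, U^{(n)}} + T_3\pare{U^{(n)}, U^{(n)}, U^{(n)}},
\end{equation*}
whose convergence is guaranteed by the three conditions verified above.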
We can hence apply Proposition \ref{prop:fixed_point} to deduce the existence of a unique solution to the equation in mild form \eqref{eq:Shilomis_mild2}, which in turn implies the existence of a unique solution $ \pare{u, m, r}\in L^4_{T} \dot{H}^1 $ to \eqref{eq:Shilomis2}. \\
The continuity w.r.t. the $ \dot{H}^{ \frac{1}{2}} $ topology, i.e. that $ U\in \mathcal{C}_T\dot{H}^{ \frac{1}{2}} $, follows from standard considerations which are analogous to the incompressible Navier-Stokes case, see \cite{LR2}.
\hfill $ \Box $
\section{Convergence as $ \tau\to 0 $}\label{sec:conv_tau}
In the previous section we proved that it is possible to construct solutions of \eqref{eq:Shilomis2} in a critical functional space \textit{independently} of the parameter $ \tau $, when $ \tau $ is sufficiently small. In the present section we let $ \tau\to 0 $ and we deduce the limit system solved by $ \pare{u^\tau, m^\tau, r^\tau} $ in the limit $ \tau\to 0 $. For this section only, since we are interested in computing the asymptotics as $ \tau\to 0 $, we make explicit the dependence of the unknowns on the parameter $ \tau $. The result we prove is the following one.
\begin{prop}\label{prop:convergence}
Let $ \pare{u_0, m_0, r_0}, \mathcal{G}_F $ and $ \tau $ be as in point \ref{en:point_we_prove_in_the_ecistence_thm} of the statement of Proposition \ref{prop:existence_unique_solution}, and let us moreover assume that $ \nabla \mathcal{G}_F \in L^2_T\dot{H}^{ \frac{1}{2}}$. Then for any $ \varepsilon > 0 $
\begin{equation}\label{eq:conv_mr_to_0}
\norm{\pare{m^\tau, r^\tau}}_{L^\infty\pare{\pare{\varepsilon, T};\dot{H}^{ \frac{1}{2}}}}\xrightarrow{\tau\to 0}0.
\end{equation}
Moreover for each $ t\in\bra{0, T} $ the following energy bound holds true
\begin{equation}\label{eq:energy_Hud_est_mr}
\frac{1}{2}\norm{\pare{m^\tau\pare{t}, r^\tau \pare{t}}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{1}{\tau}\int_0^t \norm{\pare{m^\tau\pare{t'}, r^\tau \pare{t'}}}_{\dot{H}^{ \frac{1}{2}}}^2 \d t' + \sigma \int_0^t \norm{\pare{\nabla m^\tau\pare{t'}, \nabla r^\tau \pare{t'}}}_{\dot{H}^{ \frac{1}{2}}}^2 \d t'\leqslant \frac{C}{\sigma}\ \rho^4,
\end{equation}
where $ \rho $ is the radius of the ball in which the solutions constructed in Proposition \ref{prop:existence_unique_solution} live. \\
Moreover $ u^\tau\xrightarrow{\tau\to 0}\bar{u} $ in $ L^\infty\pare{\pare{\varepsilon, T};\dot{H}^{ \frac{1}{2}}}$ and $ \nabla u^\tau\xrightarrow{\tau\to 0} \nabla \bar{u} $ in $ L^2\pare{\pare{\varepsilon, T};\dot{H}^{ \frac{1}{2}}} $, where $ \bar{u} $ is the solution of the following incompressible Navier-Stokes equations
\begin{equation}\label{eq:limit_system}
\left\lbrace
\begin{aligned}
& \partial_t\bar{u} + \bar{u}\cdot\nabla\bar{u} -\nu \Delta\bar{u} +\nabla\bar{p} = \frac{\chi_0}{\pare{1+\chi_0}^2}\ \mathcal{G}_F\cdot\nabla\mathcal{G}_F, \\
& \div\ \bar{u}=0, \\
&\left.\bar{u}\right|_{t=0} = u_0.
\end{aligned}
\right.
\end{equation}
\end{prop}
\begin{rem}
\begin{itemize}
\item We want to point out that the systems \eqref{eq:limit_system_thm} and \eqref{eq:limit_system} are equivalent. Indeed, since $ \mathcal{G}_F = \nabla \Delta^{-1} F $ is a gradient, hence curl-free, the vector identity $ \pare{v\cdot\nabla} v = \frac{1}{2}\nabla\av{v}^2 - v\times \textnormal{curl}\ v $ yields
\begin{equation*}
\mathcal{G}_F \cdot \nabla \mathcal{G}_F = \frac{1}{2} \ \nabla \av{\nabla\Delta^{-1} F}^2.
\end{equation*}
\item Thanks to the result proved in Proposition \ref{prop:existence_unique_solution} it is not surprising, performing an energy estimate, to deduce that\footnote{See the energy estimate \eqref{eq:energy_Hud_est_mr} and its proof for a complete argument. }
\begin{equation*}
\norm{\pare{r^\tau, m^\tau}}_{L^2_T\dot{H}^{ \frac{1}{2}}} = \mathcal{O}\pare{\tau^{1/2}}, \text{ as } \tau\to 0.
\end{equation*}
Unfortunately such convergence is not strong enough to deduce that $ u^\tau $ converges toward $ \bar{u} $, solution of \eqref{eq:limit_system}, in the critical topology $ L^\infty\pare{\pare{\varepsilon, T};\dot{H}^{ \frac{1}{2}}}\cap L^2\pare{\pare{\varepsilon, T};\dot{H}^{\frac{3}{2}}} $ (it is though sufficient to deduce convergence in some weak sense). We must therefore prove that $ m^\tau, r^\tau $ converge to zero in a stronger topology in order to obtain convergence in critical norms; for this reason we have to prove the particular convergence stated in \eqref{eq:conv_mr_to_0}.
\end{itemize}
\hfill$\blacklozenge$
\end{rem}
\begin{proof}
We will divide the proof of Proposition \ref{prop:convergence} into several steps.
\begin{enumerate}[\bf Step 1 :]
\item Proof of \eqref{eq:conv_mr_to_0}. \\
We prove the result for $ m^\tau $ only, the procedure for $ r^\tau $ being identical.
Let us rewrite the evolution equation of $ m^\tau $, given in \eqref{eq:Shilomis2}, as
\begin{equation}\label{eq:simplified_equation_m}
\partial_t m^\tau + \frac{1}{\tau}\ m^\tau - \sigma\Delta m^\tau = F_1^\tau + F_2^\tau,
\end{equation}
where
\begin{equation*}
\begin{aligned}
F_1^\tau & = -\mathcal{P} \bra{u^\tau\cdot\nabla \pare{ m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F }} + \frac{1}{2}\mathcal{P} \bra{ \pare{\textnormal{curl} \ u^\tau} \times \pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} } \\
F_2^\tau & = - \mathcal{P} \set{ \pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \bra{ \pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r^\tau + \frac{1}{1+\chi_0} \mathcal{G}_F}} }.
\end{aligned}
\end{equation*}
Thanks to the result proved in Proposition \ref{prop:existence_unique_solution} we know that there exist a $ \tau_0 = \tau_0\pare{u_0,m_0,r_0} $ and a $ T\in \left(0, \infty\right] $ so that $ \pare{u^\tau, m^\tau, r^\tau}\in L^4_{T} \dot{H}^1 $ uniformly for $ \tau\in\pare{0, \tau_0} $. Hence, since by hypothesis $ \mathcal{G}_F\in L^4_{T} \dot{H}^1 $, we deduce that
\begin{align*}
F_1^\tau \in L^2_T \dot{H}^{-1/2}, && F_2^\tau\in L^{4/3}_T L^2,
\end{align*}
uniformly for $ \tau\in\pare{0, \tau_0} $.\\
We can hence apply the estimate \eqref{eq:linear_damping_estimate} of Lemma \ref{lem:linear_damping_estimate}; setting $ \gamma=\tau^{-1}, \mu=\sigma $ and $ w=m^\tau $ we deduce
\begin{equation*}
\norm{m^\tau\pare{t}}_{ \dot{H}^{ \frac{1}{2}} }\leqslant C \pare{
e^{-\frac{t}{\tau}} \norm{m_0}_{\dot{H}^{ \frac{1}{2}}} + \tau^{1/8}\bra{ \norm{F_1^\tau}_{L^2_T \dot{H}^{-1/2}} + \norm{F_2^\tau}_{L^{4/3}_T L^2} } + \frac{1}{\sigma^{1/4}} o_{\frac{1}{\tau}}\pare{1}
},
\end{equation*}
Since for $ t\geqslant\varepsilon $ we have $ e^{-\frac{t}{\tau}}\leqslant e^{-\frac{\varepsilon}{\tau}}\xrightarrow{\tau\to 0}0 $, this indeed proves the statement \eqref{eq:conv_mr_to_0} for $ m^\tau $. With the very same procedure we can prove the bound
\begin{equation*}
\norm{r^\tau\pare{t}}_{ \dot{H}^{ \frac{1}{2}} }\leqslant C \pare{
e^{-\frac{1+\chi_0}{\tau} t} \norm{r_0}_{\dot{H}^{ \frac{1}{2}}} + \frac{\tau^{1/8}}{\pare{1+\chi_0}^{1/8}}\bra{ \norm{H_1^\tau}_{L^2_T \dot{H}^{-1/2}} + \norm{H_2^\tau}_{L^{4/3}_T L^2} } + \frac{1}{\sigma^{1/4}} o_{\frac{1+\chi_0}{\tau}}\pare{1}
},
\end{equation*}
where
\begin{equation*}
\begin{aligned}
H_1^\tau & = -\mathcal{Q} \bra{u^\tau\cdot\nabla \pare{ m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F }} + \frac{1}{2}\mathcal{Q} \bra{ \pare{\textnormal{curl} \ u^\tau} \times \pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} } \\
H_2^\tau & = - \mathcal{Q} \set{ \pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \bra{ \pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r^\tau + \frac{1}{1+\chi_0} \mathcal{G}_F}} },
\end{aligned}
\end{equation*}
which concludes the proof of \eqref{eq:conv_mr_to_0}.
\item Proof of \eqref{eq:energy_Hud_est_mr}. \\
We prove the bound for $ m^\tau $ only, the procedure for $ r^\tau $ being identical. Let us multiply the equation \eqref{eq:simplified_equation_m} by $ \sqrt{-\Delta}\ m^\tau $, integrate in space and integrate by parts where needed; we deduce the energy inequality
\begin{equation*}
\frac{1}{2}\frac{\d}{\d t}\norm{ {m^\tau}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{1}{\tau} \norm{ {m^\tau}}_{\dot{H}^{ \frac{1}{2}}}^2 + \sigma \norm{ {\nabla m^\tau}}_{\dot{H}^{ \frac{1}{2}}}^2 \leqslant \psc{F^\tau_1}{\sqrt{-\Delta}\ m^\tau}_{L^2\times L^2} + \psc{F^\tau_2}{\sqrt{-\Delta}\ m^\tau}_{L^2\times L^2}.
\end{equation*}
But indeed
\begin{align*}
\psc{F^\tau_1}{\sqrt{-\Delta}\ m^\tau}_{L^2\times L^2} & \leqslant \frac{C}{\sigma} \norm{F^\tau_1}_{\dot{H}^{-\frac{1}{2}}}^2 + \frac{\sigma}{2}\norm{\nabla m^\tau}_{\dot{H}^{ \frac{1}{2}}}^2, \\
\psc{F^\tau_2}{\sqrt{-\Delta}\ m^\tau}_{L^2\times L^2}& \leqslant \norm{F^\tau_2}_{L^2}\norm{m^\tau}_{\dot{H}^1},
\end{align*}
whence we deduce
\begin{equation*}
\frac{1}{2}\frac{\d}{\d t}\norm{ {m^\tau}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{1}{\tau} \norm{ {m^\tau}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{\sigma}{2} \norm{ {\nabla m^\tau}}_{\dot{H}^{ \frac{1}{2}}}^2 \leqslant \frac{C}{\sigma} \norm{F^\tau_1}_{\dot{H}^{-\frac{1}{2}}}^2 + \norm{F^\tau_2}_{L^2}\norm{m^\tau}_{\dot{H}^1},
\end{equation*}
therefore integrating in time
\begin{equation*}
\frac{1}{2}\ \norm{ {m^\tau\pare{t}}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{1}{\tau} \int_0^t \norm{ {m^\tau\pare{t'}}}_{\dot{H}^{ \frac{1}{2}}}^2\d t' + \frac{\sigma}{2} \int_0^t \norm{ {\nabla m^\tau\pare{t'}}}_{\dot{H}^{ \frac{1}{2}}}^2 \d t' \leqslant \frac{C}{\sigma} \norm{F^\tau_1}_{L^2_T\dot{H}^{-\frac{1}{2}}}^2 + \norm{F^\tau_2}_{L^{4/3}_T L^2}\norm{m^\tau}_{L^4_T\dot{H}^1}.
\end{equation*}
It is hence easy to deduce using Lemma \ref{lem:Sob_product_rules} that (here we denote $ U^\tau=\pare{u^\tau, m^\tau, r^\tau} $)
\begin{equation*}
\norm{F^\tau_1}_{L^2_T\dot{H}^{-\frac{1}{2}}}\leqslant C \norm{U^\tau}_{L^4_{T} \dot{H}^1}\pare{\norm{U^\tau}_{L^4_{T} \dot{H}^1} + \norm{\mathcal{G}_F}_{L^4_{T} \dot{H}^1}},
\end{equation*}
From the construction given in Proposition \ref{prop:existence_unique_solution} we know that $ \norm{U^\tau}_{L^4_{T} \dot{H}^1}\lesssim \rho $, and moreover $ \norm{\mathcal{G}_F}_{L^4_{T} \dot{H}^1}\leqslant\varrho_0 < \rho $ by hypothesis, hence we deduce
\begin{equation*}
\norm{F^\tau_1}_{L^2_T\dot{H}^{-\frac{1}{2}}}^2\leqslant C\rho^4.
\end{equation*}
Similar computations lead us to deduce the bound $ \norm{F^\tau_2}_{L^{4/3}_T L^2}\lesssim \rho^3 $, whence we conclude the proof of the estimate \eqref{eq:energy_Hud_est_mr}.
\item Convergence toward the limit system \eqref{eq:limit_system}. \\
Indeed, under the smallness hypotheses on $ \mathcal{G}_F $ and $ u_0 $ stated in point \ref{en:point_we_prove_in_the_ecistence_thm} of Proposition \ref{prop:existence_unique_solution}, there exists a unique solution $ \bar{u} $ of \eqref{eq:limit_system} in the space\footnote{Let us remark that if $ T=\infty $ the solution is global.} $ \mathcal{C}_T\dot{H}^{ \frac{1}{2}}\cap L^4_{T} \dot{H}^1 $. Let us now select an $ \varepsilon\in\pare{0, T} $ so that
\begin{equation*}
\norm{u^{\tau}\pare{\cdot, \varepsilon} - \bar{u}\pare{\cdot, \varepsilon}}_{\dot{H}^{ \frac{1}{2}}}\leqslant \eta_{\varepsilon},
\end{equation*}
where $ \eta_\varepsilon\xrightarrow{\varepsilon\to 0}0 $, since the maps $ t\mapsto u^\tau\pare{\cdot, t} $ and $ t\mapsto \bar{u} \pare{\cdot, t} $ are continuous with values in $ \dot{H}^{ \frac{1}{2}} $ and share the same initial datum $ u_0 $.
Next let us denote as
\begin{equation*}
\delta u^\tau = u^\tau -\bar{u},
\end{equation*}
with the aid of \eqref{eq:Shilomis2} and \eqref{eq:limit_system} we can compute the evolution equation satisfied by $ \delta u^\tau $, i.e.
\begin{equation*}
\partial_t \delta u^\tau - \nu \Delta \delta u^\tau + \nabla\delta p^\tau = - \delta u^\tau \cdot\nabla u^\tau + \bar{u}\cdot\nabla \delta u^\tau + G^\tau,
\end{equation*}
where the outer force $ G^\tau $ is defined as
\begin{multline}\label{eq:Gtau}
G^\tau =
\pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F}\cdot \nabla \pare{-r^\tau + \frac{1}{1+\chi_0} \mathcal{G}_F} -
\frac{\chi_0}{\pare{1+\chi_0}^2} \mathcal{G}_F\cdot \nabla \mathcal{G}_F
\\
+ \frac{1}{2}\textnormal{curl} \bra{\pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times \pare{-r^\tau + \frac{1}{1+\chi_0} \mathcal{G}_F}}.
\end{multline}
We rely now on the following technical lemma whose proof is postponed:
\begin{lemma}\label{lem:Gtau_to_0}
The function $ G^\tau $ converges to zero as $ \tau\to 0 $ in $ L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{- \frac{1}{2} }} $.
\end{lemma}
We can hence endow the system with appropriate initial data at time $ t=\varepsilon $ in order to deduce the following Cauchy problem satisfied by $ \delta u^\tau $:
\begin{equation}\label{eq:deltau}
\left\lbrace
\begin{aligned}
& \partial_t \delta u^\tau - \nu \Delta \delta u^\tau + \nabla\delta p^\tau = - \delta u^\tau \cdot\nabla u^\tau + \bar{u}\cdot\nabla \delta u^\tau + G^\tau, & \pare{x, t}\in\mathbb{R}^3\times\pare{\varepsilon, T}, \\
& \div\ \delta u^\tau =0, & \pare{x, t}\in\mathbb{R}^3\times\pare{\varepsilon, T},
\\
& \left. \delta u^\tau\right|_{t=\varepsilon}= u^{\tau}\pare{\cdot, \varepsilon} - \bar{u}\pare{\cdot, \varepsilon}, & x\in \mathbb{R}^3 .
\end{aligned}
\right.
\end{equation}
We can hence perform an $ \dot{H}^{ \frac{1}{2}} $ energy estimate on the system \eqref{eq:deltau}, deducing the following energy inequality
\begin{equation}\label{eq:energy_ineq1}
\frac{1}{2}\frac{\d}{\d t} \norm{\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 + \nu \norm{\nabla\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 \leqslant \av{\ps{\delta u^\tau \cdot\nabla u^\tau}{\delta u^\tau}_{\dot{H}^{ \frac{1}{2}}}} + \av{\ps{\bar{u}\cdot\nabla \delta u^\tau}{\delta u^\tau}_{\dot{H}^{ \frac{1}{2}}}} + \av{\ps{G^\tau}{\delta u^\tau}_{\dot{H}^{ \frac{1}{2}}}}.
\end{equation}
The following bounds are moreover immediate for any $ \alpha >0 $
\begin{equation}\label{eq:bounds_energy_ineq}
\begin{aligned}
\av{\ps{G^\tau}{\delta u^\tau}_{\dot{H}^{ \frac{1}{2}}}} & \leqslant \alpha\nu \norm{\nabla\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{C}{\alpha\nu} \norm{G^\tau\pare{t}}_{\dot{H}^{-\frac{1}{2}}}^2, \\
\av{\ps{\delta u^\tau \cdot\nabla u^\tau}{\delta u^\tau}_{\dot{H}^{ \frac{1}{2}}}} & \leqslant \alpha\nu \norm{\nabla\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{C}{\alpha\nu} \norm{u^\tau\pare{t}}^4_{\dot{H}^1} \norm{\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2, \\
\av{\ps{\bar{u}\cdot\nabla \delta u^\tau}{\delta u^\tau}_{\dot{H}^{ \frac{1}{2}}}}& \leqslant \alpha\nu \norm{\nabla\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 + \frac{C}{\alpha\nu} \norm{\bar{u} \pare{t}}^4_{\dot{H}^1} \norm{\delta u^\tau\pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2.
\end{aligned}
\end{equation}
Whence selecting $ \alpha\in\pare{0, \frac{1}{8}} $, combining the inequalities of \eqref{eq:energy_ineq1} and \eqref{eq:bounds_energy_ineq} and applying a standard Gronwall argument we deduce the following bound for any $ t\in\pare{\varepsilon, T} $
\begin{multline}\label{eq:energy_ineq2}
\norm{\delta u^\tau \pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 + \nu \int_\varepsilon^t \norm{\nabla\delta u^\tau\pare{t'}}_{\dot{H}^{ \frac{1}{2}}}^2
\exp\set{\int _{t'}^t \norm{u^\tau\pare{t''}}_{\dot{H}^{ \frac{1}{2}}}^4 + \norm{\bar{u}\pare{t''}}_{\dot{H}^{ \frac{1}{2}}}^4 \d t''}\d t' \\
\leqslant C \eta_\varepsilon \ \exp\set{\int _{\varepsilon}^t \norm{u^\tau\pare{t'}}_{\dot{H}^{ \frac{1}{2}}}^4 + \norm{\bar{u}\pare{t'}}_{\dot{H}^{ \frac{1}{2}}}^4 \d t'}\\
+ \frac{C}{\nu} \int_0^t \norm{G^\tau\pare{t'}}_{\dot{H}^{-\frac{1}{2}}}^2 \exp\set{\int _{t'}^t \norm{u^\tau\pare{t''}}_{\dot{H}^{ \frac{1}{2}}}^4 + \norm{\bar{u}\pare{t''}}_{\dot{H}^{ \frac{1}{2}}}^4 \d t''}\d t'
\end{multline}
Defining hence
\begin{equation*}
\Phi_{u^\tau, \bar{u} }\pare{t',t}= \exp\set{\int _{t'}^t \norm{u^\tau\pare{t''}}_{\dot{H}^{ \frac{1}{2}}}^4 + \norm{\bar{u}\pare{t''}}_{\dot{H}^{ \frac{1}{2}}}^4 \d t''},
\end{equation*}
and since $ u^\tau, \bar{u}\in L^4_{T} \dot{H}^1 $ we deduce that
\begin{align*}
\Phi_{u^\tau, \bar{u} }\pare{t',t}\geqslant 1 , && \Phi_{u^\tau, \bar{u} }\pare{t',t} \leqslant K_{u^\tau, \bar{u} },
\end{align*}
whence \eqref{eq:energy_ineq2} can be rewritten in the following more compact form
\begin{equation}\label{eq:energy_ineq3}
\norm{\delta u^\tau \pare{t}}_{\dot{H}^{ \frac{1}{2}}}^2 + \nu \int_\varepsilon^t \norm{\nabla\delta u^\tau\pare{t'}}_{\dot{H}^{ \frac{1}{2}}}^2
\d t' \leqslant \frac{K_{u^\tau, \bar{u} }}{\nu} \pare{\eta_\varepsilon + \norm{G^\tau}_{L^2_T\dot{H}^{-1/2}}},
\end{equation}
but $ \norm{G^\tau}_{L^2_T\dot{H}^{-1/2}}\xrightarrow{\tau\to 0} 0 $ thanks to the result stated in Lemma \ref{lem:Gtau_to_0}, and since $ \eta_\varepsilon\xrightarrow{\varepsilon\to 0}0 $ the right hand side of \eqref{eq:energy_ineq3} can be made arbitrarily small, hence proving the convergence.
\end{enumerate}
\end{proof}
\textit{Proof of Lemma \ref{lem:Gtau_to_0}} :
Let us remark that we can rewrite the function $ G^\tau $ as
\begin{multline*}
G^\tau =
\pare{m^\tau+r^\tau }\cdot \nabla \pare{-r^\tau + \frac{1}{1+\chi_0} \mathcal{G}_F}
-\frac{\chi_0}{1+\chi_0} { \mathcal{G}_F}\cdot \nabla {r^\tau } \\
-\frac{1}{2}\textnormal{curl} \bra{\pare{m^\tau+r^\tau + \frac{\chi_0}{1+\chi_0} \mathcal{G}_F} \times {r^\tau }}
+ \frac{1}{2}\textnormal{curl} \bra{\pare{m^\tau+r^\tau } \times \pare{-r^\tau+ \frac{1}{1+\chi_0} \mathcal{G}_F }}
.
\end{multline*}
Commuting derivatives where necessary on terms of the form $ \mathcal{G}_F\otimes \nabla \pare{m^\tau, r^\tau} $, $ G^\tau $ can be rewritten in the following compact form
\begin{equation*}
G^\tau = R^\tau \otimes q_1\pare{\partial} R^\tau + q_2\pare{\partial} \pare{ R^\tau \otimes R^\tau} + R^\tau\otimes p_1\pare{\partial}\mathcal{G}_F + p_2\pare{\partial}\pare{\mathcal{G}_F\otimes R^\tau},
\end{equation*}
where $ q_1, q_2, p_1, p_2 $ are matrix-valued homogeneous Fourier multipliers of order one and $ R^\tau = m^\tau $ or $ r^\tau $. Hence using Lemma \ref{lem:Sob_product_rules} and the Sobolev interpolation inequality $ \norm{f}_{\dot{H}^{ 1}}\lesssim \norm{f}_{\dot{H}^{ \frac{1}{2}}}^{1/2}\norm{\nabla f}_{\dot{H}^{ \frac{1}{2}}}^{1/2} $ we deduce
\begin{align*}
\norm{R^\tau \otimes q_1\pare{\partial} R^\tau}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{- \frac{1}{2} }}} & \leqslant C \norm{R^\tau}_{L^\infty \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2} }}} \norm{\nabla R^\tau}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2}}}}, \\
\norm{q_2\pare{\partial} \pare{ R^\tau \otimes R^\tau}}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{- \frac{1}{2} }}} & \leqslant C \norm{R^\tau}_{L^\infty \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2} }}} \norm{\nabla R^\tau}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2}}}}, \\
\norm{R^\tau\otimes p_1\pare{\partial}\mathcal{G}_F}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{- \frac{1}{2} }}} & \leqslant C \norm{R^\tau}_{L^\infty \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2} }}} \norm{\nabla \mathcal{G}_F}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2}}}}, \\
\norm{p_2\pare{\partial}\pare{\mathcal{G}_F\otimes R^\tau}}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{- \frac{1}{2} }}} & \leqslant C \norm{R^\tau}_{L^\infty \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2} }}} \norm{\nabla \mathcal{G}_F}_{L^2 \pare{\pare{\varepsilon, T} ; \dot{H}^{ \frac{1}{2}}}},
\end{align*}
and each of the above terms converges to zero as $ \tau\to 0 $ thanks to the hypotheses assumed on $ \mathcal{G}_F $, the uniform bound \eqref{eq:energy_Hud_est_mr} and the convergence result \eqref{eq:conv_mr_to_0}.
\hfill$ \Box $
\section{Introduction}
The \emph{rank} of a semisimple tensor category $\mathcal{O}$
is the number of simple objects (up to isomorphism). A problem that
has received particular attention recently is the classification
of finite rank categories with some extra axioms satisfied by
the representation categories of finite groups, quantum groups or Hopf algebras.
A
\emph{pre-modular category} \cite{Br} is a finite rank ribbon
category, \emph{i.e.} a semisimple, balanced, rigid braided tensor
category. A \emph{modular category} \cite{Tur92} is a pre-modular
category that satisfies a further non-degeneracy condition (see
\cite{BK} or \cite{TuraevBook} for detailed definitions), while more
generally a \emph{fusion category} \cite{ENO} is a finite rank,
semisimple, rigid
tensor category. Zhenghan
Wang has recently conjectured that there are only finitely many
modular categories of a fixed rank; a conjecture that has been
verified for ranks 1,2,3 and 4, see \cite{BRSW} for an explicit list
of all modular categories of these ranks. Similar conjectures have
been proposed for pre-modular and fusion categories, and some
progress has been made for these generalizations (see \cite{Ostrik1}
and \cite{Ostrik2}). The problem of determining the ranks of known
pre-modular categories is motivated by this conjecture and the
relationship between these categories and low-dimensional topology
\cite{TuraevBook}, quantum computing \cite{FNSWW}, \cite{FLW} and
Hopf algebras \cite{ENO}.
The most ubiquitous examples of modular categories come from two
sources: representation categories of Hopf doubles of finite group
algebras, and sub-quotients of representation categories of
quantized universal enveloping algebras of simple Lie algebras
(henceforth \emph{quantum groups}) specialized at roots of unity. In
the finite group examples one always obtains a modular category,
whereas the quantum group categories sometimes fail the modularity
condition. The purpose of this note is to give formulas for the
ranks of the pre-modular categories constructed from quantum groups.
Most of these formulas have already appeared without formal proof in
a more primitive form in \cite{survey}. The ranks of the modular
categories constructed from finite groups have been considered in
\cite{CGR}, where all cases up to rank 50 are determined.
\section{Pre-modular Categories from Quantum Groups}
To each simple Lie algebra $\mathfrak{g}$ and positive integer $\ell$ (with
$\ell$ large enough) one obtains a family of pre-modular categories
sharing the same (finite) labeling set of simple objects and tensor
product decomposition rules. The construction (which can be found
in \cite{andersen} and \cite{TuraevBook}) is summarized as follows:
Lusztig's integral form of the Drinfeld-Jimbo quantum group $U_q\mathfrak{g}$
is well-defined for $q^2$ a primitive $\ell$th root of unity. The
corresponding representation category is not semisimple, but has a
well-behaved subcategory of so-called \emph{tilting modules}. The
quotient of the tilting module category by the tensor ideal of
\emph{negligible} morphisms (essentially the annihilator of a
trace-form) is a pre-modular category $\mathcal{C}(\mathfrak{g},q,\ell)$.
While the structure of $\mathcal{C}(\mathfrak{g},q,\ell)$
depends on the choice of $q$, the rank only depends on $\mathfrak{g}$ and
$\ell$.
To describe the labeling sets of simple objects we need some
standard notation from Lie theory, which is found in Table
\ref{notation}.
\begin{table}
\caption{Notation}\label{notation}
\begin{tabular}{*{2}{lr}}
${\lambda}_i$ & $i$th fundamental weight\\
$P_+$ & dominant weights \\
$h$ & Coxeter number\\
$h^{\vee}$ & dual Coxeter number\\
$\langle\cdot,\cdot\rangle$ & symmetric bilinear form on weight lattice\\
$m$ & ratio $\frac{\langle \alpha,\alpha\rangle}{\langle\beta,\beta\rangle}$,
$\alpha$ a long root, $\beta$ a short root\\
$\theta_0$ & longest root\\
$\theta_1$ & longest short root\\
$\rho$ & half the sum of the positive roots\\
\end{tabular}
\end{table}
We take the form $\langle\cdot,\cdot\rangle$ to be normalized so that the
square length of a \emph{short} root is 2.
\begin{definition}
Referring to Table \ref{notation} for notation, the
labeling sets of isomorphism classes of simple objects are (see
\cite{kirillov}):
$$C_\ell(\mathfrak{g}):=\begin{cases}\{{\lambda}\in P_+: \langle{\lambda}+\rho,\theta_0\rangle<\ell\} &
\text{if $m\mid \ell$}\\
\{{\lambda}\in P_+: \langle{\lambda}+\rho,\theta_1\rangle<\ell\} & \text{if $m\nmid
\ell$}\end{cases}$$ \end{definition}
Observe that $m=1$ for the
\emph{simply-laced} Lie types $A,D$ and $E$, while $m=2$ for Lie
types $B,C$ and $F_4$, and $m=3$ for Lie type $G_2$. We define an
auxiliary label $\ell_m=0$ if $m\mid\ell$ and $\ell_m=1$ if
$m\nmid\ell$ for notational convenience.
We reduce the problem of determining the cardinalities of the
labeling sets $C_\ell(\mathfrak{g})$ to counting partitions of $n$ with parts
in a fixed (finite) multiset $\mathcal{S}(\mathfrak{g},\ell_m)$ that depends
only on the rank and Lie type of $\mathfrak{g}$ and the divisibility of $\ell$
by $m$ (encoded in $\ell_m$). Fix a simple Lie algebra $\mathfrak{g}$ of rank
$r$ and a positive integer $\ell$. Let ${\lambda}=\sum_i a_i{\lambda}_i$ be a
dominant weight of $\mathfrak{g}$ written as an $\mathbb{N}$-linear combination of
fundamental weights ${\lambda}_i$. To determine if ${\lambda}\in C_\ell(\mathfrak{g})$,
we compute:
$$\langle{\lambda}+\rho,\theta_j\rangle=\langle\rho,\theta_j\rangle+\sum_{i=1}^r a_i\langle{\lambda}_i,\theta_j\rangle$$
where $j=0$ or $1$ depending on whether $m\mid\ell$ or not. Setting
$L_i^{(j)}=\langle{\lambda}_i,\theta_j\rangle$ we see that the condition that
${\lambda}\in C_\ell(\mathfrak{g})$ becomes:
$$\sum_{i=1}^r a_i L_i^{(j)}\leq\ell-\langle\rho,\theta_j\rangle-1.$$ Since
$a_i,L_i^{(j)}\in\mathbb{N}$ we have:
\begin{lemma}
The cardinality of $C_\ell(\mathfrak{g})$ is the number of partitions of all
natural numbers $n$, $0\leq n\leq\ell-\langle\rho,\theta_j\rangle-1$, into
parts from the size $r=\mathrm{rank}(\mathfrak{g})$ multiset
$\mathcal{S}(\mathfrak{g},\ell_m)=[L_i^{(j)}]_{i=1}^r$.
\end{lemma}
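As a sanity check, for $\mathfrak{g}$ of type $A_1$ we have $\mathcal{S}(\mathfrak{g},\ell_m)=[1]$ and $\langle\rho,\theta_0\rangle=1$, so the lemma counts the partitions of all $n$ with $0\leq n\leq\ell-2$ into parts of size $1$; there is exactly one such partition for each $n$, whence $|C_\ell(\mathfrak{g})|=\ell-1$, matching the familiar count of simple objects for quantum $\mathfrak{sl}_2$.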
So it remains only to compute the numbers $\langle\rho,\theta_j\rangle$
and $L_i^{(j)}$ (with $j=0,1$) for each Lie algebra $\mathfrak{g}$ and integer
$\ell>\langle\rho,\theta_j\rangle$ and to apply standard combinatorics to
count the number of partitions into parts in
$\mathcal{S}(\mathfrak{g},\ell_m)$. The first task is easily accomplished with
the help of the book \cite{Bou}. Table \ref{rank} lists the results
of these computations, where $\ell_0$ is the minimal non-degenerate
value of $\ell$ (\emph{i.e.} satisfying
$\ell_0\geq\langle\rho,\theta_j\rangle+1$). The combinatorial techniques
are described in the next section.
\section{Generating Functions}
\begin{table}\caption{$\mathcal{C}(\mathfrak{g},q,\ell)$ Data}\label{rank}
\begin{tabular}{*{3}{|c}|}
\hline
$X_r$ & $\mathcal{S}(\mathfrak{g},\ell_m)$ & $\ell_0$\\
\hline\hline
$A_r$ & $[1,\ldots,1]$ & $r+1$
\\ \hline $B_r$, $\ell$ odd& $[1,2,\ldots,2]$ & $2r+1$
\\ \hline $B_r$, $\ell$ even& $[2,2,4,\ldots,4]$ & $4r-2$
\\ \hline $C_r$, $\ell$ odd& $[1,2,\ldots,2]$ & $2r+1$
\\ \hline $C_r$, $\ell$ even& $[2,\ldots,2]$ & $2r+2$
\\ \hline $D_r$& $[1,1,1,2,\ldots,2]$ & $2r-2$
\\ \hline $E_6$& $[1,1,2,2,2,3]$ & $12$
\\ \hline $E_7$& $[1,2,2,2,3,3,4]$ & $18$
\\ \hline $E_8$& $[2,2,3,3,4,4,5,6]$ & $30$
\\ \hline $F_4$, $\ell$ even& $[2,4,4,6]$ & $18$
\\ \hline $F_4$, $\ell$ odd& $[2,2,3,4]$ & $13$
\\ \hline $G_2$, $3 \mid \ell$& $[3,6]$ & $12$
\\ \hline $G_2$, $3\nmid \ell$& $[2,3]$ & $7$\\
\hline
\end{tabular}
\end{table}
Let $P_{\mathcal{T}}(n)$ denote the number of partitions of $n$ into
parts in a multiset $\mathcal{T}$, and
$P_{\mathcal{T}}[s]=\sum_{n=0}^sP_{\mathcal{T}}(n)$ the number of
partitions of all integers $0\leq n\leq s$ into parts from the
multiset $\mathcal{T}$. Any standard reference on generating
functions (see e.g. \cite{Stan}) will provide enough details about
generating functions to prove the following:
\begin{lemma}\label{gflemma}
The number $P_{\mathcal{T}}(n)$ of partitions of $n$ into parts from
the multiset $\mathcal{T}$ has generating function:
$$\prod_{t\in\mathcal{T}}\frac{1}{1-x^t}=\sum_{n=0}^\infty P_{\mathcal{T}}(n)\,x^n,$$
while the number $P_{\mathcal{T}}[s]$ of partitions of all $n\in\mathbb{N}$
with $0\leq n\leq s$ into parts from the multiset $\mathcal{T}$ has
generating function:
$$\frac{1}{1-x}\prod_{t\in\mathcal{T}}\frac{1}{1-x^t}=\sum_{s=0}^\infty P_{\mathcal{T}}[s]\,x^s.$$
\end{lemma}
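The second identity follows from the first upon multiplication by the geometric series $\frac{1}{1-x}=\sum_{k\geq0}x^k$, which turns coefficients into partial sums:
$$\frac{1}{1-x}\sum_{n=0}^\infty P_{\mathcal{T}}(n)\,x^n=\sum_{s=0}^\infty\left(\sum_{n=0}^s P_{\mathcal{T}}(n)\right)x^s=\sum_{s=0}^\infty P_{\mathcal{T}}[s]\,x^s.$$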
Applying this lemma to the sets $\mathcal{S}(\mathfrak{g},\ell_m)$ given in
Table \ref{rank} we obtain:
\begin{theorem}
Define
$$F_{\mathfrak{g},\ell_m}(x)=\frac{1}{1-x}\prod_{k\in\mathcal{S}(\mathfrak{g},\ell_m)}\frac{1}{1-x^k}.$$
Then the rank $|C_\ell(\mathfrak{g})|$ of the pre-modular category
$\mathcal{C}(\mathfrak{g},q,\ell)$ is
the coefficient of $$x^{\ell-\ell_0+\ell_m}$$ in the power series expansion
of $F_{\mathfrak{g},\ell_m}(x)$.
\end{theorem}
\begin{proof}
It is clear from Lemma \ref{gflemma} that the coefficients of the
generating function $F_{\mathfrak{g},\ell_m}(x)$ count the appropriate
partitions. The power of $x$ whose coefficient gives the rank for a
specific $\ell$ is shifted by the minimal non-degenerate $\ell_0$,
which corresponds to the $x^0=1$ term if $m\mid\ell$ and to the
$x^1=x$ term if $m\nmid\ell$, hence the correction by $x^{\ell_m}$.
With this normalization only the coefficients of those powers of $x$
divisible (resp. indivisible)
by $m$ give ranks corresponding to $\ell$ divisible (resp.
indivisible) by $m$.
\end{proof}
We illustrate the application of this theorem with some examples.
\begin{example}
Let $\mathfrak{g}$ be of type $G_2$. \begin{enumerate}
\item[(a)] Let $\ell=27$.
Then $\ell_m=0$ and $\ell_0=12$. So the rank of
$\mathcal{C}(\mathfrak{g}(G_2),q,27)$ is given by the coefficient of
$x^{27-12+0}=x^{15}$ in
$$\frac{1}{(1-x)(1-x^3)(1-x^6)}=(1+x+x^2)(1+2x^3+4x^6+6x^9+9x^{12}+12x^{15}+\cdots)$$
so $|C_{27}(\mathfrak{g}(G_2))|=12$.
\item[(b)] Let $\ell=14$. Then $\ell_m=1$ and $\ell_0=7$. So
$|C_{14}(\mathfrak{g}(G_2))|$ is the coefficient of $x^{14-7+1}=x^8$ in
$$\frac{1}{(1-x)(1-x^2)(1-x^3)}=1+x+2x^2+3x^3+4x^4+5x^5+7x^6+8x^7+10x^8\cdots$$
so the rank of $\mathcal{C}(\mathfrak{g}(G_2),q,14)$ is $10$.
\end{enumerate}
\end{example}
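These coefficient extractions are easy to mechanize. The following short script (an illustrative sketch; the function and variable names are ours and not part of the mathematical development) computes the rank by the standard partition-counting recurrence and reproduces both examples above.
\begin{verbatim}
# Illustrative sketch: the rank |C_l(g)| as the coefficient of
# x^(l - l_0 + l_m) in F(x) = 1/(1-x) * prod_k 1/(1-x^k).
def rank(parts, ell, ell0, ellm):
    s = ell - ell0 + ellm          # exponent of x to extract
    coeff = [1] + [0] * s          # truncated power series of the constant 1
    for k in parts + [1]:          # the extra part 1 encodes the factor 1/(1-x)
        for n in range(k, s + 1):  # multiply the truncated series by 1/(1-x^k)
            coeff[n] += coeff[n - k]
    return coeff[s]

print(rank([3, 6], 27, 12, 0))     # G_2, l = 27  ->  12
print(rank([2, 3], 14, 7, 1))      # G_2, l = 14  ->  10
\end{verbatim}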
We close with some remarks.
\begin{remark}
For some Lie types the pre-modular categories described here admit
pre-modular subcategories. For $\mathfrak{g}$ of Lie type $A_r$ with $\ell$
chosen so that $\gcd(\ell,r+1)=1$, one obtains a modular
subcategory generated by the simple objects whose labels are integer
weights (see \cite{MaW}). The rank of this subcategory is easily
obtained as $\frac{1}{r+1}$ times the rank of the full category.
Similarly, one obtains a pre-modular subcategory from $\mathfrak{g}$ of type $B_r$
for $\ell$ odd by taking the simple objects labeled by integer
(non-spin) weights; its rank is exactly
half that of the original category.
\end{remark}
\bibliographystyle{amsalpha}
\section{Acknowledgement}
\label{acknowledgement}
Support by the National Science Foundation through
grant number PHY-1509892 is gratefully acknowledged.
The authors also acknowledge the hospitality and support of the KITP
(National Science Foundation Grant No. NSF PHY-1125915).
We thank J. Jacob for providing us with a copy
of his
coupled-channels code.
\section{Introduction}
Resonance is a central phenomenon in every field of wave physics and is related to what is commonly called a spectral problem
(the eigenfrequencies and eigenmodes solutions of source free governing equations).
These spectral elements can be understood as privileged vibrational states and are thus an intrinsic characteristic of
the system. Closed cavities with perfect conducting walls have real eigenvalues and normal modes,
but for open electromagnetic systems, even for materials without losses, eigenfrequencies $\omega$ are in general complex,
the real part $\omega'>0$ giving the resonant frequency and the imaginary part $\omega''<0$ the linewidth of the resonance.
The associated leaky modes \cite{Sammut76} (also known as resonant states \cite{fox1961resonant,simon_resonances}, quasimodes \cite{lamb1964theory},
quasi-normal modes \cite{Settimi2009,Settimi2003},
quasi-guided modes \cite{Tikhodeev2002} in the literature) are proportional to $\cos[\omega'(t-r/v)]\exp[\omega''(t-r/v)]$ so they
are no longer of finite energy and even grow exponentially in space at infinity while possessing finite lifetime.
Physically, this exponential divergence corresponds to a wavefront excited at past times
and propagating away from the system, and the infinite energy can be understood as the accumulation of the energy radiated from the open resonator to
the rest of the universe.\\
The study of resonant properties of open optical systems is of fundamental interest in various domains of application
such as biophotonics \cite{Rigneault2005,Wenger2008} for single molecule fluorescence detection,
antennas \cite{Muehlschlegel2005,Rolly2012}, photonic crystals \cite{Yoshie2001,Akahane2003},
microstructured optical fibers \cite{Nicolet2}, diffraction gratings \cite{Peng1996,Tamir1997,Bonod2008} and
subwavelength aperture arrays \cite{RevModPhys.82.729,Genet1}
for example for filtering
applications \cite{FEHREMBACH2003,fehrembach, Ding2004a,these_vial_2013_anglais}, quantum electrodynamics
(QED) cavity experiments \cite{Gleyzes2007,Kuhr2007,CavityQED,Vuckovic2001}, \textit{etc.} Finding eigenmodes
of open structures with non trivial geometries is thus of great theoretical and practical interest.\\
It is well known that the eigenfrequencies of an open system correspond to the poles of its scattering matrix
or of Fresnel coefficients \cite{nevbook}.
The numerical computation of these poles remains a challenging task and several approaches have been used.
Firstly, one has to compute the S-matrix coefficients, which can be done by numerous numerical methods:
the {Rigorous Coupled
Wave Analysis} (RCWA \cite{gaylord,rokushima1}) also known as the {Fourier
Modal Method} (FMM \cite{li3,guizal2009reformulation}),
the Differential Method \cite{watanabe},
the Integral Method \cite{maystre}, the
Finite Difference Time Domain method (FDTD \cite{yee,saj}), the
{Finite Element Method} (FEM \cite{delort,Ohkawa1,bao,Demesy2009}),
the {Method of Fictitious Sources} (MFS \cite{zolla2})\dots
Secondly, one must find the poles of the S-matrix, and several approaches have been developed to do so:
computing the poles of its determinant \cite{Centeno2000}, the
poles of its maximum eigenvalue \cite{Felbacq2001}, others techniques based on the linearization of its inverse \cite{Gippius2010}, or more recently
an iterative method \cite{Bykov13}. In spite of numerous ways of improving the convergence of these methods, the dimension of the S-matrix has to be
very large in general to guarantee a sufficient precision of the results, which can lead to numerical instabilities.
Note that another method based on the computation of Cauchy path integrals of S-matrix valued functions of a complex variable
can be used to find an arbitrary number of poles in a given region of the complex plane \cite{Felbacq2001,fpcf}.\\
For a given problem, one can define an associated Maxwell's operator that depends on geometry, material properties and
boundary conditions. We are interested here in operators associated with functional spaces with elements defined on an unbounded
domain. In that case, it turns out that the \emph{spectrum} of this operator (the generalized set of eigenvalues) has to be
considered to fully characterize the resonant properties of the problem at stake. In particular, in
addition to quasimodes associated with discrete complex eigenfrequencies, the spectrum of such an operator shows a real continuous part
associated with radiation modes expressing the propagation of energy from the structure towards the infinite space. \\
We use a finite element spectral method to study the resonant properties of open optical systems. Thanks to
its versatility it can handle complex geometries and arbitrary materials, which is necessary in most practical applications.
Moreover, the method naturally leads to a linear eigenvalue problem in matrix form after discretization because the basis functions
are frequency independent, in contrast to other methods such as the Boundary Element Method (BEM)
where the equations are projected on frequency dependent Green functions.
The FEM has already been used to compute leaky modes in different cases \cite{Nicolet2,Eliseev2005,Zhang2008},
however, it is of prime importance to use adequate absorbing boundary conditions to correctly handle
the divergent behaviors of fields. The solution is to use Perfectly Matched Layers (PML \cite{Berenger1994185})
damping the fields in free space \cite{pmlCompel,Popovic2003,Bon-Gou-Haz-Pri-2009}. Through an \textit{ad hoc} complex change of coordinates,
PMLs provide the suitable non-Hermitian extension of Maxwell's operator that makes possible the computation of leaky and radiation modes.
It is worth noting that the geometrical transformation introduced to define PMLs is virtually exact and its effect is not only to turn
the continuous spectrum into complex values but also to allow the computation of complex frequencies associated with quasimodes.
The continuous spectrum is finally approximated by a discrete set of eigenvalues because of the discretization of the problem by the FEM
and the effect of the truncation of PMLs at a finite distance.\\
Once the eigenmodes of the open system have been found, one expects a resonant behavior of the diffracted field
when shining light with frequency close to the real part of a given eigenfrequency. In other words, the electromagnetic spectrum
shows rapid variations with incident parameters (frequency and angle) around the resonant frequency, the rate of variation being related
to the imaginary part of the eigenfrequency, accounting for the leakage of the mode. This crucial information is at
the heart of the diffractive properties of open resonators. An interesting question is how to recover a diffracted field
with the modes as building blocks. This can be done by expanding any diffracted field on the complete basis of the eigenmodes. \\
The question of the spectral representation of waves
in open systems has been extensively studied \cite{Shevchenko1971,Leung90,Leung98} but is still not fully addressed in the general case (with non trivial geometries and material properties),
thus making quasimodal expansion techniques not
well suited for practical applications. More recently, an approach similar
to the one reported here (using PML to treat an approximate closed problem) has been proposed \cite{Derudder1999,Derudder2000,Dai2012}.
Another method called Resonant State Expansion (RSE \cite{Muljarov2010,Doost2012,Doost2013}) consists in treating the system as a perturbation
of a canonical problem whose spectral elements are known in closed form.
The idea is to compute these perturbed modes and to use them in the modal decomposition. Finally, a recent approach based on quasi-normal mode expansion has been developed to define mode volumes and revisit the Purcell factor in nanophotonic resonators \cite{Lalanne2013}.\\
The major difficulty lies in the fact that the modes of open systems cannot be normalized in a standard fashion by integrating their square modulus.
Instead we must consider an adjoint eigenproblem with Hermitian-conjugate material properties, whose modes are bi-orthogonal to
the modes of the initial problem. Equipped with this set of modes, the spectral representation of any diffracted field can be obtained.
The coefficients in the expansion express the coupling between the sources (particularly a plane wave) and a given mode,
revealing the conditions of excitation of this mode when varying incident parameters. With this QuasiModal Expansion Method (QMEM),
we obtain a reduced order model with a few modes that can accurately describe the diffractive behavior of open structures.
In addition, the point-source case makes the computation of Green functions and of the LDOS straightforward once the eigenmodes of the system have been found.\\
The paper is organized as follows: we first present our FEM formulation of the diffraction of a plane wave
by an arbitrary number of scatterers of possibly complex shape buried in a multilayer stack, for both fundamental
polarizations. The materials can be inhomogeneous, dispersive and anisotropic
and the formulation can handle mono-periodic gratings. We detail the equivalent radiation problem,
the use of PML and the computational parameters related to the FEM.
In Section \ref{sec:specpb}, we develop the formulation of the spectral problem, with emphasis
on the structure of the spectrum of Maxwell's operator and its modifications with the use of PML.
Section \ref{sec:mm} is devoted to the setup of the QMEM through the treatment of an adjoint spectral problem.
Finally, we give examples of application in Section \ref{sec:exnum} showing the strength of the methods developed
by providing a meticulous modal analysis of scattering properties of open resonators.
We first study a triangular rod in vacuum and show how the angle-dependent excitation of resonances in the absorption cross section
can be explained by the QMEM coefficients. The modal reconstructions of the diffracted field, the absorption cross section and the LDOS are also provided.
The second example is that of a lamellar diffraction grating, for which the transmission and reflection coefficients
show a complex spectral behavior that is fully explained and faithfully reproduced by the QMEM.
\section{Scattering problem}
\label{part:scatt_pb}
\subsection{Setup of the problem}
\begin{figure}[htbp!]
\includegraphics[width=\linewidth]{figure1-eps-converted-to.pdf}
\caption{Sketch of the studied structures and notations.}
\label{dessin}
\end{figure}
The formulation used here is the one described in Refs. \onlinecite{Demesy2007,Demesy2010}.
It relies on the fact that the diffraction problem can be rigorously treated as an equivalent radiation problem with sources
inside the diffractive object. We denote by $\boldsymbol x$, $\boldsymbol y$ and $\boldsymbol z$ the unit vectors of an orthogonal Cartesian co-ordinate system $Oxyz$.
We deal with time-harmonic fields, so that the electric and magnetic fields are represented by complex
vector fields $\boldsymbol E$ and $\boldsymbol H$ with a time-dependence in $\mathrm{exp}(-i\omega t)$,
which will be dropped in the notation in the sequel. Moreover, we denote $k_0=\omega/c$.\\
To remain as general as possible (in particular to handle PML), we may consider $z$-anisotropic material, so the tensor
fields of relative permittivity $\tens{\varepsilon}$ and
relative permeability $\tens{\mu}$ are of the following form:
\begin{equation}
\tens{\varepsilon}=
\left(
\begin{array}{ c c c}
\varepsilon_{xx} & \conj{\varepsilon_{a}} & 0 \\
\varepsilon_{a} & \varepsilon_{yy} & 0 \\
0 & 0 & \varepsilon_{zz}
\end{array} \right)
\text{ and}\hspace{20pt}
\tens{\mu}=
\left(
\begin{array}{ c c c}
\mu_{xx} & \conj{\mu_{a}} & 0 \\
\mu_{a} & \mu_{yy} & 0 \\
0 & 0 & \mu_{zz}
\end{array} \right),
\end{equation}
where the coefficients $\varepsilon_{xx}$, $\varepsilon_{a}$, \dots, $\mu_{zz}$ are (possibly) complex valued
functions of $x$ and $y$, and where $\conj{\varepsilon_{a}}$ (resp. $\conj{\mu_{a}}$)
is the complex conjugate of
$\varepsilon_{a}$ (resp. ${\mu_{a}}$).\\
The studied structures are invariant along $Oz$. They are composed of $N$ homogeneous layers
of relative permittivity $\varepsilon_{j}$ and relative permeability $\mu_{j}$, $j=1,\dots,N$ (See Fig.~\ref{dessin}).
These layers may contain one or several inhomogeneities. For the sake of clarity, we only consider one scatterer (See Fig.~\ref{dessin}(a))
or one infinite $d$-periodic chain of scatterers (See Fig.~\ref{dessin}(b)), made of isotropic and homogeneous material with
relative permittivity $\varepsilon_{g'}$ and relative permeability $\mu_{g'}$. These restrictions are assumed to simplify the theoretical developments but
our methods can treat additional diffractive objects buried inside different layers possibly made of $z$-anisotropic materials
without increasing the computational cost. The substrate (-) and superstrate (+) are homogeneous and isotropic with relative permittivity
$\varepsilon^{-}$ and $\varepsilon^{+}$ and relative permeability $\mu^{-}$ and $\mu^{+}$.
The structure is illuminated by an incident plane wave of wave vector defined by the angle $\theta_0$:
$\boldsymbol k^+ =\alpha\boldsymbol x+\beta\boldsymbol y =k^+(\sin{\theta_0}\boldsymbol x-\cos{\theta_0}\boldsymbol y)$.
Its electric (resp. magnetic) field is linearly polarized along the $z$-axis, this is the
so-called transverse electric or s-polarization case (resp. transverse magnetic or p-polarization case).
Under the aforementioned assumptions, the diffraction problem in a non conical
mounting can be separated into two fundamental scalar cases, TE and TM.
Thus we search for a $z$-linearly polarized electric (resp. magnetic)
field $\boldsymbol E=e(x,y)\boldsymbol z$ (resp. $\boldsymbol H=h(x,y)\boldsymbol z$). Denoting $\widetilde{\tens{\varepsilon}}$
and $\widetilde{\tens{\mu}}$
the $2\times2$ matrices extracted from $\tens{\varepsilon}$ and $\tens{\mu}$:
\begin{equation}
\widetilde{\tens{\varepsilon}}=
\left(
\begin{array}{ c c}
\varepsilon_{xx} & \conj{\varepsilon_{a}} \\
\varepsilon_{a} & \varepsilon_{yy}
\end{array} \right)
\hspace{10pt}\text{ and}\hspace{10pt}
\widetilde{\tens{\mu}}=
\left(
\begin{array}{ c c}
\mu_{xx} & \conj{\mu_{a}}\\
\mu_{a} & \mu_{yy}
\end{array} \right),
\end{equation}
the functions $e$ and $h$ are solution of similar differential equations:
\begin{equation}
\mathcal{L}_{\tens{\xi},\chi}(u):=\div(\tens{\xi}\, \bm{\mathrm{\nabla}} u)+k_0^2\, \chi\, u=0,
\label{helm2D}
\end{equation}
such that $ u^d:=u-u_0$ satisfies an Outgoing Wave Condition (OWC), with
\begin{equation*}
u=e, \hspace{5pt} \tens{\xi}=\widetilde{\tens{\mu}}^{\rm T}/\mathrm{det}(\widetilde{\tens{\mu}}),
\hspace{5pt} \chi=\varepsilon_{zz} \quad\text{for the TE case,}
\end{equation*}
\begin{equation*}
u=h, \hspace{5pt} \tens{\xi}=\widetilde{\tens{\varepsilon}}^{\rm T}/\mathrm{det}(\widetilde{\tens{\varepsilon}}),
\hspace{5pt} \chi=\mu_{zz} \quad\text{for the TM case.}
\end{equation*}
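As a sanity check, for isotropic media ($\widetilde{\tens{\varepsilon}}=\varepsilon\,\mathrm{I}$ and $\widetilde{\tens{\mu}}=\mu\,\mathrm{I}$) one gets $\tens{\xi}=\mu^{-1}\mathrm{I}$ in the TE case and $\tens{\xi}=\varepsilon^{-1}\mathrm{I}$ in the TM case, so that Eq.~(\ref{helm2D}) reduces to the familiar scalar Helmholtz equations
\begin{equation*}
\div(\mu^{-1}\, \bm{\mathrm{\nabla}} e)+k_0^2\,\varepsilon\, e=0
\quad\text{and}\quad
\div(\varepsilon^{-1}\, \bm{\mathrm{\nabla}} h)+k_0^2\,\mu\, h=0.
\end{equation*}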
Under this form, the problem is not suited to direct numerical solution
because several quantities are infinite: the sources of the plane wave are infinitely far away, the geometric domain is unbounded and
in the periodic case the scattering structure is itself infinite.
To circumvent these issues, we compute only the
diffracted field solution of an equivalent radiation problem with sources inside the scatterers,
we use PMLs to truncate the unbounded domain at
a finite distance,
and we use quasiperiodicity conditions to model a single period in the grating case.
\subsection{Equivalent radiation problem}
Denoting $\tens{\xi_1}$ and $\chi_1$ the tensor field and the scalar function describing the
multilayer problem,
the function $u_1$ is defined as the unique solution of:
\begin{equation}
\mathcal{L}_{\tens{\xi_1},\chi_1}(u_1)=0,
\end{equation}
such that $u_1^d:=u_1-u_0$ satisfies an OWC. The expression of this function can be
calculated with a transfer matrix formalism. The unknown function $u_2^d$ is thus given by:
\begin{equation}
u_2^d=u-u_1=u^d-u_1^d.
\end{equation}
The scattering problem (\ref{helm2D}) can be rewritten as:
\begin{equation}
\mathcal{L}_{\tens{\xi},\chi}(u_2^d)=-\mathcal{L}_{\tens{\xi},\chi}(u_1):=\mathcal{S}_1.
\label{equ:u2d}
\end{equation}
The term on the right hand side can be seen as a source term $\mathcal{S}_1$ with support in
the diffractive objects and is known in closed form (See Appendix~\ref{appendix:source_term} for the detailed expression).
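Concretely, since $\mathcal{L}_{\tens{\xi_1},\chi_1}(u_1)=0$, subtracting the two operators applied to $u_1$ gives
\begin{equation*}
\mathcal{S}_1=-\div\!\left[(\tens{\xi}-\tens{\xi_1})\,\bm{\mathrm{\nabla}} u_1\right]-k_0^2\,(\chi-\chi_1)\,u_1,
\end{equation*}
which makes the support property explicit: $\mathcal{S}_1$ vanishes wherever $\tens{\xi}=\tens{\xi_1}$ and $\chi=\chi_1$, \textit{i.e.} outside the scatterers.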
\subsection{Perfectly Matched Layers}
Transformation optics has recently unified various techniques in
computational electromagnetics such as the treatment of open problems, helicoidal geometries
or the design of invisibility cloaks \cite{Nicolet2008}.
These apparently different problems share the same concept of geometrical transformation,
leading to equivalent material properties \cite{Lassas2001739,Lassas2001}.
A very simple and practical rule can be set up \cite{fpcf}: when changing the co-ordinate system,
all one has to do is replace the initial material properties $\tens{\varepsilon}$
and $\tens{\mu}$ by equivalent material properties $\tens{\varepsilon}^\mathrm{s}$ and $\tens{\mu}^\mathrm{s}$
given by the following rule:
\begin{equation}
\tens{\varepsilon}^\mathrm{s}=\boldsymbol J^{-1}\tens{\varepsilon} \boldsymbol J^{-\rm T}\mathrm{det}(\boldsymbol J) \hspace{5pt}
\text{and~} \hspace{5pt} \tens{\mu}^\mathrm{s}=\boldsymbol J^{-1}\tens{\mu} \boldsymbol J^{-\rm T}\mathrm{det}(\boldsymbol J),
\label{equ_tranform}
\end{equation}
where $\boldsymbol J$ is the Jacobian matrix of the co-ordinate transformation consisting of the partial derivatives
of the new coordinates with respect to the original ones ($\boldsymbol J^{-\rm T}$ is
the transpose of its inverse). In this framework, the most natural way to define PMLs is to consider
them as maps on a complex space $\Gamma$, whose co-ordinate change leads to equivalent permittivity
and permeability tensors. The associated complex valued change of coordinates is given by:
\begin{equation}
\eta'(\eta)=\int_0^{\eta} s_\eta(\ell)\,\mathrm{d}\ell,
\label{ys}
\end{equation}
where $\eta'$ is a complex coordinate such that $\mathfrak{Re}(\eta') = \eta$
is the original coordinate (corresponding to the initial
"physical" coordinate system). The function $s_\eta$ is a complex
valued function depending on a real variable. In practice, the change
of coordinates is chosen to be the identity in the region
of interest (where the fields have therefore directly their
untransformed values) and the complex stretch is limited to
a surrounding layer. In this paper we use cylindrical or Cartesian PML with constant
stretching coefficient $s_\eta=\sigma\mathrm{e}^{i\phi}$ with $\sigma>0$ and $0<\phi<\pi/2$.\\
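For instance, in the standard special case of a Cartesian PML stretching only the $y$ coordinate with a constant coefficient $s_y$, the Jacobian is $\boldsymbol J=\mathrm{diag}(1,s_y,1)$, and the rule (\ref{equ_tranform}) applied to an isotropic medium $\tens{\varepsilon}=\varepsilon\,\mathrm{I}$ yields the familiar uniaxial PML medium
\begin{equation*}
\tens{\varepsilon}^\mathrm{s}=\varepsilon\,\mathrm{diag}\!\left(s_y,\,s_y^{-1},\,s_y\right),
\end{equation*}
and similarly for $\tens{\mu}^\mathrm{s}$.\\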
\subsection{Quasiperiodicity}
Let $\Gamma_l$ and $\Gamma_r$ be the two parallel boundaries orthogonal to the direction of periodicity $x$ and separated by $d$.
Bloch theorem implies:
\begin{equation}
u_2^d(x+d)=u_2^d(x)\mathrm{e}^{i\alpha d}.
\end{equation}
In practice, we consider $u_2^d$ as unknown on $\Gamma_l$ (which is done by applying homogeneous Dirichlet conditions)
and we impose the value at each point of $\Gamma_r$ to be equal to the value at the corresponding point of $\Gamma_l$ multiplied by the phase factor $\mathrm{e}^{i\alpha d}$.
\subsection{The FEM formulation}
\label{subsection:eigpb}
The radiation problem defined by
Eq.~(\ref{equ:u2d}) is then solved by the FEM, using PMLs to truncate the infinite regions and by
setting convenient boundary conditions on the outermost limits of the domain, depending on the
problem. For mono-periodic structures, we apply Bloch quasi-periodicity conditions with coefficient $\alpha$ on the two
parallel boundaries orthogonal to the grating direction of periodicity. In all cases, we apply homogeneous
Neumann or Dirichlet boundary conditions on the outward boundary of the PMLs.
The computational cell is meshed using second-order Lagrange elements. In the numerical
examples in the sequel, the maximum element size is set to
$\lambda / (N_m\sqrt{\abs{\mathfrak{Re}(\varepsilon)}})$, where $N_m$ is an integer
(between 6 and 10 is usually a good choice).
The final algebraic system is solved using a direct solver (PARDISO \cite{Schenk2004475}).
\section{Spectral problem}\label{sec:specpb}
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure2-eps-converted-to.pdf}
\caption{Guided modes, continuous spectrum and leaky modes in an open waveguide.\label{PML_leaky}}
\end{figure}
Generally speaking, the diffractive properties of open systems can be studied at a more fundamental level by
looking for both the generalized eigenfunctions and eigenvalues of a Maxwell's operator $\mathcal{M}_{\tens{\xi}}$ associated with the problem.
The definition and classification of the spectrum of an operator is quite a delicate mathematical question and
is beyond the scope of this paper (nevertheless we give
in Appendix \ref{appendix:spec_anal} some basic definitions).
The eigenproblem we are dealing with consists in finding the solutions of source free Maxwell's equations, \textit{i.e.}
finding eigenvalues ${\Lambda_n}=({\omega}_n/c)^2$ and nonzero eigenvectors $v_n$ such that:
\begin{equation}
\mathcal{M}_{\tens{\xi}}(v_n):=-\div(\tens{\xi}\,\bm{\mathrm{\nabla}} v_n)=\Lambda_n \,\chi \, v_n,
\label{eq:eigenpb}
\end{equation}
where $v_n$ satisfies an OWC.
We consider here non dispersive materials, so that the eigenvalue problem (\ref{eq:eigenpb}) is linear.
Note that in the periodic case, we search for Bloch-Floquet eigenmodes so
the operator is parametrized by the real quasiperiodicity coefficient $\alpha$.\\
For bounded problems with lossless and reciprocal materials (with permittivity and permeability
tensors represented by Hermitian operators),
the operator $\mathcal{M}_{\tens{\xi}}$ is self-adjoint
so its eigenvalues are real, positive and discrete. For Hermitian open problems, the spectrum of
the associated operator is \emph{real}\footnote{Note that when dealing with passive lossy materials, this spectrum moves in the lower complex plane $\omega''<0$,
but if active materials are considered, the eigenfrequencies can be situated in the upper complex plane $\omega''>0$.} and composed of two parts \cite{hanson2002operator}:
\begin{itemize}
\item the discrete spectrum associated with \emph{proper eigenfunctions} known as
trapped modes (also called bounded or guided modes) exponentially decreasing at infinity,
particularly the ``ideal'' surface plasmon modes when the structure contains materials with $\varepsilon<0$,
\item the continuous spectrum associated with \emph{improper eigenfunctions} composed of propagative or evanescent radiation modes.
\end{itemize}
In addition, another type of solution can be defined and is very useful to characterize the diffractive properties of unbounded structures:
the so-called \emph{leaky modes}. These modes are an intrinsic feature of open waveguides.
The associated eigenfrequencies are complex solutions of the dispersion relation of the problem but are \emph{not}
eigenfrequencies of (\ref{eq:eigenpb}). A leaky mode represents the analytic continuation of a proper discrete mode
below its cutoff frequency \cite{hanson2002operator}.\\
PMLs have proven to be a very convenient tool to
compute leaky modes
in various configurations \cite{pmlCompel,Popovic2003,HEIN2004,Eliseev2005}. Indeed they mimic
the infinite space efficiently, provided their parameters are suitably chosen.
We may define a transformed operator with infinite PMLs, namely $\mathcal{M}_{\tens{\xi}^\mathrm{s}}$, with equivalent
material properties defined by Eq.~(\ref{equ_tranform}). The associated spectral problem is:
\begin{equation}
\mathcal{M}_{\tens{\xi}^\mathrm{s}}(v^\mathrm{s}_n):=-\div(\tens{\xi}^\mathrm{s}\,\bm{\mathrm{\nabla}} v^\mathrm{s}_n)=\Lambda^\mathrm{s}_n \,\chi^\mathrm{s} \, v^\mathrm{s}_n.
\label{eq:eigenpb_transformed}
\end{equation}
Figure~\ref{PML_leaky}
shows how the spectrum of the considered operator
is affected by applying a complex stretch in the non periodic case (See Appendix~\ref{appendix:loc_TCS} for more details).
The introduction of infinite PMLs rotates the continuous spectrum
in the complex plane (since the operator $\mathcal{M}_{\tens{\xi}^\mathrm{s}}$ involved in the problem is now a non self-adjoint extension of the
original self-adjoint operator $\mathcal{M}_{\tens{\xi}}$).
The effect is not only to turn the continuous spectrum
into complex values but it also unveils the leaky modes in the region swept by the rotation
of this essential spectrum \cite{thesebenjg}. It is important to note that leaky modes do \emph{not}
depend on the choice of a particular complex stretching: adding the infinite PMLs is only a way to
discover them. The angle of rotation of the continuous spectrum in $\mathbb{C}$ is the opposite of
the argument $\phi$ of the constant complex stretching coefficient $s_\eta$. By increasing this parameter we uncover more and more leaky modes, which now decay exponentially
at infinity in the PML regions, so that the associated norms become finite.\\
Finally, the PMLs can safely be truncated at a finite
distance, which results in an operator $\mathcal{M}_{\tens{\xi}^\mathrm{t}}$ having only a discrete spectrum, leading to the spectral problem:
\begin{equation}
\mathcal{M}_{\tens{\xi}^\mathrm{t}}(v^\mathrm{t}_n):=-\div(\tens{\xi}^\mathrm{t}\,\bm{\mathrm{\nabla}} v^\mathrm{t}_n)=\Lambda^\mathrm{t}_n \,\chi^\mathrm{t} \, v^\mathrm{t}_n.
\label{eq:eigenpb_transformed_trunc}
\end{equation}
This formulation in the form of an equivalent transformed closed problem allows the numerical computation with the FEM
of approximate leaky, guided and radiation
modes (the latter also termed PML modes or B\'erenger modes). This last set of modes is due to the discretization of the continuous
spectrum by finite PMLs \cite{Olyslager2004} with constant stretch and by the spatial discretization of the domain with a mesh
in the framework of the FEM. The discretization of the continuous
spectrum is finer when either the thickness of the PMLs
or the modulus $\sigma$ of the complex stretching coefficient $s_\eta$ increases.
The boundary conditions and the FEM setup are analogous to that described in section \ref{subsection:eigpb}.
Note that Neumann or Dirichlet boundary conditions applied on the outward boundaries of the PMLs result in different sets of approximate radiation modes.
Obviously, leaky modes do \emph{not} depend on all those PML-related parameters.\\
The final algebraic system can be written in a matrix form as a generalized eigenvalue problem $A\,v=\Lambda B\,v$.
Finding the eigenvalues closest to an arbitrary shift $\Lambda^0$ boils down to computing the largest
eigenvalues of the matrix $C=(A-\Lambda^0 B)^{-1}B$. For this purpose, the eigenvalue solver uses
the ARPACK FORTRAN library, adapted to large sparse matrices \cite{Lehoucq97arpackusers}. This code is based on a
variant of the Arnoldi algorithm called \textit{Implicitly Restarted Arnoldi Method} (IRAM).\\
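For readers wishing to reproduce this step with off-the-shelf tools, the same shift-and-invert strategy is exposed by SciPy's ARPACK wrapper. The following sketch is illustrative only: a random sparse matrix stands in for the FEM matrices $A$ and $B$, which in practice come from the assembly described above.
\begin{verbatim}
# Illustrative sketch: shift-and-invert Arnoldi for A v = Lambda B v,
# as wrapped by SciPy around ARPACK. A and B stand in for the sparse,
# complex FEM matrices.
import scipy.sparse as sp
import scipy.sparse.linalg as spl

n = 2000
A = sp.random(n, n, density=1e-3, format="csc")
A = (A + A.T + 4.0 * sp.identity(n)).astype(complex)
B = sp.identity(n, format="csc", dtype=complex)

shift = 2.0 + 0.0j  # Lambda^0: look for eigenvalues near this value
vals, vecs = spl.eigs(A, k=6, M=B, sigma=shift, which="LM")
print(vals)         # the 6 eigenvalues closest to the shift
\end{verbatim}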
In the sequel we will drop the exponent $\mathrm t$ for convenience, but one should bear in mind that
the effective problem we are dealing with is the complex stretched and bounded version (\ref{eq:eigenpb_transformed_trunc})
of the original problem (\ref{eq:eigenpb}) defined on the whole unbounded real Cartesian space.
\section{Quasimodal expansion method}\label{sec:mm}
\subsection{Inner product and adjoint eigenproblem}
For Hermitian problems, eigenvectors form a complete set of $L^2(\Omega)$
and every solution of the problem with sources can be expanded on this basis.
But in the general case, the problem may be non self-adjoint, and we lack the nice properties of
Hermitian systems.
Nevertheless, we describe here a procedure to obtain an expansion basis of the solution space.
For this we use the classical inner product of two functions $f$ and $g$ of $L^2(\Omega)$:
\begin{equation}
\ensuremath{\left\langle} f\ensuremath{\,\middle| \,} g\ensuremath{\right\rangle} :=\int_{\Omega} f(\boldsymbol r)\, \conj{g}(\boldsymbol r) \; \mathrm{d}\boldsymbol r.
\end{equation}
Unlike self-adjoint problems, $\ensuremath{\left\langle} \chi v_n \ensuremath{\,\middle| \,} v_m\ensuremath{\right\rangle} \neq\delta_{nm}$, in other
words the eigenmodes $v_n$ are not orthogonal with respect to this standard definition. This is the reason why we
consider an adjoint spectral problem with eigenvalues $\conj{\Lambda_n}=(\conj{\omega_n}/c)^2$
and eigenvectors $w_n$. The adjoint operator $\mathcal{M}^\dagger_{\tens{\xi}}$ is defined by
\begin{equation}
\ensuremath{\left\langle} \vphantom{ \mathcal{M}^\dagger_{\tens{\xi}}(w)}\mathcal{M}_{\tens{\xi}}(v) \ensuremath{\,\middle| \,} w \ensuremath{\right\rangle} =\ensuremath{\left\langle} v \ensuremath{\,\middle| \,} \mathcal{M}^\dagger_{\tens{\xi}}(w)\ensuremath{\right\rangle}
\end{equation}
with the \emph{same} boundary conditions as the direct spectral problem,
and is such that $\mathcal{M}^\dagger_{\tens{\xi}}=\mathcal{M}_{\tens{\xi}^\star}$ (See Appendix \ref{appendix:prop_adj} for the proof),
where $A^\star={\conj{A}}^\mathrm{T}$ is the conjugate transpose of matrix $A$.
The associated adjoint problem that we shall solve is (cf. Appendix \ref{appendix:prop_adj}):
\begin{equation}
\mathcal{M}^\dagger_{\tens{\xi}}(w_n)=\mathcal{M}_{\tens{\xi}^\star}(w_n)=-\div(\tens{\xi}^\star \,\bm{\mathrm{\nabla}} w_n)= \conj{\Lambda_n}\,\conj{\chi}\, w_n.
\label{eq:eigenpb_adjoint}
\end{equation}
We know from spectral theory that the eigenvectors $v_n$ are bi-orthogonal to their adjoint
counterparts $w_n$ \cite{hanson2002operator}:
\begin{equation}
\ensuremath{\left\langle} \chi v_n \ensuremath{\,\middle| \,} w_m\ensuremath{\right\rangle} =\int_{\Omega}\chi(\boldsymbol r)\, v_n(\boldsymbol r)\, \conj{w_m}(\boldsymbol r) \; \mathrm{d}\boldsymbol r=K_n\delta_{nm},
\label{eq:biortho}
\end{equation}
where the complex-valued normalization coefficient $K_n$ is defined as
\begin{equation}
K_n:= \ensuremath{\left\langle} \chi v_n \ensuremath{\,\middle| \,} w_n\ensuremath{\right\rangle} =\int_{\Omega}\chi(\boldsymbol r)\, v_n(\boldsymbol r)\, \conj{w_n}(\boldsymbol r) \; \mathrm{d}\boldsymbol r.
\end{equation}
\subsection{Quasimodal expansion of the diffracted field}
Relation (\ref{eq:biortho}) provides a complete bi-orthogonal set to expand every field solution
of Eq.~(\ref{equ:u2d}) propagating in the open waveguide as:
\begin{equation}
u_2^d(\boldsymbol r,\omega)=\sum_{n=1}^{+\infty}P_n(\omega)\, v_n(\boldsymbol r) + \int_{\Gamma_c} P_\nu(\omega)\, v_\nu(\boldsymbol r) \; \mathrm{d}\nu,
\label{decompmod_integrale}
\end{equation}
where $\Gamma_c$ is the continuous spectrum (a curve, with possibly a denumerable set of branches in the complex plane).
The discrete coefficients $P_n$ and the continuous density $P_\nu$ are given by similar expressions:
\begin{equation}
P_j(\omega)=\frac{1}{K_j}\ensuremath{\left\langle} \chi u_2^d \ensuremath{\,\middle| \,} w_j\ensuremath{\right\rangle} =\frac{J_j(\omega)}{{\omega}^2-\omega_j^2},\quad j=\{n,\nu\}
\label{equPn}
\end{equation}
with
\begin{equation}
J_j(\omega)=\frac{c^2}{K_j}\ensuremath{\left\langle} \mathcal{S}_1 \ensuremath{\,\middle| \,} w_j \ensuremath{\right\rangle} =\frac{c^2}{K_j}\int_{\Omega_{g'}}\mathcal{S}_1(\boldsymbol r,\omega)\, \conj{w_j}(\boldsymbol r) \;\mathrm{d}\boldsymbol r,
\label{equJn}
\end{equation}
where the integration is \emph{only performed on the inhomogeneities} $\Omega_{g'}$ since the
source term $\mathcal{S}_1$ is zero elsewhere. Note that the last integral has to be taken in the distributional meaning
which leads to a surface term on $\partial\Omega_{g'}$ because of the spatial derivatives in $\mathcal{S}_1$.\\
We are thus able to know how a given mode is excited when changing the incident field.
This modal expansion can be approximated by a discrete sum since the spectrum of the final
operator we solve for
involves only discrete eigenfrequencies, and in practice only a finite number $M$ of modes is
retained in the expansion, so that we can write:
\begin{equation}
u_2^d(\boldsymbol r,\omega)\simeq\sum_{m=1}^{M}P_m(\omega)\, v_m(\boldsymbol r).
\end{equation}
This leads to a reduced modal representation of the field which is well adapted when studying
the resonant properties of the open structure, as illustrated in the sequel.\\
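To make the procedure concrete, the following sketch (with hypothetical array names: the sampled modes, adjoint modes, source term and quadrature weights would come from the FEM solution) assembles the coupling coefficients of Eqs.~(\ref{equPn}) and (\ref{equJn}) and the truncated expansion.
\begin{verbatim}
# Illustrative sketch (hypothetical names): reduced-order field from M
# precomputed eigenpairs. v, w: (M, N) modes/adjoint modes sampled on N
# mesh points; S1: source term; chi: material function; wq: quadrature
# weights.
import numpy as np

def reduced_field(omega, omega_n, v, w, chi, S1, wq, c=1.0):
    # K_n = <chi v_n | w_n>, the bi-orthogonality normalization
    K = np.sum(wq * chi * v * np.conj(w), axis=1)
    # J_n = c^2/K_n <S_1 | w_n>  (the distributional surface term on the
    # scatterer boundary is ignored in this naive sampling)
    J = c**2 * np.sum(wq * S1 * np.conj(w), axis=1) / K
    P = J / (omega**2 - omega_n**2)  # coupling coefficients
    return P, P @ v                  # truncated modal expansion of u_2^d
\end{verbatim}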
Equation (\ref{equPn}) shows clearly that the complex eigenfrequency $\omega_n$ is a simple pole of the coupling coefficient $P_n$ and thus leads to a singularity of the diffracted
field. But in practice, the frequency of the incident plane wave is real, and the resonant behavior may happen in the vicinity of $\omega'_n=\mathfrak{Re}(\omega_n)$.
Consequently, the value of $P_n$ is finite, and the linewidth of the resonance is given by $\omega''_n=\mathfrak{Im}(\omega_n)$. This is the main strength of the QMEM:
it unambiguously reveals not only that a mode is excited but also the intensity of this excitation. According to Eq.~(\ref{decompmod_integrale}),
one can see that the diffracted field for a given incident frequency is due to the concomitant contributions of an infinity of eigenmodes.
However, for a given incident field, there is often a mode that plays a leading
role in the decomposition. In other words, its coupling coefficient is much larger in modulus than those associated with other modes, and so
a resonance of the diffracted field may be attributed mainly to the excitation of this mode.
\subsection{Green function and Local Density Of States}
We have focused our attention on a plane wave source, but the method is
also applicable to other types of excitation. Indeed, if we assume a point source
located at $\boldsymbol{r'}$, namely $\mathcal{S}_1(\boldsymbol{r})=\delta(\boldsymbol{r}-\boldsymbol{r'})$, we have from Eq.~(\ref{equJn})
$$J_n=\frac{c^2}{K_n}\conj{w_n}(\boldsymbol{r'}),$$
so we obtain immediately the Green function expansion in terms of quasimodes and adjoint quasimodes as:
\begin{equation}
g(\omega,\boldsymbol{r},\boldsymbol{r'})=\sum_{m}\frac{c^2}{K_m}\frac{v_m(\boldsymbol{r})\,\conj{w_m}(\boldsymbol{r'})}{\omega^2-\omega_m^2}.
\label{equGF}
\end{equation}
The Local Density Of States (LDOS) defined as
$$l(\omega,\boldsymbol{r})=-\frac{2\,\omega}{\pi\,c^2}\,\mathfrak{Im} \left\lbrace g(\omega,\boldsymbol{r},\boldsymbol{r})\right\rbrace$$ can thus be expanded as:
\begin{equation}
l(\omega,\boldsymbol{r})=-\frac{2\,\omega}{\pi}\sum_{m} \mathfrak{Im}\left\lbrace \frac{v_m(\boldsymbol{r})\,\conj{w_m}(\boldsymbol{r})}{K_m(\omega^2-\omega_m^2)}\right\rbrace.
\label{equLDOS}
\end{equation}
The LDOS is thus related to the \emph{local} values of the eigenvectors and of the conjugates of the adjoint eigenvectors.
Note that the QMEM is in this case highly computationally efficient, since it only requires
solving two spectral problems with the FEM to obtain the LDOS in a given region of space, without the need
to compute numerically the integrals in Eq.~(\ref{equJn}). Once the eigenmodes of the system and their adjoints
have been computed, the calculation of the LDOS at any point in the computational domain and at any frequency is trivial.
This has to be compared with the resolution of a large number of direct FEM problems where
the source point position and the frequency vary.
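As an illustration of this post-processing step, a sketch of the LDOS evaluation from stored eigenpairs (hypothetical names again; everything reduces to cheap array operations):
\begin{verbatim}
# Illustrative sketch: LDOS map from stored eigenpairs via the expansion
# above. v, w: (M, N) modes/adjoints sampled on N grid points; K: (M,)
# normalization coefficients; omega_n: (M,) complex eigenfrequencies.
import numpy as np

def ldos(omega, omega_n, v, w, K):
    terms = v * np.conj(w) / (K * (omega**2 - omega_n**2))[:, None]
    return -(2.0 * omega / np.pi) * np.imag(np.sum(terms, axis=0))
\end{verbatim}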
\section{Numerical examples}\label{sec:exnum}
\subsection{Triangular rod in vacuum}\label{ssec:exnumtri}
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure3-eps-converted-to.pdf}
\caption{Loci of the eigenfrequencies in the complex $\omega$-plane. Theoretical
continuous spectrum (blue dashed line) is well approximated by discrete eigenvalues corresponding
to PML modes (blue circles). The leaky modes unveiled by shifting the continuous spectrum
in the complex plane have frequencies represented by red squares.\label{plan_C_triangle}}
\end{figure}
The first example is the case of a dielectric rod ($\varepsilon_{g'}=13-0.2i$ and $\mu_{g'}=1$) of infinite extension
along the $z$-axis embedded in vacuum (See Fig.~\ref{modes_triangle}(a)). Its cross section is a triangle defined
by the three apexes A $(-1,3)$, B $(-1,-2)$ and C $(3,-1)$. We chose the inner radius of the
PML to be $R^{\rm{in}}=1.01\cdot\rm{max}(OA,OB,OC)$, \textit{i.e.} we put the PML as
close as possible to the diffractive object to avoid numerical pollution of the results, as reported by previous studies \cite{KP09}. The depth
of the PML annulus is $R^{\rm{out}}-R^{\rm{in}}=\SI{15}{\micro\meter}$, and the stretching coefficient
is $s_r=1+i$ (cf. Eq. (\ref{ys})). We solve the eigenproblem in TE polarization, and the positions of the 300 eigenfrequencies
with lowest real parts in the complex plane are shown in Fig.~\ref{plan_C_triangle}. The original continuous
spectrum (for the problem without PML) is $\mathbb{R}^+$. It is rotated by the angle
$-\mathrm{arg}(s_r)=-\pi/4$ from the real axis when using the PML (blue dashed curve).
The truncation of PML at a finite distance results in a discrete approximation of this continuous
spectrum (blue circles).
\begin{figure*}[htbp!]
\includegraphics[width=0.7\linewidth]{figure4-eps-converted-to.pdf}
\caption{Geometry and mesh of the structure (a) and field maps $\mathfrak{Re}(E_z)$ for the eigenmodes 1 (b), 2 (c) and 3 (d).}
\label{modes_triangle}
\end{figure*}
The field of the associated quasi-radiation
modes is concentrated mainly in the PML region, as can be seen from the field map
of mode 3 plotted in Fig.~\ref{modes_triangle}(d). Eigenvalues corresponding to leaky modes are
situated closest to the real axis (red squares), and the field profiles of the associated modes are
confined in the region of physical interest $r<R^{\rm{in}}$ (See Figs.~\ref{modes_triangle}(b) and \ref{modes_triangle}(c) for leaky modes 1 and 2 respectively).\\
We focus on two leaky modes labeled $1$ and $2$, whose associated eigenfrequencies are
respectively $\omega_1= (\num{1.77e13} - \num{6.36e11}\,i)\,\si{\radian\per\second}$ (resonant wavelength
$\lambda_1=\SI{10.61}{\micro\meter}$) and $\omega_2= (\num{1.90e13} - \num{1.01e12}\,i)\,\si{\radian\per\second}$
($\lambda_2=\SI{9.89}{\micro\meter}$).
In order to understand how these eigenmodes are excited, we compute the modal coefficients $P_n$ for varying incident
wavelength $\lambda$ and angle $\theta_0$. The maps of the modulus of $P_n$ ($n=1,2$) for $\lambda$ between 9
and $\SI{11}{\micro\meter}$ and $\theta_0$ between $0$ and $\SI{360}{\degree}$ are plotted in Fig.~\ref{Pn_triangle}. The coupling
coefficients $P_n$ behave as $1/(\omega^2-\omega_n^2)$, which yields a resonant behavior when $\omega$ is near
$\mathfrak{Re}(\omega_n)$ (cf. the horizontal dashed lines in Fig.~\ref{Pn_triangle}). We observe that the values of $\abs{P_n}$ strongly depend on
$\theta_0$, indicating that the considered mode will be more or less excited depending on the incidence. \\
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure5-eps-converted-to.pdf}%
\caption{Coupling coefficients $P_n$ as a function of $\lambda$ and $\theta_0$ for
the modes $1$ (top) and $2$ (bottom). Horizontal dashed lines indicate the resonant
wavelength.\label{Pn_triangle}}
\end{figure}
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure6-eps-converted-to.pdf}%
\caption{Absorption cross section as a function of $\lambda$ for different incident
angles $\theta_0$.\label{ACS_triangle}}
\end{figure}
To check our predictions, we
compute the absorption cross section by the method presented in Section \ref{part:scatt_pb} at
different incidences. In the first case, where $\theta_0=352$\si{\degree}, the value of $\abs{P_1(\lambda_1)}$
is high whereas the value of $\abs{P_2(\lambda_2)}$ is much lower, which means that the mode 1 will be
principally excited. This is what can be seen in Fig.~\ref{ACS_triangle} (blue curve) where the
resonant peak of the absorption cross section curve occurs near $\lambda_1$, whereas no significant
resonant behavior is found near $\lambda_2$. Similar conclusions can be made
for the second case with $\theta_0=306$\si{\degree}~by interchanging the roles of modes
1 and 2. Note that the resonant peak in the second case is broader since $\abs{\mathfrak{Im}(\omega_2)}>\abs{\mathfrak{Im}(\omega_1)}$,
in other words the mode 2 leaks more than the mode 1. In addition, the value of the absorption cross
section at the resonance is correlated to the value of the coupling coefficient $P_n$ for the corresponding
excited eigenmode: the peak value in the first case is
greater because the mode is more strongly excited compared to the second case. Another
interesting example is when the two modes have comparable weight in the modal
expansion. This is the case for $\theta_0=143$\si{\degree}, so that both modes are excited.
In our case, the two resonant peaks in the absorption cross section curve merge into a
single broad one (See the red curve in Fig.~\ref{ACS_triangle}). Finally for $\theta_0=55$\si{\degree}, both
modes show weak coupling coefficients, which results in a relatively flat behavior of the absorption
cross section (cyan curve in Fig.~\ref{ACS_triangle}). In fact another mode dominates in this
case with resonant wavelength slightly lower than $\SI{9}{\micro\meter}$.\\
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure7-eps-converted-to.pdf}%
\caption{Absorption cross section curves computed with QMEM as a function of $\lambda$ for $\theta_0=143$\si{\degree}.
The thick red curve corresponds to the reference values computed by solving the diffraction problem.\label{rec_triangle}}
\end{figure}
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure8-eps-converted-to.pdf}%
\caption{Electric field at $\lambda=\SI{10.2}{\micro\meter}$ and $\theta_0=\SI{143}{\degree}$ calculated
by solving the diffraction problem (a) and by the QMEM (b) with 50 modes.}
\label{champs_reconstruits_triangle}
\end{figure}
Another powerful feature of our approach is that we are able to reconstruct the
field with a few eigenmodes.
From this reduced modal expansion we calculate the absorption cross section for
$\theta_0=143$\si{\degree}. The $M$ modes used are those with the highest mean value of
the modal coefficient over the whole wavelength range. Results are reported in Fig.~\ref{rec_triangle} and
compared with the reference values obtained by solving Eq.~(\ref{equ:u2d}). For $M=5$, we have
already captured the evolution of the absorption cross section with frequency. The agreement is better
for $M=10$, except at short wavelengths. Retaining $M=50$ modes in the modal expansion
results in an accurate approximation of the absorption cross section. We plot in
Fig.~\ref{champs_reconstruits_triangle} the field maps obtained by solving the
diffraction problem and the modal method approximation with 50 modes,
at $\lambda=\SI{10.2}{\micro\meter}$ and $\theta_0=143$\si{\degree}. As can be seen,
the two methods are in good agreement, with only local discrepancies occurring at
the air/PML interface and within the PML. Note that this reduced order model is
computationally efficient when a large range of incident parameters is investigated.
Indeed, only one FEM problem has to be solved (because in that case the adjoint modes $w_n$
are simply the conjugates of the eigenmodes $v_n$, see Appendix \ref{appendix:prop_adj} for the proof);
the rest of the calculation is only numerical integration of smooth functions and algebraic operations.\\
\begin{figure}[htbp!]
\includegraphics[width=0.9\columnwidth]{figure9-eps-converted-to.pdf}%
\caption{Local Density Of States at $\lambda=\SI{10.2}{\micro\meter}$ calculated
by solving the diffraction problem (a) and by the QMEM (b) with $M=$500 modes.}
\label{LDOS_triangle}
\end{figure}
Finally, we computed a map of the LDOS at $\lambda=\SI{10.2}{\micro\meter}$ on a regular grid with $50\times 50$ points
in the spatial window $[-2,4]\,\si{\micro\meter}\times[-3,4]\,\si{\micro\meter}$ around the dielectric rod.
The results of the QMEM using Eq. (\ref{equLDOS}) with $M=500$ (see Fig.~\ref{LDOS_triangle}(b)), which involves the solution
of a single FEM spectral problem, are in excellent agreement with the
results obtained from 2500 direct FEM problems where the position of the source varies over the nodes of the $50\times 50$ grid (see Fig.~\ref{LDOS_triangle}(a)).
In that example, the spectral problem consisting of \num{11753} degrees of freedom was solved in \SI{17}{min} on a laptop with
two \SI{2.8}{GHz} processors and \SI{8}{GB} of RAM. On the one hand, the computation of the modes is the limiting step, but afterwards the
LDOS is calculated in approximately one second. On the other hand, the computation of the LDOS on the $50\times 50$ grid with the direct problem takes more than one hour.
Moreover the LDOS can be calculated at other wavelengths without any need for additional time-consuming FEM simulations:
for 50 wavelengths the direct problem would take more than two days whereas it takes less than one minute with the QMEM (once the modes have been computed).
This example shows clearly the numerical efficiency of the QMEM compared to direct simulations.
\subsection{Lamellar diffraction grating}
We focus in this section on the periodic case. Let us consider a mono-periodic diffraction grating (See Fig.~\ref{fig:schema_resfentes_MM}) consisting
of slits of width $w$ engraved in a germanium layer of permittivity $\varepsilon_{g'}=16$ and of thickness $h^g=\SI{3}{\micro\meter}$.
The grating is deposited on a ZnS substrate of permittivity $\varepsilon^-=4.84$ and the superstrate is air ($\varepsilon^+=1$).
The computational cell is limited to a strip of width $d$ with quasiperiodicity conditions on the lateral boundaries of coefficient $\alpha$.
The substrate and superstrate are truncated by PML and their thicknesses are $h^\pm=\lambda^{\rm ref}/10$, with $\lambda^{\rm ref}=\SI{14}{\micro\meter}$.
Top and bottom are PMLs terminated by homogeneous Neumann boundary conditions, with stretching coefficients $\zeta^+=\zeta^-=\mathrm{e}^{i\frac{\pi}{4}}$. \\
We computed the first 801 eigenfrequencies (with lowest real parts) of this grating for $\alpha=0$ and $\SI{e5}{\radian\per\meter}$, as well as their associated
adjoints. The positions of the eigenfrequencies in the complex plane, as well as the theoretical curves of the continuous spectrum for $\alpha=\SI{0}{\radian\per\meter}$,
are plotted in Fig.~\ref{planC_grating}.
The deviation of the eigenfrequencies of the approximate radiation modes is due to the large grating-PML distance required to obtain an accurate result for the diffraction efficiencies,
as we will see in the sequel. We focus on six leaky modes whose resonant wavelengths lie in the far-infrared spectral region $8-14\,\si{\micro\meter}$, corresponding to a transparency window of
the atmosphere (See the inset in Fig.~\ref{planC_grating}). The field maps of these modes for $\alpha=\SI{0}{\radian\per\meter}$ are plotted in Fig.~\ref{planC_grating}.
The corresponding resonant wavelengths $\lambda_n=2\pi c/\omega'_n$ and quality factors $Q_n=\omega'_n/(2\abs{\omega''_n})$ are reported in Table \ref{table_modes}. \\
The modes labeled $1$ and $6$ have in both cases low $Q$ factors, which means that the associated resonances are broad.
This is confirmed by the observation of the diffraction efficiencies (Figs.~\ref{b}(a) and \ref{b}(b)), where wide resonant peaks are found around $\lambda_1$ and $\lambda_6$.
For both values of $\alpha$, the resonant parameters of these low-$Q$ modes are almost unchanged. \\
\begin{figure}[htbp!]
\includegraphics[width=0.97\linewidth]{figure10-eps-converted-to.pdf}
\caption{Setup of the problem for the lamellar grating.
(a): sketch of the studied diffraction grating. Parameters are
$w=\SI{0.1}{\micro\meter}$, $h^g=\SI{3}{\micro\meter}$, $\varepsilon_{g'}=16$, $\varepsilon^-=4.84$, $\varepsilon^+=1$, $d=\SI{3}{\micro\meter}$,
all materials are non magnetic ($\mu=1$.)
(b): computational cell for the FEM calculations. Top and bottom PML have stretching coefficient
$\zeta^+=\zeta^-=\mathrm{e}^{i\frac{\pi}{4}}$ and their thicknesses are $\widehat{h}^\pm=\lambda^{\rm ref}/\sqrt{\varepsilon^\pm}$.
The thicknesses of the substrate and superstrate are $h^\pm=\lambda^{\rm ref}/10$, with $\lambda^{\rm ref}=\SI{14}{\micro\meter}$.
We apply quasiperiodicity conditions on the lateral boundaries with $\alpha=\SI{0}{\radian\per\meter}$ and Neumann homogeneous boundary conditions
on the outward boundaries of the PML. Maximum mesh element size is set to be $\lambda^{\rm mesh}/(20\sqrt{\abs{\mathfrak{Re}(\varepsilon)}})$,
where $\lambda^{\rm mesh}=\SI{11}{\micro\meter}$.\label{fig:schema_resfentes_MM}}
\end{figure}
\begin{figure}[htbp!]
\includegraphics[width=0.97\linewidth]{figure11-eps-converted-to.pdf}
\caption{Spectrum of the problem and leaky modes for $\alpha=\SI{0}{\radian\per\meter}$. Top: eigenfrequencies in the complex plane (blue crosses)
and theoretical curves of the continuous spectrum (dashed red and black curves for the substrate and the superstrate respectively).
The inset shows the position of the eigenvalues corresponding to the six leaky modes studied.
Bottom: real part of $H_z$ for these six leaky modes.}
\label{planC_grating}
\end{figure}
\begin{table}[htbp!]
\centering
\begin{tabularx}{\columnwidth}{>{\centering\arraybackslash}X>{\centering\arraybackslash}X>{\centering\arraybackslash}X>{\centering\arraybackslash}X>{\centering\arraybackslash}X}
& \multicolumn{2}{c}{$\alpha=\SI{0}{\radian\per\meter}$} & \multicolumn{2}{c}{$\alpha=\SI{e5}{\radian\per\meter}$} \\ \hline\hline
{$n$} & $\lambda_n$ (\si{\micro\meter}) & $Q_n$ & $\lambda_n$ (\si{\micro\meter}) & $Q_n$\\\hline
{1} & \num{11.06} & \num{3.56} & \num{11.05} & \num{3.55}\\
{2} & \num{10.59} & \num{7.01e9} & \num{10.88} & \num{2.20e2}\\
{3} & \num{10.28} & \num{8.51e1} & \num{10.02} & \num{1.35e2}\\
{4} & \num{8.65} & \num{3.25e10} & \num{8.71} & \num{4.99e2}\\
{5} & \num{7.85} & \num{5.93e1} & \num{7.81} & \num{6.55e1}\\
{6} & \num{7.67} & \num{5.69} & \num{7.66} & \num{5.71}\\
\hline\hline
\end{tabularx}
\caption{Resonant wavelengths $\lambda_n$ and quality factors $Q_n$ of the modes for $\alpha=0$ and $\SI{e5}{\radian\per\meter}$.\label{table_modes}}
\end{table}
\begin{figure*}[htbp!]
\includegraphics[width=0.97\linewidth]{figure12-eps-converted-to.pdf}
\caption{Coupling coefficients $P_n$ as a function of the incident wavelength $\lambda$.
The dashed vertical lines correspond to the position of the resonant wavelength $\lambda_n$ associated with each leaky mode.}
\label{Pna}
\end{figure*}
The coupling coefficients $P_n$ for the six leaky modes as a function of $\lambda$ were computed and are reported in Fig.~\ref{Pna}. One clearly sees a
resonant peak of the modulus of $P_n$ (See Figs.~\ref{Pna}(a) and \ref{Pna}(b)) and a phase jump (See Figs.~\ref{Pna}(c) and \ref{Pna}(d)) around the resonant wavelength $\lambda_n$. As expected, the variations are sharper when
the imaginary part of the eigenvalue is smaller. These curves also show the relative contribution of the eigenmodes to the overall diffraction process.
The two modes labeled $3$ and $5$, with high quality factors, produce sharp resonances in the transmission and reflection spectra (See Figs.~\ref{b}(a) and \ref{b}(b)).
The high value of the modulus of their coupling coefficient $P_n$ clearly reveals their role in these resonances (See red and magenta curves in Figs.~\ref{Pna}(a) and \ref{Pna}(b)).
On the contrary, modes $2$ and $4$, which have a huge $Q$ factor for $\alpha=\SI{0}{\radian\per\meter}$ (which means they are ``quasi normal'' modes), are very weakly excited in comparison to other modes
over the whole spectral band except at the corresponding resonant wavelength (See cyan and green curves in Fig.~\ref{Pna}(a), where the modulus of the coupling coefficients is very weak).
These findings explain why we do not observe significant resonances in the diffraction efficiencies around $\lambda_2$ and $\lambda_4$ (See Fig.~\ref{b}(a)): the associated leaky modes are
not sufficiently excited. Actually, since these modes have extremely low leakage, they should produce very narrow resonances. We have computed the diffraction efficiencies
around $\lambda_2$ and $\lambda_4$ with a finer wavelength step and indeed found extremely sharp resonances, but with very weak
variations of the reflection and transmission coefficients (of the order of $\num{e-6}$).
For $\alpha=\SI{e5}{\radian\per\meter}$, the resonant wavelengths of these two modes slightly increase compared to the case $\alpha=\SI{0}{\radian\per\meter}$, while their $Q$ factors
dramatically collapse (cf. Table \ref{table_modes}). The coupling coefficients are in this case of the same order of magnitude as those of the other modes (See cyan and green curves in Fig.~\ref{Pna}(b)),
implying sharp scattering resonances in the reflection and transmission spectra (See Fig.~\ref{b}(b)) around $\lambda_2$ and $\lambda_4$. One can observe another sharp resonance at a
wavelength slightly greater than $\SI{7}{\micro\meter}$, which is not studied here.\\
\begin{figure*}[htbp!]
\includegraphics[width=0.97\linewidth]{figure13-eps-converted-to.pdf}
\caption{Comparison between direct problem and QMEM.
(a) and (b): reflection and transmission coefficients in the zeroth order $R_0$ and $T_0$.
(c) and (d): relative integrated error $\mathfrak{E}^{\mathrm r}_{\mathrm {int}}$, absolute errors on transmission $\mathfrak{E}^{\mathrm a}_{T}$
and reflection $\mathfrak{E}^{\mathrm a}_{R}$.}
\label{b}
\end{figure*}
The particular example presented here illustrates
the potential complexity of the diffractive process. There are, for example, two close resonances around $\SI{7.8}{\micro\meter}$ that give rise to a hybrid resonance
of the diffraction efficiencies, due principally to a mixture of mode $6$ (with a low $Q$ factor, yielding a broad resonance) and mode $5$ (with a high $Q$ factor, yielding a sharp one).
The computation of the complex eigenvalues indicates the presence of these modes and their associated resonant wavelength and linewidth,
but the QMEM allows us to go further by tracking the relative weight of these modes in the scattering process.\\
In order to assess the precision of our method, we have computed the absolute errors on the efficiencies calculated by
solving the diffraction problem (DP) and by the QMEM:
\begin{equation*}
\mathfrak{E}^{\mathrm a}_{T}=T_0^{\rm DP}-T_0^{\rm QMEM}\quad\text{for transmission,}
\end{equation*}
and
\begin{equation*}
\mathfrak{E}^{\mathrm a}_{R}=R_0^{\rm DP}-R_0^{\rm QMEM}\quad\text{for reflection.}
\end{equation*}
We also calculated the integrated relative error on the computational cell $\Omega$ defined as:
\begin{eqnarray*}
\mathfrak{E}^{\mathrm r}_{\mathrm {int}} &&=
\frac{\ensuremath{\left\langle} u_2^{d,\rm DP}-u_2^{d,\rm QMEM} \ensuremath{\,\middle| \,} u_2^{d,\rm DP}-u_2^{d,\rm QMEM}\ensuremath{\right\rangle} }{\ensuremath{\left\langle} u_2^{d,\rm DP} \ensuremath{\,\middle| \,} u_2^{d,\rm DP}\ensuremath{\right\rangle} }\\
&&=\frac{ \int_\Omega \left\lvert u_2^{d,\rm DP}(\boldsymbol r)-u_2^{d,\rm QMEM}(\boldsymbol r)\right\rvert^2 \mathrm{d}\boldsymbol r}{ \int_\Omega \left\lvert u_2^{d,\rm DP}(\boldsymbol r)\right\rvert^2\mathrm{d}\boldsymbol r}.
\end{eqnarray*}
These errors are plotted as a function of $\lambda$ in Figs.~\ref{b}(c) and \ref{b}(d). One can see that the integrated relative error remains
below $\num{e-5}$, and that the absolute errors on the diffraction efficiencies are smaller in absolute value than $\num{5e-4}$, which demonstrates the accuracy of
the QMEM. The main drawback is that a sufficiently large number of modes (here $801$) must be taken into account to reconstruct the field, and hence the Fresnel coefficients, correctly.
In contrast with the example studied in section~\ref{ssec:exnumtri}, where only $50$ modes reproduce the absorption cross section well,
here the field must be reconstructed very accurately in the substrate and superstrate to
obtain a satisfactory accuracy on the transmission and reflection, which requires a large number of approximated radiation modes (associated with the continuous spectrum).
Conversely, since the absorption takes place inside the diffractive object, a smaller number of leaky modes is sufficient to obtain a good approximation of the field inside the scatterer.\\
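For concreteness, the error metrics defined above can be evaluated numerically as in the following minimal Python/NumPy sketch, assuming the diffracted fields have been sampled on a uniform grid over $\Omega$ with cell area \texttt{dA}; the variable names are hypothetical and do not refer to our actual implementation.
\begin{verbatim}
import numpy as np

# Absolute errors on the efficiencies: signed differences between the
# direct-problem (DP) and QMEM values of the Fresnel coefficients.
def absolute_errors(T0_dp, T0_qmem, R0_dp, R0_qmem):
    return T0_dp - T0_qmem, R0_dp - R0_qmem

# Integrated relative error over the computational cell Omega: squared
# L2 norm of the field difference, normalized by that of the DP field.
def integrated_relative_error(u2_dp, u2_qmem, dA):
    num = np.sum(np.abs(u2_dp - u2_qmem) ** 2) * dA
    den = np.sum(np.abs(u2_dp) ** 2) * dA
    return num / den
\end{verbatim}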
\section{Conclusion}
The quasimodal expansion method (QMEM) has been implemented and validated in planar, possibly periodic, open electromagnetic systems
with arbitrary geometries. The determination of the eigenmodes and eigenvalues of such structures, based on the treatment of an
equivalent closed problem with finite PMLs by the FEM, has been presented. Once the spectrum of Maxwell's operator has been computed,
the solution of the problem with arbitrary sources can be expressed as a linear combination of eigenstates, and the
expansion coefficients can be calculated with the help of the adjoint eigenvectors. The method has been illustrated on
numerical examples, showing both its capacity to perform a precise modal analysis and its accuracy. The first example, a triangular rod,
establishes the conditions for the excitation of a given mode by a plane wave by studying the coupling coefficients as a function of angle and wavelength.
A reduced-order model with a few modes is shown to approximate the absorption cross section well. The computation of the LDOS on a 2D spatial grid around the nanoparticle at
an arbitrary wavelength is straightforward and computationally very efficient once the eigenmodes and eigenvectors have been calculated. The second numerical example, a lamellar
diffraction grating, illustrates the ability of the method to compute the eigenmodes of periodic media. The richness of the transmission and reflection spectra
with coupled resonances is fully explained by the study of the modal expansion coefficients. The precision of the method is demonstrated by comparison with a
diffraction problem solved by the FEM. The extension of the QMEM to three-dimensional structures, including bi-periodic gratings, will be reported in future work.
\section{Introduction}
\begin{figure}[b!]
\begin{tabular}{l}
\begin{subfigure}[!]{2.42in}
\includegraphics[width=2.3in]{figures/MC-CNN_input_overlay_quart_crop.jpg}
\caption{Input (MAE = $6.00$, RMSE = $38.8$) \label{fig:middleburyA}}
\end{subfigure}
\begin{subfigure}[!]{2.42in}
\includegraphics[width=2.3in]{figures/MC-CNN_output_overlay_quart_crop.jpg}
\caption{Output (MAE = $3.02$, RMSE = $17.9$) \label{fig:middleburyB}}
\end{subfigure}
\\ \\
\begin{subfigure}[!]{2.42in}
\includegraphics[width=2.3in]{figures/W_init_quart_crop.jpeg}
\caption{Input Confidence \label{fig:middlebury_conf}}
\end{subfigure}
\begin{subfigure}[!]{2.42in}
\includegraphics[width=2.3in]{figures/im_quart_crop.jpeg}
\caption{Input Reference \label{fig:middlebury_ref}}
\end{subfigure}
\end{tabular}
\caption{
The bilateral solver can be used to improve depth maps.
A depth map (\subref{fig:middleburyA}) from a state-of-the-art stereo method \cite{Zbontar2015} is processed with our robust bilateral solver using a reference RGB image (\subref{fig:middlebury_ref}).
Our output (\subref{fig:middleburyB}) is smooth with respect to the reference image, resulting in a $50\%$ reduction in error.
\label{fig:middlebury}
}
\end{figure}
Images of the natural world exhibit a useful prior -- many scene properties (depth, color, object category, etc.) are correlated within smooth regions of an image, while differing across discontinuities in the image.
Edge-aware smoothing techniques exploit this relationship to propagate signals of interest within, but not across edges present in an image.
Traditional approaches to edge-aware smoothing apply an image-dependent filter to a signal of interest.
Examples of this include joint bilateral filtering \cite{Smith1997,Tomasi1998} and upsampling \cite{Kopf2007}, adaptive manifolds \cite{GastalOliveira2012AdaptiveManifolds}, the domain transform \cite{GastalOliveira2011DomainTransform}, the guided filter \cite{He2010,He2015}, MST-based filtering \cite{Yang2015}, and weighted median filtering \cite{Ma2013,Zhang2014}.
These techniques are flexible and computationally efficient, but often insufficient for solving more challenging computer vision tasks.
Difficult tasks often necessitate complex iterative inference or optimization procedures that encourage smoothness while maintaining fidelity with respect to some observation.
Optimization algorithms of this nature have been used in global stereo \cite{Scharstein2002}, depth superresolution \cite{ferstl2013b,Kiechle2013,Kwon2015,Li2013,Lu2015,Park2011}, colorization \cite{Levin2004}, and semantic segmentation \cite{deeplab,KrahenbuhlK11,fcn,crfasrnn}.
These approaches are tailored to their specific task, and are generally computationally expensive.
In this work we present an optimization algorithm that is $10$-$1000\times$ faster than existing domain-specific approaches with comparable accuracy, and produces higher-quality output than lightweight filtering techniques with comparable runtimes.
Our algorithm is based on the work of Barron \emph{et al}. \cite{Barron2015A}, who presented the idea of using fast bilateral filtering techniques to solve optimization problems in ``bilateral-space''.
This allows for some optimization problems with bilateral affinity terms to be solved quickly, and also guarantees that the solutions to those problems are ``bilateral-smooth'' --- smooth within objects, but not smooth across edges.
In this paper we present a new form of bilateral-space optimization which we call \emph{the bilateral solver}, which efficiently solves a regularized least-squares optimization problem to produce an output that is bilateral-smooth and close to the input.
This approach has a number of benefits:
{\bf General} The bilateral solver is a single intuitive abstraction that can be applied to many different problems, while matching or beating the specialized state-of-the-art algorithms for each of these problems.
It can be generalized to a variety of loss functions using standard techniques from M-estimation \cite{Hampel86a}.
{\bf Differentiable} Unlike other approaches for edge-aware smoothness which require a complicated and expensive ``unrolling'' to perform backpropagation \cite{crfasrnn}, the backward pass through our solver is as simple and fast as the forward pass, allowing it to be easily incorporated into deep learning architectures.
{\bf Fast} The bilateral solver is expressible as a linear least-squares optimization problem, unlike the non-linear optimization problem used in \cite{Barron2015A}.
This enables a number of optimization improvements including a hierarchical preconditioner and initialization technique that hasten convergence, as well as efficient methods for solving multiple problems at once.
\section{Problem Formulation}
We begin by presenting the objective and optimization techniques that make up our bilateral solver.
Let us assume that we have some per-pixel input quantities $\mathbf{t}$ (the ``target'' value, see Figure~\ref{fig:middleburyA}) and some per-pixel confidence of those quantities $\mathbf{c}$ (Figure \ref{fig:middlebury_conf}), both represented as vectorized images.
Let us also assume that we have some ``reference'' image (Figure~\ref{fig:middlebury_ref}), which is a normal RGB image.
Our goal is to recover an ``output'' vector $\mathbf{x}$ (Figure~\ref{fig:middleburyB}), which will resemble the input target where the confidence is large while being smooth and tightly aligned to edges in the reference image.
We will accomplish this by constructing an optimization problem consisting of an image-dependent smoothness term that encourages $\mathbf{x}$ to be bilateral-smooth, and a data-fidelity term that minimizes the squared residual between $\mathbf{x}$ and the target $\mathbf{t}$ weighted by our confidence $\mathbf{c}$:
\begin{align}
\underset{\mathbf{x}}{\mathrm{minimize}} &\,\, \frac{\lambda}{2} \sum_{i, j} \hat W_{i,j} \left(x_i - x_j \right)^2 + \sum_i c_i (x_i - t_i)^2
\label{eq:pixel_loss}
\end{align}
The smoothness term in this optimization problem is built around an affinity matrix $\hat W$, which is a bistochastized version of a bilateral affinity matrix $W$.
Each element of the bilateral affinity matrix $W_{i, j}$ reflects the affinity between pixels $i$ and $j$ in the reference image in the YUV colorspace:
\begin{equation}
\resizebox{4.4in}{!}{
$
W_{i,j} = \exp\left( - { \norm{ [p^x_i, p^y_i] - [p^x_j, p^y_j] }^2 \over 2 \sigma_{xy}^2 } - { ( p^l_i - p^l_j )^2 \over 2 \sigma_{l}^2 } - { \norm{ [p^u_i, p^v_i] - [p^u_j, p^v_j] }^2 \over 2 \sigma_{uv}^2 } \right)
$
}
\label{eq:bilateral_affinity}
\end{equation}
Where $p_i$ is a pixel in our reference image with a spatial position $(p^x_i, p^y_i)$ and color $(p^l_i, p^u_i, p^v_i)$\footnote{To reduce confusion between the Y's in ``YUV'' and ``XY'' we refer to luma as ``l''}.
The $\sigma_{xy}$, $\sigma_{l}$, and $\sigma_{uv}$ parameters control the extent of the spatial, luma, and chroma support of the filter, respectively.
This $W$ matrix is commonly used in the bilateral filter \cite{Tomasi1998}, an edge-preserving filter that blurs within regions but not across edges by locally adapting the filter to the image content.
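As an illustration, the affinity in Equation~\ref{eq:bilateral_affinity} between two pixels can be computed directly; the following minimal Python sketch assumes each pixel is represented as an \texttt{(x, y, l, u, v)} tuple, a hypothetical representation chosen only for clarity.
\begin{verbatim}
import numpy as np

# Minimal sketch of the bilateral affinity between pixels p and q,
# each given as an (x, y, l, u, v) tuple.
def bilateral_affinity(p, q, sigma_xy, sigma_l, sigma_uv):
    d_xy = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    d_l  = (p[2] - q[2]) ** 2
    d_uv = (p[3] - q[3]) ** 2 + (p[4] - q[4]) ** 2
    return np.exp(- d_xy / (2 * sigma_xy ** 2)
                  - d_l  / (2 * sigma_l  ** 2)
                  - d_uv / (2 * sigma_uv ** 2))
\end{verbatim}
In practice $W$ is never instantiated densely; it is factorized as described next.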
There are techniques for speeding up bilateral filtering \cite{Adams2010,Chen2007} which treat the filter as a ``splat/blur/slice'' procedure:
pixel values are ``splatted'' onto a small set of vertices in a grid \cite{Barron2015A,Chen2007} or lattice \cite{Adams2010} (a soft histogramming operation), then those vertex values are blurred, and then the filtered pixel values are produced via a ``slice'' (an interpolation) of the blurred vertex values.
These splat/blur/slice filtering approaches all correspond to a compact and efficient factorization of $W$:
\begin{equation}
W = S^\mathrm{T} {\bar B} S \label{eq:factorization}
\end{equation}
Barron \emph{et al}. \cite{Barron2015A} built on this idea to allow for optimization problems to be ``splatted'' and solved in bilateral-space.
They use a ``simplified'' bilateral grid and a technique for producing bistochastization matrices $D_\mathbf{n}$, $D_\mathbf{m}$ that together give the following equivalences:
\begin{equation}
\hat W = S^\mathrm{T} D_\mathbf{m}^{-1} D_\mathbf{n} {\bar B} D_\mathbf{n} D_\mathbf{m}^{-1} S \quad\quad SS^\mathrm{T} = D_\mathbf{m} \label{eq:equiv}
\end{equation}
They also perform a variable substitution, which reformulates a high-dimensional pixel-space optimization problem in terms of the lower-dimensional bilateral-space vertices:
\begin{equation}
\mathbf{x} = S^\mathrm{T} \mathbf{y} \label{eq:substitution}
\end{equation}
Where $\mathbf{y}$ is a small vector of values for each bilateral-space vertex, while $\mathbf{x}$ is a large vector of values for each pixel.
With these tools we can not only reformulate our pixel-space loss function in Eq~\ref{eq:pixel_loss} in bilateral-space, but we can rewrite that bilateral-space loss function in a quadratic form:
\begin{align}
\underset{\mathbf{y}}{\mathrm{minimize}} \quad & {1 \over 2} \mathbf{y}^\mathrm{T} A \mathbf{y} - \mathbf{b}^\mathrm{T} \mathbf{y} + c \label{eq:quad_min} \\
A = \lambda (D_\mathbf{m} - D_\mathbf{n} \bar B D_\mathbf{n}) + \mathrm{diag}(S \mathbf{c}) & \quad\quad \mathbf{b} = S ( \mathbf{c} \circ \mathbf{t} ) \quad\quad c = {1 \over 2} (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t} \nonumber
\end{align}
where $\circ$ is the Hadamard product.
A derivation of this reformulation can be found in the supplement.
While the optimization problem in Equation~\ref{eq:pixel_loss} is intractably expensive to solve naively, in this bilateral-space formulation optimization can be performed quickly.
Minimizing that quadratic form is equivalent to solving a sparse linear system:
\begin{equation}
A\mathbf{y} = \mathbf{b}
\label{eq:linear_system}
\end{equation}
We can produce a pixel-space solution $\mathbf{\hat x}$ by simply slicing the solution to that linear system:
\begin{equation}
\mathbf{\hat x} = S^\mathrm{T}(A^{-1} \mathbf{b})
\label{eq:solution}
\end{equation}
With this we can describe our algorithm, which we will refer to as the ``bilateral solver.''
The input to the solver is a reference RGB image, a target image that contains noisy observed quantities which we wish to improve, and a confidence image.
We construct a simplified bilateral grid from the reference image, which is bistochastized as in \cite{Barron2015A} (see the supplement for details), and with that we construct the $A$ matrix and $\mathbf{b}$ vector described in Equation~\ref{eq:quad_min} which are used to solve the linear system in Equation~\ref{eq:solution} to produce an output image.
If we have multiple target images (with the same reference and confidence images) then we can construct a larger linear system in which $\mathbf{b}$ has many columns, and solve for each channel simultaneously using the same $A$ matrix. In this many-target case, if $\mathbf{b}$ is low rank then that property can be exploited to accelerate optimization, as we show in the supplement.
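To make this pipeline concrete, the following is a minimal Python/SciPy sketch of the forward pass, assuming the sparse splat matrix $S$, the blur matrix $\bar B$, and the bistochastization vectors $\mathbf{m}$ and $\mathbf{n}$ have already been computed from the reference image; it uses a generic conjugate gradient solver rather than the preconditioned solver of Section~\ref{sec:precond}, and all variable names are hypothetical.
\begin{verbatim}
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Minimal sketch of the bilateral solver's forward pass. S (splat),
# Bbar (blur), and the bistochastization vectors m, n are assumed
# precomputed; c, t are the pixel-space confidence and target.
def bilateral_solve(S, Bbar, m, n, c, t, lam):
    Dm, Dn = sp.diags(m), sp.diags(n)
    A = lam * (Dm - Dn @ Bbar @ Dn) + sp.diags(S @ c)
    b = S @ (c * t)                # splat the weighted target
    y, _ = cg(A, b)                # solve the sparse system A y = b
    return S.T @ y                 # slice back to pixel space
\end{verbatim}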
Our pixel-space loss (Eq~\ref{eq:pixel_loss}) resembles that of weighted least squares filtering \cite{Elad2002,FFLS2008,Min2014}, with one critical difference being our use of bilateral-space optimization which allows for efficient optimization even when using a large spatial support in the bilateral affinity, thereby improving the quality of our output and the speed of our algorithm.
Our algorithm is similar to the optimization problem that underlies the stereo technique of \cite{Barron2015A}, but with several advantages:
Our approach reduces to a simple least-squares problem, which allows us to optimize using standard techniques (we use the preconditioned conjugate gradient algorithm of \cite{Shewchuk1994}, see the supplement for details).
This simple least-squares formulation also allows us to efficiently backpropagate through the solver (Section~\ref{sec:backprop}), allowing it to be integrated into deep learning pipelines.
This formulation also improves the rate of convergence during optimization, provides guarantees on correctness, allows us to use advanced techniques for preconditioning and initialization (Section~\ref{sec:precond}), and enables robust and multivariate generalizations of our solver (see the supplement).
\section{Backpropagation}
\label{sec:backprop}
Integrating any operation into a deep learning framework requires that it is possible to backpropagate through that operation.
Backpropagating through global operators such as our bilateral solver is generally understood to be difficult, and is an active research area \cite{Ionescu_2015_ICCV}.
Unlike most global smoothing operators, our model is easy to backpropagate through by construction.
Note that we do not mean backpropagating \emph{through} a multiplication of a matrix inverse $A^{-1}$, which would simply be another multiplication by $A^{-1}$.
Instead, we will backpropagate \emph{onto} the $A$ matrix used in the least-squares solve that underpins the bilateral solver, thereby allowing us to backpropagate through the bilateral solver itself.
Consider the general problem of solving a linear system:
\begin{eqnarray}
A \mathbf{y} = \mathbf{b}
\end{eqnarray}
Where $A$ is an invertible square matrix, and $\mathbf{y}$ and $\mathbf{b}$ are vectors. We can solve for $\mathbf{\hat y}$ as a simple least squares problem:
\begin{eqnarray}
\mathbf{\hat y} = A^{-1} \mathbf{b}
\end{eqnarray}
Let us assume that $A$ is symmetric in addition to being positive definite, which is true in our case.
Now let us compute some loss with respect to our estimated vector $g(\mathbf{\hat y})$, whose gradient will be $\partial g / \partial \mathbf{\hat y}$.
We would like to backpropagate that quantity onto $A$ and $\mathbf{b}$:
\begin{equation}
{\partial_g \over \partial_\mathbf{b}} = A^{-1} {\partial_g \over \partial_\mathbf{\hat y}} \quad\quad\quad
{\partial_g \over \partial_A} = \left( - A^{-1} {\partial_g \over \partial_\mathbf{\hat y}} \right) \mathbf{\hat y}^\mathrm{T} = -{\partial_g \over \partial_\mathbf{b}} \mathbf{\hat y}^\mathrm{T} \label{eq:linear_backprop}
\end{equation}
This can be derived using the implicit function theorem. We see that backpropagating a gradient through a linear system only requires a single least-squares solve. The gradient of the loss with respect to the diagonal of $A$ can be computed more efficiently:
\begin{align}
{\partial_g \over \partial_{\mathrm{diag}(A)} } &= -{\partial_g \over \partial_\mathbf{b}} \circ \mathbf{\hat y}
\end{align}
We will use these observations to backpropagate through the bilateral solver.
The bilateral solver takes some input target $\mathbf{t}$ and some input confidence $\mathbf{c}$, and then constructs a linear system that gives us a bilateral-space solution $\mathbf{\hat y}$, from which we can ``slice'' out a pixel-space solution $\mathbf{\hat x}$.
\begin{equation}
\mathbf{\hat y} = A^{-1} \mathbf{b} \quad\quad\quad \mathbf{\hat x} = S^\mathrm{T} \mathbf{\hat y}
\label{eq:forward}
\end{equation}
Note that $A$ and $\mathbf{b}$ are both functions of $\mathbf{t}$ and $\mathbf{c}$, though they are not written as such.
Let us assume that we have computed some loss $f(\hat x)$ and its gradient $\partial_f / \partial_\mathbf{\hat x}$.
Remember that the $A$ matrix and $\mathbf{b}$ vector in our linear system are functions of some input signal $\mathbf{t}$ and some input confidence $\mathbf{c}$.
Using (\ref{eq:linear_backprop}) we can compute the gradient of the loss with respect to the parameters of the linear system within the bilateral solver:
\begin{equation}
{\partial_f \over \partial_\mathbf{b}} = A^{-1} \left( S { \partial_f \over \partial_\mathbf{\hat x}}\right) \quad\quad\quad
{\partial_f \over \partial_{\mathrm{diag}(A)} } = -{\partial_f \over \partial_\mathbf{b}} \circ \mathbf{\hat y}
\end{equation}
We need only compute the gradient of the loss with respect to the diagonal of $A$ as opposed to the entirety of $A$, because the off-diagonal elements of $A$ do not depend on the input signal or confidence.
We can now backpropagate the gradient of the loss $f( \mathbf{\hat x} )$ onto the inputs of the bilateral solver:
\begin{equation}
{\partial_f \over \partial_{\mathbf{t}}} = \mathbf{c} \circ \left( S^\mathrm{T} {\partial_f \over \partial_\mathbf{b}} \right) \quad\quad\quad
{\partial_f \over \partial_{\mathbf{c}}} = \left(S^\mathrm{T} {\partial_f \over \partial_{\mathrm{diag}(A)} } \right) + \left(S^\mathrm{T} {\partial_f \over \partial_\mathbf{b}} \right) \circ \mathbf{t}
\end{equation}
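The backward pass can thus be implemented with the same machinery as the forward pass. The following minimal sketch assumes that $A$, the splat matrix $S$, and the bilateral-space solution $\mathbf{\hat y}$ were retained from the forward pass; the variable names are hypothetical.
\begin{verbatim}
from scipy.sparse.linalg import cg

# Minimal sketch of backpropagation through the bilateral solver.
# grad_x is the gradient of the loss w.r.t. the pixel-space output.
def bilateral_solve_backward(S, A, y_hat, c, t, grad_x):
    grad_b, _ = cg(A, S @ grad_x)   # df/db = A^{-1} (S df/dx)
    grad_diagA = -grad_b * y_hat    # df/d diag(A)
    grad_t = c * (S.T @ grad_b)     # gradient onto the target
    grad_c = S.T @ grad_diagA + (S.T @ grad_b) * t
    return grad_t, grad_c
\end{verbatim}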
To review, the bilateral solver can be viewed as a function which takes in a reference image, some input signal and a per-pixel confidence in that input signal, and produces some smoothed output:
\begin{equation}
\mathrm{output} \leftarrow \mathrm{solver}_\mathrm{reference}(\mathrm{target}, \mathrm{confidence} )
\end{equation}
And we have shown how to backpropagate through the solver:
\begin{equation}
(\nabla\mathrm{target}, \nabla\mathrm{confidence}) \leftarrow \mathrm{backprop}_\mathrm{reference}( \nabla\mathrm{output} )
\end{equation}
Because the computational cost of the backwards pass is dominated by the least squares solve necessary to compute ${\partial_f / \partial_\mathbf{b}}$, computing the backward pass through the solver is no more costly than computing the forward pass.
Contrast this with past approaches for using iterative optimization algorithms in deep learning architectures, which create a sequence of layers, one for each iteration in optimization \cite{crfasrnn}.
The backward pass in these networks is a fixed function of the forward pass and so cannot adapt like the bilateral solver to the structure of the error gradient at the output.
Furthermore, in these ``unrolled'' architectures, the output at each iteration (layer) must be stored during training, causing the memory requirement to grow linearly with the number of iterations.
In the bilateral solver, the memory requirements are small and independent of the number of iterations, as we only need to store the bilateral-space output of the solver $\mathbf{\hat y}$ during training.
These properties make the bilateral solver an attractive option for deep learning architectures where speed and memory usage are important.
\section{Preconditioning \& Initialization}
\label{sec:precond}
Optimization of the quadratic objective of the bilateral solver can be sped up with improved initialization and preconditioning.
In the previous work of \cite{Barron2015A}, the non-linear optimization used a hierarchical technique which lifted optimization into a pyramid space, using a bilateral variant of the image pyramid optimization approach of \cite{BarronTPAMI2015}.
This approach cannot be used by our solver, as most linear solvers require a preconditioner where the input is of the same dimensionality as the output.
Regardless, the approach of \cite{Barron2015A} is also suboptimal for our use case, as the simple linear structure of our system allows us to construct more accurate and effective preconditioning and initialization techniques.
To best explain our preconditioning and initialization techniques we must first present baseline techniques for both.
We can extract the diagonal of our $A$ matrix to construct a Jacobi preconditioner:
\begin{equation}
\resizebox{3.2in}{!}{$
\mathrm{diag}(A) = \lambda \left( \mathrm{diag}(D_\mathbf{m}) - \mathrm{diag}(D_\mathbf{n}) \bar B_{\mathrm{diag}} \mathrm{diag}(D_\mathbf{n}) \right) + S \mathbf{c} \nonumber
$}
\end{equation}
This is straightforward to compute, as $D_\mathbf{m}$ and $D_\mathbf{n}$ are diagonal matrices and $\bar B$ has a constant value along the diagonal denoted here as $\bar B_{\mathrm{diag}}$.
The Jacobi preconditioner is simply the inverse of the diagonal of $A$:
\begin{equation}
M_\mathit{jacobi}^{-1}(\mathbf{y}) = \mathrm{diag}(A)^{-1} \mathbf{y}
\end{equation}
We can also initialize the state vector $\mathbf{y}$ in our optimization to the value which minimizes the data term in our loss, which has a closed form:
\begin{equation}
\mathbf{y}_\mathit{flat} = S ( \mathbf{c} \circ \mathbf{t} ) / S ( \mathbf{c} ) \label{eq:flat_init}
\end{equation}
This preconditioner and initialization technique perform well, as can be seen in Figure~\ref{fig:precon_init}.
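These baselines can be sketched compactly, following the notation of the forward-pass sketch above, with $\bar B_{\mathrm{diag}}$ the constant diagonal value of $\bar B$ and \texttt{Sc} the splatted confidence $S\mathbf{c}$ (hypothetical names).
\begin{verbatim}
# Minimal sketch of the Jacobi preconditioner and "flat" initialization.
def jacobi_preconditioner(m, n, Sc, lam, Bbar_diag):
    diag_A = lam * (m - Bbar_diag * n * n) + Sc
    return lambda y: y / diag_A    # M^{-1} y = diag(A)^{-1} y

def flat_initialization(S, c, t):
    # Closed-form minimizer of the data term alone.
    return (S @ (c * t)) / (S @ c)
\end{verbatim}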
But we can improve upon these baseline techniques by constructing hierarchical generalizations of each.
Hierarchical preconditioners have been studied extensively for image interpolation and optimization tasks.
Unfortunately, techniques based on image pyramids \cite{Szeliski1990} are not applicable to our task as our optimization occurs in a sparse $5$-dimensional bilateral-space.
More sophisticated image-dependent or graph based techniques \cite{koutis2011combinatorial,Krishnan2013,Szeliski2006} are effective preconditioners, but in our experiments the cost of constructing the preconditioner greatly outweighs the savings provided by the improved conditioning.
We will present a novel preconditioner which is similar in spirit to hierarchical basis functions \cite{Szeliski1990} or push-pull interpolation \cite{Gortler1996}, but adapted to our task using the bilateral pyramid techniques presented in \cite{Barron2015A}.
Because of its bilateral nature, our preconditioner is inherently locally adapted and so resembles image-adapted preconditioners \cite{Krishnan2013,Szeliski2006}.
\begin{figure}[!t]
\centering
\includegraphics[width=4.2in]{figures/bsqs_precond_performance.png}
\caption{
Our loss during PCG for $20$ $4$-megapixel images, with the loss for each image normalized to $[0, 1]$ and with the $25$th-$75$th percentiles plotted.
We see that preconditioning is critical, and that our hierarchical (``Pyr'') preconditioning and initialization techniques significantly improve performance over the naive Jacobi preconditioner and ``flat'' initialization.
Note the non-linear $y$-axis and logarithmic $x$-axis.
\label{fig:precon_init}
}
\end{figure}
We will use the multiscale representation of bilateral-space presented in \cite{Barron2015A} to implement our hierarchical preconditioner.
This gives us
$P(\mathbf{y})$ and $P^\mathrm{T}(\mathbf{z})$,
which construct a pyramid-space vector $\mathbf{z}$ from a bilateral-space vector $\mathbf{y}$, and collapse $\mathbf{z}$ down to $\mathbf{y}$ respectively (see the supplement for details).
To evaluate our preconditioner, we lift our bilateral-space vector into pyramid-space, apply an element-wise scaling of each pyramid coefficient, and then project back onto bilateral-space:
\begin{equation}
M_\mathit{hier}^{-1}(\mathbf{y}) = P^\mathrm{T}\left( { \mathbf{z}_\mathit{weight} \circ P(\mathbf{1}) \circ P(\mathbf{y}) \over P(\mathrm{diag}(A)) } \right)
\label{eq:hier_precond}
\end{equation}
where the division is element-wise. $M_\mathit{hier}^{-1}(\cdot)$ includes an ad-hoc element-wise scaling:
\begin{align}
\mathbf{z}_\mathit{weight} &=
\begin{cases}
1 & \mbox{if } k = 0 \\
\alpha^{-(\beta + k)} & \mbox{otherwise}
\end{cases}
\end{align}
The pyramid-space scaling we use in Equation~\ref{eq:hier_precond} is proportional to:
1) the number of bilateral-space vertices assigned to each pyramid-space coefficient (computed by lifting a vector of ones),
2) the inverse of the diagonal of the $A$ matrix, computed by lifting and inverting the diagonal of the $A$ matrix, and
3) an exponential weighting of each pyramid-space coefficient according to its level in the pyramid.
This per-level scaling $\mathbf{z}_\mathit{weight}$ is computed as a function of the level $k$ of each coefficient, which allows us to prescribe the influence that each scale of the pyramid should have in the preconditioner.
Note that as the coarser levels are weighted less (i.e., as $\alpha$ or $\beta$ increases) our preconditioner degenerates naturally to the Jacobi preconditioner.
In all experiments we use ($\alpha = 2$, $\beta = 5$) for the preconditioner.
This same bilateral pyramid approach can be used to effectively initialize the state before optimization.
Rather than simply taking the input target and using it as our initial state as was done in Equation~\ref{eq:flat_init}, we perform a push-pull filter of that initial state with the pyramid according to the input confidence:
\begin{equation}
\mathbf{y}_\mathit{hier} = P^\mathrm{T}\left( { \mathbf{z}_\mathit{weight} \circ P(S ( \mathbf{c} \circ \mathbf{t} )) \over P(\mathbf{1}) } \right) / P^\mathrm{T}\left( { \mathbf{z}_\mathit{weight} \circ P(S ( \mathbf{c} )) \over P(\mathbf{1}) } \right)
\end{equation}
Like our hierarchical preconditioner, this initialization degrades naturally to our non-hierarchical initialization in Eq.~\ref{eq:flat_init} as $\alpha$ and $\beta$ increase.
In all experiments we use ($\alpha = 4$, $\beta = 0$) for initialization.
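Both techniques can be sketched as follows, assuming functions \texttt{P} and \texttt{PT} implementing the pyramid lift $P(\cdot)$ and collapse $P^\mathrm{T}(\cdot)$, and a precomputed per-coefficient weight vector \texttt{z\_weight} built from the per-level scaling above; all names are hypothetical.
\begin{verbatim}
# Minimal sketch of the hierarchical preconditioner and initialization.
# z_weight holds alpha**-(beta + k) for a coefficient at level k > 0,
# and 1 at level 0; `ones` is a vector of ones in bilateral-space.
def hier_preconditioner(P, PT, diag_A, z_weight, ones):
    P_ones, P_diagA = P(ones), P(diag_A)
    return lambda y: PT(z_weight * P_ones * P(y) / P_diagA)

def hier_initialization(P, PT, S, c, t, z_weight, ones):
    P_ones = P(ones)
    num = PT(z_weight * P(S @ (c * t)) / P_ones)
    den = PT(z_weight * P(S @ c) / P_ones)
    return num / den
\end{verbatim}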
See Figure~\ref{fig:precon_init} for a visualization of how our hierarchical preconditioning and initialization improve convergence during optimization, compared to the ``flat'' baseline algorithms.
See Table~\ref{table:stereo_speed} for a comparison of our runtime with that of \cite{Barron2015A}, where we observe a substantial speedup.
Though the techniques presented here for efficient optimization and initialization are framed in terms of the forward pass through the solver, they all apply directly to the backward pass through the solver described in Section~\ref{sec:backprop}, and produce equivalent improvements in speed.
\begin{table}[t!]
\begin{center}
\caption{
Our approach's runtime has a lower mean and variance than that of \cite{Barron2015A}.
Runtimes are from the same workstation, averaged over the $20$ $4$-megapixel images used in \cite{Barron2015A} for profiling.
\label{table:stereo_speed}
}
\resizebox{3.4in}{!}{
\small
\begin{tabular}{l | c c }
& \multicolumn{2}{c}{Time (ms)} \\
$\mathrm{Algorithm}$ $\mathrm{Component}$ & $\mathrm{Barron}$ \emph{et al}. \cite{Barron2015A} \hspace{0.05in} & \hspace{0.05in} $\mathrm{This}$ $\mathrm{Work}$ \\
\hline
$\mathrm{Problem}$ $\mathrm{Construction}$ & $ 190 \pm 167 $ & $ 35 \pm 7 $ \\
$\mathrm{Optimization}$ & $ 460 \pm 207 $ & $ 152 \pm 36 $ \\
\hline
$\mathrm{Total}$ & $ 650 \pm 266 $ & $ 187 \pm 37 $
\end{tabular}
}
\end{center}
\end{table}
\section{Applications}
We evaluate our solver on a variety of applications: stereo, depth superresolution, image colorization, and semantic segmentation.
Each of these tasks has been the focus of significant research, with specialized techniques having been developed for each problem.
For some of these applications (semantic segmentation and stereo) our solver serves as a building block in a larger algorithm, while for others (colorization and depth superresolution) our solver is a complete algorithm.
We will demonstrate that our bilateral solver produces results that are comparable to or better than the state-of-the-art for each problem, while being $1$-$3$ orders of magnitude faster.
For those techniques with comparable runtimes, we will demonstrate that the bilateral solver produces higher quality output.
Unless otherwise noted, all runtimes were benchmarked on a 2012 HP Z420 workstation (Intel Xeon CPU E5-1650, 3.20GHz, 32 GB RAM), and our algorithm is implemented in standard, single-threaded C++.
As was done in \cite{Barron2015A}, the output of our bilateral solver is post-processed by the domain transform \cite{GastalOliveira2011DomainTransform} to smooth out the blocky artifacts introduced by the simplified bilateral grid, and the domain transform is included in all runtimes.
For all results of each application we use the same implementation of the same algorithm with different parameters, which are noted in each sub-section. Parameters are:
the spatial bandwidths of the bilateral grid ($\sigma_{\mathit{xy}}$, $\sigma_{\mathit{l}}$, $\sigma_{\mathit{uv}}$),
the smoothness multiplier ($\lambda$),
the spatial and range bandwidths of the domain transform ($\sigma'_{\mathit{xy}}$, $\sigma'_{\mathit{rgb}}$).
Unless otherwise stated, the bilateral solver is run for $25$ iterations of PCG.
\subsection{Stereo}
We first demonstrate the utility of the bilateral solver as a post-processing procedure for stereo algorithms.
Because depth maps produced by stereo algorithms tend to have heavy-tailed noise distributions, we use a variant of our technique called the robust bilateral solver (RBS) with the Geman-McClure loss (described in the supplement).
We applied the RBS to the output of the top-performing MC-CNN \cite{Zbontar2015} algorithm on the Middlebury Stereo Benchmark V3 \cite{Zbontar2015}.
For comparison, we also evaluated against four other techniques which can or have been used to post-process the output of stereo algorithms.
In Table~\ref{table:middlebury} we see that the RBS cuts test- and training-set absolute and RMS errors in half while having little negative effect on the ``bad $1$\%'' error metric (the percentage of pixels whose disparities are wrong by more than $1$).
This improvement is smaller when we only consider non-occluded (NoOcc) pixels, as most state-of-the-art stereo algorithms already perform well in the absence of occlusions.
The improvement provided by the RBS is more dramatic when the depth maps are visualized, as can be seen in Figure~\ref{fig:middlebury} and in the supplement.
At submission time our technique achieved a lower test-set MAE and RMSE on the Middlebury benchmark than any published technique\footnote{\url{http://vision.middlebury.edu/stereo/eval3/}}.
See the supplement for a discussion of how our baseline comparison results were produced, an evaluation of our RBS and our baseline techniques on three additional contemporary stereo algorithms, the parameters settings used in this experiment, how we compute the initial confidence $\mathbf{c}$ for the RBS, and many visualizations.
\begin{table}[b!]
\begin{center}
\caption{
Our robust bilateral solver significantly improves the depth map quality of the state-of-the-art MC-CNN \cite{Zbontar2015} stereo algorithm on the Middlebury dataset V3 \cite{Scharstein2014}.
\label{table:middlebury}
}
\resizebox{3.5in}{!}{
\begin{tabular}{ l | c c c | c c c }
$\mathrm{Method}$ & \multicolumn{3}{c|}{$\mathrm{All}$} & \multicolumn{3}{c}{$\mathrm{NoOcc}$} \\
& $\mathrm{bad}\,1\%$ & $\mathrm{MAE}$ & $\mathrm{RMSE}$ & $\mathrm{bad}\,1\%$ & $\mathrm{MAE}$ & $\mathrm{RMSE}$ \\
\hline
\multicolumn{7}{l}{\quad Test Set} \\
\hline
$\mathrm{ MC-CNN}$\cite{Zbontar2015} & $ 28.1 $ \cellcolor{Yellow} & $ 17.9 $ & $ 55.0 $ & $ 18.0 $ \cellcolor{Yellow} & $ 3.82 $ & $ 21.3 $ \\
$\mathrm{ MC-CNN}$\cite{Zbontar2015}$\mathrm{ + RBS}$ & $ 28.2 $ & $ 8.19 $ \cellcolor{Yellow} & $ 29.9 $ \cellcolor{Yellow} & $ 18.9 $ & $ 2.67 $ \cellcolor{Yellow} & $ 15.0 $ \cellcolor{Yellow} \\
\multicolumn{7}{c}{} \\
\hline
\multicolumn{7}{l}{\quad Training Set} \\
\hline
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} & $ 20.07 $ & $ 5.93 $ & $ 18.36 $ & $ 10.42 $ \cellcolor{Yellow} & $ 1.94 $ & $ 9.07 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{TF}$\cite{Yang2015} & $ 29.15 $ & $ 5.67 $ & $ 16.18 $ & $ 20.15 $ & $ 2.17 $ & $ 7.71 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{FGF}$\cite{He2015} & $ 32.29 $ & $ 5.91 $ & $ 16.32 $ & $ 23.62 $ & $ 2.42 $ & $ 7.98 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{WMF}$\cite{Ma2013} & $ 33.37 $ & $ 5.30 $ & $ 15.62 $ & $ 26.29 $ & $ 2.32 $ & $ 8.22 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{DT}$\cite{GastalOliveira2011DomainTransform} & $ 25.17 $ & $ 5.69 $ & $ 16.53 $ & $ 15.53 $ & $ 2.01 $ & $ 7.72 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{RBS\,(Ours)}$ & $ 19.49 $ \cellcolor{Yellow} & $ 2.81 $ \cellcolor{Yellow} & $ 8.44 $ \cellcolor{Yellow} & $ 11.33 $ & $ 1.40 $ \cellcolor{Yellow} & $ 5.23 $ \cellcolor{Yellow}
\end{tabular}
}
\end{center}
\end{table}
\subsection{Depth Superresolution}
With the advent of consumer depth sensors, techniques have been proposed for upsampling noisy depth maps produced by these sensors using a high-resolution RGB reference image \cite{ferstl2013b,Kiechle2013,Kwon2015,Li2013,Lu2015,Park2011,chan2008,Liu2013,Min2014}.
Other techniques have been developed for post-processing depth maps in other contexts \cite{Yang2015,Ma2013,Shen2015}, and many general edge-aware upsampling or filtering techniques can be used for this task\cite{Kopf2007,GastalOliveira2011DomainTransform,He2010,He2015,Zhang2014}.
We present an extensive evaluation of the bilateral solver against these approaches for the depth superresolution task.
Given a noisy input depth map and an RGB reference image, we resize the depth map to be the size of the reference image with bicubic interpolation and then apply the bilateral solver or one of our baseline techniques.
\begin{table}[b!]
\begin{center}
\caption{
Performance on the depth superresolution task of \cite{ferstl2013b}.
Runtimes in gray were not computed on our reference hardware, and algorithms which use external training data are indicated with a dagger.
\label{table:super}
}
\resizebox{4.0in}{!}{
\begin{tabular}{c c c}
\begin{tabular}{@{}r@{}l | c | c }
& $\mathrm{Method}$ & $\mathrm{Err}$ & $\mathrm{Time\,(sec)}$ \\
\hline
& $\mathrm{Nearest\,Neighbor}$ & $ 7.26 $ & $ 0.003 $ \\
& $\mathrm{Bicubic}$ & $ 5.91 $ & $ 0.007 $ \\
$\dagger$ & $\mathrm{Kiechle\,\emph{et al}. }$\cite{Kiechle2013} & $ 5.86 $ & \cellcolor{Blue} $ 450 $ \\
& $\mathrm{Bilinear}$ & $ 5.16 $ & $ 0.004 $ \\
& $\mathrm{Liu\,\emph{et al}.\,}$\cite{Liu2013} & $ 5.10 $ & $ 16.60 $ \\
& $\mathrm{Shen\,\emph{et al}.\,}$\cite{Shen2015} & $ 4.24 $ & $ 31.48 $ \\
& $\mathrm{Diebel\,\&\,Thrun\,}$\cite{Diebel05b} & $ 3.98 $ & $ - $ \\
& $\mathrm{Chan\,\emph{et al}. }$\cite{chan2008} & $ 3.83 $ & \cellcolor{Green}$ 3.02 $ \\
& $\mathrm{Guided Filter }$\cite{He2010,ferstl2013b} & $ 3.76 $ & \cellcolor{Green} $ 23.89 $ \\
& $\mathrm{Min\,\emph{et al}.\,}$\cite{Min2014} & $ 3.74 $ & $ 0.383 $ \\
$\dagger$ & $\mathrm{Lu\,\&\,Forsyth }$\cite{Lu2015} & $ 3.69 $ & \cellcolor{Pink} $ 20 $ \\
& $\mathrm{Park\,\emph{et al}. }$\cite{Park2011} & $ 3.61 $ & \cellcolor{Green}$ 24.05 $ \\
\multicolumn{3}{c}{ \vdots }
\end{tabular}
& \hspace{0.15in} &
\begin{tabular}{@{}r@{}l | c | c }
\multicolumn{3}{c}{ \vdots } \\
&$\mathrm{Domain\,Transform\,}$\cite{GastalOliveira2011DomainTransform} & $ 3.56 $ & $ 0.021 $ \\
& $\mathrm{Ma\,\emph{et al}.\,}$\cite{Ma2013} & $ 3.49 $ & \cellcolor{Teal} $ 18 $ \\
& $\mathrm{Guided Filter (Matlab) }$\cite{He2010} & $ 3.47 $ & $ 0.434 $ \\
& $\mathrm{Zhang\,\emph{et al}.\,}$\cite{Zhang2014} & $ 3.45 $ & \cellcolor{WindowsColor} $ 1.346 $ \\
& $\mathrm{Fast Guided Filter }$\cite{He2015} & $ 3.41 $ & $ 0.225 $ \\
& $\mathrm{Yang\,2015\,}$\cite{Yang2015} & $ 3.41 $ & \cellcolor{WindowsColor} $ 0.304 $ \\
& $\mathrm{Yang\,\emph{et al}.\,2007\,}$\cite{Yang2007} & $ 3.25 $ & $ - $ \\
& $\mathrm{Farbman\,\emph{et al}.\,}$\cite{FFLS2008} & $ 3.19 $ & $ 6.11 $ \\
& $\mathrm{JBU\,}$\cite{Adams2010,Kopf2007} & $ 3.14 $ & $ 1.98 $ \\
& $\mathrm{Ferstl\,\emph{et al}. }$\cite{ferstl2013b} & $ 2.93 $ & \cellcolor{Green}$ 140 $ \\
$\dagger$ & $\mathrm{Li\,\emph{et al}. }$\cite{Li2013} & \cellcolor{Orange} $ 2.56 $ & \cellcolor{Blue} $ 700 $ \\
$\dagger$ & $\mathrm{Kwon\,\emph{et al}. }$\cite{Kwon2015} & \cellcolor{Red} $ 1.21 $ & \cellcolor{Blue}$ 300 $ \\
\hline
& $\mathrm{BS\,(Ours)}$ & \cellcolor{Yellow} $ 2.70 $ & $ 0.234 $
\end{tabular}
\end{tabular}
}
\end{center}
\end{table}
The hyperparameters used by the solver for all experiments are: $\sigma_{\mathit{xy}} = 8$, $\sigma_l = 4$, $\sigma_{\mathit{uv}} = 3$, $\sigma'_{\mathit{xy}} = \sigma'_{\mathit{rgb}} = 16$, $\lambda = 4^{f-1/2}$ (where $f$ is the upsampling factor) and $15$ iterations of PCG.
Our confidence $\mathbf{c}$ is a Gaussian bump ($\sigma=f/4$) modeling the support of each low-resolution pixel in the upsampled image.
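As an illustration, such a confidence map could be constructed as in the following minimal sketch, which places one Gaussian bump over the $f \times f$ footprint of each low-resolution pixel; this is a hypothetical construction, not necessarily identical to our implementation.
\begin{verbatim}
import numpy as np

# Minimal sketch: Gaussian-bump confidence (sigma = f/4) over the
# f x f footprint of each low-resolution pixel.
def superres_confidence(h_lo, w_lo, f):
    d = np.arange(f) - (f - 1) / 2.0
    bump = np.exp(-d ** 2 / (2 * (f / 4.0) ** 2))
    tile = np.outer(bump, bump)          # one 2-D bump per footprint
    return np.tile(tile, (h_lo, w_lo))   # shape (h_lo * f, w_lo * f)
\end{verbatim}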
To evaluate our model, we use the depth superresolution benchmark of \cite{ferstl2013b} which is based on the Middlebury stereo dataset \cite{Scharstein2002}.
Our performance can be seen in Table~\ref{table:super} and Figure~\ref{fig:super}, with more detailed results in the supplement.
The bilateral solver produces the third-lowest error rate for this task, though the two better-performing techniques \cite{Kwon2015,Li2013} use large amounts of external training data and so have an advantage over our technique, which uses no learning for this experiment.
Our approach is $600\times$, $1200\times$, and $3000\times$ faster than the three most accurate techniques.
The techniques with speeds comparable to or better than the bilateral solver \cite{GastalOliveira2011DomainTransform,Yang2015,He2015,Min2014} produce error rates that are 25-40$\%$ greater than our approach.
The bilateral solver represents an effective combination of speed and accuracy, while requiring no training or learning.
See the supplement for a more detailed table, a discussion of baselines and runtimes, and many visualizations.
\begin{figure}[!]
\centering
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_image.jpg}
\caption{Input Image}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_true.jpg}
\caption{True Depth}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_input.jpg}
\caption{Input Depth}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_bilateral.jpg}
\caption{JBU \cite{Adams2010,Kopf2007}}
\end{subfigure}
\\
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_my_fastguide.jpg}
\caption{Guided Filter \cite{He2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_ferstl.jpg}
\caption{Ferstl \emph{et al}. \cite{ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_li.jpg}
\caption{Li \emph{et al}. \cite{Li2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_bsqs.jpg}
\caption{Our results}
\end{subfigure}
\caption{
Partial results for the depth superresolution task of \cite{ferstl2013b}, see the supplement for exhaustive visualizations.
\label{fig:super}
}
\end{figure}
\subsection{Colorization}
\begin{figure}[b!]
\centering
\begin{subfigure}[!]{1.55in}
\includegraphics[width=1.55in]{figures/colorize/example_m.jpg}
\caption{Input}
\end{subfigure}
\begin{subfigure}[!]{1.55in}
\includegraphics[width=1.55in]{figures/colorize/example_res.jpg}
\caption{Levin \emph{et al}. \cite{Levin2004}}
\end{subfigure}
\begin{subfigure}[!]{1.55in}
\includegraphics[width=1.55in]{figures/colorize/example_output.jpg}
\caption{Our results}
\end{subfigure}
\caption{
Results for the user-assisted colorization task.
Our bilateral solver produces comparable results to the technique of Levin \emph{et al}. \cite{Levin2004} while being $95\times$ faster.
\label{fig:colorization}
}
\end{figure}
Colorization is the problem of introducing color to a grayscale image with a small amount of user input or outside information, for the purpose of improving black-and-white films or photographs.
Levin \emph{et al}. \cite{Levin2004} presented an effective technique for this task by formulating and solving a specialized optimization problem.
We can solve the same task using our bilateral solver: we use the grayscale image as the input reference image and the UV channels of the user-annotated scribbles as the input target images, with a confidence image that is $1$ where the user has scribbled and $0$ everywhere else.
We then construct our final output by combining the grayscale image with our output UV images, and converting from YUV to RGB.
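This pipeline can be summarized by the following minimal sketch, where \texttt{solve} denotes a hypothetical wrapper around the forward pass of the bilateral solver and \texttt{yuv\_to\_rgb} a standard colorspace conversion (both names are hypothetical).
\begin{verbatim}
import numpy as np

# Minimal sketch of colorization with the bilateral solver. `mask` is
# 1 where the user has scribbled and 0 elsewhere; `solve` and
# `yuv_to_rgb` are hypothetical helpers.
def colorize(solve, gray, scribble_u, scribble_v, mask):
    u = solve(reference=gray, target=scribble_u, confidence=mask)
    v = solve(reference=gray, target=scribble_v, confidence=mask)
    return yuv_to_rgb(np.stack([gray, u, v], axis=-1))
\end{verbatim}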
Our results can be seen in Figure~\ref{fig:colorization}, where we see that our output is nearly indistinguishable from that of \cite{Levin2004}.
The important distinction here is speed, as the approach of \cite{Levin2004} takes $80.99$ seconds per megapixel while our approach takes $0.854$ seconds per megapixel, a $95\times$ speedup.
For all results our parameters are $\sigma_{\mathit{xy}} = \sigma_{\mathit{l}} = \sigma_{\mathit{uv}} = 4$, $\lambda = 0.5$, $\sigma'_{\mathit{xy}} = 4$, $\sigma'_{\mathit{rgb}} = 8$.
More results can be seen in the supplement.
\subsection{Semantic Segmentation}
\begin{table}[b!]
\begin{center}
\caption{
Semantic segmentation results and runtimes on Pascal VOC 2012 validation set.
The bilateral solver improves performance over the CNN output while being substantially faster than the CRF-based approaches.
``Post'' is the time spent post-processing the CNN output, which is the dense CRF for DeepLab, and the generalized CRF-RNN component for CRF-RNN.
FCN is the convolutional neural network component of the CRF-RNN model.
$^*$DeepLab-LargeFOV model from \cite{deeplab} trained on Pascal VOC 2012 training data augmented with data from \cite{BharathICCV2011}.
$^\dagger$CRF-RNN model from \cite{crfasrnn} trained with additional MSCOCO data and evaluated on reduced Pascal validation set of 346 images.
\label{table:segmentation}
}
\resizebox{2.6in}{!}{
\begin{tabular}{ l | r |r|r|r}
\multirow{2}{*}{$\mathrm{Method}$} & \multirow{2}{*}{$\mathrm{IOU (\%)}$} &
\multicolumn{3}{c}{$\mathrm{Time\ (ms)}$}
\\
&&$\mathrm{CNN}$ & $\mathrm{Post}$ & $\mathrm{Total}$\\
\hline
$\mathrm{DeepLab}$ & \cellcolor{Yellow} $62.25^*$ & $58$ & $0$ & \cellcolor{Red}$58$ \\
$\mathrm{DeepLab + CRF}$ & \cellcolor{Red} $67.64^*$ & $58$ & $918$ & \cellcolor{Yellow}$976$\\
$\mathrm{DeepLab + BS (Ours)}$& \cellcolor{Orange} $66.00^*$ & $58$ & $111$ & \cellcolor{Orange} $169$\\
\hline
$\mathrm{CNN}$ & \cellcolor{Yellow} $69.60^\dagger$ & $715$ & $0$ & \cellcolor{Red} $715$\\
$\mathrm{CRF-RNN}$ & \cellcolor{Red} $72.96^\dagger$ & $715$ & $2214$ & \cellcolor{Yellow}$2929$\\
$\mathrm{CNN + BS (Ours)}$&\cellcolor{Orange} $70.68^\dagger$ & $715$ & $217$ & \cellcolor{Orange} $913$
\end{tabular}
}
\end{center}
\end{table}
Semantic segmentation is the problem of assigning a category label to each pixel in an image.
State-of-the-art approaches to semantic segmentation use large convolutional neural networks (CNNs) to map from pixels to labels \cite{deeplab,fcn}.
The output of these CNNs is often smoothed across image boundaries, so recent approaches refine their output with a CRF (\cite{deeplab,crfasrnn}).
These CRF-based approaches improve per-pixel labeling accuracy, but this accuracy comes at a computational cost: inference in a fully connected CRF on a $500\times 500$ image can take up to a second (see Table~\ref{table:segmentation}).
To evaluate whether the bilateral solver could improve the efficiency of semantic segmentation pipelines, we use it instead of the CRF component in two state-of-the-art models: DeepLab-LargeFOV \cite{deeplab} and CRF-RNN \cite{crfasrnn}.
The DeepLab model consists of a CNN trained on Pascal VOC12 and then augmented with a fixed dense CRF.
The CRF-RNN model generalizes the CRF with a recurrent neural network, and trains this component jointly with the CNN on Pascal and MSCOCO.
As the bilateral solver operates on real-valued inputs, it is not immediately clear how to map it onto the discrete optimization problem of the dense CRF.
We compute the $21$-channel class probability image from the CNN outputs of either the DeepLab or CRF-RNN model.
As many class probability maps are zero across the entire image, the resulting $\mathbf{b}$ matrix in the bilateral solver is often low-rank, allowing us to solve a reduced linear system to recover the smoothed class probability maps (see the supplement).
This approach produces nearly identical output, with a $5\times$ speedup on average.
We applied our bilateral solver to the class probability maps using uniform confidence.
The resulting discrete segmentations are more accurate and qualitatively smoother than the CNN outputs, despite our per-channel smoothing providing no explicit smoothness guarantees on the argmax of the filtered per-class probabilities (Table~\ref{table:segmentation}, Figure~\ref{fig:seg_example}).
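Concretely, this per-class smoothing reduces to the following minimal sketch, where \texttt{solve} again denotes a hypothetical wrapper around the forward pass; as noted above, the low-rank structure of $\mathbf{b}$ can be exploited instead of looping over the classes.
\begin{verbatim}
import numpy as np

# Minimal sketch: smooth each class probability map under uniform
# confidence, then take the per-pixel argmax. probs has shape
# (num_classes, height, width).
def refine_segmentation(solve, reference, probs):
    conf = np.ones(probs.shape[1:])
    smoothed = np.stack([solve(reference=reference, target=p,
                               confidence=conf) for p in probs])
    return smoothed.argmax(axis=0)   # discrete segmentation labels
\end{verbatim}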
The bilateral solver is $8-10\times$ faster than the CRF and CRF-RNN approaches when applied to the same inputs (Table~\ref{table:segmentation}).
Although the bilateral solver performs slightly worse than the CRF-based approaches, its speed suggests that it may be a useful tool in contexts such as robotics and autonomous driving, where low latency is necessary.
\begin{figure}[!t]
\centering
\begin{subfigure}[!]{1.15in}
\includegraphics[width=1.15in]{figures/segmentation_image.jpg}
\caption{Image \label{fig:seg_exampleA}}
\end{subfigure}
\begin{subfigure}[!]{1.15in}
\includegraphics[width=1.15in]{figures/segmentation_input.jpg}
\caption{DeepLab \label{fig:seg_exampleB}}
\end{subfigure}
\begin{subfigure}[!]{1.15in}
\includegraphics[width=1.15in]{figures/segmentation_crf.jpg}
\caption{DenseCRF \label{fig:seg_exampleC}}
\end{subfigure}
\begin{subfigure}[!]{1.15in}
\includegraphics[width=1.15in]{figures/segmentation_bs.jpg}
\caption{BS (Ours) \label{fig:seg_exampleD}}
\end{subfigure}
\caption{
Using the DeepLab CNN-based semantic segmentation algorithm \cite{deeplab} (\ref{fig:seg_exampleB}) as input our bilateral solver can produce comparable edge-aware output (\ref{fig:seg_exampleD}) to the DenseCRF \cite{KrahenbuhlK11} used in \cite{deeplab} (\ref{fig:seg_exampleC}), while being $8\!\times$ faster.
\label{fig:seg_example}
}
\end{figure}
\section{Conclusion}
We have presented the bilateral solver, a flexible and fast technique for inducing edge-aware smoothness.
We have demonstrated that the solver can produce or improve state-of-the-art results on a variety of different computer vision tasks, while being faster or more accurate than other approaches.
Its speed and generality suggests that the bilateral solver is a useful tool in the construction of computer vision algorithms and deep learning pipelines.
\clearpage
\bibliographystyle{splncs03}
\section{Derivation}
\label{sec:derivation}
In the main paper we presented an optimization problem where we solve for a per-pixel quantity $\mathbf{x}$ subject to a smoothness term that encourages $\mathbf{x}$ to be smooth and a data term that encourages $\mathbf{x}$ to resemble some observed input ``target'' quantities $\mathbf{t}$ proportionally to a per-pixel ``confidence'' $\mathbf{c}$:
\begin{align}
\underset{\mathbf{x}}{\mathrm{minimize}} &\,\, \frac{\lambda}{2} \sum_{i, j} \hat W_{i,j} \left(x_i - x_j \right)^2 + \sum_i c_i (x_i - t_i)^2
\label{eq:pixel_loss}
\end{align}
We can use the procedure detailed in Barron~\emph{et al}. \cite{Barron2015A} to reformulate the smoothness term into matrix/vector notation.
The data term can be similarly reformulated:
\begin{align}
\sum_i c_i (x_i - t_i)^2 =& (\mathbf{x} - \mathbf{t})^\mathrm{T} \mathrm{diag}(\mathbf{c}) (\mathbf{x} - \mathbf{t}) \\
=& \mathbf{x}^\mathrm{T} \mathrm{diag}(\mathbf{c}) \mathbf{x} - 2\mathbf{x}^\mathrm{T} \mathrm{diag}(\mathbf{c}) \mathbf{t} + \mathbf{t}^\mathrm{T} \mathrm{diag}(\mathbf{c}) \mathbf{t} \\
=& \mathbf{x}^\mathrm{T} \mathrm{diag}(\mathbf{c}) \mathbf{x} - 2 \left(\mathbf{c} \circ \mathbf{t} \right)^\mathrm{T} \mathbf{x} + \left(\mathbf{c} \circ \mathbf{t} \right)^\mathrm{T} \mathbf{t}
\end{align}
Combining this reformulated data term with the reformulated smoothness term from \cite{Barron2015A} we get:
\begin{equation}
\underset{\mathbf{x}}{\mathrm{minimize}} \quad \mathbf{x}^\mathrm{T} \left( \lambda \left( I - \hat W \right) + \mathrm{diag}(\mathbf{c}) \right) \mathbf{x} - 2 (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{x} + (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t}
\label{eq:pixel_prob_mat}
\end{equation}
Using the bistochastization algorithm from \cite{Barron2015A}, we can decompose $\hat W$:
\begin{equation}
\hat W = S^\mathrm{T} D_\mathbf{m}^{-1} D_\mathbf{n} {\bar B} D_\mathbf{n} D_\mathbf{m}^{-1} S
\label{eq:What}
\end{equation}
The bistochastization algorithm from \cite{Barron2015A} is reproduced in Algorithm~\ref{alg:bistoch}.
\begin{algorithm}[!]
\caption{Bilateral-space bistochastization, reproduced from \cite{Barron2015A} \label{alg:bistoch} \\
{\bf Input:} \\
$W = S^\mathrm{T} \bar B S$ \quad // A splat/blur/slice decomposition of $W$ \\
{\bf Output:} \\
$D_\mathbf{n}, D_\mathbf{m}$ \quad // Matrices to bistochastize $W$
}
\begin{algorithmic}[1]
\State $\mathbf{m} \gets S\mathbf{1}$
\State $\mathbf{n} \gets \mathbf{1}$
\While{ not converged}
\State $\mathbf{n} \gets \sqrt{ ( \mathbf{n} \circ \mathbf{m} ) / (\bar B \mathbf{n} ) }$
\EndWhile
\State $D_\mathbf{n} \gets \mathrm{diag}(\mathbf{n})$
\State $D_\mathbf{m} \gets \mathrm{diag}(\mathbf{m})$
\end{algorithmic}
\end{algorithm}
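A minimal Python translation of Algorithm~\ref{alg:bistoch} is sketched below, with the convergence test replaced by a fixed number of fixed-point iterations (a simplification made here for brevity).
\begin{verbatim}
import numpy as np

# Minimal sketch of bilateral-space bistochastization. S has shape
# (num_vertices, num_pixels), Bbar (num_vertices, num_vertices).
def bistochastize(S, Bbar, num_iters=20):
    m = S @ np.ones(S.shape[1])      # m = S 1
    n = np.ones(S.shape[0])          # n = 1
    for _ in range(num_iters):       # fixed-point iteration
        n = np.sqrt(n * m / (Bbar @ n))
    return m, n                      # diag(m), diag(n) bistochastize W
\end{verbatim}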
Using the simplified bilateral grid of \cite{Barron2015A} gives us the following equivalence:
\begin{equation}
SS^\mathrm{T} = D_\mathbf{m} \label{eq:SS}
\end{equation}
We will use the same bilateral-space variable substitution as \cite{Barron2015A}, by rewriting our optimization problem framed in terms of pixels $\mathbf{x}$ as an optimization in terms of bilateral-space vertices $\mathbf{y}$:
\begin{equation}
\mathbf{x} = S^\mathrm{T} \mathbf{y} \label{eq:substitution}
\end{equation}
Let us perform this variable substitution and simplify the resulting expression using our known equivalences, first for the parts of Equation~\ref{eq:pixel_prob_mat} which correspond to the smoothness term:
\begin{align}
\mathbf{x}^\mathrm{T} \left( I - \hat W \right) \mathbf{x} =& \mathbf{x}^\mathrm{T} \left( I - S^\mathrm{T} D_\mathbf{m}^{-1} D_\mathbf{n} {\bar B} D_\mathbf{n} D_\mathbf{m}^{-1} S \right) \mathbf{x} \tag*{by Eq \ref{eq:What}} \\
=& (S^\mathrm{T} \mathbf{y})^\mathrm{T} \left( I - S^\mathrm{T} D_\mathbf{m}^{-1} D_\mathbf{n} {\bar B} D_\mathbf{n} D_\mathbf{m}^{-1} S \right) (S^\mathrm{T} \mathbf{y}) \tag*{by Eq \ref{eq:substitution}} \\
=& \mathbf{y}^\mathrm{T} \left( S I S^\mathrm{T} - S S^\mathrm{T} D_\mathbf{m}^{-1} D_\mathbf{n} {\bar B} D_\mathbf{n} D_\mathbf{m}^{-1} S S^\mathrm{T} \right) \mathbf{y} \\
=& \mathbf{y}^\mathrm{T} \left( D_\mathbf{m} - D_\mathbf{m} D_\mathbf{m}^{-1} D_\mathbf{n} {\bar B} D_\mathbf{n} D_\mathbf{m}^{-1} D_\mathbf{m} \right) \mathbf{y} \tag*{by Eq \ref{eq:SS}} \\
=& \mathbf{y}^\mathrm{T} \left( D_\mathbf{m} - D_\mathbf{n} {\bar B} D_\mathbf{n} \right) \mathbf{y}
\end{align}
And now, we will perform the variable substitution on the parts of Equation~\ref{eq:pixel_prob_mat} which correspond to the data term:
\begin{align}
& \mathbf{x}^\mathrm{T} \mathrm{diag}(\mathbf{c}) \mathbf{x} - 2 (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{x} + (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t} \\
=& (S^\mathrm{T} \mathbf{y})^\mathrm{T} \mathrm{diag}(\mathbf{c}) (S^\mathrm{T} \mathbf{y}) - 2 (\mathbf{c} \circ \mathbf{t})^\mathrm{T} (S^\mathrm{T} \mathbf{y}) + (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t} \tag*{by Eq \ref{eq:substitution}} \\
=& \mathbf{y}^\mathrm{T} (S \mathrm{diag}(\mathbf{c}) S^\mathrm{T}) \mathbf{y} - 2 (S (\mathbf{c} \circ \mathbf{t}))^\mathrm{T} \mathbf{y} + (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t} \\
=& \mathbf{y}^\mathrm{T} \mathrm{diag}(S \mathbf{c}) \mathbf{y} - 2 (S (\mathbf{c} \circ \mathbf{t}))^\mathrm{T} \mathbf{y} + (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t}
\end{align}
Combining all of this (and scaling by ${1 \over 2}$, which does not change the minimizer) we can rewrite Equation~\ref{eq:pixel_prob_mat} in bilateral-space as follows:
\begin{equation}
\underset{\mathbf{y}}{\mathrm{minimize}} \quad {1 \over 2} \mathbf{y}^\mathrm{T} \left( \lambda \left( D_\mathbf{m} - D_\mathbf{n} {\bar B} D_\mathbf{n} \right) + \mathrm{diag}(S \mathbf{c}) \right) \mathbf{y} - (S (\mathbf{c} \circ \mathbf{t}))^\mathrm{T} \mathbf{y} + {1 \over 2} (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t} \label{eq:pixel_prob_after}
\end{equation}
Unlike the non-linear optimization problem of \cite{Barron2015A}, ours is a simple quadratic optimization problem, so we can rewrite it in standard form:
\begin{align}
\underset{\mathbf{y}}{\mathrm{minimize}} \quad & {1 \over 2} \mathbf{y}^\mathrm{T} A \mathbf{y} - \mathbf{b}^\mathrm{T} \mathbf{y} + c \label{eq:quad_min} \\
A = \lambda (D_\mathbf{m} - D_\mathbf{n} \bar B D_\mathbf{n}) + \mathrm{diag}(S \mathbf{c}) & \quad\quad \mathbf{b} = S ( \mathbf{c} \circ \mathbf{t} ) \quad\quad c = {1 \over 2} (\mathbf{c} \circ \mathbf{t})^\mathrm{T} \mathbf{t} \nonumber
\end{align}
By taking the derivative of this loss function and setting it to zero (the gradient is $A\mathbf{y} - \mathbf{b}$, as $A$ is symmetric), we see that minimizing this quadratic form is equivalent to solving the sparse linear system
\begin{equation}
A\mathbf{y} = \mathbf{b}
\label{eq:linear_system}
\end{equation}
We will solve this optimization problem using the preconditioned conjugate gradient (PCG) algorithm, using the initialization and preconditioning tricks described in the main paper.
We use the PCG implementation described in Section B3 of \cite{Shewchuk1994}, which is reproduced here in Algorithm~\ref{alg:PCG}.
\begin{algorithm}[!]
\caption{Preconditioned Conjugate Gradients, reproduced from \cite{Shewchuk1994} \label{alg:PCG}\\
{\bf Input:} \\
$A(\cdot)$ // A function which implements $A\mathbf{x}$ \\
$\mathbf{b}$ // The $\mathbf{b}$ vector in the linear system\\
$\mathbf{x}$ // The initial value of the state $\mathbf{x}$ \\
$M^{-1}(\cdot)$ // A function which implements a preconditioner \\
$n$ // The number of iterations \\
{\bf Output:} \\
$\mathbf{x}$ // $\mathbf{x}$ such that $A(\mathbf{x}) \approx \mathbf{b}$
}
\begin{algorithmic}[1]
\State $i \gets 0$
\State $\mathbf{r} \gets \mathbf{b} - A(\mathbf{x})$
\State $\mathbf{d} \gets M^{-1}(\mathbf{r})$
\State $\lambda_{new} \gets \mathbf{r}^\mathrm{T} \mathbf{d}$
\While{$i < n$}
\State $\mathbf{q} \gets A(\mathbf{d})$
\State $\alpha \gets {\lambda_{new} \over \mathbf{d}^\mathrm{T}\mathbf{q}}$
\State $\mathbf{x} \gets \mathbf{x} + \alpha \mathbf{d}$
\State $\mathbf{r} \gets \mathbf{r} - \alpha \mathbf{q}$
\State $\mathbf{s} \gets M^{-1}(\mathbf{r})$
\State $\lambda_{old} \gets \lambda_{new}$
\State $\lambda_{new} \gets \mathbf{r}^\mathrm{T} \mathbf{s}$
\State $\beta \gets {\lambda_{new} \over \lambda_{old}}$
\State $\mathbf{d} \gets \mathbf{s} + \beta \mathbf{d}$
\State $i \gets i + 1$
\EndWhile
\end{algorithmic}
\end{algorithm}
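In practice this solve maps directly onto off-the-shelf PCG routines. Below is a minimal sketch (not our reference implementation) that assembles $A$ and $\mathbf{b}$ as defined in Equation~\ref{eq:quad_min} and calls \texttt{scipy.sparse.linalg.cg}; for simplicity it substitutes a Jacobi (diagonal) preconditioner for the hierarchical preconditioner of the main paper.
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_bilateral_system(Dm, Dn, B_bar, S, c, t, lam=0.25, n_iters=25):
    # Assemble the bilateral-space system A y = b.
    A = lam * (Dm - Dn @ B_bar @ Dn) + sp.diags(S @ c)
    b = S @ (c * t)
    # Jacobi preconditioner: M^{-1} r = r / diag(A).
    d = A.diagonal()
    M = spla.LinearOperator(A.shape, matvec=lambda r: r / d)
    y, info = spla.cg(A, b, M=M, maxiter=n_iters)
    return y
\end{verbatim}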
\section{Bilateral-Space Pyramids}
\label{sec:pyramids}
Applying the hierarchical initialization and preconditioning techniques of the main paper requires a multiscale representation of our simplified bilateral grid.
As was done in \cite{Barron2015A}, we reuse the bilateral grid construction that was applied to image pixels, but instead apply it to the vertex coordinates $V$, where $V$ is an $m$ by $5$ matrix produced when constructing a bilateral grid from the input image ($m$ is the number of vertices in the simplified bilateral grid, and $5$ is the dimensionality of our XYLUV bilateral-space).
From $V$ we can construct a coarser simplified bilateral grid by dividing the elements of $V$ by $2$ (an arbitrary scale factor), and then repeat that procedure to form a pyramid:
\begin{algorithmic}
\For{$k = [0 : K-1]$}
\State $V \gets V/2$
\State $(S_k, V) \gets \mathrm{simplified\_bilateral\_grid}(V)$
\EndFor
\end{algorithmic}
Where $K$ (the number of levels of the pyramid) is set such that the top of the pyramid contains just a single vertex.
As was done in \cite{Barron2015A}, we can use these $K$ splat matrices to lift from some bilateral-space vector $\mathbf{y}$ into a pyramid-space:
\begin{equation}
P(\mathbf{y}) = [S_{K-1} \ldots S_1 S_0 \mathbf{y}, \ldots, S_1 S_0 \mathbf{y}, S_0 \mathbf{y}, \mathbf{y}]
\end{equation}
We can transpose this pyramid operation, collapsing back down to bilateral-space:
\begin{equation}
P^\mathrm{T}(\mathbf{z}) = [S_0^\mathrm{T} S_1^\mathrm{T} \ldots S_{K-1}^\mathrm{T} \mathbf{z}, \ldots, S_0^\mathrm{T} S_1^\mathrm{T} \mathbf{z}, S_0^\mathrm{T} \mathbf{z}, \mathbf{z}]
\end{equation}
We compute $P(\mathbf{y})$ and $P^\mathrm{T}(\mathbf{z})$ efficiently from the bottom up and the top down, respectively, by reusing the information from the previous scale.
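As a sketch, assuming the per-level splat matrices $S_0, \ldots, S_{K-1}$ are available as a list of sparse matrices ordered from fine to coarse, each operation is a single loop that reuses the previous level's result (the per-level weighting used by the preconditioner in the main paper is omitted here):
\begin{verbatim}
def pyramid_lift(y, Ss):
    # P(y): repeatedly splat, reusing the previous (finer) level.
    levels = [y]
    for S in Ss:                   # S_0, S_1, ..., S_{K-1}
        levels.append(S @ levels[-1])
    return levels                  # [y, S_0 y, S_1 S_0 y, ...]

def pyramid_collapse(levels, Ss):
    # P^T(z): accumulate contributions from the coarsest level down.
    acc = levels[-1]
    for S, z in zip(reversed(Ss), reversed(levels[:-1])):
        acc = S.T @ acc + z
    return acc
\end{verbatim}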
\section{Robustness}
\label{sec:robustness}
Though the quadratic data term of Equation~\ref{eq:pixel_loss} enables our fast least-squares formulation, it has the natural consequence that the bilateral solver is sensitive to outliers in the input target (unless those outliers have been assigned a low confidence).
For some applications where the input target has non-Gaussian noise we may wish to be robust to outliers.
We therefore present a robustified variant of our bilateral solver, which we will call the ``robust bilateral solver'' (RBS).
The RBS minimizes the following loss:
\begin{align}
\underset{\mathbf{x}}{\mathrm{minimize}} &\,\, \frac{\lambda}{2} \sum_{i, j} \hat W_{i,j} \left(x_i - x_j \right)^2 + \sum_i \rho(x_i - t_i)
\label{eq:robust_loss}
\end{align}
Where $\rho(\cdot)$ is some robust error function.
Unless $\rho(\cdot)$ is defined as (weighted) squared-error like in Equation~\ref{eq:pixel_loss}, the optimization problem in Equation~\ref{eq:robust_loss} does not have a closed-form solution.
However, Equation~\ref{eq:robust_loss} can be solved using iteratively reweighted least squares (IRLS) \cite{beaton1974} by repeatedly linearizing the loss function around the current estimate of $\mathbf{x}$ and then solving a least-squares problem corresponding to that linearization.
The linearized version of Equation~\ref{eq:robust_loss} in IRLS takes on exactly the same form as Equation~\ref{eq:pixel_loss}, where the ``confidence'' $\mathbf{c}$ is replaced by the ``weight'' generated during IRLS.
Thus we can produce a robust bilateral solver by wrapping the standard bilateral solver in a loop, recomputing the weight at each iteration.
The RBS can be used with any standard robust loss function that would be used for M-estimation.
We use the Geman-McClure loss function \cite{geman1987}, a smooth approximation to the $\ell_0$-norm, whose loss $\rho(\cdot)$ and corresponding IRLS weight $w(\cdot)$ are:
\begin{equation}
\rho(e_i) = { e_i^2 \over \sigma_{\mathit{gm}}^2 + e_i^2 } \quad\quad w(e_i) = { 2 \sigma_{\mathit{gm}}^2 \over (\sigma_{\mathit{gm}}^2 + e_i^2)^2 }
\end{equation}
where $\sigma_{\mathit{gm}}$ is a scale parameter.
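As a quick check, this weight follows from the standard M-estimation identity $w(e) = \rho'(e) / e$:
\begin{equation}
\rho'(e) = { 2 e \sigma_{\mathit{gm}}^2 \over (\sigma_{\mathit{gm}}^2 + e^2)^2 } \quad \Rightarrow \quad w(e) = { \rho'(e) \over e } = { 2 \sigma_{\mathit{gm}}^2 \over (\sigma_{\mathit{gm}}^2 + e^2)^2 }
\end{equation}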
Pseudocode for the RBS with this Geman-McClure loss can be found in Algorithm~\ref{alg:RBS}.
\begin{algorithm}[!]
\caption{The Geman-McClure Robust Bilateral Solver \label{alg:RBS} \\
{\bf Input:} \\
$\mathrm{solve}(\cdot)$ // The bilateral solver \\
$\mathbf{t}$ // The input target vector \\
$\mathbf{c}_\mathit{init}$ // The input confidence vector \\
$\sigma_{\mathit{gm}}$ // The scale parameter of our Geman-McClure function \\
$n$ // The number of IRLS iterations \\
{\bf Output:} \\
$\mathbf{\hat x}$ // $\mathbf{x}$ such that Equation~\ref{eq:robust_loss} is minimized
}
\begin{algorithmic}[1]
\State $\mathbf{c} \gets \mathbf{c}_\mathit{init}$
\State $i \gets 0$
\While{$i < n$}
\State $\mathbf{\hat x} \gets \mathrm{solve}(\mathbf{t}, \mathbf{c} )$
\State $\mathbf{e} \gets (\mathbf{\hat x} - \mathbf{t} )$
\State $\mathbf{c} \gets { 2 \sigma_{\mathit{gm}}^2 \over \left(\sigma_{\mathit{gm}}^2 + \left(\mathbf{e} \circ \mathbf{e} \right) \right)^2 }$
\State $i \gets i + 1$
\EndWhile
\end{algorithmic}
\end{algorithm}
Because our loss function is non-convex, the IRLS loop of the RBS is sensitive to initialization. Our RBS interface therefore takes as input some initial confidence $\mathbf{c}_\mathit{init}$, which is used in the first least-squares solve and then overwritten by the IRLS weights in subsequent iterations; we can improve performance by setting $\mathbf{c}_\mathit{init}$ to reflect some noise model computed from the input to the solver.
In the case of our Middlebury stereo experiment, we construct this initial confidence according to the heuristic observation that accurate depth maps tend to have contiguous image regions with low-variance depths.
To identify such regions we compute an edge-aware measure of depth variance on the input depth map $Z$ using the recursive formulation of the domain transform \cite{GastalOliveira2011DomainTransform}, which is a simple and fast edge-aware filter.
To see how we do this, let us first review the definition of variance:
\begin{equation}
\operatorname{Var}(x) = \operatorname{E}[x^2] - \left(\operatorname{E}[x]\right)^2
\end{equation}
Where $\operatorname{E}$ is the expectation operator.
Any (normalized) linear image filtering operation with non-negative weights can be thought of as computing some local expectation of the input image for every pixel.
When coupled with an edge-aware image filtering operation such as the domain transform (DT), we can compute a local edge-aware variance $V$ of our input depth map $Z$ as follows:
\begin{equation}
V = \mathrm{DT}( Z^2 ) - \left(\mathrm{DT}( Z )\right)^2
\end{equation}
We initialize our input confidence by exponentiating this scaled, negated variance:
\begin{equation}
\mathbf{c}_\mathit{init} = \exp\left( - {V \over 2 \sigma_{dt}^2 } \right)
\end{equation}
Our parameters are tuned to maximize performance on the Middlebury training set: $\sigma_{xy} = \sigma_{rgb} = 32$, $\sigma_{dt} = 2$.
We additionally modify this initial confidence using the observation that the depth at one side of each image in a stereo pair is generally poorly estimated, as the true matches for those pixels are often not present in the other image of the pair.
We address this by setting the leftmost $80$ columns of $\mathbf{c}_\mathit{init}$ to $0$, thereby causing them to be initially ignored.
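A sketch of this initialization, with \texttt{dt} standing in for the recursive domain transform (any normalized edge-aware filter with non-negative weights would serve):
\begin{verbatim}
import numpy as np

def initial_confidence(Z, dt, sigma_dt=2.0):
    # Edge-aware local variance: Var(x) = E[x^2] - (E[x])^2.
    V = dt(Z ** 2) - dt(Z) ** 2
    c = np.exp(-V / (2.0 * sigma_dt ** 2))
    c[:, :80] = 0.0  # the leftmost columns rarely have true matches
    return c
\end{verbatim}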
See Figure~\ref{fig:variance} for a visualization of the initial confidence estimated using this procedure on one of the test-set depth maps from the Middlebury stereo benchmark v3.
\begin{figure}[!]
\centering
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{testset_figures/13/MC-CNN_input_overlay.jpg}
\caption{Input depth map $Z$ from MC-CNN\cite{Zbontar2015}
}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{testset_figures/W_init.jpeg}
\caption{Estimated input confidence $\mathbf{c}_\mathit{init}$
}
\end{subfigure}
\caption{
When processing the depth maps produced by other stereo algorithms, we first use an edge-aware variance estimation technique to produce initial confidence measures used by our robust bilateral solver.
This procedure downweights contiguous image regions with inconsistent depths.
\label{fig:variance}
}
\end{figure}
\section{Multiple Output Channels}
\label{sec:qr}
The tasks to which we apply our bilateral solver consist of inputs with a single channel (depth, stereo, etc.) or many channels ($21$, for our semantic segmentation benchmark).
As mentioned previously, our model generalizes straightforwardly to problems with $n$-channel ``target'' inputs by simply decomposing into $n$ independent optimization problems with the same $A$ matrix and different $\mathbf{b}$ vectors.
By concatenating these $\mathbf{y}$ and $\mathbf{b}$ vectors into matrices, this can be rewritten as one large linear system:
\begin{equation}
AY = B
\label{eq:matrix_linear_system}
\end{equation}
For applications such as semantic segmentation this $B$ matrix is often very wide, but also very low-rank --- many object categories may have extremely low probabilities in the input, and some object categories may be strongly correlated.
In these cases we can produce a reduced linear system with fewer right-hand-sides by producing a low-rank approximation to $B$, solving the resulting linear system, and then expanding our low-rank solution back to input space.
This can dramatically speed up convergence, often by an order of magnitude, as many semantic segmentation algorithms often only assign a non-trivial probability to a small fraction of object categories for any given scene.
Our approach is similar to a simplified and approximate version of ``block'' conjugate gradient \cite{Oleary1980}.
To reduce our $B$ matrix we use a rank-revealing QR factorization \cite{Chan1987}, which is often used for similar tasks due to its stability and speed.
\begin{equation}
BP=QR
\end{equation}
Where $P$ is a permutation matrix, $Q$ is an orthonormal basis for the columns of $B$, and $R$ is an upper triangular matrix.
With this, let us construct a modified factorization of $B$:
\begin{equation}
B = Q' R'
\end{equation}
Where we construct $R'$ by taking $R P^\mathrm{T}$ and then shuffling the rows of the resulting matrix such that the rows of $R'$ have non-increasing Euclidean norms, and $Q'$ is $Q$ with its columns shuffled accordingly.
Let us define a vector $\mathbf{m}$ which is the ``mass'' (squared Euclidean norm) of each row of $R'$:
\begin{equation}
m_i = \sum_j {R'}_{i,j}^2
\end{equation}
Let us define some tolerance $\epsilon$, which is an upper bound on the fraction of the mass of $B$ that we are willing to discard.
We find the largest value of $t$ such that:
\begin{equation}
\sum_{i=0}^{i < t} m_i \leq (1 - \epsilon) \sum_i m_i
\end{equation}
With this we can drop the least important rows of $R'$ and columns of $Q'$:
\begin{equation}
\tilde{Q} = Q'[:,1\!:\!t] \quad\quad\quad \tilde{R} = R'[1\!:\!t, :]
\end{equation}
We can use $\tilde{Q}$ to solve the reduced linear system, and then multiply the solution to that linear system by $\tilde{R}$ to approximate the solution to the original system:
\begin{equation}
Y = (A^{-1} \tilde{Q}) \tilde{R}
\end{equation}
In our semantic segmentation experiment, for a small tolerance of $\epsilon = 0.01$ the output of this approximation is often indistinguishable from the exact solution, despite being $\approx 5\times$ faster on average.
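A sketch of this reduction using SciPy's pivoted QR factorization; here \texttt{solve\_columns} is a hypothetical helper that runs the bilateral solver once per column of $\tilde Q$:
\begin{verbatim}
import numpy as np
import scipy.linalg

def reduce_rhs(B, eps=0.01):
    # Pivoted QR: B[:, piv] = Q R, so B = Q R' where R'[:, piv] = R.
    Q, R, piv = scipy.linalg.qr(B, mode='economic', pivoting=True)
    Rp = np.zeros_like(R)
    Rp[:, piv] = R
    # Sort the rows of R' (and columns of Q) by non-increasing mass.
    mass = np.sum(Rp ** 2, axis=1)
    order = np.argsort(-mass)
    Qp, Rp, mass = Q[:, order], Rp[order], mass[order]
    # Keep the largest t whose kept mass is <= (1 - eps) of the total.
    t = max(1, int(np.sum(np.cumsum(mass) <= (1 - eps) * mass.sum())))
    return Qp[:, :t], Rp[:t]   # B is approximately Qp[:, :t] @ Rp[:t]

# Y = solve_columns(Q_tilde) @ R_tilde approximates the full solution.
\end{verbatim}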
\section{Results}
\label{sec:results}
Here we present many additional results for the four tasks explored in the main paper, in the form of additional figures and expanded tables of results, as well as details regarding the evaluation of baseline techniques.
\subsection{Stereo}
In the main paper we demonstrated the value of our robust bilateral solver as a post-processing procedure for stereo algorithms, using the Middlebury Stereo Benchmark V3 \cite{Scharstein2014}.
See Figures~\ref{fig:testset_middlebury1}-\ref{fig:testset_middlebury5} for examples of test set images from the Middlebury dataset in which we post-process the output of the state-of-the-art MC-CNN stereo technique\cite{Zbontar2015}, which we obtained using the publicly available source accompanying the technique.
See Figures~\ref{fig:middlebury1} and \ref{fig:middlebury2} for additional results on training set images from the Middlebury dataset, where we evaluate against a wider selection of stereo algorithms.
See Figures~\ref{fig:middleburyAlgo1} and \ref{fig:middleburyAlgo2} for results in which we have post-processed the training-set output of these top-performing stereo algorithms with baseline edge-aware filtering or depth-map enhancement techniques.
We present these results on the training set because the Middlebury benchmark makes the training-set output of other stereo algorithms freely available, while obtaining test-set results requires the code or cooperation of the authors of each technique, or re-implementing these techniques.
The depth maps corresponding to each baseline stereo technique were therefore downloaded from the Middlebury website.
The outputs corresponding to each post-processing technique we compare against were produced by us, taking care to tune all parameters to maximize performance on the Middlebury training set (the geometric mean of all 6 error metrics) for the MC-CNN output; these parameters are then used for all other stereo techniques.
For complete transparency, we will detail the procedure used to produce each baseline result:
\begin{description}[align=left]
\item [TF] This is the tree-filtering technique of Yang \cite{Yang2015}, which we ran ourselves using the publicly available code\footnote{\url{http://www.cs.cityu.edu.hk/~qiyang/publications/software/tree_filter.zip}}. The parameter settings used in these experiments are $\sigma_r = 0.25$ with nonlocal filtering, which experimentation showed to be the optimal parameter settings for this task.
\item [FGF] This is our own Matlab implementation of the (color) fast guided filter \cite{He2015}, which we found to be faster than the implementation built into Matlab 2015, and significantly faster than the released code\footnote{\url{http://research.microsoft.com/en-us/um/people/kahe/eccv10/fast-guided-filter-code-v1.rar}} while producing identical results. The parameters were tuned for this task, with a box filter size of $8$, $\epsilon = 0.01^2$, and a subsampling factor of $4$.
\item [WMF] This is the weighted median filter approach of Ma \emph{et al}. \cite{Ma2013}, which we ran using the publicly available code\footnote{\url{http://research.microsoft.com/en-us/um/people/kahe/iccv13wmf/matlab_wmf_release_v1.rar}}. Because this technique was designed for a similar use case to its use here, we used the same parameter settings as were used in the paper: $r = \mathrm{max}(\mathit{width}, \mathit{height})$, $\epsilon=0.01^2$, followed by a median filter.
\item [DT] This is the recursive formulation of the domain transform \cite{GastalOliveira2011DomainTransform} with optimally-tuned parameters for this task ($\sigma_r = 64$, $\sigma_s = 32$).
\end{description}
\begin{table}[b!]
\begin{center}
\caption{
Our RBS improves depth map quality for a variety of state-of-the-art stereo algorithms on the Middlebury Stereo Dataset V3 \cite{Scharstein2014}.
Test-set numbers were taken from the Middlebury website, while training-set numbers were produced by our own evaluation code.
\label{table:middlebury}
}
\begin{tabular}{ l | c c c | c c c }
$\mathrm{Method}$ & \multicolumn{3}{c|}{$\mathrm{All}$} & \multicolumn{3}{c}{$\mathrm{NoOcc}$} \\
& $\mathrm{bad}\,1\%$ & $\mathrm{MAE}$ & $\mathrm{RMSE}$ & $\mathrm{bad}\,1\%$ & $\mathrm{MAE}$ & $\mathrm{RMSE}$ \\
\hline
\multicolumn{7}{l}{\quad Test Set} \\
\hline
$\mathrm{ MC-CNN}$\cite{Zbontar2015} & $ 28.1 $ \cellcolor{Yellow} & $ 17.9 $ & $ 55.0 $ & $ 18.0 $ \cellcolor{Yellow} & $ 3.82 $ & $ 21.3 $ \\
$\mathrm{ MC-CNN}$\cite{Zbontar2015}$\mathrm{ + RBS}$ & $ 28.2 $ & $ 8.19 $ \cellcolor{Yellow} & $ 29.9 $ \cellcolor{Yellow} & $ 18.9 $ & $ 2.67 $ \cellcolor{Yellow} & $ 15.0 $ \cellcolor{Yellow} \\
\multicolumn{7}{c}{} \\
\hline
\multicolumn{7}{l}{\quad Training Set} \\
\hline
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} & $ 20.07 $ & $ 5.93 $ & $ 18.36 $ & $ 10.42 $ \cellcolor{Yellow} & $ 1.94 $ & $ 9.07 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{TF}$\cite{Yang2015} & $ 29.15 $ & $ 5.67 $ & $ 16.18 $ & $ 20.15 $ & $ 2.17 $ & $ 7.71 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{FGF}$\cite{He2015} & $ 32.29 $ & $ 5.91 $ & $ 16.32 $ & $ 23.62 $ & $ 2.42 $ & $ 7.98 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{WMF}$\cite{Ma2013} & $ 33.37 $ & $ 5.30 $ & $ 15.62 $ & $ 26.29 $ & $ 2.32 $ & $ 8.22 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{DT}$\cite{GastalOliveira2011DomainTransform} & $ 25.17 $ & $ 5.69 $ & $ 16.53 $ & $ 15.53 $ & $ 2.01 $ & $ 7.72 $ \\
$\mathrm{MC}$-$\mathrm{CNN}$\cite{Zbontar2015} + $\mathrm{RBS\,(Ours)}$ & $ 19.49 $ \cellcolor{Yellow} & $ 2.81 $ \cellcolor{Yellow} & $ 8.44 $ \cellcolor{Yellow} & $ 11.33 $ & $ 1.40 $ \cellcolor{Yellow} & $ 5.23 $ \cellcolor{Yellow} \\
\hline
$\mathrm{MeshStereo}$\cite{Zhang2015} & $ 21.25 $ \cellcolor{Yellow} & $ 3.83 $ & $ 10.75 $ & $ 15.13 $ \cellcolor{Yellow} & $ 2.21 $ & $ 7.86 $ \\
$\mathrm{MeshStereo}$\cite{Zhang2015} + $\mathrm{TF}$\cite{Yang2015} & $ 27.37 $ & $ 3.81 $ & $ 9.91 $ & $ 21.70 $ & $ 2.26 $ & $ 7.03 $ \\
$\mathrm{MeshStereo}$\cite{Zhang2015} + $\mathrm{FGF}$\cite{He2015} & $ 29.28 $ & $ 3.96 $ & $ 10.03 $ & $ 23.44 $ & $ 2.38 $ & $ 7.12 $ \\
$\mathrm{MeshStereo}$\cite{Zhang2015} + $\mathrm{WMF}$\cite{Ma2013} & $ 32.16 $ & $ 3.87 $ & $ 10.10 $ & $ 27.18 $ & $ 2.39 $ & $ 7.30 $ \\
$\mathrm{MeshStereo}$\cite{Zhang2015} + $\mathrm{DT}$\cite{GastalOliveira2011DomainTransform} & $ 23.97 $ & $ 3.77 $ & $ 10.12 $ & $ 17.92 $ & $ 2.18 $ & $ 7.23 $ \\
$\mathrm{MeshStereo}$\cite{Zhang2015} + $\mathrm{RBS\,(Ours)}$ & $ 21.43 $ & $ 3.22 $ \cellcolor{Yellow} & $ 8.72 $ \cellcolor{Yellow} & $ 15.52 $ & $ 2.03 $ \cellcolor{Yellow} & $ 6.68 $ \cellcolor{Yellow} \\
\hline
$\mathrm{TMAP}$\cite{Psota2015} & $ 23.08 $ & $ 3.98 $ & $ 11.55 $ & $ 16.29 $ & $ 2.24 $ & $ 7.61 $ \\
$\mathrm{TMAP}$\cite{Psota2015} + $\mathrm{TF}$\cite{Yang2015} & $ 27.47 $ & $ 3.94 $ & $ 10.90 $ & $ 20.88 $ & $ 2.26 $ & $ 7.04 $ \\
$\mathrm{TMAP}$\cite{Psota2015} + $\mathrm{FGF}$\cite{He2015} & $ 32.16 $ & $ 4.17 $ & $ 10.79 $ & $ 25.66 $ & $ 2.50 $ & $ 6.99 $ \\
$\mathrm{TMAP}$\cite{Psota2015} + $\mathrm{WMF}$\cite{Ma2013} & $ 33.90 $ & $ 4.11 $ & $ 11.01 $ & $ 28.39 $ & $ 2.56 $ & $ 7.61 $ \\
$\mathrm{TMAP}$\cite{Psota2015} + $\mathrm{DT}$\cite{GastalOliveira2011DomainTransform} & $ 24.93 $ & $ 3.86 $ & $ 10.92 $ & $ 17.99 $ & $ 2.15 $ & $ 7.02 $ \\
$\mathrm{TMAP}$\cite{Psota2015} + $\mathrm{RBS\,(Ours)}$ & $ 22.79 $ \cellcolor{Yellow} & $ 3.31 $ \cellcolor{Yellow} & $ 9.44 $ \cellcolor{Yellow} & $ 16.09 $ \cellcolor{Yellow} & $ 2.06 $ \cellcolor{Yellow} & $ 6.72 $ \cellcolor{Yellow} \\
\hline
$\mathrm{SGM}$\cite{Hirschmuller05accurateand} & $ 24.37 $ & $ 3.85 $ & $ 10.68 $ & $ 17.91 $ \cellcolor{Yellow} & $ 2.44 $ & $ 8.04 $ \\
$\mathrm{SGM}$\cite{Hirschmuller05accurateand} + $\mathrm{TF}$\cite{Yang2015} & $ 31.47 $ & $ 3.82 $ & $ 9.55 $ & $ 25.36 $ & $ 2.51 $ & $ 6.95 $ \cellcolor{Yellow} \\
$\mathrm{SGM}$\cite{Hirschmuller05accurateand} + $\mathrm{FGF}$\cite{He2015} & $ 34.40 $ & $ 4.05 $ & $ 9.66 $ & $ 28.42 $ & $ 2.72 $ & $ 7.09 $ \\
$\mathrm{SGM}$\cite{Hirschmuller05accurateand} + $\mathrm{WMF}$\cite{Ma2013} & $ 35.47 $ & $ 3.97 $ & $ 9.99 $ & $ 30.48 $ & $ 2.76 $ & $ 7.72 $ \\
$\mathrm{SGM}$\cite{Hirschmuller05accurateand} + $\mathrm{DT}$\cite{GastalOliveira2011DomainTransform} & $ 28.78 $ & $ 3.85 $ & $ 9.90 $ & $ 22.24 $ & $ 2.49 $ & $ 7.28 $ \\
$\mathrm{SGM}$\cite{Hirschmuller05accurateand} + $\mathrm{RBS\,(Ours)}$ & $ 24.18 $ \cellcolor{Yellow} & $ 3.44 $ \cellcolor{Yellow} & $ 9.21 $ \cellcolor{Yellow} & $ 17.95 $ & $ 2.31 $ \cellcolor{Yellow} & $ 7.06 $
\end{tabular}
\end{center}
\end{table}
For all Middlebury experiments the parameters of our RBS were: $\sigma_{\mathit{xy}} = \sigma_{\mathit{l}} = \sigma_{\mathit{uv}} = 4$, $\lambda = 0.25$, $\sigma'_{\mathit{xy}} = \sigma'_{\mathit{rgb}} = 4$, and $\sigma_{\mathit{gm}} = 1$ (the scale of the Geman-McClure loss function); we performed $32$ iterations of IRLS.
For our test-set results, we present six error metrics for each image: bad-$1$\% (the percent of pixels whose disparity is wrong by more than $1$), MAE (the mean absolute error of the disparity map), and RMSE (the root-mean-squared error of the disparity map), each computed over all pixels and over only non-occluded pixels.
We generally see a reduction in the MAE and RMSE error metrics, usually by around $50\%$, and a relatively unchanged bad-$1$\% metric, which suggests that our solver has a substantial and positive impact on quality.
This improvement is quite clear when visualizing the output depth maps, as shown in Figures~\ref{fig:testset_middlebury1}--\ref{fig:testset_middlebury5}, suggesting that the ``all or nothing'' bad-$1$\% metric does not seem to correlate well with the visual quality of the depth map.
Our error reduction is roughly consistent across all choices of stereo techniques used to produce the input depth maps to our algorithm (though for all post-processing techniques the improvement is most significant for MC-CNN, as that is the environment in which parameters were tuned).
The baseline depth post-processing techniques we evaluate against often do reduce some error measures, though all tend to increase the $\mathrm{bad}\,1\%$ error measure, and none produce as substantial a reduction of MAE and RMSE as our approach.
The improvement of our approach over those baseline approaches is evident upon visual inspection, as can be seen in Figures~\ref{fig:middleburyAlgo1} and \ref{fig:middleburyAlgo2}.
Our test-set errors were taken from the Middlebury website, while training-set errors were produced with our own evaluation code.
We do not report runtime for this task, as the runtime of our technique and all baselines is dominated by the time taken by the MC-CNN technique common among all entries.
\subsection{Stereo-based Defocus}
Though the primary focus of this work is the bilateral solver and not the ``defocus'' task of Barron \emph{et al}. \cite{Barron2015A}, we would be remiss to omit a comparison of our technique with the optimization technique presented in \cite{Barron2015A}, given the similarities between the two techniques.
The stereo algorithm of \cite{Barron2015A} performs brittle block-matching on a rectified stereo pair to produce, for each pixel, an interval $[l_i, u_i]$ of likely depths for that pixel.
This data term appears to have been chosen for its efficacy, and because its simple piecewise-linear form allowed this pixel-space loss to be easily ``splatted'' into bilateral space to construct a convex (though non-linear) optimization problem.
This data term is not compatible with our bilateral solver, and so we must convert it to the expected input: a per-pixel ``target'' value and ``confidence'' measure.
We simply use the average of the upper and lower interval as the target and the exponentiated negative length of the interval as the confidence:
\begin{equation}
t_i = (l_i + u_i)/2 \quad\quad c_i = \exp(l_i - u_i)
\end{equation}
This simple reparametrization combined with our bilateral solver produces an effective stereo technique for this ``defocus'' task, while being roughly $3.5\times$ faster than the already-efficient solver of \cite{Barron2015A}.
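In code, this conversion is only a couple of lines (a sketch, with \texttt{lo} and \texttt{hi} as per-pixel interval arrays):
\begin{verbatim}
import numpy as np

def interval_to_target_confidence(lo, hi):
    t = 0.5 * (lo + hi)   # interval midpoint as the target
    c = np.exp(lo - hi)   # tighter interval -> higher confidence
    return t, c
\end{verbatim}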
See Figures~\ref{fig:stereo_defocus_supp1} and \ref{fig:stereo_defocus_supp2} for a visualization of our performance relative to \cite{Barron2015A}, using some of the stereo pairs from \cite{Barron2015A}.
Though this is only a modest improvement over \cite{Barron2015A}, it is reassuring that our general-purpose bilateral solver not only applies to a host of problems in computer vision, but also performs at least as well as its much more specialized precursor technique.
Note that \cite{Barron2015A} reported large errors on the Middlebury V2 dataset, while our technique can be used for both accurate depth maps and pleasing graphical effects.
\subsection{Depth Superresolution}
\begin{table}[b!]
\caption{
Performance on the depth superresolution task of \cite{ferstl2013b}.
We report root-mean-squared error, as was done in \cite{Kwon2015}, along with the geometric mean of those errors over the entire dataset.
Times other than our own are indicated by colors:
Green runtimes are from \cite{ferstl2013b}, blue runtimes are from \cite{Kwon2015}, pink runtimes are from \cite{Lu2015}, and the teal runtime is from \cite{Ma2013}. The beige runtimes were produced by us, but on a different, faster computer.
Algorithms which use external training data are indicated with a dagger.
\label{table:super_supp}
}
\begin{center}
\resizebox{4.8in}{!}{
\large
\begin{tabular}{r@{}r@{}l | c c c c | c c c c | c c c c | c | c }
&& $\mathrm{Method}$ & \multicolumn{4}{c|}{$\mathrm{Art}$} & \multicolumn{4}{c|}{$\mathrm{Books}$} & \multicolumn{4}{c|}{$\mathrm{Moebius}$} & $\mathrm{Avg.}$ & $\mathrm{Time\,(sec)}$ \\
\hline
A) & & $\mathrm{Nearest\,Neighbor}$ & $ 6.55 $ & $ 7.41 $ & $ 8.87 $ & $ 11.24 $ & $ 6.16 $ & $ 6.32 $ & $ 6.63 $ & $ 7.36 $ & $ 6.59 $ & $ 6.78 $ & $ 6.98 $ & $ 7.48 $ & $ 7.26 $ & $ 0.003 $ \\
B) & & $\mathrm{Bicubic}$ & $ 5.32 $ & $ 6.00 $ & $ 7.15 $ & $ 9.35 $ & $ 5.00 $ & $ 5.17 $ & $ 5.46 $ & $ 5.98 $ & $ 5.34 $ & $ 5.52 $ & $ 5.66 $ & $ 6.07 $ & $ 5.91 $ & $ 0.007 $ \\
C) & $\dagger$ & $\mathrm{Kiechle\,\emph{et al}. }$\cite{Kiechle2013} & \cellcolor{Orange} $ 2.82 $ & $ 5.10 $ & $ 6.83 $ & $ 10.80 $ & $ 3.83 $ & $ 5.10 $ & $ 6.12 $ & $ 8.43 $ & $ 4.50 $ & $ 5.73 $ & $ 6.64 $ & $ 8.96 $ & $ 5.86 $ & \cellcolor{Blue} $ 450 $ \\
D) & & $\mathrm{Bilinear}$ & $ 4.57 $ & $ 5.53 $ & $ 6.99 $ & $ 9.45 $ & $ 3.94 $ & $ 4.31 $ & $ 4.71 $ & $ 5.38 $ & $ 4.19 $ & $ 4.55 $ & $ 4.83 $ & $ 5.37 $ & $ 5.16 $ & $ 0.004 $ \\
E) & & $\mathrm{Liu\,\emph{et al}.\,}$\cite{Liu2013} & $ 4.10 $ & $ 5.43 $ & $ 7.69 $ & $ 11.36 $ & $ 3.08 $ & $ 3.87 $ & $ 4.82 $ & $ 6.46 $ & $ 3.18 $ & $ 4.04 $ & $ 5.11 $ & $ 6.62 $ & $ 5.10 $ & $ 16.60 $ \\
F) & & $\mathrm{Shen\,\emph{et al}.\,}$\cite{Shen2015} & $ 3.49 $ & $ 4.62 $ & $ 6.13 $ & $ 8.68 $ & $ 2.86 $ & $ 3.48 $ & $ 4.43 $ & $ 5.57 $ & $ 2.29 $ & $ 3.07 $ & $ 4.22 $ & $ 5.43 $ & $ 4.24 $ & $ 31.48 $ \\
G) & & $\mathrm{Diebel\,\&\,Thrun\,}$\cite{Diebel05b} & $ 3.49 $ & $ 4.41 $ & $ 6.24 $ & $ 9.11 $ & $ 2.06 $ & $ 3.00 $ & $ 4.06 $ & $ 5.13 $ & $ 2.13 $ & $ 3.10 $ & $ 4.14 $ & $ 5.12 $ & $ 3.98 $ & $ - $ \\
H) & & $\mathrm{Chan\,\emph{et al}. }$\cite{chan2008} & $ 3.44 $ & $ 4.38 $ & $ 5.98 $ & $ 8.41 $ & $ 2.09 $ & $ 2.77 $ & $ 3.78 $ & $ 5.45 $ & $ 2.08 $ & $ 2.69 $ & $ 3.73 $ & $ 5.33 $ & $ 3.83 $ & \cellcolor{Green}$ 3.02 $ \\
I) & & $\mathrm{Guided Filter }$\cite{He2010,ferstl2013b} & $ 3.55 $ & $ 4.31 $ & $ 5.59 $ & $ 8.22 $ & $ 2.37 $ & $ 2.73 $ & $ 3.42 $ & $ 4.52 $ & $ 2.48 $ & $ 2.82 $ & $ 3.54 $ & $ 4.53 $ & $ 3.76 $ & \cellcolor{Green} $ 23.89 $ \\
J) & & $\mathrm{Min\,\emph{et al}.}$\cite{Min2014} & $ 3.65 $ & $ 4.08 $ & $ 5.09 $ & $ 7.91 $ & $ 2.85 $ & $ 2.77 $ & $ 2.97 $ & $ 3.81 $ & $ 3.46 $ & $ 3.25 $ & $ 3.20 $ & $ 3.86 $ & $ 3.74 $ & $ 0.383 $ \\
K) & $\dagger$ & $\mathrm{Lu\,\&\,Forsyth }$\cite{Lu2015} & $ 4.30 $ & $ 5.05 $ & $ 6.33 $ & $ 7.94 $ & $ 2.17 $ & $ 2.71 $ & $ 3.30 $ & $ 4.29 $ & $ 2.16 $ & $ 2.50 $ & $ 3.15 $ & $ 4.10 $ & $ 3.69 $ & \cellcolor{Pink} $ 20 $ \\
L) & & $\mathrm{Park\,\emph{et al}. }$\cite{Park2011} & $ 3.76 $ & $ 4.48 $ & $ 5.80 $ & $ 8.75 $ & $ 1.95 $ & $ 2.60 $ & $ 3.30 $ & $ 4.86 $ & $ 1.96 $ & $ 2.49 $ & $ 3.21 $ & $ 4.48 $ & $ 3.61 $ & \cellcolor{Green}$ 24.05 $ \\
M) & &$\mathrm{Domain\,Transform\,}$\cite{GastalOliveira2011DomainTransform} & $ 3.95 $ & $ 4.76 $ & $ 6.14 $ & $ 8.49 $ & $ 1.80 $ & $ 2.40 $ & $ 3.23 $ & $ 4.44 $ & $ 1.83 $ & $ 2.40 $ & $ 3.35 $ & $ 4.64 $ & $ 3.56 $ & $ 0.021 $ \\
N) & & $\mathrm{Ma\,\emph{et al}.\,}$\cite{Ma2013} & $ 3.27 $ & $ 3.99 $ & $ 5.08 $ & \cellcolor{Yellow} $ 7.39 $ & $ 2.39 $ & $ 2.70 $ & $ 3.09 $ & $ 3.77 $ & $ 2.55 $ & $ 2.84 $ & $ 3.23 $ & $ 3.81 $ & $ 3.49 $ & \cellcolor{Teal} $ 18 $ \\
O) & & $\mathrm{Guided Filter (Matlab) }$\cite{He2010} & $ 3.60 $ & $ 4.25 $ & $ 5.49 $ & $ 7.99 $ & $ 2.39 $ & $ 2.52 $ & $ 2.89 $ & $ 3.89 $ & $ 2.50 $ & $ 2.57 $ & $ 2.90 $ & $ 3.61 $ & $ 3.47 $ & $ 0.434 $ \\
P) & & $\mathrm{Zhang\,\emph{et al}.\,}$\cite{Zhang2014} & $ 4.15 $ & $ 4.22 $ & $ 5.03 $ & $ 7.86 $ & $ 1.96 $ & $ 2.24 $ & $ 3.13 $ & $ 4.80 $ & $ 1.80 $ & $ 2.19 $ & $ 3.22 $ & $ 4.90 $ & $ 3.45 $ & \cellcolor{WindowsColor} $ 1.346 $ \\
Q) & & $\mathrm{Fast Guided Filter }$\cite{He2015} & $ 3.40 $ & $ 4.16 $ & $ 5.46 $ & $ 7.97 $ & $ 2.08 $ & $ 2.51 $ & $ 3.04 $ & $ 3.95 $ & $ 2.13 $ & $ 2.55 $ & $ 3.08 $ & $ 3.79 $ & $ 3.41 $ & $ 0.225 $ \\
R) & & $\mathrm{Yang\,2015\,}$\cite{Yang2015} & $ 3.27 $ & $ 4.15 $ & $ 5.46 $ & $ 7.93 $ & $ 2.00 $ & $ 2.38 $ & $ 3.00 $ & $ 4.04 $ & $ 2.25 $ & $ 2.57 $ & $ 3.13 $ & $ 4.00 $ & $ 3.41 $ & \cellcolor{WindowsColor} $ 0.304 $ \\
S) & & $\mathrm{Yang\,\emph{et al}.\,2007\,}$\cite{Yang2007} & $ 3.01 $ & $ 3.92 $ & \cellcolor{Yellow} $ 4.85 $ & $ 7.57 $ & $ 1.87 $ & $ 2.38 $ & $ 2.86 $ & $ 4.26 $ & $ 1.92 $ & $ 2.41 $ & $ 2.96 $ & $ 4.37 $ & $ 3.25 $ & $ - $ \\
T) & & $\mathrm{Farbman\,\emph{et al}.\,}$\cite{FFLS2008} & $ 3.14 $ & $ 4.00 $ & $ 5.30 $ & $ 7.70 $ & $ 1.76 $ & $ 2.26 $ & $ 2.90 $ & $ 3.88 $ & $ 1.79 $ & $ 2.29 $ & $ 2.98 $ & $ 3.93 $ & $ 3.19 $ & $ 6.11 $ \\
U) & & $\mathrm{JBU\,}$\cite{Adams2010,Kopf2007} & $ 3.17 $ & $ 4.02 $ & $ 5.37 $ & $ 7.59 $ & $ 1.83 $ & $ 2.18 $ & $ 2.80 $ & $ 4.00 $ & $ 1.83 $ & $ 2.13 $ & $ 2.71 $ & $ 3.76 $ & $ 3.14 $ & $ 1.98 $ \\
V) & & $\mathrm{Ferstl\,\emph{et al}. }$\cite{ferstl2013b} & $ 3.19 $ & $ 4.06 $ & $ 5.08 $ & $ 7.61 $ & $ 1.52 $ & $ 2.21 $ & \cellcolor{Yellow} $ 2.47 $ & \cellcolor{Yellow} $ 3.54 $ & $ 1.47 $ & $ 2.03 $ & $ 2.58 $ & \cellcolor{Yellow} $ 3.50 $ & $ 2.93 $ & \cellcolor{Green}$ 140 $ \\
W) & $\dagger$ & $\mathrm{Li\,\emph{et al}. }$\cite{Li2013} & $ 3.02 $ & \cellcolor{Orange} $ 3.12 $ & \cellcolor{Orange} $ 4.43 $ & $ 7.43 $ & \cellcolor{Orange} $ 1.18 $ & \cellcolor{Orange} $ 1.70 $ & $ 2.55 $ & $ 3.58 $ & \cellcolor{Orange} $ 1.11 $ & \cellcolor{Orange} $ 1.59 $ & \cellcolor{Orange} $ 2.28 $ & $ 3.50 $ & \cellcolor{Orange} $ 2.56 $ & \cellcolor{Blue} $ 700 $ \\
X) & $\dagger$ & $\mathrm{Kwon\,\emph{et al}. }$\cite{Kwon2015} & \cellcolor{Red} $ 0.87 $ & \cellcolor{Red} $ 1.30 $ & \cellcolor{Red} $ 2.05 $ & \cellcolor{Red} $ 3.56 $ & \cellcolor{Red} $ 0.51 $ & \cellcolor{Red} $ 0.75 $ & \cellcolor{Red} $ 1.14 $ & \cellcolor{Red} $ 1.88 $ & \cellcolor{Red} $ 0.57 $ & \cellcolor{Red} $ 0.89 $ & \cellcolor{Red} $ 1.37 $ & \cellcolor{Red} $ 2.14 $ & \cellcolor{Red} $ 1.21 $ & \cellcolor{Blue}$ 300 $ \\
\hline
Y) & & $\mathrm{BS\,(Ours)}$ & \cellcolor{Yellow} $ 2.93 $ & \cellcolor{Yellow} $ 3.79 $ & $ 4.95 $ & \cellcolor{Orange} $ 7.13 $ & \cellcolor{Yellow} $ 1.39 $ & \cellcolor{Yellow} $ 1.84 $ & \cellcolor{Orange} $ 2.38 $ & \cellcolor{Orange} $ 3.29 $ & \cellcolor{Yellow} $ 1.38 $ & \cellcolor{Yellow} $ 1.80 $ & \cellcolor{Yellow} $ 2.38 $ & \cellcolor{Orange} $ 3.23 $ & \cellcolor{Yellow} $ 2.70 $ & $ 0.234 $
\end{tabular}
}
\end{center}
\end{table}
In Table~\ref{table:super_supp} we present an expanded form of the depth superresolution results presented in the main paper, in which the task is broken down into its constituent images and scale factors, as was done in past work \cite{ferstl2013b,Kwon2015}.
Our bilateral solver appears to be roughly the second or third most accurate technique, after \cite{Kwon2015} and sometimes \cite{Li2013}.
But it should be noted that these top-performing techniques \cite{Kiechle2013,Kwon2015,Li2013} are all dictionary-based techniques which are trained on a great deal of data, while our technique produces competitive results despite the simplicity of our model and our lack of training.
What is perhaps the most important property of our model is its speed --- our technique is $600$-$3000\times$ faster than the three most accurate techniques, despite having comparable accuracy to all but \cite{Kwon2015}.
Our performance is most competitive when the upsampling factor is large ($16\times$) which is when the task is inherently most difficult.
Additionally, those techniques with speeds comparable to or better than the bilateral solver \cite{GastalOliveira2011DomainTransform,Yang2015,He2015,Min2014} produce error rates that are 25-40$\%$ greater than our approach.
The only techniques with significantly faster runtimes than the bilateral solver are standard image interpolation techniques (bilinear, bicubic, etc.), which produce low-quality output, and our implementation of the domain transform \cite{GastalOliveira2011DomainTransform}, which is highly optimized, vectorized, and multi-threaded, unlike our own technique and all other techniques we compare against.
More conventional implementations of the domain transform appear to have runtimes comparable to our own, or that of the guided filter\cite{He2010}.
See Figures~\ref{fig:super1}-\ref{fig:super3} for example output for this task.
For transparency and completeness we evaluated against as many applicable baseline techniques as possible.
This is difficult: though many papers present results for denoising or upsampling different versions of the Middlebury dataset, there is no single standard benchmark which is universally adopted.
To this end, we build upon the benchmark used in \cite{ferstl2013b}, but augment it heavily.
We inherit some results from past work \cite{ferstl2013b,Kwon2015}, which is necessary due to the lack of publicly available source code for many techniques.
This means that some runtimes are produced on other hardware platforms, and so these runtimes should be considered accordingly.
Our own runtimes (indicated in Table~\ref{table:super_supp} in white) were benchmarked on a 2012 HP Z420 workstation (Intel Xeon CPU E5-1650, 3.20GHz, 32 GB RAM), and our algorithm is implemented in standard, single-threaded C++.
Runtimes highlighted with a color come from a hardware platform other than our reference hardware.
For clarity, we will now detail the experimental conditions of each baseline technique, both in terms of parameter settings and hardware environments.
For all experiments in which we produced the model output, the parameters of each algorithm were tuned to minimize the average error on this task by starting with the baseline parameters in the publicly available code, and then performing coordinate descent on each parameter, halving and doubling each and taking the new parameter setting if the average error decreased.
For all models in which the code was run by us, the runtime reported is the median of the runtime over all images in the dataset, unless the model's runtime is resolution-dependent, in which case we report the geometric mean of the runtimes.
\begin{description}[align=left]
\item [A] This is nearest-neighbor interpolation, and the reported runtime is that of the Matlab $\mathrm{imresize()}$ function on our workstation.
\item [B] This is bicubic interpolation, and the reported runtime is that of the Matlab $\mathrm{imresize()}$ function on our workstation.
\item [C] This is the technique of Kiechle \emph{et al}. \cite{Kiechle2013}, with errors and runtimes taken directly from \cite{Kwon2015}.
\item [D] This is bilinear interpolation, and the reported runtime is that of the Matlab $\mathrm{imresize()}$ function on our workstation.
\item [E] This is the Joint Geodesic Upsampling technique of \cite{Liu2013}, which we ran ourselves using the publicly available code\footnote{\url{http://www.merl.com/research/license/}}. This technique performs poorly for this task, possibly because it assumes noise-free input. We used the default parameters of the code, which we found optimal for this task: $\mathrm{interval} = 3$, $\sigma = 0.5$, $\lambda_1 = 10$, $\lambda_2 = 1$.
\item [F] This is the Mutual-Structure technique of Shen \emph{et al}. \cite{Shen2015}, which we ran ourselves using the publicly available code\footnote{\url{http://www.cse.cuhk.edu.hk/leojia/projects/mutualstructure/index.html}}. This technique performs poorly on this task, which appears to be due to the low-frequency nature of the depth-map noise inherent in this depth super-resolution task, unlike the high-frequency, per-pixel noise used in the experiments of \cite{Shen2015}. The parameters were tuned for optimal performance on our task: $\epsilon_I = 10^{-5}$, $\epsilon_G = 2 \times 10^{-4}$, $\lambda_I = 1$, $\lambda_G = 4$, $20$ iterations, window size of $2$. A more aggressive smoothing effect could be achieved using larger window sizes which were a function of the upsampling factor, but these oversmoothed depths had significantly higher errors than were produced using these settings.
\item [G] This is the technique presented in Diebel \& Thrun \cite{Diebel05b}, which was used as a baseline algorithm in \cite{ferstl2013b}. We did not run this code ourselves, and produced these error rates using the precomputed output from \cite{ferstl2013b}, which is why we do not have runtimes.
\item [H] This is an implementation of Chan \emph{et al}. \cite{chan2008}, which was used as a baseline algorithm in \cite{ferstl2013b}. We did not run this code ourselves, and produced these error rates using the precomputed output from \cite{ferstl2013b}, and reproduced the runtime quoted in \cite{ferstl2013b}.
\item [I] This is presumably an implementation of the guided filter \cite{He2010}, which was used as a baseline algorithm in \cite{ferstl2013b}. We did not run this code ourselves, and produced these error rates using the precomputed output from \cite{ferstl2013b}, and we took the runtime quoted in \cite{ferstl2013b}. Because the quoted runtime seems unusually slow for the guided filter, we ran two of our own evaluations of the guided filter on this task (models O and Q) which produced lower errors rates and significantly faster runtimes.
\item [J] This is the weighted least-squares approach of Min \emph{et al}. \cite{Min2014}, which we ran ourselves using publicly available code\footnote{\url{https://github.com/soundsilence/ImageSmoothing}}. The parameters were tuned for optimal performance on our task: $\sigma = 0.00125 \times 2^f$ (where $f$ is the upsampling factor), $\lambda = 30^2$, $\mathrm{iteration} = 3$, $\mathrm{attenuation} = 4$.
\item [K] This is the technique presented in Lu \& Forsyth 2015 \cite{Lu2015}. The authors of \cite{Lu2015} ran their own code on the task presented here on their own computer (with comparable specifications to our own) at our request, sent us the output, and reported their approximate runtime, which we report here.
\item [L] This is the technique presented in Park \emph{et al}. \cite{Park2011}, which was used as a baseline algorithm in \cite{ferstl2013b}. We did not run this code ourselves, and produced these error rates using the precomputed output from \cite{ferstl2013b}, which is why we do not have runtimes.
\item [M] This is the recursive formulation of the domain transform \cite{GastalOliveira2011DomainTransform} with optimally-tuned parameters ($\sigma_r = 64$, $\sigma_s = 8f$). We used our own highly optimized implementation of this code, implemented in Halide, with heavy parallelization and vectorization. This is unlike most other baselines and our own algorithm, which are single-threaded unoptimized code.
\item [N] This is the weighted median filter of \cite{Ma2013}, which we ran ourselves using the publicly available code\footnote{\url{http://research.microsoft.com/en-us/um/people/kahe/iccv13wmf/matlab_wmf_release_v1.rar}}. We found the color version of the code (which we used for its improved accuracy) to perform slowly, so the runtime we cite is taken by extrapolating from the quoted runtime for the CPU implementation of \cite{Ma2013}, ($60$ms per megapixel-disparity) on a comparable computer to ours, suggesting that runtime on this task would be $\sim 18$ seconds per image ($\sim 200$ disparities, $\sim 1.5$ megapixels per image). The parameters were tuned for optimal performance on our task: filter size = $2^f$ (where $f$ is the upsampling factor) and $\epsilon = 0.02^2$.
\item [O] This is the implementation of the guided filter \cite{He2010} which is built into the 2015 version of Matlab. The parameters were tuned for this task,
``NeighborhoodSize'' = $3 \times 2^f$, (where $f$ is the upsampling factor) and ``DegreeOfSmoothing'' = $0.5$.
\item [P] This is the weighted median filter of \cite{Zhang2014}, which we ran ourselves using the publicly available code\footnote{\url{http://www.cse.cuhk.edu.hk/~leojia/projects/fastwmedian/download/JointWMF_mex.zip}}. Because the code was only available as a compiled Windows binary, we were forced to use a different computer for this evaluation, and so the runtime is for a significantly faster computer than was used for our other evaluations: a dual-processor Xeon CPU-E5-2690 v3 2.6GHz with 64 GB of RAM. The runtime should therefore be considered a lower bound on the actual runtime for the reference hardware. We used the default parameter settings provided with the reference code for our experiments, which we found to perform optimally for our benchmark.
\item [Q] This is our own Matlab implementation of the (color) fast guided filter \cite{He2015}, which we found to be faster than the implementation built into Matlab 2015, and significantly faster than the released code\footnote{\url{http://research.microsoft.com/en-us/um/people/kahe/eccv10/fast-guided-filter-code-v1.rar}} while producing identical results. The parameters were tuned for this task, with a box filter size of $2^f$ (where $f$ is the upsampling factor), $\epsilon = 0.02^2$, and a subsampling of $2$. Our experimentation suggests that the subsampling could be made more aggressive with little drop in accuracy, but because the algorithm requires that the filter size must be divisible by the subsampling factor, the most aggressive subsampling we could use for all upsampling factors was $2$. From this we can assume that a $4\times$ speedup may be possible.
\item [R] This is the technique presented in Yang \cite{Yang2015}, which we ran ourselves using the publicly available code\footnote{\url{http://www.cs.cityu.edu.hk/~qiyang/publications/software/tree_filter.zip}}. Because the code was only available as a compiled Windows binary, as with Model P this evaluation was performed on a different, faster workstation, and so the runtime should be considered a lower bound on the actual runtime for the reference hardware. The parameter settings used in these experiments are $\sigma_r = 2^{(f-6)}$ (where $f$ is the upsampling factor) and nonlocal filtering, which experimentation showed to be the optimal parameter settings for this task.
\item [S] This is the technique presented in Yang \emph{et al}. \cite{Yang2007}, which was used as a baseline algorithm in \cite{ferstl2013b}. We did not run this code ourselves, and produced these error rates using the precomputed output from \cite{ferstl2013b}, which is why we do not have runtimes.
\item [T] This is the WLS model of Farbman \emph{et al}. \cite{FFLS2008}, which we ran ourselves using the publicly available code\footnote{\url{http://www.cs.huji.ac.il/~danix/epd/wlsFilter.m}}, with the code modified to use non-log color reference images. The parameter settings used in these experiments are $\lambda = 5000 \times 2^{(f-4)}$ (where $f$ is the upsampling factor) and $\alpha = 2$, which experimentation showed to be the optimal parameter settings for this task.
\item [U] This model is standard joint bilateral upsampling \cite{Kopf2007} using a permutohedral lattice \cite{Adams2010} with optimally-tuned parameters ($\sigma_{rgb} = 8$, $\sigma_x = \sigma_y = 8f$, where $f$ is the upsampling factor). These errors and runtimes were produced by us using a C++ implementation of the permutohedral lattice\footnote{\url{https://code.google.com/archive/p/imagestack/}}.
\item [V] These errors and runtimes were taken from the website\footnote{\url{https://rvlab.icg.tugraz.at/project_page/project_tofusion/project_tofsuperresolution.html}} of \cite{ferstl2013b}. Since the publication of this paper, it has been revealed that the error cited in the publication is MAE, not RMSE, so the errors in the paper are incorrect. For details, see the supplement\footnote{\url{https://sites.google.com/site/datadrivendepthcvpr2015/}} of \cite{Kwon2015}.
\item [W] This is the technique of Li \emph{et al}. \cite{Li2013}, with errors and runtimes taken directly from \cite{Kwon2015}.
\item [X] This is the technique of Kwon \emph{et al}. \cite{Kwon2015}, with errors and runtimes taken directly from \cite{Kwon2015}. We attempted to obtain the output depth maps or source code from \cite{Kwon2015}, but the authors were unable to comply with our request, and so we are unable to present the results of \cite{Kwon2015} in our figures or perform a more detailed analysis of how well the algorithm performs, nor can we reproduce these results. Details on the lack of availability of the source and output from this model can be found on the project website\footnote{\url{https://sites.google.com/site/datadrivendepthcvpr2015/}}.
\end{description}
\subsection{Colorization}
See Figure~\ref{fig:colorization_supp} for a comparison of our technique on the colorization task, compared against Levin \emph{et al}. \cite{Levin2004}.
The images and code from \cite{Levin2004} were taken from the paper's website\footnote{\url{www.cs.huji.ac.il/~yweiss/Colorization/}}, and were produced using the exact least-squares solver presented in the paper for still image processing.
Note that \cite{Levin2004} also presents an approximate multigrid optimization which they use for colorizing videos, which is $6 \times$ faster than their exact still image approach but presumably produces inexact results for the still image task.
Several of the techniques we evaluate against for the depth super-resolution task can also be used for colorization, presumably with the same tradeoffs between accuracy and speed seen in that experiment.
Since there is traditionally no ground-truth for this task, we do not present an exhaustive evaluation here.
\subsection{Semantic Segmentation}
See Figure~\ref{fig:supp_seg} for additional examples of our bilateral solver and competing techniques applied to the semantic segmentation task. The bilateral solver produces more edge-aware results than CRF-based techniques on well-isolated objects.
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.72in]{testset_figures/03/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.85in]{testset_figures/03/MC-CNN_input_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} \\
bad 1\% = $10.1$ \quad
MAE = $1.69$ \quad
RMSE = $9.60$
}
\end{subfigure}
\\
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.72in]{testset_figures/03/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.85in]{testset_figures/03/MC-CNN_output_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} + RBS \\
bad 1\% = $10.7$ \quad
MAE = $1.63$ \quad
RMSE = $8.72$
}
\end{subfigure}
\caption{
Results on the test set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} where our robust bilateral solver is used to improve the depth map predicted by the state-of-the-art MC-CNN technique\cite{Zbontar2015}.
On the top we have the depth map produced by \cite{Zbontar2015} (with zoomed in regions) which is used as the target in our solver.
On the bottom we have the output of our solver, where we see that quality is significantly improved.
We report the error for each depth map in terms of the percent of pixels whose disparity is off by more than 1 (``bad 1\%''), the mean-absolute-error of the disparity, and the root-mean-squared-error of the disparity.
\label{fig:testset_middlebury1}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.8in]{testset_figures/05/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.80in]{testset_figures/05/MC-CNN_input_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} \\
bad 1\% = $23.4$ \quad
MAE = $17.1$ \quad
RMSE = $67.4$
}
\end{subfigure}
\\
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.8in]{testset_figures/05/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.80in]{testset_figures/05/MC-CNN_output_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} + RBS \\
bad 1\% = $24.4$ \quad
MAE = $4.30$ \quad
RMSE = $19.9$
}
\end{subfigure}
\caption{More results on the test set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} in the same format as Figure~\ref{fig:testset_middlebury1}.
\label{fig:testset_middlebury2}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.76in]{testset_figures/07/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.83in]{testset_figures/07/MC-CNN_input_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} \\
bad 1\% = $25.0$ \quad
MAE = $2.47$ \quad
RMSE = $23.2$
}
\end{subfigure}
\\
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.76in]{testset_figures/07/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.83in]{testset_figures/07/MC-CNN_output_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} + RBS \\
bad 1\% = $27.1$ \quad
MAE = $2.81$ \quad
RMSE = $24.2$
}
\end{subfigure}
\caption{More results on the test set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} in the same format as Figure~\ref{fig:testset_middlebury1}.
\label{fig:testset_middlebury3}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.7in]{testset_figures/13/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.86in]{testset_figures/13/MC-CNN_input_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} \\
bad 1\% = $22.5$ \quad
MAE = $6.00$ \quad
RMSE = $38.8$
}
\end{subfigure}
\\
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.7in]{testset_figures/13/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.86in]{testset_figures/13/MC-CNN_output_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} + RBS \\
bad 1\% = $22.8$ \quad
MAE = $3.02$ \quad
RMSE = $17.9$
}
\end{subfigure}
\caption{More results on the test set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} in the same format as Figure~\ref{fig:testset_middlebury1}.
\label{fig:testset_middlebury4}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.76in]{testset_figures/15/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.85in]{testset_figures/15/MC-CNN_input_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} \\
bad 1\% = $22.0$ \quad
MAE = $5.66$ \quad
RMSE = $28.6$
}
\end{subfigure}
\\
\begin{subfigure}[!]{4.8in}
\includegraphics[width=3.76in]{testset_figures/15/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.85in]{testset_figures/15/MC-CNN_output_tiles.jpg}
\caption{
MC-CNN\cite{Zbontar2015} + RBS \\
bad 1\% = $21.6$ \quad
MAE = $3.19$ \quad
RMSE = $17.9$
}
\end{subfigure}
\caption{More results on the test set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} in the same format as Figure~\ref{fig:testset_middlebury1}.
\label{fig:testset_middlebury5}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_input_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_output_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + RBS}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MeshStereo_input_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MeshStereo_input_tiles.jpg}
\caption{MeshStereo\cite{Zhang2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MeshStereo_output_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MeshStereo_output_tiles.jpg}
\caption{MeshStereo\cite{Zhang2015} + RBS}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/TMAP_input_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/TMAP_input_tiles.jpg}
\caption{TMAP\cite{Psota2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/TMAP_output_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/TMAP_output_tiles.jpg}
\caption{TMAP\cite{Psota2015} + RBS}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/SGM_input_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/SGM_input_tiles.jpg}
\caption{SGM\cite{Hirschmuller05accurateand}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/SGM_output_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/SGM_output_tiles.jpg}
\caption{SGM\cite{Hirschmuller05accurateand} + RBS}
\end{subfigure}
\caption{
Results on the training set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014}, where our robust bilateral solver is used to improve the depth maps predicted by four top-performing stereo algorithms.
On the left we show the depth map produced by each stereo algorithm (with zoomed-in regions), which is used as the target in our solver.
On the right we show the output of our solver, whose quality is significantly improved.
\label{fig:middlebury1}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_input_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_output_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + RBS}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MeshStereo_input_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MeshStereo_input_tiles.jpg}
\caption{MeshStereo\cite{Zhang2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MeshStereo_output_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MeshStereo_output_tiles.jpg}
\caption{MeshStereo\cite{Zhang2015} + RBS}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/TMAP_input_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/TMAP_input_tiles.jpg}
\caption{TMAP\cite{Psota2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/TMAP_output_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/TMAP_output_tiles.jpg}
\caption{TMAP\cite{Psota2015} + RBS}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/SGM_input_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/SGM_input_tiles.jpg}
\caption{SGM\cite{Hirschmuller05accurateand}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/SGM_output_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/SGM_output_tiles.jpg}
\caption{SGM\cite{Hirschmuller05accurateand} + RBS}
\end{subfigure}
\caption{More results on the training set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} in the same format as Figure~\ref{fig:middlebury1}.
\label{fig:middlebury2}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_input_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_output_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + RBS (Ours)}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_tf_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_tf_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + TF \cite{Yang2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_wmf_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_wmf_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + WMF \cite{Ma2013}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_fgf_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_fgf_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + FGF \cite{He2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/12/MC-CNN_dt_overlay_boxes.jpg}
\includegraphics[width=0.41in]{trainset_figures/12/MC-CNN_dt_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + DT \cite{GastalOliveira2011DomainTransform}}
\end{subfigure}
\caption{
Results on the training set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014}, in which we compare our robust bilateral solver against several baseline techniques for post-processing depth maps.
Our model's output exhibits much higher quality than the input or any baseline, especially at the discontinuities shown in the cropped regions.
\label{fig:middleburyAlgo1}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_input_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_input_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_output_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_output_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + RBS (Ours)}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_tf_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_tf_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + TF \cite{Yang2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_wmf_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_wmf_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + WMF \cite{Ma2013}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_fgf_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_fgf_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + FGF \cite{He2015}}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=1.83in]{trainset_figures/01/MC-CNN_dt_overlay_boxes.jpg}
\includegraphics[width=0.42in]{trainset_figures/01/MC-CNN_dt_tiles.jpg}
\caption{MC-CNN\cite{Zbontar2015} + DT \cite{GastalOliveira2011DomainTransform}}
\end{subfigure}
\caption{More results on the training set of the Middlebury Stereo Dataset V3 \cite{Scharstein2014} in the same format as Figure~\ref{fig:middleburyAlgo1}.
\label{fig:middleburyAlgo2}
}
\end{figure*}
\newcommand{1.5in}{1.5in}
\newcommand{2.3in}{2.3in}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{1.5in}
\includegraphics[width=1.5in]{figures/bsqs_output/01_im0.jpg}
\caption{Input Image}
\end{subfigure}
\begin{subfigure}[!]{1.5in}
\includegraphics[width=1.5in]{figures/bsqs_output/01_input.jpg}
\caption{Depth Input \label{fig:defocus_input}}
\end{subfigure}
\begin{subfigure}[!]{1.5in}
\includegraphics[width=1.5in]{figures/bsqs_output/01_weight.jpg}
\caption{Confidence Input \label{fig:defocus_confidence}}
\end{subfigure}
\\
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/01_output_old.jpg}
\caption{\cite{Barron2015A} Depth Output}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/01_render_old.jpg}
\caption{\cite{Barron2015A} Rendering}
\end{subfigure}
\\
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/01_output.jpg}
\caption{Our Depth Output}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/01_render.jpg}
\caption{Our Rendering}
\end{subfigure}
\caption{
Given the reparametrized input to the solver of \cite{Barron2015A} (\subref{fig:defocus_input} and \subref{fig:defocus_confidence}), our solver produces depth maps and defocused renderings of quality comparable to \cite{Barron2015A}, while being $3.5\times$ faster.
\label{fig:stereo_defocus_supp1}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{1.5in}
\includegraphics[width=1.5in]{figures/bsqs_output/20_im0.jpg}
\caption{Input Image}
\end{subfigure}
\begin{subfigure}[!]{1.5in}
\includegraphics[width=1.5in]{figures/bsqs_output/20_input.jpg}
\caption{Depth Input}
\end{subfigure}
\begin{subfigure}[!]{1.5in}
\includegraphics[width=1.5in]{figures/bsqs_output/20_weight.jpg}
\caption{Confidence Input}
\end{subfigure}
\\
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/20_output_old.jpg}
\caption{\cite{Barron2015A} Depth Output}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/20_render_old.jpg}
\caption{\cite{Barron2015A} Rendering}
\end{subfigure}
\\
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/20_output.jpg}
\caption{Our Depth Output}
\end{subfigure}
\begin{subfigure}[!]{2.3in}
\includegraphics[width=2.3in]{figures/bsqs_output/20_render.jpg}
\caption{Our Rendering}
\end{subfigure}
\caption{
Additional output for the stereo defocus task of \cite{Barron2015A}, in the same format as Figure~\ref{fig:stereo_defocus_supp1}.
\label{fig:stereo_defocus_supp2}
}
\end{figure*}
\newcommand{1.12in}{1.12in}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_image.jpg}
\caption{Input Image}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_true.jpg}
\caption{Ground Truth}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_jgu.jpg}
\caption{Liu \emph{et al}. \cite{Liu2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_mutual.jpg}
\caption{Shen \emph{et al}. \cite{Shen2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_chan.jpg}
\caption{Chan \emph{et al}. \cite{chan2008}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_he.jpg}
\caption{GF \cite{He2010,ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_fastglobal.jpg}
\caption{Min \emph{et al}. \cite{Min2014}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_lu.jpg}
\caption{$\dagger$ Lu \cite{Lu2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_park.jpg}
\caption{Park \emph{et al}. \cite{Park2011}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_dt.jpg}
\caption{DT \cite{GastalOliveira2011DomainTransform}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_wmf.jpg}
\caption{Ma \emph{et al}. \cite{Ma2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_med100.jpg}
\caption{Zhang \emph{et al}. \cite{Zhang2014}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_my_fastguide.jpg}
\caption{FGF \cite{He2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_treefilter.jpg}
\caption{Yang 2015\, \cite{Yang2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_yang.jpg}
\caption{Yang 2007\, \cite{Yang2007}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_wls.jpg}
\caption{WLS \cite{FFLS2008}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_bilateral.jpg}
\caption{JB \cite{Adams2010,Kopf2007}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_ferstl.jpg}
\caption{Ferstl \emph{et al}. \cite{ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_li.jpg}
\caption{$\dagger$ Li \emph{et al}. \cite{Li2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/1_bsqs.jpg}
\caption{BS (Ours)}
\end{subfigure}
\caption{Results for the depth superresolution task.
Algorithms are sorted according to their average error on this benchmark, from upper left to lower right. Algorithms which use external training data are indicated with a dagger.
\label{fig:super1}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_image.jpg}
\caption{Input Image}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_true.jpg}
\caption{Ground Truth}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_jgu.jpg}
\caption{Liu \emph{et al}. \cite{Liu2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_mutual.jpg}
\caption{Shen \emph{et al}. \cite{Shen2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_chan.jpg}
\caption{Chan \emph{et al}. \cite{chan2008}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_he.jpg}
\caption{GF \cite{He2010,ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_fastglobal.jpg}
\caption{Min \emph{et al}. \cite{Min2014}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_lu.jpg}
\caption{$\dagger$ Lu \cite{Lu2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_park.jpg}
\caption{Park \emph{et al}. \cite{Park2011}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_dt.jpg}
\caption{DT \cite{GastalOliveira2011DomainTransform}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_wmf.jpg}
\caption{Ma \emph{et al}. \cite{Ma2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_med100.jpg}
\caption{Zhang \emph{et al}. \cite{Zhang2014}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_my_fastguide.jpg}
\caption{FGF \cite{He2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_treefilter.jpg}
\caption{Yang 2015\, \cite{Yang2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_yang.jpg}
\caption{Yang 2007\, \cite{Yang2007}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_wls.jpg}
\caption{WLS \cite{FFLS2008}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_bilateral.jpg}
\caption{JB \cite{Adams2010,Kopf2007}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_ferstl.jpg}
\caption{Ferstl \emph{et al}. \cite{ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_li.jpg}
\caption{$\dagger$ Li \emph{et al}. \cite{Li2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/2_bsqs.jpg}
\caption{BS (Ours)}
\end{subfigure}
\caption{More results for the depth superresolution task.
\label{fig:super2}
}
\end{figure*}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_image.jpg}
\caption{Input Image}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_true.jpg}
\caption{Ground Truth}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_jgu.jpg}
\caption{Liu \emph{et al}. \cite{Liu2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_mutual.jpg}
\caption{Shen \emph{et al}. \cite{Shen2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_chan.jpg}
\caption{Chan \emph{et al}. \cite{chan2008}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_he.jpg}
\caption{GF \cite{He2010,ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_fastglobal.jpg}
\caption{Min \emph{et al}. \cite{Min2014}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_lu.jpg}
\caption{$\dagger$ Lu \cite{Lu2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_park.jpg}
\caption{Park \emph{et al}. \cite{Park2011}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_dt.jpg}
\caption{DT \cite{GastalOliveira2011DomainTransform}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_wmf.jpg}
\caption{Ma \emph{et al}. \cite{Ma2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_med100.jpg}
\caption{Zhang \emph{et al}. \cite{Zhang2014}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_my_fastguide.jpg}
\caption{FGF \cite{He2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_treefilter.jpg}
\caption{Yang 2015\, \cite{Yang2015}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_yang.jpg}
\caption{Yang 2007\, \cite{Yang2007}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_wls.jpg}
\caption{WLS \cite{FFLS2008}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_bilateral.jpg}
\caption{JB \cite{Adams2010,Kopf2007}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_ferstl.jpg}
\caption{Ferstl \emph{et al}. \cite{ferstl2013b}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_li.jpg}
\caption{$\dagger$ Li \emph{et al}. \cite{Li2013}}
\end{subfigure}
\begin{subfigure}[!]{1.12in}
\includegraphics[width=1.12in]{figures/super_depth/3_bsqs.jpg}
\caption{BS (Ours)}
\end{subfigure}
\caption{More results for the depth superresolution task.
\label{fig:super3}
}
\end{figure*}
\newcommand{1.35in}{1.35in}
\begin{figure*}[p]
\centering
\begin{subfigure}[!]{1.35in}
\begin{tabular}{c}
\includegraphics[width=1.35in]{figures/colorize/example_m.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/cats_m.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/hair_m.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/monaco_m.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/waterfall_m.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/yellow_m.jpg}
\end{tabular}
\caption{Input }
\end{subfigure}
\begin{subfigure}[!]{1.35in}
\begin{tabular}{c}
\includegraphics[width=1.35in]{figures/colorize/example_res.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/cats_res.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/hair_res.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/monaco_res.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/waterfall_res.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/yellow_res.jpg}
\end{tabular}
\caption{Levin \emph{et al}. \cite{Levin2004}}
\end{subfigure}
\begin{subfigure}[!]{1.35in}
\begin{tabular}{c}
\includegraphics[width=1.35in]{figures/colorize/example_output.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/cats_output.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/hair_output.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/monaco_output.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/waterfall_output.jpg} \\
\includegraphics[width=1.35in]{figures/colorize/yellow_output.jpg}
\end{tabular}
\caption{Our results}
\end{subfigure}
\caption{
Results for the colorization task, using the images and scribbles from \cite{Levin2004}.
Our algorithm's output is nearly indistinguishable from that of \cite{Levin2004}, while being $95 \times$ faster.
\label{fig:colorization_supp}
}
\end{figure*}
\begin{figure*}[p]
\centering
{\includegraphics[width=3.3in]{figures/seg_examples.jpg}}
\caption{
Additional semantic segmentation results on Pascal VOC12 validation images. FCN refers to the fully convolutional network component of the end-to-end trained CRF-RNN model.
While the CRF-augmented DeepLab model (top rows) and CRF-RNN model (bottom rows) perform best overall, the bilateral solver produces better results on isolated objects (second and third images) at a fraction of the cost.
}
\label{fig:supp_seg}
\end{figure*}
\clearpage
\bibliographystyle{splncs03}
\section{Introduction}
\subsection{History and statement}
In 1972, following Serre's introduction of $p$-adic modular forms, Katz introduced in \cite{Katz} the notion of overconvergent modular forms. These, a special case of Serre's $p$-adic modular forms, are sections of automorphic bundles over strict neighborhoods of the ordinary locus. Overconvergent forms can be used to interpolate classical modular forms, and this has far-reaching applications in number theory.
In the same article, Katz constructed a compact operator on the space of overconvergent forms, using a theorem of Lubin on the existence of a canonical subgroup
in the $p$-torsion of an elliptic curve, provided the curve has good reduction and its Hasse invariant is not too large (cf. \cite{Katz} Theorem 3.1). This compact
operator is of central importance in the theory of modular forms, and in particular led to Coleman and Mazur's construction of the ``eigenvariety'' \cite{CM}.
Lubin's theory of the canonical subgroup has since been generalized to the torsion of (truncated) $p$-divisible groups whose Hasse invariant is not too large, through the work of many authors, among them Andreatta-Gasbarri, Abbes-Mokrane, Conrad, Tian, and Fargues \cite{Far}, who proves the following theorem.
\begin{theor} [Fargues]
Let $p>3$ be a prime, $K/\QQ_p$ an extension, and $G/\Spec(\mathcal O_K)$ a truncated $p$-divisible group of level $r$. Suppose that
\[ \Ha(G) < \frac{1}{2p^{r-1}}.\]
Then there exists a finite flat subgroup $C \subset G$ of degree $r\dim G - \frac{p^r-1}{p-1}\Ha(G)$ such that $C(\mathcal O_{\overline K}) \simeq (\ZZ/p^r\ZZ)^{\dim G}$ and such that, modulo $p^{1-\Ha(G)}$, $C$ coincides with $\Ker F^r$, the kernel of the $r$-fold iterated Frobenius. Moreover $C$ is a step of the Harder-Narasimhan filtration of $G$ (\cite{FarHN}) and therefore satisfies numerous compatibilities.
\end{theor}
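For orientation (a numerical remark of ours, not part of the statement), the hypothesis becomes very restrictive as the level grows:
\[ \frac{1}{2p^{r-1}} = \frac{1}{2},\ \frac{1}{2p},\ \frac{1}{2p^2},\dots \quad \text{for } r = 1,2,3,\dots,\]
so already for $r = 2$ one needs $\Ha(G) < \frac{1}{2p}$.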
Moreover, using Harder-Narasimhan filtrations, the preceding theorem can be put in families, for instance over strict neighborhoods of the ordinary locus of a Shimura variety.
The theorem also holds for $p=3$, with a slightly less precise bound, and it was reproved by Hattori, who allows $p=2$ (and gives a better bound for $p=3$). It was also reproved by Scholze in a different way, using the cotangent complex \cite{Sch3}.
Unfortunately, the preceding theorem does not apply to many Shimura varieties, such as all those whose $\mu$-ordinary locus is empty; in that case, every $p$-divisible group that appears has Hasse invariant equal to 1.
The idea is then to use another invariant, the $\mu$-ordinary Hasse invariant as introduced, for instance, in \cite{GN}; see also \cite{KW,Box} and \cite{Her1}.
We will use the construction of \cite{Her1}, since it applies to our situation (see Section \ref{sectmuha}).
Let $\mathcal O$ be a finite unramified extension of $\ZZ_p$ of degree $f$, and let $G$ be a truncated $p$-divisible group over $\mathcal O_K$, for $K$ a valued extension of $\QQ_p$, equipped with an action of $\mathcal O$, which allows us
to attach to $G$ a signature $(p_\tau,q_\tau)_{\tau \in \Hom(\mathcal O,\overline \QQ_p)}$. The $\mathcal O$-height of $G$ (defined as that of $G[p]$) is then $h = \Ht_\mathcal O(G) = p_\tau+q_\tau$ for every $\tau$. One can attach to $G$ its degree (cf. \cite{Far}), but also its partial degrees,
\[ \deg_\tau G = \deg \omega_{G,\tau},\]
where we write $\omega_G = \bigoplus_\tau \omega_{G,\tau}$, and the degree of an $\mathcal O_K$-module of the form $\bigoplus_{i=1}^r \mathcal O_K/a_i\mathcal O_K$ is $\sum_i v(a_i)$ (normalized by $v(p) = 1$). One can also construct Harder-Narasimhan filtrations for $\mathcal O$-module schemes, replacing the degree function used in \cite{Far} by, for each $\tau$,
\[ \Deg_\tau(G) = \sum_{i=1}^f p^{f-i} \deg_{\sigma^i\tau}(G),\]
where $\sigma$ is the lift to $\mathcal O$ of the Frobenius $x \longmapsto x^p$. We then write $\HN_\tau$ for the Harder-Narasimhan filtration built from the function $\Deg_\tau$.
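For concreteness (our unwinding of the definition): if $f = 2$, so that $\mathcal I = \{\tau,\sigma\tau\}$ and $\sigma^2\tau = \tau$, then
\[ \Deg_\tau(G) = p\,\deg_{\sigma\tau}(G) + \deg_{\tau}(G), \qquad \Deg_{\sigma\tau}(G) = p\,\deg_{\tau}(G) + \deg_{\sigma\tau}(G),\]
so the two modified degree functions weight the same partial degrees in opposite ways, and in particular give rise to different Harder-Narasimhan filtrations.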
Our main theorem is -- in simplified form -- the following.
\begin{theor}
Suppose for simplicity that $p>4h$, and set $k = (f-1)\max_\tau q_\tau$. Let $G$ be a truncated $p$-divisible $\mathcal O$-module of level $r + k$ over $\Spec(\mathcal O_C)$ ($C =\widehat{\overline{\QQ_p}}$) as above,
satisfying
\[ {^\mu}\Ha(G) < \frac{1}{2p^{f(r-1)}}.\]
Then there exists a filtration of $G[p^r]$ by finite flat sub-$\mathcal O$-modules,
\[ \Fil_\tau(G[p^r]) \subset G[p^r],\]
such that $\Fil_\tau(G[p^r]) \subset \Fil_{\tau'}(G[p^r])$ if and only if $p_\tau \leq p_{\tau'}$, and $\Fil_\tau(G[p^r])(\mathcal O_{C}) \simeq (\mathcal O/p^r\mathcal O)^{p_\tau}$. The degrees of the sub-$\mathcal O$-modules of the filtration are controlled by
\[ \Deg_\tau(\Fil_\tau(G[p^r])) \geq r \sum_{i = 1}^f \min(p_\tau,p_{\sigma^{i}\tau})p^{f-i} - \frac{p^{rf} - 1}{p^f-1}\Ha_\tau(G).\]
In particular, they are ``of large degree''. For the $p$-torsion we have the exact formula
\[ \Deg_\tau(\Fil_\tau(G[p])) = \sum_{i=1}^f \min(p_\tau,p_{\sigma^{i}\tau})p^{f-i} - \Ha_\tau(G).\]
Moreover, each $\Fil_\tau(G[p^r])$ is a step of the modified Harder-Narasimhan filtration $\HN_\tau$ of $G[p^r]$ (if $r = 1$ they are also steps of the Harder-Narasimhan filtration of $G[p]$ introduced in \cite{FarHN}); this filtration therefore satisfies all the classical compatibilities.
Furthermore, if $r = nf$, the linear combination
\[K_n = \sum_\tau \Fil_\tau (G[p^f])[p^{nr_\tau}] \subset G[p^{nf}],\]
where $r_\tau = |\{\tau' \dans \Hom(\mathcal O, \mathcal O_{C}) : q_{\tau'} \leq q_\tau\}|$, deforms the kernel of the Frobenius $F^{nf}$, in the sense that $K_n \otimes_{\mathcal O_C} \overline{\FP} = \Ker(F^{nf})$.
\end{theor}
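To make these formulas concrete, here is a sanity check in the simplest non-trivial case (the numerical unwinding is ours and does not appear in the text). Take $f = 2$, $h = 2$, signature $(p_{\tau_1},q_{\tau_1}) = (2,0)$ and $(p_{\tau_2},q_{\tau_2}) = (1,1)$, and let $G$ be $\mu$-ordinary, so that (see Subsection \ref{sect31})
\[ G \simeq \mathcal{LT}_{\{\tau_1\}} \times (\mu_{p^\infty}\otimes_{\ZZ_p}\mathcal O), \qquad \Fil_{\tau_2}(G[p]) = \mu_{p}\otimes_{\ZZ_p}\mathcal O.\]
Since all partial Hasse invariants vanish, the exact formula for the $p$-torsion gives
\[ \Deg_{\tau_2}(\Fil_{\tau_2}(G[p])) = \min(1,2)\,p + \min(1,1) - 0 = p+1,\]
which indeed equals $p\,\deg_{\tau_1}(\mu_p\otimes_{\ZZ_p}\mathcal O) + \deg_{\tau_2}(\mu_p\otimes_{\ZZ_p}\mathcal O) = p + 1$, each partial degree of $\mu_p\otimes_{\ZZ_p}\mathcal O$ being $1$.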
In fact, in the theorems of the text we obtain better constants than those announced above.
Because of the use of the $\mu$-ordinary Hasse invariant as described in \cite{Her1}, one can actually take
$k = \max \{ k_\tau : \tau \text{ such that } q_\tau \neq h\} + 1$, where
\[ k_\tau = \sum_{\tau'} \max(q_\tau - q_{\tau'},0),\]
which is related to the level of $G$ and is only needed to define the Hasse invariant. One could probably circumvent this issue (i.e. achieve $k = 0$) and use only a group of level $r$, by slightly modifying the construction of the Hasse invariant.
The hypothesis on $p$ is in fact less restrictive. First, one must assume $p > \max\{q_\tau : q_\tau \neq h\} + 1$ in order to apply Faltings' theorem \cite{Fal} -- which is central to our construction.
Moreover, and this is a technical issue, in order to compute the ``degrees'' of the subgroups $\Fil_\tau(G[p^r])$ one must assume that $p> \frac{2q_\tau}{1+K_\tau}$ for every $\tau$ such that $q_\tau \not\in \{0,h\}$, where $K_\tau < 1$ is an (explicit) constant
depending on the signature and on $\tau$.
Under this hypothesis on $p$, one has an explicit bound on ${^\mu}\Ha$, slightly different from (and a bit more complicated than) the bound of the theorem above. Nevertheless, if $p > 4q_\tau$ for every $\tau$, this bound becomes the one announced, namely $\frac{1}{2p^{f(r-1)}}$.
The precise statement is Theorem \ref{thrfinO}.
\subsection{Construction of the filtration}
We first work locally (i.e. over $\mathcal O_C$, $C = \widehat{\overline{\QQ}}_p$), by a careful study of the Hodge-Tate map $\alpha_G$.
When $G/\Spec(\mathcal O_C)$ is a truncated $p$-divisible $\mathcal O$-module of signature $(p_\tau,q_\tau)$, its conormal sheaf decomposes as
\[\omega_{G^D} = \bigoplus_\tau \omega_{G^D,\tau},\]
where the summands have dimensions $q_\tau$.
One can then consider the modified Hodge-Tate map,
\[\alpha_{G[p],\tau} : G[p](\mathcal O_C) \overset{\alpha_{G[p]}}{\fleche} \omega_{G[p]^D} \fleche \omega_{G[p]^D,\tau}.\]
When $G/\Spec(\mathcal O_C)$ is a $\mu$-ordinary (non-truncated) $p$-divisible $\mathcal O$-module, we showed in \cite{Her1} -- essentially using the results of \cite{SW} -- that $G$ is explicit,
\[ G \simeq \prod_{\ell = 0}^r \mathcal{LT}_{A_\ell}^{q^{(\ell +1)}-q^{(\ell)}}\]
(we refer to \cite{Her1} or to Subsection \ref{sect31} for the notation), so one can explicitly compute the Harder-Narasimhan filtration of $G$, which is given by the subgroups
\[ \Fil_j = \prod_{\ell = j}^r \mathcal{LT}_{A_\ell}^{q^{(\ell +1)}-q^{(\ell)}},\]
each of which corresponds to certain embeddings $\tau$ (namely, all those with $q_\tau = q^{(j)}$).
One can then compute the modified Hodge-Tate map explicitly in this case, and observe that $T_p\Fil_j = \Ker \alpha_{G,\tau}$ for every $\tau$ such that $q_\tau = q^{(j)}$.
In \cite{Far} (i.e. in the case of a parallel signature), one could then use the vanishing results for the cohomology of the Hodge-Tate sequence to deduce that a certain kernel of $\alpha_G$ was a good candidate for the canonical subgroup. In our situation this strategy is no longer sufficient, since the map $\alpha_G$, even for a $\mu$-ordinary $p$-divisible group, is far from surjective! One must therefore modify the Hodge-Tate map by certain crystalline periods in order to control its image (and its kernel).
Using Faltings' results on filtered Frobenius-crystals \cite{Fal} -- and this is the technical heart of the construction -- one can then prove, under the hypothesis on ${^\mu}\Ha$ and up to shrinking a little, that the image of
$\alpha_{G[p],\tau}$ has the
expected size, and hence that its kernel defines a finite flat sub-$\mathcal O$-module of $\mathcal O$-height $p_\tau$, denoted $\Fil_\tau(G[p])$. Since Frobenius-crystals are modules
over Fontaine's ring $A_{cris}$, the preceding statement reduces to judicious multiplications and divisions by periods of Lubin-Tate modules $t_\mathcal O$ in
$A_{cris}$. Once the Hodge-Tate map is under control, one can deduce, by a method similar to that of \cite{Far}, a formula for the partial degrees of $\Fil_\tau(G[p])$.
We then proceed by induction to construct $\Fil_\tau(G[p^r]) \subset G[p^r]$; the technical point is to prove that the subgroups thus constructed
are indeed steps of Harder-Narasimhan-type filtrations. The difficulty is mainly combinatorial, and the degrees must be controlled more precisely
than in \cite{Far}, which makes the proof more technical. We make crucial use of the elementary but extremely clever propositions on partial degrees introduced in
\cite{Bij}, which we recall in the appendix.
Let us also note that Bijakowski, in \cite{Bij}, proves the existence of a canonical filtration on (PEL) Shimura varieties at Iwahori level, and characterizes it completely in terms of ``partial degrees'', using only the properties of the degree of a group scheme.
\subsection{Applications to Shimura varieties}
Just as in \cite{Far}, one can use the Harder-Narasimhan filtrations $\HN_\tau$ to put the preceding filtrations in families over a rigid space
$\mathcal X$, in particular over (strict neighborhoods of the $\mu$-ordinary locus of) PEL Shimura varieties unramified at $p$. In particular, the preceding theorem allows one to
construct a compact operator $U_p$ on overconvergent forms for (PEL, unramified at $p$) Shimura varieties with empty ordinary locus.
Let $\mathcal D = (B,\star,V,<,>)$ be a PEL Shimura datum, assumed to be unramified at $p$ (see \cite{VW} Section 1.1). To $\mathcal D$ and a level $K$ one can associate a scheme $X_K$ over $\Spec(O_E)$, where $E/\QQ_p$ is a finite extension, which is a moduli space of abelian varieties (\cite{KotJams}).
Let $A \fleche X$ be the universal abelian variety; it carries an action of $O_B$, an order in the algebra $B$. By the hypothesis on $p$, $\mathcal O_B\otimes_{\ZZ}\ZZ_p$ splits as a (finite) product of matrix algebras over unramified extensions of $\ZZ_p$, which allows us to write
\[A[p^\infty] = \prod_{i=1}^r A[\pi_i^\infty],\]
where each $A[\pi_i^\infty]$ is a $p$-divisible group equipped with an action of $M_n(\mathcal O_{K_i})$, with $K_i/\QQ_p$ unramified. By Morita equivalence, one can then write
\[A[\pi_i^\infty] = \mathcal O_{K_i}^n \otimes_{\mathcal O_{K_i}} G_i,\]
where $G_i$ is a $p$-divisible $\mathcal O_{K_i}$-module of the type considered above, whose signature we denote by $(p^i_\tau,q_\tau^i)$.
For each $i$ we then have a section of a line bundle, ${^\mu}\Ha(G_i)$, which determines a (dense) open subset of $X_K$, the $\mu$-ordinary locus of $G_i$. The intersection of these open subsets
is the $\mu$-ordinary locus of $X_K$. Suppose now that the reductive group $G$ associated with $\mathcal D$ has a reductive model $\mathcal G$ over $\ZZ_p$, and write the level as $K = K^pK_p$, where $K^p$ is a sufficiently small level away from $p$ and $K_p \subset \mathcal G(\ZZ_p)$. Let $\mathcal P \subset \mathcal G$ be a parabolic subgroup, determined by the signatures $(p_\tau^i,q_\tau^i)$. One can then construct a decreasing sequence of congruence subgroups, $\mathcal P_0 = K^p\mathcal G(\ZZ_p)$ and $\mathcal P_n = K^p\mathcal P_{n,p}$, where
\[\mathcal P_{n,p} = \{ g \dans \mathcal G(\ZZ_p) : g \bmod p^n \dans \mathcal P(\ZZ/p^n\ZZ)\}.\]
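For instance (a standard unwinding of this definition, added here for illustration): if $\mathcal G = \mathrm{GL}_2$ and $\mathcal P$ is the upper-triangular Borel subgroup, then
\[\mathcal P_{n,p} = \left\{ g \dans \mathrm{GL}_2(\ZZ_p) : g \equiv \begin{pmatrix} * & * \\ 0 & * \end{pmatrix} \pmod{p^n} \right\},\]
so $n = 1$ recovers the Iwahori subgroup, and the $\mathcal P_n$ form a decreasing family of congruence subgroups below the hyperspecial level $\mathcal P_0$.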
One can then put the preceding results in families (see Theorem \ref{thrfam}) and deduce the following.
\begin{theor}
Suppose that $p$ is large enough with respect to the signature $(p_\tau^i,q_\tau^i)_{\tau,i}$, in the sense of Section \ref{sect9}.
Let $X_{\mathcal P_k}^{rig}$ be the rigid variety associated with the scheme $X_{\mathcal P_k}$. Then for every $i$ there exists an explicit constant $\eps_n^i$ such that on the open subset
\[ X_{\mathcal P_k}(\eps_n) = \{ x \dans X_{\mathcal P_k}^{rig}: {^\mu}\Ha(G_i)(x) < \eps_n^i\},\]
there exists a filtration of $A[p^n]$ by finite flat subgroups of large degrees, which extends the canonical filtration over the $\mu$-ordinary locus.
In particular, we deduce a section
\[ X_{\mathcal P_0}(\eps_n) \overset{s_n}{\fleche} X_{\mathcal P_n}(\eps_n),\]
which extends the section given over the $\mu$-ordinary locus by the canonical filtration.
\end{theor}
\subsection{Strict $\mathcal O$-modules and future applications}
A particular case to which our theorem applies is that of Faltings' strict $\mathcal O$-modules, where the signature is given by $(d,h-d),(0,h),\dots,(0,h)$.
In this case there is a single interesting embedding (the one with $p_\tau = d$), and the preceding filtration reduces to a single step. In the $\mu$-ordinary case the group is a product
of Lubin-Tate modules $\mathcal{LT}_\tau$.
Although the theorem of \cite{Far} does not apply in this case, one could probably adapt its proof using a theory of crystals for strict
$\mathcal O$-modules (i.e. using $V_\pi$, the modified Verschiebung of \cite{Fal3}, in place of the Verschiebung) and -- probably -- deduce a version of our theorem
with a better bound on $p$ (and perhaps avoid relying on the full strength of the results of \cite{Fal}).
We hope to use our results to construct families of overconvergent modular forms that are eigenforms for the Hecke operators, in the coherent cohomology of PEL Shimura varieties unramified at $p$. We will return to this question in the near future.
\subsection{Outline of the article}
In Section \ref{sect2} we recall the standard notation, the polygons attached to $\mathcal O$-modules, and the crystal associated with such groups.
In Section \ref{sect3} we recall the construction of the Hasse invariants of \cite{Her1}, as well as the structure of $\mu$-ordinary $p$-divisible $\mathcal O$-modules. We
then explain why the latter carry a ``natural'' filtration.
Section \ref{sect4} is devoted to the Hodge-Tate map. After defining it, together with its crystalline variant, we compute it explicitly on $\mu$-ordinary groups,
recall Faltings' results, and explain the strategy of the article.
Section \ref{sect5} is the technical heart of the article; it determines the structure of the image of the Hodge-Tate map. It is based on Faltings' article \cite{Fal}:
we show there that the image of the Hodge-Tate map contains sufficiently many periods, depending on the signature of $G$, so that one can construct a divided Hodge-Tate map (by passing to the exterior power), and we relate the image of this divided map (and of the original one) to the partial Hasse invariants of \cite{Her1}.
In Section \ref{sect6} we construct the filtration in the $p$-torsion, and compute its partial degrees in the manner of \cite{Far}.
Section \ref{sect7} describes new Harder-Narasimhan filtrations, based on the article \cite{FarHN}, using ``modified'' degree functions.
We also give conditions under which subgroups of large degree are steps of these filtrations.
Section \ref{sect8} is devoted to the general theorem and uses the constructions of the preceding sections. A large part of the proof is dedicated to lower bounds on degrees, in order to apply the results of Section \ref{sect7}. We also prove there that one can construct deformations of Frobenius kernels using linear combinations of the canonical filtrations. Finally, Section \ref{sect9} treats the variation of the preceding filtrations in families.
\subsection{Acknowledgements}
I wish to express my gratitude to Laurent Fargues and Vincent Pilloni for introducing me to this subject, and for their help and encouragement throughout
the writing of this article. I also sincerely thank Stéphane Bijakowski for many interesting discussions, as well as for his ability to make the most
abstract things explicit. In particular, it seems to me that it would have been difficult to complete this article without the propositions of \cite{Bij}.
\section{Notation and setting}
\label{sect2}
Let $\mathcal O$ be the ring of integers of a finite unramified extension $F$ of $\QQ_p$, and set $f = [F:\QQ_p]$.
We write $\mathcal I = \Hom(F,\CC_p)$ for the set of embeddings of $F$ into $\CC_p$, and we will usually denote these embeddings by $\tau$. We write $\sigma$ for the lift of Frobenius to $F$, which induces a transitive action on $\mathcal I$.
Let $\mathfrak X/\Spf(\mathcal O_C)$ be an admissible formal scheme (in the sense of Raynaud), and let $G \fleche \mathfrak X$ be a truncated Barsotti-Tate group of level $r$,
equipped with an action of $\mathcal O$, i.e. with an injection
\[\mathcal O \fleche \End_{\mathfrak X}(G).\] We will call such a $p$-divisible group (truncated or not) a $p$-divisible $\mathcal O$-module (truncated or not).
Write $\omega_G = e_G^*\Omega^1_{G/\mathfrak X}$ for the (so-called conormal) locally free sheaf on $\mathfrak X$ associated with $G$.
The $\mathcal O$-action induces decompositions
\[ \omega_G = \bigoplus_{\tau \in \mathcal I} \omega_{G,\tau} \quad \text{and} \quad \omega_{G^D} = \bigoplus_{\tau \in \mathcal I} \omega_{G^D,\tau},\]
and we suppose that the signature of $G$ is given by $(p_\tau,q_\tau)_{\tau \in \mathcal I}$, that is,
\[ p_\tau = \dim_{\mathcal O_C} \omega_{G,\tau} \quad \text{and} \quad q_\tau = \dim_{\mathcal O_C} \omega_{G^D,\tau}.\]
Write $H$ for the height of $G$; it is divisible by $f$, and we set $h =\frac{H}{f} =: \Ht_{\mathcal O}(G)$. Then, for every $\tau$, $p_\tau + q_\tau = h$.
\begin{rema}
Let $G/\mathfrak X$ be a $p$-divisible $\mathcal O$-module (truncated or not) of signature $(p_\tau,q_\tau)_\tau$ and $\mathcal O$-height $h$ as above. Then its Cartier dual
$G^D$, which also carries an action of $\mathcal O$, is again of $\mathcal O$-height $h$, and its signature is therefore $(q_\tau,p_\tau)_\tau$.
\end{rema}
Write $\overline{X}$ for the reduction of $\mathfrak X$ modulo $p\mathcal O_{\mathfrak X}$, and still write $G \fleche \overline{X}$ for the base change.
Let $x = \Spec(k) \dans \overline{X}$ be a geometric point, and $G_x$ the associated truncated Barsotti-Tate group over $x$.
The Dieudonné crystal of $G_x$, denoted $\mathbb D(G_x) =: \mathbb D$, is a free $W(k)/p^rW(k)$-module of rank $H$, equipped with an action of $\mathcal O$ which induces
\[ \mathbb D = \bigoplus_{\tau \in \mathcal I} \mathbb D_\tau,\]
such that \[F : \mathbb D_\tau \fleche \mathbb D_{\sigma\tau} \quad \text{and} \quad V : \mathbb D_{\sigma\tau} \fleche \mathbb D_\tau.\]
One can attach two polygons to $G_x$: the Newton polygon,
\[ \Newt_\mathcal O(G_x) = \frac{1}{f} \Newt(\mathbb D_\tau, V^f),\]
where $\Newt(\mathbb D_\tau, V^f)$ is the (convex) Newton polygon associated with the $V^f$-crystal $(\mathbb D_\tau, V^f)$ by the Dieudonné-Manin theorem,
and the Hodge polygon,
\[ \Hdg_\mathcal O(G_x) = \frac{1}{f} \sum_{\tau \in \mathcal I} \Hdg_\tau(G_x),\]
where $\Hdg_\tau(G_x)$ is the polygon of slope 0 on $[0,q_\tau]$ and slope 1 on $[q_\tau,h]$.
We then have the following classical proposition; see \cite{RR}, and also \cite{Her1} Section 3.
\begin{prop}
The Newton polygon $\Newt_\mathcal O(G_x)$ lies above the Hodge polygon $\Hdg_\mathcal O(G_x)$, and they have the same endpoints.
\end{prop}
\begin{figure}[h]
\caption{Example of Hodge and Newton polygons in the case $\mathcal O = \ZZ_{p^2}$}
\label{HdgNewt}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\draw[->,color=black] (-0.5,0.) -- (5.,0.);
\foreach \x in {1.,2.,3.,4.}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\draw[->,color=black] (0.,-0.5) -- (0.,4.);
\foreach \y in {-0.5,0.5,1.,1.5,2.,2.5,3.,3.5}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt);
\clip(-0.5,-0.5) rectangle (5.,4.);
\draw (0.,0.)-- (1.,0.);
\draw (1.,0.)-- (3.,1.);
\draw (3.,1.)-- (4.00163934426,2.20659971306);
\draw [color=ffqqqq] (0.,0.)-- (2.44169908507,0.720849542536);
\draw [color=ffqqqq] (2.44169908507,0.720849542536)-- (4.00163934426,2.20659971306);
\draw (0.209836065574,2.55093256815)-- (0.583606557377,2.55093256815);
\draw [color=ffqqqq] (0.209836065574,2.02725968436)-- (0.588524590164,2.02008608321);
\draw [dash pattern=on 1pt off 1pt] (3.,1.)-- (3.,0.);
\draw (2.99344262295,-0.0961262553802) node[anchor=north west] {$q_{\tau_2}$};
\draw (1.03606557377,-0.117647058824) node[anchor=north west] {$q_{\tau_1}$};
\draw (0.64262295082,2.15638450502) node[anchor=north west] {$\Newt_{\mathcal O}$};
\draw (0.637704918033,2.7230989957) node[anchor=north west] {$\Hdg_{\mathcal O}$};
\draw [dash pattern=on 1pt off 1pt] (4.00163934426,2.20659971306)-- (4.,0.);
\draw (3.96229508197,-0.0602582496413) node[anchor=north west] {$h$};
\end{tikzpicture}
\end{center}
\end{figure}
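For instance (our numerical illustration of the situation of Figure~\ref{HdgNewt}): take $\mathcal O = \ZZ_{p^2}$, $h = 3$, and signature $q_{\tau_1} = 2$, $q_{\tau_2} = 1$. Then
\[ \Hdg_\mathcal O = \frac{1}{2}\left(\Hdg_{\tau_1} + \Hdg_{\tau_2}\right)\]
has slope $0$ on $[0,1]$, slope $\frac{1}{2}$ on $[1,2]$ and slope $1$ on $[2,3]$: the breakpoints occur precisely at the abscissas $q_{\tau_2}$ and $q_{\tau_1}$, as in the figure.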
\begin{rema}
One can also define $\Newt_\mathcal O^\diamond$ and $\Hdg_\mathcal O^\diamond$, their obvious concave analogues.
\end{rema}
These polygons induce functions: writing $\mathcal P$ for the space of convex polygons with break abscissas in $\frac{1}{f}\ZZ$,
defined on $[0,\Ht_{\mathcal O}(G[p])]$ and equipped with the topology of uniform convergence, we obtain
\[ \Newt_\mathcal O, \Hdg_\mathcal O : |\overline X| \fleche \mathcal P.\]
Write $\mathfrak X^{rig}$ for the rigid fiber of $\mathfrak X$ in the sense of Raynaud; we have a specialization map
\[ sp : \mathfrak X^{rig} \fleche \overline{X},\]
which allows us to define the maps $\Newt_\mathcal O$ and $\Hdg_\mathcal O$ on $|\mathfrak X^{rig}|$. These maps are semicontinuous with values in the space $\mathcal P$.
\begin{defin}
The $\mu$-ordinary locus of $\overline X$, denoted $\overline{X}^{\mu-ord}$, is the set of $x \dans \overline{X}$ such that $\Newt_\mathcal O(x) = \Hdg_\mathcal O(x)$.
We define $\mathfrak X^{rig,\mu-ord}$ as $sp^{-1}(\overline{X}^{\mu-ord})$.
\end{defin}
\begin{rema}
Here, and more generally throughout the text, we use $\mu$ as a mere piece of notation; that is, it refers (even if the notation does not show it)
only to $\mathcal O$ and not to the signature. Of course, being $\mu$-ordinary in the above sense depends, for a $p$-divisible group, on the action under consideration and on its signature for
this action, but these will in general be fixed throughout the text.
When $\mathfrak X$ comes from a PEL Shimura datum, a certain cocharacter $\mu$ is attached to this datum, depending on the datum
but also on the signature (fixed by the Kottwitz condition), and historically, being $\mu$-ordinary for a point of this Shimura variety is a notion relative to
this cocharacter.
Here, however, being $\mu$-ordinary depends only on the action of $\mathcal O$ (which is the same everywhere) and on the signature of the group; we will commit an abuse of notation and say
$\mu$-ordinary even when the signature varies.
A better terminology would be $\mathcal O$-ordinary, but it seems simpler to keep the notation everyone is familiar with.
\end{rema}
\section{$\mu$-ordinary Hasse invariants: reminders}
\label{sect3}
Let $S$ be a scheme with $p\mathcal O_S = 0$. Let $G$ be a truncated $p$-divisible group over $S$ of level $r$, such that
$r > k_\tau = \sum_{\tau'} \max(0,q_\tau - q_{\tau'})$ for a fixed $\tau \in \mathcal I$ (respectively, $r > \max_\tau k_\tau$).
Then in \cite{Her1} (cf. also \cite{GN} in the case of Shimura
varieties) we constructed a partial Hasse invariant $\widetilde{\Ha_\tau}$ (respectively, a $\mu$-ordinary Hasse invariant $\widetilde{^\mu\Ha}$), such that $\widetilde{^\mu\Ha}$ is the product
of the partial Hasse invariants $\widetilde{\Ha_\tau}$.
\begin{defin}
Suppose given a collection $(p_\tau,q_\tau)_{\tau \in \mathcal I}$. Every $s \dans \Gal(\overline{\QQ_p}/\QQ_p)$ acts on $(p_\tau,q_\tau)_\tau$ by
\[s\cdot (p_\tau,q_\tau)_\tau = (p_{s\tau},q_{s\tau})_\tau.\]
We write $E$ for the smallest (Galois) extension of $\QQ_p$ fixing $(p_\tau,q_\tau)_\tau$, which we call the reflex field (associated with the signature $(p_\tau,q_\tau)_\tau$), and
sometimes the local reflex field, if we want to mark the difference with the reflex fields of Shimura varieties.
\end{defin}
We write $\kappa_E$ for the residue field of $E$. The stack classifying truncated Barsotti-Tate groups of level $r$ equipped with an action of $\mathcal O$ is a smooth stack over
$\mathcal O_E$, cf. \cite{Wed2} (it even descends to $\ZZ_p$). The condition fixing the signature determines a connected component of it, which we denote
$\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}/\Spec(\mathcal O_E)$. By \cite{Her1} we then have the following construction.
\begin{theor}
Let $\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}/\Spec(\kappa_E)$ be the mod $p$ stack of truncated $p$-divisible groups of level $r$, equipped with an action of
$\mathcal O$, of signature $(p_\tau,q_\tau)$, and let $G$ be the universal $\mathcal O$-module.
Then for every $\tau \in \mathcal I$ such that $r > k_\tau$, there exists, over the scalar extension \[\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}\times\Spec(\overline{\FP}),\]
a section of the universal invertible bundle $\det(\omega_{G^D,\tau})^{\otimes(p^f-1)}$,
\[ \widetilde{\Ha_\tau} \dans H^0(\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)},\det(\omega_{G^D,\tau})^{\otimes(p^f-1)}),\]
such that if $x \dans \mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}\times \Spec(\overline{\FP})(\overline k)$ corresponds to $G/\Spec(\overline k)$,
then $x^*\widetilde{\Ha}_\tau$ is invertible if and only if $\Newt_\mathcal O(G)$ and $\Hdg_\mathcal O(G)$ touch at the abscissa $q_\tau$.
The sections, for $q = q_\tau \dans \NN$ such that $r > k_\tau$,
\[ \widetilde{\Ha}_q := \bigotimes_{\tau' : q_{\tau'} = q} \widetilde{\Ha_{\tau'}} \dans H^0(\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}\times \Spec(\overline{\FP}),\bigotimes_{\tau' : q_{\tau'} = q}\det(\omega_{G^D,\tau'})^{\otimes(p^f-1)}),\]
descend to sections over $\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}/\Spec(\kappa_E)$ of the sheaf
\[ \det(\omega_{G^D,q})^{\otimes(p^f-1)},\]
where $\omega_{G^D,q} = \bigoplus_{\tau : q_\tau = q} \omega_{G^D,\tau}$; that is, the section, a priori defined over $\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}/\Spec(\overline{\FP})$, descends to $\kappa_E$.
If $r > \max_\tau k_\tau$, there also exists over $\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}\times \Spec(\kappa_E)$ a section of $\det(\omega_{G^D})^{\otimes(p^f-1)}$,
\[ \widetilde{^\mu\Ha} \dans H^0(\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)},\det(\omega_{G^D})^{\otimes(p^f-1)}),\]
such that for every $x \dans \mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}(\overline k)$, corresponding to $G/\overline k$,
$G$ is $\mu$-ordinary if and only if $x^*\widetilde{^\mu\Ha}$ is invertible.
Moreover, the sections $\widetilde{\Ha_\tau}, \widetilde{\Ha_q}$ and $\widetilde{^\mu\Ha}$ are Cartier divisors on $\mathcal{BT}_r^{\mathcal O,(p_\tau,q_\tau)}\times \Spec(\kappa_E)$ ($\Spec(\kappa_F)$ for $\widetilde{\Ha_\tau}$); they are compatible with base change and with duality, that is,
\[ \widetilde\Ha_\tau(G) = \widetilde\Ha_\tau(G^D) \quad \forall \tau \text{ such that } r > \max(k_\tau,\,k_\tau^D,\,d), \text{ where } k_\tau^D = \sum_{\tau'} \max(0,p_\tau-p_{\tau'}) \text{ and } d = \sum_{\tau'} p_{\tau'}.\]
Moreover, if we have a splitting $G = H \times H'$ of $G$ inducing a Hodge-Newton contact at a breakpoint of the Newton polygon, then
\[ \widetilde{\Ha_\tau}(G) = \widetilde{\Ha_\tau}(H) \otimes \widetilde{\Ha_\tau}(H'), \quad \forall \tau \in \mathcal I.\]
\end{theor}
From the preceding theorem we deduce that, for every truncated Barsotti-Tate group $G/S$ of sufficiently large level $r$ over a base $S$ of characteristic $p$,
one can associate with it, by pullback, partial Hasse invariants and a $\mu$-Hasse invariant.
\begin{rema}
Even though being $\mu$-ordinary depends only on the $p$-torsion (cf. \cite{Moo}), we only know how to construct $\widetilde{^\mu\Ha}$ for a truncated $p$-divisible $\mathcal O$-module
of level at least $\max_\tau k_\tau+1$. We also do not know whether the invariant $\widetilde{^\mu\Ha}$ depends only on the $p$-torsion, but we hope to resolve this
issue in the near future. Note that this is the case for the invariants constructed by \cite{KW,GK} and \cite{Box}, since they are constructed on the stack of $G$-Zips and
on the Ekedahl-Oort strata respectively, and the latter depend ``only on the $p$-torsion'' of the Barsotti-Tate groups.
\end{rema}
We then have the following classical result.
\begin{theor} (Wedhorn, \cite{Wed1})
Let $\overline{X}$ be the special fiber of a PEL Shimura variety unramified at $p$. Then
\[ \overline{X}^{\mu-ord} = \{ x \dans \overline X : \widetilde{{^\mu}\Ha}(x) \neq 0\}\]
is a dense open subset.
\end{theor}
\begin{defin}
When we are given an $\mathcal O$-module $G/\mathcal O_K$ with $K$ an extension of $\QQ_p$, since the sheaves $\omega_{G^D,\tau}, \omega_{G^D,q}$ are locally free,
we will freely identify the preceding invariants $\widetilde{\Ha_\tau},\widetilde{\Ha_q}$ and $\widetilde{^\mu\Ha}$ with their valuations (in $v(K)$), denoted $\Ha_\tau,\Ha_q$ and
$^\mu\Ha$; these do not depend on the choice of a basis of the determinants of the various sheaves.
\end{defin}
\subsection{Lubin-Tate $\mathcal O$-modules and $\mu$-ordinary $p$-divisible groups}
\label{sect31}
\begin{defin}
\label{def36}
Let $A \subset \mathcal I$ be a set of embeddings. Then there exists over $\mathcal O$ a $p$-divisible group $\mathcal{LT}_A$, equipped with an action of
$\mathcal O$, such that
\[ \Ht_\mathcal O(\mathcal{LT}_A) = 1, \quad \text{and} \quad p_\tau = 1 \text{ if and only if } \tau \dans A.\]
The display of $\mathcal{LT}_A$ is given by the tensor product of those of the $\mathcal{LT}_\tau$, the (strict) Lubin-Tate $\mathcal O$-modules,
for $\tau\dans A$. Moreover, such a module over $\widehat{\ZZ_p^{nr}} = W(\overline{\FP})$ is unique.
\end{defin}
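Two extreme cases are worth keeping in mind (this unwinding is ours, consistent with the example below):
\[ \mathcal{LT}_\emptyset \simeq \QQ_p/\ZZ_p \otimes_{\ZZ_p} \mathcal O, \qquad \mathcal{LT}_{\mathcal I} \simeq \mu_{p^\infty}\otimes_{\ZZ_p}\mathcal O,\]
i.e. $A = \emptyset$ (all $p_\tau = 0$) gives the étale group, and $A = \mathcal I$ (all $p_\tau = 1$) the multiplicative one; for $A = \{\tau\}$ one recovers the Lubin-Tate $\mathcal O$-module attached to $\tau$.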
We then have the following structure result over $\mathcal O_C$ (\cite{Her1}; a similar result over $\overline{\FP}$ is obtained in \cite{Moo}).
\begin{prop}
Let $G/\mathcal O_C$ be a (non-truncated) $p$-divisible $\mathcal O$-module of signature $(p_\tau,q_\tau)$.
Write
\[ \{q_\tau, \tau \dans \mathcal I\} = \{q^{(1)}<\dots<q^{(r)}\}, \quad q^{(0)} = 0, q^{(r+1)} = h.\]
Then $G$ is $\mu$-ordinary if and only if
\[ G = \prod_{l=0}^r \mathcal{LT}_{A_l}^{q^{(l+1)}-q^{(l)}},\]
where $A_l = \{ \tau \dans \mathcal I : q_\tau \leq q^{(l)}\}$. The $(A_l)$ form a strictly increasing sequence.
\end{prop}
\begin{rema}
If $A \not\subset A'$ and $A' \not \subset A$, then $\mathcal{LT}_A \times \mathcal{LT}_{A'}$ is not $\mu$-ordinary, although each of the factors is (these three groups have
different signatures)...
This example illustrates the usefulness of the convention that the notation $\mu$-ordinary does not refer to the signature.
\end{rema}
\begin{exemple}
If $\mathcal O = \ZZ_{p^2}$ and $\mathcal I = \{ \tau_1,\tau_2\}$, with $q_{\tau_2} \leq q_{\tau_1}$, then ``the'' $\mu$-ordinary $p$-divisible group over $\mathcal O_C$ is given by
\[ (\QQ_p/\ZZ_p\otimes_{\ZZ_p}\mathcal O)^{q_{\tau_2}} \times \mathcal{LT}_{\tau_2}^{q_{\tau_1}-q_{\tau_2}} \times (\mu_{p^\infty}\otimes_{\ZZ_p}\mathcal O)^{p_{\tau_1}}.\]
\end{exemple}
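As a quick consistency check (our computation): the displayed product has $\mathcal O$-height $q_{\tau_2} + (q_{\tau_1}-q_{\tau_2}) + p_{\tau_1} = h$, and its signature at $\tau_2$ is
\[ (q_{\tau_1}-q_{\tau_2}) + p_{\tau_1} = h - q_{\tau_2} = p_{\tau_2},\]
since $\mathcal{LT}_{\tau_2}$ contributes $1$ to $p_{\tau_2}$ and $0$ to $p_{\tau_1}$, while $\mu_{p^\infty}\otimes_{\ZZ_p}\mathcal O$ contributes $1$ to both.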
Let us recall the results of Mantovan-Viehmann \cite{MV} and X. Shen \cite{Shen} (cf. also \cite{Moo}).
\begin{theor}
\label{thrshen}
Let $G/\mathcal O_K$ be a $p$-divisible group with an action of $\mathcal O$, for $K$ an extension of $\QQ_p$.
Let $\overline{G}/\kappa_K$ be its reduction, and suppose that $\Newt_\mathcal O(\overline G)$ and $\Hdg_\mathcal O(\overline G)$ touch at the abscissa $t$, which
is moreover a break abscissa of $\Newt_\mathcal O(\overline G)$. Then there exists a unique splitting of $\overline{G}$ over $\kappa_K$,
\[ \overline{G} = \overline{H_1} \times \overline{H_2},\]
where $\overline{H_1}, \overline{H_2}$ are $p$-divisible $\mathcal O$-modules of heights $t$ and $h-t$ respectively, such that their Newton and Hodge polygons are
those of $G$ between $0$ and $t$, and between $t$ and $h$, respectively.
Moreover, if $G$ is modular (which is automatic if $K$ is a $p$-adic field or $K = C$), then there exists a unique $p$-divisible sub-$\mathcal O$-module $0 \subset H_2 \subset G$
over $O_K$ lifting $\overline{H_2}$, and
$G/H_2$ lifts $\overline{H_1}$.
If moreover $G$ is polarized, there is a compatibility with the polarization.
\end{theor}
In particular, we deduce the following.
\begin{corr}
Let $G/O_K$ be a modular $\tau$-ordinary $p$-divisible group, i.e. with $\Ha_\tau(G) = 0$. Then there exists a $p$-divisible sub-$\mathcal O$-module of $G$ of height $p_\tau$.
If moreover $G$ is $\mu$-ordinary, there exists a ``canonical'' filtration of $G$,
\[ 0 \subset H_1 \subset H_2 \subset \dots \subset H_r = G,\]
where $H_i$ is a $p$-divisible $\mathcal O$-module of height $h-q^{(r-i)}$.
\end{corr}
\begin{rema}
Thanks to the work of Scholze and Weinstein in \cite{SW}, and using the theory of Harder-Narasimhan filtrations of \cite{FarHN} (already used in \cite{Shen}), one can
now remove the ``modular'' hypothesis from the preceding statements.
\end{rema}
Thus, for instance, over the $\mu$-ordinary locus of a Shimura variety $\mathfrak X$ (PEL and unramified at $p$), the universal $p$-divisible group admits a canonical filtration,
which, when the ordinary locus is non-empty, simply recovers the canonical subgroup, i.e. the multiplicative part.
We would like to extend the existence of this canonical filtration to a strict neighborhood of the $\mu$-ordinary locus in $\mathfrak X^{rig}$. To do so we proceed as in \cite{Far},
by first studying the Hodge-Tate map.
\section{The Hodge-Tate map}
\label{sect4}
\subsection{Kernel of the Hodge-Tate map}
\begin{defin}
Let $G/\mathcal O_K$ be a group scheme, $K/\QQ_p$. Its Hodge-Tate map, between fppf sheaves over $\mathcal O_K$, is
\[ \alpha_G :
\begin{array}{ccc}
\underline G &\fleche& \underline \omega_{G^D} \\
(\phi : G^D \fleche \mathbb G_m) & \longmapsto & \phi^*\frac{\mathrm dT}{T},
\end{array}
\]
where we have identified the fppf sheaves $\underline G$ and $\mathcal Hom(\underline G^D,\underline{\mathbb G}_m)$.
\end{defin}
\begin{rema}
If $G$ carries an action of $\mathcal O$, then so does $\omega_{G^D}$, and $\alpha_G$ is $\mathcal O$-equivariant.
\end{rema}
From now on we write $\alpha_G$ for the map induced on $\mathcal O_{\overline K}$-points.
\begin{exemple}
\label{exeRay}
Let $G/\mathcal O_K$ be a Raynaud group scheme \cite{Ray}, associated with the data $(\gamma_i,\delta_i)_{i\in \{1,\dots,f\}}$ where, for every $i$, $\gamma_i\delta_i = \omega$, a universal constant of $p$-adic valuation 1; that is,
\[ G = G^{(\gamma_i,\delta_i)} =\Spec\left(\quotient{\mathcal O_K[X_i, i = 1, \dots, f]}{(X_i^p - \gamma_{i+1}X_{i+1})}\right),\]
with the (co-)multiplication given explicitly by Corollary 1.5.1 of \cite{Ray}.
We then have
\[G^D = G^{(\delta_i,\gamma_i)}.\]
One can therefore compute explicitly,
\[ \omega_{G^D} = \bigoplus_{i=1}^f \quotient{\mathcal O_K}{\delta_i}\mathrm dT_i,\]
and an $\mathcal O_{\overline K}$-point of $G$ is given by a collection $(x_i)_i \dans (\mathcal O_{\overline K})^{\{1,\dots,f\}}$ such that \[x_i^p = \gamma_{i+1}x_{i+1}.\]
The Hodge-Tate map is then given by
\[
\alpha_G :
\begin{array}{ccc}
G(\mathcal O_{\overline K}) & \fleche & \omega_{G^D} \otimes_{\mathcal O_K} \mathcal O_{\overline K} \\
(x_i)_i & \longmapsto & \sum_i x_i \pmod {\delta_i}\,\mathrm dT_i
\end{array}
\]
\end{exemple}
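For instance, for $f = 1$ (a minimal sketch of the special case, reading the indices cyclically), $G = \Spec(\mathcal O_K[X]/(X^p - \gamma X))$ is an Oort-Tate group scheme, $\omega_{G^D} = \quotient{\mathcal O_K}{\delta}\mathrm dT$, and the Hodge-Tate map reads simply
\[ \alpha_G(x) = x \pmod{\delta}\,\mathrm dT, \quad \text{for } x \dans \mathcal O_{\overline K},\ x^p = \gamma x.\]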
Let us return to the $\mu$-ordinary case. Let $G/\mathcal O_C$ be a $\mu$-ordinary $p$-divisible $\mathcal O$-module, hence of the form
\[ G = \prod_{l=0}^r \mathcal{LT}_{A_l}^{q^{(l+1)}-q^{(l)}},\]
where $A_l = \{ \tau \dans \mathcal I : q_\tau \leq q^{(l)}\}$.
Let $\tau$ be an embedding of $\mathcal O$ into $\mathcal O_C$. Since $G$ is $\mu$-ordinary, the theorems of Shen and Mantovan-Viehmann (\ref{thrshen}) predict the existence of a $p$-divisible subgroup, which corresponds to
\[ H_\tau = \prod_{l \geq l_\tau} \mathcal{LT}_{A_l}^{q^{(l+1)}-q^{(l)}},\]
where $q_\tau = q^{(l_\tau)}$.
Now, if we consider the Hodge-Tate map modified as follows,
\[ \alpha_{G,\tau} : T_pG \overset{\alpha_G}{\fleche} \omega_{G^D} \twoheadrightarrow \omega_{G^D,\tau},\]
then its kernel is exactly the Tate module of
\[ \prod_{l \geq l_\tau} \mathcal{LT}_{A_l}^{q^{(l+1)}-q^{(l)}}.\]
Moreover, such a product is rational, meaning that if $G/\mathcal O_K$, then $\Ker \alpha_{G,\tau}$ is also defined over $\mathcal O_K$.
We immediately deduce the following proposition.
\begin{prop}
Let $G \fleche \mathfrak X$ be a $p$-divisible $\mathcal O$-module.
Over the $\mu$-ordinary locus, the canonical filtration is given by the kernels $\Ker \alpha_{G,\tau}$, as $\tau$ ranges over the embeddings of $\mathcal O$ into $\mathcal O_C$.
\end{prop}
We will try to deform these Hodge-Tate maps $\alpha_{G,\tau}$ over strict neighborhoods of the $\mu$-ordinary locus, so that their kernels deform the canonical filtration.
\begin{defin}
Let $M$ be an $\mathcal O_K$-module, $K/\QQ_p$. For every $\eps \dans v(K)$ we set \[M_\eps = M \otimes \mathcal O_K/(p^\eps).\]
Let $G/\mathcal O_K$ be a truncated $p$-divisible $\mathcal O$-module. Suppose $K$ is an extension of $\mathcal O[1/p]$. We define the map $\alpha_{G,\tau,\eps}$ by
\[ \alpha_{G,\tau,\eps} : G(\mathcal O_{\overline K}) \overset{\alpha_{G,\tau}}{\fleche} \omega_{G^D,\tau} \twoheadrightarrow \omega_{G^D,\tau,\eps}.\]
\end{defin}
\begin{prop}
\label{proker1}
Let $G/\mathcal O_C$ be a truncated Barsotti-Tate $\mathcal O$-module of level $1$ and signature $(p_\tau,q_\tau)_\tau$.
Let $\eps \dans v(C)$ be such that $\frac{1}{p-1} < \eps \leq 1$. Then
\[ \dim_{\FF_{p^f}} \Ker \alpha_{G,\tau,\eps} \leq p_\tau.\]
\end{prop}
\dem
The proof is identical to that of \cite{Far}, Proposition 9. We have $\omega_{G^D,\tau} \simeq (\mathcal O_C/p\mathcal O_C)^{q_\tau}$. Hence, if $\delta \dans \mathcal O_C$ has valuation $\frac{1}{p-1}$, then
$\delta\omega_{G^D,\tau,\eps}$ is a free $\mathcal O_{C,\eps-\frac{1}{p-1}}$-module of rank $q_\tau$. Now, by Theorem 3 of \cite{Far} on the vanishing of the cohomology of the Hodge-Tate sequence,
\[ \delta \omega_{G^D,\tau,\eps} \subset \mathcal O_{C,1} \otimes_{\FF_{p^f}}\im(\alpha_{G,\tau,\eps}).\]
Moreover, $\mathcal O_{C,1}\im(\alpha_{G,\tau,\eps})$ is finitely presented (being a finitely generated submodule of the free module $\omega_{G^D,\eps}$), so $\delta \omega_{G^D,\tau,\eps}$ is generated by at most as many elements as $\mathcal O_{C,1}\im(\alpha_{G,\tau,\eps})$, that is, by at most $\dim_{\FF_{p^f}} \im(\alpha_{G,\tau,\eps})$ elements, and therefore
\[ q_\tau \leq \dim_{\FF_{p^f}} \im(\alpha_{G,\tau,\eps}).\qedhere\]
\edem
We would like to find an $\eps > \frac{1}{p-1}$ guaranteeing that $\Ker(\alpha_{G,\tau,\eps})$ has height exactly $p_\tau$. Unfortunately, the method used in \cite{Far} does not adapt directly, since we no longer want to relate this kernel to $\Ha(G)$: on varieties (e.g. Shimura varieties) with empty ordinary locus, the latter is identically 1 (that is, the section $\widetilde{\Ha}(G)$ is identically zero).
Let us introduce the crystalline lift of the Hodge-Tate map, as in, for example, \cite{Far}, Section 5.1.2.
\subsection{The crystalline Hodge-Tate map}
Let $S$ be a scheme and $G$ a finite locally free $S$-group scheme. Suppose $p$ is locally nilpotent on $S$, and that $S$ is a $\Sigma$-scheme, where $\Sigma$ is the spectrum of a $p$-adically complete, $p$-torsion-free ring. Then $p\mathcal O_\Sigma$ carries divided powers, and we consider $\Cris(S/\Sigma)$, the big crystalline fppf site of $S/\Sigma$. Set
\[ \mathbb D(G) = \mathcal Ext^1(\underline{G}^D,\mathcal O_{S/\Sigma}),\]
the covariant Dieudonné crystal of $G$.
Write $\mathcal J_{S/\Sigma}$ for the ideal of $\mathcal O_{S/\Sigma}$, and let us work in the derived category $D(S/\Sigma)$ of sheaves of abelian groups on $\Cris(S/\Sigma)$. Then, by \cite{Far}, Section 5.1.1, there is a map in $D(S/\Sigma)$, \[\alpha_G^{cris} : G \fleche \mathbb D(G).\]
After evaluation on the tautological thickening, it fits into the diagram
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] at (0,0)
{
G & & \mathbb D(G)_S \\
& \omega_{G^D} = \mathcal Ext^1(G^D,\mathcal J_{S/\Sigma})_S & \\
};
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$(\alpha_{G}^{cris})_S$} (m-1-3)
(m-1-1) edge node[auto] {$\alpha_G$} (m-2-2)
(m-2-2) edge node[auto] {$$} (m-1-3);
\end{tikzpicture}
\end{center}
and therefore lifts the Hodge-Tate map $\alpha_G$.
Now let $(U,T,\delta)$ be an open of $\Cris(S/\Sigma)$. Let $I$ be the kernel ideal of $\kappa : \mathcal O_T \fleche \mathcal O_U$, and write $\mathcal E = \mathbb D(G)_{(U,T,\delta)}$.
We have a morphism $\omega_{G^D\times U} = \omega_{G^D} \otimes_{\mathcal O_S}\mathcal O_U \fleche \mathcal E/I\mathcal E,$
and we write $\Fil \mathcal E$ for the preimage of $\im(\omega_{G^D\times U} \fleche \mathcal E/I\mathcal E)$ under $\mathcal E \fleche \mathcal E/I\mathcal E$, which we think of as a lift of the Hodge filtration.
We then have the factorization
\[ \alpha_{G,(U,T,\delta)} : G(U) \fleche \Fil \mathcal E \subset \mathcal E.\]
If moreover there exists on $(U,T,\delta)$ a map $\phi : T \fleche T$ lifting the Frobenius of $U_0 = U \times_S S_0$ and commuting with the divided powers of $U_0$ in $T$, then we have an isomorphism
\[(\mathcal F^{(p)})_{(U,T,\delta)} = \phi^*(\mathcal F_{(U,T,\delta)}),\]
for every crystal of $\mathcal O_{S/\Sigma}$-modules $\mathcal F$ (we implicitly use the equivalence between crystals of $\mathcal O_{S_0/\Sigma}$-modules and of $\mathcal O_{S/\Sigma}$-modules).
In this case, the Verschiebung morphism
\[ V : \mathbb D(G) \fleche \mathbb D(G)^{(p)},\]
allows us to write, more precisely,
\[ \im((\alpha_{G}^{cris})_{(U,T,\delta)}) \subset \{ x \dans \Fil \mathcal E : Vx = x \otimes 1\}.\]
Suppose now that $S = \Spec(\mathcal O_K)$ and that $G$ is a finite flat group scheme over $\mathcal O_K$. Write $S_0 = \Spec(\mathcal O_K/p\mathcal O_K)$,
$G_0 = G \times S_0$, $\overline S = \Spec(\mathcal O_{\widehat{\overline K}})$, $\overline{S_0} = \Spec(\mathcal O_{\overline K}/p\mathcal O_{\overline K})$, and finally $\Sigma = \Spec(\ZZ_p)$.
The morphism $\phi : \mathcal O_{\overline{S_0}} \fleche \mathcal O_{\overline{S_0}}$ is surjective, so $\Cris(\overline{S_0}/\Sigma)$ has an initial object, the thickening
\[ A_{cris} \overset{\theta}{\fleche} \mathcal O_{\widehat{\overline K}} \fleche \mathcal O_{\overline K}/p\mathcal O_{\overline K}.\]
$A_{cris}$ carries divided powers relative to $\ker \theta$, a crystalline Frobenius, and an action of $\Gal(\overline K/K)$.
We can then evaluate $\mathbb D(G)$ on $(\overline{S_0},\Spec A_{cris})$,
\[ E = H^0(\overline{S_0}/\Sigma, \mathbb D(G_0)) = \varprojlim_n \mathbb D(G_0)_{(A_{cris}/p^nA_{cris} \twoheadrightarrow \mathcal O_{\overline K}/p\mathcal O_{\overline K})}.\]
This is an $A_{cris}$-module (indeed a free $A_{cris}/p^rA_{cris}$-module if $G$ is a truncated Barsotti-Tate group of level $r$), equipped with an $A_{cris}$-linear Verschiebung
\[ V : E \fleche E\otimes_{A_{cris},\phi} A_{cris} = E^{(\phi)},\]
and an $A_{cris}$-linear Frobenius
\[ F : E \otimes_{A_{cris},\phi} A_{cris} = E^{(\phi)} \fleche E.\]
We can define the map, still denoted $\alpha_G^{cris}$, as the composite
\[ G(\mathcal O_{\overline K}) \fleche G(\mathcal O_{\overline K}/p\mathcal O_{\overline K}) \overset{(\alpha_{G}^{cris})_{(\overline{S_0},\Spec A_{cris})}}{\fleche} E,\]
whose image lies in $\{ x \dans \Fil E : Vx = x \otimes 1\}$, where $\Fil E$ is the preimage of the Hodge filtration under $\theta$.
If $G$ is a Barsotti-Tate group, the preceding constructions, applied to the $G[p^n]$, induce
\[ \alpha_G^{cris} : T_pG \fleche \Fil E.\]
We can define $\Phi : E \fleche E$, which is $(A_{cris},\phi)$-linear, by $\Phi(x) = F(x\otimes 1)$. The map $\alpha_G^{cris}$ then factors through
\[ (\Fil E)^{\Phi = p}.\]
We then have the following theorem, due to Faltings.
\begin{theor} [Faltings \cite{Fal}, cf. \cite{Far} Theorem 1, \cite{Chen} Theorem 4.1]
Let $p \neq 2$ and let $G/\mathcal O_K$ be a $p$-divisible group.
\begin{enumerate}
\item The Hodge-Tate map induces an isomorphism
\[\alpha_G^{cris} : T_pG \overset{\sim}{\fleche} (\Fil E)^{\Phi = p}.\]
\item If we filter $E$ by $\Fil^{-1}E = E$, $\Fil^0 E = \Fil E$ and \[\Fil^i E = \Fil^iA_{cris} \Fil E + \Fil^{i+1}A_{cris}E,\]
then we have inclusions
\[ tE \subset T_pG \otimes_{\ZZ_p} A_{cris} \subset E,\]
strictly compatible with the filtrations (where $\Fil^i (T_pG \otimes_{\ZZ_p} A_{cris}) = T_pG\otimes \Fil^iA_{cris}$).
\end{enumerate}
\end{theor}
\begin{rema}
The proof (a sketch of which, in a slightly more general setting, is recalled in the next section) proceeds by studying the $p$-torsion and then lifting to the full $\mathcal{BT}$ $G$.
In particular, if $G$ is a $\mathcal{BT}_1$,
\[ \alpha_G^{cris} : G(\mathcal O_{\overline K}) \overset{\sim}{\fleche} \Fil(E/pE)^{\frac{1}{p}\Phi = \id}.\]
\end{rema}
\subsection{Strategy}
The map $\theta$ carries $\Fil E$ onto $\omega_{G^D} \otimes \mathcal O_{\overline K}$; hence, to compute the dimension of the image of $\alpha_{G,\eps}$, it suffices to compute that of the image of the reduction modulo $\theta$ of $(\Fil E/p^\eps E)^{\frac{1}{p} \Phi = \id}$.
Recall that, for $G$ a Barsotti-Tate $\mathcal O$-module over $\mathcal O_K$, we would like to find, under a suitable hypothesis on $\Ha_\tau(G)$, an $\eps$ such that the kernel of $\alpha_{G,\tau,\eps}$ has dimension $p_\tau$.
In the case of a non-empty ordinary locus, cf. \cite{Far}, one can observe that the image of $\alpha_{G[p]}^{cris}$ reduces via $\theta$ into
$(\omega_{G[p]^D}\otimes \mathcal O_{\overline K})^{V = \id \otimes 1}$,
and a classical Newton method (already present in \cite{AG}, and in the form of a lemma of Elkik in \cite{Far}) makes it possible, after reducing modulo $p^{1-\Ha(G)}$, to bound the dimension of the image of $\alpha_G$.
In our case, we decompose the crystal $E$ of $G$,
\[ E = \bigoplus_{\tau} E_\tau,\]
and $V$ is $\sigma^{-1}$-linear: $V : E_\tau \fleche E_{\sigma^{-1}\tau}^{(\phi)}$. We then consider the map
\[ \alpha_{G[p],\tau}^{cris} : G[p](\mathcal O_{\overline K}) \fleche E/pE \twoheadrightarrow E_\tau/pE_\tau,\]
whose reduction modulo $\theta$ lands in $\omega_{G[p]^D,\tau}^{V^f = \id\otimes 1}$; but the determinant of $V^f$ vanishes (when there is no ordinary locus), so no Newton method can be applied.
To get around this, one must be craftier and try to apply the Newton method to $\widetilde{\Ha}_\tau$ instead of $V$ or $V^f$; the former is constructed as
$\frac{1}{p^{k_\tau}}V^f$ on a ``lift of the crystal'', and its determinant gives $\Ha_\tau(G)$. Unfortunately, to relate $\alpha_{G,\tau}$ to $\widetilde{\Ha}_\tau$,
the image of $\alpha_{G,\tau}^{cris}$ would have to land not in \[\im\left(\{ x \dans E : \Phi x = px\} \fleche \{ x \dans E_\tau : \Phi^f x = p^fx\}\right),\]
but in
\[\{ x \dans E_\tau : V^f x = p^{k_\tau}x \otimes 1\} = "\{ x \dans E_\tau : \widetilde{\Ha}_\tau x = x \otimes 1\}".\]
The trouble is that $\widetilde{\Ha}_\tau$ does not exist on $E_\tau$, but only on $\bigwedge^{q_\tau} E_\tau$ (see Section \ref{sectmuha}), and the solutions of
$V^f x = p^{k_\tau}x\otimes 1$ are (probably) far too few if one does not pass to the exterior power.
The idea is then to note that, for $\eps \leq 1$,
\[ \left(\dim_{\FF_{p^f}} \Ker \alpha_{G[p],\tau,\eps} \geq p_\tau\right) \Leftrightarrow \left( \dim_{\FF_{p^f}} \im \alpha_{G[p],\tau,\eps} \leq q_\tau\right) \Leftarrow \left(\rg_{W(\FF_{p^f})/p^{q_\tau\eps} W(\FF_{p^f})} \im \bigwedge^{q_\tau}
\alpha_{G,\tau,q_\tau\eps} \leq 1\right),\]
where the rank is the minimal number of generators of the module $\im \bigwedge^{q_\tau}
\alpha_{G,\tau,q_\tau\eps}$ (which is not free),
and then to try to bound the rank of
$\im \bigwedge^{q_\tau}
\alpha_{G,\tau,q_\tau\eps}$ in terms of $\Ha_\tau(G)$.
\section{Filtered crystals and removal of periods}
\label{sect5}
Throughout this section, we consider a (non-truncated) $p$-divisible $\mathcal O$-module $G$ over $\mathcal O_C$, of signature $(p_\tau,q_\tau)_\tau$ and of $\mathcal O$-height $h = p_\tau + q_\tau$.
Its crystal $E := H^0(\overline{S_0}/\Sigma,\mathbb D(G_0))$ is a module over $A_{cris}\otimes_{\ZZ_p} \mathcal O = \prod_\tau A_{cris}$, and therefore splits,
\[ E = \bigoplus_{\tau} E_\tau.\]
\begin{prop}
Let $M$ be an $R_1\times R_2$-module.
Then $M \simeq M_1 \oplus M_2$, with $M_1$ an $R_1$-module and $M_2$ an $R_2$-module, and we have the equality
\[\bigwedge_{R_1\times R_2}^n M = \bigwedge^{n}_{R_1} M_1 \oplus \bigwedge^{n}_{R_2} M_2.\]
\end{prop}
\dem
Let $e_1 = (1,0) \dans R_1\times R_2$ and $e_2 = (0,1)$; set $M_1 = e_1M$ and $M_2 = e_2M$. Since $e_1 + e_2 = 1$ and $e_1e_2 = 0$, we have $M = M_1 \oplus M_2$.
It then suffices to note that
\begin{eqnarray*} M\otimes_{R_1 \times R_2} M = (M_1\oplus M_2) \otimes_{R_1\times R_2}(M_1\oplus M_2) \\= M_1\otimes_{R_1\times R_2} M_1 \oplus M_1 \otimes_{R_1\times R_2} M_2 \oplus M_2 \otimes_{R_1\times R_2} M_1 \oplus M_2 \otimes_{R_1\times R_2} M_2,\end{eqnarray*}
and that $M_1 \otimes_{R_1\times R_2} M_2 = e_1M \otimes_{R_1\times R_2} e_2M = e_1e_2 (M\otimes M) = 0$.
Likewise for $M_2 \otimes M_1$, while $M_1 \otimes_{R_1\times R_2} M_1 = M_1 \otimes_{R_1} M_1$. The result follows.
\edem
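For instance, if $M = R_1^a \oplus R_2^b$, then
\[ \bigwedge^n_{R_1\times R_2} M = R_1^{\binom{a}{n}} \oplus R_2^{\binom{b}{n}},\]
which, applied factor by factor, is what yields the decomposition $\Lambda = \bigoplus_{\tau'}\bigwedge^{q_\tau}E_{\tau'}$ used below.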
\subsection{Reminders on $A_{cris}$.}
\label{sect42}
The following construction is due to Fontaine \cite{Fon}; see also \cite{Chen}, Section 2. Let $C = \widehat{\overline{\QQ_p}}$, and
\[ \mathcal O_C^\flat = \varprojlim_{\Frob} (\mathcal O_C/p) = \varprojlim_{x \mapsto x^p} (\mathcal O_C).\]
Consider the ring of Witt vectors $W(\mathcal O_C^\flat)$, equipped with a Frobenius and a ``first coordinate'' map
\[ \theta : W(\mathcal O_C^\flat) \fleche \mathcal O_C^\flat \fleche \mathcal O_C.\]
Completing the divided-power envelope of $(W(\mathcal O_C^\flat),\ker\theta)$ for the $\ker \theta$-adic topology, we obtain the ring $A_{cris}$, equipped with a Frobenius $\phi$,
a filtration
$(\Fil^iA_{cris})_{i \in \ZZ}$ and a map
\[ \theta : A_{cris} \fleche \mathcal O_C.\]
The filtration satisfies $\Fil^iA_{cris} = A_{cris}$ for all $i \leq 0$, $\Fil^1A_{cris} = \ker \theta$ and \[\Fil^iA_{cris}\Fil^jA_{cris} \subset \Fil^{i+j}A_{cris}, \]
but the inclusion is strict in general. We have an inclusion $\ZZ_p^{nr} = W(\overline{\FP}) \subset A_{cris}$. There is an element
$t$ in $A_{cris}$ (well defined up to an element of $\ZZ_p^\times$) such that $t \dans \Fil^1A_{cris}$ and $\phi(t) = pt$.
In fact, $(\Fil^1A_{cris})^{\phi = p} = t\ZZ_p$, which explains the choice in the definition of $t$ (it is the Tate module of the multiplicative group).
For every choice of $\tau \dans \Hom(F,C) = \mathcal I$, there is an element $t_\tau \dans A_{cris}$, well defined up to an element of $\ZZ_{p^f}^\times$ (i.e. up to a $(p^f-1)$-th root of unity). This element $t_\tau$ is a ``period'' of the Lubin-Tate group $\mathcal{LT}_\tau$; we will give an explanation of this in the next section.
Once the above map $\theta$, and hence the filtration of $A_{cris}$, is fixed, only one of the $(t_\tau)_\tau$ belongs to $\Fil^1A_{cris}$; the others lie in $A_{cris}\priv\Fil^1A_{cris}$.
Write $\tau$ for the corresponding embedding and $t_\mathcal O$ for this $t_\tau$. This comes from the fact that composing $\theta$ on the right with an automorphism $\sigma$ of $\mathcal O_C$ changes the map $\theta$, hence its kernel, and hence changes $t_\mathcal O$. One can construct $t_\mathcal O$ directly from a Lubin-Tate module, see Section \ref{sectLT}.
The other $t_{\sigma^i\tau}$ are deduced from it by, for $i \dans \{1,\dots,f-1\}$,
\[ t_{\sigma^i\tau} = \frac{1}{p}\phi^i(t_\mathcal O).\]
From now on we drop the notation $t_\tau$, and keep only $t_\mathcal O$ and its images under Frobenius. This has the merit of depending on no choice other than that of $\theta$, which is fixed once and for all from now on.
Note that, parallel to the fact that the cyclotomic character is a product of Lubin-Tate characters, we have the following equality, up to a unit in
$\ZZ_p^\times$:
\[ t = t_\mathcal O\prod_{i=1}^{f-1} \frac{1}{p}\phi^i(t_\mathcal O).\]
By \cite{FGL}, Proposition C.2.8, p.~126, for every $i \dans \{1,\dots,f-1\}$, the image of $\frac{1}{p}\phi^i(t_\mathcal O)$ in $\Gr^0A_{cris} = A_{cris}/\Fil^1A_{cris} \overset{\theta}{\simeq}\mathcal O_C$ has valuation
\[ \frac{p^i}{p^f-1},\]
and the image of $t_\mathcal O$ in $\Gr^1A_{cris}$, after a choice of isomorphism of $\Gr^1A_{cris}$ with $\mathcal O_C$, has valuation (independent of this choice)
\[ \frac{1}{p^f-1}.\]
From these valuations and the preceding equation, one recovers the classical fact that the image of $t$ in $\Gr^1A_{cris}$ has valuation $\frac{1}{p-1}$.
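As a quick sanity check, take $p = 3$ and $f = 2$: the image of $\frac{1}{p}\phi(t_\mathcal O)$ in $\Gr^0A_{cris}$ has valuation $\frac{3}{8}$, that of $t_\mathcal O$ in $\Gr^1A_{cris}$ has valuation $\frac{1}{8}$, and indeed
\[ \frac{1}{8} + \frac{3}{8} = \frac{1}{2} = \frac{1}{p-1}.\]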
\begin{rema}
One can also recover these valuations by considering the $p$-torsion of the $\mathcal{LT}_\tau$, for $\tau \in \mathcal I$: it is a group scheme of type $(p,\dots,p)$ in the sense of Raynaud, and the valuations (which are the valuations of the images of certain elements under $\alpha_{\mathcal{LT}_\tau}$) can then be recovered from Example \ref{exeRay} and Faltings' theorem.
\end{rema}
\subsection{The crystal $\Lambda$}
In this section we fix a $\tau$ such that $q_\tau \not\in\{0,h\}$, and we study
\[\Lambda = \bigwedge_{A_{cris}\otimes_{\ZZ_p}\mathcal O}^{q_\tau} E = \bigoplus_{\tau'} \bigwedge^{q_\tau} E_{\tau'}.\]
Recall that we write $f = [\mathcal O:\ZZ_p]$.
For simplicity, from now on write $A := A_{cris}$, $\phi : A \fleche A$ for the Frobenius of $A$, and $\Phi : E \fleche E$ for the Frobenius of $E$, which is $(A,\phi)$-linear; we keep the notation $\Phi$ for the $q_\tau$-th exterior power of $\Phi$ on $\Lambda$, and set $\Lambda_{\tau'} = \bigwedge^{q_\tau} E_{\tau'}.$
Recall that $\Phi : E_{\tau'} \fleche E_{\sigma\tau'}$, and hence $\Phi : \Lambda_{\tau'} \fleche \Lambda_{\sigma\tau'}$, where $\sigma$ is the Frobenius of $\mathcal O$.
We have the $\ZZ$-filtration of $E$ given by
\[ \Fil^i E =
\left\{
\begin{array}{cc}
E & \text{if } i \leq -1 \\
\Fil E & \text{if } i = 0 \\
\Fil^iA \cdot \Fil E + \Fil^{i+1}A \cdot E &\text{if } i \geq 1
\end{array}
\right.
\]
It induces a $\ZZ^f$-filtration on $\Lambda$ by
\[ \Fil^{\underline a} \Lambda = \bigoplus_{\tau'} \Fil^{a_{\tau'}}\Lambda_{\tau'}, \quad \underline a \dans \ZZ^f,\]
where
\[\Fil^{a_{\tau'}}\Lambda_{\tau'} = \im\left( \sum_{
\begin{array}{c}
i_1,\dots,i_{q_\tau} \\
i_1 + \dots + i_{q_\tau} = a_{\tau'}
\end{array}
} \Fil^{i_1}E_{\tau'} \otimes \dots \otimes \Fil^{i_{q_\tau}}E_{\tau'} \fleche \bigwedge^{q_\tau} E_{\tau'}\right).\]
In particular, for every $\underline a \leq -\underline{q_\tau} = (-q_\tau,\dots,-q_\tau)$,
\[ \Fil^{\underline{a}}\Lambda = \Lambda.\]
The image of $\alpha_G^{cris}$ is contained in the set of $x \dans E$ satisfying the equation $\Phi = p$, so the image of the map
\[ \bigwedge^{q_\tau} \alpha_{G}^{cris} : \bigwedge^{q_\tau} T_pG \fleche \Lambda,\]
is contained in $\Fil^{\underline 0} \Lambda$, and even in
\[ \{x \dans \Fil^{\underline 0} \Lambda : \Phi x= p^{q_\tau}x\}.\]
Hence the image of the map $ \bigwedge^{q_\tau} \alpha_{G,\tau}^{cris}$ is contained in
\[\{ x \dans \Fil^0 \Lambda_\tau : (\Phi^f)_{|\Lambda_\tau}x = p^{fq_\tau}x\}.\]
\subsection{Change of equation.}
One should think of the equation \[(\Phi^f)_{|\Lambda_\tau} = p^{fq_\tau}\] as reducing, in $\bigwedge^{q_\tau}\omega_{G^D,\tau} =\det\omega_{G^D,\tau}$, to the equation $V^f = \id\otimes1$.
Now, to run a Newton method on $\det\omega_{G^D,\tau}$ that could be related to $\Ha_\tau(G)$, we would need an equation reducing to $\widetilde{\Ha}_\tau = \id \otimes 1$, that is, to $V^f = p^{k_\tau}\id \otimes 1$ on $\Lambda_\tau$, for instance $\Phi^f = p^{fq_\tau - k_\tau}$.
How does one pass from $(\Phi^f)_{|\Lambda_\tau} = p^{fq_\tau}$ to $(\Phi^f)_{|\Lambda_\tau} = p^{fq_\tau - k_\tau}$?
In $A$, the period $t$ satisfies $\phi(t) = pt$; hence, if $s \leq r$ and $x \dans A$ satisfy
\[ \phi(t^sx) = p^r(t^sx),\]
then, since $A$ has no $t$-torsion, $x \dans A$ satisfies
\[ \phi(x) = p^{r-s}x.\]
In other words, up to dividing by the period $t$ (and, for us, more precisely by $t_\mathcal O$ and the other Lubin-Tate periods), one can change the power of $p$ in the equations that naturally appear on the crystal.
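For instance, with $r = 2$ and $s = 1$: if $\phi(tx) = p^2\,tx$, then $pt\,\phi(x) = \phi(t)\phi(x) = p^2tx$, so $t(\phi(x) - px) = 0$, and, $A$ having no $t$-torsion, $\phi(x) = px$: the solutions of $\phi = p^2$ of the form $tx$ correspond exactly to the solutions $x$ of $\phi = p$.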
\subsection{The case of Lubin-Tate $\mathcal O$-modules}
\label{sectLT}
Let us illustrate the equation-changing strategy in the case of Lubin-Tate $\mathcal O$-modules.
Let $\tau : F \hookrightarrow C$, and let $G = \mathcal{LT}_\tau$ be the Lubin-Tate $\mathcal O$-module (over $\mathcal O_C$, say) associated with $\tau$. It has height $f$ and dimension 1.
We have $q_\tau = 0$ and $q_{\tau'} =1$ for every $\tau' \neq \tau$. Index the embeddings by $\{0,\dots,f-1\}$ in such a way that $\tau$ corresponds to 0 and $\sigma^i\tau$ to $i$.
One can compute its crystal explicitly:
\[ E = Ae_0 \oplus \bigoplus_{i=1}^{f-1} Ae_i,\]
\[ \Fil E = \Fil^1Ae_0 \oplus \bigoplus_{i=1}^{f-1}Ae_i,\]
and the Frobenius $\Phi$ is given by
\[ \Phi e_{i-1} =
\left\{
\begin{array}{cc}
pe_i & \text{if } i \neq 1 \\
e_1 & \text{otherwise.}
\end{array}
\right.
\]
By Faltings' theorem, we know that $(\Fil E)^{\Phi = p}$ has height $f$ over $\ZZ_p$; it is even an $\mathcal O$-module of dimension 1.
Moreover, by definition of $t_\mathcal O$, projection onto $e_0$ gives an isomorphism
\[ (\Fil E)^{\Phi = p}\overset{\sim}{\fleche} (\Fil^1A)^{\phi^f = p} = \ZZ_{p^f}t_{\mathcal O}.\]
And one can compute explicitly,
\[ (\Fil E)^{\Phi = p} = \{ x_0t_\mathcal Oe_0 + \sum_{i=1}^{f-1} \frac{1}{p}\phi^i(x_0)\phi^i(t_\mathcal O)e_i : x_0 \dans \ZZ_{p^f}\}.\]
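As a sanity check for $f = 2$: write $x = x_0t_\mathcal O e_0 + \frac{1}{p}\phi(x_0)\phi(t_\mathcal O)e_1$ with $x_0 \dans \ZZ_{p^2}$; then, using $\Phi e_0 = e_1$, $\Phi e_1 = pe_0$, $\phi^2(x_0) = x_0$ and $\phi^2(t_\mathcal O) = pt_\mathcal O$,
\[ \Phi x = \phi(x_0)\phi(t_\mathcal O)e_1 + \frac{1}{p}\phi^2(x_0)\phi^2(t_\mathcal O)\,pe_0 = p\left(\frac{1}{p}\phi(x_0)\phi(t_\mathcal O)e_1 + x_0t_\mathcal Oe_0\right) = px.\]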
One can then write down an explicit map
\[ m :
\begin{array}{ccc}
\sum_i Ae_i &\fleche & \sum_i Ae_i \\
x_ie_i , i \neq 0 & \longmapsto & \frac{1}{p}x_i\phi^i(t_\mathcal O)e_i \\
x_0e_0 & \longmapsto &x_0t_\mathcal O e_0
\end{array}
\]
and easily check that it induces an isomorphism
\[ E^{\Phi = D} \overset{\overset{m}{\sim}}{\fleche} (\Fil E)^{\Phi = p},\]
where $D$, in the basis $(e_0,\dots,e_{f-1})$, is the matrix
\[
\left(
\begin{array}{ccccc}
p & & & &\\
& 1 & & & \\
& &p & & \\
& & & \ddots & \\
& & & & p
\end{array}
\right)
\]
For $i \neq 0$, the equation $\Phi = D$ becomes $(\Phi^f)_{|E_i} = p^{f-1}$, and therefore projects on $\omega_{G^D,i}$ to $\frac{1}{p}V^f = \id \otimes 1$, whose solutions can be related, via a Newton method, to $\Ha_i(G)$ (which, incidentally, vanishes here).
\subsection{The case of generalized Lubin-Tate groups.}
As before, write $A$ for $A_{cris}$, and let $S \subset \mathcal I=\Hom(\mathcal O,\mathcal O_C)$. Let $G = \mathcal{LT}_S$ be the $p$-divisible group over $\mathcal O_C$ with $\mathcal O$-action given by Definition \ref{def36}. Its $p$-torsion $\mathcal{LT}_S[p]$ is a group scheme of type $(p,\dots,p)$ in the sense of Raynaud (\cite{Ray}).
Its crystal over $A$ is then given by
\[E = \bigoplus_{\tau} Ae_\tau, \quad \Fil E = \bigoplus_{\tau} \Fil E_\tau := \bigoplus_\tau (\Fil^{\delta^{\tau \in S}}A)e_\tau,\]
that is, $\Fil E_\tau = \Fil^1 A e_\tau$ if $\tau \in S$ and $\Fil E_\tau = Ae_\tau = E_\tau$ otherwise.
The $(A,\phi)$-linear Frobenius is given by
\[\Phi(e_{\sigma^{-1}\tau}) =
\left\{
\begin{array}{cc}
e_\tau & \text{if } \sigma^{-1}\tau \in S \\
pe_\tau & \text{if } \sigma^{-1}\tau \not\in S
\end{array}
\right.
\]
When $S = \{\tau\}$, we indeed recover the crystal of a Lubin-Tate group given above.
We are then interested in the Tate module $T_pG = (\Fil E)^{\Phi = p}$, which is identified with
\[\{ (x_\tau)_\tau \dans \prod_\tau \Fil^{\delta^{\tau \in S}}A : \phi(x_{\sigma^{-1}\tau}) = x_\tau \text{ if } \sigma^{-1}\tau \not\in S,\ \phi(x_{\sigma^{-1}\tau}) = px_\tau
\text{ otherwise}\}.\]
Suppose $S \neq \mathcal I$ (otherwise $\mathcal{LT}_\mathcal I = \mu_{p^\infty}\otimes_{\ZZ_p} \mathcal O$, whose crystal is well known), and let $\tau_0 \not\in S$.
By projecting onto $e_{\tau_0}$, one can then identify the preceding set with
\[\{ x\dans A : \phi^f(x) = p^{|S|}x \text{ and } \phi^j(x) \dans \Fil^1A \text{ for every } j \text{ such that } \sigma^j\tau_0 \dans S\}.\]
This set contains
\[ \ZZ_{p^f} \prod_{j=1}^{f-1} \left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\delta^{\sigma^{-j}\tau_0 \in S}}.\]
Indeed, since $\phi^f(t_\mathcal O) = pt_\mathcal O$ and $\tau_0 \not\in S$, one easily checks that $\phi^f = p^{|S|}$ on the preceding space.
Moreover, one checks that for every $x$ in this space, $\phi^k(x) \dans \Fil^1A$ if and only if there exists $j \dans \{1,\dots,f-1\}$ with $k = f-j$ and
$\sigma^{-j}\tau_0 \dans S$, i.e. if and only if $\sigma^k\tau_0 \dans S$.
We can therefore identify with it the submodule of $(\Fil E)^{\Phi = p}$ given by
\[ T_0 = \left\{ xe_{\tau_0} + \sum_{j=1}^{f-1}\phi^j(x)\left(\frac{1}{p}\right)^{|S \cap \{\tau_0,\sigma\tau_0,\dots,\sigma^{j-1}\tau_0\}|}e_{\sigma^j\tau_0} : x \dans \ZZ_{p^f} \prod_{j=1}^{f-1} \left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\delta^{\sigma^{-j}\tau_0 \in S}}\right\}\]
This is a free rank-1 sub-$\ZZ_{p^f}$-module of $(\Fil E)^{\Phi = p}$, which is itself free of rank 1 over $\ZZ_{p^f}$ by \cite{Fal}. Moreover, they have the same number of points modulo $p$: $(\Fil E)^{\frac{1}{p}\Phi = \id}$ has $p^f$ of them by Faltings, and $T_0$ has $p^f$ as well, since the valuation of
\[\prod_{j=1}^{f-1} \left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\delta^{\sigma^{-j}\tau_0 \in S}}\]
in $\Gr^0A_{cris}$ is
\[ \sum_{j=0}^{f-1} \frac{p^j \delta^{\sigma^{-j}\tau_0 \in S}}{p^f-1} < \frac{1}{p^f-1} \sum_{j=0}^{f-1} p^j = \frac{1}{p-1},\]
by the reminders of Subsection \ref{sect42}; hence $T_0 = (\Fil E)^{\Phi = p}$.
In particular, one can construct a map $m$,
\[ m :
\begin{array}{ccc}
\sum_j Ae_{\sigma^j\tau_0} &\fleche & \sum_j Ae_{\sigma^j\tau_0} \\
x_je_{\sigma^j\tau_0} , j \neq 0 & \longmapsto & \frac{1}{p^{|S \cap \{\tau_0,\sigma\tau_0,\dots,\sigma^{j-1}\tau_0\}|}}x_j\prod_{k=1}^{f-1} \left(\frac{1}{p}\phi^{k+j}(t_\mathcal O)\right)^{\delta^{\sigma^{-k}\tau_0 \in S}}e_{\sigma^j\tau_0} \\
x_0e_{\tau_0} & \longmapsto & x_0\prod_{k=1}^{f-1} \left(\frac{1}{p}\phi^k(t_\mathcal O)\right)^{\delta^{\sigma^{-k}\tau_0 \in S}} e_{\tau_0}
\end{array}
\]
and check that it induces an isomorphism
\[ E^{\Phi = D_S} \overset{\sim}{\fleche} (\Fil E)^{\Phi = p},\]
where $D_S$ has matrix, in the basis $e_{\tau_0},e_{\sigma\tau_0},\dots,e_{\sigma^{-1}\tau_0}$,
\[
\left(
\begin{array}{ccccc}
p^{\delta_{\sigma^{-1}\tau_0 \not\in S}} & & & &\\
& p & & & \\
& &p^{\delta_{\sigma\tau_0 \not\in S}} & & \\
& & & \ddots & \\
& & & & p^{\delta_{\sigma^{-2}\tau_0 \not\in S}}
\end{array}
\right), \quad \text{i.e. } D_S(e_{\sigma^j\tau_0}) = p^{\delta_{\sigma^{j-1}\tau_0 \not\in S}}e_{\sigma^{j}\tau_0}.
\]
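For example, if $f = 2$ and $S = \{\sigma\tau_0\}$ (so that $\mathcal{LT}_S = \mathcal{LT}_{\sigma\tau_0}$), then $E^{\Phi = D_S} = \{x_0e_{\tau_0} + \phi(x_0)e_{\sigma\tau_0} : x_0 \dans \ZZ_{p^2}\}$, with
\[ D_S = \left(\begin{array}{cc} 1 & \\ & p \end{array}\right) \quad \text{in the basis } (e_{\tau_0},e_{\sigma\tau_0}),\]
and $m$ multiplies the $e_{\tau_0}$-coordinate by $\frac{1}{p}\phi(t_\mathcal O)$ and the $e_{\sigma\tau_0}$-coordinate by $\frac{1}{p}\phi^2(t_\mathcal O) = t_\mathcal O$.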
\begin{rema}
One can relate the preceding result to the crystalline Hodge-Tate map, which identifies $T_p\mathcal{LT}_S$ with $(\Fil E)^{\Phi = p}$ and which, modulo $\ker \theta$, recovers the Hodge-Tate map. In particular, thanks to the explicit description of $\mu$-ordinary $p$-divisible groups, one can see that the multiplication map $m$ corrects the failure of surjectivity of the crystalline Hodge-Tate map, in the sense that the projection onto $\omega_{G^D,\tau}$ of the reduction modulo $\ker \theta$ of $m^{-1} \circ \alpha_G^{cris}$ is surjective.
We will not manage to prove this statement directly in the general case, which is why we have to pass to a suitable exterior power.
\end{rema}
\subsection{Modification of periods}
We now apply the methods of the preceding subsection to a general Barsotti-Tate $\mathcal O$-module.
Let \[\underline{\Lambda} = (\Lambda, \Fil^{\underline{\cdot}}\Lambda, \Phi),\] be the filtered $\mathcal O$-crystal introduced above.
Define
\[ m :
\begin{array}{ccc}
\Lambda = \bigoplus_{\tau'} \Lambda_{\tau'} & \fleche & \bigoplus_{\tau'} \Lambda_{\tau'} \\
x_{\tau'}& \longmapsto & t_\mathcal O^{\max(0,q_\tau - q_{\tau'})}\left( \prod_{j=1}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\tau'})}\right)x_{\tau'},
\end{array}
\]
the map of multiplication by the periods. It is $\mathcal O$-equivariant, i.e. it decomposes as $m = \bigoplus_{\tau'} m_{\tau'}$, where
\[ m_{\tau'} : \Lambda_{\tau'} \fleche \Lambda_{\tau'}.\]
\begin{rema}
The idea underlying the construction of $m$ is that, whenever $\tau'$ satisfies $q_{\tau'} < q_\tau$, we divided $V$ by $p^{{q_\tau} - q_{\tau'}}$ in order to construct $\Ha_\tau$; hence, to relate our equation $\Phi = p$ (which becomes $\Phi = p^{q_\tau}$ on $\Lambda$ and $\Phi^f = p^{fq_\tau}$ on $\Lambda_\tau$), we want, each time
$q_{\tau'} < q_{\tau}$, to lower the power of $p$ in the preceding equation by $q_\tau - q_{\tau'}$. But we want to do this so as to keep a (cyclic) equation involving $\Phi$;
so each time we ``divide'' an element $x$ of $\bigwedge T_pG \cap \Lambda_{\tau'}$ by $t_{\mathcal O}^n$, we must also divide
$\Phi^j(x) \dans\bigwedge T_pG \cap \Lambda_{\sigma^j\tau'}$ by $\frac{1}{p^n}\phi^j(t_\mathcal O)^n$.
Another way of viewing the construction of $m$ is that, over the $\mu$-ordinary locus, $p$-divisible $\mathcal O$-modules are explicit, and we want to divide the image of $\alpha_G^{cris}$, which corresponds to
\[\Fil^{\underline 0} \Lambda^{\Phi = p^{q_\tau}},\]
as much as possible, so as to make $\alpha_G^{cris}\otimes 1$ surjective. The periods appearing in $m$ are then exactly the periods of the determinant of the crystal of the canonical $p$-divisible sub-$\mathcal O$-module of height $p_\tau$ (i.e. the step of height $p_\tau$ of the canonical filtration).
\end{rema}
Let us introduce the following notation:
\[ \forall \tau', \quad f_{\tau'} = \min(0,q_{\tau'}-q_\tau), \quad \text{and} \quad \underline f = (f_{\tau'})_{\tau'} \dans \ZZ^f.\]
Note that $\underline f \leq \underline 0$.
Also set
\begin{IEEEeqnarray*}{ccccc}
D_{\underline f} & = &
\left(
\begin{array}{ccccc}
p^{q_\tau + f_{\sigma^{f-1}\tau}}\Id_r & & & & \\
&p^{q_\tau + f_{\tau}}\Id_r & & & \\
& & p^{q_\tau + f_{\sigma\tau}}\Id_r & & \\
& & & \ddots & \\
& & & & p^{q_\tau+ f_{\sigma^{f-2}\tau}}\Id_r
\end{array}
\right)
\\
&=& \Diag(p^{q_\tau+f_{\sigma^{i-1}\tau}}\Id_r, i = 0 \dots f-1) = \Diag(p^{\min(q_\tau,q_{\sigma^{i-1}\tau})}\Id_r, i = 0 \dots f-1)
\end{IEEEeqnarray*}
in a basis adapted to the decomposition $\Lambda_\tau \oplus \Lambda_{\sigma\tau} \oplus \dots \oplus \Lambda_{\sigma^{f-1}\tau}.$
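For instance (a toy case), if $f = 2$, $q_\tau = 2$ and $q_{\sigma\tau} = 0$, then $f_{\tau} = 0$, $f_{\sigma\tau} = -2$, and
\[ D_{\underline f} = \Diag(p^{\min(2,q_{\sigma\tau})}\Id_r, p^{\min(2,q_\tau)}\Id_r) = \Diag(\Id_r, p^2\Id_r).\]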
\begin{prop}
The map $m$ defined above sends $\Fil^{\underline f}\Lambda$ into $\Fil^{\underline 0} \Lambda$.
Moreover, $m$ sends $\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}$ into $\left(\Fil^{\underline 0}\Lambda\right)^{\Phi = p^{q_\tau}}$.
\end{prop}
\dem
Let $j \dans \{0,\dots, f-1\}$. Since $\phi^j(t_\mathcal O) \dans \Fil^1A$ if and only if $j = 0$, each $x_{\tau'} \dans \Fil^{f_{\tau'}}\Lambda_{\tau'}$ is indeed sent into $\Fil^{f_{\tau'} + \max(0,q_\tau - q_{\tau'})} \Lambda_{\tau'} = \Fil^0\Lambda_{\tau'}$, whence the first assertion.
Suppose now that $x = \sum_{\tau'} x_{\tau'}$ satisfies $\Phi = D_{\underline f}$, so that, for every $\tau'$,
\[ \Phi(x_{\tau'}) = p^{q_\tau + f_{\tau'}}x_{\sigma\tau'} = p^{\min(q_\tau,q_{\tau'})}x_{\sigma\tau'}.\]
Then $t_\mathcal O^{\max(0,q_\tau - q_{\tau'})}\left( \prod_{j=1}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\tau'})}\right)x_{\tau'}$ satisfies
\begin{eqnarray*}
\Phi\left(t_\mathcal O^{\max(0,q_\tau - q_{\tau'})}\left( \prod_{j=1}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\tau'})}\right)x_{\tau'}\right)
\\= p^{\min(q_\tau,q_{\tau'})}
\left( \phi(t_\mathcal O)^{\max(0,q_\tau - q_{\tau'})}\right)\left( \prod_{j=1}^{f-1}\left(\frac{1}{p}\phi^{j+1}(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\tau'})}\right)x_{\sigma\tau'}\\
= p^{\min(q_\tau,q_{\tau'})} \phi(t_\mathcal O)^{\max(0,q_\tau - q_{\tau'})}\left( \prod_{j=2}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\sigma\tau'})}\right)\left(\frac{\phi^f(t_\mathcal O)}{p}\right)^{\max(0,q_\tau-q_{\sigma^{-f}\sigma\tau'})}x_{\sigma\tau'} \\
= p^{\min(q_\tau,q_{\tau'})} t_\mathcal O^{\max(0,q_\tau-q_{\sigma\tau'})}\phi(t_\mathcal O)^{\max(0,q_\tau - q_{\sigma^{-1}\sigma\tau'})}\left( \prod_{j=2}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\sigma\tau'})}\right)x_{\sigma\tau'} \\
= p^{q_\tau} t_\mathcal O^{\max(0,q_\tau-q_{\sigma\tau'})}
\frac{\phi(t_\mathcal O)^{\max(0,q_\tau - q_{\sigma^{-1}\sigma\tau'})}}{p^{\max(0,q_\tau-q_{\tau'})}}\left( \prod_{j=2}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\sigma\tau'})}\right)x_{\sigma\tau'} \\
= p^{q_\tau}t_\mathcal O^{\max(0,q_\tau-q_{\sigma\tau'})}\left(\prod_{j=1}^{f-1}\left(\frac{\phi^j(t_\mathcal O)}{p}\right)^{\max(0,q_\tau-q_{\sigma^{-j}\sigma\tau'})}\right)x_{\sigma\tau'}
\end{eqnarray*}
Hence $\sum_{\tau'} t_\mathcal O^{\max(0,q_\tau - q_{\tau'})}\left( \prod_{j=1}^{f-1}\left(\frac{1}{p}\phi^j(t_\mathcal O)\right)^{\max(0,q_\tau - q_{\sigma^{-j}\tau'})}\right)x_{\tau'}$ satisfies the equation \[\Phi = p^{q_\tau}.\qedhere\]
\edem
We would like to relate $\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}$ and $\left(\Fil^{\underline 0}\Lambda\right)^{\Phi = p^{q_\tau}}$, in order to control the image of $\alpha_{G,\tau,\eps}$, which embeds into the reduction modulo $\Fil^1A$ of the second space, in terms of $\Ha_\tau(G)$; the latter appears naturally when running a Newton method on $\det(\omega_{G^D,\tau})^{\zeta_\tau = \id \otimes 1}$, which is a quotient of the first space.
The idea is thus to show that $m$ is bijective, in order to relate the image of the Hodge-Tate map to $\Ha_\tau(G)$.
\begin{prop}
The map $m$ is injective.
\end{prop}
\dem
This is immediate, since $A_{cris}$ has no $t$-torsion and, up to a unit (in $\ZZ_p^\times$),
\[ t = t_\mathcal O\prod_{j=1}^{f-1} \frac{1}{p}\phi^j(t_\mathcal O).\]
Indeed, the right-hand side lies in $(\Fil^1A_{cris})^{\phi = p} = \ZZ_pt$, and its valuation in $\Gr^1A_{cris}$ is strictly less than 1 (cf. \cite{FGL}, page 130, or the reminders of Section \ref{sect42}).
\edem
We want to show that $m$ (more precisely, $m_\tau$) is surjective, and for this we will use Theorem 5 of \cite{Fal}.
\subsection{Filtered crystals and Faltings' theorem}
We recall here the heart of the article \cite{Fal}, which will allow us to show that $m$ is surjective.
\begin{defin}
We define the category of admissible filtered crystals of amplitude $a$, denoted $\mathcal{MF}_{[-a,0]}(\mathcal O_C)$, as the category of triples
\[ (M, \Fil^\cdot M, \Phi),\]
where $M$ is a free filtered $A_{cris}$-module, the elements $e_i$ of a basis having degrees $q_i$ with $-a \leq q_i \leq 0$, and $\Phi : M \fleche M$ is a $\phi$-linear endomorphism
such that the restriction of $\Phi$ to $\Fil^qM$ is divisible by $p^{q+a}$, satisfying moreover the following condition:
\begin{equation}
\label{Ad}
\tag{Ad}
\sum_{i=-a}^0 \frac{\Phi}{p^{i+a}}(\Fil^i M) \text{ generates } M \text{ over } A_{cris}.
\end{equation}
\end{defin}
\begin{rema}
Strictly speaking, Faltings does not use crystals of the form $\mathcal{MF}_{[-a,0]}(\mathcal O_C)$ but rather $\mathcal{MF}_{[0,a]}(\mathcal O_C)$, which is the same category up to shifting the filtration (Faltings' version is the ``dual'' of ours: his functors are contravariant).
\end{rema}
\begin{exemple}
If $G$ is a $p$-divisible group and $E$ its crystal, then
\[ (E, \Fil E, \Phi) \dans \mathcal{MF}_{[-1,0]}(\mathcal O_C)\]
\end{exemple}
We then have the following theorem (see also \cite{Chen} in the case $a=1$ of $p$-divisible groups):
\begin{theor} [Faltings]
\label{thrfal}
Suppose $a \leq p-2$. Let $M \dans \mathcal{MF}_{[-a,0]}(\mathcal O_C)$, and set
\[ \mathbb D(M) = (\Fil^0 M)^{\Phi = p^a}.\]
Then:
\begin{enumerate}
\item $\mathbb D(M)$ is a free $\ZZ_p$-module of rank equal to the rank of $M$ over $A_{cris}$.
\item We have inclusions, strictly compatible with the filtrations,
\[t^a M \subset \mathbb D(M) \otimes_{\ZZ_p} A_{cris} \subset M.\]
\end{enumerate}
\end{theor}
\dem[sketch]
The idea is to fix a basis of $M$ over $A_{cris}$ and to reduce to equations.
Since $\Phi$ is divisible by $p^a$ on $\Fil^0M$, one considers the matrix of $\Phi_a = \frac{\Phi}{p^a}$ and obtains equations for
\[(\Fil^0M)^{\Phi = p^a} = (\Fil^0M)^{\Phi_a = \id},\]
and one reduces these equations modulo $p$, that is, one looks for their solutions in $A_{cris}/p$.
Since $a < p-1$, Faltings uses the following (cf. \cite{Fal} p.~127, \cite{Fal2} p.~37, or \cite{Chen} Lemma 4.7 (which adapts to the general case)):
\begin{lemm}
\label{lemfilp}
Reduction modulo $(p,\Fil^p)$ induces an isomorphism
\[ \left(\Fil^0 (M\otimes A_{cris}/p)\right)^{\Phi_a = \id} \overset{\sim}{\fleche} \left(\Fil^0 (M\otimes \quotient{A_{cris}}{(p,\Fil^pA_{cris})})\right)^{\Phi_a = \id}.\]
\end{lemm}
Thanks to the proposition below, one is thus reduced to solving equations in $\mathcal O_C/p$, which lift uniquely to $\mathcal O_C$ by a Newton method, and one can count their solutions: there are exactly $p^{\rg_A M}$ of them.
Faltings then shows that every solution in $\left(\Fil^0 (M\otimes A_{cris}/p)\right)^{\Phi_a = \id}$ lifts uniquely to $A_{cris}$, by a topological Nakayama lemma.
\edem
\begin{rema}
In the case of $p$-divisible groups, a more detailed proof (over a more general base) can be found in \cite{Chen}.
\end{rema}
\begin{prop}
Filter $\mathcal O_C/p\mathcal O_C$ by $\Fil^i (\mathcal O_C/p\mathcal O_C) = p^{\frac{i}{p}}\mathcal O_C/p\mathcal O_C$ and define
\[ \theta :
\begin{array}{ccc}
\mathcal O_C/p\mathcal O_C&\fleche & \mathcal O_C/p\mathcal O_C \\
x & \longmapsto & x^p .
\end{array}
\]
Then there is an isomorphism, compatible with the maps $\theta$ and identifying the filtrations,
\[ A_{cris}/(p,\Fil^pA_{cris}) \overset{\sim}{\fleche} \mathcal O_C/p\mathcal O_C.\]
\end{prop}
For the proof we refer, for example, to \cite{Chen}, Lemma 4.8. Let us record the images of the periods defined in Section \ref{sect42} under this isomorphism:
since $t,t_\mathcal O$ lie in $\Fil^1A_{cris}$, by the valuations recalled in Section \ref{sect42}, their images modulo $(p,\Fil^pA)$ under the preceding isomorphism
are elements of valuation $\frac{1}{p} + \frac{1}{p-1}$ and $\frac{1}{p} + \frac{1}{p^f-1}$ respectively. The valuation of the image of
$\frac{1}{p}\phi^i(t_\mathcal O)$, for $i \dans\{1,\dots,f-1\}$, is $\frac{p^i}{p^f-1}$: it does not lie in $\Fil^1A_{cris}$.
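As a sanity check, take $p = 3$ and $f = 2$: the images of $t$, $t_\mathcal O$ and $\frac{1}{p}\phi(t_\mathcal O)$ then have valuations
\[ \frac{1}{3}+\frac{1}{2} = \frac{5}{6}, \quad \frac{1}{3}+\frac{1}{8} = \frac{11}{24}, \quad \frac{3}{8} = \frac{9}{24},\]
and indeed $\frac{11}{24} + \frac{9}{24} = \frac{5}{6}$, consistently with $t = t_\mathcal O \cdot \frac{1}{p}\phi(t_\mathcal O)$ up to a unit.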
\begin{rema}
Suppose $M$ is a filtered crystal in $\mathcal{MF}_{[-a,0]}(\mathcal O_C)$ equipped with an action of $\mathcal O$.
Then $\mathbb D(M)$ is naturally an $\mathcal O$-module, free because torsion-free.
Indeed, the action of $\mathcal O$ is assumed to commute with $\Phi$;
in particular, $(\Fil^r M)^{\Phi = p^a}$ carries an action of $\mathcal O$.
\end{rema}
\subsection{Modification of the crystal $\Lambda$}
Throughout this section we assume $q_\tau < p-1$.
We have constructed an admissible filtered crystal
\[ \underline{\Lambda} = (\Lambda, \Fil^{\cdot}\Lambda,\Phi) \dans \mathcal{MF}_{[-q_\tau,0]}(\mathcal O_C),\]
(turning the filtration into a $\ZZ$-filtration by setting $\Fil^k \Lambda = \Fil^{\underline k} \Lambda$ for every $k \dans \ZZ$), the admissibility morally following from the fact that $E$ is admissible, hence so are its exterior powers (this is easy to check directly).
Faltings' theorem then tells us that if $q_\tau < p-1$, then
\[ \mathbb D(\underline \Lambda) = (\Fil^{\underline 0}\Lambda)^{\Phi = p^{q_\tau}},\]
is a free $\ZZ_p$-module of rank $f\binom{h}{q_\tau} = \rg_{A_{cris}} \Lambda =: fr$.
\begin{prop}
\label{pro414}
We have the following equality in $\Lambda$:
\[ \bigwedge^{q_\tau}_{A_{cris} \otimes \mathcal O} (\Fil^0 E)^{\Phi = p} = (\Fil^{\underline0}\Lambda)^{\Phi = p^{q_\tau}}.\]
\end{prop}
\dem
There is an obvious natural inclusion \[\bigwedge^{q_\tau}_{\mathcal O}(\Fil^0E)^{\Phi = p} \subset (\Fil^{\underline 0} \Lambda)^{\Phi = p^{q_\tau}} \subset \Lambda,\] simply
by definition of $\Fil^{\underline 0} \Lambda$.
Moreover, Faltings' theorem, applied to $E$ and to $\Lambda$, tells us that the two modules have the same rank.
It therefore suffices to see that the cokernel (which is automatically of finite type) vanishes. Furthermore, the two modules have the same number of elements modulo $p$, so we may reduce modulo $p$ and check that the cokernel vanishes, i.e. that the inclusion remains an inclusion modulo $p$.
Now $\bigwedge^{q_\tau}_{\mathcal O} (\Fil^0 E)^{\Phi = p}\otimes_{\mathcal O}A_{cris} \supset t^{q_\tau}\Lambda$, whose image in $\Lambda/p\Lambda$ has rank equal to the rank of $\Lambda$ over $A_{cris}$ ($t^{q_\tau} \neq 0 \pmod p$ since $q_\tau < p-1$).
\edem
Recall that we want to show that
\[ m : \left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}} \fleche \left(\Fil^{\underline 0}\Lambda\right)^{\Phi = p^{q_\tau}},\]
is surjective. We already know it is injective, so we first show that the two $\mathcal O$-modules have the same rank.
\begin{defin}
We modify the crystal $\underline \Lambda$ into
\[ \underline{\Lambda'} = (\Lambda, (\Fil^{\underline{f + r}} \Lambda)_{r \in \ZZ}, p^{q_\tau}D_{\underline f}^{-1}\Phi),\]
which is well defined, since $D_{\underline f}$ divides $p^{q_\tau}\Id_{fr}$.
\end{defin}
\subsection{Admissibility}
Recall that $(E,\Fil E, \Phi)$ is a filtered $\mathcal O\otimes_{\ZZ_p}A_{cris}$-module, which decomposes as
\[ E = \bigoplus_{\tau'} E_{\tau'} \quad \text{and}\quad \Fil E = \bigoplus_{\tau'} \Fil E_{\tau'}.\]
Moreover, for every $\tau'$ there is a basis $(e_1^{\tau'},\dots,e_h^{\tau'})$ of $E_{\tau'}$ such that
\[ \Fil E_{\tau'} = (e_1^{\tau'},\dots,e_{p_{\tau'}}^{\tau'})\Fil^1A + (e_{p_{\tau'}+1}^{\tau'},\dots,e_h^{\tau'})A,\]
so that the higher filtration, given by $\Fil^iE = (\Fil^iA)\Fil E + (\Fil^{i+1}A)E$, reads simply
\[\Fil^i E_{\tau'} = (e_1^{\tau'},\dots,e_{p_{\tau'}}^{\tau'})\Fil^{i+1}A + (e_{p_{\tau'}+1}^{\tau'},\dots,e_h^{\tau'})\Fil^iA.\]
For every $\tau'$, write
\[ W_{\tau'} = (e_{p_{\tau'}+1}^{\tau'},\dots,e_h^{\tau'})A \subset \Fil E_{\tau'}, \quad \text{and} \quad W = \bigoplus_{\tau'} W_{\tau'}.\]
$W$ is a lift via $\theta$ of $\omega_{G^D}$, but it is a priori not stable under $\Phi$. We then have
\begin{equation}
\label{filW}
\Fil^i E_{\tau'} =\Fil^{i+1}E_{\tau'} + (\Fil^iA)W_{\tau'}.\end{equation}
Recall that we have fixed a $\tau$ such that $q_\tau \not\in\{0,h\}$ and that
\[\Lambda = \bigwedge_{\mathcal O\otimes A}^{q_\tau} E = \bigoplus_{\tau'} \Lambda_{\tau'},\]
whose filtration is given by the ``convolution'' of that of $E$.
\begin{prop}
\label{profil}
Let $\tau'$ be such that $q_{\tau'} < q_\tau$, and let $0 \geq s > -q_\tau + q_{\tau'} = f_{\tau'}$. Then
\begin{IEEEeqnarray*}{rcl}
\Fil^s \Lambda_{\tau'} &=& \sum_{j = s + q_\tau-q_{\tau'}}^{s+q_\tau} (\Fil^{j}A) \Fil^{s-j}\Lambda_{\tau'} \\
&=& (\Fil^{s+q_\tau}A)\Lambda_{\tau'} + (\Fil^{s+q_\tau-1}A)\Fil^{-q_\tau + 1} \Lambda_{\tau'} + \dots + (\Fil^{s+q_\tau-q_{\tau'}}A)\Fil^{-q_\tau+q_{\tau'}}\Lambda_{\tau'}.\end{IEEEeqnarray*}
In other words, beyond the dimension of $\omega_{G^D,\tau'}$, the filtration is only modified by the action of the scalars ($\Fil^.A$).
\end{prop}
\dem
It suffices to write \[\Fil^iE = (\Fil^i A)W + \Fil^{i+1}E,\]
together with
\begin{equation}
\label{annW}
\bigwedge^r W_{\tau'} = 0, \quad \forall r > q_{\tau'}.\end{equation}
Write the definition of $\Fil^s \Lambda_{\tau'}$ as the image of a sum of tensor products,
\[ \Fil^s \Lambda_{\tau'} = \im\left( \sum_{
\begin{array}{c}
i_1,\dots, i_{q_\tau} \\
i_1 +\dots + i_{q_\tau} = s
\end{array}}
\Fil^{i_1}E_{\tau'} \otimes \dots\otimes\Fil^{i_{q_\tau}}E_{\tau'} \fleche \Lambda_{\tau'}\right),\]
and let us concentrate on the image of $\Fil^{i_1}E_{\tau'} \otimes \dots\otimes\Fil^{i_{q_\tau}}E_{\tau'}$, which we can decompose, thanks to (\ref{filW}) (and up to permuting the factors, which is harmless in the exterior power), as
\[\Fil^{i_1}E_{\tau'} \otimes \dots\otimes\Fil^{i_{q_\tau}}E_{\tau'} = \sum_{j=0}^{q_{\tau}} (\Fil^{i_1}A)W_{\tau'} \otimes\dots\otimes (\Fil^{i_j}A)W_{\tau'} \otimes (\Fil^{i_{j+1}+1}A)E_{\tau'}
\otimes \dots \otimes (\Fil^{i_{q_\tau}+1}A)E_{\tau'}.\]
Now, if $j > q_{\tau'}$, the image of $(\Fil^{i_1}A)W_{\tau'} \otimes\dots\otimes (\Fil^{i_j}A)W_{\tau'} \otimes (\Fil^{i_{j+1}+1}A)E_{\tau'}
\otimes \dots \otimes (\Fil^{i_{q_\tau}+1}A)E_{\tau'}$ in $\Lambda_{\tau'}$ vanishes by (\ref{annW}), so we may write
\begin{eqnarray*}
\Fil^{i_1}E_{\tau'} \otimes \dots\otimes\Fil^{i_{q_\tau}}E_{\tau'} \\
= \sum_{j=0}^{q_{\tau'}} (\Fil^{i_1}A)W_{\tau'} \otimes\dots\otimes (\Fil^{i_j}A)W_{\tau'} \otimes (\Fil^{i_{j+1}+1}A)E_{\tau'}
\otimes \dots \otimes (\Fil^{i_{q_\tau}+1}A)E_{\tau'}\\
\subset \sum_{j=0}^{q_{\tau'}} (\Fil^{q_\tau - j + i_1 + \dots + i_j + i_{j+1} + \dots + i_{q_\tau}}A)\underbrace{W_{\tau'} \otimes\dots\otimes W_{\tau'}}_{j}\otimes E_{\tau'}
\otimes \dots \otimes E_{\tau'},
\end{eqnarray*}
whose image in $\Lambda_{\tau'}$ is, by definition, contained in
\[ \sum_{j=0}^{q_{\tau'}} (\Fil^{q_\tau-j + s}A) \Fil^{-q_\tau + j}\Lambda_{\tau'} \qedhere.\]
\edem
\begin{prop}
\label{prophifil}
Let $\Lambda$ be the preceding crystal.
Then, for $r+s \leq (p-1) - q_\tau$, the $A_{cris}$-submodules generated by
\[\Phi((\Fil^rA)(\Fil^s\Lambda_{\tau'})) \quad \text{and} \quad \Phi(\Fil^{r+s}\Lambda_{\tau'}),\]
coincide.
\end{prop}
\dem
This holds for $A_{cris}$: the submodule generated by $\phi(\Fil^iA_{cris})$ is $p^iA_{cris}$ for every $i \leq p-1$ (cf. \cite{Chen}, Lemma 2.8, for example). Let us write $<S>$ for the $A_{cris}$-submodule generated by $S$.
Then \[<\phi(\Fil^rA\Fil^sA)> = <\phi(\Fil^rA)><\phi(\Fil^sA)> = p^rAp^sA = p^{r+s}A = <\phi(\Fil^{r+s}A)>,\]
for all $r,s \geq 0$ such that $r+s \leq p-1$.
Now let $E$ be the crystal of a $p$-divisible $\mathcal O$-module; then
\[ \Fil^iE_{\tau'} = (\Fil^iA)W_{\tau'} + \Fil^{i+1}E_{\tau'},\]
so, by the case of $A_{cris}$,
\begin{eqnarray*}
<\Phi(\Fil^rA\Fil^sE_{\tau'})> = <\Phi(\Fil^{r}A(\Fil^sA\,W_{\tau'}) + \Fil^{r}A(\Fil^{s+1}A\,E_{\tau'}))>\\
= <\Phi((\Fil^{r+s}A)W_{\tau'} + (\Fil^{r+s+1}A)E_{\tau'})>=<\Phi(\Fil^{r+s}E_{\tau'})>.\end{eqnarray*}
And thanks to the definition of the filtration on exterior powers, the result carries over to $\Lambda_{\tau'}$.
\edem
\begin{rema}
This is of course false without applying $\Phi$. For example, on $A_{cris}$, $\Fil^{r+s}A_{cris} \supset \Fil^rA_{cris}\Fil^sA_{cris}$, but the inclusion is strict, even for $r+s \leq p-1$.
\end{rema}
These last two propositions will allow us to check that the modified filtered crystal $\underline{\Lambda'}$ is indeed admissible.
\begin{prop}
The filtered crystal $\underline{\Lambda'}$ belongs to $\mathcal{MF}_{[-q_\tau,0]}(\mathcal O_C)$.
\end{prop}
\dem
We must check that the restriction of $\Phi' = p^{q_\tau}D_{\underline f}^{-1}\Phi$ to $\Fil^{\underline{f + r}}\Lambda$ is divisible by $p^{q_\tau + r}$ for $r \dans [-q_\tau,0]\cap \ZZ$,
together with the admissibility condition.
Let $-q_\tau \leq r \leq 0$, and let $i_1,\dots,i_{q_\tau} \geq -1$ be such that $i_1 + \dots + i_{q_\tau} = f_{\tau'} + r$.
Then, restricted to \[\Fil^{i_1}E_{\tau'} \otimes\dots\otimes \Fil^{i_{q_\tau}}E_{\tau'},\] the map $\Phi \otimes \dots \otimes \Phi$ is divisible by
\[p^{i_1 + 1 + i_2 + 1 + \dots + i_{q_\tau} + 1} = p^{q_\tau+f_{\tau'}+r},\]
so $\Phi'_{|\Fil^{f_{\tau'} + r}\Lambda_{\tau'}}$ is divisible by
\[ p^{q_\tau - \min(q_\tau,q_{\tau'}) + q_\tau + \min(0,q_{\tau'} -q_\tau) + r} = p^{\max(0,q_\tau-q_{\tau'}) + q_{\tau} + \min(0,q_{\tau'} - q_\tau) + r} = p^{q_\tau + r}.\]
We must then check condition (\ref{Ad}); that is, we must see that $\Lambda$ is generated over $A$ by
\[\sum_{ i = -q_\tau}^{0} \frac{\Phi'}{p^{i+q_\tau}}(\Fil^{\underline i} \Lambda').\]
But in fact $\Lambda$ is generated over $A_{cris}$ by $\frac{\Phi}{p^{q_\tau}}(\Fil^{\underline 0}\Lambda)$, since this is already the case for $E$.
Moreover, for every $\tau'$, Proposition (\ref{profil}) gives
\[\Fil^0{\Lambda_{\tau'}} = (\Fil^{q_\tau}A)\Lambda_{\tau'} + (\Fil^{q_\tau-1}A)\Fil^{-q_\tau+1}\Lambda_{\tau'} + \dots + (\Fil^{q_\tau-q_{\tau'}}A)\Fil^{-q_\tau+q_{\tau'}}\Lambda_{\tau'},\]
so, looking at generated submodules and using Proposition (\ref{prophifil}), the submodule generated by the image under $\Phi$ of $\Fil^0{\Lambda_{\tau'}}$ is, when $q_{\tau'}<q_\tau$,
\begin{eqnarray*}
p^{q_\tau}\Lambda_{\sigma\tau'} = \\ <\phi(\Fil^{q_\tau}A)\Phi(\Lambda_{\tau'}) + \phi(\Fil^{q_\tau-1}A)\Phi(\Fil^{-q_\tau+1} \Lambda_{\tau'})+ \dots + \phi(\Fil^{q_\tau-q_{\tau'}}A) \Phi(\Fil^{-q_{\tau}+q_{\tau'}}\Lambda_{\tau'})>\\
= p^{q_\tau-q_{\tau'}}<\Phi(\Fil^{-q_\tau+q_{\tau'}}\Lambda_{\tau'})>
\end{eqnarray*}
And if $q_\tau \leq q_{\tau'}$, then $\Phi(\Fil^0\Lambda_{\tau'})$ generates $p^{q_\tau}\Lambda_{\sigma\tau'}$.
In other words, in every case, $\Phi(\Fil^{f_{\tau'}}\Lambda_{\tau'})$ generates
\[p^{\min(q_\tau,q_{\tau'})}\Lambda_{\sigma\tau'}.\]
We deduce that $D_{\underline f}^{-1}\Phi(\Fil^{\underline f}\Lambda) = \frac{\Phi'}{p^{q_\tau}}(\Fil^{\underline 0}\Lambda')$ generates $\Lambda$, whence the admissibility.
\edem
\subsection{Division}
We showed in the preceding section that the modified crystal $\underline{\Lambda'}$ lies in $\mathcal{MF}_{[-q_\tau,0]}(\mathcal O_C)$, and the same holds for $\underline{\Lambda}$.
Applying Theorem (\ref{thrfal}), we deduce the following result.
\begin{prop}
Suppose $q_\tau \leq p-2$.
The $\mathcal O$-module $\mathbb D(\underline{\Lambda'}) = \left(\Fil^{0}\Lambda'\right)^{\Phi' = p^{q_\tau}}=
\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}$ is free of rank $\binom{h}{q_\tau}$ over $\mathcal O$.
Moreover, we have the inclusions, strictly compatible with the filtrations,
\[ t^{q_\tau}\Lambda \subset \mathbb D(\underline{\Lambda'})\otimes_{\ZZ_p}A_{cris} \subset \Lambda.\]
\end{prop}
\begin{prop}
Let $(M,\Fil M,\Phi)$ be a filtered crystal in $\mathcal{MF}_{[-a,0]}(\mathcal O_C)$, equipped with an $\mathcal O$-action, that is,
\[ M = \bigoplus_{\tau} M_\tau, \quad \text{and} \quad \Phi : M_\tau \fleche M_{\sigma\tau} \quad \forall \tau.\]
Then for every $\tau$ the projection
\[\pi_\tau : M \fleche M_\tau,\]
induces an isomorphism
\[ \pi_\tau : \mathbb D(M) =\left(\Fil^0M\right)^{\Phi = p^a} \overset{\sim}{\fleche} \pi_\tau(\left(\Fil^0M\right)^{\Phi = p^a}).\]
In particular, these two $\mathcal O$-modules have the same rank, $\rg_{A_{cris}} M$.
\end{prop}
\dem
We have two $\mathcal O$-modules of finite rank and a surjection
\[ \pi_\tau : \mathbb D(M) =\left(\Fil^0M\right)^{\Phi = p^a} \overset{}{\fleche} \pi_\tau(\left(\Fil^0M\right)^{\Phi = p^a}).\]
Let $x = \sum_\tau x_\tau$ be such that $\Phi(x) = p^ax$, that is,
\[\forall j \geq 0, \quad \Phi^j(x_\tau) = p^{ja}x_{\sigma^{j}\tau}.\]
In other words, since $A_{cris}$ has no $p$-torsion, $x_{\sigma^j\tau}$ is entirely determined by $x_\tau$, which gives the injectivity.
One should also check that such an $x_{\sigma^j\tau}$ lies in $\Fil^0M$, but this is automatic: it is unique, and $x_\tau$ comes (by surjectivity and uniqueness of the preimage) from
$x = \sum_{j=0}^{f-1} x_{\sigma^j\tau} \dans \Fil^0M$.
\edem
We deduce the following main theorem, of ``removal of periods'', or division, which will allow us to relate the image of $\alpha_{G,\tau}$ to the Hasse invariant.
\begin{theor}
The map
\[ m : \left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}} \fleche \left(\Fil^{\underline 0} \Lambda\right)^{\Phi = p^{q_\tau}},\]
is an isomorphism.
\end{theor}
\dem
Indeed, we know that $m$ is an injective $\mathcal O$-linear map between two free $\mathcal O$-modules of the same rank, so its cokernel is finite.
By the preceding proposition, it suffices to check that
\[ m_\tau : \pi_\tau(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}) \overset{}{\fleche} \pi_\tau(\left(\Fil^{\underline 0} \Lambda\right)^{\Phi = p^{q_\tau}}),\]
is an isomorphism, i.e. that its cokernel vanishes.
In particular, it suffices to see that $m_\tau$ is an isomorphism modulo $p$.
Now, by Faltings and Lemma (\ref{lemfilp}), it suffices to look at what happens modulo $(p,\Fil^pA_{cris})$, and the two crystals modulo $(p,\Fil^pA)$ have the same
number of solutions to their respective equations. It therefore suffices to see that $m_\tau$ is injective. In this case we have the description of $A_{cris}/(p,\Fil^pA_{cris})$,
so one can compute that, under the isomorphism $\mathcal O_C/p = A_{cris}/(p,\Fil^pA_{cris})$, $m_\tau$ is given by multiplication by an element of valuation
\[ \sum_{j=1}^{f-1} \frac{p^j\max(0,q_\tau - q_{\sigma^{-j}\tau})}{p(p^f-1)} \leq \frac{q_\tau}{p(p-1)} - \frac{q_\tau}{p(p^f-1)} < \frac{1}{p},\]
and we set
\[K_\tau = \sum_{j=1}^{f-1} \frac{p^j\max(0,q_\tau - q_{\sigma^{-j}\tau})}{p^f-1}.\]
Since $q_\tau \leq p-2$, we have $K_\tau < 1$.
Now we know that
\[ t^{q_\tau} \Lambda \subset \left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}\otimes_{\ZZ_p}A_{cris},\]
so if we pick one of the generators $e$ over $\FP$ of $\pi_\tau(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}) \pmod {p,\Fil^pA}$ and choose an isomorphism
\[\Lambda_\tau \simeq A_{cris}^r,\]
then at least one of the coordinates of $e$ divides $t^{q_\tau} \pmod{p,\Fil^pA}$.
Now, \[m_\tau (0,\dots,0,t^{q_\tau},0,\dots,0) \pmod{p,\Fil^pA_{cris}} = (0,\dots,0,p^{\frac{K_\tau}{p} + \frac{q_\tau}{p(p-1)} + \frac{q_\tau}{p}},0,\dots,0).\]
But using $q_\tau \leq p-2$, we deduce
\[\frac{K_\tau}{p} + \frac{q_\tau}{p(p-1)} + \frac{q_\tau}{p} < \frac{1}{p} + \frac{p-2}{p} + \frac{1}{p} = 1,\]
hence $m_\tau(e) \neq 0 \pmod{p,\Fil^pA_{cris}}$.
We deduce that $m_\tau$ is injective modulo $(p,\Fil^pA_{cris})$, hence modulo $p$ by Lemma (\ref{lemfilp}); and since the two spaces have the same (finite) number of solutions
modulo $p$, $m_\tau \pmod p$ is bijective.
Hence $m_\tau$ is an isomorphism.
\edem
\subsection{Reconstruction of the Hasse invariant}
\label{sectmuha}
In \cite{Her1}, we constructed the partial Hasse invariants, assuming the (truncated) $p$-divisible group $G$ lives over a smooth base $\overline S$ in characteristic $p$,
and therefore considering the crystal in $\Cris(\overline S/\Spec(\ZZ_p))$,
\[ \mathbb D(G) = \mathcal Ext^1(G^D,\mathcal O_{S/\Sigma}) = \bigoplus \mathbb D(G)_\tau,\]
equipped with the maps $V$ and $F$. More precisely, we consider,
\[ \bigwedge^{q_\tau} \mathbb D(G)_\tau,\]
on which the map,
\[ V^f : \bigwedge^{q_\tau} \mathbb D(G)_\tau \fleche \bigwedge^{q_\tau} \mathbb D(G)_\tau^{(p^f)},\]
is divisible by $p^{k_\tau}$; we denote this division by $\phi_\tau$. To prove this fact, one uses the smoothness of the base to locally lift
$\overline{S}$ to a smooth $S/\Spec(\ZZ_p)$, together with the description of the crystal
in terms of a module with connection, in which case it is clear that one can divide. One shows moreover that this map factors on the target through a certain subsheaf
(\cite{Her1} Theorem 3.7), that it is unique, and that it induces a map
\[\widetilde{\Ha_\tau}(G) : \det(\omega_{G^D[p],\tau}) \fleche \det(\omega_{G^D[p],\tau})^{\otimes(p^f)},\] cf. \cite{Her1} Proposition 3.13, Lemma 3.14.
Combined with the smoothness of the stack of $\mathcal{BT}_r^{\mathcal O}$, the construction can therefore be carried out over an arbitrary base.
Nevertheless, given a $p$-divisible $\mathcal O$-module $G$ (untruncated, for simplicity) over $\mathcal O_C/p$, which is not smooth, the expression of $\phi_\tau$ on its crystal, although it exists, seems a priori somewhat complicated.
But in fact, if $E = \bigoplus_\tau E_\tau$ is the evaluation of $\mathbb D(G)$ on $A_{cris} \twoheadrightarrow \mathcal O_C/p$, one can reconstruct $\phi_\tau$. Indeed,
the main ingredient is that,
\[ V(E_{\tau'}) \subset \Fil (E_{\sigma^{-1}\tau'}^{(\phi)}) + pE_{\sigma^{-1}\tau'}^{(\phi)},\]
and if $q_{\tau'} < q_\tau$, then,
\[\im(\bigotimes^{q_\tau} \Fil (E_{\tau'}^{(\phi)}) \fleche (\bigwedge^{q_\tau} E_{\tau'})^{(\phi)}) \subset
\left(\Fil^{q_\tau-q_{\tau'}}A_{cris}\bigwedge^{q_\tau} E_{\tau'}\right)^{(\phi)}\subset
\bigwedge^{q_\tau} E_{\tau'} \otimes_{A_{cris},\phi} p^{q_\tau-q_{\tau'}}A_{cris}.\]
The last inclusion is due to the fact that if $M$ is an $A_{cris}$-module, $z \dans \Fil^iA_{cris}$ and $x \dans M$, then in $M^{(\phi)} = M \otimes_{A_{cris},\phi} A_{cris}$,
\[ (zx) \otimes 1 = x \otimes \phi(z), \quad \text{and} \quad \phi(z) \dans p^i A_{cris}.\]
We can therefore, for each $j$ such that $q_{\sigma^j\tau} < q_\tau$, divide $V^j$ by $p^{q_\tau - q_{\sigma^j\tau}}$, and hence there exists,
\begin{equation}
\label{zetaOC}
\phi_\tau' : \bigwedge^{q_\tau} E_\tau \fleche \bigwedge^{q_\tau} E_\tau\otimes_{A_{cris},\phi^f} A_{cris},\end{equation}
such that $p^{k_\tau}\phi_\tau' = V^f$. Moreover, since $A_{cris}$ is $p$-torsion free, such a $\phi_\tau'$ is unique.
If we denote by $G^{univ}/\mathcal{BT}_r^\mathcal O$ the universal $p$-divisible group over the stack of $\mathcal {BT}_r^\mathcal O$, then there exists a unique morphism
of crystals modulo $p^{r-k_\tau}$,
\[ \phi_\tau : \bigwedge^{q_\tau} \mathcal E_\tau \fleche \left(\bigwedge^{q_\tau} \mathcal E_\tau\right)^{(\phi^f)},\]
satisfying $p^{k_\tau} \phi_\tau = V^f$; hence if $G/\mathcal O_C$ satisfies $G[p^r] = x^*G^{univ}$ for some $x \dans \mathcal{BT}_r^\mathcal O(\mathcal O_C)$, the morphism (\ref{zetaOC}) coincides with
the Hasse invariant associated to $\tau$, modulo $p^{r-k_\tau}$.
\begin{prop}
The reduction modulo $(\ker\theta,p)$ of $\pi_\tau\left(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}\right)$ is contained in,
\[\{ x \dans \det\omega_{G[p]^D,\tau} : \widetilde{\Ha}_\tau(x) = x \otimes 1\}.\]
\end{prop}
\dem
Indeed, let $x \dans \left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}$; then it satisfies \begin{equation}\label{Phif}\Phi^f(x_\tau) = p^{fq_\tau -k_\tau}\cdot x\otimes 1.\end{equation}
Recall that,
\[ V : \Lambda \fleche \Lambda^{(\phi)}, \quad \text{and} \quad F: \Lambda^{(\phi)} \fleche \Lambda,\]
with $VF = FV = p^{q_\tau}$ and $\Phi(x) = F(x\otimes 1)$. We deduce that $V^f(\Phi^f(x)) = p^{fq_\tau}x\otimes 1$ and, by (\ref{Phif}), that $V^f(x) = p^{k_\tau} x\otimes 1$.
Since $A_{cris}$ is $p$-torsion free (and $\bigwedge^{q_\tau} E_\tau$ is free), we deduce that $\phi_\tau'(x) = x \otimes 1$. The result follows by reduction modulo $(\ker\theta,p)$.
\edem
\subsection{Image of the Hodge-Tate map}
Let us now relate the image of the Hodge-Tate map to $\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}}$. One can compute explicitly, thanks to the expression of
$m_\tau$ and the valuations recalled in Section (\ref{sect42}), that modulo $\Fil^1A_{cris} = \ker\theta$, $m_\tau$ is given by multiplication by an element of valuation $K_\tau$, where
\[K_\tau = \sum_{j=1}^{f-1} \frac{p^j\max(0,q_\tau - q_{\sigma^{-j}\tau})}{p^f-1}.\]
Since $q_\tau \leq p-2$, we have $K_\tau < 1$.
We therefore have the following commutative diagram, where $u$ is invertible in $\mathcal O_C$,
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] at (0,0)
{
\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}} & &\left(\Fil^{\underline 0} \Lambda\right)^{\Phi = p} \\
\bigwedge^{q_\tau} \omega_{G^D,\tau} & & \bigwedge^{q_\tau} \omega_{G^D,\tau} \\
};
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$m$} (m-1-3)
(m-1-1) edge node[auto] {$\theta\circ\pi_\tau$} (m-2-1)
(m-1-3) edge node[auto] {$\theta\circ\pi_\tau$} (m-2-3)
(m-2-1) edge node[auto] {$up^{K_\tau}$} (m-2-3);
\end{tikzpicture}
\end{center}
Moreover, this diagram factors through the invertible arrow $m_\tau$, and under the identification
$\left(\Fil^{\underline 0} \Lambda\right)^{\Phi = p} = \bigwedge^{q_\tau}_\mathcal O T_pG$ of Proposition (\ref{pro414}), we deduce that the image of
$\bigwedge^{q_\tau} \alpha_{G,\tau}$ is obtained from the image of the reduction modulo $\Fil^1A_{cris}$ of
$\pi_\tau(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}})$ by multiplication by $up^{K_\tau}$.
Now the elements of $\pi_\tau(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}})$ satisfy the equation,
\[ \Phi^f = p^{fq_\tau - k_\tau}, \quad \text{where} \quad k_\tau = \sum_{j=0}^{f-1} \max(0,q_\tau - q_{\sigma^{j}\tau}).\]
That is, the equation $V^f = p^{k_\tau}\otimes 1$, or again, denoting by $\phi_\tau$ the division of $V^f$ on $\Lambda_\tau$ by $p^{k_\tau}$, constructed in \cite{Her1} and in the previous section,
\[\phi_\tau = \id.\]
In other words, by the previous proposition, since $(\phi_\tau)_{|\Fil^0\Lambda_\tau}$ reduces modulo $(\ker(\theta),p)$ to,
\[\widetilde{\Ha_\tau} : \bigwedge^{q_\tau} \omega_{G^D,\tau}\pmod p \fleche \bigwedge^{q_\tau} \omega_{G^D,\tau}^{(p^f)}\pmod p,\]
whose determinant has valuation exactly $\Ha_\tau(G)$,
the image of the reduction modulo $(\ker\theta,p)$ of $\pi_\tau(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}})$ is contained in,
\[ \{ x \dans \bigwedge^{q_\tau} \omega_{G^D,\tau} \pmod p : \widetilde{\Ha}_\tau(x) = x \otimes 1\}.\]
\begin{prop}
\label{pro7}
Set $N = \{ x \dans \bigwedge^{q_\tau} \omega_{G^D,\tau} \pmod p : \widetilde\Ha_\tau(x) = x \otimes 1\}$. It is a sub-$\FF_{p^f}$-module of the free $\mathcal O_{C}/p$-module of rank 1,
$\bigwedge^{q_\tau} \omega_{G^D,\tau}\pmod p$. Suppose that $\Ha_\tau(G) < 1 - \frac{1}{p^f}$.
Then,
\[\im\left(N \fleche \bigwedge^{q_\tau} \omega_{G^D,\tau}\pmod {p^{1-\Ha_\tau(G)}}\right),\]
is an $\FF_{p^f}$-module of rank 1, generated by an element of valuation $\frac{\Ha_\tau(G)}{p^f-1}$.
If $\Ha_\tau(G) \geq 1 - \frac{1}{p^f}$, this image is still contained in the image of
$$p^{1/p^f}\bigwedge^{q_\tau}\omega_{G^D,\tau}.$$
\end{prop}
\dem
When $\Ha_\tau(G) < \frac{1}{2}$, one can directly apply \cite{Far} Proposition 7. We embed $\FF_{p^f}$ into $\mathcal O_C$ via the multiplicative Teichmüller morphism. In general, after choosing a basis of $\bigwedge^{q_\tau} \omega_{G^D,\tau}$, we are reduced to solving an equation in
$\mathcal O_C/p$ of the form,
\[ X^{p^f} \equiv aX \pmod{p},\]
where $a= a_0p^{v(a)}$ with $a_0 \dans \mathcal O_C^\times$ and $v(a) = \Ha_\tau(G)$. Writing $x = up^w$, with $u \dans \mathcal O_C^\times$ and $w \dans [0,1]$, we deduce,
\[ \min(1,wp^f) = \min(v(a) + w,1).\]
Analyzing the four possibilities, we find that if $v(a) < 1 - \frac{1}{p^f}$, then the solutions lie in the image of
$p^{\frac{v(a)}{p^f-1}}a_0^{\frac{1}{p^f-1}}\mathbb F_{p^f} + p^{1-v(a)}\mathcal O_C$, where $a_0^{\frac{1}{p^f-1}}$ is a choice of root of $a_0$ (in particular, we recover the statement of
Fargues' Proposition 7 when the module has dimension 1), and if $v(a) \geq 1 - \frac{1}{p^f}$, they lie in $p^{\frac{1}{p^f}}\mathcal O_C$.
\edem
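For the reader's convenience, let us spell out the case analysis invoked in this proof (an elementary verification, in the notation above, with $v = v(a)$):
\begin{enumerate}
\item if $wp^f < 1$ and $v + w < 1$, the equation forces $wp^f = v+w$, i.e. $w = \frac{v}{p^f-1}$, which is compatible with $wp^f<1$ exactly when $v < 1 - \frac{1}{p^f}$, and gives the solutions of valuation $\frac{v(a)}{p^f-1}$;
\item if $wp^f \geq 1$ and $v + w \geq 1$, then $w \geq \max(\frac{1}{p^f},1-v)$, which gives $p^{1-v(a)}\mathcal O_C$ when $v < 1-\frac{1}{p^f}$, and $p^{\frac{1}{p^f}}\mathcal O_C$ otherwise;
\item the two mixed cases are impossible, since one side of the equation would equal $1$ while the other would be strictly smaller.
\end{enumerate}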
After the identification $\bigwedge^{q_\tau} \omega_{G^D,\tau} = \mathcal O_C$, we therefore deduce that the image under $\theta$ of $\pi_\tau(\left(\Fil^{\underline f}\Lambda\right)^{\Phi = D_{\underline f}})$ is, up to a unit, contained in \[p^{\frac{\Ha_\tau(G)}{p^f-1}}[\FF_{p^f}] + p^{1-\Ha_\tau(G)}\mathcal O_C.\]
Hence, by the previous diagram, the image of $\bigwedge^{q_\tau}_\mathcal O \alpha_{G,\tau}$ is, up to a unit, contained in,
\[ p^{K_\tau + \frac{\Ha_\tau(G)}{p^f-1}}[\FF_{p^f}] + p^{K_\tau+1-\Ha_\tau(G)}\mathcal O_C.\]
\begin{rema}
For simplicity, we will take $\frac{1}{2}$ as the bound for $\Ha_\tau(G)$ instead of $1- \frac{1}{p^f}$, to make the bounds more consistent with those of Fargues. Moreover, we would probably not have been able to show that our ``canonical'' subgroups are steps of a certain Harder-Narasimhan filtration under the hypothesis $\Ha_\tau(G) < 1 - \frac{1}{p^f}$.
Unfortunately, passing to the exterior power on $\omega_{G^D,\tau}$ will not in general allow us to keep $\frac{1}{2}$ as the bound, so the condition on $\Ha_\tau(G)$ ensuring a canonical subgroup will nevertheless be complicated (but it will simply be $\frac{1}{2}$ when $p$ is large enough).
\end{rema}
In particular, we deduce the following proposition,
\begin{prop}
Let $G/\mathcal O_C$ be a $p$-divisible $\mathcal O$-module of height $h$ and signature $(p_\tau,q_\tau)_\tau$. Let $\tau$ be such that $q_\tau\not\in\{0,h\}$.
Suppose moreover that $q_\tau < p-1$ and $\Ha_\tau(G) < \frac{1}{2}$.
Then,
\[ \im\left(\bigwedge^{q_\tau}_\mathcal OT_pG \fleche \bigwedge^{q_\tau} \omega_{G^D,\tau}\pmod{p^{1+K_\tau - \Ha_\tau(G)}}\right),\]
is a free $\FF_{p^f}$-module of rank 1, generated by an element of valuation $K_\tau + \frac{\Ha_\tau(G)}{p^f-1}$.
\end{prop}
\section{Canonical filtration of the $p$-torsion}
\label{sect6}
In this section, we fix an embedding $\tau$.
Let $G$ be a truncated $p$-divisible $\mathcal O$-module of level $r$, of height $h$ and signature $(p_\tau,q_\tau)$.
Suppose that $r > k_\tau +1$ (so that the Hasse invariant is defined) and that $q_\tau < p-1$.
Recall the following theorem,
\begin{theor} [Wedhorn \cite{Wed2} Theorem 2.8]
Suppose $p > 2$, let $\mathcal D$ be an unramified PEL datum, and let $\underline{X_0}$ be a $BT$ with $\mathcal D$-structure. Then for all $1 \leq n \leq m \leq \infty$,
the morphism of functors,
\[ \Def(\underline{X_0[p^m]}) \overset{[p^n]}{\fleche} \Def(\underline{X_0[p^n]}),\]
is formally smooth.
Deformations are taken with the $\mathcal D$-structure.
\end{theor}
In particular, if $G$ is a truncated Barsotti-Tate $\mathcal O$-module over $\mathcal O_C$, then there exists a $p$-divisible $\mathcal O$-module $\widehat{G}$ over
$\mathcal O_C$ such that $G = \widehat{G}[p^r]$, and we can therefore apply the results of the previous section to $G$!
\subsection{Hodge-Tate kernel}
\begin{prop}
\label{prodegwedge}
Suppose that $\Ha_\tau(G) < \frac{1}{2}$. Then for every $\eps$ such that $K_\tau + \frac{\Ha_\tau(G)}{p^f-1} < \eps < 1 + K_\tau - \Ha_\tau(G)$,
\[\im\left(\bigwedge_\mathcal O^{q_\tau} \alpha_{G,\tau}\right)\pmod{p^\eps}\]
is an $\FF_{p^f}$-module of rank 1, and
\[ \deg\Coker\left( \bigwedge^{q_\tau} \alpha_{G,\tau}\otimes 1\right) = K_\tau + \frac{\Ha_\tau(G)}{p^f-1}.\]
\end{prop}
\dem
This is essentially Proposition \ref{pro7}, applied to $\hat G$, a $p$-divisible $\mathcal O$-module such that $\hat G[p^r] = G$, since the image is nontrivial and contained in an $\FF_{p^f}$-module of rank 1.
\edem
In particular, we deduce that $\im(\alpha_{G,\tau,\frac{1+K_\tau-\Ha_\tau(G)}{q_\tau}})$ is generated by at most $q_\tau$ elements over $\mathcal O$.
Set $\eps_\tau = \min(1,\frac{1+K_\tau-\Ha_\tau(G)}{q_\tau}) \leq 1$. The central proposition is then the following,
\begin{prop}
If $\Ha_\tau(G) < 1 + K_\tau - \frac{q_\tau}{p-1}$, then $\frac{1}{p-1} < \eps_\tau$.
Suppose that \[\Ha_\tau(G) < \min(\frac{1}{2}, 1 + K_\tau - \frac{q_\tau}{p-1}).\] Then for every $\frac{1}{p-1} < \eps < \eps_\tau$,
we have,
\[ \dim_{\FF_{p^f}} \Ker \alpha_{G[p],\tau,\eps} = p_\tau.\]
\end{prop}
\dem
Indeed, by Proposition (\ref{proker1}) we have,
\[ \dim_{\FF_{p^f}} \Ker \alpha_{G[p],\tau,\eps} \leq p_\tau.\]
Now the previous proposition ensures that $\im(\alpha_{G[p],\tau,\eps})$ is generated by at most $q_\tau$ elements over $\FF_{p^f}$, hence,
\[ \dim_{\FF_{p^f}} \Ker \alpha_{G[p],\tau,\eps} = p_\tau.\qedhere\]
\edem
\begin{rema} \begin{enumerate}
\item If $p$ is large enough relative to $q_\tau$, the only hypothesis in the proposition is $\Ha_\tau(G) < \frac{1}{2}$.
\item If $K_\tau + \frac{\Ha_\tau(G)}{p^f-1} < \frac{1}{p-1}$, then the previous proposition still applies for $K_\tau + \frac{\Ha_\tau(G)}{p^f-1} < \eps < \eps_\tau$;
indeed, $\im\alpha_{G,\tau,\eps_\tau} \fleche \im\alpha_{G,\tau,\eps}$ is injective for such $\eps$.
\end{enumerate}
\end{rema}
\subsection{Degrees}
Let $K/\QQ_p$ be an arbitrary complete valued extension such that $v(p) =1$. Let us recall the definitions of \cite{FarHN}.
\begin{defin}
Let $M$ be a finitely presented $\mathcal O_K/p^r$-module, killed by a power of $p$.
Write $\delta = \Fitt_0M$; it is a fractional ideal of $\mathcal O_K$.
We then define the degree of $M$ by,
\[\deg M = v(\delta).\]
\end{defin}
\begin{exemple}
If \[ M \simeq \prod_{i=1}^r \quotient{\mathcal O_K}{x_i\mathcal O_K},\]
then $\deg M = \sum_{i=1}^r v(x_i).$
\end{exemple}
\begin{lemm}
\label{lemdeg}
If $M = \Coker(f : L \fleche P)$ with $L,P$ two free $\mathcal O_K/p^r$-modules of the same rank, and $v(\det(f)) < r$,
then $\deg M = v(\det f)$.
\end{lemm}
\dem
Indeed, choose a lift $\widetilde f : \mathcal O_K^n \fleche \mathcal O_K^n$ of $f$ such that $M = \Coker \widetilde f$; then $\det f \equiv \det \widetilde f \pmod{p^r}$.
\edem
\begin{rema}
This is no longer true without the hypothesis $v(\det f) < r$: since $v(\det f) \dans [0,r]$, one can have $v(\det f) = r$ but $\deg M > r$ (take $pI_n$ with $n$ large enough).
\end{rema}
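To make the counterexample of this remark explicit (an illustrative computation, where $f$ now denotes the map of the lemma): take $f = pI_n$ on $L = P = (\mathcal O_K/p^r)^n$ with $n > r$. Then $M = \Coker(f) \simeq (\mathcal O_K/p)^n$ has $\deg M = n > r$, whereas $\det f = p^n = 0$ in $\mathcal O_K/p^r$, so that $v(\det f) = r$.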
From the previous lemma and Proposition (\ref{prodegwedge}), we can in particular deduce the following proposition on the degree of the cokernel of $\alpha_{G,\tau}$.
\begin{prop}
\label{prodeg}
Let $G$ be a $BT_r^{\mathcal O}$ with $r > k_\tau$.
Suppose that \[\Ha_\tau(G) < \min(\frac{1}{2}, 1 + K_\tau - \frac{q_\tau}{p-1}).\]
Then, for every $\min(K_\tau + \frac{\Ha_\tau}{p^f-1},\frac{1}{p-1}) < \eps < K_\tau + 1 - \Ha_\tau$,
\[\deg \Coker\left(\alpha_{G[p],\tau,\eps}\otimes 1\right) = K_\tau + \frac{\Ha_\tau}{p^f-1}.\]
\end{prop}
\dem
By the hypothesis on $G$, we know that, for $K_\tau + \frac{\Ha_\tau}{p-1} < \eps \leq K_\tau + 1 - \Ha_\tau$,
\[\deg \Coker\left(\bigwedge^{q_\tau}\alpha_{G,\tau}\otimes 1\right)_{K_\tau + 1 - \Ha_\tau} = K_\tau + \frac{\Ha_\tau}{p^f-1}.\]
Moreover, since we know that the cokernel of $\alpha_{G,\tau}\otimes 1$ is killed by $p^{\frac{1}{p-1}}$, we have
$p^{\frac{1}{p-1}}\omega_{G^D,\tau} \subset \im\alpha_{G^D,\tau}\otimes 1$, and hence
\[ \deg\Coker\alpha_{G,\tau}\otimes 1 = \deg\Coker\alpha_{G,\tau,\eps}\otimes 1, \quad \forall \eps > \frac{1}{p-1}.\]
Hence $\deg\Coker(\alpha_{G[p],\tau,\eps}\otimes 1) = \deg\Coker(\alpha_{G,\tau,q_\tau\eps_\tau}\otimes1).$
And therefore, since $K_\tau + \frac{\Ha_\tau}{p^f-1} < 1 + K_\tau -\Ha_\tau(G)$, the previous lemma applies in the second equality below,
\begin{IEEEeqnarray*}{cl} \deg\left(\Coker(\alpha_{G[p],\tau,\eps}\otimes 1)\right)
&= \deg\left(\Coker(\alpha_{G,\tau,1 + K_\tau - \Ha_\tau}\otimes 1)\right) \\
&= \deg\Coker\left(\bigwedge^{q_\tau}\alpha_{G}\otimes 1\right)_{1 + K_\tau - \Ha_\tau} \\
&= K_\tau + \frac{\Ha_\tau}{p^f-1}. \qedhere\end{IEEEeqnarray*}
\edem
\subsection{The main theorem}
\begin{theor}
\label{thrptors}
Suppose given a signature $(p_\tau,q_\tau)_\tau$. Choose an embedding $\tau$ such that $q_{\tau} \not\in \{0,h = p_\tau + q_\tau\}.$
Suppose $p -1 > q_\tau$ (so in particular $p \neq 2$).
Let $G$ be a truncated Barsotti-Tate group of level $k_\tau+1$ over $\mathcal O_C$, with an action of $\mathcal O$ and signature $(p_\tau,q_\tau)$.
Suppose moreover that \[\Ha_\tau(G) := \omega < \min(\frac{1}{2},1+K_\tau -\frac{q_\tau}{p-1}).\]
Set $\eps_\tau = \min(1,\frac{K_\tau + 1 - \omega}{q_\tau}) \leq 1$.
Then,
\[ \dim_{\FF_{p^f}} \Ker \alpha_{G[p],\tau,\eps_\tau} = p_\tau.\]
Moreover, under the hypothesis,
\begin{equation}
\label{hypdeg}
\tag{H1}
\frac{2q_\tau}{p-1} < 1 + K_\tau, \quad \text{and} \quad \Ha_\tau(G) < 1 + K_\tau - \frac{2q_\tau}{p-1},
\end{equation}
we have,
\begin{enumerate}
\item Let $C$ be the schematic closure in $G[p]$ of $\Ker \alpha_{G[p],\tau,\eps_\tau}$. Then, writing $E = G[p]/C$,
\[ p^{f-1}\deg_{\sigma\tau}(E) + p^{f-2}\deg_{\sigma^2\tau}(E) + \dots + \deg_{\tau}(E) = K_\tau (p^f-1)+ \Ha_\tau(G).\]
\item The cokernel of $\alpha_{E,\tau}\otimes 1$ has degree $K_\tau + \frac{\Ha_\tau(G)}{p^f-1}$.
\end{enumerate}
Note that $\deg_\tau C_\tau^D = \deg_\tau E$! (But this fails for the other embeddings.)
\end{theor}
\begin{rema}
Hypothesis (\ref{hypdeg}) is only needed to compute the degree formula for $C$, and it is of course superfluous when $p$ is large enough, thanks to $\Ha_\tau(G) < \frac{1}{2}$.
It would be interesting to see whether one can dispense with it.
Moreover, we will sometimes write $C_\tau$ instead of $C$ to make precise which embedding this subgroup is associated to.
\end{rema}
\dem
Set $E = G[p]/C$. By definition of $C$, we have $\mathfrak m_{C, 1- \eps_\tau}\im(\alpha_{C,\tau}\otimes 1) = 0$.
Hence $\mathfrak m_{C,1-\eps_\tau + \frac{1}{p-1}}\omega_{C^D,\tau} = 0$. We deduce that for $\eps$ satisfying,
\[ \min(K_\tau + \frac{\Ha_\tau}{p^f-1},\frac{1}{p-1}) < \eps \leq \eps_\tau - \frac{1}{p-1},\]
(which is possible thanks to hypothesis (\ref{hypdeg}) on $p$), we have,
\[\omega_{G^D,\tau,\eps} \simeq \omega_{E^D,\tau,\eps}.\]
Choose an $\FF_{p^f}$-basis $e_1,\dots,e_{q}$ of $E(O_C)$, and let $E_i$ be the schematic closure in $E$ of $\FF_{p^f}(e_1,\dots,e_i)$.
We then have, \[ 0 = E_0 \subset E_1 \subset E_2 \subset \dots \subset E_q = E,\]
a filtration whose graded pieces are Raynaud $p$-groups equipped with an action of $\mathcal O$.
Consider the filtration $\Fil_i\omega_{E^D,\tau} = \im(\omega_{E_i^D,\tau} \fleche \omega_{E^D,\tau}).$
There is a natural map $q_i : \omega_{(E_i/E_{i-1})^D,\tau} \fleche \Gr_i\omega_{E^D,\tau}$.
Since the image of $e_i$ in $\Gr_i \omega_{E^D,\tau}$ is nonzero, we get $\alpha_{E_i/E_{i-1},\tau}(e_i) \neq 0$.
Moreover, by a direct computation on Raynaud groups, we have,
\[ \deg\Coker\alpha_{E_i/E_{i-1},\tau} = \frac{p^{f-1}\deg_\tau(E_i/E_{i-1})+p^{f-2}\deg_{\sigma\tau}(E_i/E_{i-1}) + \dots + \deg_{\sigma^{f-1}\tau}(E_i/E_{i-1})}{p^f-1}.\]
Hence, since $q_i$ is surjective between monogenic modules,
\[ \deg\Coker\alpha_{E_i/E_{i-1},\tau} = \deg(\Gr_i \omega_{E^D,\tau}/q_i\circ\alpha_{E_i/E_{i-1},\tau}(e_i)).\]
But by Proposition 8 of \cite{Far}, we can write,
\begin{IEEEeqnarray*}{rcl}\deg \Coker(\alpha_{E,\tau,\eps} \otimes 1) &=& \sum_{i=1}^q \deg(\Gr_i(\omega_{E^D,\tau}/ q_i\circ\alpha_{E_i/E_{i-1},\tau}(e_i))) \\
&=& \frac{1}{p^f-1}\sum_{i=1}^q p^{f-1}\deg_{\sigma\tau}(E_i/E_{i-1})+p^{f-2}\deg_{\sigma^2\tau}(E_i/E_{i-1}) + \dots + \deg_{\tau}(E_i/E_{i-1})\\
&=& \frac{p^{f-1}\deg_{\sigma\tau}(E) + p^{f-2}\deg_{\sigma^2\tau}(E) + \dots + \deg_{\tau}(E)}{p^f-1}
\end{IEEEeqnarray*}
But by Proposition \ref{prodeg}, we know that \[\deg \Coker(\alpha_{E,\tau,\eps} \otimes 1) = \deg \Coker(\alpha_{G,\tau,\eps} \otimes 1) = K_\tau + \frac{\Ha_\tau}{p^f-1},\]
whence the result.
\edem
\begin{rema}
The formula agrees with the explicit computation on the $\mu$-ordinary locus.
\end{rema}
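Another consistency check that may be useful to keep in mind: when $f = 1$ (a single embedding, so $\mathcal O = \ZZ_p$), we have $K_\tau = 0$ and the left-hand side collapses to a single term, so the degree formula of the theorem reads
\[ \deg(E) = \deg(G[p]/C) = \Ha(G),\]
which is consistent with the formula of Fargues \cite{Far} for the canonical subgroup.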
\begin{rema}
\label{remdeg}
From the formula for the degrees of $E$, since $\deg_{\tau'} E = \deg_{\tau'} G[p] - \deg_{\tau'} C = p_{\tau'} - \deg_{\tau'} C$, we deduce that,
\begin{IEEEeqnarray*}{ccc}
\sum_{i=1}^{f} p^{f-i}\deg_{\sigma^{i}\tau}(C) &=& \sum_{i=1}^{f} p^{f-i}p_{\sigma^{i}\tau} - \sum_{i=1}^{f} \max(0,q_\tau - q_{\sigma^{-i}\tau})p^{i} - \Ha_\tau \\
& = & \sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - \Ha_\tau
\end{IEEEeqnarray*}
In particular, $C$ has large degree!
Moreover, $C$ has $\mathcal O$-height $p_\tau$, so for every $i$, $\deg_{\sigma^i\tau} C \leq p_\tau$. Next, since $C \subset G[p]$ and $\deg_{\sigma^i\tau}G[p] = p_{\sigma^i\tau}$, we have,
\[\deg_{\sigma^i\tau} C \leq \min(p_\tau,p_{\sigma^i\tau}).\]
Hence, from the equality \[\sum_{i=1}^{f} p^{f-i}\deg_{\sigma^{i}\tau}(C) =\sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - \Ha_\tau,\]
we obtain,
\begin{IEEEeqnarray*}{ccccc}
\sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - \Ha_\tau &=& \deg C + \sum_{i=1}^{f} (p^{f-i}-1)\deg_{\sigma^{i}\tau}(C)\\
&\leq& \deg C + \sum_{i=1}^{f} (p^{f-i}-1)\min(p_{\sigma^i\tau},p_\tau)
\end{IEEEeqnarray*}
And therefore $\deg C \geq \sum_{i=1}^{f} \min(p_{\sigma^i\tau},p_\tau) - \Ha_\tau$.
Hence if $\Ha_\tau < \frac{1}{2}$, then $C$ is canonical, cf. \cite{Bij}, or Proposition (\ref{probij}).
We also have the following inequalities: since $\deg_{\sigma^i\tau} C \leq \min(p_{\sigma^i\tau},p_\tau)$, we get $\deg_{\sigma^i\tau} C \geq \min(p_{\sigma^i\tau},p_\tau) - \frac{\Ha_\tau}{p^{f-i}}$, and hence
\[ \deg_\tau(C_\tau) \geq p_\tau - \Ha_\tau(G) \quad \text{and}\quad \deg_\tau(C_\tau^D) \leq \Ha_\tau.\]
\end{rema}
\begin{prop}
Under the hypotheses of the previous theorem, namely $\frac{2q_\tau}{p-1} < 1 + K_\tau$ and,
\[ \Ha_\tau(G) < \min(\frac{1}{2},1 + K_\tau - \frac{2q_\tau}{p-1}),\]
the subgroup $C$ is in fact the schematic closure of $\Ker(\alpha_{G,\tau,1-\Ha_\tau(G)})$.
\end{prop}
\dem
Indeed, the equality for the weighted sum of the partial degrees of $C_\tau$ tells us in particular, since $\deg_{\sigma^i\tau}C \leq \min(p_\tau,p_{\sigma^i\tau})$, that
\[\deg_\tau(C^D) \leq \Ha_\tau(G).\]
We deduce that $\im(\omega_{C^D,\tau} \fleche \omega_{G[p]^D,1-\Ha_\tau(G)}) = 0$, and hence $C(\mathcal O_C) \subset \Ker(\alpha_{G,\tau,1-\Ha_\tau(G)})$.
\edem
\begin{prop}
Suppose that $\Ha_\tau(G) < \min(\frac{1}{p^f},1 + K_\tau - \frac{2q_\tau}{p-1})$. Then the following sequence is exact:
\[ 0 \fleche C_\tau(K) \fleche G[p](K) \overset{\alpha_{G,\tau}}{\fleche} \omega_{G^D,\tau}.\]
Moreover, the cokernel of the map,
\[ \alpha_{G,\tau} \otimes 1 : G[p](K) \otimes O_C \fleche \omega_{G^D,\tau},\]
has degree exactly $K_\tau + \frac{\Ha_\tau(G)}{p^f-1}$.
\end{prop}
\dem
If $E$ is a Raynaud group scheme with an action of $\mathcal O$ such that $\sum_{i=0}^{f-1} p^{f-i}\deg_{\sigma^i\tau}(E) \geq 1 - \frac{1}{p^f}$,
then one can check that the map $\alpha_{E,\tau}$ vanishes.
Now let $C_\tau$ be the kernel of $\alpha_{G,\tau,\eps_\tau}$ given by the theorem. In this case, filter $C_\tau$ by sub-$\mathcal O$-modules such that the graded pieces $(E_k)_{k = 1,\dots,p_\tau}$ are Raynaud $\mathcal O$-modules.
From the equality
\[\sum_{i=1}^{f} p^{f-i}\deg_{\sigma^{i}\tau}(C_\tau) =\sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - \Ha_\tau,\]
we deduce in particular that $\sum_{k=1}^{p_\tau} \deg_\tau(E_k) = \deg_{\tau}(C_\tau) \geq p_\tau - \Ha_\tau(G)$.
Hence in particular,
\[\deg_\tau(E_k) \geq 1 - \Ha_\tau(G),\]
and therefore,
\[\sum_{i=0}^{f-1} p^{f-i}\deg_{\sigma^i\tau}(E_k) \geq 1 - \Ha_\tau(G) \geq 1 - \frac{1}{p^f}.\]
Hence $\alpha_{E_k,\tau} = 0$, and since one can always filter $C_\tau(\mathcal O_C)$ by sub-$\mathcal O$-modules whose first step contains a given $x \dans C_\tau(O_C)$,
we deduce that $\alpha_{C_\tau,\tau} = 0$.
Moreover, we showed previously that $\dim_{\mathbb F_{p^f}} \Ker(\alpha_{G,\tau}) \leq p_\tau$.
Whence the claim about the exact sequence.
Now, since $\alpha_{G,\tau}$ factors through $G[p]/C_\tau$, and since its Hodge-Tate cokernel was computed in the previous theorem, the proposition follows.
\edem
\subsection{Compatibilities}
\begin{prop}
\label{produal}
Let $G \dans \mathcal{BT}_{k_\tau+1}^\mathcal O(O_C)$. Suppose that $\Ha(G) = \Ha(G^D)$ is strictly less than $\min(\frac{1}{2}, 1 + K_\tau - \frac{2q_\tau}{p-1})$ (so that $G$ and $G^D$ both admit a canonical subgroup).
Denote by $C_\tau$ the canonical subgroup of $G$ and by $D_\tau$ that of $G^D$.
Then $C_\tau = D_\tau^\perp$.
\end{prop}
\dem
Indeed, \begin{IEEEeqnarray*}{ccccc}
\deg D_\tau^\perp &=& \Ht(G^D[p]/D_\tau) - \deg(G^D[p]/D_\tau) &=& fp_\tau - \deg G^D[p] + \deg D_\tau \\
&\geq& fp_\tau - \sum_i q_{\sigma^i\tau} + \sum_i \min(q_\tau,q_{\sigma^i\tau}) - \Ha(G^D)\\
&\geq& \left(\sum_{i=0}^{f-1} p_\tau - q_{\sigma^i\tau} + \min(q_\tau,q_{\sigma^i\tau})\right) - \Ha(G^D)\\
& \geq & \left(\sum_{i=0}^{f-1} p_\tau - q_{\sigma^i\tau} + h - \max(p_\tau,p_{\sigma^i\tau})\right) - \Ha(G^D)\\
& \geq & \left(\sum_{i=0}^{f-1} p_\tau + p_{\sigma^i\tau} - \max(p_\tau,p_{\sigma^i\tau})\right) - \Ha(G^D)\\
& \geq & \left(\sum_{i=0}^{f-1} \min(p_\tau,p_{\sigma^i\tau})\right) - \Ha(G^D)\\
\end{IEEEeqnarray*}
Hence $D_\tau^\perp$ has large degree, and by uniqueness, cf. \cite{Bij} and the appendix (\ref{probij}), $C_\tau = D_\tau^\perp$.
One could also use Proposition \ref{corHNp} of the next section.
\edem
\begin{rema}
If we did not know the compatibility of $\Ha_\tau$ with duality, then assuming $\Ha_\tau(G)$ and $\Ha_\tau(G^D)$ small enough (smaller than
$\min(\frac{1}{2}, 1 + K_\tau - \frac{2q_\tau}{p-1})$), we would recover their equality by comparing the degrees of the subgroups of $G$ and $G^D$ and using the uniqueness result of \cite{Bij}.
\end{rema}
\begin{rema}
In Fargues' construction \cite{Far}, it is proved that the canonical subgroup of $G$ deforms the kernel of Frobenius modulo $p^{1-\Ha(G)}$. A computation on Dieudonné modules in the $\mu$-ordinary case suggests that the canonical subgroup associated to an embedding $\tau$ should correspond to,
\[(p^{r_\tau -1} \Ker F^f)[p],\]
the $p$-torsion of the image under multiplication by $p^{r_\tau - 1}$, where $r_\tau = |\{ \tau' : q_{\tau'} < q_\tau\}|$. But it does not seem obvious that such a subgroup exists in
general (i.e. that it is representable, or finite and flat).
Nevertheless, at the end of the article we will prove a partial result describing a deformation (modulo $\mathfrak m_C$...) of $\Ker F^f$.
In particular, the subgroups $C_\tau$ will be contained in $\Ker F^f$.
\end{rema}
\begin{prop} [Compatibility between the various embeddings]
If $C_\tau$ denotes the canonical subgroup associated to the abscissa $q_\tau$, then $q_\tau \leq q_{\tau'} \Rightarrow C_{\tau'} \subset C_\tau$.
In particular, if $\tau, \tau'$ are associated to the same breakpoint abscissa $q_\tau = q_{\tau'}$, then $C_\tau = C_{\tau'}$.
\end{prop}
\dem
This will follow from the next section (Corollary \ref{corHNp}), since we will prove that $C_\tau$ corresponds to a breakpoint of the Harder-Narasimhan polygon. One could just as well use \cite{Bij}, recalled here
in the appendix, Proposition (\ref{probij}).
\edem
\begin{rema}
In particular, in the case of strict $\mathcal O$-modules, say associated to $\tau_0$, there is only one interesting embedding ($\tau_0$), hence only one canonical subgroup, and in this case $k_{\tau_0}=K_{\tau_0} = 0$, so everything is simpler. Moreover, Faltings' strict duality simplifies many things: in particular, one can show that the canonical subgroup constructed in this way lifts the kernel of $F^f$.
\end{rema}
\section{Computations of Harder-Narasimhan polygons}
\label{sect7}
\subsection{The classical Harder-Narasimhan polygon}
\begin{prop}
Let $G$ be a group scheme with $\mathcal O$-action. Then $\HN_{\mathcal O}(G)(x) = \frac{1}{f}\HN(G)(fx)$ has integral breakpoint abscissas.
\end{prop}
\dem
The breakpoint abscissas of $\HN(G)$ are the heights of the groups appearing in the HN filtration of $G$; since this filtration is stable under $\mathcal O$,
the heights of these groups are multiples of $f$.
\edem
\begin{rema}
In \cite{FarHNpdiv} and \cite{Shen}, ``renormalized'' HN polygons are introduced for $\mathcal{BT}_n$'s by,
\[ \widetilde{\HN}(G[p^n])(x) = \frac{1}{n}\HN(G[p^n])(nx).\]
These polygons have breakpoint abscissas in $\frac{1}{n}\ZZ$; they are no longer necessarily integral!
\end{rema}
\begin{prop}
One can draw the reversed $\mathcal O$-Hodge polygon $\Hdg^\diamond$ (see Figure \ref{figHN}) of a $\mathcal{BT}^\mathcal O$ of signature $(p_\tau,q_\tau)$.
Its breakpoint abscissas are
\[ 0 \leq p^r < p^{r-1} < \dots < p^1 \leq h,\]
and its slopes are given by $(1,\frac{ |\{\tau : p_\tau \geq p^{r-1}\}|}{f},\frac{ |\{\tau : p_\tau \geq p^{r-2}\}|}{f},\dots,\frac{ |\{\tau : p_\tau \geq p^{1}\}|}{f},0)$.
One easily checks that this is also $\widetilde{\HN}_{\mathcal O}(G^{\mu-ord}[p^n])$, the (renormalized) $\mathcal O$-Harder-Narasimhan polygon of the
$p^n$-torsion of the associated $\mu$-ordinary group,
for all $n \dans \NN$.
\end{prop}
\begin{figure}[h]
\begin{center}
\caption{Reversed $\mathcal O$-Hodge polygon associated to the signature $(q_\tau)_{\tau\in \mathcal I}$.}
\label{figHN}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm]
\draw[->,color=black] (-0.5,0.) -- (20.,0.);
\foreach \x in {,1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.,13.,14.,15.,16.,17.,18.,19.}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\draw[->,color=black] (0.,-0.5) -- (0.,10.);
\foreach \y in {,1.,2.,3.,4.,5.,6.,7.,8.,9.}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt);
\clip(-0.5,-1.) rectangle (20.,10.);
\draw (0.,0.)-- (3.,3.);
\draw (3.,3.)-- (4.99016393443,4.50789096126);
\draw (4.99016393443,4.50789096126)-- (6.9868852459,5.32855093257);
\draw (10.,6.)-- (14.0098360656,6.63845050215);
\draw (14.004934692,6.63767010007)-- (17.0049180328,6.65423242468);
\draw [dash pattern=on 1pt off 1pt] (3.,3.)-- (3.,0.);
\draw [dash pattern=on 1pt off 1pt] (4.99016393443,4.50789096126)-- (5.,0.);
\draw [dash pattern=on 1pt off 1pt] (6.9868852459,5.32855093257)-- (7.,0.);
\draw [dash pattern=on 1pt off 1pt] (10.,6.)-- (10.,0.);
\draw [dash pattern=on 1pt off 1pt] (14.004934692,6.63767010007)-- (14.,0.);
\draw [dash pattern=on 1pt off 1pt] (17.004656654,6.65423098166)-- (17.,0.);
\draw [dash pattern=on 1pt off 1pt] (8.10573770492,5.54949784792)-- (8.8631147541,5.7230989957);
\draw (2.8,-0.1) node[anchor=north west] {$p_r$};
\draw (4,-0.1) node[anchor=north west] {$p_{r-1}$};
\draw (6,-0.1) node[anchor=north west] {$p_{r-2}$};
\draw (13.5,-0.1) node[anchor=north west] {$p_1$};
\draw (9.5,-0.1) node[anchor=north west] {$p_2$};
\draw (0.996721311475,2.5) node[anchor=north west] {1};
\draw (15.3352459016,7.6) node[anchor=north west] {0};
\draw (11.5655737705,7.6) node[anchor=north west] {$\frac{n_1}{f}$};
\draw (1.8,5.5) node[anchor=north west] {$\frac{n_1+\dots+n_{r-1}}{f}$};
\draw (4.3,6.5) node[anchor=north west] {$\frac{n_1+\dots+n_{r-2}}{f}$};
\draw (16.6,-0.100430416069) node[anchor=north west] {$h$};
\draw (0.101639344262,-0.116212338594) node[anchor=north west] {$0$};
\end{tikzpicture}
\end{center}
\end{figure}
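As an illustration of how to read this polygon (an illustrative reading, with invented data): take $f = 2$ and a signature with $p_{\tau_1} = 2$, $p_{\tau_2} = 1$, $h = 3$. The distinct values are $p^1 = 2 > p^2 = 1$, and the slope on $[p^2,p^1] = [1,2]$ is $\frac{|\{\tau : p_\tau \geq p^1\}|}{f} = \frac{1}{2}$, so the polygon goes from $(0,0)$ with slope $1$ to $(1,1)$, then with slope $\frac{1}{2}$ to $(2,\frac{3}{2})$, and stays flat up to $(3,\frac{3}{2})$.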
\begin{rema}
We recover a (very) particular case of \cite{Shen}, which predicts that when the Hodge and Newton polygons touch at a breakpoint abscissa of the Newton polygon (as in the $\mu$-ordinary case), these polygons also touch the Harder-Narasimhan polygon, which likewise has a breakpoint at this abscissa.
\end{rema}
\begin{rema}
We have, of course, the equality of sets (and hence of their cardinalities),
\[ \{\tau : p_\tau \geq p^{i}\} = \{\tau : q_\tau \leq q^{i}\}.\]
\end{rema}
\begin{prop}
\label{prokern}
If $\eps < 1 - \frac{1}{p-1}$, the kernel of $\alpha_{G,\tau,n-\eps}$ is generated over $\mathcal O$ by at most $p_\tau$ elements, i.e.
\[ \Ker \alpha_{G,\tau,n-\eps} = \bigoplus_{i=1}^{p_\tau} \quotient{\mathcal O}{p^{a_i}\mathcal O},\quad 0 \leq a_i \leq n.\]
\end{prop}
\dem
This is exactly \cite{Far} Proposition 13, with $\ZZ_p$ replaced by $\mathcal O$ (Proposition 12 generalizes trivially).
\edem
\begin{prop}
\label{pro14}
Let $G$ be a $\mathcal{BT}_n^\mathcal O$ of signature $(p_\tau,q_\tau)_\tau$. Suppose there exists a sub-$\mathcal O$-module $C$ such that,
\[ \Ht_{\mathcal O}(C) = np_\tau \quad \text{and} \quad \deg_\tau(G/C) < 1 - \frac{1}{p-1}.\]
Then, setting $\eps = \deg_\tau(G/C)$, we have $C(\mathcal O_{\overline K}) = \Ker \alpha_{G,\tau, n-\eps}$, which is a free $\mathcal O/p^n\mathcal O$-module.
\end{prop}
\dem
We have $\eps = \deg_\tau(G/C) = \deg_\tau(G) - \deg_\tau(C) = np_\tau - \deg_\tau(C) = \deg(\omega_{C^D,\tau})$, so the composite
$\omega_{C^D,\tau}\fleche \omega_{G^D,\tau} \fleche \omega_{G^D,\tau,\eps}$ vanishes; consequently $C(\mathcal O_K) \subset \Ker(\alpha_{G,\tau,n-\eps})$,
and Proposition (\ref{prokern}) concludes.
\edem
\begin{prop}
\label{propoly0}
Let $\tau \dans \mathcal I$.
Let $C \subset G[p^n]$ be a sub-$\mathcal O$-module of height $fnp_\tau$ (i.e. $\rg(C) = p^{nfp_\tau}$).
Suppose that
\[\deg C > n\sum_{j = 0}^{f-1} \min(p_\tau,p_{\sigma^j\tau}) - \frac{|\{\tau' : q_\tau = q_{\tau'}\}|}{2}.\]
Then the Harder-Narasimhan polygon $\widetilde{\HN}_{\mathcal O}(G[p^n])$ has a breakpoint at the abscissa $p_\tau$.
\end{prop}
\dem
See the proof of Proposition \ref{propoly1}, which applies here as well.
\edem
From this last proposition we deduce in particular,
\begin{corr}
\label{corHNp}
Let $G$ be a $\mathcal{BT}_{k_\tau +1}^\mathcal O$ such that $\Ha_\tau(G) < \min(\frac{1}{2}, 1 + K_\tau - \frac{2q_\tau}{p-1}).$
Let $C_\tau$ be the canonical subgroup given by Theorem (\ref{thrptors}).
Then $C_\tau$ is a step of the Harder-Narasimhan filtration of $G[p]$.
In particular, we recover that $C_\tau$ is compatible with duality in the sense of Proposition (\ref{produal}), and also that, if
Theorem (\ref{thrptors}) applies to $G$ for two embeddings $\tau$ and $\tau'$, then,
\[ q_{\tau'} \leq q_\tau \Rightarrow C_\tau \subset C_{\tau'}.\]
\end{corr}
\dem
We have $\deg C_\tau \geq \sum_{i=1}^{f} \min(p_{\sigma^i\tau},p_\tau) - \Ha_\tau(G)$ and, by hypothesis, $\Ha_\tau(G) < \frac{1}{2}$, so the previous proposition applies. It remains to show that the subgroup $C'$ inducing the breakpoint – i.e. the step of the HN filtration at the abscissa $p_\tau$ – is $C_\tau$. Now $\deg_{\tau'}(C') \leq \min(p_\tau,p_{\tau'})$, since $C'$ has height $p_\tau$ and $C' \subset G$, which has $\tau'$-degree $p_{\tau'}$.
Hence if $\deg_\tau(C') < p_\tau - \frac{1}{2}$, then $\deg(C') = \sum_{\tau'} \deg_{\tau'}(C') < \sum_{\tau'} \min(p_\tau,p_{\tau'}) - \frac{1}{2} < \deg(C_\tau)$, which is absurd since $C'$ is an HN step at the abscissa $p_\tau$. Hence $\deg_\tau(C') \geq p_\tau - \frac{1}{2}$, so that $\deg_\tau(G[p]/C') \leq \frac{1}{2} < 1 - \frac{1}{p-1}$, and Proposition \ref{pro14} ensures that $C'$ is the schematic closure of
$\Ker \alpha_{G,\tau,1-\eps}$; but this is also the case for $C_\tau$, hence $C_\tau = C'$.
\edem
\begin{rema}
Unfortunately, for the $p^n$-torsion with $n >1$, we will not manage to prove the generalization of the previous corollary (because the bounds on $\Ha_\tau$ will be less coarse; one could nevertheless get there at the cost of sacrificing the bounds). Instead, we will have to modify the
Harder-Narasimhan filtration slightly, depending on the embedding $\tau$; this is the object of the next subsection. This can be partly explained by the fact that Theorem \ref{thrptors} does not
give us the degrees of the canonical subgroups, but only linear combinations of the partial degrees.
\end{rema}
\subsection{Degree function and modified Harder-Narasimhan polygons}
Denote by $\mathfrak{Gr}_p^{\mathcal O}(\mathcal O_K)$ the (exact) category of finite flat group schemes over $O_K$ (of order a power of $p$), where
$K$ is a valued extension of $\ZZ_p$.
We assume there exists an embedding $K \supset F$.
\begin{defin}
For every $\tau \in \mathcal I$ and every group scheme $G/\mathcal O_K$ with $\mathcal O$-action, we define a new degree function $\Deg_\tau$ by,
\[ \Deg_\tau(G) = \sum_{j=1}^f p^{f-j}\deg_{\sigma^j\tau}(G).\]
This degree function satisfies the following properties,
\begin{enumerate}
\item $\Deg_\tau$ is additive on exact sequences in $\mathfrak{Gr}_p^{\mathcal O}(\mathcal O_K)$.
\item If $u : G \fleche G'$ is a morphism which becomes an isomorphism on generic fibers, then $\Deg_\tau(G') \geq \Deg_\tau(G)$, with equality if and only if
$u$ is an isomorphism.
\end{enumerate}
\end{defin}
\dem
See \cite{BPS} Proposition 1.19.
\edem
These properties are analogous to those satisfied by the degree function of \cite{FarHN}, and they make it possible to develop a Harder-Narasimhan formalism.
We denote the slope function,
\[ \mu_\tau = \frac{\Deg_\tau}{f\Ht_{\mathcal O}},\]
which takes values in $[0,\frac{p^f-1}{f(p-1)}]$ (since $\deg_{\sigma^j\tau} \leq \Ht_{\mathcal O}$ for every $j$, and $\sum_{j=1}^f p^{f-j} = \frac{p^f-1}{p-1}$). We could have renormalized it to take values in $[0,1]$, but this would have (needlessly) weighed down the formulas that follow.
From now on, in this subsection, fix a $\tau \dans \mathcal I$.
We then have the following proposition, see \cite{FarHN}, Theorem 1, or \cite{And},
\begin{prop}
Let $G$ be a finite flat group scheme over $O_K$ (of order a power of $p$), equipped with an action of $\mathcal O$. It admits a unique filtration by finite flat subgroups,
\[0 = G_0 \subsetneq G_1 \subsetneq G_2 \subsetneq \dots \subsetneq G_r = G\]
such that,
\begin{enumerate}
\item For every $i$, $G_{i+1}/G_{i}$ is semistable for the slope function $\mu_\tau$.
\item For every $i \geq 1$, $\mu_\tau(G_i/G_{i-1}) > \mu_\tau(G_{i+1}/G_i)$.
\end{enumerate}
We will refer to this filtration as the $\tau$-Harder-Narasimhan filtration. We write $\HN_\tau(G)$ for the concave Harder-Narasimhan polygon,
defined by the slopes $(\mu_\tau(G_i/G_{i-1}))_{i = 1,\dots,r}$ with multiplicities $(\Ht_\mathcal O(G_i/G_{i-1}))_{i = 1,\dots,r}$. It is a polygon with integral breakpoint abscissas and
rational slopes.
\end{prop}
\begin{rema}
One can check that if $G$ is $\mu$-ordinary, then its ``classical'' Harder-Narasimhan filtration (i.e. the one defined by the function $\deg$, cf. \cite{FarHN}) satisfies the two properties
of the proposition above; in particular, on the $\mu$-ordinary locus, the Harder-Narasimhan filtrations given by $\mu$ or $\mu_\tau$ coincide, for every $\tau \dans \mathcal I$ (but the polygons differ).
In the general case, the filtrations differ, but one can show that if $^\mu\Ha(G)$ is small enough (though a priori without a precise bound, except for one specific subgroup of these filtrations), then they coincide.
\end{rema}
\begin{exemple}
Suppose that $G$ is a $\mu$-ordinary $p$-divisible $\mathcal O$-module of signature $(p_\tau,q_\tau)$; then one computes explicitly,
\[ \HN_\tau(G[p])(p_{\tau'}) = \frac{1}{f}\sum_{i=1}^f p^{f-i}\min(p_{\tau'},p_{\sigma^i\tau}).\]
\end{exemple}
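For instance, with the invented signature used above ($f = 2$, $p_{\tau_1} = 2$, $p_{\tau_2} = 1$) and $\tau = \tau_1$, the formula gives
\[ \HN_{\tau_1}(G[p])(1) = \frac{1}{2}\left(p\min(1,p_{\tau_2}) + \min(1,p_{\tau_1})\right) = \frac{p+1}{2}, \qquad \HN_{\tau_1}(G[p])(2) = \frac{p+2}{2},\]
so the slope on $[1,2]$ is $\frac{1}{2}$.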
\begin{rema}
The difference between two consecutive slopes of the $\mu$-ordinary polygon $\HN_\tau$ is therefore at least $\frac{1}{f}$; this will be used in the proof of Theorem \ref{thrntors},
to show that the canonical subgroup is a step of the $\tau$-Harder-Narasimhan filtration.
\end{rema}
We will use these ``modified'' filtrations to put the canonical filtration into families. Indeed, the subgroups of the canonical filtration will be subgroups appearing in
the $\tau$-Harder-Narasimhan filtrations, for various embeddings $\tau$. A priori it is not clear that they also appear in the classical Harder-Narasimhan filtration,
whence the need to introduce these new filtrations.
\begin{defin}
If $G$ is a $\mathcal{BT}_n$ with $\mathcal O$-action, we write $\widetilde{\HN}_\tau(G)$ for the renormalization (with respect to $n$) of $\HN_\tau(G)$, that is,
\[\widetilde{\HN}_\tau(G)(x) = \frac{1}{n}\HN_\tau(G)(nx),\]
which is thus a polygon with abscissas between $0$ and $\Ht_\mathcal O(G[p])$.
\end{defin}
We will need the analogue of Proposition \ref{propoly0} for the new filtrations:
\begin{prop}
\label{propoly1}
Let $\tau' \dans \mathcal I$, and let $G$ be a $\mathcal{BT}_n$.
Let $C \subset G[p^n]$ be a sub-$\mathcal O$-module of height $fnp_{\tau'}$ (i.e. $\rg(C) = p^{nfp_{\tau'}}$).
Suppose that
\[\Deg_\tau C > n\sum_{j = 0}^{f-1} p^{f-j}\min(p_{\tau'},p_{\sigma^j\tau}) - \frac{\sum_{j=1}^f p^{f-j}\delta_{q_{\sigma^j\tau} = q_{\tau'}}}{2}.\]
Then the Harder-Narasimhan polygon $\widetilde{\HN}_{\tau}(G[p^n])$ has a breakpoint at the abscissa $p_{\tau'}$.
\end{prop}
\dem
If $\widetilde{\HN}_{\tau}(G[p^n])$ (the renormalized polygon) has no breakpoint at $p_{\tau'}$, then it lies below the polygon $\mathcal P$, see Figure \ref{fig2}
\begin{figure}[h]
\caption{Breakpoint around $\widetilde{\HN}_\tau(G^{\mu-ord}[p^n])(p_{\tau'})$.}
\label{fig2}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.7cm,y=0.7cm]
\draw[->,color=black] (3.,0.) -- (15.,0.);
\foreach \x in {3.,3.5,4.,4.5,5.,5.5,6.,6.5,7.,7.5,8.,8.5,9.,9.5,10.,10.5,11.,11.5,12.,12.5,13.,13.5,14.,14.5}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\clip(3.,-1.) rectangle (15.,8.);
\draw (8.,6.)-- (5.84885245902,4.42898134864);
\draw (8.,6.)-- (12.0923770492,6.68579626973);
\draw [dash pattern=on 1pt off 1pt,color=ffqqqq] (6.9803922743,5.25536360309)-- (9.04412536448,6.17497343756);
\draw [dash pattern=on 1pt off 1pt] (5.,4.)-- (3.79073770492,3.2137733142);
\draw [dash pattern=on 1pt off 1pt] (13.1300819672,6.7962697274)-- (14.8595901639,7.03299856528);
\draw (11.2196218159,5.84072805214)-- (12.7305234552,5.85220581398);
\draw [dash pattern=on 1pt off 1pt,color=ffqqqq] (11.2264121951,4.22405901773)-- (12.7719039984,4.22549373796);
\draw (7.7,-0.2) node[anchor=north west] {$p_{\tau'}$};
\draw (8,5.8) node[anchor=north west] {$v_{\tau'}$};
\draw (10,6.24338106169) node[anchor=north west] {$\widetilde{\HN_\tau}$};
\draw (10,4.51383905308) node[anchor=north west] {$\mathcal P$};
\draw [dash pattern=on 1pt off 1pt] (9.04412536448,6.17497343756)-- (9.,0.);
\draw [dash pattern=on 1pt off 1pt] (8.,6.)-- (8.,0.);
\draw [dash pattern=on 1pt off 1pt] (6.9803922743,5.25536360309)-- (7.,0.);
\draw (6.1,0.04) node[anchor=north west] {$p_{\tau'} - \frac{1}{n}$};
\draw (8.4,0.04) node[anchor=north west] {$p_{\tau'} + \frac{1}{n}$};
\end{tikzpicture}
\end{center}
\end{figure}
(the polygon $\widetilde{\HN}(G^{\mu-ord})$ with a straight line joining the abscissas $p_{\tau'} - \frac{1}{n}$ and $p_{\tau'} +\frac{1}{n}$).
One can explicitly compute
\[\mathcal P(p_{\tau'}) = v_{\tau'} =\frac{\sum_{j = 0}^{f-1} p^{f-j}\min(p_{\tau'},p_{\sigma^j\tau})}{f} - \frac{\sum_{j=1}^f p^{f-j}\delta_{q_{\sigma^j\tau} = q_{\tau'}}}{2nf}.\]
In particular, if there exists a subgroup as in the statement and there is no breakpoint, we contradict Proposition 13 of \cite{FarHN}, which still applies in this setting
(that is, for $\HN_\tau$).
\edem
\begin{prop}
\label{propoly}
Suppose now that the Harder-Narasimhan polygon $\widetilde{\HN}_\tau(G[p^n])$ has a breakpoint at $p_{\tau}$, and assume only the hypothesis,
\begin{equation}
\label{hyp2}
\tag{H2}
\Deg_{\tau} C > n\sum_{j = 0}^{f-1} p^{f-j}\min(p_{\tau},p_{\sigma^j\tau}) - \frac{p-2}{p-1}.
\end{equation}
Then, in addition, $C$ is a step of the $\tau$-Harder-Narasimhan filtration of $G[p^n]$.
\end{prop}
\begin{rema}
Hypothesis (\ref{hyp2}) will allow us to treat the case where $\tau = \tau'$ and $n_{\tau} = |\{\theta: q_{\tau} = q_\theta\}| = 1$, when the hypothesis of the previous proposition is not satisfied.
\end{rema}
\dem
Assume (\ref{hyp2}). Then,
\[ \deg_{\tau} C > np_{\tau} - \frac{p-2}{p-1} \quad \text{i.e.}\quad \deg_{\tau}(G/C) < \frac{p-2}{p-1} = 1 - \frac{1}{p-1},\]
so by point (1) of Proposition \ref{pro14}, $C = \Ker(\alpha_{G,\tau,n-\eps})$, where $\eps = \deg_{\tau}(G/C) < \frac{p-2}{p-1}$.
Let $C'$ be the step of the Harder-Narasimhan filtration of $G[p^n]$ at the abscissa $np_{\tau}$, which exists since we assumed there is a breakpoint.
In this case, we have $\Deg_\tau(C') \geq \Deg_\tau(C) > n\sum_{j = 0}^{f-1} p^{f-j}\min(p_{\tau},p_{\sigma^j\tau}) - \frac{p-2}{p-1}$, so in particular,
\[ \deg_{\tau} C' > np_{\tau} - \frac{p-2}{p-1} \quad \text{i.e.}\quad \deg_{\tau}(G/C') < \frac{p-2}{p-1} = 1 - \frac{1}{p-1}.\]
By point (1) of Proposition \ref{pro14}, we therefore have $C' = \Ker(\alpha_{G,\tau,n-\eps'})$,
where $\eps' = \deg_{\tau}(G/C')$. If $\eps' \leq \eps$, then $C \subset C'$; but $\Ht C = \Ht C'$, so $C = C'$. Likewise if $\eps' \geq \eps$.
\edem
\section{Higher canonical filtration}
\label{sect8}
\subsection{Induction and the main theorem}
\begin{prop}
\label{prorecdeg}
Let $r = k_\tau +1$ and let $G \dans \mathcal{BT}_{r+1}^\mathcal O$ be such that $\Ha_\tau(G) < \frac{1}{p^f + 1}$.
Let $C_\tau$ be its canonical $\tau$-subgroup. Then $\quotient{p^{-r}C_\tau}{C_\tau}$ is a $\mathcal{BT}_r^\mathcal O$, and,
\[\Ha_\tau(\quotient{p^{-r}C_\tau}{C_\tau}) \leq p^f \Ha_\tau(G).\]
More precisely,
\[\Ha_\tau(\quotient{p^{-r}C_\tau}{C_\tau}) = (p^f-1)\deg_\tau(C_\tau^D) + \Ha_\tau(G).\]
\end{prop}
\dem
Consider the map over $\mathcal O_K$,
\[ G \overset{\pi}{\fleche} \quotient{G}{C_\tau}.\]
It induces an exact sequence,
\[\omega_{G^D,\tau} \overset{\pi_*}{\fleche} \omega_{(G/C_\tau)^D,\tau} \fleche \omega_{C_\tau^D,\tau} \fleche 0.\]
Indeed, let us view our groups over $\mathcal O_C/p$ and consider the distinguished triangle (cf. \cite{Ill}, VII.3.1.1.5),
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] at (0,0)
{
\ell_{C_\tau^D} & & \ell_{G^D} \\
& \ell_{(G/C_\tau)^D} & \\
};
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$$} (m-1-3)
(m-1-3) edge node[auto] {$$} (m-2-2)
(m-2-2) edge node[auto,left] {$+1$} (m-1-1)
;
\end{tikzpicture}
\end{center}
$G$ is a $\mathcal{BT}_r$, so its co-Lie complex $\ell_{G^D}$ is $\omega_{G^D} \oplus \omega_{G^D}[1]$. Of course, $G/C_\tau$ is a priori not a $\mathcal{BT}_r$, but we can write the exact sequence,
\[0 \fleche \quotient{p^{-1}C_\tau}{C_\tau} \fleche \quotient{G}{C_\tau} \overset{p}{\fleche} \quotient{pG}{C_\tau}\fleche 0,\]
which induces a distinguished triangle,
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] at (0,0)
{
\ell_{(p^{-1}C_\tau/C_\tau)^D} & & \ell_{(G/C_\tau)^D} \\
& \ell_{(pG/C_\tau)^D} & \\
};
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$$} (m-1-3)
(m-1-3) edge node[auto] {$p$} (m-2-2)
(m-2-2) edge node[auto,left] {$+1$} (m-1-1)
;
\end{tikzpicture}
\end{center}
Now, over $\mathcal O_C/p$, the map $p$ is zero, so we deduce an isomorphism $\ell_{(p^{-1}C_\tau/C_\tau)^D} \simeq \ell_{(G/C_\tau)^D}$.
Since $\quotient{p^{-1}C_\tau}{C_\tau}$ is a $\mathcal{BT}_1$, we deduce that $\ell_{(p^{-1}C_\tau/C_\tau)^D} \simeq
\omega_{(p^{-1}C_\tau/C_\tau)^D}\oplus\omega_{(p^{-1}C_\tau/C_\tau)^D}[1] \simeq \omega_{(G/C_\tau)^D}\oplus\omega_{(G/C_\tau)^D}[1] $ over $\mathcal O_C/p$,
and the long exact sequence of the first triangle then gives,
\begin{eqnarray*}
0 \fleche \mathcal{H}^{-1}(\ell_{C_\tau^D}) \fleche \mathcal{H}^{-1}(\ell_{G^D}) \fleche \mathcal{H}^{-1}(\ell_{(G/C_\tau)^D}) \fleche \mathcal{H}^{0}(\ell_{C_\tau^D}) \\
\fleche \mathcal{H}^{0}(\ell_{G^D}) \fleche \mathcal{H}^{0}(\ell_{(G/C_\tau)^D}) \fleche 0.\end{eqnarray*}
It then suffices to show that the map $ \mathcal{H}^{0}(\ell_{C_\tau^D}) \simeq \omega_{C_\tau^D}
\fleche \mathcal{H}^{0}(\ell_{G^D}) \simeq \omega_{G^D}$ vanishes.
Now this map is the reduction modulo $p$ of the one over $\mathcal O_C$, which is given by,
\[ \omega_{C_\tau^D} \fleche \omega_{G^D} \simeq (\mathcal O_C/p^r)^d,\]
and $C_\tau$ is killed by $p$, so $\omega_{C_\tau^D}$ is $p$-torsion; the previous map is therefore of the form $p^{r-1}\phi$, and since $r \geq 2$, it vanishes after reduction modulo $p$. We thus have the exact sequence, which is moreover $\mathcal O$-equivariant since all the morphisms between the group schemes are,
\[ \omega_{G^D} \fleche \omega_{(G/C_\tau)^D} \fleche \omega_{C_\tau^D} \fleche 0.\]
We therefore deduce, from the inequality of Remark \ref{remdeg} and Lemma \ref{lemdeg}, that
\[ \deg_\tau(C_\tau^D) = \deg(\omega_{C_\tau^D,\tau}) = v(\det(\pi_*)) \leq \Ha_\tau(G).\]
Note moreover that,
\[k_\tau(G) = k_\tau(\quotient{p^{-r}C_\tau}{C_\tau}).\]
Indeed, let $G \dans \mathcal{BT}^\mathcal O(\mathcal O_C)$ be such that $\Ha_\tau(G) < \frac{1}{p^f + 1}$, and let $C_\tau/\mathcal O_C$ be the subgroup given by the theorem.
The isogeny,
\[ 0 \fleche C_\tau \fleche G \fleche G/{C_\tau}\fleche 0,\]
induces,
\[\omega_{G^D,\tau'} \fleche \omega_{(G/C_\tau)^D,\tau'} \fleche \omega_{C_\tau^D,\tau'} \fleche 0.\]
Now $\omega_{G^D,\tau'}$ and $\omega_{(G/C_\tau)^D,\tau'}$ are $p$-torsion free (being $BT$s) and $\omega_{C_\tau^D,\tau'}$ is $p$-torsion, so after inverting $p$ we see that
$G$ and $G/C_\tau$ have the same signature; in particular, they have the same $k_\tau$ for every $\tau$.
But the following square at the level of crystals is commutative,
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] at (0,0)
{
\mathbb D(G)_\tau& &\mathbb D(G)_\tau^{(p^f)} \\
\mathbb D(G/C_\tau)_\tau & &\mathbb D(G/C_\tau)^{(p^f)}_\tau \\
};
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$V^f$} (m-1-3)
(m-1-1) edge node[auto] {$\pi_*$} (m-2-1)
(m-1-3) edge node[auto] {$\pi_*^{(p^f)}$} (m-2-3)
(m-2-1) edge node[auto] {$V^{'f}$} (m-2-3);
\end{tikzpicture}
\end{center}
And so, since $k_\tau(G) = k_\tau(G/C_\tau)$, we deduce the same diagram on the $q_\tau$-th exterior powers, with the divisions of $V^f$, and hence the commutativity of the following diagram modulo $p$,
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] at (0,0)
{
\det\omega_{G^D,\tau} & &\det\omega_{G^D,\tau}^{(p^f)} \\
\det\omega_{(G/C_\tau)^D,\tau} & &\det\omega_{(G/C_\tau)^D,\tau}^{(p^f)} \\
};
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$\widetilde\Ha_\tau(G)$} (m-1-3)
(m-1-1) edge node[auto] {$\pi_*$} (m-2-1)
(m-1-3) edge node[auto] {$\pi_*^{(p^f)}$} (m-2-3)
(m-2-1) edge node[auto] {$\widetilde\Ha_\tau(G/C_\tau)$} (m-2-3);
\end{tikzpicture}
\end{center}
We thus have the equality modulo $p$,
\[ \pi_*^{(p^f)}\circ \widetilde\Ha_\tau(G) \equiv \widetilde\Ha_\tau(G/C_\tau) \circ \pi_{*},\]
and hence, passing to determinants,
\[ p^{p^f\deg_\tau(C_\tau^D) + \Ha_\tau(G)} \equiv p^{\Ha_\tau(G/C_\tau) + \deg_\tau(C_\tau^D)} \pmod p.\]
Now, by hypothesis,
\[ p^f\deg_\tau(C_\tau^D) + \Ha_\tau(G) \leq (p^f+1)\Ha_\tau(G) < 1,\]
and therefore,
\[(p^f-1)\deg_\tau(C_\tau^D) + \Ha_\tau(G) = \Ha_\tau(G/C_\tau).\qedhere\]
\edem
\begin{rema}
We also have that if $p^f \deg_\tau(C_\tau^D) + \Ha_\tau(G) \geq 1$, then \[\Ha_\tau(G/C_\tau) \geq 1 - \deg_\tau(C_\tau^D) \geq 1-\Ha_\tau(G).\]
\end{rema}
\begin{theor}
\label{thrntors}
Let $p > q_\tau + 1$. Let $K/\mathcal O[1/p]$ be a valued extension. Let $G$ be a $\mathcal{BT}_{n+k_\tau}^\mathcal O(\mathcal O_K)$ of signature $(p_\tau,q_\tau)_\tau$.
Suppose that,
\[\Ha_\tau(G) < \frac{1}{p^{(n-1)f}}\min(\frac{1}{2},1+K_\tau - \frac{q_\tau}{p-1}).\]
Then there exists a sub-$\mathcal O/p^n$-module $C_\tau^n \subset G[p^n]$ of $G$.
Suppose moreover that \begin{equation}
\label{hypdeg2}
\tag{H3}
\frac{2q_\tau}{p-1} < 1 + K_\tau \quad \text{and} \quad \Ha_\tau(G) < \frac{1+K_\tau}{p^{(n-1)f}} - \frac{2q_\tau}{p^{nf} - p^{(n-1)f}}.
\end{equation}
Then,
\begin{enumerate}
\item $C_\tau^n(\mathcal O_{\overline K})$ is a free $\mathcal O/p^n\mathcal O$-module.
\item $C_\tau^n(\mathcal O_{\overline K})$ coincides with the kernel of the map $\alpha_{G,\tau, n-\frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G)}$.
\item We have,
\begin{eqnarray*} \sum_{i=1}^f \deg_{\sigma^i\tau}(G[p^n]/C_\tau^n)p^{f-i} &=& nK_\tau(p^f-1) + n\Ha_\tau(G) + (p^f-1)\left( \deg_\tau(C_\tau^{1,D}) + \dots + \deg_\tau(C_\tau^{n-1,D})\right) \\
& \leq& nK_\tau(p^f-1) + \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G).\end{eqnarray*}
Equivalently,
\[\Deg_\tau(C^n_\tau) = \sum_{i=1}^f \deg_{\sigma^i\tau}(C_\tau^n)p^{f-i} \geq n\sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G).\]
And hence in particular,
\[ \deg C_\tau^n = \sum_{i=1}^f \deg_{\sigma^i\tau}(C_\tau^n) \geq n\sum_{\tau'} \min(p_\tau,p_{\tau'}) - \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G).\]
\end{enumerate}
Note that,
\[ \Ha_\tau(G/C_\tau^n) = \Ha_\tau(G) + (p^f-1)\deg_\tau(C_\tau^{n,D}) \leq p^{nf}\Ha_\tau(G).\]
We also have the following properties,
\begin{enumerate}[(a)]
\item $C_\tau^n$ is a step of the $\tau$-Harder-Narasimhan filtration of $G[p^n]$.
\item $C_\tau^n$ is compatible with duality; if $D_\tau^n$ is the canonical subgroup of $G^D[p^n]$, then $D_\tau^n = C_\tau^{n,\perp}$.
\item $C_\tau^n[p^k] = C_\tau^k$ for all $k \leq n$.
\item $C_\tau^n/C_\tau^k$ is the canonical subgroup of rank $n-k$ of $p^{n-k}C_k/C_k$.
\end{enumerate}
\end{theor}
\begin{rema}
We would like hypothesis (\ref{hypdeg2}) not to be necessary; note that this is the case if the prime $p$ is large enough (with the same bound as for hypothesis (\ref{hypdeg})). If $p$ is large enough, the hypotheses of the theorem become simply $\Ha_\tau(G) < \frac{1}{2p^{(n-1)f}}.$
If moreover we assume that $\Ha_\tau(G) < \frac{1}{p^{(n-1)f}}\min(\frac{3}{8},1+K_\tau - \frac{2q_\tau}{p-1})$, the proof shows (easily) that $C_\tau^n$ is a step of the ``classical''
Harder-Narasimhan filtration of $G[p^n]$.
$G^D$ also satisfies the hypotheses of the theorem, since $\Ha_\tau(G) = \Ha_\tau(G^D)$.
\end{rema}
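Before the proof, let us make the induction mechanism explicit (a consistency check, not a substitute for the argument below): if $\Ha_\tau(G) < \frac{1}{p^{(n-1)f}}\min(\frac{1}{2},1+K_\tau - \frac{q_\tau}{p-1})$ with $n \geq 2$, then in particular $\Ha_\tau(G) < \frac{1}{2p^f} < \frac{1}{p^f+1}$, and Proposition (\ref{prorecdeg}) gives
\[ \Ha_\tau(G/C_\tau^1) \leq p^f\,\Ha_\tau(G) < \frac{1}{p^{(n-2)f}}\min(\frac{1}{2},1+K_\tau - \frac{q_\tau}{p-1}),\]
so the quotient again satisfies the hypothesis of the theorem, with $n$ replaced by $n-1$; this is what allows the subgroups $C_\tau^n$ to be constructed step by step.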
\dem
We construct $C_\tau^n$ by induction as in \cite{Far}. With the degree formula for $C_\tau^1$, and Proposition (\ref{prorecdeg}), again by induction, we obtain the degree formula for $C_\tau^n$, and hence the freeness assertion and point (2), thanks to Proposition \ref{pro14}.
For the step of the $\tau$-Harder-Narasimhan filtration, we proceed as in \cite{Far}: we know by Proposition (\ref{propoly}) that if,
\[ \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G) <\frac{\sum_{j=1}^f p^{f-j}\delta_{q_{\sigma^j\tau} = q_{\tau'}}}{2},\]
there is a breakpoint. In particular, if $n_\tau \geq 2$, we do have the breakpoint, since $\frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G) < 1 - \frac{1}{p-1}$. Moreover, by this last inequality, if there is a breakpoint, the corresponding group is exactly $C_\tau^n$, since the second part of Proposition (\ref{propoly}) applies.
It only remains to show that if $\mathcal P = \HN_\tau(G[p^n])$ (not renormalized, hence with integral breakpoint abscissas, and associated to the function $\Deg_\tau$),
and if $i < np_\tau < j$ denote breakpoint abscissas of $\mathcal P$, then,
\begin{equation}
\label{rupture}
\frac{np_\tau - i}{j-i}\mathcal P(j) + \frac{j - np_\tau}{j-i}\mathcal P(i) < \frac{n}{f}\sum_{j=1}^f p^{f-j}\min(p_\tau,p_{\sigma^j\tau}) - \frac{1}{f}\frac{p^{nf} - 1}{p^f - 1} \Ha_\tau(G).\end{equation}
As in \cite{Far}, reasoning on a picture, we see that if $i < np_\tau - 1$,
\[\frac{np_\tau - i}{j-i}\mathcal P(j) + \frac{j - np_\tau}{j-i}\mathcal P(i) < \frac{n}{f}\sum_{\tau'} \min(p_\tau,p_{\tau'}) - \frac{2n_\tau}{3f}.\]
And since,
\[ \frac{p^{nf}-1}{f(p^f-1)}\frac{1}{2p^{(n-1)f}} < \frac{2}{3f},\]
we deduce that if $i < np_\tau - 1$ (or if $j > np_\tau + 1$), there is a breakpoint at the abscissa $np_\tau$.
\begin{rema}
This reasoning, for these $i,j$, in fact also worked with the ``classical'' Harder-Narasimhan polygon $\HN_{\mathcal O}$, i.e. the one associated with the function $\deg$ rather than
$\Deg_\tau$.
\end{rema}
There remains the case $n_\tau = 1$, $i = np_\tau - 1$ and $j = np_\tau +1$.
We therefore suppose that $\HN_\tau(G[p^n])$ has breaks at the abscissas $np_\tau-1$ and $np_\tau +1$, corresponding to groups $D$ and $D'$.
Let us now try to prove (\ref{rupture}) with $i = np_\tau - 1$ and $j = np_\tau +1$, that is, with $\Ht_\mathcal O(D) = np_\tau -1$ and $\Ht_\mathcal O(D') = np_\tau +1$.
If the Harder-Narasimhan polygon $\HN_\tau(G[p^n])$ has a break at $np_\tau$, we are done (Proposition \ref{propoly}); so assume, for contradiction, that this is not the case.
We have,
\[ D(\mathcal O_C) \subset (\mathcal O/p^n\mathcal O)^{h}.\]
We write the sequence, exact on the generic fiber,
\[ 0 \fleche D[p^{n-1}] \fleche D \fleche p^{n-1}D\fleche 0,\]
where $p^{n-1}D$ denotes the schematic closure of $p^{n-1}D(\mathcal O_C)$; it is an $\mathcal O$-module of height $x \leq p_\tau -1$, since
$\Ht_\mathcal O(D) = np_\tau -1$ and $D[p^{n-1}]$ has height at least $(n-1)p_\tau$. We therefore have,
\[ D(\mathcal O_C) = (\mathcal O/p^n\mathcal O)^x \oplus N,\]
where $N$ is a (finitely generated) $\mathcal O/p^{n-1}\mathcal O$-module. Morally, the smaller $x$ is, the smaller the degree of $D$
(since $D$ then lies further inside the $p^{n-1}$-torsion of $G$).
We will show that $x$ is maximal, i.e. $x = p_\tau -1$.
First, let us bound $\deg D$ from below. Since we assumed that the polygon $\HN_\tau(G[p^n])$ has no break at $np_\tau$, this follows from
what precedes on the degree of the subgroup $C^n_\tau$ and from Figure \ref{HNbreak} below, where the dashed line represents the minimal polygon passing through the minimal value
of $\Deg_\tau(C^n_\tau)$ allowed by the theorem, in such a way that there is no break, and where $\mu_1,\mu_2$ are the slopes of the $\mu$-ordinary polygon
around $np_\tau$.
\begin{figure}[h]
\begin{center}
\caption{Minimal $\tau$-degree $\Deg_\tau$ of $D$.}
\label{HNbreak}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.5cm,y=0.7cm]
\draw[->,color=black] (1.93321476589,0.) -- (8.,0.);
\foreach \x in {1.5,2.,2.5,3.,3.5,4.,4.5,5.,5.5,6.,6.5,7.,7.5}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\clip(1.93321476589,-2.48344923044) rectangle (8.,9.44281939932);
\draw (5.5,7.)-- (7.,8.);
\draw (5.5,7.)-- (4.5,4.);
\draw [dash pattern=on 4pt off 4pt,domain=4:5.996845605649142] plot(\x,{(--8.80592903973-2.0658071613*\x)/-0.488648884338});
\draw [dash pattern=on 2pt off 2pt](5.5,7.)-- (5.5,0.);
\draw [dash pattern=on 2pt off 2pt](5.,0.)-- (4.99160688666,5.47482065997);
\draw [dash pattern=on 2pt off 2pt](6.,0.)-- (5.99684560565,7.33123040377);
\draw (2.95081967213,8.48493543759)-- (3.46721311475,8.48493543759);
\draw [dash pattern=on 4pt off 4pt] (2.96721311475,7.72740315638)-- (3.46721311475,7.74461979914);
\draw (3.58692551168,8.91238277446) node[anchor=north west] {$\HN_\tau^{\mu-ord}$};
\draw [dash pattern=on 2pt off 2pt] (5.50819672131,5.26542324247)-- (4.00819672131,5.26542324247);
\draw (3.7,7.5) node[anchor=north west] {$\HN_\tau^{\mu-ord}(np_\tau)$};
\draw (2.,5.8) node[anchor=north west] {$\HN_\tau^{\mu-ord}(np_\tau) - \delta$};
\draw [dash pattern=on 2pt off 2pt] (4.99337463583,3.16)-- (4.6,3.16);
\draw (1.87,3.7) node[anchor=north west] {$\HN_\tau^{\mu-ord}(np_\tau\!\!-\!\!1)\!\!-\!\!2\delta\!\!+\!\!\mu_1\!\!-\!\!\mu_2$};
\draw (5.39541706619,-0.14) node[anchor=north west] {$np_\tau$};
\draw (5.79458862552,-0.0879289891655) node[anchor=north west] {$np_\tau\!\!+\!\!1$};
\draw (4.7,-0.0879289891655) node[anchor=north west] {$np_\tau\!\!-\!\!1$};
\begin{scriptsize}
\draw [fill=qqqqff] (5.50819672131,5.26542324247) circle (1.5pt);
\draw[color=qqqqff] (5.8,5.1) node {$C_\tau^{min}$};
\end{scriptsize}
\end{tikzpicture}
\end{center}
\end{figure}
We therefore deduce, writing
\[\delta = \frac{p^{nf}-1}{p^f-1}\Ha_\tau(G)\]
for the bound on the degree given by the theorem, that,
\[\Deg_\tau D > \sum_{i=1}^f p^{f-i}\min(np_\tau-1,np_{\sigma^i\tau}) - 2\frac{p^{nf}-1}{p^f-1}\Ha_\tau(G) + \frac{1}{f},\]
and hence
\begin{equation}
\label{degD}
\deg D > \sum_{i=1}^f \min(np_\tau-1,np_{\sigma^i\tau}) - 2\frac{p^{nf}-1}{p^f-1}\Ha_\tau(G) + \frac{1}{f}.\end{equation}
Now, we also know that,
\[ \deg D \leq \deg D[p^{n-1}] + \deg p^{n-1}D,\]
and let us suppose that $x \leq p_\tau -2$.
We can then bound from above (the second inequality below being due to the fact that if $p_{\sigma^j\tau} \leq x < p_\tau$ then $(n-1)p_{\sigma^j\tau} < (n-1)p_\tau < np_\tau -1 - x$),
\begin{align*} \deg D &\leq \sum_{j=1}^f \min(np_\tau -1 -x,(n-1)p_{\sigma^j\tau}) + \min(x,p_{\sigma^j\tau}) \\
& \leq \sum_{j=1}^f \min(np_\tau -1,np_{\sigma^j\tau},(n-1)p_{\sigma^j\tau} + x) \\
& \leq np_\tau - 2 + \sum_{j=1}^{f-1} \min(np_\tau -1,np_{\sigma^j\tau}) \\
& \leq \sum_{j=1}^{f} \min(np_\tau -1,np_{\sigma^j\tau}) - 1
\end{align*}
which contradicts inequality (\ref{degD}), by the appendix (\ref{calculann1}).
We therefore have $x = p_\tau -1$, and we can thus write,
\[ D(\mathcal O_C) = (\mathcal O/p^n\mathcal O)^{p_\tau-1} \oplus N,\]
where $N$ is an $\mathcal O$-module killed by $p^{n-1}$, of $\mathcal O$-length $n-1$.
\begin{lemm}
We have the equality,
\[ D[p^{n-1}]= C^{n-1}_\tau.\]
\end{lemm}
\dem
We have,
\[\deg D \leq \deg D[p^{n-1}] + \deg(p^{n-1}D).\]
Now this last module is of $p$-torsion and of height $p_\tau -1$, so,
\[ \deg(p^{n-1}D) \leq \sum_i \min(p_\tau-1,p_{\sigma^i\tau}),\]
and therefore,
\[ \deg D[p^{n-1}] \geq \deg D - \sum_i \min(p_\tau-1,p_{\sigma^i\tau}).\]
Using the lower bound (\ref{degD}) on $\deg D$, we deduce in particular that,
\begin{align*}
\deg D[p^{n-1}] &> \sum_{i=1}^f \min(np_\tau-1,np_{\sigma^i\tau}) - 2\frac{p^{nf}-1}{p^f-1}\Ha_\tau(G) + \frac{1}{f} - \sum_i \min(p_\tau-1,p_{\sigma^i\tau}) \\
& = \sum_{i=1}^f \min((n-1)p_\tau,(n-1)p_{\sigma^i\tau}) - 2\frac{p^{nf}-1}{p^f-1}\Ha_\tau(G) + \frac{1}{f}.
\end{align*}
We can therefore apply Proposition (\ref{probij}) to $C^{n-1}_\tau$ and $D[p^{n-1}]$, since,
\[ \deg C^{n-1}_\tau > \sum_{i=1}^f \min((n-1)p_\tau,(n-1)p_{\sigma^i\tau}) - \frac{p^{(n-1)f}-1}{p^f-1}\Ha_\tau(G),\]
and one checks (see the appendix (\ref{calculann1})) that,
\[ \frac{p^{(n-1)f}-1}{p^f-1}\frac{1}{2p^{(n-1)f}} + 2 \frac{p^{nf}-1}{p^f-1}\frac{1}{2p^{(n-1)f}} - \frac{1}{f} \leq 1.\qedhere\]
\edem
Since $p^{n-1}D$ is an $\mathcal O$-module of height $p_\tau -1$,
\[ \Deg_\tau(p^{n-1}D) \leq \sum_{j=1}^f p^{f-j}\min(p_\tau-1,p_{\sigma^j\tau}).\]
We deduce (using the growth of $\Deg_\tau$ under deformation),
\begin{eqnarray*}\Deg_\tau(D) \leq \Deg_\tau(D[p^{n-1}]) + \Deg_\tau(p^{n-1}D) \leq\\
-(n-1)\Ha_\tau(G) - (p^f-1)\left( \deg_\tau(C_\tau^{1,D}) + \dots + \deg_\tau(C_\tau^{n-2,D})\right) \\ + \sum_{j=1}^f p^{f-j}[(n-1)\min(p_\tau,p_{\sigma^j\tau}) + \min(p_\tau - 1,p_{\sigma^j\tau})] \\
= -(n-1)\Ha_\tau(G) - (p^f-1)\left( \deg_\tau(C_\tau^{1,D}) + \dots + \deg_\tau(C_\tau^{n-2,D})\right) \\ + \sum_{j=1}^f p^{f-j}[n\min(p_\tau,p_{\sigma^j\tau}) - \delta_{p_{\sigma^j\tau} \geq p_\tau}]
\end{eqnarray*}
Applying this to $G^D$ and $D'^{\perp}$, we find that,
\begin{eqnarray*}\Deg_\tau(D') \leq \Deg_\tau(G[p^n]) - \sum_{j} p^{f-j}(nq_\tau -1) + \Deg_\tau(D^{'\perp}) \\
\leq -(n-1)\Ha_\tau(G^D) - (p^f-1)\left( \deg_\tau(C_\tau^{1,\perp,D}) + \dots + \deg_\tau(C_\tau^{n-2,\perp,D})\right) \\ + \sum_{j=1}^f p^{f-j}[np_{\sigma^j\tau} - nq_\tau + 1 + (n-1)\min(q_\tau,q_{\sigma^j\tau}) + \min(q_\tau - 1,q_{\sigma^j\tau})] \\
= -(n-1)\Ha_\tau(G^D) - (p^f-1)\left( \deg_\tau(C_\tau^{1,\perp,D}) + \dots + \deg_\tau(C_\tau^{n-2,\perp,D})\right) \\ + \sum_{j=1}^f p^{f-j}[np_{\sigma^j\tau} + np_\tau - (n-1)\max(p_\tau,p_{\sigma^j\tau}) - \max(p_\tau,p_{\sigma^j\tau}) + 1 - \delta_{q_{\sigma^j\tau} \geq q_\tau} ] \\
= -(n-1)\Ha_\tau(G^D) - (p^f-1)\left( \deg_\tau(C_\tau^{1,\perp,D}) + \dots + \deg_\tau(C_\tau^{n-2,\perp,D})\right) \\ + \sum_{j=1}^f p^{f-j}[n\min(p_\tau,p_{\sigma^j\tau}) + 1
- \delta_{q_{\sigma^j\tau} \geq q_\tau}]
\end{eqnarray*}
And we deduce that,
\begin{eqnarray*} \Deg_\tau(D) + \Deg_\tau(D') \leq \sum_{j=1}^f p^{f-j}\left(2n\min(p_\tau,p_{\sigma^j\tau}) + 1 - \delta_{q_{\sigma^j\tau} \geq q_\tau} - \delta_{p_{\sigma^j\tau} \geq p_\tau}\right) \\
- 2(n-1)\Ha_\tau(G) -(p^f-1)\left(\deg_\tau(C_\tau^{1,D}) +\deg_\tau(C_\tau^{1,\perp,D}) + \dots + \deg_\tau(C_\tau^{n-2,D}) +\deg_\tau(C_\tau^{n-2,\perp,D}) \right).\end{eqnarray*}
Note that $\deg_\tau(C_\tau^{j,\perp,D}) = \deg_\tau(C_\tau^{j,D})$.
Since \[\Deg_\tau(C^n_\tau) = n\sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - n\Ha_\tau(G) - (p^f-1)\left( \deg_\tau(C_\tau^{1,D}) + \dots + \deg_\tau(C_\tau^{n-1,D})\right),\]
to prove (\ref{rupture}) it suffices to see that,
\[ \frac{1}{2}\sum_{j=1}^f p^{f-j}(\delta_{q_{\sigma^j\tau}\geq q_\tau} +\delta_{q_{\sigma^j\tau}\leq q_\tau}-1) > \Ha_\tau + (p^f-1)\deg_\tau C^{n-1,D}_\tau,\]
that is, since $n_\tau =1$ (the worst case in the preceding formula),
\[ \frac{1}{2} > \Ha_\tau + (p^f-1)\deg_\tau C^{n-1,D}_\tau = \Ha_\tau(G/C^{n-1}_\tau).\]
But since we assumed $\Ha_\tau(G/C^{n-1}_\tau) \leq p^{(n-1)f}\Ha_\tau(G) < \frac{1}{2}$ in order to carry out the induction, we are done: there is a break of $\HN_\tau(G[p^n])$ at the abscissa $np_\tau$.
\edem
\begin{rema}
A priori, the subgroups of the canonical filtration only exist over $\mathcal O[1/p]$, but the fact that they correspond to a Harder-Narasimhan break
will allow us to descend them.
\end{rema}
We deduce the final theorem,
\begin{theor}
\label{thrfinO}
Let $G$ be a truncated $p$-divisible $\mathcal O$-module of level $n+k$ over $\Spec(O_K)$, of signature $(p_\tau,q_\tau)_\tau$, where $k = \max_\tau k_\tau$.
Suppose that \[p > \max\{ \frac{2q_\tau}{1+K_\tau} : \tau \dans \mathcal I, q_\tau \neq h\} + 1.\]
Suppose moreover that,
\begin{equation}
\label{hypfinale}
\tag{$H_n$}
^\mu\Ha(G) < \frac{1}{p^{(n-1)f}}\min(\frac{1}{2},1+K_\tau - \frac{2q_\tau}{p-1}), \forall \tau \in \mathcal I \text{ such that } q_\tau \notin \{0,h\}.\end{equation}
Then there exists a (unique) filtration, called the canonical filtration, $(\Fil_\tau G[p^n])_{\tau \in \mathcal I}$ of $G[p^n]$ by finite flat sub-$\mathcal O/p^n$-modules of
$G[p^n]$, whose inclusions are given by,
\[ \Fil_\tau G[p^n] \subset \Fil_{\tau'} G[p^n] \quad \text{if and only if} \quad p_\tau \leq p_{\tau'},\]
such that for every $\tau$,
\[\Ht_{\mathcal O} \Fil_\tau G[p^n] = np_\tau = nh-nq_\tau,\]
and for every $\tau$,
\[ \Deg_\tau(\Fil_\tau(G[p^n])) = \sum_{i=1}^f p^{f-i}\deg_{\sigma^i\tau}(\Fil_\tau G[p^n]) \geq n\sum_{i=1}^{f} \min(p_\tau,p_{\sigma^i\tau})p^{f-i} - \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G).\]
And thus in particular,
\[ \deg \Fil_\tau G[p^n] = \sum_{i=1}^f \deg_{\sigma^i\tau}(\Fil_\tau G[p^n]) \geq n\sum_{\tau'} \min(p_\tau,p_{\tau'}) - \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G).\]
Moreover, $\Fil_\tau G[p^n]$ coincides with the kernel of $\alpha_{G[p^n],\tau,\, n - \frac{p^{nf}-1}{p^f-1}\,{}^\mu\Ha(G)}$.
The step $\Fil_\tau(G[p^n])$ is moreover a step of the $\tau$-Harder-Narasimhan filtration of $G[p^n]$, and the canonical filtration is therefore compatible with duality,
with $p^k$-torsion ($k<n$) and with quotients.
\end{theor}
\begin{rema}
Hypothesis (\ref{hypfinale}) becomes simply $^\mu\Ha(G) < \frac{1}{2p^{(n-1)f}}$ when $p$ is large enough.
\end{rema}
\dem
The existence of the subgroups $\Fil_\tau(G[p^n])$ is guaranteed by theorem (\ref{thrntors}), together with the propositions on the degrees, the heights, the Hodge-Tate map and the
$\tau$-Harder-Narasimhan filtration. It therefore only remains to prove the inclusions, but this follows from Proposition \ref{probij}, cf. \cite{Bij}, since we have the bound,
\[ \frac{p^{nf} - 1}{p^f-1}\Ha_\tau(G) < \frac{2}{3},\]
and thus if $p_\tau < p_{\tau'}$ one checks that
\[ \deg\Fil_\tau(G[p^n]) + \deg\Fil_{\tau'}(G[p^n]) > \HN^{\mu-ord}(np_\tau) + \HN^{\mu-ord}(np_{\tau'}) - \frac{4}{3},\]
so that the proposition guarantees $\Fil_\tau(G[p^n]) \subset \Fil_{\tau'}(G[p^n])$. In the case where $p_\tau = p_{\tau'}$ with $\tau \neq \tau'$ it suffices to check that,
\[ \deg\Fil_\tau(G[p^n]) + \deg\Fil_{\tau'}(G[p^n]) > 2\HN^{\mu-ord}(np_\tau) -2,\]
but the same lower bound as before applies, and therefore $\Fil_\tau(G[p^n]) =\Fil_{\tau'}(G[p^n])$.
\edem
\section{Application to families}
\label{sect9}
We fix a prime number $p$, $\mathcal O$ the ring of integers of an unramified extension of $\QQ_p$, and a signature $(p_\tau,q_\tau)_\tau$ such that,
\[p > \max\{ \frac{2q_\tau}{1+K_\tau} : \tau \dans \mathcal I, q_\tau \neq h\} + 1.\]
\begin{theor}
\label{thrfam}
Let $K$ be a complete discretely valued extension of $\QQ_p$ and let $\mathfrak X$ be a $\Spf(O_K)$-formal scheme, topologically of finite type, without $p$-torsion, and reduced.
Let $G \fleche \mathfrak X$ be a Barsotti-Tate $\mathcal O$-module of signature $(p_\tau,q_\tau)_\tau$, truncated of level $r > \max_\tau k_\tau +n$.
Set \[\eps_n = \frac{1}{p^{(n-1)f}}\min(\frac{1}{2},1+K_\tau - \frac{2q_\tau}{p-1}), \forall \tau \in \mathcal I \text{ such that } q_\tau \notin \{0,h\}.\]
Let $U = \mathfrak X^{rig}_{ord}(\overset{\circ}{\eps_n})$ be the strict neighborhood of the ordinary locus of $\mathfrak X^{rig}$ on which the $\mu$-Hasse invariant is strictly smaller
than $\eps_n$. Then there exists on $U$ a filtration by sub-$\mathcal O$-modules $(\Fil_\tau G^{rig}[p^n])_\tau$ of $G^{rig}[p^n]_{|U}$ such that,
\begin{enumerate}
\item $\Fil_\tau G^{rig}[p^n]$ is locally (for the étale topology) isomorphic to $(\mathcal O/p^n\mathcal O)^{p_\tau}$.
\item If $p_\tau \leq p_{\tau'}$ there is an inclusion $\Fil_\tau G^{rig}[p^n] \subset \Fil_{\tau'}G^{rig}[p^n]$.
\item At every point of $U$, the fiber of $\Fil_\tau G^{rig}[p^n]$ coincides with the step of height $np_\tau$ of the $\tau$-Harder-Narasimhan filtration of the fiber of $G^{rig}[p^n]$.
\end{enumerate}
The above filtration is invariant under $\End_\mathcal O(G)$, and if moreover $G$ carries a polarization compatible with $\mathcal O$,
$\lambda : G \overset{\simeq}{\fleche} G^D$, such that $\lambda^D = \eps\lambda$ for $\eps \dans \ZZ_p^\times (\mathcal O^\times ?)$, then the filtration moreover satisfies
that each $\Fil_\tau G^{rig}[p^n]$ is totally isotropic under the pairing $G^{rig}[p^n] \times G^{rig}[p^n] \fleche \mathcal O/p^n\mathcal O(1)$ (or $\ZZ_p$?).
\end{theor}
\dem
On utilise le théorème \ref{thrfinO} et le théorème 4 de \cite{FarHN} qui nous permettent de mettre en famille chacun des groupes $\Fil_\tau(G[p^n])$ sur un éclatement formel
admissible $\mathfrak Y$ de $\mathfrak X$. Il faut prouver néanmoins que cela reste une filtration, puisqu'a priori on a plusieurs filtrations de Harder-Narasihman en jeu,
mais on peut refaire comme dans la démonstration du théorème 4 de\cite{FarHN} :
Soit $\tau,\tau'$ tels que $p_\tau \leq p_{\tau'}$, alors considérons le morphisme de $\mathfrak Y$-schémas en groupes,
\[ \Fil_\tau G[p^n] \fleche G[p^n] \fleche G/\Fil_{\tau'}G[p^n].\]
Il est nul en tout point de $\mathfrak Y^{rig} = \mathfrak X^{rig}$, mais comme $\Fil_\tau G[p^n]$ et $G/\Fil_{\tau'}G[p^n]$ sont localement libres sur $\mathfrak Y$ qui est réduit,
le morphisme est nul.
Donc on a bien la filtration voulue.
\edem
\subsection{Deformations of Frobenius}
We keep the notation of the main theorem \ref{thrfinO}.
\begin{prop} Let $K/\QQ_p$ be a finite extension, and let $G/\Spec(O_K)$ be a truncated $p$-divisible $\mathcal O$-module of level $k+f$.
For every $\tau$, write
\[ r_\tau = |\{\tau' \dans \mathcal I : q_{\tau'} \leq q_\tau\}|.\]
Suppose
\begin{equation}
\label{hypfin}
\tag{$H_f$}
^\mu\Ha(G) < \frac{1}{p^{(f-1)f}}\min(\frac{1}{2},1+K_\tau - \frac{2q_\tau}{p-1}), \forall \tau \in \mathcal I \text{ such that } q_\tau \notin \{0,h\}.
\end{equation}
Then the subgroup $K_1 = \sum_\tau \Fil_\tau (G[p^f])[p^{r_\tau}]$ deforms the kernel of $F^f$ in $G[p^f] \otimes \overline \FP$, that is,
\[ K_1 \otimes_{O_K} \overline \FP = \Ker F^f.\]
\end{prop}
\dem
Let $\tau \dans \mathcal I$. Let $u$ be a variable and let $(M,\phi)$ denote the Kisin module of $H_\tau^1 \subset G[p]$ over $k[[u]]$.
We write $\phi^\#$ for the linearization of $\phi$, and we decompose $M = \bigoplus_\tau M_\tau$. By the theory of elementary divisors, there exists a basis
$(e_1,\dots,e_{p_\tau})$ of $M_\tau$ such that $(u^{a_1}e_1,\dots,u^{a_{p_\tau}}e_{p_\tau})$ is a basis of $\phi^\#(M_{\sigma^{-1}\tau})$, where $0 \leq a_i \leq e$.
Then,
\[ \deg_\tau H_\tau^1 = \deg_\tau (M,\phi) = \frac{1}{e}\sum_{i=1}^{p_\tau}a_i.\]
Now we know that \[\deg_\tau H_\tau^1 > p_\tau - \Ha_\tau.\]
We deduce that $a_i \geq e(1-\Ha_\tau)$ for every $i$, and therefore $u^{e(1-\Ha_\tau)}$ divides $\phi^\#_\tau$.
We also know that for every $\tau'$ such that $q_{\tau'} \leq q_\tau$ we have $\deg_{\tau'} H_\tau \geq p_\tau - \Ha_\tau(G)$, and therefore $u^{e(1-\Ha_\tau)}$ divides $\phi^\#_{\tau'}$
for every such $\tau'$. Hence, looking at the Kisin module $(\widetilde M,\widetilde \phi)$ of $H_\tau^f$, the matrix of
$\widetilde \phi^f \pmod u$ (which corresponds to the Dieudonné module of $H_\tau^f \otimes \overline \FP$) is divisible by $p^{r_\tau}$, and therefore
$H_\tau^f[p^{r_\tau}] \otimes \overline \FP \subset \Ker F^f$.
We thus deduce that $K_1 \otimes \overline \FP \subset \Ker F^f$ (inside $G[p^f] \otimes \overline \FP$); but they have the same height.
\edem
\begin{rema}
The same result remains true under hypothesis ($H_{nf}$) with \[K_n = \sum_\tau \Fil_\tau (G[p^f])[p^{nr_\tau}] \subset G[p^{nf}],\]
which then deforms $\Ker F^{nf}$ in the preceding sense.
The proof suggests that the result is probably true only modulo $p^{1-\max_\tau{\Ha_\tau}}$, since each $\phi^\#_\tau$ vanishes modulo $u^{e(1-\Ha_\tau)}$.
Unfortunately, it does not seem clear, to my knowledge, whether it is possible to relate, for a group scheme $H/\Spec(O_K)$, $H \otimes O_K/p^w$ ($w \leq 1$) with
$M \otimes W(k)[[u]]/u^{ew}$. If this is possible, the preceding proof should adapt.
The same result with $K/\QQ_p$ an arbitrary extension (e.g. $K = C$) is still true. One should probably be able to give a similar proof replacing $W(k)[[u]]$ by $A_{cris}$; unfortunately it seems that a theory of Breuil-Kisin modules over $A_{cris}/\mathcal O_C$ as presented in \cite{FarKis,Lau} only concerns $p$-divisible groups, possibly truncated, which the steps of the canonical filtration are not...
Nevertheless, the result can be obtained by a families argument.
\end{rema}
\begin{prop}
The preceding result still holds for any valued extension $K/\QQ_p$ (in particular $K = C$).
\end{prop}
\dem
Let $w' > {^\mu}\Ha(G)$ still satisfying $(H_f)$, let $\mathcal X_{w'}$ be the open subset of the rigid space $X^{rig}$ (where $X$ is the presentation of the stack $\mathcal{BT}_{k+f}$)
on which ${^\mu}\Ha < w'$, and let $G[p^f]$ be the $p^f$-torsion of the universal group.
Since $w'$ satisfies $(H_f)$, by theorem \ref{thrfam} there exists a filtration of $G[p^f]$ by finite flat group schemes, and we therefore have on $X_{w'}$ a finite flat $\mathcal O$-module,
\[K_1 = \sum_\tau \Fil_\tau (G[p^f])[p^{r_\tau}] \subset G[p^f].\]
Equivalently, there exists an open subset $\mathfrak X_{w'}$ of an admissible formal blow-up of $\mathfrak X$ on which $K_1$ extends to a finite flat group scheme.
The group over $\Spec(O_K)$ of the statement, which thus satisfies ${^\mu}\Ha < w'$, defines an $O_K$-point of $\mathfrak X_{w'}$, and hence an $\overline{\FP}$-point $\overline x$ of
$\mathfrak X_{w'}\otimes \overline{\FP}$. Now on $\mathfrak X_{w'}\otimes \overline{\FP}$ we have two finite flat group schemes, $K_1 \otimes \overline{\FP}$ and $\Ker F^f$, the
kernel of the $f$-fold iterated Frobenius of $G[p^f]\otimes \overline{\FP}$, which we moreover know to be equal on the reduction to $\overline{\FP}$ of the points of
$\mathcal X_{w'}(\overline{\QQ_p})$. Since the reduction $\mathcal X_{w'}(\overline {\QQ_p}) = \mathfrak X_{w'}(\mathcal O_{\overline{\QQ_p}}) \fleche \mathfrak X_{w'}(\overline{\FP})$
is surjective, we conclude that
$(K_1)_{\overline x} = (\Ker F^f)_{\overline x}$.
\edem
\begin{rema}
One can check (already in the $\mu$-ordinary case) that $K_n$ is not a step of the Harder-Narasimhan filtration of $G[p^{nf}]$.
\end{rema}
\section{Introduction}
The unprecedented observational data from compact-object mergers in recent years have confirmed the general model of the synthesis of heavy elements via rapid neutron captures \cite{Abbott2017a,Abbott2017b,kilonova-data,Chornock2017}. However, details required for the interpretation of the correlated ultraviolet, optical and infrared electromagnetic emissions, also known as a kilonova (for a review see \cite{Metzger}), are still unclear. For example, accurate kilonova modeling relies on knowledge of the fraction of mass ejected during the event, its opacity, and the radioactive decay of freshly produced lanthanide and actinide nuclei \cite{Kasen-nature,Kasen-Barnes}.
Nucleosynthesis involving the production of heavy and superheavy nuclei
is expected to take place at very high temperatures of the order of a
few gigakelvin (GK). The {\it transuranic} elements in particular are
considered to be produced in the $r$-process, which occurs in
environments where neutron captures are faster than $\beta$ decays.
The conditions during the $r$-process, with neutron densities
$\rho_n \sim 10^{20}\,{\rm cm}^{-3}$ and temperatures $T \sim 10^{9}$ K,
are explosive \cite{bertulanibook}.
The possible sites for the r-process are considered
to be \cite{bertulanibook} (a) neutronized atmosphere above the
proto-neutron star in
a Type II supernova, (b) neutron-rich jets from supernovae or neutron
star mergers, (c) inhomogeneous Big Bang, (d) He/C zones in Type II
supernovae, (e) red giant He flash, (f) spallation neutrons in He
zone (g) neutrino driven wind from freshly born neutron stars and
(h) outflows from black hole accretion discs originated in compact object mergers or collapsars (for recent reviews of the possible r-process sites see Cowan et al \cite{Cowan2021-sites} and C{\^o}t{\'e} et al \cite{Cote2019}).
The abundance of elements is found through a network \cite{network,Skynet}
of coupled differential equations involving nuclear reaction rates at elevated
temperatures. Theory and models play an important role in determining the
latter for neutron-rich nuclei which cannot be measured in terrestrial laboratories.
The $r$-process nucleosynthesis path runs along highly unstable,
exotic, neutron-rich nuclei and in principle does not involve alpha
emitters. However, once heavy neutron-rich nuclei in the region with Z $>$ 82
are formed, and once further neutron captures are depleted
(i.e. after $r$-process freeze-out), those nuclei decay by different,
partly competing modes (e.g. beta decay, alpha decay, fission). This
process can even lead to the formation of nuclei in the actinide region.
The nuclei studied in this work are part of the mass region (A $>$ 208) where
several alpha emitters are found \cite{martin,zhao}.
Thus, it is not only the photo-dissociation
and neutron capture cross sections but also fission (spontaneous and induced)
and the decay rates which are important
for the abundance evolution. The explosive conditions in supernovae and neutron star mergers \cite{thieleman,arcones,albino} leading to considerably high temperatures could result in nuclei existing in excited states. Though the possible influence of these nuclear thermal excitations
is taken into account in the production reactions as well as in
their reverse reactions, with libraries publicly available for the scientific community (e.g. \cite{Reaclib,Starlib,bruslib}), the same is not true in the
case of $\alpha$ decay. These decay rates,
entering as an input to the network calculations, are taken to be the
ground state (or terrestrial) half-lives \cite{network}. However, one must note that for high ambient temperatures,
the population factor for the excited energy levels of a nucleus
is large. These are thermal excitations and one must take into account the
possibility of $\alpha$ decay of thermally excited nuclei. This is in particular quite important for the $r$-process nucleosynthesis
where the closed neutron shells present waiting points due to the fact that
it takes a long time for the successive $\beta$ decays, which are slow at the shell closures, to occur
and allow progression through higher N nuclei. In section \ref{structure}, we will see that
the $\alpha$ decays of parent nuclei producing daughter nuclei at the shell
closures display a stronger temperature dependence with increased decay
rates at higher temperatures.
Apart from the paper of Perrone and Clayton \cite{clayton}, published
in 1970, there exists no estimate of the possible effects of temperature on
$\alpha$ decay half-lives. However, since data on the excited levels of nuclei
were scarce about 50 years ago, those calculations were
performed assuming a continuum of states described by the available density
of states. As we will see, the latter assumption leads to a very large
overestimate of the enhancement in the decay rate due to temperature.
In the present work, we investigate the temperature dependence of the
$\alpha$ decay rates relevant for the $r$-process nucleosynthesis. The
calculations are performed within two different approaches: (i) a statistical
model which makes use of the experimentally measured excited levels and
an empirical decay law (fitted to data) in the absence of available data
on the half-lives and (ii)
a theoretical model which treats the $\alpha$ decay as a semiclassical
tunneling of the $\alpha$ particle through the barrier created by the
interaction of the $\alpha$ and the daughter nucleus which exist inside
the parent in the form of a preformed cluster. The latter calculation is
performed using a density dependent folding model which has been reasonably
successful in reproducing the measured $\alpha$ decay half-lives \cite{kelkar2,JhoanPRC}. The formalism is presented in sections \ref{formalism} and \ref{T-dependence}. In section \ref{structure} we connect to
shell closures, and in section \ref{results} we discuss the results.
Finally, we summarize our findings in section \ref{conclusions}.
\section{Alpha decay formalism}
\label{formalism}
One of the most successful achievements of the quantum theory is the
explanation of the $\alpha$-decay of radioactive nuclei as a tunneling problem.
This approach was developed independently by Gamow \cite{gamow} and by
Gurney and Condon \cite{condon} in the late twenties.
Though $\alpha$-decay has been studied since then within different
quantum mechanical approaches
\cite{renreview}, semiclassical approaches based on the tunneling of an
$\alpha$ particle through the potential barrier created by its interaction
with the daughter nucleus produced in the decay are some of the most popular
and widely used methods for calculating half-lives.
The interaction potential between the $\alpha$ ($^4$He nucleus) and the
daughter nucleus, and the $Q$-value, which is usually taken to be the
energy of the tunneling $\alpha$, play the main role in determining the
tunneling probability and hence the half-life.
Using the JWKB approximation \cite{froman1},
different semiclassical approaches lead to the same expression for the $\alpha$-decay width \cite{kelkar2}
\begin{equation}
\Gamma=P_{\alpha}\frac{\hbar^2}{2\mu}\left[\int_{r_1}^{r_2}{\frac{dr}{\kappa(r)}}\right]^{-1}\exp\left[-2\int_{r_2}^{r_3}{\kappa(r)dr}\right]\label{width}
\end{equation}
with the so-called wave number $\kappa(r)=\sqrt{\frac{2\mu}{\hbar^2}|V(r)-Q|}$
and $\mu$ the reduced mass of the daughter-$\alpha$ system.
The classical turning points $r_1$, $r_2$ and $r_3$ are obtained by solving
the equation $V(r)= Q$ where $Q$ is the energy of the tunneling
$\alpha$-particle. The factor in front of the exponential arises due to
the normalization of the bound state wave function in the region between
$r_1$ and $r_2$. The exponential factor is the penetration probability.
The $\alpha$-decay half-life of an isotope is evaluated as
\begin{equation}
t_{1/2}=\frac{\hbar \ln 2}{\Gamma}\, .\label{halflife}
\end{equation}
Since the tunneling decay assumes the existence of a preformed cluster of the
$^4$He and daughter nucleus inside the decaying parent nucleus, one must
include a preformation probability $P_{\alpha}$ in the expression for
half-life. This factor, in principle, can be expressed as an overlap between
the wave functions of the parent nucleus and the decaying-state wave function
describing the $\alpha$-particle coupled to the daughter nucleus.
Such a microscopic undertaking is still considered a difficult task and a
phenomenological way to determine $P_{\alpha}$ is simply taking the ratio
of the theoretical and experimental half-lives, such that,
$P_{\alpha}={t_{1/2}^{theory}}/{t_{1/2}^{exp}}$, where, $t_{1/2}^{theory}$
is evaluated using $\Gamma$ from Eq. (\ref{width}) but with $P_{\alpha} = 1$.
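For concreteness, the numerical evaluation of Eqs. (\ref{width}) and (\ref{halflife}) can be sketched as follows (a minimal Python illustration, not the code used for the tables; the potential $V(r)$, the $Q$-value and the turning points are supplied by the user, and the integrable square-root singularities of $1/\kappa$ at the turning points are left to the quadrature routine):
\begin{verbatim}
import math
from scipy.integrate import quad

HBARC = 197.3269804       # hbar*c in MeV fm
HBAR_S = 6.582119569e-22  # hbar in MeV s

def alpha_width(V, Q, mu_c2, r1, r2, r3, P_alpha=1.0):
    """JWKB decay width Gamma of Eq. (width), in MeV.

    V          : callable, total potential V(r) in MeV (r in fm)
    Q          : energy of the tunneling alpha in MeV
    mu_c2      : reduced mass of the daughter-alpha system times c^2, MeV
    r1, r2, r3 : classical turning points solving V(r) = Q, in fm
    """
    def kappa(r):   # wave number in fm^-1
        return math.sqrt(2.0 * mu_c2 * abs(V(r) - Q)) / HBARC
    norm = quad(lambda r: 1.0 / kappa(r), r1, r2)[0]   # normalization, fm^2
    action = quad(kappa, r2, r3)[0]                    # barrier integral
    return P_alpha * (HBARC**2 / (2.0 * mu_c2)) * math.exp(-2.0 * action) / norm

def half_life(Gamma):
    """t_1/2 = hbar ln2 / Gamma, Eq. (halflife); Gamma in MeV, result in s."""
    return HBAR_S * math.log(2.0) / Gamma
\end{verbatim}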
The total potential between the $\alpha$ and the daughter nucleus is
typically written as a function of the distance between their centers of mass
as,
\begin{equation}
V(r)=V_n(r)+V_C(r)+\frac{\hbar^2 (l+1/2)^2}{2 \mu r^2}\label{potential}
\end{equation}
where $V_n(r)$ and $V_C(r)$ are the nuclear and Coulomb potentials,
respectively.
The last term in Eq. (\ref{potential}) represents the Langer-modified
centrifugal potential \cite{langer}, which must be used in conjunction with
the JWKB approximation. Some of the calculations presented in this work will
be performed within the density dependent double folding model (DFM) which
is based on realistic nucleon-nucleon interactions
and has been reasonably successful in
reproducing the experimental half-lives. The details of this potential can be found in \cite{JhoanPRC,DiegoJhoan2022}. Here we describe it briefly. In the DFM, the
nucleus-nucleus interaction is related to the NN interaction by folding an effective NN interaction over the density distribution of the two nuclei. The folded nuclear potential is written as
\begin{equation}\label{eq:V_Double-Folding}
V_n(\mathbf{r})= \lambda \int \mathrm{d} \mathbf{r}_{1} \mathrm{d} \mathbf{r}_{2} \,\rho_{\alpha}\left(\mathbf{r}_{1}\right)v_N\left(|\mathbf{s}|=\left|\mathbf{r}+ \mathbf{r}_{2}-\mathbf{r}_{1}\right|\right) \rho_{d}\left(\mathbf{r}_{2}\right)\,,
\end{equation}
where $\rho_i$ ($i=$ $d$, $\alpha$) are the densities of the alpha and the daughter nucleus in a decay, and $v_N(|\mathbf{s}|)$ is the nucleon-nucleon (NN) interaction (see \cite{DiegoJhoan2022} for the figure with details). The matter density distribution of the heavy daughter is calculated as
\begin{equation}
\rho(r)=\frac{\rho_{0}}{1+\exp \left(\frac{r-R}{a}\right)}\,,
\end{equation}
where $\rho_0$ is obtained by normalizing $\rho(r)$ to the mass number, $\int\rho(\mathbf{r})\,{\rm d}\mathbf{r}=A$, and the constants are given as $R=1.07A^{1/3}$ fm and $a=0.54$ fm. The alpha or $^4$He density distribution is given using a standard Gaussian form \cite{satchler}, namely,
\begin{equation}
\rho_{\alpha}(r) = 0.4229\exp\left(-0.7024\,r^2\right)\,.
\end{equation}
We use the popular choice of the effective NN interaction which is based on the M3Y-Reid-type soft core potential,
\begin{equation}
v_N\left(|\mathbf{s}|\right)=7999 \frac{\exp \left(-4\left|\mathbf{s}\right|\right)}{4\left|\mathbf{s}\right|}-2134 \frac{\exp \left(-2.5\left|\mathbf{s}\right|\right)}{2.5\left|\mathbf{s}\right|}\, + \,J_{00}\delta(\mathbf{s}),
\end{equation}
where $|\mathbf{s}|=\left|\mathbf{r}+\mathbf{r}_{2}-\mathbf{r}_{1}\right|$ is the distance between a nucleon in the daughter nucleus and a nucleon in the alpha. The above NN interaction consists of a short-ranged repulsive part and a long-ranged attractive one, in addition to the zero-range contribution $J_{00}\delta(\mathbf{s})$ with $J_{00}=-276(1-0.005\,E/A_c)$. The latter is the so-called knock-on exchange term which takes into account the antisymmetrization of identical nucleons in the alpha and the daughter nucleus. It represents a kind of nonlocality in the DFM potential and in order to avoid double counting, is usually not included in the calculation if one uses nonlocal nuclear potentials \cite{DiegoJhoan2022}. The strength of the nuclear potential, $\lambda$, is deduced by requiring the Bohr-Sommerfeld quantization condition to be satisfied \cite{JhoanPRC}. The Coulomb potential, $V_C(r)$, is obtained in a similar way with the matter densities of the alpha and the daughter replaced by their charge densities (which have the same form as above but are normalized to the number of protons).
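Since the densities are spherically symmetric, the six-dimensional folding integral in Eq. (\ref{eq:V_Double-Folding}) reduces, via the convolution theorem, to one-dimensional Fourier-Bessel transforms. The following sketch (a minimal Python illustration under these assumptions; the strength $\lambda$, fixed in practice by the Bohr-Sommerfeld condition, is left as a free parameter) assembles $V_n(r)$ from the ingredients given above:
\begin{verbatim}
import math
from scipy.integrate import quad

def j0(x):                      # spherical Bessel function j_0
    return 1.0 if x == 0.0 else math.sin(x) / x

def rho_alpha(r):               # Gaussian 4He density, integrates to 4
    return 0.4229 * math.exp(-0.7024 * r * r)

def make_rho_daughter(A):       # two-parameter Fermi density, normalized to A
    R, a = 1.07 * A**(1.0/3.0), 0.54
    raw = lambda r: 1.0 / (1.0 + math.exp((r - R) / a))
    rho0 = A / (4.0 * math.pi * quad(lambda r: r*r*raw(r), 0.0, R + 15.0)[0])
    return lambda r: rho0 * raw(r)

def v_m3y(s):                   # finite-range M3Y-Reid part, in MeV
    s = max(s, 1e-9)            # guard the integrable 1/s singularity
    return (7999.0 * math.exp(-4.0*s)/(4.0*s)
            - 2134.0 * math.exp(-2.5*s)/(2.5*s))

def ft(f, q, rmax=25.0):        # 3-D Fourier transform of a radial function
    return 4.0*math.pi*quad(lambda r: r*r*j0(q*r)*f(r), 0.0, rmax, limit=200)[0]

def v_folded(r, A_d, E_over_Ac=0.0, lam=1.0, qmax=8.0):
    """Double-folded potential V_n(r) of Eq. (V_Double-Folding), in MeV;
    J00 is the zero-range knock-on exchange term."""
    rho_d = make_rho_daughter(A_d)
    J00 = -276.0 * (1.0 - 0.005 * E_over_Ac)
    fq = lambda q: (q*q * j0(q*r) * ft(rho_alpha, q) * ft(rho_d, q)
                    * (ft(v_m3y, q) + J00))
    return lam / (2.0 * math.pi**2) * quad(fq, 0.0, qmax, limit=200)[0]
\end{verbatim}
The Coulomb part $V_C(r)$ is obtained analogously with the charge densities, and the Langer-modified centrifugal term of Eq. (\ref{potential}) is added at the end.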
The angular momentum, $l$,
carried by the alpha particle must satisfy the following spin-parity
selection rules,
\begin{equation}
|J_p-J_d| \leq l \leq |J_p+J_d|\quad\mathrm{and}\quad \pi_d=\pi_p(-1)^l \label{selectionrules}
\end{equation}
where ($J_p$, $\pi_p$) and ($J_d$, $\pi_d$) are the (spin, parity) of the
parent and daughter nuclei, respectively.
In Table \ref{halflivesdoublefoldingmodel} we present the half-lives for
some nuclei using the DFM.
We examine transitions for which the alpha particle has the minimum
angular momentum value, $l_{min}$,
satisfying the selection rules \eqref{selectionrules}. For the
decays considered in Table \ref{halflivesdoublefoldingmodel}, $l_{min} = 0$.
The experimental half-lives and the corresponding
preformation factors are also listed in Table \ref{halflivesdoublefoldingmodel}.
The theoretical values obtained are close to some others found in the
literature \cite{bairenroepke}.
\begin{table}[h]
\renewcommand{\arraystretch}{1.5}
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{|p{2.1cm}|p{2.1cm}|p{2.1cm}|p{2.5cm}|p{2cm}|p{2.5cm}|}\hline
Isotope &Q-Value & $t_{1/2}^{exp}$ & $t_{1/2}^{\rm[DFM]}$ &
$P_{\alpha}{\rm [DFM]}$&$t_{1/2}^{\rm [UDL]}$ \\
&[MeV] & [s] & [s] & & [s] \\
\hline
$_{84}^{212}$Po & 8.954 & $2.9 \times 10^{-7}$ &$6.49 \times 10^{-8}$ & 0.22 & $ 1.57\times 10^{-7}$ \\
$_{86}^{214}$Rn & 9.208 & $2.7\times 10^{-7}$ & $7.91\times 10^{-8}$ & 0.29 & $2.06\times 10^{-7}$ \\
$_{87}^{215}$Fr & 9.540 & $8.6\times 10^{-8}$ & $2.9\times 10^{-8}$ & 0.33 & $7.10\times 10^{-8}$\\
$_{88}^{216}$Ra & 9.526 & $1.8\times 10^{-7}$ & $6.86\times 10^{-8}$ & 0.37 & $1.86\times 10^{-7}$ \\
$_{89}^{217}$Ac & 9.832 & $6.9\times 10^{-8}$ & $2.98\times 10^{-8}$ & 0.43& $7.67\times 10^{-8}$ \\
\hline
\end{tabular}}
\caption{Comparison of the alpha decay half-lives evaluated using the double
folding model (DFM) and the universal decay law (UDL) of Eq. (\ref{universallaw})
with experiment.
The phenomenological preformation factors,
$P_{\alpha} = {t_{1/2}^{\rm [DFM]}}/{t_{1/2}^{exp}}$, obtained from the DFM
half-lives are also shown.
Here, $t_{1/2}^{\rm [DFM]}$
is evaluated using $\Gamma$ from Eq. (\ref{width}) but with $P_{\alpha} = 1$.
}\label{halflivesdoublefoldingmodel}
\end{table}
The double folding model calculations can in principle be improved with the
inclusion of deformation and nonlocalities in the interaction potential
\cite{JhoanPRC,DiegoJhoan2022}.
However, the objective of the present work is to
perform a comparative study of approaches for
half-lives measured on earth and in a hot astrophysical environment and
hence it suffices to perform calculations within a model which can reproduce
alpha decay half-lives reasonably well.
Indeed, we shall also use an empirical formula (a universal decay law (UDL)
for $\alpha$ and cluster decay, obtained from fits to extensive data)
for the half-lives
calculated within the statistical approach (to be explained in the next
section) since (i) the UDL gives the right order
of magnitude estimate of half-lives and (ii) it would be quite a tedious
undertaking to evaluate the half-lives of several excited states within the
double folding model (DFM) without any significant advantage.
Such universal decay laws are usually obtained by starting with an
analytical expression \cite{BookBeisser}
which is based on the assumption of a rectangular
well for the nuclear potential and a point-like Coulomb potential between the
decay products of the radioactive nucleus. The constants appearing in such
an expression are then assumed to be free parameters and fitted to an
extensive set of data.
The latter compensates for the simplistic assumptions made
in the derivation of the empirical formula and provides a useful expression
depending on the number of nucleons and the $Q$-value of the decay.
We use the following UDL obtained in \cite{Qi,QiPRL}.
\begin{equation}\label{universallaw}
\log_{10} t_{1/2}=2aZ_d\sqrt{\frac{A}{Q}}+b\sqrt{2AZ_d(A_d^{1/3}+4^{1/3})}+c+d\sqrt{2AZ_d(A_d^{1/3}+4^{1/3})}\sqrt{l(l+1)}
\end{equation}
where $A=\frac{4A_d}{A_d+4}$, with $A_d$, $Z_d$ the mass and proton
numbers of the daughter nucleus respectively, and the constants are
given by $a=0.4392060$, $b=-0.3944174$, $c=-27.0648730$ and $d=0.0051825$ such
that $Q$ is taken in MeV resulting in the half-life, $t_{1/2}$, in seconds.
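As a quick cross-check of Eq. (\ref{universallaw}) against Table \ref{halflivesdoublefoldingmodel}, a direct transcription (a minimal Python sketch):
\begin{verbatim}
import math

def udl_log10_half_life(Zd, Ad, Q, l=0):
    """log10 of the UDL half-life (in s); Zd, Ad refer to the daughter,
    Q is the alpha-decay Q-value in MeV, l the angular momentum."""
    a, b, c, d = 0.4392060, -0.3944174, -27.0648730, 0.0051825
    A = 4.0 * Ad / (Ad + 4.0)                    # reduced mass number
    rho = math.sqrt(2.0 * A * Zd * (Ad**(1/3) + 4**(1/3)))
    return (2.0 * a * Zd * math.sqrt(A / Q) + b * rho + c
            + d * rho * math.sqrt(l * (l + 1)))

# 212Po -> 208Pb + alpha with Q = 8.954 MeV and l = 0:
print(10.0 ** udl_log10_half_life(Zd=82, Ad=208, Q=8.954))  # ~1.6e-7 s
\end{verbatim}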
In the next section we shall also describe the effective $Q$-value approach
to evaluate the temperature dependent half-lives using the DFM and the UDL.
Comparing it with the statistical approach gives us an idea of the usefulness
of this approach in the context of $\alpha$
decay half-lives in an astrophysical
environment. We shall also discuss one of the earliest attempts to evaluate
the thermally enhanced $\alpha$-decay rates in connection with the $s$-process
nucleosynthesis \cite{clayton}. The authors in \cite{clayton} predicted a
decrease in the half-lives by about 20 - 60 orders of magnitude depending
on the nucleus for a temperature around 2 GK. We do not find such spectacular
effects of temperature in our calculations. The reason for this difference
will become evident in the next sections.
\section{Temperature dependent half-lives}
\label{T-dependence}
Nucleosynthesis in the later stages of stellar evolution, especially
through the r-process is considered to take place at considerably
high temperatures of the order
of 10$^9$ K. The calculation of the abundance of heavy elements depends on a precise
determination of the nuclear reaction rates of the processes which produce
the elements as well as the processes which destroy the newly formed
nuclei.
Though the neutron capture cross sections and their reverse reaction rates
at elevated temperatures are carefully taken into account, the network codes
usually rely on the laboratory values of the half-lives of $\alpha$
decays from the ground states of nuclei.
Perrone and Clayton \cite{clayton} investigated the effect of thermally excited states in the alpha decay of some nuclei and its application in the $s$-process nucleosynthesis. However, such effects are not included in the $r$-process
simulations.
In what follows, we present a statistical
calculation of thermally enhanced $\alpha$-decay rates
that includes experimentally observed excited levels for some nuclei.
We also propose a model that uses an effective $Q$-value approach and
which makes use of an average excitation energy at a given temperature.
Finally, we discuss the approach of Perrone and Clayton briefly for
completeness.
\subsection{Statistical calculation}
The temperature-dependent half-life, $t_{1/2}(T) = \ln 2/\lambda(T)$,
can be evaluated within the standard statistical approach \cite{iliadisbook}
by defining the temperature dependent decay constant as follows:
\begin{equation}
\lambda(T)=\sum_ip_i\sum_{j}\lambda_{ij}\,.\label{Iliadis}
\end{equation}
Here the sums over $i$ and $j$ are over the parent and daughter states
respectively. Thus, $\lambda_{ij}$ is the decay constant for
the decay of the $i^{th}$ level in the parent to the $j^{th}$ level in
the daughter such that
\begin{equation}
\lambda_{ij}=\frac{\ln(2)}{t_{1/2}^{ij}}\,.
\end{equation}
The population probability, $p_i$, is given with a Boltzmann factor as
\cite{WardFowler}
\begin{equation}\label{populate}
p_i=\frac{(2J_i+1)e^{(-E_i/k_BT)}}{\sum_l(2J_l+1)e^{(-E_l/k_BT)}}
\end{equation}
where $J_i$ and $E_i$ are the spin and the excitation energy of the
state $i$, respectively.
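As a rough worked example, consider the $6^+$ level of $^{214}$Rn at $E_i = 1.442$ MeV (Table \ref{table2}): at $T = 2$ GK one has $k_BT \approx 0.172$ MeV, so relative to the $0^+$ ground state this level carries the weight $(2J_i+1)e^{-E_i/k_BT} \approx 13\,e^{-8.4} \approx 3\times 10^{-3}$; its much shorter half-life can nevertheless make its contribution to the thermal decay constant non-negligible.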
Inserting (\ref{populate})
in (\ref{Iliadis})
\begin{equation}\label{stathalflife}
\lambda(T)=\frac{\ln(2)}{\sum_l(2J_l+1)e^{(-E_l/k_BT)}}\sum_{i,j}{\frac{(2J_i+1)
e^{(-E_i/k_BT)}}{t_{1/2}^{i}}} \,(BR)_{ij}
\end{equation}
where $(BR)_{ij}$ is the branching fraction for the decay from the $i^{th}$
level of the parent nucleus to the $j^{th}$ level in the
daughter nucleus.
The detailed decay
schemes and the percentage decay to a particular channel, i.e.,
$I = (\lambda_{ij}/\lambda_{tot})\times 100\%$, can be found at the website in
\cite{halflifedata}. The branching fraction, $(BR)_{ij} =
\lambda_{ij}/\lambda_{tot}$, can thus be obtained from the data tables.
To evaluate the temperature dependent half-life in the statistical approach,
we shall use Eq. (\ref{stathalflife}) with the input half-lives,
$t_{1/2}^{i}$ and $(BR)_{ij}$ taken from experiment.
If the experimental half-life of a level
is not known, it is calculated using the UDL at an effective $Q$-value given
by $Q + E_i$, where $E_i$ is the energy of the excited level. In such cases,
even if the experimental branching ratio is known, it is not used but taken to
be 100\% since the UDL by definition is formulated only for the alpha decay
channel.
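A minimal Python sketch of Eq. (\ref{stathalflife}), with the level scheme of $^{214}$Rn from Table \ref{table2} as illustrative input (energies in MeV, half-lives in seconds, branching fractions as ratios):
\begin{verbatim}
import math

K_B = 8.617333e-11        # Boltzmann constant in MeV/K
LN2 = math.log(2.0)

def decay_constant_T(levels, alpha_transitions, T):
    """Thermally averaged decay constant lambda(T) of Eq. (stathalflife).

    levels            : (E_i, J_i) of every level entering the partition sum
    alpha_transitions : (E_i, J_i, t12_i, BR_ij) per alpha branch i -> j
    T                 : temperature in K
    """
    beta = 1.0 / (K_B * T)
    Z = sum((2*J + 1) * math.exp(-E * beta) for E, J in levels)
    lam = sum((2*J + 1) * math.exp(-E * beta) * LN2 / t12 * br
              for E, J, t12, br in alpha_transitions)
    return lam / Z

rn214_levels = [(0.0, 0), (1.442, 6), (1.625, 8)]
rn214_alpha = [(0.0, 0, 2.7e-7, 1.00),
               (1.442, 6, 6.9e-8, 0.02),
               (1.625, 8, 6.5e-9, 0.041)]
print(LN2 / decay_constant_T(rn214_levels, rn214_alpha, 2.0e9))  # t1/2 at 2 GK
\end{verbatim}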
\subsection{Effective Q-value model}\label{Qeff-model}
Alpha decay half-lives are very sensitive to the penetrability factor, which
in turn means that they are also very sensitive to the $Q$-values.
For a tunneling decay of an $\alpha$ particle taking place in a very hot
surrounding, one can model the effect of temperature by an increase in the
energy, $Q$, with which the $\alpha$ tunnels through the Coulomb barrier.
A higher $Q$-value would clearly reduce the area under the integral in the
exponential term in Eq. (\ref{width}) and hence lead to an increase in
the decay probability (and therefore a decrease in the half-life).
This increase in the $Q$-value can be modelled by adding an average
excitation energy of the nucleus at a given temperature.
Such an effective $Q$-value of an $\alpha$-decay can be expressed as
\begin{equation}
Q_{eff}=Q + \bar{\epsilon}(A,Z,T)\label{effectiveQ}
\end{equation}
where $\bar{\epsilon}(A,Z,T)$ is given by the standard definition
of the average
excitation energy \cite{sitenko} in statistical physics as,
\begin{equation}
\bar{\epsilon}(A,Z,T)=-\frac{\partial}{\partial\beta}\ln {\sf Z}(A,Z,T)
\label{aee}
\end{equation}
with the canonical partition function ${\sf Z}$ given by
\begin{equation}
{\sf Z}(A,Z,T)=\sum_i^n g_i\exp(-\beta E_i)+\int_{E_n}^{E_{max}}{D(E)\exp(-\beta E)dE}
\end{equation}
where, $\beta = 1/(k_B T)$, $g_i=2J_i+1$ and $J_i$ is the spin of the
$i^{th}$ level and
$D(E)$ is the nuclear level density for which we choose the following form
\cite{ericsonref}:
\begin{equation}
D(E)=\frac{\sqrt{\pi}}{12}\frac{e^{2(aE)^{1/2}}}{a^{1/4}E^{5/4}}\label{ericson}
\end{equation}
The level density parameter, $a$, is taken to be $A/9$ (in MeV$^{-1}$), where $A$ is
the mass number of the parent nucleus.
If we consider the discrete levels as well as the continuum, the average
excitation energy, using the above equations can be expressed as
\begin{equation}
\bar{\epsilon}(A,Z,T)=\frac{\sum_i^n g_iE_i\exp(-\beta E_i)+\int_{E_n}^{E_{max}}{ E\times D(E)\exp(-\beta E)dE}}{\sum_i^n g_i\exp(-\beta E_i)+\int_{E_n}^{E_{max}}{D(E)\exp(-\beta E)dE}}.
\label{excitation2}
\end{equation}
The temperature, $T$, appearing in the above expression (through $\beta = 1/(k_B T)$), is in principle
the {\it nuclear temperature}. Modification of nuclear properties at finite
temperatures is relevant both for applications in astrophysics \cite{Lattimer,
BetheRevModPhys,Botvina} and for
models of finite nuclei and nuclear matter at high excitation
energy \cite{Benvenuto,Morrisey}.
The internal excitations of nuclei can
play an important role in regulating their abundance.
The excitations form an important ingredient in multifragmentation studies
of hot nuclei. For example, the authors in \cite{BotvinaPLB}, working
within a statistical multifragmentation model, find significant
temperature-dependent modifications
relevant for stellar dynamics and nucleosynthesis.
They perform calculations
for supernova matter by assuming that the nuclei have the same internal
temperature as the surrounding medium. In the present work, we shall also
assume a dynamical equilibrium such that the temperature, $T$, above can be
assumed to be the surrounding temperature.
Having defined the effective $Q$-value in this manner, the temperature
dependent half-life within the density dependent double folding model
can be evaluated using Eq. (\ref{width}) with $Q$ replaced by $Q_{eff}$.
With a similar replacement we can also estimate the temperature
dependence using the UDL in
Eq. (\ref{universallaw}). In Table \ref{table1} we list the average
excitation energies at different temperatures for some heavy nuclei which will
be studied in this work.
\begin{table}[h]
\resizebox{\textwidth}{!}{
\begin{tabular}{ |p{1.9cm}|p{1.9cm}|p{1.9cm}|p{1.9cm}|p{1.9cm}| p{1.9cm}|p{1.9cm}|}
\hline
\multicolumn{7}{|c|}{$\bar{\epsilon}(A,Z,T)$ [MeV]} \\
\hline
&$Q$ \,[MeV]&0.8 GK&1.2 GK&1.6 GK& 2 GK&2.4 GK\\ \hline
$_{\,\,\,84}^{212}$Po &8.954 &$0.000096$ & $0.0034$ & $0.0231$ & $0.08142$ & $0.2004$\\
$^{214}_{\,\,\,86}$Rn &9.208 &$0.000147$ & $0.00438$ & $0.02520$ & $0.0754$ & $0.1609$\\
$^{215}_{\,\,\,87}$Fr &9.540 &$0.00096$ & $0.02738$ & $0.1349$ & $0.3031$ & $0.4646$\\
$^{216}_{\,\,\,88}$Ra &9.526&$0.00020$ & $0.00455$ & $0.0252$ & $0.0729$ & $0.1510$\\
$^{217}_{\,\,\,89}$Ac &9.832 &$0.001126$ &$0.02741$&$0.1233$ &$0.2684$ &$0.4129$ \\
\hline
\end{tabular}}
\caption{Average excitation energies evaluated using equation
\eqref{excitation2}
in conjunction with \eqref{ericson}.}\label{table1}
\end{table}
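The entries of Table \ref{table1} follow from Eq. (\ref{excitation2}); a minimal Python sketch of that average (taking the experimentally known discrete levels as input and a user-chosen upper cutoff $E_{max}$ for the continuum part) reads:
\begin{verbatim}
import math
from scipy.integrate import quad

K_B = 8.617333e-11   # Boltzmann constant in MeV/K

def level_density(E, A):
    """Ericson level density of Eq. (ericson), with a = A/9 MeV^-1."""
    a = A / 9.0
    return (math.sqrt(math.pi) / 12.0) * math.exp(2.0 * math.sqrt(a * E)) \
           / (a**0.25 * E**1.25)

def avg_excitation(discrete, A, T, E_max=30.0):
    """Average excitation energy of Eq. (excitation2).

    discrete : list of (E_i [MeV], J_i) experimental levels
    A        : mass number of the parent nucleus
    T        : temperature in K; the continuum integral starts at the
               last discrete level E_n.
    """
    beta = 1.0 / (K_B * T)
    E_n = max(E for E, _ in discrete)
    num = sum((2*J + 1) * E * math.exp(-beta * E) for E, J in discrete)
    den = sum((2*J + 1) * math.exp(-beta * E) for E, J in discrete)
    num += quad(lambda E: E * level_density(E, A) * math.exp(-beta * E),
                E_n, E_max)[0]
    den += quad(lambda E: level_density(E, A) * math.exp(-beta * E),
                E_n, E_max)[0]
    return num / den
\end{verbatim}
The resulting $Q_{eff} = Q + \bar\epsilon$ is then inserted in Eq. (\ref{width}) or Eq. (\ref{universallaw}) in place of $Q$.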
\subsection{Perrone and Clayton approach}
The unique attempt in the literature at evaluating the thermally enhanced
$\alpha$ decay of nuclei in stellar environments can be found
in \cite{clayton} by Perrone and Clayton. They evaluated the $\alpha$ decay
rates of several nuclei formed in $s$-process nucleosynthesis and found that
the decay rates are greatly enhanced (i.e., the half-lives are greatly
reduced) once the contributions from thermally excited nuclei at elevated
temperatures are included.
The temperature-dependent half-life in \cite{clayton} was given by
\begin{equation}
[t_{1/2}^{PC}(Z,A,T)]^{-1}=\int_{0}^{\infty}{\sum_J\frac{F(Z,A,E,J,T)\,
D(Z,A,E,J)dE}{t_{1/2}(Z,A,E,J)}}\label{clayton1}
\end{equation}
where $t_{1/2}(Z,A,E,J)$, the temperature-independent half-life for the
decay of the parent nucleus to the daughter ground state, is weighted
by the nuclear density of states $D(Z,A,E,J)$ and the occupation probability
$F(E,J,T)$, which for an excited nucleus with energy $E_i$ and spin $J_i$ is
given by
\begin{equation}
F(E_i,J_i,T)=\frac{(2J_i+1)e^{-E_i/k_B T}}
{\sum_i(2J_i+1)e^{-E_i/k_B T}}\approx\frac{(2J_i+1)e^{-E_i/k_B T}}{2J_0+1}
\end{equation}
with $J_0$ the spin of the ground state and $k_B$ the Boltzmann constant.
The last approximation was justified by mentioning that the nuclear ground
state dominates the sum over states for temperatures below 2 GK.
The nuclear level density used was taken from \cite{gilbert}.
The temperature-independent half-life in the denominator of Eq. (\ref{clayton1}) was
calculated using the standard semiclassical approach for tunneling decay
where the penetration probability depends on the $Q$-value of $\alpha$ decay.
The tunneling $\alpha$ is assumed to have an energy equal to the $Q$-value plus
the excitation energy (which is indeed the integration variable in
Eq. (\ref{clayton1})). Since the penetration factor is evaluated within a simple
model for the potential, the calculation of the temperature-independent
half-life appearing inside the integral in Eq. (\ref{clayton1})
essentially resembles the half-life as found in textbooks
\cite{BookBeisser} but
evaluated at an effective $Q$-value.
With such a half-life as an input, the temperature dependent half-life
was evaluated as an integral over energies from 0 to $\infty$.
It is important to note that although the nuclear energy levels are discrete,
the half-life $t_{1/2}^{PC}(Z,A,T)$ defined by (\ref{clayton1}) in
\cite{clayton} treats the nucleus as having a continuum of excited states
with the density of states given by $D(Z,A,E,J)$. The authors mention the
need for explicitly taking the discrete levels into account if one wished
to use the calculations in context with astrophysics. However, given the
meagre data available in 1970, the authors found their approach appropriate
for an initial survey of the problem. The present work remedies this
omission, made by Perrone and Clayton for lack of data, and finds
that even if the decrease in half-lives is not as spectacular as that in
\cite{clayton}, it is certainly significant and possibly
relevant for nucleosynthesis calculations.
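To make the comparison concrete, Eq. (\ref{clayton1}) can be mimicked as follows (a rough Python sketch with the spin structure suppressed; for illustration the UDL of Eq. (\ref{universallaw}) supplies $t_{1/2}(Q+E)$ and the Ericson form of Eq. (\ref{ericson}) supplies the level density, whereas Perrone and Clayton used the Gilbert-Cameron density \cite{gilbert}; the helpers udl_log10_half_life and level_density are those of the sketches above):
\begin{verbatim}
import math
from scipy.integrate import quad

def pc_half_life(Zd, Ad, A_parent, Q, T, E_min=0.7, E_max=15.0):
    """Continuum estimate of t_1/2(T) in the spirit of Eq. (clayton1).

    E_min should sit near the first excited level, since the Ericson
    density is not integrable down to E = 0.
    """
    kT = 8.617333e-11 * T     # MeV
    def integrand(E):
        t12 = 10.0 ** udl_log10_half_life(Zd, Ad, Q + E)
        return level_density(E, A_parent) * math.exp(-E / kT) / t12
    inv_t = (10.0 ** (-udl_log10_half_life(Zd, Ad, Q))
             + quad(integrand, E_min, E_max, limit=200)[0])
    return 1.0 / inv_t
\end{verbatim}
Run for the nuclei of Table \ref{halflivesdoublefoldingmodel} at around 2 GK, such a continuum treatment yields half-lives many orders of magnitude below the statistical sum over discrete levels, illustrating the overestimate discussed above.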
\section{Excited nuclear levels, $Q$ values and shell closures}
\label{structure}
A naive expectation for the decay of thermally excited nuclei would be that
for a nucleus which decays by $\alpha$ decay in its ground state, there must
exist some excited levels which decay by emitting an $\alpha$ too.
However, experimental results show that this is, in general, not true.
A careful examination of the nuclear data tables reveals that
the $\alpha$ decay occurs in heavy nuclei mostly in the ground state.
An excited parent nucleus often decays
by emitting a photon ($\gamma$-decay). In fact, the excited nucleus
undergoes several successive $\gamma$-decays before reaching its ground state.
If this were true for all nuclei decaying by $\alpha$ decay, an undertaking as
in the present work would not make much sense.
However, based on a conjecture in an earlier work \cite{kelkar3}, we did
find exceptions.
In \cite{kelkar3}, the authors noted an interesting phenomenon while
performing a calculation of the tunneling times in $\alpha$ decay.
The amount of time spent by an $\alpha$ in front of the barrier before
tunneling (the so-called transmission dwell time), reaches a minimum
at $N$ = 128 of the parent nucleus in the investigated region from
$N$ = 116 to $N$ = 132.
$N$ = 128 of the parent however corresponds to $N$ = 126 of the daughter,
implying that the $\alpha$ spends the least amount of time with the
magic daughter. The latter essentially corresponded to the shortest half-life
or the highest decay rate for a nucleus with $N$ = 128.
Put differently, a parent nucleus decaying by $\alpha$ emission does so
readily when the daughter happens to lie at the shell closure of $N$ = 126.
Motivated by this finding in \cite{kelkar3}, we examined the list of
excited levels of nuclei with $N$ = 128 and with the possibility of an
$\alpha$ decay in the ground state. Not much to our surprise, indeed, we
found that the nuclei, $^{212}_{84}$Po, $^{214}_{86}$Rn,
$^{215}_{87}$Fr, $^{216}_{88}$Ra and $^{217}_{89}$Ac had several
experimentally observed $\alpha$ decays from excited levels.
These nuclei are formed in the $r$-process nucleosynthesis and will be
studied in the present work.
The above phenomenon of a larger number of excited levels decaying by $\alpha$
decay should in principle happen at the other $N$ as well as $Z$
shell closures too. Inspecting
the parent nuclei near $N$ or $Z$ = 84 we find that they do display
some such excited states, but the effect is either not so pronounced
or the data is scarce. In the range
of the medium heavy nuclei, with daughters
near the shell closure of $Z$ = 50, there are
hardly any nuclei decaying by $\alpha$ decay and near $N$ = 50, none.
We have an interesting case however at the lowest magic number of 2.
$^8$Be decays
to two $\alpha$'s, i.e., $^4$He nuclei and hence both the daughters in the
decay have $N$ as well as $Z$ = 2. The number of excited levels which
decay by $\alpha$ decay are 13 and one would expect a strong effect in
the thermally enhanced decay rates. However, one does not observe a big
change in the decay rate due to temperature since the spacing between levels
is much larger than in heavy nuclei. For example, the first excited
state in $^8$Be is around 3 MeV and $^{212}$Po has about 50 excited levels
between 0 and 3 MeV. The high density of excited states in heavy nuclei is
expected and was explained a long time ago by Bethe \cite{bethe1936}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Qvalues.pdf}
\caption{Q values in $\alpha$ decay as a function of the
neutron number of the parent nuclei.
Similar symbols indicate nuclei with the same number of protons.
Note the steep rise at
$N$ = 84 and $N$ = 128 corresponding to the shell closures of
82 and 126 in the daughter nuclei.}\label{Qvaluesplot}
\end{figure}
The peculiar observations mentioned above are in fact a reflection of the
behaviour of the $Q$-values in $\alpha$ decay as a function of the neutron
and proton numbers, $N$ and $Z$ in the decaying nuclei. In Fig.
\ref{Qvaluesplot}
we see that, close to the neutron numbers $N^m + 2$,
with $N^m$ the magic numbers 82 and 126,
the $Q$ values rise sharply. Higher $Q$ values imply
a large tunneling probability for the $\alpha$ and hence a bigger decay
rate. Though not shown in the figure such a steep rise happens also as
a function of the proton number, $Z$ for $Z^m$ + 2 with $Z^m$ being magic.
However, there are not many $\alpha$ decays in the vicinity of $Z$ = 52 and
84.
When the parent nucleus has a neutron number
of $N^m$ + 2, we expect the decay probability to be large.
The $Q$ value as such is a difference of the masses of nuclei and hence
is a function of the binding energies of nuclei.
The cluster preformation probability was
shown in \cite{deng} to be directly proportional to the intrinsic energy
of the cluster which in turn depends on the difference of binding
energies of the nuclei involved.
A larger clustering leads to a larger probability
of populating excited states which decay by emitting an $\alpha$
\cite{buck5,astier,bairen}. The latter is due to the fact that increasing
the energy increases the probability of tunneling of an already formed
$\alpha$ cluster.
Looking back at the heavy nuclei studied by Perrone and Clayton (for
which data was most likely not available in 1970), namely,
$^{144}$Nd, $^{148}$Sm, $^{150}$Sm, $^{152}$Sm, $^{158}$Dy, $^{174}$Hf and
$^{176}$Hf, we do not find experimental evidence of
excited levels decaying by $\alpha$ emission in any of
these nuclei. This is essentially due to the fact that the $Q$ values in these
decays are relatively small.
These nuclides have long half-lives (of the order of 10$^{15}$ years)
or are stable; from a potential-barrier perspective, their transmission
probability is strongly suppressed because the $Q$-value is small.
One could consider the possibility of a $\gamma$ delayed
$\alpha$ decay of a nucleus due to the thermally excited levels. However, the
$\gamma$ decay half-lives of the excited levels are much smaller than the
ground state $\alpha$ decay half-lives and the delay would be negligible.
\section{Results and Discussions}
\label{results}
In view of the physics discussed in the previous section, we shall present
calculations for the temperature dependence of half-lives of nuclei with
the neutron number, $N$ = 128. The daughter nucleus in this case is at the
shell closure of $N$ = 126. To be specific, we shall consider the
following nuclei:
$_{\,\,\,84}^{212}$Po, $^{214}_{\,\,\,86}$Rn, $^{215}_{\,\,\,87}$Fr,
$^{216}_{\,\,\,88}$Ra and $^{217}_{\,\,\,89}$Ac. Each of these nuclei decays
to a daughter nucleus with $N$ = 126. $_{\,\,\,84}^{212}$Po decays to the
doubly magic nucleus $^{208}_{\,\,\,82}$Pb.
In the case of the statistical approach involving a sum over all excited
states, we perform the calculation using the information provided in
Table \ref{table2}. In the absence of data on the half-lives, we use a
universal decay law (UDL) at a shifted $Q$-value, namely, $Q_i = Q + E_i$,
where $E_i$ is the energy of the $i^{th}$ excited state of the parent nucleus.
For the UDL calculation, we use the minimum allowed value of
the orbital angular momentum
quantum number $l$ for each excited level decay and have
listed it in Table \ref{table2}. Higher values of $l$ could be included,
however, the nature of the UDL is such that even if larger values of $l$
were taken into account, the final result would not change much.
Note that the $b$ and $d$ parameters of the second
and third terms in Eq. (\ref{universallaw}),
respectively, differ by roughly two orders of magnitude
(the third term being the smallest).
The ratio of these two terms grows
roughly linearly with angular momentum, $|d/b| \times \sqrt{l(l+1)} \approx \sqrt{l(l+1)}/76 \sim l/76$; therefore only very high values of angular momentum would make the third term comparable to the second. Such large momenta occur for large values of $J_p$, and in those cases $l_{min}$ is also large, providing a good
For the half-life using the UDL, we do not include any preformation factor,
i.e., we assume it to be unity.
This factor is usually calculated
phenomenologically from the ratio of the theoretical and experimental
half-lives. Performing such a calculation for each individual excited level
is a formidable task and beyond the scope of the present work.
Besides, since the contribution of this factor does not vary exponentially,
taking it to be a constant appears a reasonable assumption. We
are interested in the relative decrease of the half-lives at elevated
temperatures compared to the terrestrial ones, and expect that the error
introduced by this omission is not large.
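To make the use of the UDL at shifted $Q$-values concrete, the short Python
sketch below evaluates a UDL-type expression at $Q_i = Q + E_i$. The
functional form and the numerical coefficients are quoted from the UDL
literature and serve only as placeholders for the parameters of
Eq. \eqref{universallaw} actually used in the text (the angular-momentum
term is switched off by default).
\begin{verbatim}
# Sketch: alpha-decay half-life from a UDL-type formula at a shifted
# Q-value, Q_i = Q + E_i. Coefficients a, b, c are placeholder values
# from the UDL literature; d (angular-momentum term) is off by default.
import numpy as np

A_ALPHA, Z_ALPHA = 4, 2

def udl_log10_t12(Z_d, A_d, Q, l=0,
                  a=0.4065, b=-0.4311, c=-20.7889, d=0.0):
    """log10 of the half-life (s); Q in MeV (use Q + E_i for an
    excited parent), Z_d and A_d refer to the daughter nucleus."""
    mu = A_ALPHA * A_d / (A_ALPHA + A_d)          # reduced mass number
    chi = Z_ALPHA * Z_d * np.sqrt(mu / Q)         # Coulomb parameter
    rho = np.sqrt(mu * Z_ALPHA * Z_d
                  * (A_ALPHA ** (1 / 3) + A_d ** (1 / 3)))
    return a * chi + b * rho + c + d * np.sqrt(l * (l + 1))

# Example: 212Po -> 208Pb + alpha, Q = 8.954 MeV; ground state and the
# 0+ level at E_i = 1.249 MeV of Table 2 (both l = 0).
for E_i in (0.0, 1.249):
    print(E_i, 10 ** udl_log10_t12(82, 208, 8.954 + E_i))
\end{verbatim}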
\begin{table}[ht]
\renewcommand{\arraystretch}{0.5}
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{|p{1.2cm}|p{1.3cm}|p{1cm}|p{1cm}|p{0.8cm}|p{1.8cm}|p{3.2cm}|}\hline
Parent & $E_i$ [MeV] & $J^{\pi}_p$ & $J^{\pi}_d$ & $l_{min}$ & B.R. (\%) ($\alpha$) & $t_{1/2}^{i}$ [s]\\
\hline
$_{84}^{212}$Po & 0 & 0+ &0+& 0 & 100 & $2.9\times 10^{-7}$ \\
- & 0.727 & 2+ &0+& 2 &0.033 & $1.42\times 10^{-11}$\\
- & 1.132 & 4+ &0+& 4 & 0.5 &$1.12\times 10^{-8}$ (UDL) \\
- & 1.249 & 0+ &0+& 0 & 100 & $1.52\times 10^{-10}$ (UDL)\\
- & 1.355 & 6+ &0+& 6 & 3 & $7.60\times 10^{-10}$\\
- & 1.476 & 8+ &0+& 8 & 3 & $1.46\times 10^{-8}$\\
- & 1.547 & 0+ &0+& 0 & 100 & $3.49\times 10^{-11}$ (UDL)\\
- & 1.578 & 0+ &0+& 0 & 100 & $3.01\times 10^{-11}$ (UDL)\\
- & 1.612 & 0+ &0+& 0 & 100 & $2.55\times 10^{-11}$ (UDL)\\
- & 1.657 & 0+ &0+& 0 & 100 & $2.06\times 10^{-11}$ (UDL)\\
- & 1.679 & 2+ &0+& 2 & 0.3 & $5.40\times 10^{-13}$\\
- & 1.800 & 0+ &0+& 0 &26 & $1.05\times 10^{-11}$ (UDL)\\
- & 1.805 & 2+ &0+& 2 &1.6 & $7.85\times 10^{-11}$ (UDL)\\
- & 2.930 & 18+&0+& 18 &99.93(96.83) & $45.1$\\
- & 2.930 & 18+&3-& 15 & 99.93(1) & $45.1$ \\
- & 2.930 & 18+&5-& 13 & 99.93(2.05) & $45.1$ \\
$_{86}^{214}$Rn & 0 & 0+ &0+& 0 & 100 & $2.7\times 10^{-7}$\\
- & 1.442 & 6+ &0+& 6 & 2$^{\dagger}$ & $6.9\times 10^{-8}$\\
- & 1.625 & 8+ &0+& 8 & 4.1 & $6.5\times 10^{-9}$\\
$_{87}^{215}$Fr & 0 & 9/2-&9/2-& 0 & 100 & $8.6\times 10^{-8}$\\
- & 0.835 & 13/2+ &9/2-& 3 &4.3 & $1.43\times 10^{-8}$ (UDL)\\
- & 1.121 & 17/2- &9/2-& 4 &0.9& $8.07\times 10^{-9}$ (UDL)\\
- & 1.149 & 15/2- &9/2-& 4 & 0.9& $7.04\times 10^{-9}$ (UDL)\\
- & 1.440 & 19/2- &9/2-& 6 & 4.7 & $4.0\times 10^{-9}$\\
- & 1.573 & 23/2- &9/2-& 8 & 4.1 & $3.5\times 10^{-9}$\\
$_{88}^{216}$Ra & 0 & 0+ &0+& 0 & 100 & $1.82\times 10^{-7}$\\
- & 1.164 & 4+ &0+& 4 & 0.23 &$1.61\times 10^{-8}$ (UDL)\\
- & 1.507 & 6+ &0+& 6 &0.58 & $2.0\times 10^{-10}$\\
- & 1.711 & 8+ &0+& 8 & 1.86 & $1.42\times 10^{-9}$\\
- & 2.026 & 10+ &0+& 10 & 0.12 & $6.0\times 10^{-10}$\\
$_{89}^{217}$Ac & 0 & 9/2- &9/2-& 0 & 100 & $6.9\times 10^{-8}$\\
- & 1.498 & 19/2- &9/2-& 6 & 0.46 & $9.81\times 10^{-9}$ (UDL)\\
- & 1.528 & 21/2- &9/2-& 6 &0.46 & $1.00\times 10^{-8}$\\
- & 2.012 & 29/2+ &9/2-& 11 & 4.51(4.1) & $7.40\times 10^{-7}$\\
- & 2.012 & 29/2+ &7/2-& 11 &4.51(0.32) & $7.40\times 10^{-7}$\\
- & 2.012 & 29/2+ &13/2+& 8 & 4.51(0.122) & $7.40\times 10^{-7}$\\
\hline
\end{tabular}}
\\
$^{\dagger}$ Since the listed value in the data tables is \%$\alpha >$ 0, we
choose 2\% as an estimate.
\caption{Energy, spin, parity, branching ratio and measured half-life of the
levels which decay by $\alpha$ emission. If the experimental half-life of a
level is not known, it is calculated using the UDL at an effective $Q$-value
given by $Q + E_i$, where $E_i$ is the energy of the excited level. In such
cases, even if the experimental branching ratio is known, it is not used but
taken to be 100\%, since the UDL is by definition formulated only for the
$\alpha$-decay channel. $l_{min}$ is the minimum value of the orbital angular
momentum quantum number allowed by the selection rules.}\label{table2}
\end{table}
We provide two sets of results for the half-lives of nuclei with
neutron number $N$ = 128, evaluated using two prescriptions. In the first
set, Table \ref{table3}, we calculate half-lives using the double-folding
approach and compare them with the half-lives obtained from
Eq. \eqref{universallaw}. We take $Q\rightarrow Q_{eff}$, following the
$Q_{eff}$ approach of section \ref{Qeff-model}, in which the extra
energy is given by Eq. \eqref{excitation2}.
In the second set, Table \ref{table4}, we use the statistical method, in
which the decay constant and, in turn, the half-life are given by
Eq. \eqref{stathalflife}. We compare two approaches in Table \ref{table4}:
one in which all available listed levels are included, and another in
which only levels experimentally found to decay by alpha emission are used.
In both approaches, for those levels with unknown experimental half-lives
the UDL (Eq. \eqref{universallaw}) is invoked to estimate $t_{1/2}$.
The results in Table \ref{table3} ensure that the estimate obtained
from Eq. \eqref{universallaw}, used as an input in the statistical approach
for the experimentally unknown half-lives, is reasonable.
In Table \ref{table3}, a comparison of the half-lives using the UDL expression
and the double-folding model is given for a range of temperatures from
0 to 2.4 GK. The half-life at a given temperature is determined using an
effective $Q$ value given by Eq. \eqref{effectiveQ}, namely
$Q_{eff}=Q + \bar{\epsilon}(A,Z,T)$, where $\bar{\epsilon}(A,Z,T)$ is the
average excitation energy at that temperature, evaluated considering all
excited levels of the given nucleus.
For the sake of comparison we choose the preformation factor
$P_{\alpha}$ = 1 in Eq. (\ref{width}).
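A minimal sketch of this average is given below, assuming that
Eq. \eqref{excitation2} reduces to the canonical Boltzmann average over the
excited levels with $2J+1$ degeneracies; this illustrates the procedure and
is not a reproduction of the exact expression used in the text.
\begin{verbatim}
# Sketch: average excitation energy entering Q_eff = Q + eps_bar(T),
# assuming a canonical Boltzmann average with 2J+1 degeneracies.
import numpy as np

K_B = 8.617333e-2          # Boltzmann constant in MeV/GK

def eps_bar(E, J, T):
    """Average excitation energy (MeV) at temperature T (GK)."""
    E, J = np.asarray(E), np.asarray(J)
    w = (2 * J + 1) * np.exp(-E / (K_B * T))
    return np.sum(w * E) / np.sum(w)

# First few levels of 212Po from Table 2 (energies in MeV, spins J):
E = [0.0, 0.727, 1.132, 1.249, 1.355, 1.476]
J = [0, 2, 4, 0, 6, 8]
print(eps_bar(E, J, 2.4))  # average excitation at T = 2.4 GK
\end{verbatim}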
\begin{table}[ht]
\resizebox{\textwidth}{!}{
\begin{tabular}{ |p{2.2cm}|p{1cm}|p{2.1cm}|p{1.9cm}|p{1.9cm}|p{1.9cm}| p{1.9cm}|p{1.9cm}|}
\hline
\multicolumn{8}{|c|}{$t_{1/2}(T)$[s]} \\
\hline
&$Q$&$0$ GK&$0.8$ GK&$1.2$ GK&$1.6$ GK&$2$ GK&$2.4$ GK\\
\hline
$^{212}Po$ [UDL] &8.954 &$1.572\times 10^{-7}$ &$1.57\times 10^{-7}$ & $1.54\times 10^{-7}$ & $1.36\times 10^{-7}$ & $9.57\times 10^{-8}$ & $5.69\times 10^{-8}$\\
& & &(0.06)& (2.09) & (13.2) & (39.1) & (70.1)\\
$^{212}Po$ [DFM] & &$6.49\times 10^{-8}$ &$6.48\times 10^{-8}$ & $6.36\times 10^{-8}$ & $6.45\times 10^{-8}$ & $4.12\times 10^{-8}$ & $2.16\times 10^{-8}$\\
& & &(0.15)& (1.92) & (12.1) & (36.4) & (66.7)\\
$^{214}Rn$ [UDL] &9.208 &$2.07\times 10^{-7}$ &$2.06\times 10^{-7}$ & $2.01\times 10^{-7}$ & $1.77\times 10^{-7}$ & $1.31\times 10^{-7}$ & $7.49\times 10^{-8}$\\
& & &(0.09)& (2.60) & (14.0) & (36.4) & (61.6)\\
$^{214}Rn$ [DFM] & &$7.92\times 10^{-8}$ &$7.91\times 10^{-8}$ & $7.73\times 10^{-8}$ & $6.89\times 10^{-8}$ & $5.23\times 10^{-8}$ & $3.31\times 10^{-8}$\\
& & &(0.08)& (2.39) & (12.98) & (33.9) & (58.2)\\
$^{215}Fr$ [UDL] &9.540 &$7.11\times 10^{-8}$ &$7.07\times 10^{-8}$ & $6.07\times 10^{-8}$ & $3.28\times 10^{-8}$ & $1.28\times 10^{-8}$ & $5.32\times 10^{-9}$\\
& & &(0.55)& (14.6) & (53.8) & (81.9) & (92.5)\\
$^{215}Fr$ [DFM] & &$2.92\times 10^{-8}$ &$2.90\times 10^{-8}$ & $2.53\times 10^{-8}$ & $1.45\times 10^{-8}$ & $6.19\times 10^{-9}$ & $2.79\times 10^{-9}$\\
& & &(0.50)& (13.4) & (50.4) & (78.8) & (90.5)\\
$^{216}Ra$ [UDL] &9.526 &$1.862\times 10^{-7}$ &$1.86\times 10^{-7}$ & $1.81\times 10^{-7}$ & $1.60\times 10^{-7}$ & $1.21\times 10^{-7}$ & $7.76\times 10^{-8}$\\
& & &(0.93)& (2.63) & (13.7) & (34.6) & (58.3)\\
$^{216}Ra$ [DFM] & &$6.86\times 10^{-8}$ &$6.86\times 10^{-8}$ & $6.70\times 10^{-8}$ & $6.00\times 10^{-8}$ & $4.66\times 10^{-8}$ & $3.1\times 10^{-8}$\\
& & &(0.09)& (2.40) & (12.6) & (32.1) & (54.9)\\
$^{217}Ac$ [UDL] &9.832 &$7.67\times 10^{-8}$ &$7.61\times 10^{-8}$ &$6.56\times 10^{-8}$&$3.84\times 10^{-8}$ &$1.73\times 10^{-8}$ &$7.96\times 10^{-9}$ \\
& & &(0.63)& (14.3) & (49.8) & (77.4) & (89.6)\\
$^{217}Ac$ [DFM] & &$2.98\times 10^{-8}$ &$2.96\times 10^{-8}$ & $2.59\times 10^{-8}$ & $1.59\times 10^{-8}$ & $7.75\times 10^{-9}$ & $3.89\times 10^{-9}$\\
& & &(0.56)& (13.0) & (46.5) & (74.0) & (87.1)\\
\hline
\end{tabular}}
\caption{Half-lives within the effective $Q$-value approach. The first row
displays the half-lives using the UDL at $Q_{eff} = Q + \bar{\epsilon}$, with
$\bar{\epsilon}$ given by \eqref{excitation2} and listed in Table \ref{table1};
the second row displays the half-lives obtained using the DFM and $Q_{eff}$.
Quantities in brackets show the percentage decrease in the half-life due
to temperature.}
\label{table3}
\end{table}
From Table \ref{table3} we see that an increase in temperature in general
decreases the half-lives, the decrease being at most an order of magnitude
from $T$ = 0 to 2.4 GK. Though the half-lives evaluated using the universal
decay law are not exactly the same as those of the more realistic
double-folding model, the percentage decrease in both cases is roughly the
same. The percentage decrease is calculated as
\begin{equation}
pd=\frac{t_{1/2}^{T=0}-t_{1/2}^T}{t_{1/2}^{T=0}}\times 100 .
\end{equation}
This fact allows us to use, with some reliability, the UDL
given by \eqref{universallaw} in the statistical approach
to calculate the half-lives missing from the available data on excited levels.
The temperature-dependent half-lives for several isotopes using the
statistical approach are displayed in Table \ref{table4}.
As the temperature increases, the half-lives are reduced, and the reduction
is larger than that found in the effective $Q$-value approach.
The calculations are done using the experimentally listed half-lives.
For these particular isotopes, several excited levels have been observed to
decay by alpha emission; however, the half-lives of many of these excited
states have not been measured. For the cases with no experimental
information, we use the UDL to evaluate the half-lives entering
\eqref{Iliadis}.
The two rows labelled [A] and [B] in the table display the calculations
including the half-lives of all listed levels in \eqref{Iliadis}, and of
only those levels which decay by $\alpha$ emission, respectively.
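A minimal sketch of this prescription is shown below, assuming
Eq. \eqref{Iliadis} takes the standard Boltzmann-weighted form
$\lambda(T)=\sum_i p_i(T)\,\lambda_i^{\alpha}$ with
$p_i \propto (2J_i+1)e^{-E_i/kT}$; the normalization and the treatment of
the branching fractions should be checked against the actual expression.
\begin{verbatim}
# Sketch: temperature-dependent half-life in the statistical approach,
# assuming a Boltzmann-weighted sum of partial alpha-decay constants.
import numpy as np

K_B = 8.617333e-2          # MeV/GK

def t_half(T, E, J, t12, br):
    """Weighted half-life (s) at temperature T (GK); E in MeV,
    t12 level half-lives (s), br alpha branching fractions (0..1)."""
    E, J, t12, br = map(np.asarray, (E, J, t12, br))
    g = (2 * J + 1) * np.exp(-E / (K_B * T))   # population weights
    lam = np.log(2) / t12 * br                 # partial alpha constants
    return np.log(2) * np.sum(g) / np.sum(g * lam)

# First three alpha-decaying levels of 212Po from Table 2:
E, J = [0.0, 0.727, 1.132], [0, 2, 4]
t12 = [2.9e-7, 1.42e-11, 1.12e-8]
br = [1.0, 3.3e-4, 5.0e-3]
print(t_half(1.2, E, J, t12, br))
\end{verbatim}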
Particularly interesting in this table is the case of $^{212}$Po. This nucleus
decays to the doubly magic nucleus $^{208}$Pb plus an $\alpha$, which could
possibly be the reason (as argued in earlier sections) that $^{212}$Po has
many more excited levels which decay by emitting an $\alpha$ as compared to
the other nuclei in the table.
As mentioned above, the calculation labeled [A] includes decay from
all parent states. The results are, in most cases, very similar to those
obtained when only the levels experimentally known to decay by emitting an
alpha (labeled [B]) are used. This is a good indication that a generally
simpler formulation would be appropriate and would permit an extension to
more nuclei, thus facilitating nucleosynthesis calculations.
\begin{table}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |p{2.2cm}|p{1cm}|p{1.9cm}|p{2.2cm}|p{2.2cm}|p{2.3cm}| p{2.3cm}|p{2.3cm}|}
\hline
\multicolumn{8}{|c|}{$t_{1/2}(T)$[s]} \\
\hline
Isotope&$Q$&$0$ GK&$0.8$ GK&$1.2$ GK&$1.6$ GK&$2$ GK&$2.4$ GK\\
\hline
$^{212}Po [A]$ &8.954 &$2.99\times 10^{-7}$ &$2.913\times 10^{-7}$ & $3.502\times 10^{-9}$ & $6.27\times 10^{-11}$ & $5.86\times 10^{-12}$ & $1.31\times 10^{-12}$\\
$^{212}Po [B]$ & &$2.99\times 10^{-7}$ &$2.916\times 10^{-7}$ & $3.611\times 10^{-9}$ & $6.45\times 10^{-11}$ & $6.01\times 10^{-12}$ & $1.32\times 10^{-12}$\\
\hline
$^{214}Rn [A]$ &9.208 &$2.7\times 10^{-7}$ &$2.59\times 10^{-7}$ & $1.22\times 10^{-7}$ & $3.36\times 10^{-8}$ & $7.67\times 10^{-9}$ & $1.04\times 10^{-9}$\\
$^{214}Rn [B]$ & &$2.7\times 10^{-7}$ &$2.69\times 10^{-7}$ & $2.68\times 10^{-7}$ & $2.67\times 10^{-7}$ & $2.51\times 10^{-7}$ & $2.0\times 10^{-7}$\\
\hline
$^{215}Fr [A]$ &9.540 &$8.6\times 10^{-8}$ &$8.53\times 10^{-8}$ & $7.09\times 10^{-8}$ & $3.68\times 10^{-8}$ & $1.22\times 10^{-8}$ & $2.60\times 10^{-9}$\\
$^{215}Fr [B]$ & &$8.6\times 10^{-8}$ &$8.59\times 10^{-8}$ & $8.58\times 10^{-8}$ & $8.44\times 10^{-8}$ &$8.03\times 10^{-8}$ & $7.36\times 10^{-8}$ \\
\hline
$^{216}Ra [A]$ &9.526 &$1.82\times 10^{-7}$ &$1.79\times 10^{-7}$ & $1.29\times 10^{-7}$ & $4.39\times 10^{-8}$ & $9.88\times 10^{-9}$ & $1.4\times 10^{-9}$\\
$^{216}Ra [B]$ & &$1.82\times 10^{-7}$ &$1.81\times 10^{-7}$ & $1.69\times 10^{-7}$ & $7.68\times 10^{-8}$ & $1.99\times 10^{-8}$ & $6.52\times 10^{-9}$\\
\hline
$^{217}Ac [A]$ &9.832 &$6.9\times 10^{-8}$ &$6.85\times 10^{-8}$ & $5.87\times 10^{-8}$ & $3.19\times 10^{-8}$ & $1.20\times 10^{-8}$ & $4.41\times 10^{-9}$\\
$^{217}Ac [B]$ & &$6.9\times 10^{-8}$ &$6.89\times 10^{-8}$ & $6.88\times 10^{-8}$ & $6.87\times 10^{-8}$ & $6.86\times 10^{-8}$ & $6.82\times 10^{-8}$\\
\hline
\end{tabular}}
\caption{Alpha decay half-lives at different temperatures evaluated using
\eqref{Iliadis} within the statistical approach. The half-lives in the
rows marked [A] are evaluated using \eqref{Iliadis}, including the
entire set of tabulated levels. Rows marked [B] take into account only those
levels which are experimentally known to decay by the emission of an $\alpha$.}\label{table4}
\end{table}
A short note on the comparison of the two approaches, namely the effective
$Q$-value approach and the statistical approach, is in order here. With the
aim of providing temperature-dependent half-lives for nucleosynthesis
applications, we began by formulating an approach in which an effective
$Q$-value enters the UDL. This has the advantage of avoiding both the
calculation of half-lives for the many individual levels populated at very
high temperatures and the use of a level-density model, thus making the
proposed effective $Q$-value model feasible in a network calculation.
However, this advantage comes at the expense of losing information about the
variation of the half-lives of the different levels, as well as about the
population probabilities of the excited states. Performing an average over
the excitation energies and calculating for just one effective $Q$-value is
equivalent to considering decay from a single excited level at an effective
excitation energy. A more realistic description is provided by the
statistical approach, which would be model independent as long as the
half-lives of the excited levels and the branching ratios for alpha decay
are known. Our introduction of the UDL for the unknown half-lives gives a
pathway to extend the calculations to a larger number of alpha emitters. In
this work, we use the known UDL from the literature and evaluate the
half-life of an unknown level with energy $E_i$ by replacing the $Q$-value
in the UDL by $Q + E_i$. Formulating a UDL for excited levels using all
available data on the alpha decay of excited parent and daughter nuclei is,
however, a task which we plan for the future.
Finally, we mention in passing that there also exists the possibility that
the system transits from an excited state to a lower state and then decays
by alpha emission. However, given the exponential nature of the population
probability, namely $p_i \propto \exp{(- E_i/kT)}$ (with $E_i$ the energy of
the excited state), at a given temperature the likelihood of a higher level
being populated and then decaying from a lower level to which it transits is
surely smaller than that of the lower level itself being populated and
decaying by emitting an alpha. Furthermore, in the cases where experimental
information exists, the branching fraction BR$_{ij}$ accounts for this effect.
Alpha decay plays a role, in competition with beta decay and fission,
in powering and shaping the light curves of kilonovae
\cite{Barnes:2013wka,Barnes:2016umi,Barnes:2020nfi}. It is customary in
nucleosynthesis calculations to consider alpha emission only from the
ground state, i.e., at zero temperature. In view of the reduction of the
$\alpha$-decay half-lives in hot environments found above, it seems
appropriate to replace these inputs by temperature-dependent ones. Such a
detailed calculation, though necessary, is beyond the scope of the present
work.
\section{Summary}
\label{conclusions}
In this work we have explored the role of temperature in alpha emission
from nuclear excited states. We used a statistical approach and proposed a
model that can potentially be extended for several nuclei.
We particularly focused on nuclei that can be produced in the $r$-process,
motivated by the impact that alpha decay has on the heating of light curves
of kilonovae. Thermally enhanced alpha decay rates were calculated for nuclei
with the neutron number $N$ = 128 decaying to a daughter nucleus at the shell
closure with $N$ = 126. The latter choice was made due to the occurrence of
more excited levels decaying by $\alpha$ emission as compared to other nuclei.
The calculation performed within the statistical approach is in principle
model independent. It requires as experimental input the energies,
spins and half-lives (as well as the branching fractions for $\alpha$
decay) of the excited levels. However, the experimental data are sometimes
incomplete (e.g., even if it is known that an excited level decays via
$\alpha$ emission, its half-life may not be known).
In such cases we supply the missing information by calculating the
half-life using a universal decay law (UDL).
The latter introduces some uncertainty in the results; however, we do not
expect the results to change drastically, since the temperature dependence
of the half-lives using the UDL is in good agreement with the predictions
of the more elaborate double-folding model for tunneling decay.
We found that temperatures of the order of GK can decrease the half-lives of
the nuclides studied here by a factor of ten or more. In particular,
for the case of $^{212}$Po, and depending on the model,
the change can be of several orders of magnitude.
\section{Acknowledgments}
O.L.C. thanks Jose Trujillo and Fernando Montes for interesting discussions and acknowledges the support of the Natural Science and Engineering Research Council of Canada (NSERC).
\section{Introduction}
Self-gravitating scalar field configurations
have been very useful in many aspects of gravitational theory.
Their roles in describing matter models
(e.g.~\cite{MTW,scheelteuk,kaup,ruffini,seidelsuen}),
as governing mechanisms in inflationary scenarios
(e.g.~\cite{liddle,Lidsey:1995np}),
and as probes of strong-curvature regions
(e.g.~\cite{choptuik,Gundlach:1997wm}) have made them an
ideal tool on a number of fronts. In this work we examine exploiting
scalar fields to mimic some salient properties of accreting black hole
systems. To this end, it is desirable to explore a configuration
where the scalar field simulates an accretion disk
surrounding a black hole. For this purpose, one should be able to confine
the scalar field within some compact region surrounding the black
hole.
Since massless scalar fields radiate away to infinity, the model sought after
should include a mechanism that prevents this from happening, at least
to some non-trivial extent. (The existence of bound states for particular
cases in spherical symmetry is studied in \cite{bound,bound2}.)
One way to confine the scalar field would be to employ
a potential well, which would introduce some sort of barrier and thus allow
for confinement. The use of carefully chosen potentials is common practice
with scalar fields, and these are usually functions of the field itself.
Examples of this kind of potential are the quadratic
($V(\phi)\propto\phi^2$), which introduces a mass term, and the quartic
($V(\phi)\propto\phi^4$). However, such potentials do not allow one to
confine the scalar field within a specific region of space that one can
specify {\em a priori.}
What we are looking for is a potential that somehow
depends on the coordinates and, in particular, can be chosen to describe
a potential well within a region. However, this proposition seems a priori
at odds with maintaining covariance.
The difficulty one encounters with a coordinate-dependent
potential is that the corresponding stress-energy tensor is
in general inconsistent, in the sense that its divergence will not vanish
for a non-trivial scalar field. This fact, together with the Einstein
equations, would imply that the Bianchi identities are not satisfied.
Faced with this situation, a possible way of confining the scalar field
would be to introduce a background with respect to which coordinates could
be defined. This approach would be in line with bi-metric
theories of gravity (e.g.~\cite{rosen}). Another approach would be to fix
suitable coordinates already at the level of the action, as is done in
\cite{kuchar}, through some suitably introduced Lagrange multipliers.
This procedure then provides a way to covariantly adopt coordinates which
could, in principle, be used in the potential.
However, this method would strongly link the adopted gauge to the type of
potential introduced, and it is not yet clear whether it can be made of
practical use.
An alternative way, which is the one we pursue here,
is to exploit symmetry considerations without introducing any
other feature into the problem. The existence of a symmetry provides a
simple way to consistently introduce a coordinate-dependent potential.
Certainly, while more restricted than other viable options, this approach
is the most direct one.
A particular case of a coordinate-dependent potential has already been
implemented in~\cite{frans,unpub} to effectively simulate angular
momentum in spherical or axial symmetry.
In this work we concentrate mainly on the case of spherical symmetry, but
give prescriptions for the implementation of potentials in both
spherical and axial symmetry. We will see that, if the space is
spherically symmetric, we can implement a potential that depends on the
areal radius. In the same way, for an axially symmetric spacetime, the
potential can depend on the length of the closed integral curves defined
by the associated Killing vector.
Even though one will not be able to specify the potential as
an arbitrary function of any coordinate, one may still be
able to confine a scalar field to some region, as is shown in this
work for the case of spherical symmetry. This fact will become
apparent in section \ref{sphericalcase} and in its applications in the
rest of this work.
This paper is organized as follows. In section~\ref{derivations} we
study the specification of a stress-energy tensor for a scalar field with
a coordinate-dependent potential, showing that such an implementation is
possible when the space-time possesses a symmetry. In particular, the case
of spherical symmetry is studied in depth (we also consider an
axially symmetric case in an appendix). In section~\ref{TheEquations} we
describe the formulation used and the resulting equations. In
section~\ref{numerics} we discuss how the equations are solved
numerically, after obtaining initial data by two different methods.
In section~\ref{results} we show and analyze the
numerical solutions obtained, finding that, after some transient behavior,
the scalar field reaches a state described by a simple harmonic time
dependence and remains confined to a region surrounding the black hole.
We have confirmed this for initial masses of the scalar field up to $50\%$
of that of the black hole. Finally, we make some closing remarks in
section~\ref{conclusions}.
Throughout this work we use Einstein's index notation and geometrized units.
\section{Scalar Field on a Coordinate-Dependent Potential} \label{derivations}
In this section we study the specification of a stress-energy tensor for a
scalar field with a coordinate-dependent potential. Our motivation is
to somehow confine a scalar field within a region around a
black hole. The resulting system would share features of a
black hole interacting with an accretion disk. We will see
that this can be done when the space-time possesses a symmetry. However,
the specification of such a potential is not completely arbitrary, since
it must depend on the coordinates only through
some particular function.
Knowing the approximate dependence of that function on the
coordinates, one can then construct a potential that confines the
scalar field.
Before presenting our approach, we include an overview of how
the equations of motion are obtained
from a stress-energy tensor in the case of a coordinate-independent
potential. Then, based on that procedure, we will study the generalization
to the case of a coordinate-dependent potential.
The equations of motion for a real scalar field $\phi$ on a
coordinate-independent potential can be derived from the stress-energy
tensor
\begin{equation}
T_{ab} = {T^{(k)}}_{ab} + {T^{(p)}}_{ab},
\end{equation}
where, for later convenience, we have split this tensor into what we
call the ``kinetic'' and ``potential'' terms:
\begin{equation} \label{T0}
{T^{(k)}}_{ab} \equiv (\nabla_a\phi) (\nabla_b\phi) - \frac{1}{2} g_{ab}(\nabla_c\phi)
(\nabla^c\phi),
\end{equation}
\begin{equation}
{T^{(p)}}_{ab} \equiv - \frac{1}{2} g_{ab} V(\phi). \label{T1_old}
\end{equation}
The kinetic part, ${T^{(k)}}_{ab}$, corresponds to a massless scalar field
without a potential.
The equations of motion can be obtained \cite{wald,MTW} through the condition
\begin{equation}
\nabla_a {T^a}_b = 0 \;,
\label{divergence}
\end{equation}
which must be satisfied to be consistent with a
covariant theory. Equation \eq{divergence}
can be re-expressed with $\nabla_b \phi$ as a
common factor,
\begin{equation}
0 = \nabla_a {T^a}_b = (\nabla_b \phi) \; \mathcal{L}(\phi),
\label{common_factor}
\end{equation}
where $\mathcal{L}(\phi)$ contains second order derivatives of $\phi$.
The equation of motion for a non-trivial scalar field is then
\begin{equation} \mathcal{L}(\phi) = 0 \label{Lis0} \,. \end{equation}
For example, for $V(\phi)=m^2\phi^2$ we obtain the Klein-Gordon equation,
\begin{equation}
\mathcal{L}(\phi) \equiv \left( \nabla_a \nabla^a - m^2 \right) \phi=0\;.
\end{equation}
This is analogous to the Lagrangian approach, where the variation of the
action is set to zero, and, after integrating by parts, the integrand becomes
$\delta\phi\mathcal{L}(\phi)$. \\
After this detour, we now turn our attention back to the case of interest,
the implementation of a coordinate-dependent potential. Our discussion is
based on the preceding one, though now generalizing it to the case of
a coordinate-dependent potential $V(x^c,\phi)$.
A naive first approach would be to replace occurrences of $V(\phi)$
in \eq{T1_old} by $V(x^c,\phi)$. However, this has an unfortunate
consequence, namely that one can no longer express the divergence
of ${T^a}_b$ in the form given by eqn \eq{common_factor},
where $\nabla_b \phi$ appears as a common factor. Instead one has
\begin{eqnarray}
0= \nabla_a {T^a}_b &=& (\nabla_b \phi) \left(\nabla_a \nabla^a \phi -\frac{1}{2} \frac{\d
}{\d \phi}V(x^c,\phi)\right) \nonumber \\
&& - \frac{1}{2} \frac{\d }{\d x^b}V(x^c,\phi). \label{not_common_factor}
\end{eqnarray}
The crucial difference with eqn. \eq{common_factor} is that several
(independent) equations must now be satisfied by the real scalar field
$\phi$. As a result, the system of equations will generically be inconsistent.
To resolve this problem we start by: (i) adopting a different
ansatz for ${T^{(p)}}_{ab}$ (equation \eq{T1} below), and (ii)
imposing symmetry conditions on the scalar field.
First, consider setting ${T^{(p)}}_{ab}$, instead of being given by equation \eq{T1_old},
to be the product of a function of $\phi$ and a coordinate
dependent tensor,
\begin{equation}
\T{p}_{ab} \equiv H_{ab}(x^c)\;f(\phi) \label{T1},
\end{equation}
where the function $f$ is independent of $x^c$ and the tensor $H_{ab}$ is
independent of $\phi$.
Next, we seek a suitable $H_{ab}$ such that $\nabla_a {T^a}_b$ takes the
form of equation \eq{common_factor}; this will induce conditions on
$H_{ab}$. Under this choice the divergence of the
stress-energy tensor becomes
\begin{equation}
\nabla_a {T^a}_b = (\nabla_b \phi) \nabla_a \nabla^a \phi +
\frac{\d f}{\d \phi}(\nabla_a \phi)
\;{H^a}_b + f(\phi) \;\nabla_a {H^a}_b \;. \label{div_T}
\end{equation}
Now, we look for conditions that would allow us to
express the \emph{r.h.s.} of equation \eq{div_T} in such
a way that $\nabla_b \phi$ appears as a common factor.
Since ${H^a}_b$ is independent of $\phi$, $\nabla_b\phi$ cannot appear
in the last term of \eq{div_T}; that term must therefore vanish, resulting in
the first condition on ${H^a}_b$,
\begin{equation}
\nabla_a {H^a}_b = 0 \label{div_H}\;.
\end{equation}
We now consider the second term in the \emph{r.h.s.}; the condition
\begin{equation}
(\nabla_a\phi)\; {H^a}_b = (\nabla_b\phi)\;h(x^c)\;, \label{ab}
\end{equation}
for some scalar $h(x^c)$, ensures that that term has $\nabla_b\phi$ as a common factor.
Equation~\eq{ab} is satisfied for any scalar field $\phi$ if
\begin{equation}
{H^a}_b = h(x^c){\delta^a}_b\;. \label{scalar}
\end{equation}
However, this condition, together with equation \eq{div_H},
implies that $h(x^c)$ is a constant. This means that ${T^{(p)}}_{ab}$ is
of the form~\eq{T1_old} (with $V$ independent of $x^c$). Thus, for an
arbitrary scalar field, and without any further structure in the spacetime,
space-dependent potentials cannot be considered. However, by imposing
further conditions on the scalar field $\phi$, ${H^a}_b$ can indeed be
chosen with more structure than that of equation~\eq{scalar} while still
satisfying equation \eq{ab}. To this end, we
consider\footnote{This equation can be thought of simply as the
definition of the tensor ${A^a}_b$.} the tensor ${H^a}_b$ of the form
\begin{equation}
{H^a}_b = h(x){\delta^a}_b + {A^a}_b \label{Adef}\;.
\end{equation}
Replacing \eq{Adef} into \eq{ab} we find
\begin{equation}
(\nabla_a \phi) {A^a}_b = 0\;. \label{fA}
\end{equation}
The simplest case is the one with ${A^a}_b=0$, for which ${H^a}_b$ is given by
\eq{scalar}. More general cases arise when
$\phi$ is independent of one of the coordinates, say $\d_{x^3}
\phi\equiv\nabla_3\phi=0$. Here one can adopt ${A^3}_3$
arbitrarily and set all other components to zero, thus satisfying
equation \eq{fA}.
In this particular case, ${H^a}_b$ takes the form
\begin{equation}
{H^a}_b = \left[\begin{array}{cccc}
h & 0 & 0 & 0 \\
0 & h & 0 & 0 \\
0 & 0 & h & 0 \\
0 & 0 & 0 & b
\end{array}\right] \label{one}
\end{equation}
for some functions $h(x^c)$ and $b(x^c)$.
Similarly, when $\phi$ does not depend on two of
the coordinates, say $\d_{x^2}\phi=0$ and $\d_{x^3}\phi=0$, one can choose
\begin{equation}
{H^a}_b = \left[\begin{array}{cccc}
h & 0 & 0 & 0 \\
0 & h & 0 & 0 \\
0 & 0 & b & 0 \\
0 & 0 & 0 & c
\end{array}\right] . \label{two}
\end{equation}
Analogous results are obtained when some of the derivatives are linearly
related. For example, if $\d_{x^3}\phi=c\,\d_{x^2}\phi$, one can
adopt ${A^3}_3$ arbitrarily and set ${A^2}_3=-c{A^3}_3$, keeping all other
components zero. With this choice, equation \eq{fA} is satisfied, and
${H^a}_b$ is then given in terms of two functions $h(x^c)$ and $b(x^c)$
in a slightly different way than in \eq{one}. \\
Summarizing, we have seen that a coordinate-dependent potential can be
implemented if the following conditions are satisfied: (i) the derivatives
of $\phi$ are linearly dependent (this includes the possibility of one or
more of them vanishing); (ii) the ``potential'' part of the stress-energy
tensor is given by \eq{T1}, with ${H^a}_b$ satisfying $\nabla_a{H^a}_b=0$ and
being expressible in the form \eq{one}, \eq{two}, or similar expressions
depending on how condition (i) is fulfilled.
In the next section we will consider in detail the case of spherical
symmetry.
\subsection{Spherical Symmetry} \label{sphericalcase}
We will now concentrate on the case of spherical symmetry. The line element
can be written in the form
\begin{equation}
ds^2 = -N^2 dt^2 + g_{rr}(dr+\beta dt)^2 + g_{\Omega}
d\Omega^2,
\end{equation}
where $N$, $g_{rr}$, $\beta$, and $g_{\Omega}$ are functions of $t$ and $r$.
We assume that we can adopt coordinates so that $\d_\theta\phi=\d_\varphi\phi=0$.
Then, ${H^a}_b$ is given by \eq{two},
with the additional condition that $b=c$ due to the
spherical symmetry. ${H^a}_b$ is then
\begin{equation} \label{sH}
{H^a}_b = \left[\begin{array}{cccc}
h & 0 & 0 & 0 \\
0 & h & 0 & 0 \\
0 & 0 & b & 0 \\
0 & 0 & 0 & b
\end{array}\right] ,
\end{equation}
with $h$ and $b$ functions of $t$ and $r$.
The evaluation of $\nabla_a {H^a}_b$ gives rise to non-trivial equations
only for the $t$ and $r$ components,
\begin{eqnarray}
\frac{dg_{\Omega} }{ dt\;\;}(h-b) \label{hb_t}
+ g_{\Omega} \frac{dh}{dt} &=& 0, \\
\frac{dg_{\Omega} }{ dr\;\;}(h-b) \label{hb_r}
+ g_{\Omega} \frac{dh}{dr} &=& 0.
\end{eqnarray}
In order to obtain a family of solutions to these equations we will
demand that $h$ depends on the coordinates only
through $g_{\Omega}$: $h(t,r)=h(g_{\Omega}(t,r))$.
With this condition, we have that
\begin{equation}
\frac{dh}{dx^i} = \frac{\d h}{\d g_{\Omega}}
\; \frac{dg_{\Omega}}{dx^i}
\end{equation}
for $x^i=(t,r)$. Substituting this into either equation \eq{hb_t} or \eq{hb_r}, we
obtain an expression for $b$ in terms of $h$,
\begin{equation} \label{bh}
b = h + g_{\Omega} \frac{\d h}{\d g_{\Omega}} .
\end{equation}
We have just seen that, if $h$ depends on the coordinates only
through $g_{\Omega}$,
and $b$ is given in terms of $h$ by \eq{bh}, the prescription \eq{sH}
for the tensor ${H^a}_b$ allows us to express $\nabla_a {T^a}_b$
with $\nabla_b \phi$ as a common factor. More explicitly:
\begin{equation} \label{shperical_cf}
\nabla_a {T^a}_b = (\nabla_b\phi)\left(\nabla_a\nabla^a\phi +
\frac{\d f}{\d \phi} h(g_{\Omega}) \right).
\end{equation}
Notice that, if one wanted to calculate $\nabla_a {T^a}_b$ without
setting $\d_\theta \phi = \d_\varphi \phi = 0$ at the onset, one
would obtain \eq{shperical_cf}, but with
$h(g_{\Omega})$ replaced by $b(g_{\Omega})$ for the
angular components $\nabla_a {T^a}_\theta$ and $\nabla_a
{T^a}_\varphi$. However, because those terms are actually multiplied by zero,
equation~\eq{shperical_cf} is true for all four components.
Setting the {\em r.h.s} of \eq{shperical_cf} to zero we obtain the
equation of motion for $\phi$,
\begin{equation}
\nabla_a\nabla^a\phi +
\frac{\d f}{\d \phi} h(g_{\Omega}) = 0,
\end{equation}
where we remind the reader that $f$ is an arbitrary function of $\phi$, and
$h$ is an arbitrary function of $g_{\Omega}$.
Throughout the rest of this work we will choose these functions as
\begin{eqnarray}
f(\phi) &=& - \frac{1}{2} \phi^2 ,\\
h(g_{\Omega}) &=& m^2 + V({g_{\Omega}}) .
\end{eqnarray}
We make this choice so that the equation of motion for the scalar field
becomes
\begin{equation} \label{eq_phi_sph}
\left( \nabla_a \nabla^a - m^2 - V({g_{\Omega}}) \right) \phi = 0,
\end{equation}
where we interpret the function $V(g_{\Omega}(t,r))$ as a
(coordinate-dependent) potential. The parameter $m$ is set to zero in our
simulations. The function $g_{\Omega}(t,r)$ is
just the square of the areal radius, $R(t,r)$. We can then
write \eq{eq_phi_sph} in the form
\begin{equation} \label{eq_phi_sph_2}
\left( \nabla_a \nabla^a - m^2 - \tilde{V}(R) \right) \phi = 0,
\end{equation}
where $\tilde V$ is an arbitrary function of the areal radius.
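For concreteness, the remaining (angular) component of ${H^a}_b$ implied by
equation \eq{bh} for this choice of $h$ is
\begin{equation}
b = m^2 + V(g_{\Omega}) + g_{\Omega}\frac{\d V}{\d g_{\Omega}}
  = m^2 + \tilde{V}(R) + \frac{R}{2}\frac{\d \tilde{V}}{\d R} ,
\end{equation}
which enters only the angular components of ${T^{(p)}}_{ab}$ and therefore
does not modify the equation of motion \eq{eq_phi_sph_2}.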
In appendix \ref{axial} we summarize the results obtained in the case
of axial symmetry.
\section{The Equations} \label{TheEquations}
In this work we solve the non-vacuum Einstein equations for a dynamic
spherically symmetric space time, coupled to a real scalar field. The
scalar field satisfies a Klein-Gordon-like equation with the addition of
a potential, as explained in section~\ref{sphericalcase}.
The equations are decomposed using a Cauchy formulation, in which the
space-time is foliated by space-like surfaces. The particular
formulation used is the Einstein-Christoffel hyperbolic formulation
\cite{york_fixing}, where the equations are decomposed into a system
of first order hyperbolic ``evolution equations,'' plus a system
of (first order) ``constraint equations.'' These equations can be
solved by giving initial data that satisfy the constraint equations on
a given surface of the foliation, and then integrating the evolution
equations in time. The constraint equations at later times are then
automatically satisfied \cite{wald} in the domain of dependence of
that surface.
The equations solved are the Einstein-Klein-Gordon equations,
with the addition of a potential,
\begin{eqnarray}
&G_{ab} = 8\pi T_{ab}&, \\
&\left( \nabla_a \nabla^a -V \right) \phi = 0&, \label{eq_phi_sph_3}
\end{eqnarray}
where the stress-energy tensor, $T_{ab}$, and the potential, $V$, are
given according to section \ref{sphericalcase}, as well as the
condition that $\phi$ is independent of $(\theta,\phi)$. In
equation~\eq{eq_phi_sph_3} we have set $m=0$, but this parameter can
be incorporated in the definition of $V$.
We consider the line element and extrinsic curvature of a space time in
spherical symmetry in the form
\begin{eqnarray}
ds^2 &=& -N^2 dt^2 + g_{rr}(dr+\beta dt)^2 + r^2 g_T d\Omega^2, \\
K_{ij}dx^idx^j &=& K_{rr} dr^2 + r^2 K_T d\Omega^2, \label{Kij}
\end{eqnarray}
where $\beta$ is the ($r$ component of the) shift vector, and $N$ is the lapse
function.
In the Einstein-Christoffel formulation, the shift and the ``densitized
lapse'' function, $\alpha\equiv N/\sqrt{g}$, are arbitrarily specified
and kept fixed during the evolution. We denote by $g$ the determinant of
the three-metric.
In spherical symmetry, this system reduces to nine first order
evolution equations and four first order constraint equations,
the latter containing only spatial derivatives.
The variables evolved are: the metric components, $g_{rr}$ and
$g_T$; the scalar field, $\phi$; and other variables used to
convert the equations from second to first order. They are: the
extrinsic curvature components, $K_{rr}$ and $K_T$ (defined
in eqn.\eq{Kij}); variables $\{\Psi, \Pi \}$ constructed with first-derivatives of $\phi$,
\begin{eqnarray}
\Psi &=& \d_r\phi, \\
\Pi &=& \frac{1}{N}\left(\beta \;\d_r\phi-\d_t\phi\right);
\end{eqnarray}
and the variables $\{f_{rrr},f_{rT}\}$ containing first spatial derivatives
of the metric,
\begin{eqnarray}
f_{rrr} &=& \frac{\d_r g_{rr}}{2} + \frac{4 g_{rr} f_{rT}}{g_T}, \\
f_{rT} &=& \frac{\d_r g_{T}}{2} + \frac{g_T}{r}.
\end{eqnarray}
The complete expressions for these equations are shown in detail
in appendix~\ref{app_eq}. Their derivation, and the notation used, are
based on~\cite{Kidder} and~\cite{cpbc}, with the
addition of terms containing the potential.
\section{Numerical Implementation} \label{numerics}
\subsection{Initial Data} \label{InitialData}
Consistent initial data must satisfy equations \eq{C}-\eq{Cm}.
These equations determine some variables in terms of others
judiciously chosen. In this work, we exploit this freedom
to describe a black hole centered at $r=0$
by specifying $\{V,\phi,g_{rr},K_{rr}\}$ from the known Schwarzschild solution
and solving for $g_T$ and $K_T$.
Before describing the details of our implementation, we discuss
how the potential and scalar field are chosen.
We adopt a potential $V$ with two free parameters $\{A,r_0\}$ to
regulate the depth and location of the ``well'' where the
scalar field is to be confined (see figure~\ref{potentials}). A simple
expression for $V$ suffices for this task, and we adopt
\begin{equation} \label{V}
V(R) = A\left(1-e^{-\left(R-r_0\right)^2}\right),
\end{equation}
with the areal radius $R$ given by $R=r\sqrt{g_T}$.
The parameters in
this expression were set to $A=30/M^2$ and $r_0=6M$, where $M$ is the
initial mass of the black hole.
Notice that
during the evolution $R=R(t,r)$; thus, in these coordinates,
the shape (and position) of the potential well can change in time.
We will return to this point later.
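For illustration, a direct transcription of this potential (a sketch, with
the parameter values quoted above and $M=1$) reads:
\begin{verbatim}
# Sketch: the confining well of Eq. (V), with A = 30/M^2, r0 = 6M, M = 1.
import numpy as np

A, R0 = 30.0, 6.0

def V(R):
    """Potential well: V(R0) = 0 and V -> A away from R0."""
    return A * (1.0 - np.exp(-(R - R0) ** 2))

# During the evolution the potential is evaluated at the areal radius
# R(t, r) = r * sqrt(g_T), so the well can move in coordinate space.
def V_on_grid(r, gT):
    return V(r * np.sqrt(gT))
\end{verbatim}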
The scalar field $\phi$ is defined following one of two different
strategies: one designed to conform to time-harmonic situations
in weakly gravitating cases, and the other simply prescribing a
sufficiently smooth profile. The latter choice allows us to investigate
the spacetime's response to fields not designed to conform to a
time-harmonic dependence.
\subsubsection{Time-harmonic scalar field} \label{statid}
To prescribe a scalar field which will give rise to a spacetime with
harmonic time dependence, we begin by considering the limiting case in
which the scalar field's amplitude is negligible; there the metric should
be described by the Schwarzschild solution.
Then, treating the scalar field as propagating on this fixed background
spacetime, a Schr\"odinger-like eigenvalue equation can be obtained to
determine the time-harmonic states, as discussed below.
The Schwarzschild metric in Eddington-Finkelstein coordinates is:
\begin{eqnarray}
ds^2 &=&- \left(1-\frac{2M}{r}\right) dt^2
+ \left(1+\frac{2M}{r}\right) dr^2 +\nonumber \\
&&+ \frac{4M}{r} dtdr + r^2 d\Omega^2 . \label{Sch}
\end{eqnarray}
We use this metric to evaluate the equation of motion for $\phi$,
equation~\eq{eq_phi_sph_3}. To solve this PDE we
use the following ansatz that yields separation of
variables\footnote{Suggested by the fact that in Schwarzschild coordinates,
($\tilde{t}$, $\tilde{r}$), the ansatz $\phi =
u(\tilde{r})\cos(\omega \tilde{t})$ yields separation of
variables. The coordinate transformation is
$\tilde{t}=t-2M\ln\left(\frac{r-2M}{M}\right)$, $\tilde{r}=r$.},
\begin{equation} \label{separation}
\phi(t,r) = u(r) \cos\left( \omega\left[t-2M\ln\left(\frac{r-2M}{M}\right)\right] \right).
\end{equation}
The resulting equation for $u(r)$ is
\begin{equation} \label{Lu}
\mathcal{L}\; u(r) = \left[\omega^2 -
\left(1-\frac{2M}{r}\right)V(r) \right]u(r),
\end{equation}
where the second order operator $\mathcal{L}$ is given by
\begin{eqnarray}
\mathcal{L}&=&-\left(1-\frac{2M}{r}\right)^2
\frac{\d^2}{\d r^2} \nonumber \\
&& -\frac{2}{r} \left(1-\frac{M}{r}\right) \left(1-\frac{2M}{r}\right) \frac{\d}{\d r}
\end{eqnarray}
Equation \eq{Lu} is integrated to obtain $u(r)$. Then, from its
definition, equation \eq{separation},
$\phi(t,r)$ is calculated. Finally, from $\phi(t,r)$ we obtain $\Pi(t,r)$ and
$\Psi(t,r)$, evaluating these functions at $t=0$ and adopting them as
initial data.
Equation \eq{Lu} can be straightforwardly integrated to obtain both the
eigenvalue and the eigenfunction through a standard shooting algorithm.
To this end, we transform the second-order equation into a system of two
first-order equations for $u(r)$ and $u'(r) \equiv d u /dr$, augmented
with a third equation ($(\omega^2)'=0$)
to simplify the implementation (see \cite{nrf} for the details).
The system of equations is then
integrated outwards from $r_{L}\equiv 4M$ on one hand, and inwards
from $r_{R}\equiv 8M$ on the other.
The solutions obtained are matched at an intermediate point, in our case at
$r_0$ (the center of the potential well), with the conditions that both the
solutions and their derivatives be continuous.
The initial guesses for the boundary conditions are then
varied until a satisfactory match is obtained. The code used to
implement the shooting algorithm is
the one described in \cite{nrf}, except that the ODE integrator is
replaced by LSODE (Livermore Solver for ODEs) \cite{lsode}.
The boundary conditions consistent with the physical scenario in mind are
determined as follows.
We have a system of three first-order ODEs, so
three boundary conditions need to be specified.
Natural conditions for our purposes result from
requiring that the fields fall off sufficiently rapidly
at the boundaries. We thus impose a relationship
between $u$ and its derivative at each boundary,
of the form $u'=k u$. The coefficient $k$ at each boundary
can be found through a WKB-type approach.
To do so, we first consider the variable change $u(r) \equiv F(r) \tilde u(r)$ and
fix $F(r)=[r(r-2M)]^{-\frac{1}{2}}$ so as to remove the first-order derivative in equation~\eq{Lu}.
The resulting equation is
\begin{equation}
-f(r)\tilde{u}''(r)+ V_{\rm eff}(r) \tilde{u}(r)= \omega^2\tilde{u}(r)
\end{equation}
with $f(r)$ and $V_{\rm eff}(r)$,
\begin{eqnarray}
f(r) &=&\left(1-\frac{2M}{r}\right)^2 ,\\
V_{\rm eff}(r) &=& \left(1-\frac{2M}{r} \right) V(r) -
\frac{M^2}{r^4}, \label{Veff}
\end{eqnarray}
and we interpret $V_{\rm eff}$ as an effective potential
(which is shown in figure~\ref{potentials}).
\begin{figure}[!tbh]
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{potentials.eps}
\caption{Potential and effective potential, as defined in~\eq{V}
and~\eq{Veff}, respectively. As mentioned in the text, the
potentials are, in general, functions of $R\equiv r\sqrt{g_T}$. The
potentials shown in this figure are those used to find the
time harmonic states $u(r)$, where the Schwarzschild metric is used,
hence $R=r$.
\label{potentials} }
\end{figure}
Next, we freeze the coefficients $f(r)$ and $V_{\rm eff}$ in a small
neighborhood of each boundary point and consider solutions of the form
$\exp(\pm k r)$, with $k^2=(V_{\rm eff}-\omega^2)/f$.
The conditions at the boundaries are then
\begin{eqnarray}
&&\tilde{u}(r) \propto e^{+k_1 r}
\quad \mathrm{at} \quad r=r_{L} ,\\
&&\tilde{u}(r) \propto e^{-k_2 r}
\quad \mathrm{at} \quad r=r_{R},
\end{eqnarray}
where
\begin{eqnarray}
&&k_1=\sqrt{\frac{V_{\rm eff}(r_{L})-\omega^2}{f(r_{L})}} ,\\
&&k_2=\sqrt{\frac{V_{\rm eff}(r_{R})-\omega^2}{f(r_{R})}} .
\end{eqnarray}
As illustrated later, these conditions indeed ensure that the solutions
decay rapidly outside the potential well (for a bounded range of values of
$\omega^2$).
Notice that, since the equations are homogeneous, there remains a freedom
in the amplitude of the fields at the boundaries. We fix this freedom by
setting $u=1$ at $r_{L}$ and adopting the value of $u$ at $r_{R}$ as the
varying parameter for the shooting method.
Once $\phi(r)$ is obtained in $[r_{L}, r_{R}]$ using equation
\eq{separation}, we set $\phi(r)=0$ outside this region. For
the amplitude of $\phi$ used in this work in the case of time-harmonic
initial data, the values of $\phi$ and its derivative at $r_{L}$ and
$r_{R}$ are small enough to ensure that this matching is sufficiently
smooth, as is corroborated when evolving these initial data.
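The following Python sketch summarizes the procedure. It is an illustration
only: the production code uses the routine of \cite{nrf} with LSODE; here
scipy's integrator stands in, the Jacobian introduced by the variable change
$u = F\tilde u$ is ignored when imposing $u'=ku$, and matching logarithmic
derivatives replaces the two-parameter shooting described above.
\begin{verbatim}
# Sketch of the shooting procedure for the time-harmonic states.
import numpy as np
from scipy.integrate import solve_ivp

M, rL, rR, r0 = 1.0, 4.0, 8.0, 6.0

def V(r):                                  # Eq. (V) with A = 30, r0 = 6
    return 30.0 * (1.0 - np.exp(-(r - r0) ** 2))

def V_eff(r):                              # Eq. (Veff)
    return (1.0 - 2.0 * M / r) * V(r) - M ** 2 / r ** 4

def rhs(r, y, w2):
    """First-order form of Eq. (Lu); y = (u, u')."""
    u, du = y
    f = (1.0 - 2.0 * M / r) ** 2
    g = (2.0 / r) * (1.0 - M / r) * (1.0 - 2.0 * M / r)
    return [du, -(g * du + (w2 - (1.0 - 2.0 * M / r) * V(r)) * u) / f]

def k(r, w2):
    """WKB decay rate for the u' = k u boundary conditions
    (real for bound-state candidates, w2 < V_eff at the boundaries)."""
    return np.sqrt((V_eff(r) - w2) / (1.0 - 2.0 * M / r) ** 2)

def mismatch(w2):
    """Jump of u'/u at the matching point r0; zero at an eigenvalue."""
    left = solve_ivp(rhs, (rL, r0), [1.0, k(rL, w2)],
                     args=(w2,), rtol=1e-10)
    right = solve_ivp(rhs, (rR, r0), [1.0, -k(rR, w2)],
                      args=(w2,), rtol=1e-10)
    return (left.y[1, -1] / left.y[0, -1]
            - right.y[1, -1] / right.y[0, -1])

# A root search on `mismatch` over w2 (e.g. scipy.optimize.brentq
# between sign changes) yields the frequencies f_n = omega_n / (2 pi).
\end{verbatim}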
\subsubsection{Smooth profile} \label{pulse}
The other approach employed in this work is to adopt a simple
expression for the scalar field. In particular we adopt a
``pulse'' of compact support of the form
\begin{equation} \label{pulsedef}
\phi(r) = \left\{
\begin{array}{cc}
c (r-r_1)^4 (r-r_2)^4 & r_1\le r \le r_2 \\
0 & \mathrm{elsewhere}
\end{array}
\right. ,
\end{equation}
where the
values $r_1$ and $r_2$ control the width of the pulse and are
chosen so that it is centered on the potential well (at $r=6M$):
$r_1=5M$, $r_2=7M$. After specifying $r_1$ and $r_2$, the coefficient
$c$ is chosen so that the scalar field has a given mass.
These initial data are used to compare with the previous approach in
regimes where the fixed-background approximation is justified, and to study
the spacetime's behavior in non-linear cases.
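A direct transcription of this profile (a sketch; the constant $c$ is
subsequently fixed by the mass condition) is:
\begin{verbatim}
# Sketch: the compact-support pulse of Eq. (pulsedef).
import numpy as np

r1, r2 = 5.0, 7.0        # pulse support, in units of M

def phi_pulse(r, c=1.0):
    r = np.asarray(r, dtype=float)
    inside = (r >= r1) & (r <= r2)
    return np.where(inside, c * (r - r1) ** 4 * (r - r2) ** 4, 0.0)
\end{verbatim}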
\subsubsection*{Remaining data}
Having specified both the potential and the scalar field, consistent initial
data are determined by integrating the constraint
equations in the following manner. First, the functions $g_{rr}$, $K_{rr}$,
$\alpha$, and $\beta$ are set equal to those read off from the Schwarzschild
solution in Eddington-Finkelstein coordinates. Adopting these coordinates
gives the freedom to place the inner boundary inside the black hole.
We found it convenient to rewrite the constraint equations in the form:
\begin{eqnarray}
\d_r g_T &=& d_T ,\\
\d_r d_T &=& f_1(g_T,d_T,K_T;F_i) ,\\
\d_r K_T &=& f_2(g_T,d_T,K_T;F_i) ,
\end{eqnarray}
where $F_i$ represents all the functions that are specified a priori
(including $\phi$).
These equations are integrated outwards from the inner boundary using the
step-adaptive integrator LSODE, with the boundary data ($g_T$, $d_T$, and
$K_T$ at $r=r_{\rm min}$) read off from the Schwarzschild solution.
\subsection{Evolution}
We discretize the equations with a scheme formulated to take advantage
of numerical techniques which guarantee stability of generic linear
first order hyperbolic systems. In this work we
adopt: (i) second order accuracy by implementing second-order
derivative operators satisfying summation by parts~\cite{KS1,KS2,strand,SBP0,SBP1};
(ii) a third-order
Runge-Kutta operator for the time integration through the method of
lines~\cite{tadmor};
(iii) a Kreiss-Oliger~\cite{KO} style dissipative algorithm to control
the high frequency
modes of the solution~\cite{gustaffsonkreissoliger,SBP1,SBP2} and
(iv) maximally dissipative boundary conditions setting
all incoming modes to zero~\cite{olsson,gustaffsonkreissoliger}.
We employ a uniform grid to cover the region $r\in [r_{\rm min},r_{\rm max}]$ with
$N$ equi-spaced points. The grid-spacing between points is $\Delta
r = (r_{\rm max} - r_{\rm min}) / (N-1)$. The time step $\Delta t$ is defined in terms
of $\Delta r$ as $\Delta t= cfl\; \Delta r$, with $cfl=0.25$ chosen so that
the CFL condition \cite{thomas} is satisfied. In what follows,
sub-indices denote particular points of a slice, and super-indices
distinguish each slice.
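A schematic sketch of two building blocks of this scheme is given below: a
standard fourth-difference Kreiss-Oliger dissipation term and a third-order
Runge-Kutta step (the Shu-Osher variant is an assumption, since the text
does not single out a particular third-order scheme). The SBP derivative
operators and the boundary treatment of the production code are not
reproduced.
\begin{verbatim}
# Schematic building blocks of the evolution scheme (not the SBP
# operators or boundary treatment actually used).
import numpy as np

def ko_dissipation(u, dr, eps=0.1):
    """Fourth-difference Kreiss-Oliger dissipation, zero at the ends."""
    d = np.zeros_like(u)
    d[2:-2] = u[:-4] - 4 * u[1:-3] + 6 * u[2:-2] - 4 * u[3:-1] + u[4:]
    return -eps / (16.0 * dr) * d

def rk3_step(u, dt, rhs):
    """Third-order (Shu-Osher) Runge-Kutta step for u_t = rhs(u)."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))
\end{verbatim}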
The inner boundary, $r=r_{\rm min}$, is initially set inside the black hole
and monitored during the evolution to ensure that it remains
inside and constitutes an outflow boundary of the computational domain.
Then there is no need to prescribe boundary conditions there. At
the outer boundary, $r=r_{\rm max}$, maximally dissipative boundary
conditions are adopted.
In our present case we take the simplest form of these conditions and
set the incoming modes to zero.
The characteristic structure for the system of equations is detailed
in appendix \ref{characteristic}.
The code has been tested to ensure that the numerical solutions
obtained converge to the corresponding solutions of the Einstein
equations. In appendix~\ref{tests} we show the convergence test for
the Hamiltonian constraint.
\section{Analysis and Results} \label{results}
In the simulations performed in this work we set the initial mass of the black hole to $M=1$
(in geometrized units). The domain of integration was chosen so that the
region of interest is unaffected by the conditions adopted at
the right boundary. This corresponds to $r_{\rm min}=1M$ and $r_{\rm max}=221M$. The maximum
resolution used was $\Delta r=0.01M$ (22000 grid points).
In the two approaches we use to obtain initial data, we have the
freedom of adjusting the amplitude of the scalar field, which in turn
determines its mass. We set initial data where the mass of the scalar
field is $m_{\rm sf}=0.01M$ in the time-harmonic case, while for the
non-time-harmonic cases we set $m_{\rm sf}$ equal to $0.01M$ and
$\kappa\,0.1M$ ($M$ being the initial mass of the black hole and
$\kappa = 1,\ldots,5$). To calculate
the mass we use the Misner-Sharp formula \cite{MTW},
\begin{equation} \label{MSdef}
M_{\rm MS}(r) = \frac{r\sqrt{g_T}}{2}
\left[1+\frac{r^2}{g_T}\left(K_T^2-\frac{f_{rT}^2}{g_{rr}}\right)\right],
\end{equation}
which measures the total mass inside a spherical surface
labeled by coordinate $r$. In our initial data the mass of the black
hole, $M$, is preset, so we can calculate $m_{\rm sf}$ by subtracting $M$ from
the total mass of the space-time,
\begin{equation}
m_{\rm sf} = M_{\rm MS}(R) - M ,
\end{equation}
where $R$ labels a sphere containing the scalar field, which is
localized initially. (See figure~\ref{mass001st}).
During the evolution we employ this formula, replacing
$M$ by $M_{\rm MS}$ at the horizon\footnote{The position of
the apparent horizon is given by the outermost trapped surface.}.
In our analysis we also evaluate the Kretschmann invariant $I\equiv
R_{abcd}R^{abcd}$, where $R_{abcd}$ is the Riemann tensor. This
quantity provides a gauge-invariant answer that can be compared with
its value in known spacetimes. For
a Schwarzschild space-time, $I$ is given by
\begin{equation} \label{KI}
I_{\rm Sch} = \frac{48 {(M_{\rm MS})}^2}{R^6},
\end{equation}
where, in Schwarzschild coordinates, $M_{\rm MS}=M$ and $R=r$. We evaluate
the quotient $I/I_{\rm Sch}$ using \eq{KI} with
$R=r\sqrt{g_T}$ and $M_{\rm MS}$ defined in \eq{MSdef}.
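For reference, both diagnostics can be evaluated directly from the evolved
grid functions; a minimal sketch (one-dimensional arrays as inputs) reads:
\begin{verbatim}
# Sketch: diagnostics of this section from the evolved grid functions.
import numpy as np

def misner_sharp(r, gT, KT, frT, grr):
    """Misner-Sharp mass, Eq. (MSdef), inside the sphere labeled by r."""
    R = r * np.sqrt(gT)                    # areal radius
    return 0.5 * R * (1.0 + (r ** 2 / gT)
                      * (KT ** 2 - frT ** 2 / grr))

def kretschmann_ratio(I, r, gT, M_MS):
    """Quotient I / I_Sch, with I_Sch = 48 M_MS^2 / R^6 (Eq. (KI))."""
    R = r * np.sqrt(gT)
    return I * R ** 6 / (48.0 * M_MS ** 2)

# Scalar-field mass by subtraction: m_sf = M_MS(R_out) - M_bh, with
# R_out enclosing the field and M_bh the horizon value of M_MS.
\end{verbatim}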
\subsection{Initial Data}
As explained in section~\ref{statid}, we first find time-harmonic states for
the scalar field on a Schwarzschild space-time. By varying the
initial guess for the frequency in the shooting integration we obtain
different modes. We show the first modes in figures~\ref{u_modes}
and~\ref{phi_modes}.
However, in this work we use only the first mode, which will be referred to
as ``the time-harmonic state'' unless otherwise specified.
These modes have been re-scaled so that they are normalized (in analogy with
quantum mechanics) as $\int{r^2 |u(r)|^2 dr}=1$. There is no physical
justification for choosing that particular normalization, but it is helpful
when comparing different eigenstates, which would otherwise have greatly
different amplitudes.
\begin{figure}[!tbh]
\includegraphics[angle=0,width=0.9\columnwidth,height=!,clip]{u_modes.eps}
\caption{First time-harmonic states of $u(r)$. \label{u_modes} }
\end{figure}
\begin{figure}[!tbh]
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{phi_modes.eps}
\caption{Scalar field at $t=0$ obtained from the first time-harmonic
states of $u(r)$, using equation~\eq{separation}. \label{phi_modes} }
\end{figure}
The other approach used to define the initial data corresponds to
the ``pulse'' described in section~\ref{pulse}.
In the linear regime we employ both types of initial
data, with a scalar field's initial mass $m_{\rm sf}=0.01M$.
In the non-linear regime we adopt only the non-time-harmonic initial data with
masses $m_{\rm sf}$ ranging from $0.1M$ to $0.5M$.
\subsection{Evolution}
We study the evolution of the prescribed data. We begin by
considering first the linear regime, adopting scalar field configurations
with initial mass of 1\% of that of the black hole. After confirming that
the time-harmonic configuration behaves as expected, we confirm that the ``pulse'' configuration
evolves towards a time-harmonic regime. Then, we study cases in the
non-linear regime, with initial
scalar field masses ranging from 10\% to 50\% of that of the black hole. In all
cases we evolve until $t=200M$.
\subsubsection*{Linear case}
The time-harmonic initial data constructed remain essentially unchanged
through the evolution, while the non-time-harmonic data evolve towards a
time-harmonic state.
Figures~\ref{st001} and~\ref{ns001} illustrate $\phi(r)$ at different times
for the maximum resolution employed ($\Delta r = M/100$).
Figure~\ref{st001} corresponds to the time-harmonic initial data,
and figure~\ref{ns001} to the non-time-harmonic initial data.
In both cases we sampled along two different periods, at $t\approx80M$ and
at $t\approx160M$. The corresponding pairs are then plotted together,
illustrating how, 22 periods apart, the solutions are essentially the same.
\begin{figure*}[!tbh]
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{st001_a.eps}
\label{st001_a} }
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{st001_b.eps}
\label{st001_b} }
\caption{The scalar field at different times is compared to check whether the
evolution remains described by a time-harmonic dependence.
Case with time-harmonic initial data; initial mass of the
scalar field $m_{\rm sf}=0.01M$. Figure~\ref{st001_a} shows the scalar
field when it reaches a maximum, while figure~\ref{st001_b} shows it
about a quarter of a period later. In both cases, the profile shown with a
continuous line is separated by 22 periods from the one shown with a dashed
line.
\label{st001} }
\end{figure*}
\begin{figure*}[!tbh]
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns001_a.eps}
\label{ns001_a} }
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns001_b.eps}
\label{ns001_b} }
\caption{Here we show the same comparison of profiles as in figure~\ref{st001},
this time for the case with non-time-harmonic initial data. The
separation between the profiles compared is also 22 periods. The
initial mass of the scalar field is $m_{\rm sf}=0.01M$.
\label{ns001} }
\end{figure*}
This is further illustrated in figure~\ref{diff} where we show the difference
between each of these pairs for three different resolutions.
\begin{figure*}[!tbh]
\subfigure[$\left|\phi(t_1)-\phi(t_2)\right|$ for the case with time-harmonic
initial data (see figure~\ref{st001_a})]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{st_diff.eps}
\label{st_diff} }
\subfigure[$\left|\phi(t_1)-\phi(t_2)\right|$ for the case with non-time-harmonic
initial data (see figure~\ref{ns001_a})]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns_diff.eps}
\label{ns_diff} }
\caption{Absolute value of the difference between the scalar field at
different times: $\left|\phi(t_1)-\phi(t_2)\right|$, where
$t_2-t_1=22$~periods. Figure~\ref{st_diff} shows the
difference between the profiles shown in figure~\ref{st001_a}, while
figure~\ref{ns_diff} shows the difference between those in
figure~\ref{ns001_a}. In each case, we show these differences for three resolutions.
\label{diff} }
\end{figure*}
Finally, figure~\ref{DFT_001} displays the absolute value of the Fourier transform in time of
$\int \! \phi\;dr$, denoted $|F[\phi]|$. The scalar field is first integrated in space; a discrete
Fourier transform in $t$ is then calculated, with $t$ ranging from $0$ to
$200M$ in the case of time-harmonic initial data, and from $t_0=60M$ to
$200M$ in the non-time-harmonic case. In the plot we also indicate the frequencies
($f_n=\omega_n/2\pi$) obtained from the shooting integration when
calculating the time-harmonic states. The time $t_0$ is chosen after
the initial transient has passed, as indicated by the time-harmonic behavior
then observed in $\phi$.
\begin{figure}[!tbh]
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{DFT_001.eps}
\caption{Absolute value of the discrete Fourier transform in time of $\int \! \phi
\;dr$. The continuous line corresponds to the time-harmonic initial
data, while the dashed line corresponds to the non-time-harmonic
initial data. In the latter case, the scalar field relaxes to
a superposition of the first time-harmonic modes, whose frequencies are
shown in the figure (labeled $f_n$). \label{DFT_001} }
\end{figure}
The initially non-time-harmonic scalar field relaxes to a
superposition of the first three time-harmonic modes, the first one
being the dominant one. We point out that for this configuration
the shooting method gives rise to three possible modes. It is thus
not surprising that the evolution settles into a solution described by these modes.
Deeper potentials give rise to more modes.
Figures~\ref{mass001st} and~\ref{mass001} show the Misner-Sharp mass
function (equation~\eq{MSdef}) for both types of
initial data. The continuous line shows the initial value ($M_{\rm
MS}$ at $t=0$). The discontinuous lines show $M_{\rm MS}$ at $t=200M$ for
three different resolutions. In both cases the asymptotic value of the
mass stays constant, indicating that no scalar field energy is radiated away.
An inspection of the mass behavior at smaller radii for the solution obtained
with time-harmonic initial data reveals that it converges
to essentially the initial value; thus a negligible amount
of mass falls into the black
hole. For the non-time-harmonic case about $10\%$ of the field's
initial mass falls into the black hole.
The amount of mass that falls into the black hole is
calculated by subtracting the initial mass of the black hole from
the Misner-Sharp mass at the horizon. In the case of time-harmonic initial
data this number is $(1\pm3)\times 10^{-4}M$, while for
non-time-harmonic initial data it is $(10\pm3)\times 10^{-4}M$ (see
table~\ref{table:bh_mass} and figure~\ref{bh_mass}). These values are
calculated using the highest resolution ($\Delta r=M/100$), and the
errors are estimated as the difference between these values and those
obtained at a lower resolution ($\Delta r=M/50$).
\begin{figure}[!tbh]
\includegraphics[angle=00,width=\columnwidth,height=!,clip]{mass001st.eps}
\caption{Mass function at $t=0$ and at $t=200M$ for three
resolutions. Time-harmonic initial data with initial $m_{\rm sf}=0.01M$.
The continuous line shows the mass function at $t=0$,
while the discontinuous lines show, for different resolutions, the
mass function at $t=200M$. In
this case the loss of mass into the black hole is negligible
($\Delta m_{\rm sf}=(1\pm3)\times 10^{-4}M$).
\label{mass001st}}
\end{figure}
\begin{figure}
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{mass001.eps}
\caption{As in figure~\ref{mass001st}, we show the mass function, this time for the
non-time-harmonic
case. The initial mass of the scalar field is $m_{\rm sf}=0.01M$; this time about 10\% of it
falls into the black hole. \label{mass001} }
\end{figure}
\subsubsection*{Non-linear case}
We turn now to the non-linear cases. These
correspond to initial configurations where the scalar
field has a mass of at least 10\% of that of the black hole. In this
regime we solely adopt the ``pulse'' prescription defined
in equation~\eq{pulsedef} for the scalar field, since the time-harmonic data
is obtained under an assumption which is no longer valid.
As we did for the linear case, we also compare profiles at
different times for simulations with higher initial $m_{\rm
sf}$. Figures~\ref{ns010} and~\ref{ns050} correspond to initial
scalar field masses of $m_{\rm sf}=0.10M$ and $m_{\rm
sf}=0.50M$, respectively. The time it takes to reach a
state described by a harmonic time dependence is longer than
in the linear regime, especially for the higher
initial mass $m_{\rm sf}=0.50M$. For that reason, the first samplings
(labeled $t_1$ in the figures) occur later than in the linear case,
and the interval between the profiles compared, $t_2-t_1$, is ten
periods, as opposed to 22 in the linear cases.
\begin{figure*}[!tbh]
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns010_a.eps}
\label{ns010_a} }
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns010_b.eps}
\label{ns010_b} }
\caption{
The scalar field at different times is compared to check if the
solution obeys a harmonic time dependence. Case with non-time-harmonic
initial data. Initial mass of the
scalar field $m_{\rm sf}=0.10M$. Figure~\ref{ns010_a} shows the scalar
field when it reaches a maximum, while figure~\ref{ns010_b} shows it at
about a quarter of a period later. In both cases, the profile shown as a continuous
line is separated by 10 periods from the one shown as a dashed line.
\label{ns010}}
\end{figure*}
\begin{figure*}[!tbh]
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns050_a.eps}
\label{ns050_a} }
\subfigure[]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{ns050_b.eps}
\label{ns050_b} }
\caption{
This figure shows the same comparisons as figure~\ref{ns010}, but for
an initial mass of the scalar field of $m_{\rm sf}=0.50M$. The
separation between the profiles compared is also 10 periods.
\label{ns050}}
\end{figure*}
The absolute value of the Fourier transform of $\int \! \phi\;dr$, $|F[\phi]|$, is shown in
figure~\ref{DFT_ns} for the two different initial masses of $\phi$.
Again, we compute the transform after the initial transient
behavior has passed and the scalar field has reached a quiescent state.
As a useful indicator, we also show the frequencies corresponding to the time-harmonic states.
While the observed modes do not coincide exactly with those
obtained in the linear approximation, they are
close to them.
\begin{figure*}[!tbh]
\subfigure[Initial $m_{\rm sf}=0.10M$]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{DFT_ns010.eps}
\label{DFT_ns010} }
\subfigure[Initial $m_{\rm sf}=0.50M$]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{DFT_ns050.eps}
\label{DFT_ns050} }
\caption{Absolute value of the discrete Fourier transform in $t$ of the space integral $\int \!
\phi(r,t)\, dr$. The marks labeled $f_n$ denote the frequencies of
the first modes obtained from the shooting. The three peaks, which indicate
the dominant frequencies in the solution, lie
at slightly lower frequencies than those of the time-harmonic states in
the linear case. This behavior is consistent with the frequency shift due
to the black hole growing in size. However, the growth alone does not fully
account for the observed shift, though this is expected, as non-trivial
contributions due to non-linearities also play a role.
\label{DFT_ns}}
\end{figure*}
In figure~\ref{mass} we show the Misner-Sharp mass at $t=0$ and at
$t=200M$ for three different resolutions. Figures~\ref{mass010}
and~\ref{mass050} correspond to initial masses of the scalar field of
$m_{\rm sf}=0.10M$ and $m_{\rm sf}=0.50M$, respectively. In
all these cases about 10\% of the scalar field's mass falls into the
black hole, while nothing escapes outwards. Additionally, for
the case with greater mass, the scalar field spreads slightly outwards
before reaching a quiescent state.
\begin{figure*}[!tbh]
\subfigure[Initial $m_{\rm sf}=0.10M$]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{mass010.eps}
\label{mass010} }
\subfigure[Initial $m_{\rm sf}=0.50M$]{
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{mass050.eps}
\label{mass050} }
\caption{Mass function at $t=0$ and at $t=200M$. The discontinuous lines show the
mass function at $t=200M$ for three resolutions. In
each of these cases,
about 10\% of the initial mass of the scalar field falls into the
black hole,
while nothing escapes to infinity. \label{mass}}
\end{figure*}
Although we only show figures corresponding to two different initial
values of $m_{\rm sf}$, we have simulated the system for other values of this
parameter, $m_{\rm sf} = \kappa\,10^{-1} M$ ($\kappa=1,\dotsc,5$). In all these cases essentially
no scalar field energy is radiated away, while a small portion falls into the
black hole. The measured values are shown in table~\ref{table:bh_mass} and figure~\ref{bh_mass}.
\begin{table}[!tbh]
\caption{Mass that falls into the black
hole for different initial masses of the scalar field, calculated as the
Misner-Sharp mass at the horizon at $t=200M$
minus the initial mass of the black hole. See
figure~\ref{bh_mass}. \label{table:bh_mass}}
\begin{ruledtabular}
\begin{tabular}{cc}
Initial $m_{\rm sf}\;[M]$ & $(M_{\rm MS}(r_{\rm h}) - M) \; [10^{-2}M]$ \\
\hline
$0.01$ & $0.10 \pm 0.03 $ \\
$0.10$ & $1.0 \pm 0.3 $\\
$0.20$ & $2.9 \pm 0.7 $\\
$0.30$ & $3 \pm 1 $\\
$0.40$ & $5 \pm 1 $\\
$0.50$ & $7 \pm 2 $\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[!tbh]
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{bh_mass.eps}
\caption{Mass that falls into the black
hole for different initial masses of the scalar field, calculated as the
Misner-Sharp mass at the horizon at $t=200M$
minus the initial mass of the black hole. See
table~\ref{table:bh_mass}. \label{bh_mass} }
\end{figure}
If, after some transient time, the scalar field is finally confined
within a compact region, let us say $[r_a,r_b]$,
the space-time should be that of Schwarzschild for $r>r_b$,
with a Schwarzschild mass equal to the total mass inside the sphere
$r=r_b$. This can be checked by evaluating the Kretschmann
invariant. In figure~\ref{RRoverRR} we show the quotient $I/I_{\rm
Sch}$ (see the paragraph containing equation~\eq{KI}) at
$t\approx140M$ for the case with initial $m_{\rm
sf}=0.5M$. This quotient converges to one for
$r>r_b$, and also for $r<r_a$.
\begin{figure}[!tbh]
\includegraphics[angle=0,width=\columnwidth,height=!,clip]{RRoverRR2.eps}
\caption{Kretschmann invariant quotient for three resolutions at $t_1=139.983M$.
This quotient
converges to 1 outside of the region where the scalar field is
confined. A horizontal line at $I/I_{\rm Sch}=1$ has been drawn
as a guide. Initial $m_{\rm sf}=0.50M$. \label{RRoverRR} }
\end{figure}
\clearpage
\section{Conclusions} \label{conclusions}
We have discussed difficulties encountered when
attempting to confine a scalar field distribution within
some region. The existence of a symmetry in the spacetime
allows for doing so in a consistent manner. For the specific
spherically symmetric case, we have given prescriptions for
implementing a scalar field with a potential depending on the
areal radius $R$.
We have illustrated the viability of this approach
by confining a scalar field distribution around a black hole.
For our particular choice of potential and initial scalar field, the
scalar field becomes totally confined after some transient time, which depends on
the initial mass. During the transient, part of the scalar field
accretes into the black hole, while nothing escapes to infinity.
By adjusting the depth of the potential, the amount of energy
that falls in can be controlled.
The approach can be exploited, and extended, to mimic situations
of interest. These can range from physical studies of particular
systems to serving as a testing model for infrastructure development
aimed at simulating more complex systems.
\acknowledgements
We would like to thank M.~Anderson, D.~Garfinkle, C.~Palenzuela-Luque,
J.~Pullin and R.~Wald for helpful discussions as well as
M. Tiglio and J. Pullin for comments and suggestions on the manuscript.
This work was supported in part by NSF grants PHY-0244699, PHY-0244335,
PHY0326311 and PHY0554793 to Louisiana State University.
The simulations described here were performed on
local clusters in the Dept. of Physics \& Astronomy.
L.L. is grateful to the Alfred P. Sloan Foundation and
Research Corporation for financial support.
\section{Introduction}
\label{sec:introduction}
We are concerned with the \emph{parametric eigenvalue problem}
\begin{align}
A(\mu) v(\mu) = v(\mu) \lambda(\mu), \label{eq:sevp}
\end{align}
where the entries of the $n\times n$ matrix $A(\mu)$ are functions of $\mu$.
If $\mu$ is a random variable, this is referred to as a stochastic eigenvalue
problem. A first example of such a problem is
\begin{align}
A(\mu) = \exp_{\circ}(-\mu U), \label{eq:example-proof-of-concept}
\end{align}
where $\exp_{\circ}(\cdot)$ is the entrywise (or Hadamard) exponential---think
Matlab's \texttt{exp} instead of \texttt{expm}.\footnote{For \texttt{expm} the
solution is known and can be found in many textbooks on numerical linear
algebra. Since the eigenvectors do not depend on $\mu$ in the case of
\texttt{expm} the problem simplifies to finding the eigenvalue decomposition
of $U$ and applying \texttt{expm} to the Jordan form.} If the matrix $U$
contains in $U_{ij}$ the pairwise distances between points $x_{i}$ and $x_{j}$,
then this is a kernel method to analyze the structure of these point sets, see,
for instance, \cite{haasdonk2007invariant}. It is known that if $U$ is almost
negative definite and $\mu>0$, then $A$ is positive semidefinite
\cite[Thm.~6.3.6]{b172}. Thus, arguably, this is one of the simplest, yet
interesting parametric eigenvalue problems. The functions for the entries of
$A(\mu)$ are all of similar structure and are differentiable as often as
required. If, furthermore, all points $x_{i}$ are distinct, the matrix is even positive
definite for all $\mu>0$. Thus, an algorithm failing for this example is likely
not useful.
Other examples can be found if $A$ is the discretization of a partial
differential operator and $\mu$ an unknown parameter of the physical model, say
the mass of a passenger in a car, the temperature, or an unknown material
constant. Matrix-valued ODEs, for instance\begin{align*}
\dot{C}(t) = -\alpha C(t) A^{T}\Gamma A C(t), \quad C(0) = C_{0}=C_{0}^{T}
\end{align*}
describing the dynamics of the covariance matrix in an ensemble Kalman
inversion, see \cite{bungert2022} for details, are another source of parametric
eigenvalue problems.
A few things about \eqref{eq:sevp} are obvious. If $(v(\mu),\lambda(\mu))$ is a
solution of \eqref{eq:sevp}, then so is $(\gamma v(\mu),\lambda(\mu))$ for all
$\gamma\neq 0$; if normalization of the eigenvector is required, then for all $\gamma$
with $\abs{\gamma}=1$. Fixing~$\mu$ turns \eqref{eq:sevp} into a standard
eigenvalue problem. For a non-defective $n\times n$ matrix, there are $n$
eigenpairs. If all the functions $\left.A(\mu)\right|_{ij}$ are continuous in
$\mu$, then so are the $n$ functions $\lambda(\mu)$ and $v(\mu)$.\footnote{Note
that there might be $\mu$ for which $A$ is defective. In this case the
eigenvalues and eigenspaces are still continuous.} In fact, if $A(\mu)$
consists of holomorphic functions in $\mu$, then the eigenvalues $\lambda(\mu)$
are analytic functions with only algebraic singularities, see \cite[Ch. 2]{b473}
for details. Singularities do not occur if $A(\mu)$ is symmetric, see
\cite[Ch. 7]{b473}. In this paper we will assume that the functions
$\left.A(\mu)\right|_{ij}$ are sufficiently smooth in $\mu$.
The problem \eqref{eq:sevp} has been investigated in the past. The following is
a selection---without claim of completeness---of the available literature on the
parametric and stochastic eigenvalue problem. Very recently Ruymbeek,
Meerbergen, and Michiels~\cite{ruymbeek2022tensor} used a tensorized Arnoldi
method to compute the extreme eigenvalues of \eqref{eq:sevp} using a polynomial
chaos expansion,
\begin{align*}
A(\mu) = \sum_{\ell=0}^{M} A_{\ell}\phi_{\ell}(\mu),
\end{align*}
to discretize the problem in the parameter space. Soused\'ik and Elman
\cite{sousedik2016inverse} in a fairly similar approach used an inverse
iteration to find the smallest eigenvalue(s) of \eqref{eq:sevp} also using a
polynomial chaos expansion. The resulting problem is a nonlinear tensor
equation, which is solved with a generalization of the inverse subspace
iteration. Ghanem and Ghosh \cite{ghanem2007efficient} used a similar idea,
but employed the Newton-Raphson algorithm to solve the tensor equation. Rahman
\cite{rahman2006solution} computed statistical moments of the generalized
stochastic eigenvalue problem,
\begin{align}
A(\mu)v(\mu) = \lambda(\mu)B(\mu)v(\mu). \label{eq:sgevp}
\end{align}
Verhoosel, Guti\'errez, and Hulshoff \cite{verhoosel2006iterative} used a finite
element discretization for $\lambda(\mu)$ and $v(\mu)$ while assuming
$A(\mu) = A_{0} + \sum_{i=1}^{m}A_{i}\mu_{i}$, where $\mu$ is a parameter
vector, to solve a parametric eigenvalue problem. This discretization also leads
to a tensor structured equation, which they solved using an inverse iteration. A
different approach is taken by Williams
\cite{williams2010method,williams2013method}, who adds an artificial time
dependency to \eqref{eq:sevp} to turn the problem into an integral
equation. Williams also uses a polynomial chaos expansion, however, for the
eigenvectors instead of $A(\mu)$.
All these methods have in common that they only compute a few of the smallest or
largest eigenvalues using inverse iteration, the power method, or more generally
a Krylov-type method. Some authors focus on the stochastic nature of the
problem. A different approach was taken very recently by Alghamdi, Boffi, and
Bonizzoni \cite{alghamdi2022greedy}. They are interested in a parameter
dependent PDE and use a model order reduction approach based on sparse grids to
find crossing points of the eigenvalue functions. They choose the parameter
$\mu$ from $\ensuremath{\mathbb{R}}^{d}$.
In contrast, we consider the parameter $\mu\in\ensuremath{\mathbb{R}}$ and focus on finding the
functions $\lambda(\mu)$ and $v(\mu)$ for the eigenvalues we are interested
in. If $A(\mu)$ is not too large, we can easily pick \emph{any}---not just the
smallest or largest---eigenvalue of $A(\mu_{0})$ and track this eigenvalue over
an interval or in the neighborhood of~$\mu_{0}$. The approach presented here is
not restricted to the smallest or largest eigenvalues. In fact we can compute
approximations for all eigenvalues, notwithstanding that in many applications
only a few eigenvalues are desired.
Other generalizations of the standard eigenvalue problem,
\begin{align}
Av = \lambda v,
\label{eq:evp}
\end{align}
have been studied as well, foremost the generalized eigenvalue problem
\cite{MolS73},
\begin{align}
Av = \lambda Bv.
\label{eq:gevp}
\end{align}
Although one can arguably call \eqref{eq:evp} and \eqref{eq:gevp}
nonlinear problems, they are typically considered linear eigenvalue problems, due
to the linear appearance of $\lambda$ and in
contrast to the quadratic eigenvalue problem \cite{TisK01},
\begin{align*}
(\lambda^{2}A_{2}+\lambda A_{1} + A_{0})v = 0,
\end{align*}
and the general nonlinear eigenvalue problem \cite{BetHMST13,MehH04},
\begin{align*}
Q(\lambda)v = 0,
\end{align*}
which both are nonlinear in the eigenvalue. Recently, research interest in
eigenvector nonlinearities, that is, in the basic case
\begin{align*}
A(v) v = \lambda v,
\end{align*}
has increased, see, for instance,
\cite{jarlebring2014inverse,cai2018eigenvector,jarlebring2021implicit,claes2021linearizability}.
Problem \eqref{eq:sevp} is different from these. Nonetheless, ideas for the
solution of other (nonlinear) eigenvalue problems are applicable for
\eqref{eq:sevp} as well.
We will discuss two expansions of $A(\mu)$, $v(\mu)$, and $\lambda(\mu)$. In
Section~\ref{sec:taylor} we will use a truncated Taylor series expansion. We
start with the less powerful Taylor expansion, since it leads to a conceptually
simpler algorithm. We discuss this algorithm and its complexity in
Subsection~\ref{sec:algorithm:and:complexity}. In
Subsection~\ref{sec:proofofoconcpet} we demonstrate, based on an example using
\eqref{eq:example-proof-of-concept}, that the algorithm works. We then extend the
approximation to a truncated Chebyshev expansion in
Section~\ref{sec:chebyshev}. Therein we will use a modification of the algorithm
for the Taylor expansion to compute a good starting vector, which is then refined
using Newton's method. The paper is concluded with some numerical experiments,
Section~\ref{sec:numerical:experiments}, and conclusions,
Section~\ref{sec:conclusions}.
For our numerical experiments we use Matlab (R2020b) with a machine precision of
$\epsilon_{\text{mach}}\approx 2.2204_{10^{-16}}$ and a computer with Ubuntu
18.04.1, an Intel Core i7-10710U CPU, and 16 GB of RAM.
\section{Taylor expansion}
\label{sec:taylor}
A Taylor expansion of $A(\mu)$ at the expansion point~$\mu_{0}$ is given by
\begin{align*}
A(\mu) = A_{0} + (\mu-\mu_{0}) A_{1} + \tfrac{1}{2}(\mu-\mu_{0})^{2}A_{2}
+ \tfrac{1}{3!}(\mu - \mu_{0})^{3}A_{3} + \dotsb,
\end{align*}
with
$\left.A_{k}\right|_{ij} = \frac{\partial^{k} A_{ij}}{\partial
\mu^{k}}(\mu_{0})$. Such an expansion exists, because we assume that
$A(\mu)$ is sufficiently smooth.
Let us further assume that there are also Taylor expansions for the eigenvector~$v(\mu)$,
\begin{align}
v(\mu) = v_{0} + (\mu-\mu_{0}) v_{1} + \tfrac{1}{2}(\mu-\mu_{0})^{2}v_{2} + \tfrac{1}{3!}(\mu - \mu_{0})^{3}v_{3} + \dotsb,
\label{eq:tatylor:v}
\end{align} and for the eigenvalue~$\lambda(\mu)$,
\begin{align*}
\lambda(\mu) = \lambda_{0} + (\mu-\mu_{0}) \lambda_{1}
+ \tfrac{1}{2}(\mu-\mu_{0})^{2}\lambda_{2}
+ \tfrac{1}{3!}(\mu - \mu_{0})^{3}\lambda_{3} + \dotsb,
\end{align*} with an analogous definition for $v_{k}$ and $\lambda_{k}$. The
existence of convergent series for $\lambda(\mu)$ and $v(\mu)$ is discussed for
instance in the introduction of Kato's book on perturbation theory for linear
operators \cite{b473}.
We insert these expansions in \eqref{eq:sevp} and compare the coefficients
for $(\mu-\mu_{0})^{p}$.
Unsurprisingly, the 0th order approximation is equivalent to the solution of
the standard eigenvalue problem obtained at $\mu=\mu_{0}$,
\begin{align}
A_{0}v_{0} = \lambda_{0}v_{0}.
\label{eq:0thorder}
\end{align}
Comparing the next coefficients we find the equation
\begin{align}
A_{1}v_{0} +A_{0}v_{1} = v_{1} \lambda_{0} + v_{0}\lambda_{1},
\label{eq:1}
\end{align}
which has to be fulfilled by $v_{1}$ and $\lambda_{1}$.\footnote{Obviously this
idea is far from novel and can for instance be found in
\cite{rellich1939storungstheorie}.} One can reformulate \eqref{eq:1} in
matrix vector form as
\begin{align*}
\left[\begin{array}{c|c}
v_{0} & \lambda_{0}I-A_{0}
\end{array}\right] \begin{bmatrix}
\lambda_{1}\\v_{1}
\end{bmatrix} = A_{1}v_{0}.
\end{align*}
This linear system is underdetermined. Although underdetermined systems can be
solved, for instance with the Moore-Penrose pseudoinverse or Matlab's
\texttt{backslash} operator, we would prefer to avoid an underdetermined
system.
We add the condition
\begin{align*}
1 = v(\mu)^{H}v(\mu),
\end{align*}
which ensures that the eigenvectors are normalized. Using the Taylor series
expansion this condition is
\begin{align*}
1 = v_{0}^{H}v_{0} + \sum_{k=1}^{p} \frac{1}{k!}\left(\sum_{\ell=0}^{k} \binom{k}{\ell}v_{k-\ell}^{H}v_{\ell}\right)(\mu-\mu_{0})^{k}.
\end{align*}
Thus when solving \eqref{eq:0thorder} we have to normalize the eigenvectors.
For $p=1$ we then have the additional condition
\begin{align*}
v_{0}^{H}v_{1} = 0.
\end{align*}
This leads to an extension of the linear system to
\begin{align}
\label{eq:bordered}
\left[\begin{array}{c|c}
0 & v_{0}^{H}\\\hline
v_{0} & \lambda_{0}I-A_{0}
\end{array}\right] \begin{bmatrix}
\lambda_{1}\\v_{1}
\end{bmatrix} =
\begin{bmatrix}0\\
A_{1}v_{0}
\end{bmatrix},
\end{align}
which is no longer underdetermined. The equation \eqref{eq:bordered} was used by
Andrew, Chu, and Lancaster to compute the derivative of eigenvectors \cite{ACL93}.
Systems of this form are sometimes referred to as bordered linear systems. If an
efficient solver for $A_{0}$ or $\lambda_{0} I-A_{0}$ is available, for instance
for sparse $A_{0}$, then a block elimination with refinement can be used to solve
\eqref{eq:bordered} efficiently, for details see \cite{govaerts1990block}.
With $v_{1}$ and $\lambda_{1}$ computed, we can find the linear Taylor
approximation for $\lambda(\mu)$ and $v(\mu)$,
\begin{align*}
\lambda(\mu) &\approx \lambda_{0} + (\mu-\mu_{0})\lambda_{1},\\
v(\mu) &\approx v_{0} + (\mu-\mu_{0})v_{1}.
\end{align*}
Since we now know $v_{0}$, $v_{1}$, $\lambda_{0}$, and $\lambda_{1}$, we can use
them for the quadratic approximation. Therefore, we have to compare the
coefficients in front of $(\mu-\mu_{0})^{2}$. We have
\begin{align*}
A_{2}v_{0} + 2A_{1}v_{1} + A_{0}v_{2} &= v_{2} \lambda_{0} + 2v_{1}\lambda_{1} + v_{0}\lambda_{2}
\intertext{and}
v_{0}^{H}v_{2} &= -v_{1}^{H}v_{1}.
\end{align*}
Reformulated as a linear system that is
\begin{align*}
\left[\begin{array}{c|c}
0 & v_{0}^{H}\\\hline
v_{0} & \lambda_{0}I-A_{0}
\end{array}\right] \begin{bmatrix}
\lambda_{2}\\v_{2}
\end{bmatrix} =
\begin{bmatrix}
-v_{1}^{H}v_{1}\\
A_{2}v_{0} + 2A_{1}v_{1} - 2v_{1}\lambda_{1}
\end{bmatrix}.
\end{align*}
It is remarkable that this linear system has the same coefficient matrix as before.
Solving this system provides $\lambda_{2}$ and $v_{2}$. The approximation is then
\begin{align*}
\lambda(\mu) &\approx \lambda_{0} + (\mu-\mu_{0})\lambda_{1}+
(\mu-\mu_{0})^{2}\lambda_{2}, \qquad\text{and}\\
v(\mu) &\approx v_{0} + (\mu-\mu_{0})v_{1} + (\mu-\mu_{0})^{2}v_{2}.
\end{align*}
This can naturally be continued for higher-order Taylor polynomials. We leave it
to the reader to show that
\begin{align}
\left[\begin{array}{c|c}
0 & v_{0}^{H}\\\hline
v_{0} & \lambda_{0}I-A_{0}
\end{array}\right] \begin{bmatrix}
\lambda_{k}\\v_{k}
\end{bmatrix} =
\begin{bmatrix}-\frac{1}{2}\sum_{\ell=1}^{k-1} \binom{k}{\ell}v_{k-\ell}^{H}v_{\ell}\\
\sum_{\ell=0}^{k-1} \binom{k}{\ell}A_{k-\ell}v_{\ell} - \sum_{\ell=1}^{k-1}
\binom{k}{\ell} v_{k-\ell}\lambda_{\ell}
\end{bmatrix}
\label{eq:extended:system}
\end{align}
is the linear system determining $\lambda_{k}$ and $v_{k}$. Let $E$ be defined by
\begin{align*}
E :=
\left[\begin{array}{c|c}
0 & v_{0}^{H}\\\hline
v_{0} & \lambda_{0}I-A_{0}
\end{array}\right].
\end{align*}
The formulation in \eqref{eq:extended:system} has multiple advantages. We first
discuss the case where one or a few eigenpairs of $A(\mu)$ are computed. If the
matrix $A_{0}$ is symmetric or Hermitian, then so is the coefficient matrix
$E$. If the matrix $A_{0}$ is skew-symmetric/skew-Hermitian, then the sign in
the first row of the linear system should be flipped to preserve the
structure. Since only the diagonal entries of $A_{0}$ are modified to obtain $E$
the zero pattern of most sparse matrices would not be affected
significantly. Thus if $A_{0}$ is a sparse matrix, $E$ will be, too. At most
there are two more nonzero entries per column in $E$ than there are in
$A$. Sparse direct solvers and sparse iterative solvers can then be
employed. The coefficient matrix $E$, furthermore, is the same for all~$k$. Thus
an $LU$ or Cholesky factorization of $E$ can be reused for all~$k$. Similarly a
preconditioner for $E$ can be reused. If a Krylov subspace method is employed to
solve the linear systems, then it may be possible to recycle selected subspaces
generated for the previous $k$, for details see \cite{Parks2006-fw}. Recycling
subspaces may even be possible between different eigenvalues, i.e.\ for bordered
systems with different shifts in the $\lambda_{0}I-A$ block, if the ideas of
\cite{Soodhalter2016-vs} are extended to bordered systems.
When most or all eigenpairs of $A(\mu)$ are to be computed, one can use an even
more efficient approach. Initially, we need all eigenvalues and eigenvectors of
$A_{0}$. These can be obtained all at once by computing the Schur form of
$A_{0}=QTQ^{H}$ with Francis's implicitly shifted QR
algorithm\footnote{Additional computations are necessary to obtain the
eigenvectors. Typically this is achieved by inverse iteration or by swapping
the diagonal entries of $T$. Both require $O(n^{3})$.} or by computing an
eigenvalue decomposition \mbox{$A_{0}=QDQ^{H}$} with one of the special solvers for
symmetric or skew-symmetric $A_{0}$; both require in general $O(n^{3})$. The
matrix $Q$ is unitary, $T$ upper triangular, and $D$ diagonal. We can now
simplify solving with $E$ by factorizing $E$ with the help of $Q$:
\begin{align}
E = \left[\begin{array}{c|c}
0 & v_{0}^{H}\\\hline
v_{0} & \lambda_{0}I-A_{0}\\
\end{array}\right] =
\left[\begin{array}{c|c}
1 &0\\\hline
0 & Q\\
\end{array}\right]
\underbrace{\left[\begin{array}{c|c}
0 &v_{0}^{H}Q\\\hline
Q^{H}v_{0} & \lambda_{0}I-T\\
\end{array}\right]}_{=\widehat{E}}
\left[\begin{array}{c|c}
1 &0\\\hline
0 & Q^{H}\\
\end{array}\right].
\label{eq:decomposition_of_E}
\end{align}
The matrix $\widehat{E}$ is a permutation of an upper-triangular matrix or a
permutation of a diagonal matrix, if $A_{0}$ is symmetric. This turns solving
with $E$ into applying $Q^{H}$ to part of the right-hand side followed by a
backward solve with a permuted upper triangular or diagonal matrix, followed by
another application of $Q$. These steps can be done in $O(n^{2})$ flops.
Unfortunately, there is a big disadvantage as well. The computation of $v_{k}$
and $\lambda_{k}$ depends on all previously computed $v_{\ell}$ and
$\lambda_{\ell}$. Thus one cannot solve the linear system with multiple
right-hand sides at the same time, but has to compute them sequentially. Accordingly,
computational errors from earlier steps affect all the subsequent ones. This can
lead to an accumulation of errors in the computed components.
\begin{remark}
This approach works well for the standard parametric eigenvalue problem
\eqref{eq:sevp}, since only two expansions are involved on either side of the
equation. Things get significantly more involved when trying to solve the
generalized parametric eigenvalue problem \eqref{eq:sgevp}, where three
expansions are needed on the right-hand side. Thus, we restrict the discussion
here to the standard parametric eigenvalue problem.
\end{remark}
\begin{remark}
As discussed earlier with
$v(\mu) = v_{0} + (\mu-\mu_{0})v_{1} + \tfrac{1}{2}(\mu-\mu_{0})^{2}v_{2} +
\dotsb$ also
$\gamma v(\mu) = \gamma v_{0} + (\mu-\mu_{0})\gamma v_{1} +
\tfrac{1}{2}(\mu-\mu_{0})^{2}\gamma v_{2} + \dotsb$, with $\abs{\gamma}=1$, is
an eigenvector of~\eqref{eq:sevp}. When solving for the 0th order
approximation one can choose $\gamma$, since $(v_{0},\lambda_{0})$ and
$(\gamma v_{0},\lambda_{0})$ are both solutions of \eqref{eq:0thorder}. When
solving \eqref{eq:bordered} and the subsequent steps the previous choice of
$\gamma$ forces the solution to be $\gamma v_{1}$ and $\gamma v_{i}$, since
\begin{align*}
\left[\begin{array}{c|c}
0 & \gamma v_{0}^{H}\\\hline
\gamma v_{0} & \lambda_{0}I-A_{0}
\end{array}\right] \begin{bmatrix}
\lambda_{1}\\\gamma v_{1}
\end{bmatrix} =
\begin{bmatrix}0\\
A_{1}\gamma v_{0}
\end{bmatrix}
\end{align*}
is merely a scaling of \eqref{eq:bordered}.
\end{remark}
We conclude this discussion by providing a different interpretation useful for the Chebyshev
expansion discussed in Section~\ref{sec:chebyshev}. The comparison of the
coefficients can be represented by the following non-linear block lower
triangular system
\begin{align}
\label{eq:taylor:nonlin}
\left[
\arraycolsep=2.5pt
\begin{array}{cc|cc|cc|cc}
0 & v_{0}^{T}&&&&&&\\
v_{0}& -A_{0}&&&&&&\\\midrule
0 & v_{1}^{T}& 0 & v_{0}^{T}&&&&\\
v_{1}& -A_{1}&v_{0}&-A_{0}&&&&\\\midrule
0 & v_{2}^{T} & 0 & v_{1}^{T}& 0 & v_{0}^{T}&&\\
\frac{1}{2}v_{2}& -\frac{1}{2}A_{2}&v_{1}& -A_{1}&\frac{1}{2}v_{0}& -\frac{1}{2}A_{0}&&\\\midrule
0 & v_{3}^{T}& 0 & v_{2}^{T} & 0 & v_{1}^{T}& 0 & v_{0}^{T}\\
\frac{1}{3!}v_{3}&
-\frac{1}{3!}A_{3}&\frac{1}{2}v_{2}&-\frac{1}{2}A_{2}&
\frac{1}{2}v_{1}&-\frac{1}{2}A_{1}&
\frac{1}{3!}v_{0}&
-\frac{1}{3!}A_{0}
\end{array}\right]\left[\begin{array}{c}
\lambda_{0}\\
v_{0}\\\midrule
\lambda_{1}\\
v_{1}\\\midrule
\lambda_{2}\\
v_{2}\\\midrule
\lambda_{3}\\
v_{3}
\end{array}\right] = \left[\begin{array}{c}
1\\
0\\\midrule
0\\
0\\\midrule
0\\
0\\\midrule
0\\
0
\end{array}\right],
\end{align}
where $\lambda_{i}$ and $v_{i}$ are the unknowns.
The iterative method derived above solves a diagonally scaled version of this
system by forward substitution. The forward substitution removes the
non-linearity.
\subsection{Algorithm and complexity}
\label{sec:algorithm:and:complexity}
\begin{algorithm2e}[tb]
\caption{Approximation of the Parametric Eigenvalue Problem by Taylor Polynomials.}
\label{algo:taylor1}%
\KwIn{$A_{0}, A_{1}, A_{2} \dotsc, A_{p}$ with
$A(\mu) \approx A_{0} + (\mu-\mu_{0})A_{1} +
\frac{1}{2!}(\mu-\mu_{0})^{2}A_{2} + \dotsb$.} %
\KwOut{Taylor coefficients for approximations of $\lambda(\mu)$ and $v(\mu)$
in the neighborhood around $\mu_{0}$.} %
Find one eigenpair $(v_{0},\lambda_{0})$ of $A_{0}$ with
$A_{0}v_{0}=v_{0}\lambda_{0}$.\\%
Compute $E =
\begin{bmatrix}
0 & v_{0}^{H}\\
v_{0} & \lambda_{0}I-A_{0}
\end{bmatrix}$ and, if appropriate, a decomposition of $E$.\\%
\For{$k=1,\dotsc,p$}{%
$y := 0$\;%
$z := 0$\;%
\For{$\ell=0,\dotsc,k-1$}{%
$y := y + \binom{k}{\ell} A_{k-\ell}v_{\ell}$\;%
\If{$\ell\geq 1$}{%
$y := y - \binom{k}{\ell} v_{k-\ell}\lambda_{\ell}$\;%
$z := z + \binom{k}{\ell} v_{k-\ell}^{H}v_{\ell}$\;%
}%
}%
Solve $E\begin{bmatrix}%
\lambda_{k}\\v_{k}%
\end{bmatrix} :=
\begin{bmatrix}
-z/2\\y
\end{bmatrix}$\;%
}
\end{algorithm2e}
Algorithm~\ref{algo:taylor1} shows the steps needed to compute a Taylor
approximation for the eigenpair $(v(\mu),\lambda(\mu))$. This has to be
repeated for each eigenpair of interest. Thus up to $n$ times if all eigenpairs
are required.
We will now assume that the $A_{k}$ are dense matrices, and that a matrix-vector
product costs approximately $O(n^{2})$ flops and a matrix-matrix product
$O(n^{3})$ flops. For small to medium sized $n$, modern computers with modern
(blocked) implementations typically use algorithms with these computational
complexities.\footnote{Only for (very) large $n$ are Strassen-type algorithms
occasionally used for matrix-matrix multiplication.} However, the runtime of
these operations is often limited by the latency of the memory access and the
size of the cache. Thus the runtime of the matrix-matrix multiplication grows
quadratically for many examples with \mbox{$n<100$} or even \mbox{$n<1000$}.
The inner loop of Algorithm~\ref{algo:taylor1} consists of $k$ matrix-vector
products. Computations comparable to an additional matrix-vector product are
required for the solution of the linear system if a decomposition of $E$ is
available. Thus the number of flops for the outer loop can be estimated by
\begin{align*}
O(p^{2}n^{2}).
\end{align*}
The most expensive parts outside the loop are the decomposition of $E$ at
$O(n^{3})$ and the computation of the eigenpair $(v_{0},\lambda_{0})$, also in
$O(n^{3})$ if inverse iteration or Francis's implicitly shifted QR algorithm is
used.\footnote{If Francis's algorithm is used, then all eigenpairs are computed
in $O(25n^{3})$ \cite{GolV13}.} Only for the largest eigenvalue(s) can the power
method, at a cost of about $O(n^{2})$ per iteration, be used.
In total we need $O(n^{3}+p^{2}n^{2})$ flops to compute the Taylor approximation
of degree $p$ of a single eigenpair. When computing all eigenpairs we can make
use of the decomposition \eqref{eq:decomposition_of_E}. That means we need to
compute the Schur decomposition at $O(25n^{3})$ once and $O(p^{2}n^{2})$ flops
per eigenvalue for a total of $O((25+p^{2})n^{3})$ flops. In the next
subsection we will demonstrate that this algorithm works and that the numerical
experiments do not contradict the complexity estimates.
\subsection{Proof-of-Concept}
\label{sec:proofofoconcpet}
In this section we will discuss a proof-of-concept for Algorithm~\ref{algo:taylor1}.
\begin{example}\label{example:1}
We pick $n$ points $p_{i}=(x_{i},y_{i},z_{i})$ on a torus with
\begin{align*}
x_{i} &= \cos(2\pi\theta_{i})(5 + \cos(4\pi\theta_{i}))\\
y_{i} &= \sin(2\pi\theta_{i})(5 + \cos(4\pi\theta_{i}))\\
z_{i} &= \sin(4\pi\theta_{i})
\end{align*}
and $\theta_{i}=\frac{i}{n}$, $i=1,\dotsc,n$. These points are aligned on a
line coiled twice around a torus. We then define a matrix $U$ by
$U_{ij}=\norm{p_{i}-p_{j}}_{2}$ and
\begin{align*}
A(\mu) = \exp_{\circ}(-\mu U).
\end{align*}
We look at the eigenvalues of $A(\mu)$ with $\mu$ in the interval $[0,1.5]$.
The eigenvalues intersect in this interval and we are interested in verifying
that Algorithm~\ref{algo:taylor1} can deal with that.
At first we pick $n=8$.
\end{example}
We choose $\mu_{0}=0.2$ and compute the 6th order Taylor polynomial to
approximate~$\lambda(\mu)$. The resulting Taylor polynomials are depicted in
Figure~\ref{fig:example1_1_8_7_0.20_1} by the blue lines. The parameter~$\mu$ is plotted
along the $x$-axis, while the non-negative eigenvalues are shown along the
$y$-axis.
\input{code/fig41} %
For comparison we discretize the interval with 151 discretization points
$\hat{\mu}_{i}$ and solve the resulting standard eigenvalue problems for all
$\hat{\mu}_{i}$. The resulting eigenvalues are shown as red crosses. Our goal is
that there is one blue line for each sequence of red crosses ideally going
through the crosses.
One can observe that near the expansion point $0.2$ the blue lines of the
Taylor approximation follow the red crosses. In particular the behavior at
$0$---one eigenvalue converges to $n=8$, the other seven to $0$---is
represented well.
For $\mu>0.4$ the blue lines show a visible difference from the red crosses. Some
of the blue lines even go below $0$ or above $8$. It is known that for
$\mu\rightarrow\infty$ all eigenvalues converge to $1$. The red crosses
naturally exhibit this convergence while the blue lines do not. This is an
inherent limitation of using polynomials to approximate these functions.
Figure~\ref{fig:example1_1_8_7_0.20_1} also displays a magnified plot around the intersection
of the second and third largest eigenvalue. In the magnified plot the crosses
and the Taylor approximation exhibit the same qualitative behavior. These
intersection points are relevant since the dominant eigenspaces change
significantly around them.
The sum of all eigenvalues for a fixed $\mu$ is equal to $n$; $n=8$ in
our example. This can be shown by verifying that the trace is equal to $n$: since
$U_{ii}=0$, we have $A_{ii}=1$ for all $i$. As shown in Table~\ref{tab:example1:coefficients} the
Taylor approximations show this behavior, too. Note that we have not used any
additional restrictions on the coefficients enforcing the sum of the eigenvalues to
be $8$. One can observe that the other Taylor coefficients add up to
approximately $0$. However, with increasing $p$ the sum of the $\lambda_{p}$
seems to get further and further away from $0$. For $p=10$ the sum is already
$3.78_{10^{-6}}$ and for $p=15$ the sum is $5.25_{10^{+2}}$. It appears that
the accumulation of errors imposes a limit on the maximum degree of the Taylor
expansion.
We also observe in Table~\ref{tab:example1:coefficients} that the Taylor
coefficients grow with increasing order.
\begin{table}[t]%
\caption{Coefficients of Taylor series approximations}%
\centering%
\include{code/tab41}
\label{tab:example1:coefficients}
\end{table}
We also timed the algorithm, see Table~\ref{tab:timings}, to check if the
experimental data is in accordance with the complexity, $O(25n^{3}+p^{2}n^{3})$,
discussed in Section~\ref{sec:algorithm:and:complexity}. We observe timings
in accordance with a complexity of $n^{3}$. In the first two column groups we double
$n$ from row to row, but only sometimes observe that the runtime increases
more than fourfold. In the last column group we double $p$ from row to row. The
runtime seems to grow more slowly than $p^{2}$.
\begin{table}[t]
\caption{Runtime $t_{i}$ of Algorithm~\ref{algo:taylor1} for the computation of all
eigenpairs for different combinations of $n$ and $p$.}
\centering
\begin{tabular}{rrrr|rrrr|rrrr}
\toprule
$n$ & $p$ & $t_{i}$ in s & $\tfrac{t_{i}}{t_{i-1}}$&
$n$ & $p$ & $t_{i}$ in s & $\tfrac{t_{i}}{t_{i-1}}$&
$n$ & $p$ & $t_{i}$ in s& $\tfrac{t_{i}}{t_{i-1}}$\\
\midrule
8 & 2 & 0.0017 & ---&
8 & 7 & 0.0017 & ---&
8 & 2 & 0.0009 & ---\\
16 & 2 & 0.0018 & 1.04 &
16 & 7 & 0.0046 & 2.76 &
8 & 4 & 0.0016 & 1.79 \\
32 & 2 & 0.0041 & 2.30 &
32 & 7 & 0.0078 & 1.70 &
8 & 8 & 0.0038 & 2.37 \\
64 & 2 & 0.0064 & 1.58 &
64 & 7 & 0.0200 & 2.56 &
8 & 16 & 0.0065 & 1.69 \\
128 & 2 & 0.0317 & 4.93 &
128 & 7 & 0.1723 & 8.59 &
8 & 32 & 0.0162 & 2.52 \\
256 & 2 & 0.1349 & 4.25 &
256 & 7 & 0.9903 & 5.75 &
8 & 64 & 0.0595 & 3.66 \\
\bottomrule\\[0.2ex]
\end{tabular}
\label{tab:timings}
\end{table}
\input{code/fig52}
\input{code/fig53}
In Figure~\ref{fig:example4_1_8_26_0.20_1} we show the difference between the Taylor
approximation and the eigenvalues computed after discretizing the parametric
eigenvalue problem. To declutter the plot, we show the maximum over all eight
eigenvalues. We observe that the 20th-order Taylor approximation has an error of about
$10\epsilon_{\text{mach}}$ in the interval $[0.1,0.3]$.
Due to the factor $(\mu-\mu_{0})^{p}$ the higher Taylor coefficients play a
secondary role for the approximation of the eigenvalues for $\mu$ close to the
expansion point. However, one can clearly see that for $\mu=0.6$ and for larger
$\mu$ an increase in the degree does not provide an improvement of the
approximation. This is caused by the accumulation of errors as described above.
To check this hypothesis we have redone the computations for
Figure~\ref{fig:example4_1_8_26_0.20_1} with the only change that the matrix $E$ is rounded to
single precision instead of double precision. The results are shown in
Figure~\ref{fig:example4vpa_1_8_18_0.20_1}. We can observe that higher order approximations do
not improve the quality of the approximation due to the accumulation of errors.
\section{Chebyshev Expansion}
\label{sec:chebyshev}
We observed in the last section that the quality of the Taylor approximations
drops quickly as we move further away from the expansion point. This is expected
but unsatisfactory. Increasing the degree of the Taylor expansion can increase
the interval for which we obtain a satisfactory approximation as seen in
Figure~\ref{fig:example4_1_8_26_0.20_1} and~\ref{fig:example4vpa_1_8_18_0.20_1}.
However, as the preliminary numerical experiments above and the ones in
Section~\ref{sec:numerical:experiments} show, this is limited by the
accumulation of errors. Hence, we need a different approach if we are interested
in an accurate approximation of $\lambda(\mu)$ over a large interval.
We decided to use the Chebyshev expansion of~$A(\mu)$, $\lambda(\mu)$, and~$v(\mu)$.
Let $\{U_{0}(\mu),U_{1}(\mu),U_{2}(\mu),\dotsc \}$ be an orthonormal basis of
Chebyshev polynomials of second kind scaled to the interval
$[\mu_{1},\mu_{2}]$.\footnote{This is an arbitrary choice. Alternatively,
Chebyshev polynomials of first kind can be used with minor adaptions.} We can
then write
\begin{align}
A(\mu) &= A_{0}U_{0}(\mu) + A_{1}U_{1}(\mu) + A_{2}U_{2}(\mu) + \dotsc, \label{eq:def:A}\\
\lambda(\mu) &= \lambda_{0}U_{0}(\mu) + \lambda_{1}U_{1}(\mu) + \lambda_{2}U_{2}(\mu) + \dotsc,\quad\text{and}\nonumber\\
v(\mu) &= v_{0}U_{0}(\mu) + v_{1}U_{1}(\mu) + v_{2}U_{2}(\mu) + \dotsc.\nonumber
\end{align}
In Section~\ref{sec:taylor} we assumed that the $A_{i}$ are given. This makes
sense for Example~\ref{example:1} and for other examples since the entry-wise
derivative of $A(\mu)$ can be obtained symbolically in an efficient way. For the
Chebyshev expansion it is unlikely that the $A_{i}$ are given,
i.e.\ that the matrix $A(\mu)$ is given as a polynomial in a Chebyshev basis. Thus
we first have to compute them by
\begin{align}
A_{i} := \langle A(\mu), U_{i}(\mu) \rangle_{U} := \int_{\mu_{1}}^{\mu_{2}} A(\mu)\, U_{i}(\mu)\,
\frac{2}{\mu_{2}-\mu_{1}}\sqrt{1-\left(\frac{2\mu - \mu_{2}-\mu_{1}}{\mu_{2}-\mu_{1}}\right)^{2}}\,\mathrm{d}\mu.
\label{eq:sp:cheb}
\end{align}
This can be achieved by a quadrature formula or other standard algorithms. We
choose to use the Chebfun package for these computations in our Matlab
code. In~\eqref{eq:sp:cheb} lies a main difference from the Taylor expansion
approach: when using the Taylor expansion, $A_{0}$ is equal to $A(\mu_{0})$,
while when using Chebyshev, $A_{0}$ is a weighted average of $A(\mu)$ over
$[\mu_{1},\mu_{2}]$.
Equation \eqref{eq:sp:cheb} defines an inner product in which the Chebyshev
basis of second kind is orthonormal, that is
\begin{align*}
\langle U_{i}(\mu), U_{j}(\mu) \rangle_{U}=\delta_{ij}.
\end{align*}
We can estimate the approximation error made by truncating \eqref{eq:def:A} with the
help of this inner product:
\begin{align*}
A(\mu) =& A_{0}U_{0}(\mu) + A_{1}U_{1}(\mu) + A_{2}U_{2}(\mu) + \dotsc +
A_{p}U_{p}(\mu) + \Delta_{p}(\mu),\\
\norm{A(\mu)}_{U}^{2} :=& \langle A(\mu), A(\mu)\rangle_{U} =
\sum_{i=0}^{p} A_{i}^{2} + \langle \Delta_{p}(\mu),\Delta_{p}(\mu)\rangle_{U},
\end{align*}
where we used that $\langle U_{i}(\mu), \Delta_{p}(\mu)\rangle_{U}=0$ for $i\leq p$.
The Chebyshev basis is degree graded, with the degree of $U_{i}$ being
$i$. Thus the degree of the product $U_{i}U_{j}$ is $i+j$. However, the
product is not equal to $U_{i+j}$, as is the case for the basis
$T_{i}=(\mu-\mu_{0})^{i}$ used for the Taylor expansion, where $T_{i}T_{j}=T_{i+j}$. In fact for Chebyshev
polynomials of second kind we have
\begin{align}
\label{eq:uiuj}
U_{i}U_{j} = U_{i+j} + U_{i+j-2} + \dotsb + U_{\abs{i-j}+2} + U_{\abs{i-j}}.
\end{align}
If we ignore all but the first term in \eqref{eq:uiuj}, then we obtain an
equation very similar to \eqref{eq:taylor:nonlin}:
\begin{align*}\left[
\begin{array}{cc|cc|cc|cc}
0 & v_{0}^{T}&&&&&&\\
v_{0}& -A_{0}&&&&&&\\\midrule
0 & v_{1}^{T}& 0 & v_{0}^{T}&&&&\\
v_{1}& -A_{1}&v_{0}&-A_{0}&&&&\\\midrule
0 & v_{2}^{T} & 0 & v_{1}^{T}& 0 & v_{0}^{T}&&\\
v_{2}& -A_{2}&v_{1}& -A_{1}&v_{0}& -A_{0}&&\\\midrule
0 & v_{3}^{T}& 0 & v_{2}^{T} & 0 & v_{1}^{T}& 0 & v_{0}^{T}\\
v_{3}&
-A_{3}&v_{2}&-A_{2}&
v_{1}&-A_{1}&
v_{0}&
-A_{0}
\end{array}\right]\left[\begin{array}{c}
\lambda_{0}\\
v_{0}\\\midrule
\lambda_{1}\\
v_{1}\\\midrule
\lambda_{2}\\
v_{2}\\\midrule
\lambda_{3}\\
v_{3}
\end{array}\right] = \left[\begin{array}{c}
1\\
0\\\midrule
0\\
0\\\midrule
0\\
0\\\midrule
0\\
0
\end{array}\right].
\end{align*}
Solving this equation does \emph{not} give us the correct solution, because we
made a severe simplification. However, it provides an approximation to the
solution, which can be computed iteratively, similarly to
Algorithm~\ref{algo:taylor1}, due to the block lower triangular structure. We can
obtain this approximation also in $O(25n^{3} + p^{2}n^{2})$ for one eigenpair or
$O((25+p^{2})n^{3})$ for all eigenpairs.
If we do not ignore the terms in~\eqref{eq:uiuj}, then the system of equations
for $p=3$ is
\begin{align}\left[
\begin{array}{cc|cc|cc|cc}
0 & v_{0}^{T}&0&{\color{SPECred}v_{1}^{T}}&&&&\\
v_{0}& -A_{0}&{\color{SPECred}v_{1}}&{\color{SPECred}-A_{1}}&&&&\\\midrule
0 & v_{1}^{T}& 0 & v_{0}^{T}+{\color{SPECblue}v_{2}^{T}}&0&{\color{SPECorange}v_{1}^{T}}&&\\
v_{1}& -A_{1}&v_{0}+{\color{SPECblue}v_{2}}&-A_{0}{\color{SPECblue}-A_{2}}&{\color{SPECorange}v_{1}}&{\color{SPECorange}-A_{1}}&&\\\midrule
0 & v_{2}^{T} & 0 & {\color{SPECred}v_{1}^{T}}& 0 & v_{0}^{T}&&\\
v_{2}& -A_{2}&{\color{SPECred}v_{1}}& {\color{SPECred}-A_{1}}&v_{0}& -A_{0}&&\\\midrule
0 & v_{3}^{T}& 0 & {\color{SPECblue}v_{2}^{T}} & 0 & {\color{SPECorange}v_{1}^{T}}& 0 & v_{0}^{T}\\
v_{3}&-A_{3}&{\color{SPECblue}v_{2}}&{\color{SPECblue}-A_{2}}& {\color{SPECorange}v_{1}}&{\color{SPECorange}-A_{1}}& v_{0}& -A_{0}
\end{array}\right]\left[\begin{array}{c}
\lambda_{0}\\
v_{0}\\\midrule
\lambda_{1}\\
v_{1}\\\midrule
\lambda_{2}\\
v_{2}\\\midrule
\lambda_{3}\\
v_{3}
\end{array}\right] = \left[\begin{array}{c}
1\\
0\\\midrule
0\\
0\\\midrule
0\\
0\\\midrule
0\\
0
\end{array}\right], \label{eq:nonlin:sy}
\end{align}
where the colored blocks occur multiple times. The system \eqref{eq:nonlin:sy}
is no longer block lower triangular and thus cannot be solved by forward
substitution. We hence fall back on solving this non-linear system by Newton's
method. In each step we have to compute and invert the Jacobian matrix. This is a
matrix of size $(p+1)(n+1)$. Thus the costs are in $O(n^{3}p^{3})$ for each
Newton step for each eigenvalue. We observe that 4 to 8 Newton steps are
typically sufficient, since we already start with a good approximation.
We truncate the series for $A(\mu)$, $\lambda(\mu)$, and $v(\mu)$ all after
$p+1$ terms, so that they are all polynomials of degree $p$. Different choices
may be possible, but we did not see an advantage in using different degrees for
$A$, $\lambda$, and $v$ in our preliminary experiments.
Nonlinear systems can have many solutions. However, the special construction of
\eqref{eq:nonlin:sy} ensures that each of its solutions represents a Chebyshev
approximation of an eigenpair of the parametric eigenvalue problem. For the
solution
\begin{align*}
&\begin{bmatrix}
\lambda_{0} & \phantom{\gamma}v_{0} & \lambda_{1} & \phantom{\gamma}v_{1} & \dotsb & \lambda_{p} & \phantom{\gamma}v_{p}
\end{bmatrix}^{T}
\intertext{%
there are infinitely many more of the form:%
}
&\begin{bmatrix}
\lambda_{0} & \gamma v_{0} & \lambda_{1} &\gamma v_{1} & \dotsb & \lambda_{p} & \gamma v_{p}
\end{bmatrix}^{T}
\end{align*}
for all $ \gamma$ with $\abs{\gamma}=1$. Newton's method does not guarantee that
we converge to the closest root of these. Thus, despite starting with $n$ initial
approximations, one close to each root, there is no guarantee that we end up
with $n$ distinct eigenpairs $(\lambda(\mu),\gamma v(\mu))$ not just differing
in $\gamma$.
For polynomial rootfinding this problem can be overcome by the Ehrlich-Aberth
method \cite{aberth1973iteration,ehrlich1967modified}, see for instance
\cite{bini2014solving}. This is possible for polynomial rootfinding since the
eigenvectors corresponding to each root can be derived easily from said
root. Unfortunately, this is not possible here. As a consequence we are left
with the hope that the starting vectors are sufficiently close. In the special
case that all eigenvalues and eigenvectors are real, there are only two choices
for $\gamma$, $\gamma=1$ and $\gamma=-1$. In this case we had some heuristic
success in employing the Ehrlich-Aberth method. However, with poorly chosen
start-vectors it was still possible to trick the algorithm into finding the same
eigenpair twice.
\subsection{Accuracy}
We now want to discuss the accuracy of the computed and truncated Chebyshev
expansions for $\lambda(\mu)$ and $v(\mu)$. The Chebyshev theorem states that
the best approximation $b_{n}$ in the Chebyshev basis fulfills
\begin{align}
f(x) - b_{n}(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n}(x-z_{i}), \label{eq:cheb:thm}
\end{align}
for unknown interpolation points $z_{i}\in [\mu_{1},\mu_{2}]$ and an unknown
point $\xi\in[\mu_{1},\mu_{2}]$ \cite{Dzy95}. The reason is that the Chebyshev
approximation $b_{n}(x)$ intersects $f(x)$ at (at least) $n+1$ unknown points in
$[\mu_{1},\mu_{2}]$. Thus it is an interpolation polynomial at these points, and
for such an interpolation polynomial \eqref{eq:cheb:thm} holds. We can bound the
right-hand side by
$(\mu_{2}-\mu_{1})^{n+1} \sup_{\xi\in[\mu_{1},\mu_{2}]} \frac{\abs{f^{(n+1)}(\xi)}}{(n+1)!}$. This
is a good bound for the eigenvalues, but for eigenvectors the derivative may be
huge when two eigenvalues are close to each other, see \cite[Sect.~7.2.2]{GolV13}
for details.
Additionally, the Chebyshev coefficients we compute with Newton's method are
perturbed because we only approximate the solution of \eqref{eq:nonlin:sy}.
\subsection{Proof-of-Concept}
We repeat the experiments from Section~\ref{sec:proofofoconcpet} based on
Example~\ref{example:1} with Chebyshev expansion instead of Taylor series
expansion.
\input{code/fig52_cheb}
Figure~\ref{fig:example4_0_8_20_0.25_1} is the Chebyshev version of
Figure~\ref{fig:example4_1_8_26_0.20_1} with Chebyshev expansions on the
interval $[\tfrac{1}{4},1]$. Contrary to the Taylor approximation we do not just
see a small error near the expansion point but a small error over the whole
interval. The error increases slightly towards the endpoints of the intervals,
with the error near $1$ being $10^{-13}$, while the error near $\tfrac{1}{4}$ is
closer to $10^{-14}$. Outside the approximation interval the error increases the
further we are from the endpoints. Here, we do not observe that an increase in
the order leads to worse results, since the use of Newton's method to solve the
nonlinear equation gets us around the error accumulation problem. However, in
Figure~\ref{fig:example4_0_8_30_0.50_2} depicting the error in the eigenvectors,
we observe that the error outside the approximation interval increases for large
$p$.
\section{Numerical Experiments}
\label{sec:numerical:experiments}
In the last section we used Example~\ref{example:1} to verify that the proposed
methods work for the arguably easiest class of problems, symmetric positive definite
matrices. In this section we will present further experiments for other
examples.
\begin{figure}
\centering
\scalebox{0.8}{
\begin{tikzpicture}
\fill [pattern = north west lines] (0,-1) rectangle (-0.2,1);
\draw [thick] (0,-1) -- (0,1);
\draw [decoration ={%
coil,%
segment length = 1mm,%
amplitude = 2mm,%
aspect = 0.5,%
post length = 2mm,%
pre length = 2mm},%
decorate] (0,0) -- (1.5,0) node [midway, above =1.0ex]{$1$};
\node[draw, minimum width = 1cm, minimum height = 0.75cm, anchor=west] (m1) at (1.5,0) {1};
\draw [decoration ={%
coil,%
segment length = 1mm,%
amplitude = 2mm,%
aspect = 0.5,%
post length = 2mm,%
pre length = 2mm},%
decorate] (2.5,0) -- (4.0,0) node [midway, above =1.0ex]{$1$};
\node[minimum width = 1cm, minimum height = 0.75cm, anchor=west] (m2) at (4.0,0) {$\dotsm$};
\draw [decoration ={%
coil,%
segment length = 1mm,%
amplitude = 2mm,%
aspect = 0.5,%
post length = 2mm,%
pre length = 2mm},%
decorate] (5.0,0) -- (6.5,0) node [midway, above =1.0ex]{$1$};
\node[draw, minimum width = 1cm, minimum height = 0.75cm, anchor=west] (m3) at (6.5,0) {$\mu$};
\draw [decoration ={%
coil,%
segment length = 1mm,%
amplitude = 2mm,%
aspect = 0.5,%
post length = 2mm,%
pre length = 2mm},%
decorate] (7.5,0) -- (9.0,0) node [midway, above =1.0ex]{$1$};
\node[draw, minimum width = 1cm, minimum height = 0.75cm, anchor=west] (m4) at (9.0,0) {$\mu$};
\draw [decoration ={%
coil,%
segment length = 1mm,%
amplitude = 2mm,%
aspect = 0.5,%
post length = 2mm,%
pre length = 2mm},%
decorate] (10.0,0) -- (11.5,0) node [midway, above =1.0ex]{$1$};
\node[minimum width = 1cm, minimum height = 0.75cm, anchor=west] (m5) at (11.5,0) {$\dotsc$};
\draw [decoration ={%
coil,%
segment length = 1mm,%
amplitude = 2mm,%
aspect = 0.5,%
post length = 2mm,%
pre length = 2mm},%
decorate] (12.5,0) -- (14.0,0) node [midway, above =1.0ex]{$1$};
\fill [pattern = north east lines] (14,-1) rectangle (14.2,1);
\draw [thick] (14,-1) -- (14,1);
\end{tikzpicture}}
\caption{Example~\ref{example:2}---Sketch of the springs and masses.}%
\label{fig:example2}%
\end{figure}
\begin{example}\label{example:2}
We investigate a sequence of masses connected with springs as in
Figure~\ref{fig:example2}. All springs have a stiffness of~$1$. All but two
masses in the middle have mass~$1$. The two masses in the middle are both of
mass~$\mu$. This example is an extreme simplification of a passenger of
unknown mass in a car.
This problem leads to a generalized eigenvalue problem dependent on~$\mu$. As
mentioned earlier generalized eigenvalue problems are significantly more
difficult to handle with the expansion approaches. Thus we choose to turn the
problem into a standard eigenvalue problem. Inversion of the mass matrix leads
to
\begin{align*}
\setlength\arraycolsep{4pt}
A(\mu) =
\begin{bmatrix}
1\\
&\smash{\ddots}\\
&&1\\
&&&\mu\\
&&&&\mu\\
&&&&&1\\
&&&&&&\smash{\ddots}\\
&&&&&&&1\\
\end{bmatrix}^{-1}
\begin{bmatrix}
2&-1\\
-1&\smash{\ddots}&\smash{\ddots}\\
&\smash{\ddots}&2&-1\\
&&-1&2&-1\\
&&&-1&2&-1\\
&&&&-1&2&\smash{\ddots}\\
&&&&&\smash{\ddots}&\smash{\ddots}&-1\\
&&&&&&-1&2\\
\end{bmatrix}.
\end{align*}
Thus only two rows of $A(\mu)$ depend on $\mu$.
The matrix $A(\mu_{0})$ is not symmetric in this example. However, using the
square root of the mass matrix would allow one to construct a symmetric problem
instead.
\end{example}
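For concreteness, the following minimal Python sketch (using NumPy; the helper
name \texttt{build\_A} is ours) assembles $A(\mu)$ for this example:
\begin{verbatim}
import numpy as np

def build_A(mu, n=8):
    # stiffness matrix K: tridiagonal (-1, 2, -1)
    K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    # mass vector: all ones except the two middle masses, which are mu
    m = np.ones(n)
    m[n//2 - 1] = m[n//2] = mu
    # A(mu) = M(mu)^{-1} K; only two rows depend on mu
    return K / m[:, None]

print(np.linalg.eigvals(build_A(0.8)))
\end{verbatim}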
We apply Algorithm~\ref{algo:taylor1} with $\mu_{0}=0.8$ and its Chebyshev
sibling for the interval $[\mu_{1},\mu_{2}]=[0.5,1.0]$ to
Example~\ref{example:2}. The resulting eigenvalues are depicted in
Figure~\ref{fig:example1_1_8_7_0.80_2}.
Figure~\ref{fig:example4_1_8_26_0.80_2}
shows the error of the Taylor approximation to sampled
eigenvalues. Figure~\ref{fig:example4_0_8_30_0.50_2} does the same for the
Chebyshev approximation. We observe that in this example a higher order is
needed for the Chebyshev approximation than for the Taylor approximation to
obtain the same level of accuracy. However, the Chebyshev approximation is
better over a larger interval. The example demonstrates that the technique
computing the Chebyshev approximation does not suffer from an accumulation of
errors, and thus higher order approximations can be computed sufficiently
accurately. It also shows that the Chebyshev approximation is of good quality in
the inner part of the interval $[\mu_{1},\mu_{2}]$. However, near the endpoints
the quality deteriorates significantly.
\input{code/fig61_both}
\input{code/fig62}
\input{code/fig62_cheb}
The examples we looked at so far had a full set of eigenvectors for all
parameter values in the interval of interest. The following example uses a
basic Jordan block to test the algorithm on a problem with defective
eigenvalues.
\begin{example}
\label{example:3}
We use a Jordan block with a parameter in the lower left corner, that is
\begin{align*}
A(\mu) =
\begin{bmatrix}
1 & 1 & &&0\\
0 & 1 & 1\\
\vdots&\ddots & \ddots &\ddots\\
0&\cdots & 0& 1& 1\\
\mu &0&\cdots & 0 &1
\end{bmatrix}
\end{align*}
The eigenvalues of $A(\mu)$ are the roots of the characteristic polynomial
\begin{align*}
p(\lambda,\mu) = (\lambda-1)^{n}-\mu.
\end{align*} The roots are the $n$th roots of unity $\rho_{1},\dotsc,\rho_{n}$,
satisfying $\rho^{n}=1$, scaled by $\mu^{1/n}$ and shifted by $1$ to the right.
\end{example}
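This closed form is easy to check numerically; a minimal sketch (using NumPy):
\begin{verbatim}
import numpy as np

n, mu = 6, 0.5
A = np.eye(n) + np.eye(n, k=1)   # Jordan block
A[-1, 0] = mu                    # parameter in the lower left corner
rho = np.exp(2j*np.pi*np.arange(n)/n)       # n-th roots of unity
expected = 1 + mu**(1.0/n)*rho
computed = np.linalg.eigvals(A)
print(np.sort_complex(expected))
print(np.sort_complex(computed))            # agrees up to rounding
\end{verbatim}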
We note that we primarily focused our algorithms on the case of non-defective,
preferably positive definite matrices, a setting that Example~\ref{example:3}
very much violates. Nevertheless, we are interested in whether the techniques
described in this paper still produce meaningful results in this case.
We apply Algorithm~\ref{algo:taylor1} with $\mu_{0}=0.2$ and $\mu_{0}=-0.2$ and
its Chebyshev sibling with $[\mu_{1},\mu_{2}]=[0.5,1.0]$ to
Example~\ref{example:3}. The real and imaginary parts of the Taylor
approximation eigenvalues are depicted in
Figure~\ref{fig:example1_1_8_26_0.20_3} for $\mu_{0}=0.2$. A similar figure can
be obtained
for $\mu_{0}=-0.2$. We observe that the Taylor series approximations do not
provide useful approximations beyond the singularity at $\mu=0$. This is
expected, since the eigenvalues are not analytic at
$\mu=0$. Algorithm~\ref{algo:taylor1} failed for the expansion point
$\mu_{0}=0$.
Using the Chebyshev approximation we observe a very similar behavior, see
Figure~\ref{fig:example1_0_8_20_0.10_3}. The algorithm failed to provide a
meaningful approximation when $0\in[\mu_{1},\mu_{2}]$. The Chebyshev
approximation is also not capable of approximating the eigenvalues beyond $0$.
\input{code/fig71}
\input{code/fig71_cheb}
\subsection{Assessing the Quality of the Eigenvector Approximation}
We are now going to assess the quality of the eigenvector approximation. We use
the Taylor and Chebyshev approximation of the eigenvectors, respectively. We
observe that despite our efforts to produce normalized eigenvectors, the
function values $v(\mu)$ are not of norm $1$. In particular, the
Chebyshev approximation technique produces eigenvectors far from norm $1$. For
our comparison we evaluate the function $v(\mu)$ for a particular $\mu$ and then
normalize the result before comparing it to the normalized eigenvectors of
$A(\mu)$.
We fix $\mu_{j}$. Let $V_{s}$ be the matrix of eigenvectors of $A(\mu_{j})$ with
$A(\mu_{j})V_{s}=V_{s}D_{s}$, with $D_{s}$ diagonal, and let $V$ be the result
of normalizing $v(\mu_{j})$. We use the Matlab command
\texttt{max(abs(max(abs(Vs'*V))-1))} to compute the maximum deviation of the
eigenvectors for $\mu_{j}$.
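In NumPy the same deviation measure reads as follows (a sketch; \texttt{Vs} and
\texttt{V} are assumed to hold the normalized eigenvectors column-wise):
\begin{verbatim}
import numpy as np

def eigvec_deviation(Vs, V):
    # Matlab: max(abs(max(abs(Vs'*V)) - 1))
    C = np.abs(Vs.conj().T @ V)   # |cosines| between eigenvector pairs
    return np.max(np.abs(C.max(axis=0) - 1.0))
\end{verbatim}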
Figures~\ref{fig:example5_1_8_26_0.20_1}--\ref{fig:example5_0_8_30_0.50_2} show
the deviation of the eigenvectors for Examples~\ref{example:1} and~\ref{example:2}. We observe in
Figures~\ref{fig:example5_1_8_26_0.20_1}
and~\ref{fig:example5_0_8_30_0.50_2} that increasing the order of the
approximation provides diminishing returns and eventually a decrease in
approximation quality.
Surprisingly, Figure~\ref{fig:example5_0_8_20_0.25_1} shows that order 10 is
sufficient to approximate the eigenvectors with the Chebyshev approach well,
while Figure~\ref{fig:example4_0_8_20_0.25_1} shows that we need a higher order
to approximate the eigenvalues well. Knowing the eigenvectors $v(\mu_{i})$ for a
given $\mu_{i}$ and a known matrix $A(\mu_{i})$ allows one to compute an
approximation to the eigenvalues $\lambda(\mu_{i})$ using the Rayleigh
quotient. It is a well-known fact that for symmetric matrices an approximation
$q$ to the eigenvector $v$ with $\norm{q-v}=O(\delta)$ implies that the Rayleigh
quotient $\rho=q^{T}Aq/q^{T}q$ approximates the eigenvalue $\lambda$ belonging to $v$ with
$\abs{\lambda-\rho}=O(\delta^{2})$, see for instance \cite[Exercise
5.4.33]{b148ed3}. Thus for symmetric matrices using the eigenvector approximation
together with the Rayleigh quotient is very likely going to produce a better
approximation to the eigenvalue. This is aided by the fact that for the Rayleigh
quotient one can use the exact $A(\mu)$. We tried this out and found a better
approximation than using $\lambda(\mu_{i})$, cp.\
Figure~\ref{fig:example7_0_8_20_0.25_1} with
Figure~\ref{fig:example4_0_8_20_0.25_1}.
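A sketch of this refinement (assuming NumPy; \texttt{A\_of\_mu} and
\texttt{v\_approx} are placeholders for the exact matrix function and the
eigenvector approximation computed elsewhere):
\begin{verbatim}
import numpy as np

def rayleigh_refine(A_of_mu, v_approx, mu):
    A = A_of_mu(mu)               # exact A(mu), cheap to form
    q = v_approx(mu)              # Taylor/Chebyshev eigenvector value
    return (q @ A @ q) / (q @ q)  # rho = q^T A q / q^T q
\end{verbatim}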
\input{code/fig61_taylor_1}
\input{code/fig61_taylor_2}
\input{code/fig61_cheb_1}
\input{code/fig61_cheb_2}
\input{code/fig71_cheb_1}
\subsection{Sampling}
We have presented a method to find approximations to $\lambda(\mu)$. These
approximations can be used to speed up the following sampling process: Assume a
distribution of $\mu$ is given and can be sampled. For the following tests we
use $\mu\sim \ensuremath{\mathcal{N}}(\mu_{0},0.01)$, that is, normally distributed around
$\mu_{0}=0.2$ with standard deviation $0.1$. We can thus produce samples
$\mu_{i}$ and use these to compute samples $\lambda(\mu_{i})$ of the
distribution of the eigenvalues. In the example below the second and third
largest eigenvalue are sampled. This is faster than computing $A(\mu)$ and then
using a standard eigenvalue algorithm to compute the eigenvalues of
$A(\mu)$. For $10,000$ samples using Example~\ref{example:1} with $n=8$ and
approximations of $6$th order we observed that $3.6938$ s were spent on forming
the matrix as a function, $0.0096$ s were spent on solving \eqref{eq:sevp} with
the Taylor approximation and $0.1464$ s with the Chebyshev approximation. For
sampling the eigenvalues using the full matrix $A(\mu)$ $1.6386$ s were needed,
while sampling with the obtained Taylor (using Horner's method) and Chebyshev
approximations could be done in $0.0807$ s and $0.0616$ s, respectively. Thus
the sampling was done 20.3 and 26.6 times faster, respectively, or 18.14 and 7.87
times faster if the one-time costs for solving \eqref{eq:sevp} are added to the
costs for the sampling. The more accurate results obtained through
the Rayleigh quotient are slightly more expensive and, thus, are only 5.58 and
5.89 times faster, respectively. We summarize these numbers together with the
theoretical complexities in Table~\ref{tab:sampling}.
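As an illustration of this sampling step, consider the following minimal sketch
(assuming NumPy; \texttt{taylor\_coeffs}, holding the precomputed Taylor
coefficients of one eigenvalue with the highest order first, and
\texttt{A\_of\_mu}, forming the symmetric matrix of Example~\ref{example:1},
are placeholders for quantities computed elsewhere). \texttt{numpy.polyval}
evaluates the polynomial by Horner's method:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu0, sigma, s = 0.2, 0.1, 10_000
k = 1                                  # index of the tracked eigenvalue
mus = rng.normal(mu0, sigma, size=s)   # parameter samples

# surrogate sampling: one Horner evaluation per sample, O(p s) overall
lam_taylor = np.polyval(taylor_coeffs, mus - mu0)

# direct sampling: one eigensolve per sample, O(s n^3) overall
lam_direct = np.array([np.linalg.eigvalsh(A_of_mu(m))[k] for m in mus])
\end{verbatim}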
\begin{table}[t]
\caption{Comparing direct sampling of $s=10,000$ eigenvalues of $A(\mu)$ with first
computing a Taylor or Chebyshev expansion, for $A(\mu)\in\ensuremath{\mathbb{R}^{n\times n}}$, with $n=8$ and
$p=6$.}
\centering
\begin{tabular}{ccccc}
\toprule
&& Taylor & Chebyshev & direct sampling\\
\midrule
\multirow{2}{*}{\rotatebox{90}{setup}}&theoretical costs & $O((25+p^{2})n^{3})$ & $O(n^{4})$ & ---\\
&runtime & 0.0096 s & 0.1464 s & ---\\
\midrule
\multirow{5}{*}{\rotatebox{90}{sampling}} & theoretical costs & $O(ps)$ & $O(ps)$ &
$O(sn^{3})$\\
&runtime & 0.0807 s & 0.0616 s & 1.6386 s\\
&speed up & 20.3$\times$ & 26.6$\times$ & ---\\
&with Rayleigh quot. & 0.2838 s & 0.1319 s & ---\\
&speed up & 5.77$\times$ & 12.4$\times$ & ---\\
\midrule
\multirow{4}{*}{\rotatebox{90}{combined}}
&runtime & 0.0903 s & 0.2080 s & 1.6386 s\\
&speed up & 18.14$\times$ & 7.87$\times$ & ---\\
&with Rayleigh quot. & 0.2934 s & 0.2783 s & ---\\
&speed up & 5.58$\times$ & 5.89$\times$ & ---\\
\bottomrule\\[0.2ex]
\end{tabular}
\label{tab:sampling}
\end{table}
With the help of the sampling we generated the histogram plots in
Figure~\ref{fig:histogram1}. The histogram plots appear visually very similar
and can be used to obtain a general impression of the distribution, even though
for $\mu$ far from $\mu_{0}$ or from $[\mu_{1},\mu_{2}]$ the
approximation quality is low.
We also plotted $\lambda(\mu)$ over $\mu$ (\,\begin{tikzpicture}[x=1cm,y=1cm]
\draw[very thick,SPECCorange] (0,0)--(0.4,0);
\node[minimum height=0,minimum width=0] at (0.2,0) {};
\end{tikzpicture}\,) in Figure~\ref{fig:curve1}
together with the error. We observe that the Taylor approximation (\,\begin{tikzpicture}[x=1cm,y=1cm]
\draw[very thick,SPECblue] (0,0)--(0.4,0);
\node[minimum height=0,minimum width=0] at (0.2,0) {};
\end{tikzpicture}\,) provides a
good approximation near $\mu_{0}$, while the Chebyshev approximation (\,\begin{tikzpicture}[x=1cm,y=1cm]
\draw[very thick,SPECgreen] (0,0)--(0.4,0);
\node[minimum height=0,minimum width=0] at (0.2,0) {};
\end{tikzpicture}\,) is better over the whole interval $[\mu_{1},\mu_{2}]$. We can again
observe that the Rayleigh quotient can be used to improve the approximation to
$\lambda(\mu)$ for both the Taylor and the Chebyshev approximation; dashed
lines \begin{tikzpicture}[x=1cm,y=1cm]
\draw[very thick,SPECblue,dashed] (0,0)--(0.4,0); \node[minimum
height=0,minimum width=0] at (0.2,0) {};
\end{tikzpicture}\,and\,\begin{tikzpicture}[x=1cm,y=1cm]
\draw[very thick,SPECgreen,dashed] (0,0)--(0.4,0);
\node[minimum height=0,minimum width=0] at (0.2,0) {};
\end{tikzpicture}, respectively. However, this improvement only works for symmetric matrices,
like Example~\ref{example:1}. Figure~\ref{fig:curve2} presents the last row of
Figure~\ref{fig:curve1} but for the non-symmetric Example~\ref{example:2};
hence, there is little or no improvement visible when using the Rayleigh
quotient approximation.
\begin{figure}
\centering
\includegraphics{histogram1}
\caption{Example~\ref{example:1}---Histogram plots of the distribution of the second and third
largest eigenvalues, $6$th order Taylor approximation with $\mu_{0}=0.20$ and
$6$th order Chebyshev approximation with $[\mu_{1},\mu_{2}]=[0.1,0.3]$.}%
\label{fig:histogram1}%
\end{figure}
\begin{figure}
\centering
\includegraphics{curve1}
\caption{Example~\ref{example:1}---$\lambda(\mu)$ over $\mu$ for the second (left) and
third (right) largest eigenvalue, exact eigenvalue
(orange line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECCorange] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {}; \protect\end{tikzpicture}\,), $6$th
order Taylor approximation with $\mu_{0}=0.20$ (solid
blue line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECblue] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {}; \protect\end{tikzpicture}\,), and
$6$th order Chebyshev approximation eigenvalue (solid
green line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECgreen] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {}; \protect\end{tikzpicture}\,) with
$[\mu_{1},\mu_{2}]=[0.1,0.3]$; and eigenvalue approximation using the Rayleigh
quotient based on the eigenvector approximation (dashed lines
\protect\begin{tikzpicture}[x=1cm,y=1cm] \protect\draw[very
thick,SPECblue,dashed] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {};
\protect\end{tikzpicture}\,and\,\protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECgreen,dashed] (0,0)--(0.4,0);
\protect\node[minimum height=0,minimum width=0] at (0.2,0) {};
\protect\end{tikzpicture}\,). For comparison the Rayleigh quotient
$v(\mu_{0})^{T}A(\mu)v(\mu_{0})/v(\mu_{0})^{T}v(\mu_{0})$
(grey line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECblack,dashed] (0,0)--(0.4,0);
\protect\node[minimum height=0,minimum width=0] at (0.2,0) {};
\protect\end{tikzpicture}\,).}%
\label{fig:curve1}%
\end{figure}
\begin{figure}
\centering
\includegraphics{curve2}
\caption{Example~\ref{example:2}---$\lambda(\mu)$ over $\mu$ for the second (left) and
third (right) largest eigenvalue, $6$th order Taylor approximation with
$\mu_{0}=0.80$ (solid
blue line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECblue] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {}; \protect\end{tikzpicture}\,), and $6$th order
Chebyshev approximation eigenvalue (solid
green line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECgreen] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {}; \protect\end{tikzpicture}\,)
with $[\mu_{1},\mu_{2}]=[0.6,1.0]$; and eigenvalue approximation using the
Rayleigh quotient based on the eigenvector approximation (dashed lines
\protect\begin{tikzpicture}[x=1cm,y=1cm] \protect\draw[very
thick,SPECblue,dashed] (0,0)--(0.4,0); \protect\node[minimum
height=0,minimum width=0] at (0.2,0) {};
\protect\end{tikzpicture}\,and\,\protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECgreen,dashed] (0,0)--(0.4,0);
\protect\node[minimum height=0,minimum width=0] at (0.2,0) {};
\protect\end{tikzpicture}\,). For
comparison the Rayleigh quotient
$v(\mu_{0})^{T}A(\mu)v(\mu_{0})/v(\mu_{0})^{T}v(\mu_{0})$
(grey line \protect\begin{tikzpicture}[x=1cm,y=1cm]
\protect\draw[very thick,SPECblack,dashed] (0,0)--(0.4,0);
\protect\node[minimum height=0,minimum width=0] at (0.2,0) {};
\protect\end{tikzpicture}\,).}%
\label{fig:curve2}%
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have presented a Taylor approximation based method to compute approximations
to $\lambda(\mu)$ and $v(\mu)$ for the parametric eigenvalue problem
\eqref{eq:sevp}. The presented algorithm works for small degrees $p$ and the
investigated examples. The algorithm accumulates errors as the degree $p$
increases. Together with the errors present in $E$ when forming the matrix in
double (or single) precision, this leads to a serious limitation regarding the
maximum degree $p_{\max}$ and restricts the usefulness of this approach to a
small neighborhood around $\mu_{0}$. We were able to verify that the runtime
complexity of this algorithm is within the expectations set by counting the
number of flops. The algorithm computes all eigenvalues in
$O((25+p^{2})n^{3})$.
To overcome the limitations we extended the approach to Chebyshev
approximation. This requires the solution of a non-linear system with Newton's
method. The Chebyshev approach has higher costs. Despite a good starting point
for the Newton iteration, there is no guarantee that all eigenvalues can be
found. However, in the experiments the Chebyshev approximations are superior to
the Taylor approximations, in particular since a high accuracy can be achieved
over a given interval and not just in the neighborhood of an expansion point. We
showed that the method can be used for the sampling of eigenvalues if a
distribution of the parameter is given. Depending on the number of sampling
points, the method presented here can be significantly faster than direct
Monte-Carlo sampling, in which each eigenvalue problem is solved individually.
\section*{Acknowledgments}\label{sec:ack}
The authors are grateful for informative discussions with Patrick K\"urschner
(HTWK Leipzig), Siobhan Correnty and Elias Jarlebring (both KTH Stockholm), and Omar
De La Cruz and Lothar Reichel (both Kent State University). We thank
Omar De La Cruz for pointing out \cite{alghamdi2022greedy} and
\cite{ruymbeek2022tensor}.
\section*{Availability of data and materials}
The code used for the numerical experiments is available from GitHub,
\url{https://github.com/thomasmach/PEVP_with_Taylor_and_Chebyshev}.
\section{INTRODUCTION}
In this paper, the distributed average tracking problem for a team of agents is studied, where each agent uses local information to calculate the average of individual time varying reference inputs, one per agent.
The problem has found applications in distributed sensor fusion \cite{olfati2004consensus}, feature-based map merging \cite{aragues2012distributed}, distributed optimization \cite{salarsheida2015}, and distributed Kalman filtering \cite{bai2011}, where the scheme has been mainly used as an estimator.
However, there are some applications such as region following formation control \cite{cheah2009region} and coordinated path planning \cite{vsvestka1998coordinated} that require the agents' physical
states instead of estimator
states to converge to a time varying network quantity, where each agent only has a local and incomplete copy of that quantity.
Since the desired trajectory is not available to any agent, the distributed average tracking problem poses more theoretical challenges than consensus and tracking problems.
In the literature, some researchers have employed linear distributed algorithms, where the time varying reference inputs are required to satisfy restrictive constraints
\cite{spanos2005dynamic,freeman2006,bai2010,kiaauthority,kiaDACsingularity}.
In \cite{freeman2006}, a proportional algorithm and a proportional-integral algorithm are proposed to achieve distributed average tracking for slowly-varying
reference inputs with a bounded tracking
error, where accurate estimator initialization is relaxed in the proportional-integral algorithm.
In \cite{bai2010}, an extension of the proportional integral algorithm is employed for a special group of time varying reference inputs with a common denominator in their Laplace transforms, where the denominator is required to be used in the estimator design.
In \cite{kiaauthority}, a distributed average tracking problem is addressed with some steady-state errors, where the privacy of each agent's input is preserved.
However, linear algorithms cannot ensure distributed average tracking for a more general group of reference inputs. Therefore, some researchers employ nonlinear tracking algorithms.
In \cite{Nosrati2012}, a class of nonlinear algorithms is proposed for reference inputs with bounded deviations, where the tracking error is proved to be bounded.
A non-smooth algorithm is proposed in \cite{chen2012distributed}, which is able to track arbitrary time varying reference inputs with bounded derivatives.
However, all the aforementioned works study the distributed average tracking problem from a distributed estimation perspective without the requirement for agents to obey certain physical dynamics.
There are various applications, where the distributed average tracking problem is relevant for designing distributed control laws for physical agents.
The region-following formation control is one application \cite{cheah2009region}, where a swarm of robots move inside a dynamic region while keeping a desired formation.
Distributed average tracking for double-integrator agents is studied in \cite{feidoubleintegrator}, where the reference inputs are allowed to have bounded accelerations.
Refs. \cite{zhao2013distributed} and \cite{FeiRobust} study the distributed average tracking for physical agents with general linear dynamics, where reference inputs and their control inputs are bounded.
In particular, \cite{FeiRobust} proposes a discontinuous algorithm, while a continuous algorithm is employed in \cite{zhao2013distributed} with, respectively, static and adaptive coupling strengths.
Ref. \cite{ghapani2015distributed} introduces a discontinuous algorithm and filter for a group of physical double-integrator agents, where each agent uses the relative positions and neighbors' filter outputs to avoid velocity measurements.
However, in real applications physical agents might need to track the average of a group of real time varying reference inputs, where both the physical agents and the reference inputs have dynamics more complicated than single- or double-integrator dynamics.
In fact, the dynamics of the physical agents and reference inputs must be taken into account in the control law design and the dynamics themselves introduce further challenges in the tracking and stability analyses.
Therefore, the control law designed for physical agents with single- and double-integrator dynamics can no longer be used directly for physical agents subject to more complicated dynamic equations.
For example, \cite{fei2015EulerDAC} extends a proportional-integral control scheme to achieve distributed average tracking for physical Euler-Lagrange systems for two different kinds of reference inputs with steady states and with bounded derivatives.
In this paper, a distributed algorithm (controller design combined with filter design) is introduced to achieve the distributed average tracking for physical second-order agents with nonlinear dynamics.
Here, a nonlinear term satisfying a Lipschitz-type condition is considered in both the agents' and the reference inputs' dynamics to describe more complicated dynamics.
Due to the presence of the nonlinear term in the agents' dynamics, a local filter is introduced for each agent to estimate the average of the reference inputs and the reference velocities.
Further, a non-smooth control input is introduced to drive the agents to track the filter outputs.
Since the unknown term can be unbounded, we are faced with extra challenges.
Therefore, novel time varying state-dependent gains are proposed in each agent's filter and control input to overcome the unboundedness effect.
Finally, the filter dynamics and control input are simplified with constant gains to achieve the distributed average tracking, where the nonlinear term is bounded.
{\it Notations:} Throughout the paper, $\mathbb{R}$ denotes the set of all real numbers and $\mathbb{R}^+$ the set of all positive real numbers.
The transposes of a matrix $A$ and a vector $x$ are denoted by $A^T$ and $x^T$, respectively.
Let $\mathbf{1}_n$ and $\mathbf{0}_n$ denote, respectively, the $n \times 1$ column vector of all ones and all zeros.
Let $\mbox{diag}(z_1,\ldots,z_p)$ be the diagonal matrix with diagonal entries $z_1$ to $z_p$.
We use
$\otimes$ to denote the Kronecker product, and $\mbox{sgn}(\cdot)$ to denote the $\mbox{signum}$ function defined component-wise.
For a vector function ${x(t):\mathbb{R}\mapsto\mathbb{R}^m}$, define $\|x\|_\mathfrak{p}$ as the $\mathfrak{p}$-norm,
${x(t)\in\mathbb{L}_2}$ if
$\int_{0}^{\infty} \|x(\tau)\|^2_2\mbox{d}\tau<\infty$ and
${x(t)\in\mathbb{L}_{\infty}}$ if for each element of $x(t)$, denoted as
$x_i(t)$, ${\sup_{t \geq 0}|x_i(t)|<\infty}$, $i=1,\ldots,m$.
\section{Problem Statement}
Consider a multi-agent system consisting of $n$ physical agents described by the following nonlinear second-order dynamics
\begin{align} \label{non-agents-dynamic}
\dot{x}_i(t)=&v_i(t), \notag \\
\dot{v}_i(t)=&f(x_i(t),v_i(t),t)+u_i(t), \qquad i=1,\ldots,n,
\end{align}
where $x_i(t) \in \mathbb{R}^\mathfrak{p}$, $v_i(t) \in \mathbb{R}^\mathfrak{p}$ and $u_i(t) \in \mathbb{R}^\mathfrak{p}$ are $i$th agent's position, velocity and control input, respectively, and $f:\mathbb{R}^\mathfrak{p} \times \mathbb{R}^\mathfrak{p} \times \mathbb{R}^+ \to \mathbb{R}^\mathfrak{p}$ is a
vector-valued term which will be defined later.
An \textit{undirected} graph $G \triangleq (V,E)$ is used to characterize the interaction topology among the agents, where ${V \triangleq \{1,\ldots,n\}}$ is the node set and $E \subseteq V \times V$ is the edge set.
An edge $(j,i) \in E$ means that node $i$ can obtain information from node $j$ and vice versa.
Self edges $(i,i)$ are not considered here.
Let $m$ denote the number of edges in $E$, where the edges $(j,i)$ and $(i,j)$ are counted only once.
The \textit{adjacency matrix} ${\mathbf{A} \triangleq [a_{ij}] \in \mathbb{R}^{n \times n}}$ of the graph $G$ is defined such that the edge weight ${a_{ij}=1}$ if ${(j,i) \in E}$ and ${a_{ij}=0}$ otherwise. For an undirected graph, ${a_{ij}=a_{ji}}$.
The \textit{Laplacian matrix} ${L \triangleq [l_{ij}] \in \mathbb{R}^{n \times n}}$ associated with $\mathbf{A}$ is defined as ${l_{ii}=\sum_{j \ne i} a_{ij}}$ and ${l_{ij}=-a_{ij}}$, where ${i \ne j}$.
For an undirected graph, $L$ is symmetric positive semi-definite.
By arbitrarily assigning an orientation for the edges in $G$, let $D \triangleq [d_{ij}] \in \mathbb{R}^{n \times m}$ be the \textit{incidence matrix} associated with $G$, where $d_{ij} = -1$ if the edge $e_j$ leaves node $i$, $d_{ij} = 1$ if it enters node $i$, and $d_{ij} = 0$ otherwise.
The \textit{Laplacian matrix} $L$ is then given by $L=DD^T$ \cite{GodsilRoyle01}.
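The identity $L=DD^T$ is easy to verify numerically; a minimal sketch (assuming NumPy) for a path graph on three nodes with an arbitrary edge orientation:
\begin{verbatim}
import numpy as np

# path graph 1 -- 2 -- 3, edges oriented arbitrarily
D = np.array([[-1.,  0.],
              [ 1., -1.],
              [ 0.,  1.]])   # incidence matrix, n x m
L = D @ D.T
print(L)  # [[1,-1,0],[-1,2,-1],[0,-1,1]], the path-graph Laplacian
\end{verbatim}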
\begin{assumption} \label{conn-graph}
Graph $G$ is connected.
\end{assumption}
\begin{lemma} \cite{GodsilRoyle01} \label{eigen}
Under Assumption \ref{conn-graph}, the \textit{Laplacian matrix} $L$ has a simple zero eigenvalue such that $0=\lambda_1(L)<\lambda_2(L) \leq \ldots \leq \lambda_n(L)$, where $\lambda_i(\cdot)$ denotes the $i$th eigenvalue. Furthermore, for any vector $y \in \mathbb{R}^n$ satisfying ${\mathbf{1}_n^T y=0}$, $\lambda_2(L) y^Ty \leq y^T L y \leq \lambda_n(L) y^Ty$.
\end{lemma}
Suppose that each agent has a time varying reference input $r_i(t) \in \mathbb{R}^\mathfrak{p}$, $i=1,\ldots,n$, satisfying
\begin{align*}
\dot{r}_i(t)=& v_i^r(t), \notag \\
\dot{v}_i^r(t)=& f(r_i,v_i^r,t),
\end{align*}
where $v_i^r(t) \in \mathbb{R}^\mathfrak{p}$ is the reference velocity.
Define ${r(t)=[r^T_1,\ldots,r^T_n ]^T}$, ${v^r(t)=[{v_1^r}^T,\ldots,{v_n^r}^T ]^T}$ and ${f(r,v^r,t)=[f^T(r_1,v^r_1,t),\ldots,f^T(r_n,v^r_n,t)]^T}$.
Here the goal is to design $u_i(t)$ for agent $i$, $i=1,\ldots,n$, to track the average of the reference inputs and reference velocities, i.e.,
\begin{align}\label{goal}
\lim \limits_{t \to \infty} \|x_i(t)-\frac{1}{n} \sum_{j=1}^n r_j(t)\|_2=&0, \notag \\
\lim \limits_{t \to \infty} \|v_i(t)-\frac{1}{n} \sum_{j=1}^n v_j^r(t)\|_2=&0,
\end{align}
where each agent has only local interaction with its neighbors.
As mentioned above, there are many applications in which physical agents should track a time varying trajectory of which each agent has only a local and incomplete copy.
Moreover, in real applications the physical agents and the reference trajectory might be described by dynamics more complicated than double-integrator dynamics.
Therefore, we consider a more general class of physical agents, where the unknown term $f(\cdot,\cdot,t)$ in their dynamics satisfies a Lipschitz-type condition.
\subsection{Main Result}\label{section2}
In this subsection, we study the distributed average tracking of second-order multi-agent systems with nonlinear dynamics.
We assume that the nonlinear term, $f(\cdot,\cdot,t)$, in both agents' and reference inputs' dynamics satisfies the Lipschitz-type condition.
First, a local filter is introduced for each agent to estimate the average of the reference inputs and the reference velocities.
Then, the control input $u_i$, $i=1,\ldots,n$, is designed for each agent such that $x_i$ and $v_i$ track, respectively, $p_i$ and $q_i$, where $p_i$, $q_i \in \mathbb{R}^\mathfrak{p}$ are the filter outputs.
For notational simplicity, we will omit the time index $t$ from variables in the remainder of the paper.\newline
\begin{assumption}\label{lip} \cite{yu2010second}
The vector-valued term $f(\cdot,\cdot,t)$ satisfies the following inequality $\forall t \geq 0$
\begin{align*}
\|f(x,v,t)-f(y,z,t)\|_1 \leq \rho_1 \|x-y\|_1 +\rho_2 \|v-z\|_1,
\end{align*}
where $x$, $v$, $y$, $z \in \mathbb{R}^\mathfrak{p}$, $\rho_1$, $\rho_2 \in \mathbb{R}^+$ and $f(0,0,t)=0$.
\end{assumption}
\begin{remark}
Note that Assumption \ref{lip} is a Lipschitz-type condition, satisfied by many well-known systems such as the pendulum system with a control torque, car-like robots, Chua's circuit, the Lorenz system, and the Chen system \cite{mei2013distributed}.
\end{remark}
Under Assumption \ref{lip}, the unknown term $f(\cdot,\cdot,t)$ might be unbounded. Therefore, two novel state-dependent time varying gains will be introduced to overcome the unboundedness effect of this term.
The following local filter is introduced for agent $i$, ${i=1,\ldots,n}$, to estimate the average of reference inputs and reference velocities
\begin{align}\label{p-term2}
\small
p_i=& z_i+r_i, \notag \\
\ddot{z}_i=&- \kappa (p_i-r_i)- \kappa (q_i-v^r_i) \notag \\
& -\alpha \psi_i \mbox{sgn} \Big[ \sum\limits_{j=1}^{n} a_{ij} \Big \{ (p_i+q_i)-(p_j+q_j) \Big \} \Big],
\end{align}
\normalsize
where $q_i=\dot{p}_i$, $z_i \in \mathbb{R}^\mathfrak{p}$ is an auxiliary filter variable, $\psi_i=\|r_i\|_1+\|v^r_i\|_1+\gamma$, and $\kappa$, $\alpha$, $\gamma \in \mathbb{R}^+$ are control gains to be designed.
The distributed control input $u_i$, $i=1,\ldots,n$, is designed to drive $x_i$ and $v_i$ to track, respectively, $p_i$ and $q_i$,
\begin{align}\label{con-inp2}
u_i=&-\eta \tilde{x}_i-\eta \tilde{v}_i \notag -\eta (\|x_i-r_i\|_1+\|v_i-v^r_i\|_1+\gamma) \times \notag \\ &\mbox{sgn}(\tilde{x}_i+\tilde{v}_i) +\ddot{z}_i,
\end{align}
where $\tilde{x}_i=x_i-p_i$, $\tilde{v}_i=v_i-q_i$ and $\eta \in \mathbb{R}^+$ is a control gain to be designed.
\begin{theorem} \label{thm:DAC-lip}
Under the control law given by \eqref{p-term2} and \eqref{con-inp2} for \eqref{non-agents-dynamic}, the distributed average tracking goal \eqref{goal} is achieved asymptotically, provided that Assumptions \ref{conn-graph} and \ref{lip} hold and the control gains $\kappa$, $\alpha$, and $\eta$ are chosen such that $\kappa>1$, $\alpha > \max \{\rho_1,\rho_2 \}+\kappa$ and $\eta > \max \{ 1, \rho_1,\rho_2 \}$, where $\rho_1$ and $\rho_2$ are defined in Assumption \ref{lip}.
\end{theorem}
\emph{Proof}:
The proof contains two steps.
First, it is proved that for the $i$th agent, $\lim\limits_{t \to \infty} p_i = \frac{1}{n} \sum\limits_{j=1}^{n} r_j$ and $\lim\limits_{t \to \infty} q_i = \frac{1}{n} \sum\limits_{j=1}^{n} v^r_j$.
Second, it is shown that by using the control input \eqref{con-inp2} for agent $i$, $\lim\limits_{t \to \infty} x_i = p_i$ and $\lim\limits_{t \to \infty} v_i = q_i$.
Using $q_i=\dot{p}_i$, the filter dynamics \eqref{p-term2} can be rewritten as
\begin{align}\label{clos-lip}
\small
\dot{p}_i=& q_i, \notag \\
\dot{q}_i=&- \kappa (p_i-r_i)- \kappa (q_i-v^r_i) \notag \\
& -\alpha \psi_i \mbox{sgn} \Big[ \sum\limits_{j=1}^{n} a_{ij} \{ (p_i+q_i)-(p_j+q_j) \} \Big] \notag \\
& + f(r_i,v^r_i,t).
\end{align}
\normalsize
Let $\tilde{p}=(M \otimes I_\mathfrak{p}) p$ and $\tilde{q}=(M \otimes I_\mathfrak{p}) q$, where $p=[p_1^T,\ldots,p_n^T]^T$, $q=[q_1^T,\ldots,q_n^T]^T$ and $M=I_n- \frac{1}{n}\textbf{1}_n \textbf{1}^T_n$.
Now the local filter's closed dynamics \eqref{clos-lip} can be rewritten in vector form as
\begin{align*}
\small
\dot{\tilde{p}}=& \tilde{q}, \notag \\
\dot{\tilde{q}}=& -\kappa \tilde{p} +\kappa (M \otimes I_\mathfrak{p}) r - \kappa \tilde{q} +\kappa (M \otimes I_\mathfrak{p}) v^r \notag \\
& -\alpha (M \psi \otimes I_\mathfrak{p}) \mbox{sgn} [( L \otimes I_\mathfrak{p})(\tilde{p}+\tilde{q})] \notag \\
&+(M \otimes I_\mathfrak{p}) f(r,v^r,t),
\end{align*}
\normalsize
where $\psi=\mbox{diag}(\psi_1,\ldots,\psi_n)$.
Consider the following Lyapunov function candidate
$V_1= \frac{1}{2}
\begin{bmatrix}
\tilde{p}^T && \tilde{q}^T
\end{bmatrix}
(
\begin{bmatrix}
2 \kappa && 1 \\
1 && 1
\end{bmatrix}
\otimes L \otimes I_\mathfrak{p})
\begin{bmatrix}
\tilde{p} \\
\tilde{q}
\end{bmatrix}$.
Since $(\mathbf{1}_n \otimes I_\mathfrak{p}) ^T \tilde{p} =\mathbf{0}_{n\mathfrak{p}}$ and $(\mathbf{1}_n \otimes I_\mathfrak{p}) ^T \tilde{q} =\mathbf{0}_{n\mathfrak{p}}$, by using Lemma \ref{eigen}, we have
$V_1 \geq \frac{\lambda_2(L)}{2} \begin{bmatrix}
\tilde{p}^T && \tilde{q}^T
\end{bmatrix} (
\begin{bmatrix}
2 \kappa && 1 \\
1 && 1
\end{bmatrix} \otimes I_{n\mathfrak{p}})
\begin{bmatrix}
\tilde{p} \\
\tilde{q}
\end{bmatrix}$,
where $\lambda_2 (L)$ is defined in Lemma \ref{eigen}.
It can be proved that if $\kappa >\frac{1}{2}$, then $\begin{bmatrix}
2 \kappa && 1 \\
1 && 1
\end{bmatrix}>0$, which means that $V_1$ is positive definite.
The derivative of $V_1$ is given as
\begin{align*}
\small
\dot{V}_1 =& 2 \kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{q}+\tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q}-\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} \notag \\
&+\kappa \tilde{p}^T (L \otimes I_\mathfrak{p})r -\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{q} +\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) v^r \notag \\
&-\alpha\tilde{p}^T (L \psi \otimes I_\mathfrak{p}) \mbox{sgn} [( L \otimes I_\mathfrak{p})(\tilde{p}+\tilde{q})] \notag \\
&+\tilde{p}^T (L \otimes I_\mathfrak{p}) f(r,v^r,t)-\kappa \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{p} \notag \\
&+\kappa \tilde{q}^T (L \otimes I_\mathfrak{p})r -\kappa \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} +\kappa \tilde{q}^T (L \otimes I_\mathfrak{p}) v^r \notag \\
&-\alpha\tilde{q}^T (L \psi \otimes I_\mathfrak{p}) \mbox{sgn} [( L \otimes I_\mathfrak{p})(\tilde{p}+\tilde{q})] \notag \\
&+\tilde{q}^T (L \otimes I_\mathfrak{p}) f(r,v^r,t)\\
=&-\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} - (\kappa -1 ) \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} \\
&+ \kappa (\tilde{p}+\tilde{q})^T (L \otimes I_\mathfrak{p}) (r + v^r) \\
& -\alpha \sum\limits_{i=1}^{n} \psi_i \Big [ \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big\} \Big ]^T \times \\
& \mbox{sgn} \Big[ \sum\limits_{j=1}^{n} a_{ij} \Big\{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big\} \Big] \\
& + \sum\limits_{i=1}^{n} \Big [ \sum\limits_{j=1}^{n} a_{ij} \Big\{(\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big\} \Big ]^T f(r_i,v^r_i,t) \\
=&-\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} - (\kappa -1 ) \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} \\
&+ \kappa \sum\limits_{i=1}^{n} \Big [ \sum\limits_{j=1}^{n} a_{ij} \Big\{(\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big ]^T \Big[ r_i+v^r_i \Big] \\
& -\alpha \sum\limits_{i=1}^{n} \psi_i \Big [ \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big\} \Big ]^T \times \\
& \mbox{sgn} \Big[ \sum\limits_{j=1}^{n} a_{ij} \Big\{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big\} \Big] \\
&+\sum\limits_{i=1}^{n} \Big [ \sum\limits_{j=1}^{n} a_{ij} \Big \{(\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big ]^T \times \\
& \Big[ f(r_i,v^r_i,t)-f(0,0,t) \Big],
\end{align*}
\normalsize
where $\tilde{p}_i$ and $\tilde{q}_i$ are, respectively, the $i$th components of $\tilde{p}$ and $\tilde{q}$ and we have used $LM=L$ and Assumption \ref{lip} to obtain, respectively, the first and the third equalities.
Under Assumption \ref{lip} and using the triangle inequality, we have
\begin{align*}
\small
\dot{V}_1 \leq&-\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} - (\kappa -1 ) \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} \\
&+\kappa \sum\limits_{i=1}^{n} \Big \| \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big \|_1 \times \\
& ( \|r_i\|_1+\|v^r_i\|_1) \\
&-\alpha \sum\limits_{i=1}^{n} \psi_i \Big \| \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big \|_1 \\
&+\sum\limits_{i=1}^{n} \Big \| \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big \|_1 \times \\
& (\rho_1 \|r_i\|_1+\rho_2\|v^r_i\|_1) \\
= & -\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} - (\kappa -1 ) \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} \\
&-\alpha \sum\limits_{i=1}^{n} \psi_i \Big \| \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big \|_1 \\
&+\sum\limits_{i=1}^{n} \Big ( (\kappa+\rho_1) \|r_i\|_1+(\kappa+\rho_2)\|v^r_i\|_1 \Big ) \times \\
& \Big \| \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big \|_1 \\
= & -\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} - (\kappa -1 ) \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} \\
&+\sum\limits_{i=1}^{n} \Big ( (\kappa+\rho_1-\alpha) \|r_i\|_1+(\kappa+\rho_2-\alpha)\|v^r_i\|_1 -\alpha \gamma\Big ) \times \\
& \Big \| \sum\limits_{j=1}^{n} a_{ij} \Big \{ (\tilde{p}_i+\tilde{q}_i)-(\tilde{p}_j+\tilde{q}_j) \Big \} \Big \|_1 ,
\end{align*}
\normalsize
where we have used the definition of $\psi_i$ to obtain the last equality.
Since $\alpha > \max \{\rho_1,\rho_2 \}+\kappa$, we will have
\begin{align*}
\dot{V}_1 \leq & -\kappa \tilde{p}^T (L \otimes I_\mathfrak{p}) \tilde{p} - (\kappa -1 ) \tilde{q}^T (L \otimes I_\mathfrak{p}) \tilde{q} \\
\leq & -\kappa \lambda_2(L) \tilde{p}^T \tilde{p}-(\kappa -1)\lambda_2(L) \tilde{q}^T \tilde{q} < 0,
\end{align*}
where we have used Lemma \ref{eigen}, since $(\mathbf{1}_n \otimes I_\mathfrak{p}) ^T \tilde{p} =\mathbf{0}_{n\mathfrak{p}}$ and $(\mathbf{1}_n \otimes I_\mathfrak{p}) ^T \tilde{q}=\mathbf{0}_{n\mathfrak{p}}$, and $\kappa>1$ to obtain the second inequality.
Using Theorem 4.10 in \cite{khalil2002nonlinear}, it is concluded that $\begin{bmatrix}
\tilde{p} \\
\tilde{q}
\end{bmatrix}=\mathbf{0}_{2n \mathfrak{p}}$ is globally exponentially stable, which means $ \lim\limits_{t \to \infty} p_i = \frac{1}{n} \sum\limits_{j=1}^{n} p_j$ and $ \lim\limits_{t \to \infty} q_i = \frac{1}{n} \sum\limits_{j=1}^{n} q_j$ for $i=1,\ldots,n$.
Now it is enough to show that $\lim\limits_{t \to \infty} \sum\limits_{j=1}^{n} p_j= \sum\limits_{j=1}^{n} r_j$ and $ \lim\limits_{t \to \infty} \sum\limits_{j=1}^{n} q_j=\sum\limits_{j=1}^{n} v^r_j$ which results in $\lim\limits_{t \to \infty } p_i = \frac{1}{n} \sum\limits_{j=1}^{n} r_j$ and $\lim\limits_{t \to \infty } q_i = \frac{1}{n} \sum\limits_{j=1}^{n} v^r_j$.
Defining the variables ${S_1=\sum_{i=1}^n (p_i- r_i)}$ and ${S_2=\sum_{i=1}^n (q_i-v_i^r) }$, we get from \eqref{clos-lip} that
\begin{align}\label{sum-lip}
\dot{S}_1=& S_2, \notag \\
\dot{S}_2=&- \kappa S_1- \kappa S_2 \notag \\
& -\alpha \sum\limits_{i=1}^{n} \psi_i \mbox{sgn} \Big[ \sum\limits_{j=1}^{n} a_{ij} \Big\{ (p_i+q_i)-(p_j+q_j) \Big\} \Big].
\end{align}
We then use input-to-state stability to analyze the system \eqref{sum-lip} by treating the term $\sum\limits_{i=1}^{n} \psi_i \mbox{sgn} \Big[ \sum\limits_{j=1}^{n} a_{ij} \Big\{ (p_i+q_i)-(p_j+q_j) \Big\} \Big]$ as the input and $S_1$ and $S_2$ as the states.
Since $\kappa>1$, the matrix $\begin{bmatrix}
\mathbf{0}_\mathfrak{p} & I_\mathfrak{p} \\
-\kappa I_\mathfrak{p} & -\kappa I_\mathfrak{p}
\end{bmatrix}$ is Hurwitz.
Thus, the system \eqref{sum-lip} with zero input is exponentially stable and hence input-to-state stable.
Since $p_i+q_i \to p_j+q_j$, $i,j=1,\cdots,n$, as $t \to \infty$, it follows that $S_1 \to 0$ and $S_2 \to 0$.
Therefore, we have that $\lim\limits_{t \to \infty}\sum_{i=1}^n p_i = \sum_{i=1}^n r_i$ and $\lim\limits_{t \to \infty} \sum_{i=1}^n q_i = \sum_{i=1}^n v_i^r$, respectively.
Employing the results of these two parts, it is concluded that $\lim\limits_{t \to \infty} p_i = \frac{1}{n} \sum_{j=1}^n r_j$ and ${\lim\limits_{t \to \infty} q_i = \frac{1}{n} \sum_{j=1}^n v_j^r}$.
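The Hurwitz property used above is easily checked numerically (a sketch assuming NumPy, with $\mathfrak{p}=1$ and an arbitrary $\kappa>1$):
\begin{verbatim}
import numpy as np

kappa = 1.5                       # any kappa > 1, as in Theorem 1
M = np.array([[0.0, 1.0],
              [-kappa, -kappa]])  # [[0, I],[-kappa I, -kappa I]]
print(np.linalg.eigvals(M))       # both real parts negative: Hurwitz
\end{verbatim}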
In the remainder of the proof, we show that $\lim\limits_{t \to \infty} x_i = p_i$ and $\lim\limits_{t \to \infty} v_i = q_i$.
Using the control input \eqref{con-inp2} for \eqref{non-agents-dynamic}, we get the closed-loop dynamics in vector form as
\begin{align*}
\small
\dot{\tilde{x}}=&\tilde{v}, \\
\dot{\tilde{v}}=& f(x,v,t) -\eta \tilde{x}-\eta \tilde{v} \\
&-\eta (\|x-r\|_1+\|v-v^r\|_1+\gamma) \mbox{sgn}(\tilde{x}+\tilde{v}) - f(r,v^r,t),
\end{align*}
\normalsize
where $\tilde{x}=[\tilde{x}_1^T,\ldots,\tilde{x}_n^T]^T$, $\tilde{v}=[\tilde{v}_1^T,\ldots,\tilde{v}_n^T]^T$ and $f(x,v,t)=[f^T(x_1,v_1,t),\ldots,f^T(x_n,v_n,t)]^T$.
Consider the candidate Lyapunov function
$V_2= \frac{1}{2}
\begin{bmatrix}
\tilde{x}^T && \tilde{v}^T
\end{bmatrix}
\begin{bmatrix}
2\eta I_{n\mathfrak{p}} && I_{n\mathfrak{p}} \\
I_{n\mathfrak{p}} && I_{n\mathfrak{p}}
\end{bmatrix}
\begin{bmatrix}
\tilde{x} \\
\tilde{v}
\end{bmatrix}$.
Since $\eta>\frac{1}{2}$, $V_2$ is positive definite. By taking the derivative of $V_2$, we will have
\begin{align*}
\dot{V}_2=& 2\eta \tilde{x}^T \tilde{v}+\tilde{v}^T \tilde{v} -\eta \tilde{x}^T (\tilde{x}+\tilde{v})\\
& -\eta \sum\limits_{i=1}^{n} (\|x_i-r_i\|_1+\|v_i-v^r_i\|_1+\gamma)\tilde{x}_i^T\mbox{sgn}(\tilde{x}_i+\tilde{v}_i) \\ &+\sum\limits_{i=1}^{n} \tilde{x}_i^T (f(x_i,v_i,t)-f(r_i,v^r_i,t))
-\eta \tilde{v}^T (\tilde{x}+\tilde{v}) \\
& -\eta \sum\limits_{i=1}^{n} (\|x_i-r_i\|_1+\|v_i-v^r_i\|_1+\gamma) \tilde{v}_i^T\mbox{sgn}(\tilde{x}_i+\tilde{v}_i) \\
&+\sum\limits_{i=1}^{n} \tilde{v}_i^T (f(x_i,v_i,t)-f(r_i,v^r_i,t)) \\
=&-\eta \tilde{x}^T \tilde{x}+(1-\eta) \tilde{v}^T \tilde{v} \\
&-\eta \sum\limits_{i=1}^{n} (\|x_i-r_i\|_1+\|v_i-v^r_i\|_1+\gamma)\|\tilde{x}_i+\tilde{v}_i\|_1 \\ &+\sum\limits_{i=1}^{n}(\tilde{x}_i+\tilde{v}_i)^T (f(x_i,v_i,t)-f(r_i,v^r_i,t)) \\
\leq & -\eta \tilde{x}^T \tilde{x}+(1-\eta)\tilde{v}^T \tilde{v}-\eta \gamma \sum\limits_{i=1}^{n} \|\tilde{x}_i+\tilde{v}_i\|_1 \\
&-\eta \sum\limits_{i=1}^{n} (\|x_i-r_i\|_1+\|v_i-v^r_i\|_1)\|\tilde{x}_i+\tilde{v}_i\|_1 \\
& +\sum\limits_{i=1}^{n} (\rho_1 \|x_i-r_i\|_1+\rho_2 \|v_i-v^r_i\|_1 ) \|\tilde{x}_i+\tilde{v}_i\|_1,
\end{align*}
where we have used Assumption \ref{lip} to obtain the inequality.
Since $\eta >\max \{ \rho_1,\rho_2 \}$, we can get that
$\dot{V}_2 \leq -\eta \tilde{x}^T \tilde{x} -(\eta-1) \tilde{v}^T \tilde{v} \leq 0$,
where we have used $\eta >1$ to obtain the last inequality.
Using Theorem 4.10 in \cite{khalil2002nonlinear}, it is concluded that $\begin{bmatrix}
\tilde{x} \\
\tilde{v}
\end{bmatrix}=\mathbf{0}_{2n\mathfrak{p}}$ is globally exponentially stable. Therefore, for the $i$th agent, $i=1,\ldots,n$, $\lim\limits_{t \to \infty} x_i = \frac{1}{n} \sum\limits_{j=1}^{n} r_j$ and
$ \lim\limits_{t \to \infty} v_i = \frac{1}{n} \sum\limits_{j=1}^{n} v^r_j$.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
\begin{remark}
Due to the presence of the unknown term $f(\cdot,\cdot,t)$ in the agents' dynamics, the proposed algorithms for double-integrator agents are not applicable to achieve the distributed average tracking.
For example, by employing the algorithm in \cite{feidoubleintegrator} for \eqref{non-agents-dynamic}, the two equalities $\sum_{j=1}^n x_j =\sum_{j=1}^n r_j$ and $\sum_{j=1}^n v_j = \sum_{j=1}^n v_j^r$ do not hold anymore.
In fact, the unknown term $f(\cdot,\cdot,t)$ functions as a disturbance and it will not allow the average of the positions and velocities to track the reference inputs and the reference velocities, respectively.
This shows the essence of using the local filter \eqref{p-term2} in our algorithm.
\end{remark}
\subsection{Discussion}\label{section1}
Suppose that the unknown term $f(\cdot,\cdot,t)$ satisfies the boundedness condition
\begin{align}\label{bounded-f}
\sup_{t \in [0,\infty)} \| f(x,v,t)\|_1 \leq \bar{f},
\end{align}
for all $x$, $v \in \mathbb{R}^\mathfrak{p}$, where $\bar{f} \in \mathbb{R}^+$.
Then, the proposed algorithm in Subsection \ref{section2} can be simplified as the following local filter dynamics and control input with constant control gains to achieve the distributed average tracking.
\begin{align}
p_i=& z_i+r_i,
\notag \\
\ddot{z}_i=&-\alpha \sum\limits_{j=1}^{n} a_{ij} \mbox{sgn}\Big[(p_i+q_i)-(p_j+q_j) \Big], \label{z-term} \\
u_i=& -\eta \mbox{sgn} \Big [\tilde{x}_i+\tilde{v}_i \Big] +\ddot{z}_i,
\label{con-inp}
\end{align}
where $p_i$, $q_i$, $z_i$, $\tilde{x}_i$ and $\tilde{v}_i$ are defined in Subsection \ref{section2} and $\alpha$, $\eta \in \mathbb{R}^+$ are control gains to be defined.
\begin{assumption}\label{initial}
The variables $z_i$ and $\dot{z}_i$ are initialized such that $\sum\limits_{i=1}^{n} z_i(0)=\mathbf{0}_{\mathfrak{p}}$, and $\sum\limits_{i=1}^{n} \dot{z}_i(0)=\mathbf{0}_{\mathfrak{p}}$. \footnote{A special choice is $z_i(0)=\mathbf{0}_{\mathfrak{p}}$ and $\dot{z}_i(0)=\mathbf{0}_{\mathfrak{p}}$ for all ${i=1,\ldots,n}$.}
\end{assumption}
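For illustration, the following is a minimal Euler-discretized simulation sketch of the algorithm \eqref{z-term}-\eqref{con-inp} (assuming NumPy; the graph, the bounded nonlinearity $f$, the gain values, and the step size are our own illustrative choices, not taken from the analysis):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4                                       # agents, here with p = 1
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring graph: connected
f = lambda x, v, t: 0.5*np.sin(x)           # bounded nonlinearity
alpha, eta = 10.0, 10.0                     # gains assumed large enough
dt, steps = 1e-4, 100_000

r, vr = rng.standard_normal(n), rng.standard_normal(n)  # references
x, v = rng.standard_normal(n), rng.standard_normal(n)   # agents
z, dz = np.zeros(n), np.zeros(n)            # filters (Assumption 3)

for kstep in range(steps):
    t = kstep*dt
    p, q = z + r, dz + vr                   # p_i = z_i + r_i, q_i = p_i'
    pq = p + q
    ddz = -alpha*(A*np.sign(pq[:, None] - pq[None, :])).sum(axis=1)
    u = -eta*np.sign((x - p) + (v - q)) + ddz            # (con-inp)
    r, vr = r + dt*vr, vr + dt*f(r, vr, t)  # reference dynamics
    x, v = x + dt*v, v + dt*(f(x, v, t) + u)             # agent dynamics
    z, dz = z + dt*dz, dz + dt*ddz          # filter dynamics (z-term)

print(np.abs(x - r.mean()).max(), np.abs(v - vr.mean()).max())
\end{verbatim}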
\begin{theorem} \label{DAC-non-dynamic}
Under the control law given by \eqref{z-term} and \eqref{con-inp} for \eqref{non-agents-dynamic}, the distributed average tracking goal \eqref{goal} is achieved asymptotically, provided that Assumption \ref{conn-graph} and the boundedness condition \eqref{bounded-f} hold and the control gains $\alpha$ and $\eta$ are chosen such that
$\alpha > \frac{2 \|s(0)\|_1 + \|(D^T \otimes I_\mathfrak{p}) x(0)\|_1 + n\bar{f}}{\lambda_2 (L)}$ and $\eta > 2\|\tilde{s}(0)\|_1 + \|\tilde{x}(0)\|_1 +2 \bar{f}$, where $\lambda_2 (L)$ and $\bar{f}$ are defined, respectively, in Lemma \ref{eigen} and \eqref{bounded-f}.
\end{theorem}
\emph{Proof}:
Similar to Theorem \ref{thm:DAC-lip}, the proof contains two steps.
The general idea of the first step is adopted from \cite{feidoubleintegrator}, where $a_i^r$ is replaced by $f(r_i,v^r_i,t)$.
The second step is similar to the second step of the proof of Theorem \ref{thm:DAC-lip}.
Thus, the proof is omitted.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
\begin{remark}
In \eqref{z-term}-\eqref{con-inp}, each agent needs its neighbors' filter outputs besides its own filter outputs, absolute position, and absolute velocity, while in \eqref{p-term2}-\eqref{con-inp2} the agent additionally needs its neighbors' reference inputs and reference velocities.
Further, neither algorithm requires correct position and velocity initialization, which might not be feasible in real applications.
The algorithm \eqref{z-term}-\eqref{con-inp} only requires the initialization of the filters' auxiliary variables, which can easily be satisfied.
\end{remark}
\begin{remark}
The unknown bounded term $f(\cdot,\cdot,t)$ that satisfies \eqref{bounded-f}, can be interpreted as a perturbation resulting from modeling errors, uncertainties, and disturbances that exist in many realistic problems.
However, Assumption \ref{lip} includes a more general group of systems including unbounded nonlinear terms.
In addition, if $\gamma$ is chosen properly, the algorithm \eqref{p-term2}-\eqref{con-inp2} remains applicable in the presence of an additive bounded term in either the agents' or the reference inputs' dynamics.
\end{remark}
\section{CONCLUSIONS}
In this paper, the distributed average tracking of physical second-order agents with nonlinear dynamics was studied.
A nonlinear term satisfying a Lipschitz-type condition was assumed in both the agents' and the reference inputs' dynamics.
Due to the presence of the nonlinear term in the agents' dynamics, a filter design combined with a controller design was introduced to solve the problem.
The idea was that each filter's outputs converge to the average of the reference inputs and the reference velocities asymptotically, while in parallel the agent's position and velocity are driven to track the filter outputs.
Since the nonlinear term could be unbounded, novel state-dependent time varying gains were employed in each agent's filter and control input to overcome the unboundedness effect.
\bibliographystyle{IEEEtran}
\section{General structure of reparametrization transformation}
In a series of papers, Weinberg
\cite{Weinberg1964,Weinberg1964a,Weinberg1965} and Boulware and Deser
\cite{Boulware1975} have shown that massless particles of helicity $\pm 2$
are described by an effective theory satisfying the equivalence principle.
Boulware and Deser have shown that the corresponding effective action coincides
with the classical Einstein action~\footnote{We put $\hbar=c=16\pi G=1$,
restoring the dimension in the final results only.}
\begin{align}
\bar{S}_g=\int d^4x \left[-\sqrt{-\bar{g}}\,\bar{R}(\bar{g})\right].\label{e1}
\end{align}
In the general case, it is necessary to supplement the lagrangian density with a
number (possibly infinite) of terms of higher orders in $\partial_\alpha
\bar{g}_{\mu\nu}$. A natural property of any effective theory is
\textit{reparametrization invariance}: a scattering amplitude
on mass shell does not depend on the choice of field variables. In general
relativity, one of the natural parametrizations of the gravitational field
$h_{\alpha\beta}$ is a decomposition of the covariant metric tensor:
$\bar{g}_{\mu\nu}=g_{\mu\nu}+f_{\mu\nu}(h)$, where $f$ is an arbitrary
symmetric tensor function, the expansion of which begins with a linear in
$h_{\alpha\beta}$ term. For example, to derive the counter lagrangian of the
gravity, interacting with a massless scalar field, 't\,Hooft and Veltman
\cite{Hooft1974} have used the trivial parametrization
\begin{align}
\bar{g}_{\mu\nu}=g_{\mu\nu}+h_{\mu\nu},\label{e2}
\end{align}
where $g_{\mu\nu}$ is the background field, $h_{\mu\nu}$ is the operator field,
characterizing quantum fluctuations. The action of scalar field in external
gravitational field has the form
\begin{align}
\bar{S}_{m}=\int d^{4}x\,\frac{\sqrt{-\bar{g}}}{2}\left( \bar
{g}^{mn}\partial_{n}\bar{\phi}\,\partial_{m}\bar{\phi}-m^{2}\bar{\phi}%
^{2}\right).\label{e3}
\end{align}
Similarly to (\ref{e2}), we decompose the field $\bar{\phi}$
\begin{align}
\bar{\phi}=\tilde{\phi}+\phi.\label{e4}
\end{align}
Supplementing the action (\ref{e1}) with a gauge fixing part
\begin{align}
S_{f}=\int d^{4}x\,\frac{\sqrt{-g}}{2} \left( h_{\mu
\mid\alpha}^{\alpha}{}-\frac{1}{2}h_{\mid\mu}\right) \left(h_{\mid\beta
}^{\mu\beta}{}-\frac{1}{2}h^{\mid\mu}\right),\label{e5}
\end{align}
and a corresponding action of ghosts $\eta_\mu$, we find the expansion of the
action up to second order in fluctuations:
\begin{align}
\bar{S}_{g}+\bar{S}_{m}+S_{f}+S_{gh}(\eta)=\int d^{4}x\sqrt{-g}\left(
\mathcal{L}+\underline{\mathcal{L}}+\underline
{\underline{\mathcal{L}}}\right),\label{e6}
\end{align}
with
\begin{align}
\mathcal{L} & = - R+\frac{1}{2}\left( g^{\mu\nu}\partial_{\nu}\tilde{\phi}%
\,\partial_{\mu}\tilde{\phi}-m^{2}\tilde{\phi}^{2}\right), \label{e7}\\
\underline{\mathcal{L}} &
=\left(R_{\mu}^{\nu}-\frac{1}{2}\delta_{\mu}^{\nu}R-\frac
{1}{2}T_{\mu}^{\nu}\right)h^{\mu}_{\nu}
+\left(\tilde{\phi}_{\mid\lambda}^{\mid\lambda}+m^{2}\tilde{\phi}\right)
\phi,\label{e8}\\
\underline{\underline{\mathcal{L}}} & =-\frac{1}{4}\left( h_{\alpha\beta
}P^{\alpha\beta}_{\gamma\delta}{}h^{\gamma\delta}{}_{\mid\lambda}^{\mid\lambda
}+h_{\alpha\beta}X^{\alpha\beta}_{\gamma\delta}h^{\gamma\delta}\right)\label{e9}\\
& +\frac{1}{4}h_{\alpha\beta}W^{\alpha\beta}_{\gamma\delta}h^{\gamma\delta}
+\eta^{\dagger\,\mu}\left( \eta_{\mu}{}^{\mid \lambda}{}_{\mid \lambda}+R_{\mu}^{\lambda}%
\eta_{\lambda}\right)\notag\\
& +\phi\left( P^{\mu\nu}_{\gamma\delta}\partial_{\mu}\tilde{\phi}\,D_{\nu
}+P^{\mu\nu}_{\gamma\delta}\tilde{\phi}_{\mid\mu\nu}-\frac{1}{2}\,g_{\gamma
\delta}m^{2}\tilde{\phi}\right) h^{\gamma\delta}.\notag
\end{align}
In expressions (\ref{e7})-(\ref{e9}) indices are raised and lowered by means of
the tensor $g_{\mu\nu}$, $R_{\mu\nu}$ and $R$ are the Ricci tensor and Riemann
curvature of the background field, respectively. We introduce also the notation
$h=h^\lambda_{\lambda}$. Indices following vertical lines denote covariant
derivatives relative to the metric tensor $g_{\mu\nu}$. Matrices, appearing in
the expressions (\ref{e7})-(\ref{e9}), have the following form:
\begin{align}
P_{\gamma\delta}^{\alpha\beta} & =\delta_{(\gamma}^{\alpha}\delta_{\delta
)}^{\beta}-\frac{1}{2}g^{\alpha\beta}g_{\gamma\delta}\,,\label{e9a}
\\
X^{\alpha\beta}_{\gamma\delta} & =P^{\alpha\beta}_{\rho\sigma}\left[
R_{(\gamma }{}^{\rho}{}_{\delta)}{}^{\sigma}+\delta^{\sigma}_{(\delta}\left(
R^{\rho
}_{\gamma)}-\frac{1}{2}\delta^{\rho}_{\gamma)}R\right) \right] \notag\\
& +(\alpha\beta \leftrightarrow\gamma\delta)\,,\label{e10}\\
W^{\alpha\beta}_{\gamma\delta} & = T^{(\alpha}_{\sigma}\,
P^{\sigma\beta)}_{\gamma\delta}+T_{(\gamma}^{\sigma}\,
P_{\sigma\delta)}^{\alpha\beta}\notag\\
& +\frac{1}{2}P^{\alpha\beta}_{\gamma\delta}\left( m^{2}\tilde{\phi}%
^{2}-T\right).\label{e11}
\end{align}
In the expressions (\ref{e9a})-(\ref{e11}), indices enclosed in parentheses are
symmetrized. $T_{\mu\nu}$ is the stress tensor of the scalar field:
\begin{align}
T_{\mu\nu}=\partial_{\mu}\tilde{\phi}\,\partial_{\nu}\tilde{\phi}-\,\frac{1}%
{2}\,g_{\mu\nu}\left( g^{\rho\sigma}\partial_{\rho}\tilde{\phi}\,\partial
_{\sigma}\tilde{\phi}-m^{2}\tilde{\phi}^{2}\right).
\end{align}
Requiring the first variation of the action, i.\,e., the linear term (\ref{e8}),
to vanish supplies us with the equations of motion for the background fields:
\begin{align}
& R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}R=\frac
{1}{2}\,T_{\mu\nu}\,,\label{e12}\\
&g^{\mu\nu}\tilde{\phi}_{\mid\mu\nu}+m^{2}\tilde{\phi}=0\,.\label{e13}
\end{align}
In Ref.\,\cite{Boulware1975} it has been shown that, at fixed gauge, the
three-graviton vertex is matched by the gravitational interaction with the stress tensor
of the classical free spin 2 field up to four parameters, corresponding to the
reparametrization of the field $h_{\mu\nu}$~\footnote{There are linear changes
of variables such as $h_{\mu\nu}\to c_1 h_{\mu\nu}+c_2 g_{\mu\nu} h$, but we
leave them aside for the sake of simplicity.}:
\begin{align}
& \bar{g}_{\mu\nu}=g_{\mu\nu}+h_{\mu\nu}+a_1\,h_{\mu\lambda}h^{\lambda}_{\nu}\notag\\
& + a_2\,h_{\mu\nu} h + a_3\,g_{\mu\nu} h^{\alpha}_{\beta} h_{\alpha}^{\beta}+
a_4\,g_{\mu\nu} h^2. \label{e14}
\end{align}
Loop corrections to the scattering amplitude have been studied in
Refs.\,\cite{Bjerrum-Bohr:2002ks}, \cite{Khriplovich:2004cx}. Corrections
concerned were proportional to $\ln\vert q^2\vert$, where $q^2$ is the transfer momentum
squared. In particular, it was found that, after averaging over the
fluctuations, corrections to the Schwarzschild metric appeared:
\begin{align}
g_{\mu\nu}=g^{\mathrm{cl}}_{\mu\nu}+\underline{g}{}_{\mu\nu}\,,\label{e15}
\end{align}
where $g^{\mathrm{cl}}_{\mu\nu}$ is the classical Schwarzschild solution,
$\underline{g}{}_{\mu\nu}$ is the quantum correction to it.
Quite apparently, the \textit{leading} corrections to the metric must be
independent of the way the field $h_{\mu\nu}$ is parametrized. Indeed,
being quadratic in fluctuations, the additional terms in the parametrization
(\ref{e14}) generate additional structures in
$\underline{\underline{\mathcal{L}}}$ only through the replacement of the field
$h_{\mu\nu}$ in the lagrangian density $\underline{\mathcal{L}}$,
i.\,e., these structures vanish after taking into account the equations of
motion (\ref{e12}). However, in perturbation theory this happens only if
\textit{all diagrams} have been taken into account. In
Ref.\,\cite{Bjerrum-Bohr:2002ks} only a certain part of the diagrams has been
considered, namely, the graviton propagator corrections and the corrections to
one of the vertices (Fig.\,\ref{f1}). As we will show, the contribution of these
diagrams is not reparametrization invariant.
\begin{figure}
\includegraphics{pic1.eps}
\caption{The diagrams taken into account in Ref.\,\cite{Bjerrum-Bohr:2002ks}
\label{f1}}
\end{figure}
\section{Example of reparametrization transformation}
As an example, we parametrize the gravitational field in the following way
\begin{align}
\bar{g}_{\mu\nu}=g_{\mu\nu}+h_{\mu\nu}-\frac{a}{4}\,h_{\mu\alpha}h^\alpha_\nu.\label{e16}
\end{align}
As stated above, the lagrangian quadratic in fluctuations changes due to the
linear terms only. The reparametrization (\ref{e16}) is equivalent to the
replacement of the matrices $X$ and $W$ in the lagrangian density (\ref{e9}) by
the matrices $X+a \mathcal{X}$ and $W+a \mathcal{W}$, respectively, where
\begin{align}
\mathcal{X}_{\gamma\delta}^{\alpha\beta} & =\delta_{(\gamma}^{(\alpha
}P^{\beta)}{}_{\delta),\kappa\lambda}R^{\kappa\lambda}\,,\label{e17}\\
\mathcal{W}_{\gamma\delta}^{\alpha\beta} & =\frac{1}{2}\delta_{(\gamma
}^{(\alpha}T_{\delta)}^{\beta)}\,.\label{e17a}%
\end{align}
Graviton propagator corrections are generated by the counter lagrangian of pure
gravity. This counter lagrangian has been derived in~Ref.\,\cite{Hooft1974}; here
we aim to find its transformation under the reparametrization
(\ref{e16}). Using the general formula for the counter lagrangian derived
in~Ref.\,\cite{Hooft1974}, we find:
\begin{align}
L_{count.}^{(a)} & =\frac{\sqrt{-g}}{8\pi^{2}(d-4)}\frac{1}%
{4}\,Sp\left\{2a\left( X\mathcal{X}\right)\right.\notag\\
& \left. +\frac{a}{3}R\left( P\mathcal{X}\right)
+a^{2}\left(P\mathcal{X}P\mathcal{X}\right)\right\}\,.\label{e18}
\end{align}
In the expression (\ref{e18}) the matrices $P$, $X$ and $\mathcal{X}$ should be
read as $10\times 10$ matrices, in accordance with the number of components of
the symmetric tensor $h_{\mu\nu}$. Adding up the results
of~Ref.\,\cite{Hooft1974} and (\ref{e18}) yields the counter lagrangian for the
case of pure gravity
\begin{align}
L_{count.} & =\frac{\sqrt{-g}}{8\pi^{2}(d-4)}\left[ \left(
\frac{7}{20}+\frac{a^{2}}{8}\right) R_{mn}R^{mn}\right.\notag\\
& \left. +\left( \frac{1}{120}+\frac{a}{8}\left( \frac{14}{3}%
+ a\right) \right) R^{2}\right] \,.\label{e18a}
This lagrangian gives the following correction to the time component of
the metric (diagram in Fig.\,\ref{f1}a):
\begin{align}
\underline{g}{}^{\ref{f1}a}_{00}= - \left[\frac{43}{15}+a\left(
\frac{14}{3}+2a\right) \right] \frac{G^{2}\hbar m}{\pi
c^{5}r^{3}}\,.\label{e19}
\end{align}
Using the additional vertices (\ref{e17}), (\ref{e17a}), it is easy to find the
contributions of the diagrams depicted in Figs.\,\ref{f1}b,\,c:
\begin{align}
\underline{g}{}^{\ref{f1}b}_{00} & =\left[ \frac{26}{3}+a\left(
\frac{37}{3}+2a\right)
\right] \frac{G^{2}\hbar m}{\pi c^{5}r^{3}}\,,\label{e20}\\
\underline{g}{}^{\ref{f1}c}_{00} & =-\left( \frac{5}{3}+5a\right)
\frac{G^{2}\hbar m}{\pi
c^{5}r^{3}}\,.\label{e21}%
\end{align}
Summing up the results (\ref{e19})--(\ref{e21}), we get the following
contribution of the diagrams in Fig.\,\ref{f1}:
\begin{align}
\underline{g}^{\ref{f1}a+\ref{f1}b+\ref{f1}c}_{00}=\left(
\frac{62}{15}-\frac{2}{3}\,a\right) \frac {G^{2}\hbar m}{\pi c^{5}r^{3}}\,.
\label{e22}
\end{align}
The $a$-independent part of Eq.\,(\ref{e22}) coincides with the result of
Ref.\,\cite{Bjerrum-Bohr:2002ks}. From Eq.\,(\ref{e22}) one can see that this
contribution is \textit{not reparametrization invariant}, whereas the sum of
the contributions of all the diagrams listed in Ref.\,\cite{Khriplovich:2004cx}
is reparametrization invariant, for the obvious reason stated above:
\begin{align}
\underline{g}^\mathrm{qu}_{00}=\frac{107}{30}\frac {G^{2}\hbar m}{\pi
c^{5}r^{3}}\,.\label{e23}
\end{align}
The parametrization dependence of the contribution of the diagrams in Fig.\,\ref{f1}
(i.\,e., diagrams containing a single graviton propagator attached to one of
the particles) is a direct consequence of the fact that, in general
relativity, the separation of these diagrams from the other loop diagrams is a
matter of convention only, because \textit{they do not contain the pole} in $q^2$
\footnote{In contrast to QED or QCD, these corrections lead to the
renormalization of operators of higher dimension than (\ref{e1}), for
example (\ref{e18a}); thus they bear no relation to the renormalization of
$G$.}. Being unrelated to the renormalization of the amplitude with a pole in
$q^2$, these diagrams should be considered in line with the other ones. As has
been shown, reparametrization transformations mix these diagrams with, for
example, the diagram proportional to $Sp\{PWPW\}$ (see Eqs.\,(\ref{e9a}),
(\ref{e11})). Due to Eq.\,(\ref{e12}) there is no difference on the mass shell
between the contribution of the diagrams in Fig.\,\ref{f1} and, for example,
that of the diagram proportional to $Sp\{PWPW\}$.
\section{Aside on classical corrections}
\begin{figure}
\includegraphics{pic2.eps}%
\caption{Tree diagrams \label{f2}}
\end{figure}
The correction (\ref{e23}) is the leading one in $l_p^2/r^2$, where $l_p$ is
the Planck length. From the standpoint of the leading corrections, the
parametrizations (\ref{e2}) and (\ref{e16}) are indeed indistinguishable because,
after averaging over the quantum fluctuations, the information about the
parametrization of these fluctuations is lost.
Therein lies the main difference between the leading quantum corrections and
the nonleading classical corrections of order $r^2_g/r^2$, where $r_g$ is the
Schwarzschild radius. Let us consider this aspect in detail. The diagram
depicted in Fig.\,\ref{f1}c contributes to the classical correction to the
Minkowski metric
\begin{align}
\underline{\underline{g}}{}^{\mathrm{cl}}_{00}=(2+a)\,\frac{G^{2}m^{2}}{c^{4}r^{2}}\,.
\label{e23a}%
\end{align}
However, this correction is actually induced by the tree diagram
(Fig.\,\ref{f2}a). The decomposition into the background field and its fluctuations
(\ref{e2}) makes no sense for such diagrams, because the integration momentum
(flowing through the ``legs with crosses'' in Fig.\,\ref{f2}a) is of the order of
$q$. It follows that the leading classical correction to the Minkowski metric
\begin{align}
\underline{g}{}^{\mathrm{cl}}_{00}=- 2\,\frac{Gm}{c^2 r}
\end{align}
is of the same order as the field $h_{\mu\nu}$; consequently, it serves no
purpose to distinguish between them. Since the correction (\ref{e23a}) is not the
leading one, it is possible to turn back to the initial variables
(\ref{e2}) rather than (\ref{e16}), i.\,e.,
\begin{align}
\underline{\underline{g}}{}_{00}^{\mathrm{harm}}=\underline{\underline{g}}{}^{\mathrm{cl}}_{00}
-\frac{a}{4}\left(\underline{g}{}^{\mathrm{cl}}_{00}\right)^{2}=\frac{2G^{2}m^{2}}{c^{4}r^{2}}\,,
\label{e24}%
\end{align}
\noindent where $\underline{\underline{g}}{}^{\mathrm{harm}}_{00}$ is the second order
term in the expansion of the Schwarzschild metric in the harmonic coordinates.
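Explicitly, substituting $\underline{g}{}^{\mathrm{cl}}_{00}=-2Gm/(c^{2}r)$ and
Eq.\,(\ref{e23a}) makes the cancellation of the $a$-dependence manifest:
\begin{align*}
\underline{\underline{g}}{}_{00}^{\mathrm{harm}}
=(2+a)\,\frac{G^{2}m^{2}}{c^{4}r^{2}}
-\frac{a}{4}\left(-\frac{2Gm}{c^{2}r}\right)^{2}
=(2+a-a)\,\frac{G^{2}m^{2}}{c^{4}r^{2}}
=\frac{2G^{2}m^{2}}{c^{4}r^{2}}\,.
\end{align*}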
It should be stressed once again that the quantum correction (\ref{e23}) is the
leading one; therefore the trick (\ref{e24}) does not permit turning back to
the former variables, i.\,e., the correction must be invariant by itself. An
important point is that the $a$-dependent contributions to the potential vanish
only in the sum of the diagrams Fig.\,\ref{f2}a and Fig.\,\ref{f2}b, i.\,e., even at
the level of classical gravity one cannot introduce a physically meaningful
``one-particle-irreducible potential'' (contrary to Section VIII of
Ref.\,\cite{Bjerrum-Bohr:2002ks}).
\begin{acknowledgments}
I would like to thank I.B. Khriplovich for his helpful comments and
discussions. The investigation was supported by the Russian Foundation for
Basic Research through Grant No. 05-02-16627-a.
\end{acknowledgments}
\section{Introduction}
Image captioning is the process of describing an image, accomplished by combining two fields of deep learning: computer vision and natural language processing (NLP). For many years researchers have examined methods to caption images automatically. This task involves recognizing the objects, the attributes, and their relationships in an image in order to generate fluent sentences, which makes it very challenging. Image captioning can serve social and security purposes: it can increase children's interest in early education, and security camera footage can be captioned in real time to prevent theft or hazards such as fire.
Image captioning is a sequence modeling problem that employs a CNN-RNN-based encoder-decoder framework. In this framework, the encoder extracts image features to obtain feature vectors, which are then passed through an RNN to generate the language description. Previously, researchers used this CNN-RNN approach \cite{25}, \cite{26}, \cite{30} to generate captions from images. However, this method has a drawback: due to the structure of the LSTM and
other RNNs, the current output depends on the hidden state at the previous moment. As a result, they can only operate step by step in time, which makes it implausible to parallelize the process of generating captions. The Transformer \cite{1} model proposed by Vaswani et al. solved this parallelism problem: since it is based on an attention mechanism, there is no sequence dependence, and it can run
in parallel during the training phase.
Recently, some researchers \cite{28}, \cite{29} have utilized the Transformer model instead of an RNN to generate captions from images. However, this research was conducted on English datasets. To see how this model performs on Bengali data, we utilized three Bengali datasets. The approach to captioning images in Bengali using the Transformer model is illustrated in Fig. \ref{fig7}. Furthermore, we compare its performance with the visual attention-based approach to captioning images in Bengali proposed by Ami et al. \cite{31}, shown in Fig. \ref{figatt}. Bengali is the $7^{th}$ most used language worldwide\footnote{\url{https://www.vistawide.com/languages/top\_30\_languages.htm}}, and many natives in parts of India and Bangladesh do not know English. Hence, it is also necessary to caption images in Bengali alongside English. The contributions of this paper are as follows:
\begin{itemize}
\item Three Bengali datasets were used to train the model.
\item A Transformer model was combined with a CNN to generate captions from images in Bengali.
\item A visual attention-based approach was employed to compare its performance with the transformer-based approach.
\item The performance of other models was compared with that of the transformer-based model for captioning images in Bengali.
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{pictures/transformermodelnew.png}
\caption{Visualization of how the Transformer model generates words from an input image. First, image features were extracted by the CNN and passed to the Encoder of the Transformer. Then the vocabulary was passed to the Decoder part of the Transformer. The Transformer then generated a Bengali caption for the corresponding image.}
\label{fig7}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{pictures/methodology_final.jpg}
\caption{Illustration of how our model learns words from an input image to generate a caption using the visual attention-based approach \cite{31}. First, image features were extracted using the CNN. Attention scores were then assigned to the image features, which were passed to the GRU. In parallel, tokenized words were passed to the embedding layer to convert the vocabulary to vectors. These word vectors were also passed to the GRU. The GRU then generated Bengali captions word by word using the word vectors and the attention-weighted image features.
}
\label{figatt}
\end{figure}
\section{Related Works} \label{sec2}
This section depicts the progress in image captioning. Hitherto, much research has been conducted and many models have been developed in order to obtain captions that are syntactically correct.
\subsection{Image captioning in Bengali}
Only seven works on image captioning in Bengali have been published so far. \cite{13} was the first paper on image captioning in Bengali, followed by \cite{14}, \cite{15}, \cite{16} and \cite{17}. Rahman et al. \cite{13} aimed to outline an automatic image captioning system in Bengali called 'Chittron'. Their model was trained to predict Bengali captions from input images one word at a time. The training process was carried out on 15700 images of their own dataset, BanglaLekha. In their model, the image feature vector and the word vectors, obtained by passing the words through an embedding layer, were fed to a stacked LSTM layer. One drawback of their work is that they utilized the sentence BLEU score instead of the corpus BLEU score. On the other hand, Deb et al. \cite{14} illustrated two models, Par-Inject Architecture and Merge Architecture, for image captioning in Bengali. In the Par-Inject model, image feature vectors were fed into an intermediate LSTM, and the output of that LSTM and the word vectors were combined and fed to another LSTM to generate captions in Bengali. In the Merge model, image feature vectors and word vectors were combined and passed to an LSTM without the use of an intermediate LSTM. They utilized 4000 images of the Flickr8k dataset, and the Bengali captions their models generated were not fluent. Paper \cite{15} used a CNN-RNN based model where VGG-16 was used as the CNN and an LSTM with 256 channels was used as the RNN. They trained their model on the BanglaLekha dataset, which has 9154 images. On the other hand, paper \cite{16} proposed a CNN-ResNet-50 merged model, consisting of a ResNet-50 image feature extractor and a 1D-CNN with word embedding for generating linguistic information. These two features were then given as inputs to a multimodal layer that predicts what to generate next
using the information at each time step. Furthermore, \cite{17} utilized the BNLIT dataset to implement a CNN-RNN model where both a BRNN and an LSTM were used as the RNN. M. Humaira et al. \cite{30} proposed a hybridized encoder-decoder approach where two word embeddings, fastText and GloVe, were concatenated. They also utilized beam search and greedy search to compute the BLEU scores. Additionally, A. S. Ami et al. \cite{31} employed visual attention with the encoder-decoder approach to caption images in Bengali. They added attention weights to image features and passed them to the GRU with word vectors to generate captions. However, they did not use corpus BLEU scores to evaluate the captions. We compare the corpus BLEU scores of the visual attention-based approach with those of the transformer-based approach.
\subsection{Image captioning in other Languages}
Previously, much research was conducted on English, as the available datasets were all in the English language. The authors in \cite{23} adapted the attention mechanism to generate captions. For the vision part of image captioning, VGG-16 was used by most papers \cite{24}, \cite{25}, \cite{26} as the CNN, but some also used AlexNet \cite{24}, \cite{26} or ResNet \cite{24} for feature extraction. Some researchers also utilized a BiLSTM \cite{24}. Alongside English, researchers have generated captions in Chinese \cite{21}, \cite{22}, Japanese \cite{18}, Arabic \cite{19} and Bahasa Indonesia \cite{20}.
\subsection{Image captioning using Transformer}
The Transformer model has previously been used for image captioning on English datasets. Li et al. \cite{5} investigated a Transformer-based sequence modeling framework for image captioning built only with attention layers and feedforward layers. Additionally, paper \cite{6} employed object spatial relationship modeling for image captioning, specifically within the Transformer encoder-decoder architecture, by incorporating the object relation module within the Transformer encoder. Paper \cite{7} proposed the augmentation of image captions in a dataset, including augmentation using BERT, to improve a solution to the image captioning problem. Furthermore, paper \cite{8} utilized two streams of transformer-based architecture, one for the visual part and another for the textual part. Paper \cite{27} used a transformer-based architecture consisting of an encoder-decoder model where the encoder is a CNN and the decoder is a Transformer; it also uses a stacked self-attention mechanism. Paper \cite{28} uses a CNN as an encoder to extract image features; the output of the encoder is a context vector containing the necessary information from the image, which is then put into a Transformer to generate the captions. On the other hand, paper \cite{29} introduced the image transformer for image captioning, where each transformer layer implements multiple sub-transformers to encode spatial relationships between image regions and decode the diverse information in the image regions.
\subsection{Image captioning using Attention Mechanism}
Visual attention on English datasets has been used previously by many researchers. In the past, two main types of attention were used in encoder-decoder models for image or video captioning: semantic attention, which applies attention to text, and spatial attention, which applies attention to images. Xu et al. \cite{9} proffered the first visual attention model in image captioning. They used ``hard'' pooling, which designates the most probable attentive region, or ``soft'' pooling, which averages the spatial features with attentive weights. Additionally, Chen et al. \cite{12} utilized spatial and channel-wise attention in a CNN. Paper \cite{10} also employed visual attention to generate captions. Lastly, paper \cite{11} employed a semantic attention model to combine the visual features with visual concepts in a recurrent neural network that generates the image caption.
\section{Model Architecture} \label{sec3}
We utilized the Transformer model and the attention-based model proposed by \cite{31} to caption images in Bengali. The Transformer model does not process a sequence in order, whereas the attention-based model does. Hence, the Transformer model allows parallel processing of captions. The Transformer model is illustrated in Fig. \ref{fig1} and the attention-based approach is shown in Fig. \ref{figatt}.
\subsection{Transformer-based Approach}
The Transformer \cite{1} is a deep learning model that utilizes the mechanism of attention to weight the influence of different parts of the input data. The Transformer is made of a stack of encoder and decoder components. In Fig. \ref{fig1}, the left block marked $N_{x}$ is the encoder and the right block marked $N_{x}$ is the decoder. Here, N is a hyperparameter that represents the number of encoder and decoder components. This model takes two inputs: the image features extracted by the CNN, in the Encoder, and the vocabulary formed from the list of target captions in the dataset, in the Decoder.
\begin{figure}[h!]
\centering
\includegraphics[width=.72\linewidth]{pictures/transformerfinalmodel.png}
\caption{Illustration of the transformer based model to caption image in Bengali.}
\label{fig1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=.72\linewidth]{pictures/OurApproachAttentionFinal.png}
\caption{Illustration of the attention-based approach \cite{31} to caption image in Bengali.}
\label{figatt}
\end{figure}
\subsubsection{Encoder}
InceptionV3 was used as the CNN in this experiment. The images in the dataset were first passed to the CNN. InceptionV3 extracts image features and passes them through a dense layer with ReLU as the activation function to project the image feature vectors from dimension d to $d_{model}$, where $d_{model}$ is the dimension of the word embedding. These image feature vectors were then summed with the positional encoding and passed through N encoder layers. Each of these encoder layers was made up of two sublayers: multi-head attention with a padding mask, and a point-wise feed-forward network. Masking ensures that the model does not treat padding as input. The output of the encoder was then passed to the decoder as K (key) and V (value). The multi-head mechanism is explained in Section \ref{sec7}.
\subsubsection{Decoder}
The decoder takes as input the target captions in the dataset, which were passed through an embedding that was summed with the positional encoding. Positional encoding is added to give the model information about the relative positions of the words in the sentence, based on the similarity of their meaning and their position in the sentence, in the d-dimensional space. The output of the summation was then passed through N decoder layers. Each of these decoder layers was made of three sublayers: masked multi-head attention with a look-ahead mask and a padding mask; multi-head attention with a padding mask, where V (value) and K (key) receive the encoder output as inputs and Q (query) receives the output of the masked multi-head attention sublayer; and a point-wise feed-forward network. The output of the decoder was then sent to the linear layer as input. Finally, softmax probabilities are used to predict one word at a time, using the output generated so far to decide what to produce next. This whole process is illustrated in Fig. \ref{fig1}.
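For concreteness, a minimal sketch of the sinusoidal positional encoding of the original Transformer \cite{1}, which we assume is the variant used here, is given below; the function name is ours.
\begin{verbatim}
# A minimal sketch of the sinusoidal positional encoding.
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, np.newaxis]      # (max_len, 1)
    i = np.arange(d_model)[np.newaxis, :]        # (1, d_model)
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    angles[:, 0::2] = np.sin(angles[:, 0::2])    # sine at even indices
    angles[:, 1::2] = np.cos(angles[:, 1::2])    # cosine at odd indices
    return angles                                # (max_len, d_model)
\end{verbatim}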
\subsection{Visual Attention-based Approach}
To focus only on the relevant parts of the image, a visual attention-based model was used. It is an encoder-decoder approach that processes a sequence in order. This model has three main parts. First, a Convolutional Neural Network (CNN) extracts features from images. Second, an attention mechanism is utilized to assign weights to the image features. The Bengali vocabulary is then converted to word vectors using an embedding layer. Finally, a Gated Recurrent Unit (GRU) \cite{35}, which is a sequence generator, takes the word vectors and the weighted image features as input and generates Bengali captions in order. This process is illustrated in Fig. \ref{figatt}.
\section{Dataset} \label{sec4}
The main aim of this research is to generate Bengali captions from images. To accomplish this task, a dataset in the Bengali language is needed, which must contain several images and a text file with the Bengali captions associated with each image. However, almost all the datasets available for image captioning are in English. The only previously available Bengali dataset is the BanglaLekha dataset. Since one dataset is not enough to validate the performance of the models, we created a new Bengali dataset named Bornon. Furthermore, we utilized the Flickr8k dataset by translating its English captions to Bengali, and we merged it with the BanglaLekha and Bornon datasets to form a new merged dataset. This merged dataset was constructed especially to test the Transformer model, since transformer-based models are data-hungry. For generating Bengali captions from images, these datasets were split into three parts: training, testing, and validation. The split ratio of each dataset used in our experiment is shown in Table \ref{tab1}.
\begin{table}
\caption{Distribution of data for the different Bengali datasets used in our experiment.}
\setlength{\tabcolsep}{19pt}
\begin{tabular}{c c c c c }
\hline
\hline
\textbf{Dataset}
&\textbf{Total Image}
&\textbf{Training}
&\textbf{Validation}
&\textbf{Testing}\\
\hline
Flickr8k & 8000 & 6000 (75\%) & 1000 (12.5\%) & 1000 (12.5\%) \\ \hline
BanglaLekha & 9154 & 7154 (78\%) & 1000 (11\%) & 1000 (11\%) \\\hline
Bornon & 4100 & 2900 (72\%) & 600 (14\%) & 600 (14\%) \\ \hline
merged & 21414 & 12850 (60\%) & 4282 (20\%) & 4282 (20\%) \\\hline
\end{tabular}
\label{tab1}
\end{table}
\subsection{Flickr8K\_BN}
The Flickr8k\footnote{https://www.kaggle.com/adityajn105/flickr8k/activity} dataset is a publicly available English dataset that contains 8091 images, of which 6000 (75\%) are employed for training, 1000 (12.5\%) for validation, and 1000 (12.5\%) for testing. Moreover, each image of the Flickr8K dataset has five ground-truth captions describing it, which adds up to a total of 40455 captions for 8091 images. For image captioning in Bengali, those 40455 captions were converted to the Bengali language using Google Translator\footnote{https://translate.google.com/}. Unfortunately, some of the translated captions were syntactically incorrect, as shown in Fig. \ref{figflickr}. Hence, we manually checked all 40455 translated captions and corrected them. Flickr8K-BN is the resulting Bengali Flickr8K dataset. Some images of the Flickr8k dataset along with their associated captions are shown in Fig. \ref{fig2}.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/flickrError.png}
\caption{Illustration of Bengali captions after being translated using Google Translator. Sentences marked in red indicate syntactically incorrect Bengali sentences, and the sentences inside the brackets are the manually corrected sentences.}
\label{figflickr}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/flickrdatasetnew.png}
\caption{Illustration of some images of the Flickr8k dataset along with their five Bengali captions.}
\label{fig2}
\end{figure}
\subsection{BanglaLekha}
We also utilized the BanglaLekha\footnote{https://data.mendeley.com/datasets/hf6sf8zrkc/2} dataset, which consists of 9154 images, of which 7154 (78\%) are employed for training, 1000 (11\%) for validation, and 1000 (11\%) for testing. It is the only previously available Bengali dataset. All its captions are human-annotated. One problem with this dataset is that it has only two captions associated with each image, resulting in 18308 captions for those 9154 images. Hence, its vocabulary size is lower than that of Flickr8k-BN: Flickr8k-BN consists of 12953 unique Bengali words, whereas BanglaLekha consists of only 5270 unique Bengali words. Some images of the BanglaLekha dataset along with their associated captions are shown in Fig. \ref{fig3}.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/banglalekhdatasetnew.png}
\caption{Illustration of some images of the BanglaLekha dataset along with their two Bengali captions.}
\label{fig3}
\end{figure}
\subsection{Bornon}
Due to the lack of Bengali image captioning datasets and to overcome the shortcomings of the existing Bengali image captioning dataset BanglaLekha \cite{13}, we created a new dataset named Bornon. The Bornon dataset consists of 4100 images, and each image has five captions describing it. Thus, there is a total of 20500 captions for 4100 images. Images were kept in a folder and the associated captions were kept in a text file. Some images of the Bornon dataset along with their associated captions are shown in Fig. \ref{fig4}.
The images of this dataset were taken from a personal photography club. All images were in jpg format. These images portray various objects such as animals, birds, people, food, weather, trees, flowers, buildings, cars, and boats. Frequent Bengali words in this dataset are illustrated in Fig. \ref{figchart}. Around 17 people who are native Bengali speakers were responsible for annotating and evaluating the captions.
Since only two captions are associated with each image in the BanglaLekha dataset, which reduces the vocabulary size, we provided five captions for each image in our Bornon dataset. The vocabulary size of the Bornon dataset is 6228 unique Bengali words for only 4100 images, whereas the BanglaLekha dataset has a vocabulary size of 5270 unique Bengali words for 9154 images. If the vocabulary size is small, repetition of words is observed in the predicted captions. However, these 4100 images are not enough to train a transformer-based model. Therefore, in the future, we plan to increase the number of images in our dataset.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/bornondatasetnew.png}
\caption{Illustration of some images of the Bornon dataset along with their five Bengali captions.}
\label{fig4}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/chartBornon.png}
\caption{Illustration of the most frequent Bengali words in the Bornon dataset.}
\label{figchart}
\end{figure}
\subsection{Merged Dataset}
The Transformer model is data-hungry: it performs well when a large amount of data is provided. However, all the Bengali datasets mentioned above contain a small amount of data, which is not enough to train the Transformer model. As a result, we merged the three datasets Flickr8k, BanglaLekha, and Bornon. This resulted in 21414 images, each with two associated captions, adding up to a total of 42828 captions. We took two captions from each dataset because the BanglaLekha dataset has only two Bengali captions describing each image. This merging led to a vocabulary size of 13416 unique Bengali words and images of various categories.
\section{Text Embedding} \label{sec5}
First, the maximum length of the target captions in each dataset was computed. Then all sentences shorter than the maximum length were padded with zeros. Afterward, the top 5000 unique Bengali words were selected from each dataset to tokenize the Bengali captions using Keras's text tokenizer. Since we cannot train a model on raw text, we converted these tokens to numeric form using text embedding. The embedding layer maps these tokens to vectors of dimension $d_{model}$. All the embedding vectors in one sentence are combined into a matrix and provided as input to the Transformer or the GRU.
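A minimal sketch of this preprocessing, assuming \texttt{captions} is a list of Bengali caption strings (variable names are illustrative):
\begin{verbatim}
import tensorflow as tf

top_k = 5000  # top unique Bengali words kept, as described above
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
                                                  oov_token='<unk>')
tokenizer.fit_on_texts(captions)
sequences = tokenizer.texts_to_sequences(captions)
# Pad every sequence with zeros up to the maximum caption length.
cap_vectors = tf.keras.preprocessing.sequence.pad_sequences(
    sequences, padding='post')
\end{verbatim}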
\section{Convolutional Neural Network} \label{sec6}
InceptionV3 \cite{2} was utilized as the Convolutional Neural Network (CNN) for extracting features from images for the transformer-based model. As this is not a classification task, the last layer of InceptionV3, a softmax layer, was removed from the model. All images were then preprocessed to the same size, 299$\times$299, before being fed into the model. The shape of the output of the last retained layer was 8$\times$8$\times$2048. The features were extracted, stored as .npy files, and then passed through the encoder.
Two different CNNs, InceptionV3 and Xception \cite{32}, were employed in the different experimental setups of the visual attention-based model. The last layer of both CNNs was removed. Like the transformer-based model, the attention-based model also took images of size 299$\times$299; images were therefore reshaped before being fed to the CNN. The extracted image features were then saved to .npy files, and attention weights were added to them.
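A minimal sketch of the InceptionV3 feature extraction described above (the image path is illustrative):
\begin{verbatim}
import tensorflow as tf

# InceptionV3 without its classification (softmax) head.
base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights='imagenet')
extractor = tf.keras.Model(base.input, base.layers[-1].output)

def load_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (299, 299))
    return tf.keras.applications.inception_v3.preprocess_input(img)

features = extractor(tf.expand_dims(load_image('example.jpg'), 0))
# features.shape == (1, 8, 8, 2048), matching the text above.
\end{verbatim}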
\section{Attention in Transformer} \label{sec7}
Self-attention is calculated using vectors. Three matrices, Query, Key, and Value, derived from each of the encoder's inputs, are needed to calculate self-attention. These matrices are obtained by multiplying the embedding matrix by trained weight matrices. Finally, the self-attention matrix is calculated using the following formula:
\begin{equation}
Z = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right) V\label{eq1}
\end{equation}
where Z is the self-attention matrix, Q is the Query matrix, K is the Key matrix, V is the Value matrix, and $d_{k}$ is the dimension of the Key matrix. The paper \cite{1} further refined the self-attention layer by adding a mechanism called ``multi-head'' attention, which performs the self-attention calculation eight times with different weight matrices.
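A minimal sketch of Eq. \ref{eq1}, with the padding mask passed as a tensor of zeros and ones (names are illustrative):
\begin{verbatim}
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    scores = tf.matmul(q, k, transpose_b=True)   # Q * K^T
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = scores / tf.math.sqrt(d_k)          # scale by sqrt(d_k)
    if mask is not None:
        scores += mask * -1e9                    # suppress masked positions
    weights = tf.nn.softmax(scores, axis=-1)     # attention weights
    return tf.matmul(weights, v)                 # Z = softmax(...) V
\end{verbatim}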
\section{Visual Attention Mechanism}
Two types of spatial attention are widely used: global attention \cite{34} and local attention. We employed local attention, also known as Bahdanau attention \cite{33}, because global attention is computationally expensive and unfeasible for large sentences. Global attention places attention on all source positions, whereas Bahdanau attention focuses on a small subset of the encoder's hidden states per target word. To implement Bahdanau attention, several steps were followed. First, the extracted image features were passed through a fully connected layer of the CNN encoder to produce a hidden state for each element. Then alignment scores were calculated from the hidden state produced by the decoder in the previous time step and the encoder outputs, using the formula shown in Eq. \ref{eq25}. This alignment score is the main component of the attention mechanism.
\begin{equation}
\mathrm{score}_{\mathrm{alignment}} = W_{combined}\cdot\tanh(W_{decoder}\cdot H_{decoder}+W_{encoder}\cdot H_{encoder})\label{eq25}
\end{equation}
The alignment scores were then passed through the softmax function and represented in a single vector of attention weights using Eq. \ref{eq26}. This vector was then multiplied with the image features to form the context vector using Eq. \ref{eq27}.
\begin{equation}
a_{jt} = Softmax(e_{jt})
\label{eq26}
\end{equation}
where $a_{jt}=\frac{e^{e_{jt}}}{\sum_{k=1}^{T_{x}}e^{e_{kt}}}$, such that $\sum_{j = 1}^{T_{x}} a_{jt} = 1$ and $a_{jt}\geq 0$, and $e_{jt}$ is the alignment score.
\begin{equation}
c_{t}=\sum _{j = 1}^{T_{x}} \alpha_{jt}h_{j}\label{eq27}
\end{equation}
where $c_{t}$ is the context vector, i.e., the weighted sum of the encoder outputs, with the weights $a_{jt}$ as defined above.\newline
Finally, this context vector was concatenated with the previous decoder output and fed into the decoder's Gated Recurrent Unit (GRU) to produce a new output.
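A minimal sketch of these steps as a Keras layer, following Eqs. \ref{eq25}--\ref{eq27}; the number of units is an illustrative assumption:
\begin{verbatim}
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.W_encoder = tf.keras.layers.Dense(units)
        self.W_decoder = tf.keras.layers.Dense(units)
        self.W_combined = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features: (batch, locations, dim) encoder image features
        # hidden:   (batch, units) previous decoder hidden state
        hidden = tf.expand_dims(hidden, 1)
        score = self.W_combined(tf.nn.tanh(
            self.W_encoder(features) + self.W_decoder(hidden)))
        weights = tf.nn.softmax(score, axis=1)   # attention weights
        context = tf.reduce_sum(weights * features, axis=1)  # context c_t
        return context, weights
\end{verbatim}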
\section{Gated Recurrent Units}
In the attention-based approach, a Gated Recurrent Unit (GRU) \cite{35} was employed as the sequence generator. Before being passed to the GRU, words were converted to vectors using the embedding layer. Afterward, these word embeddings of Bengali words were passed to the GRU. The GRU then predicts the next word in the sequence using the previous hidden state of the decoder, the previously predicted word, and the context vector calculated by the attention model. The equation used to predict the next word is depicted in Eq. \ref{eq28}.
\begin{equation}
s_{t} = RNN(s_{t-1},[e(\hat{y}_{t-1}),c_{t}])\label{eq28}
\end{equation}
where $s_{t}$ is the new state of the decoder, $s_{t-1}$ is the previous state of the decoder, $e(\hat{y}_{t-1})$ is the previously predicted word, and
$c_{t}$ is the context vector, i.e., the weighted sum of the input.
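A minimal sketch of one decoding step implementing Eq. \ref{eq28}; the layer sizes are illustrative assumptions:
\begin{verbatim}
import tensorflow as tf

vocab_size, embedding_dim, units = 5001, 256, 512  # illustrative sizes
embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
gru = tf.keras.layers.GRU(units, return_state=True)

def decode_step(prev_word_id, prev_state, context):
    x = embedding(prev_word_id)            # e(y_{t-1}): (batch, embed)
    x = tf.concat([context, x], axis=-1)   # [e(y_{t-1}), c_t]
    x = tf.expand_dims(x, 1)               # add a time axis
    output, state = gru(x, initial_state=prev_state)  # new state s_t
    return output, state
\end{verbatim}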
However, the sequence problem remains: the data must be processed in order, that is, the beginning of the sequence must be processed before the end. To address this issue, we also utilized the transformer-based model to caption images in Bengali.
\section{Hyperparameters} \label{sec8}
The techniques used in this experiment were implemented in a Jupyter Notebook. The models were developed with Keras 2.3.1 and TensorFlow 2.1.0. We ran our experiments on an NVIDIA RTX 2060 GPU, which offers 1920 CUDA cores and 6 GB of GDDR6 VRAM. With these settings, it took approximately three hours to train each experimental setup.
The number of layers and the number of heads in the Transformer were varied to tune the transformer-based model. Three, five, and seven were used as the number of layers, and one and two as the number of heads. Furthermore, internal validation was employed to test the generalization ability of the trained model. The model was trained for 50 epochs using the Adam optimizer with a custom learning rate scheduler according to Eq. \ref{eq2}, where 4000 was used as warmup\_steps. Additionally, SparseCategoricalCrossentropy was utilized as the loss function. The loss plot of one of the experimental setups using the transformer-based model is shown in Fig. \ref{fig5}.
\begin{equation}
l_{rate} = d^{-0.5}_{model} * min(step\_num^{-0.5},\quad step\_num * warmup\_steps^{-1.5} )\label{eq2}
\end{equation}
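A minimal sketch of this scheduler; the Adam hyperparameters shown are those of the original Transformer paper \cite{1}, assumed here, and $d_{model}=512$ is an illustrative value:
\begin{verbatim}
import tensorflow as tf

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super().__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        arg1 = tf.math.rsqrt(step)                  # step^-0.5
        arg2 = step * (self.warmup_steps ** -1.5)   # step * warmup^-1.5
        return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)

optimizer = tf.keras.optimizers.Adam(CustomSchedule(512), beta_1=0.9,
                                     beta_2=0.98, epsilon=1e-9)
\end{verbatim}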
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\linewidth]{pictures/bornontransformerloss3y2head.png}
\caption{Loss plot for transformer-based model over 50 epoch using 3 layers and 2 heads for Bornon dataset.}
\label{fig5}
\end{figure}
Two different CNNs, InceptionV3 and Xception, were used as hyperparameters in the different experimental setups of the visual attention-based model. Furthermore, internal validation was employed to test the generalization ability of the trained model. This model was trained for 50 epochs with a batch size of 64. Adam was used as the optimizer and SparseCategoricalCrossentropy for calculating the loss. Fig. \ref{figincloss} and Fig. \ref{figxcloss} demonstrate how the loss decreased over 50 epochs for InceptionV3 and Xception, respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\linewidth]{pictures/InceptionLoss.png}
\caption{Loss plot for visual attention-based model using InceptionV3 over 50 epoch using Flickr8k-BN dataset.}
\label{figincloss}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\linewidth]{pictures/XceptionLoss.png}
\caption{Loss plot for visual attention-based model Xception over 50 epoch using Flickr8k-BN dataset.}
\label{figxcloss}
\end{figure}
\section{Experimental Results} \label{sec9}
After generating captions, the most important part is evaluating them to verify how similar the generated captions are to human-annotated ones. Hence, we used two evaluation metrics, BLEU and METEOR, to assess the accuracy of our proposed models.
\subsection{BLEU}
Bilingual Evaluation Understudy (BLEU) \cite{4} is the most widely used metric nowadays to evaluate the quality of generated text. It indicates how natural generated sentences are compared with human-generated sentences. It is predominantly utilized to evaluate the performance of machine translation. Sentences are compared based on modified n-gram precision to compute BLEU scores, using the following equations:
\begin{equation}
P(i) = \frac{Matched(i)}{H(i)}\label{eq3}
\end{equation}
P(i) is the precision for each i-gram, where i = 1, 2, ...N: the percentage of the i-gram tuples in the hypothesis that also occur in the references. H(i) is the number of i-gram tuples in the hypothesis, and Matched(i) is computed using the following formula:
\begin{equation}
Matched(i) = \sum_{t_{i}}\min{\{C_{h}(t_{i}), \max_{j}C_{hj}(t_{i})\}}\label{eq4}
\end{equation}
where $t_{i}$ is an i-gram tuple in hypothesis h, $C_{h}(t_{i})$ is the number of times $t_{i}$ occurs in the hypothesis, and $C_{hj}(t_{i})$ is the number of times $t_{i}$ occurs in reference j of this hypothesis.
\begin{equation}
\rho = exp\{\min(0, \frac{n-L}{n})\}\label{eq5}
\end{equation}
where $\rho$ is the brevity penalty to penalize short translations, n is the length of the hypothesis, and L is the length of the reference. Finally, the BLEU score is computed as:
\begin{equation}
BLEU = \rho \{\prod_{i=1}^{N}P(i)\}^{\frac{1}{N}}\label{eq6}
\end{equation}
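In practice, these scores can be computed at the corpus level; a minimal sketch with NLTK, assuming tokenized references and hypotheses:
\begin{verbatim}
from nltk.translate.bleu_score import corpus_bleu

# references: list of lists of tokenized reference captions per image
# hypotheses: list of tokenized predicted captions
bleu1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0))
bleu2 = corpus_bleu(references, hypotheses, weights=(0.5, 0.5, 0, 0))
bleu3 = corpus_bleu(references, hypotheses, weights=(1/3, 1/3, 1/3, 0))
bleu4 = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25))
\end{verbatim}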
\subsection{METEOR}
Metric for Evaluation of Translation with Explicit Ordering (METEOR) \cite{3} is based on unigram matching between the reference and the machine-predicted sentences, using the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It was formulated to mend some of the issues found in the BLEU metric. Unigram precision P is calculated as:
\begin{equation}
P = \frac{m}{w_{t}}\label{eq7}
\end{equation}
where m is the number of unigrams in the candidate translation that are also found in the reference translation, and $w_{t}$ is the number of unigrams in the candidate translation. Unigram recall R is computed as follows:
\begin{equation}
R = \frac{m}{w_{r}}\label{eq8}
\end{equation}
where m is as above, and $w_{r}$ is the number of unigrams in the reference translation. Precision and recall are combined using the harmonic mean, with recall weighted 9 times more than precision, as shown in the equation below:
\begin{equation}
F_{mean} = \frac{10PR}{R + 9P}\label{eq9}
\end{equation}
To account for congruity over larger segments that appear in both the reference and the candidate sentence, a penalty p is added, calculated using the following equation:
\begin{equation}
p = 0.5 (\frac{C}{u_{m}})^{3}\label{eq10}
\end{equation}
where C is the number of chunks, and $u_{m}$ is the number of unigrams that have been mapped. Finally, the METEOR score M for a segment is calculated as shown in the equation below.
\begin{equation}
M = F_{mean} (1-p)\label{eq11}
\end{equation}
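A minimal sketch implementing Eqs. \ref{eq7}--\ref{eq11} for a single segment, given the match statistics (the function name and arguments are illustrative):
\begin{verbatim}
def meteor_segment(m, w_t, w_r, chunks, mapped):
    # m: matched unigrams; w_t, w_r: hypothesis/reference lengths;
    # chunks: number of chunks; mapped: number of mapped unigrams.
    if m == 0:
        return 0.0
    precision = m / w_t
    recall = m / w_r
    f_mean = 10 * precision * recall / (recall + 9 * precision)
    penalty = 0.5 * (chunks / mapped) ** 3
    return f_mean * (1 - penalty)   # final score M
\end{verbatim}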
\subsection{Result Analysis}
We computed BLEU and METEOR scores for every experimental setup. The scores of the transformer-based model are shown in Table \ref{tab2} and the scores of the visual attention-based model are shown in Table \ref{tab3}. The highest BLEU scores for all datasets using the transformer-based model were obtained using 3 layers. On the other hand, METEOR was higher for the BanglaLekha dataset with 7 layers and higher for the Bornon dataset with 3 layers. For the merged dataset, the METEOR scores showed no clear trend with the number of layers. These scores are better than the BLEU scores obtained by papers \cite{14}, \cite{15} and \cite{16}. The Bornon and BanglaLekha datasets performed slightly better than the merged dataset with the transformer-based method.
From Table \ref{tab3} it can be seen that the experimental setups of the visual attention-based model with Xception as the CNN gave higher scores. However, the overall BLEU scores are lower than those of the transformer-based model, since the visual attention-based model used a GRU as the sequence generator whereas the transformer-based model used the Transformer itself. This supports the view that improving only the computer vision side of image captioning models will not improve the results: since image captioning is a mixture of two fields, computer vision and NLP, equal importance must be given to both to get better results. In Table \ref{tab3} we report the corpus BLEU scores, which was not done by \cite{31}.
\begin{table}
\caption{Results of the transformer-based model using InceptionV3 as the CNN over 50 epochs.}
\setlength{\tabcolsep}{12pt}
\begin{tabular}{c c c c c c c c }
\hline
\hline
\textbf{Dataset}&
\textbf{Layers(N)}&
\textbf{Heads}&
\multicolumn{4}{c}{\textbf{BLEU}}&
\textbf{METEOR}\\\cline{4-7}
&
&
&
\textbf{1}&
\textbf{2}&
\textbf{3}&
\textbf{4}&
\\
\hline
& & 1 & \textbf{0.665} & \textbf{0.556} & \textbf{0.476} & \textbf{0.408} & 0.255 \\ \cline{3-8}
& 3 & 2 & 0.662 & 0.548 & 0.462 & 0.389 & 0.241 \\ \cline{2-8}
& & 1 & 0.648 & 0.546 & 0.470 & 0.402 & 0.251 \\ \cline{3-8}
BanglaLekha & 5 & 2 & 0.660 & 0.557 & 0.480 & 0.415 & 0.263 \\ \cline{2-8}
& & 1 & 0.633 & 0.541 & 0.471 & 0.409 & 0.267 \\ \cline{3-8}
& 7 & 2 & 0.644 & 0.548 & 0.476 & 0.412 & \textbf{0.268} \\ \hline
& & 1 & \textbf{0.696} & \textbf{0.589} & \textbf{0.507} & \textbf{0.439} & \textbf{0.361} \\ \cline{3-8}
& 3 & 2 & 0.687 & 0.572 & 0.486 & 0.415 & 0.346 \\ \cline{2-8}
& & 1 & 0.688 & 0.583 & 0.502 & 0.437 & 0.359 \\ \cline{3-8}
Bornon & 5 & 2 & 0.683 & 0.575 & 0.493 & 0.425 & 0.340 \\ \cline{2-8}
& & 1 & 0.684 & 0.567 & 0.478 & 0.405 & 0.340 \\ \cline{3-8}
& 7 & 2 & 0.665 & 0.556 & 0.477 & 0.411 & 0.346 \\ \hline
& & 1 & \textbf{0.621} & \textbf{0.492} & \textbf{0.398} & \textbf{0.326} & 0.196 \\ \cline{3-8}
& 3 & 2 & 0.624 & 0.494 & 0.400 & 0.329 & 0.201 \\ \cline{2-8}
& & 1 & 0.616 & 0.482 & 0.384 & 0.311 & 0.189 \\ \cline{3-8}
merged & 5 & 2 & 0.607 & 0.483 & 0.391 & 0.323 & \textbf{0.200} \\ \cline{2-8}
& & 1 & 0.592 & 0.468 & 0.376 & 0.308 & 0.187 \\ \cline{3-8}
& 7 & 2 & 0.602 & 0.481 & 0.390 & 0.322 & 0.197 \\ \hline
\end{tabular}
\label{tab2}
\end{table}
\begin{table}
\caption{Results of the \textbf{visual attention-based model} using a GRU as the sequence generator over 50 epochs.}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{c c c c c c c c }
\hline
\hline
\textbf{Dataset}
&\textbf{CNN}
&\multicolumn{4}{c}{\textbf{BLEU}}
&\textbf{METEOR}\\\cline{3-6}
\textbf{}
&\textbf{}
&\textbf{1}
&\textbf{2}
&\textbf{3}
&\textbf{4}
&\textbf{}\\
\hline
Flickr8k-BN & InceptionV3 & 0.543 & 0.445& 0.362 & 0.294 & \textbf{0.161} \\ \cline{2-7}
& Xception & \textbf{0.546} & \textbf{0.447} & \textbf{0.364} & \textbf{0.296} & 0.156 \\ \hline
BanglaLekha & InceptionV3 & 0.567 & 0.460 & 0.385 & 0.319 & 0.204 \\ \cline{2-7}
& Xception & \textbf{0.570} & \textbf{0.462} & \textbf{0.387} & \textbf{0.322} & \textbf{0.208} \\ \hline
Bornon & InceptionV3 & 0.596 & 0.475 & 0.390 & 0.324 & 0.314 \\ \cline{2-7}
& Xception & \textbf{0.605} & \textbf{0.492} & \textbf{0.412} & \textbf{0.351} & \textbf{0.348} \\ \hline
\end{tabular}
\label{tab3}
\end{table}
We tested the transformer-based model and the visual attention-based model using a test set containing images that were not present in the training or validation sets. The Bengali captions generated by various experimental setups of the transformer-based model on the three datasets BanglaLekha, Bornon, and merged are shown in Fig. \ref{fig13}. Additionally, the Bengali captions generated by the visual attention-based model on the Flickr8k-BN, BanglaLekha, and Bornon datasets are illustrated in Fig. \ref{fig14}. From these figures it can be seen that the transformer-based model gave considerably more accurate Bengali captions than the attention-based model.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/transformercapfinal.png}
\caption{Illustration of Bengali captions generated by Transformer-based models.}
\label{fig13}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/attentioncaptions.png}
\caption{Illustration of Bengali captions generated by visual attention-based models.}
\label{fig14}
\end{figure}
Since both the transformer-based model and the visual attention-based model were trained on the BanglaLekha and Bornon datasets, a brief comparison of the captions generated for the same test images of these datasets is depicted in Fig. \ref{fig15}. From this figure it can be seen that the visual attention-based model generated Bengali captions focused on the objects present in the image, whereas the transformer-based model gave a general Bengali caption describing the whole image. The performance of the transformer-based model was compared with that of other papers, and the results are illustrated in Table \ref{tab4}. This table shows that the transformer-based model performed better than other research on Bengali image captioning using the same datasets.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{pictures/compattandtrans.png}
\caption{Illustration of Bengali captions generated by visual attention-based models and transformer-based models. Visual attention-based Bengali captions were generated using Xception and the transformer-based Bengali captions were generated using 3 layers and 4 heads.}
\label{fig15}
\end{figure}
\begin{table}
\caption{A brief comparison of BLEU scores for existing models and the transformer-based model.}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{c c c c c c c c}
\hline
\hline
\textbf{Dataset}&
\textbf{Model}&
\multicolumn{4}{c}{\textbf{BLEU}}\\ \cline{3-6}
&
&
\textbf{1}&
\textbf{2}&
\textbf{3}&
\textbf{4} \\
\hline
& VGG-16+LSTM \cite{15} & 0.667 &0.436 &0.315 &0.238 \\ \cline{2-6}
BanglaLekha& CNN-ResNet-50 \cite{16} & 0.651 &0.426 &0.278 &0.175 \\ \cline{2-6}
& Transformer Model & \textbf{0.665} & \textbf{0.556} & \textbf{0.476} & \textbf{0.408} \\\hline\\
Flickr8k(4000 images) & Inception+LSTM \cite{14}& 0.62 &0.45 &0.33 &0.22\\
\hline
\end{tabular}
\label{tab4}
\end{table}
\section{Conclusions} \label{sec10}
In our work, we employed a visual attention-based approach that assigns attention weights to image features. As this is a traditional encoder-decoder approach, we compared it with a transformer-based approach. In the transformer-based approach, we feed the feature vectors extracted by the CNN together with the target Bengali captions into the Transformer model. This model learns to generate Bengali captions using a multi-head attention mechanism. Not only can the model improve the original performance, it also speeds up training by allowing parallelism. It was then validated that the transformer-based method indeed performs better than the visual attention-based method. Hence, in the future, the transformer-based model can replace the traditional encoder-decoder architecture, which will enhance the performance and efficiency of caption generation from images. We also utilized various Bengali datasets to test both approaches. This demonstrates that the Transformer model can be used to generate captions from images in languages other than English.
\bibliographystyle{unsrtnat}
\section{Introduction}\label{Introduction}
The recognition of the key role entanglement plays in the
understanding of the radical departure of quantum from classical
physics came historically remarkably late. In the early years of
quantum mechanics starting from the mid twenties of the last
century, often referred to as the `golden years', this aspect was
not quite in the center of activities: Researchers were occupied
with successfully applying the new theory to a wide range of
physical phenomena, continuously adding to the impressive list of
theoretical studies matching experimental findings. It was not
until the year 1935, when Einstein, Podolsky and Rosen expressed
their dissatisfaction with the state of the theory, constructing a
Gedanken experiment involving a measurement setup in a `separated
laboratories paradigm' that should identify the description
provided by quantum mechanics as incomplete \cite{EPR35}. This
Gedanken experiment involved local measurements on constituents of
a composite quantum system prepared in an entangled state in a
distributed setup. In the same year, Schr{\"o}dinger, also one of
the major contributors to the theory, formulated some of the
mathematical implications of entanglement on the statistics of
measurement outcomes, and actually coined the term `entanglement',
both in German and in English \cite{schroedinger35}. The program envisioned by Einstein and
colleagues -- to demonstrate the incompleteness of a quantum
mechanical description -- may be fairly said to have essentially
failed. They nevertheless could pinpoint the aspect of quantum
theory in which it would crucially depart from a local classical
statistical theory.
This was fully realized in the 1960ies, when Bell reconsidered the
situation discussed by Einstein, Podolsky and Rosen, restated in a
setting involving spin-$1/2$ degrees of freedom due to Bohm
\cite{Bohm51}. He demonstrated the validity of bounds to
correlation functions of measurement outcomes of dichotomic
measurements, provided that they would be resulting from a `local
realistic model', meaning from a {\it local classical statistical
theory} \cite{Bell64}. These bounds are indeed violated by the
predictions of quantum mechanics. After the advent of reliable
sources of entangled states, many experiments were performed, all
consistent with the predictions of quantum theory, and none with
the bounds put forth in form of {\it Bell's inequalities} (see,
e.g., ref.~\cite{As81,Zei99}). It can be said that it is in the
role of entanglement where the departure of
quantum from classical physics is most manifest,
indicating that the intrinsic
randomness in quantum theory can
{\it not} be thought of as resulting from mere classical ignorance
in an underlying classical statistical theory.
In the meantime, it has become clear that entanglement plays a
central role also from a different perspective: it can serve as an
essential ingredient in applications of {\it quantum information
processing} \cite{NielsenBook}. For example, entanglement is
required for an established key to be unconditionally secure in
quantum key distribution \cite{BB84,Ekert91,Shor00,Curty04}.
Entanglement is also believed to be responsible for the remarkable
speedup of a quantum computer compared to classical computers
\cite{Braunstein99,Lloyd00,JL02,Miy01,Vidal03}, the underlying logic of which is based on the laws of classical
physics \cite{Feynman,Deutsch,Shor97,Grover}.
Formally, entanglement is defined by what it is not: a quantum
state is called entangled, if it is not classically correlated.
Such a {\it classically correlated state} is one that -- in the
distant laboratories paradigm -- can be prepared using physical
devices locally, where all correlations are merely due to shared
classical randomness. The kind of correlations in such
preparations are hence of the same origin as ones that one can
realize in classical systems by means of communicating over
telephone lines. This is in sharp contrast to the situation in
entangled states, which cannot be produced using local physical
apparata alone. This facet of entanglement hence concentrates on
the preparation procedure. The concept of {\it distillable
entanglement} \cite{IBMPure} in turn grasps directly entanglement
as a resource, and asks whether maximally entangled states can be
extracted, distilled, within a distant laboratories paradigm.
Part of the theoretical challenge of understanding entanglement
lies in a fact that is also (at least partly) responsible for
the quantum computational speedup: {\it state space is big}. The
dimension of state space, the set of all quantum states
corresponding to legitimate preparation procedures, grows very
rapidly with the number of constituents in a composite quantum
system. In fact, it grows exponentially. Consequently, in the
whole development of quantum information theory, it has been a
very useful line of thought to investigate situations where the
involved quantum states could be described with a smaller number
of parameters, while still retaining the essential features of the
problem at hand. So certain `theoretical laboratories' facilitated
the detailed investigation of phenomena, properties, and protocols
that arise in quantum information theory, while keeping track of a
set of states that is at most polynomially growing in dimension.
So-called stabilizer states \cite{Gottesman,NielsenBook}, Werner
states \cite{We89}, matrix-product states \cite{MPS}, or
quasi-free states \cite{Gaussian} are instances of such
`laboratories'. In the center of this review article are the {\it
graph states}, the structure of which can be described in a
concise and fruitful way by mathematical graphs. They have been
key instrumental tools in the development of models for quantum
computing, of quantum error correction, and of grasping the
structure of bi- and multi-partite entanglement.
{\it Graph states} are quantum states of a system embodying
several constituents, associated with a graph\footnote{Note that
in the literature one finds several inequivalent concepts of
quantum states that are in one way or another associated with
graphs \cite{Graph,Graph2}. For example, entanglement sharing
questions have been considered in a multi-partite quantum system
based on quantum states defined through mathematical graphs, see
refs.~\cite{Rings,Buzek,Zanardi,Parker}.}. This graph may be
conceived as an interaction pattern: whenever two particles,
originally spin-$1/2$ systems, have interacted via a certain
(Ising) interaction, the graph connecting the two associated
vertices has an edge. Hence, the adjacency matrix of a simple
graph, a symmetric $N\times N$ matrix for a system consisting of
$N$ qubits with entries taken from $\{0,1\}$, fully characterizes
any graph state at hand \cite{Briegel01,OneWay3,
He04,Du03a,Nest04a}.
In this sense the graph can be understood as a summary of the
interaction history of the particles. At the same time, the
adjacency matrix encodes the stabilizer of the states, that is, a
complete set of eigenvalue equations that are satisfied by the
states \footnote{In some sense, this graphical representation
plays a similar pedagogical role as Feynman diagrams in quantum
electrodynamics: The latter provide an intuitive description of
interaction processes in spacetime, but, at the same time, they
have a concise mathematical meaning in terms of the corresponding
propagator in an expansion of the scattering operator.}. Thus
graph states are actually stabilizer states \cite{Gottesman}.
Indeed, this class of graph states plays a central role in quantum
information theory.
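To make this concrete, here is a minimal sketch (the function name is ours) of the standard correspondence between the adjacency matrix of a simple graph and the stabilizer generators $K_a = X_a \prod_{b \in N(a)} Z_b$ of the associated graph state, where $X_a$ and $Z_b$ denote Pauli operators acting on qubits $a$ and $b$:
\begin{verbatim}
import numpy as np

def stabilizer_generators(adjacency):
    # One Pauli string per vertex a: X on a, Z on its neighbors.
    # Assumes a simple graph, i.e., a zero diagonal.
    n = adjacency.shape[0]
    generators = []
    for a in range(n):
        paulis = ['I'] * n
        paulis[a] = 'X'
        for b in np.flatnonzero(adjacency[a]):
            paulis[b] = 'Z'
        generators.append(''.join(paulis))
    return generators

# Example: the three-vertex line graph 0-1-2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(stabilizer_generators(A))   # ['XZI', 'ZXZ', 'IZX']
\end{verbatim}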
To start with, graph states form a universal resource for quantum
computing based on measurements \cite{OneWay1,OneWay2,OneWay3,
OneWay5}. In such {\it one-way computing}, one starts off with a
cluster state, which is a specific instance of a graph state, and
performs von-Neumann measurements at single sites associated with
vertices. In fact, it was in the form of such {\it cluster states}
\cite{Briegel01}, when graph states have first been considered, in
a narrower sense, with respect to a graph reflecting a cubic
lattice. On the subspace that is retained any unitary can be
implemented, thereby realizing universal computation without the
need of making use of any controlled two-system quantum gates. The
cluster state hence forms a universal resource for quantum
computation. The performed measurements introduce a probabilistic
aspect to the scheme; yet, the overall set-up of the one-way
computer is deterministic, as the process can be de-randomized by
means of appropriate local unitaries taken from the Pauli group
applied before the readout step \cite{OneWay1,OneWay3}.
Such one-way computing is interesting at least from two
perspectives: on the one hand, it provides a {\it computational
model} \cite{OneWay2} different from the original gate-based
model, which resembles more closely the gate model of classical
computation. In fact, questions of complexity and simulatability
often become more transparent in the picture of the one-way
computer than in the gate model. For instance, operations from a
certain set of operations, the so-called {\it Clifford operations}
\cite{NielsenBook}, can be kept track of classically in an
efficient manner. When measuring Pauli operators in the course of
the computation, the states of the whole system will be a sequence
of graph states that can be described by a sequence of adjacency
matrices \cite{OneWay3}. However, if a non-Clifford operation is
applied in the first step of the gate-based model, then, unlike in
the picture of the one-way computer, it is no longer obvious in
the gate model that the dynamics of the rest of the network can be
described efficiently.
On the other hand, there are good reasons to believe that the
undoubtedly tremendous challenges associated with actually
realizing a quantum computer can be lessened when relying on an
{\it architecture based on graph states}. In many quantum systems
in a lattice, nearest-neighbor interactions are natural, rendering
the preparation of cluster states through one dynamical process a
relatively feasible step. Furthermore, and maybe more importantly,
one realizes a certain distinction between the process of creating
entanglement and the process of consuming it. Hence, even if one
exploits a lossy or even probabilistic process in order to prepare
the graph states, in many set-ups one can, in principle, end up
with graph states free of errors, which are then used as a resource for
the actual computation. Even if the entangling operations are
themselves faulty, e.g. subject to photon loss in a linear optical
implementation, fault tolerant schemes can be devised, in
principle up to a loss rate of several tens of percent.
In quantum-gate-based quantum computation, graph states also play
a prominent role as {\it codewords in quantum error correction},
allowing for reliable storage and processing of quantum
information in the presence of errors. This is done by
appropriately encoding quantum information in quantum states of a
larger number of quantum systems. This is the second way in which
the concept of graph states originally came into play, namely in
the form of {\it graph codes} \cite{Schlinge02a,
SchlingeHabilschrift}. These instances of quantum codes are determined by the underlying
graph.
Finally, the idea of the `theoretical laboratory' itself allows
for a wide range of applications. Aspects of bi-partite and, in
particular, {\it multi-partite entanglement} are typically
extraordinarily hard to grasp \cite{Du99}. This is again partially
due to the fact that state space is so rapidly increasing in
dimension with a larger number of constituents. The {\it decision
problem} whether a mixed state is entangled or classically
correlated is already provably a computationally hard problem in
the dimension of the constituents. Apart from the {\it
classification} of multi-particle entanglement, even for pure
states, a complete meaningful {\it quantification} of
multi-particle entanglement is yet to be discovered \cite{LiPo98}.
Already at the outset there is the question of the units in which
to phrase any result on quantification: there is no multi-particle
analog of the maximally entangled pair of spin-$1/2$-particles, to
which any pure bi-partite entanglement is equivalent: any pure
state can be transformed into such maximally entangled pairs in an
asymptotically lossless manner, one of the key results of
entanglement theory \cite{pureState,Th02}. This means that the
achievable rate of distillation and formation is the same. The multi-partite analogue of such a {\it reversible
entanglement generating set} (as it is formed by the maximally entangled qubit pair in the bi-partite setting) has not yet been identified; at least, none of finite
cardinality is known\footnote{For a brief review on multi-particle
entanglement, see, e.g., ref.~\cite{Eisert05}.}.
Within the setting of graph states, some of the intriguing
questions related to {\it aspects of multi-particle entanglement}
can be addressed, expressing properties of entanglement in terms
of the adjacency matrix. Central tasks of interconverting
different forms of multi-particle entanglement, in particular of
{\it `purifying'} it, can be and have been studied: here, the aim is
to extract pure graph states from a supply of noisy ones, having
become mixed through a decoherence process \cite{Du03a}. Based on
such purification protocols, even protocols in quantum
cryptography have been devised, making use of the specific
structure of this class of quantum states \cite{Lo04,Du05c}. In
turn, graph states form an ideal platform to study the {\it
robustness of multi-particle entangled states} under decoherence
processes, which are unavoidable in realistic settings anyway. In a
nutshell, multi-particle entangled states may exhibit a surprising
robustness with respect to decoherence processes, independent from
the system size \cite{Du04b}.
Finally, questions of inconsistency of quantum mechanics with
local classical statistical theories become very transparent for
graph states \cite{Gue04}. In all of the above considerations, the
class of graph states is sufficiently restricted to render a
theoretical analysis feasible, yet often complex enough to
appropriately grasp central aspects of the phenomenon at hand.
Departing slightly from the original formulation, {\it weighted
graph states} have been considered \cite{CDHB05}, where the interaction is no
longer a fixed Ising interaction with a constant weight. Such
states can be thought of as resulting from a {\it semi-classical
Boltzmann gas}: classical particles carrying a quantum spin-$1/2$
degree of freedom would in an idealized description give rise to
such a graph state through collision of particles. Such weighted
graph states find numerous applications in the description of
random quantum systems. They can also be taken as a basis set of
states to approximate ground states of many-body systems in a
variational principle. Finally, owing to the fact that,
mathematically, discrete and continuous Weyl systems have much in
common, graph states in {\it harmonic infinite-dimensional
quantum systems} have been studied that closely resemble graph
states for finite-dimensional quantum
systems \cite{HC,Pl04}.
This review article aims at providing a self-contained
introduction to the theory of graph states in most of these
aspects, both conceived as a tool and as a central resource in
quantum information theory in their own right. We will introduce
the basic notions of graph states, discuss possible descriptions
of their entanglement content, their interconversion and
purification, and various applications in the context of quantum
information science.
\subsection{Outline}
We start with a detailed introduction of graph states, whose entanglement properties are analyzed in the following chapters. After setting basic notations in sec.~\ref{GS_Notations} that are frequently used throughout this article, we give essentially two alternative definitions for graph states in sec.~\ref{DefOfGS}, namely, in terms of the underlying interaction pattern and in terms of the stabilizer. We illustrate how elementary properties of a graph state and basic operations, such as (Pauli) measurements, on these states, can be phrased concisely in terms of the underlying graph. It is shown that the action of Clifford operations on graph states (sec.~\ref{Pauli measurements}) and the reduced states for graph states (sec.~\ref{Reduced_GS}) can be determined efficiently from the underlying graph. These relations will allow for a classification of graph states in sec.~\ref{Local_Equivalence} and for an efficient computation of entanglement properties in sec.~\ref{EntanglementGS}. We discuss some examples and applications to quantum error correction, multi--party quantum communication, and quantum computation in sec.~\ref{GS_Examples}. We briefly discuss some generalizations of
graph states in the language of discrete Weyl systems in
sec.~\ref{DefOfGS}.
Sec.~\ref{Implementations} also contains a short review of possible realizations of graph states in
physical systems.
In sec.~\ref{Local_Equivalence} we will then discuss the classification of graph states in terms of equivalence classes under different types of local operations and provide a complete classification for graph states with up to seven vertices. The results indicate that graph states form a {\em feasible and sufficiently rich class of multi-party entangled states}, which can serve as good starting point for studies of multi-party entanglement.
In sec.~\ref{EntanglementGS} we discuss various aspects of entanglement in graph states. We briefly review some results about the `non-classicality' of graph states and how their entanglement can be detected with Bell inequalities. The genuine multi-particle entanglement of graph states is characterized and quantified in terms of the Schmidt measure, to which we provide upper and lower bounds in graph theoretical terms.
Finally, we introduce two possible extensions of graph states. On the one hand, in sec.~\ref{WeightedGS} the class of {\em weighted graph states} is introduced, which comprises particles that interact for different times and provides an interesting model for the study of the entanglement dynamics in many--particle systems. Here, we consider $N$ initially disentangled spins, embedded in a ring or $d$-dimensional lattice of arbitrary geometry, which interact via some long--range Ising--type interaction. We investigate relations between entanglement properties of the resulting states and the distance dependence of the interaction in the limit $N \to \infty$ and extend this concept to the case of spin gases.
On the other hand, in sec.~\ref{GS_decoherence}, {\em graph diagonal states} serve as standard forms for mixed states and occur naturally whenever pure graph states are exposed to decoherence. We show that the lifetime of (distillable) entanglement for GHZ-type superposition states decreases with the size of the system, while for a class of other graph states the lifetime is independent of the system size. These results are largely independent of the specific decoherence model. Finally, the concept of entanglement purification is applied to graph states and possible applications to quantum communication are described.
\subsection{Notations}\label{GS_Notations}
\begin{wrapfigure}[9]{r}{0.27\textwidth}
\vspace{-0.5cm}
\hspace{0.05cm}\includegraphics[width=0.25\textwidth]{Ring.eps}
\caption{This ring depicts a graph with $7$ vertices.}\label{Ring}
\end{wrapfigure}
At the basis of this review article lies the concept of a graph \cite{Graph,Graph2}. A graph is a collection of vertices together with a description of which vertices are connected by an edge. Each graph can be represented by a diagram in a plane, where a vertex is represented by a point and each edge by an arc joining two not necessarily distinct vertices.
In this pictorial representation many concepts related to graphs can be visualized in a transparent manner. In the context of the present article, vertices play the role of physical systems, whereas edges represent an interaction.
Formally, an (undirected, finite) {\em graph}\index{graph $G$} is a pair
\begin{equation}
G=(V,E)
\end{equation}
of a finite set $V=\{ 1,\ldots ,N \}$ and a set $E\subset [V]^2$, the elements of which are subsets of $V$ with two elements each \cite{Graph}. The elements of $V$ are called {\em vertices}\index{vertex set $V$}, the elements of $E$ {\em edges}\index{edge set $E$}. In the following, we will mainly consider {\em simple} graphs\index{simple graph}. A simple graph contains neither loops (edges connecting a vertex with itself) nor multiple edges. We also consider a generalization of these simple graphs, where each edge $\{a,b\}$ is associated with a weight $\varphi_{ab}$ representing the strength of the respective interaction. Although the concept of a {\em weighted graph}\index{weighted graph} is more general than that of a simple graph, we will use the notion of a graph in the narrower sense of a simple graph, unless we explicitly mention that a certain section is devoted to weighted graphs.
Since in a graph of $|V|=N$ vertices there are in general
\begin{equation}
\tbinom{N}{2}= \tfrac{N(N-1)}{2}
\end{equation}
possible edges, each of which may or may not be contained in the edge set $E$, the number of distinct graphs is $2^{\tbinom{N}{2}}$. Graph theory is mostly interested in problems that are invariant under permutations of the vertices, when these permutations respect the neighborhood relation, i.e., map neighbored vertices onto neighbored vertices. Such permutations are called {\em graph isomorphisms}\index{graph isomorphism}. Two graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ are called {\em isomorphic} if there exists a bijection $f:V_1\mapsto V_2$ such that \begin{equation} \{a,b\} \in E_1 \hspace{1cm} \Longleftrightarrow \hspace{1cm} \{f(a),f(b)\} \in E_2\; .\end{equation}
Note that the number of non-isomorphic graphs still grows exponentially with the number $N$ of vertices \cite{Harary73}: for example, on $N=7$ vertices there are $2^{21}=2\,097\,152$ distinct graphs, of which $1044$ are non-isomorphic.
Vertices $a,b\in V$ that are the endpoints of an edge are referred to as being {\em adjacent}\index{adjacent}. The adjacency relation gives rise to an {\em adjacency matrix}\index{adjacency matrix $\mathbf{\Gamma}$}
$\mathbf{\Gamma}_G=\mathbf{\Gamma}$ associated with a graph. $\mathbf{\Gamma}$ is a symmetric $N\times N$-matrix, with elements
\begin{equation}
\mathbf{\Gamma}_{ab} =
\left\{
\begin{array}{ll}
1,& \text{ if $\{a,b\}\in E$,}\\
0 & \text{otherwise}.
\end{array}
\right.
\end{equation}
In the case of weighted graphs, the adjacency matrix also specifies the weights of the edges, i.e., $\mathbf{\Gamma}_{ab}=\varphi_{ab}$.
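For concreteness, the adjacency matrix of the ring graph with $N=7$ vertices of fig.~\ref{Ring} can be built as follows (a minimal Python sketch; the variable names are ours and not part of the formalism):
\begin{verbatim}
import numpy as np

N = 7
Gamma = np.zeros((N, N), dtype=int)
for a in range(N):
    b = (a + 1) % N                # in a ring, vertex a is adjacent to a+1
    Gamma[a, b] = Gamma[b, a] = 1  # symmetric with zero diagonal: a simple graph
\end{verbatim}
Row $a$ of $\mathbf{\Gamma}$ is then the indicator vector of the set of vertices adjacent to $a$, a reading that we will use repeatedly below.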
We will make repeated use of the {\em neighborhood}\index{neighborhood $N_a$} \begin{equation} N_a:=\left\{ b\in V\,| \, \{a,b\} \in E\right\} \end{equation} of a given vertex $a\in V$. The neighborhood is the set of vertices adjacent to a given vertex. The number $|N_a|$ of neighbors is called the {\em degree}\index{degree of a vertex} of the vertex $a$. A vertex $a\in V$ of degree $|N_a|=0$ will be called an {\em isolated vertex}\index{isolated vertex}.
An $\{a,b\}$-path is an ordered list of vertices $a=a_1,a_2,\ldots,a_{n-1},a_n=b$, such that $a_i$ and $a_{i+1}$ are adjacent for all $i$. A {\em connected graph}\index{connected graph (state)} is a graph that has an $\{a,b\}$-path for any two $a,b\in V$. Otherwise it is referred to as {\em disconnected}\index{disconnected graph (state)} .
When a vertex $a$ is deleted in a graph $G$, together with all edges incident with $a$, one obtains a new graph, denoted by $G\setminus a$. For a subset of vertices $U\subset V$ of a graph $G=(V,E)$ let us denote with $G\setminus U$ the graph that is obtained from $G$ by deleting the set $U$ of vertices and all edges which are incident with an element of $U$. In a mild abuse of notation, we will also write $G\setminus F$ for the graph that results from a deletion of all edges $e\in F$, where $F\subset E\subset [V]^2$ is a set of edges.
For a set of edges $F\subset [V]^2$ we will write $G\cup F:= (V,E \cup F)$ and $G + F := (V, E + F)$, where
\begin{equation}\label{+}
E+F= (E \cup F) \setminus (E \cap
F)
\end{equation}
is the symmetric difference of $E$ and $F$. Note that the symmetric difference corresponds to the addition modulo $2$, i.e., the component-wise XOR, if the sets are considered as binary vectors over the binary field $\mathbb{F}_2$\index{binary field $\mathbb{F}_2$}.
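As a small illustration (with example values of our own choosing), the correspondence between the symmetric difference of edge sets and the component-wise XOR of their indicator vectors can be checked directly in Python:
\begin{verbatim}
E = {(1, 2), (2, 3)}
F = {(2, 3), (3, 4)}
print(E ^ F)        # {(1, 2), (3, 4)}: the symmetric difference E + F

# the same sets as indicator vectors, one entry per candidate edge:
e, f = [1, 1, 0], [0, 1, 1]                  # edges (1,2), (2,3), (3,4)
print([(x + y) % 2 for x, y in zip(e, f)])   # [1, 0, 1]: component-wise XOR
\end{verbatim}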
Similarly, an induced {\em subgraph} $G[A]$\index{induced subgraph $G[A]$} of a graph $G=(V,E)$, where $A\subset V$, is obtained by deleting all vertices (and the incident edges) that are not contained in $A$.
Graphs may be colorable. A proper {\em two-coloring of a graph}\index{coloring}\index{two-colorable graph (state)}\index{bi-partite graph (state)} is a labeling $V\longrightarrow \{1,2\}$, such that all adjacent vertices are associated with a different element from $\{1,2\}$, which can be identified with two colors.
In graph theory these graphs are also called `bi-partite graphs', since the set of vertices can be partitioned into two disjoint sets, often called {\em sinks} or {\em sources}, such that no two vertices within the same set are adjacent. It is a well-known fact in graph theory that a graph is two-colorable if and only if (iff) it does not contain any cycles of odd length.
In the remainder of this article each vertex stands for a two--level quantum system $\mathbf{H}^a\simeq \mathbb{C}^2$ or qubit. The state vector of the single--qubit system $\mathbf{H}^a$ can be written as $|\psi\rangle^a = \alpha |0\rangle^a + \beta |1\rangle^a$ with $|\alpha|^2+|\beta|^2=1$. The vectors $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$ with eigenvalues $+1$ and $-1$, respectively. The matrices $\sigma^a_0=\mathbf{1}_a$, $\sigma^a_1=\sigma_x^a$, $\sigma^a_2=\sigma_y^a$ and $\sigma^a_3=\sigma_z^a$ are the Pauli matrices of this two--level system, where the upper index specifies the Hilbert space on which the operator acts. Note that these operators form an orthogonal basis of Hermitian operators with respect to the scalar product $\langle A,B\rangle := \text{tr} (A^\dagger B)$. Up to the phase factors $\pm 1$ and $\pm i$ they also generate the {\em Pauli group}\index{Pauli group $\mathcal{P}$} $\mathcal{P}:=\langle \{\pm 1, \pm i\}\times \{\sigma_0,\sigma_x,\sigma_y,\sigma_z\} \rangle$. We will frequently use the projectors onto the eigenvectors of the Pauli operators. For example,
\begin{equation}
P^a_{z,\pm} = \frac{1\pm \sigma_z^a}{2}
\end{equation}
denotes the projector onto the eigenvector $|z,\pm \rangle$ of
$\sigma_z^a$ with eigenvalue $\pm 1$ (similarly for
$\sigma_x^a$ and $\sigma_y^a$).
To simplify notation, we use subsets $U\subseteq V$ as an upper index for states, operators and sets, denoting the respective tensor product, e.g.
\begin{equation} \label{PlusState} |+ \rangle^{V} = \bigotimes_{a \in V} |+\rangle^{a} \hspace{0.7cm}\text{or}\hspace{0.7cm} \mathcal{P}^V = \bigotimes_{a \in V} \mathcal{P}^{a} \; ,\end{equation}
where $|+\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle \right)$. The subsets are also used to label those vertices where the operator acts non-trivially, for example
\begin{equation}
\sigma_z^U=\bigotimes_{b \in U} \sigma_z^{b}\; .
\end{equation}
Moreover, we identify sets $U$ and their corresponding {\em binary vectors}\index{binary vector $U$} $U=(U_b)_{b\in V} \simeq (U_1,\ldots, U_{N})$ over the {\em binary field } $\mathbb{F}_2^V$ with the same symbol. Finally, $a$ refers to both the vertex and the corresponding one-element set $\{a\}$ ensuring that $\sigma_x^a \equiv \sigma_x^{\{a\}}$.
These notations allow us to use set and binary operations in the same formula. For example, for $A,B \in \mathcal{P}(V) \cong \mathbb{F}_2^V$ we will write $A\cup B$, $A\cap B$ and $A\setminus B$ ($\bar{A}:= V\setminus A$) for the union, intersection and difference (complement) as well as $A+B$ and $\langle A, B\rangle$ for the addition and the scalar product modulo $2$.
In the multi-partite case one can group the vertices into different partitions and, for example, study the entanglement with respect to these partitions. Here, any tuple $(A_1,...,A_M)$ of disjoint subsets $A_i \subset V$ with $\bigcup^M_{i=1} A_i =V$ will be called a {\em partition}\index{partition} of $V$. We will write
\begin{equation}
(A_1,\ldots,A_M) \leq (B_1,\ldots,B_{M'}),
\end{equation}
if $(A_1,\ldots,A_M)$ is a {\em finer partition}\index{finer partition} than $(B_1,\ldots,B_{M'})$, which means that every $A_i$ is contained in some $B_j$. The latter is then a {\em coarser partition}\index{coarser partition}.
\section{Definitions for graph states}\label{DefOfGS}
With the notations introduced in the previous section we can provide some definitions for graph states. Throughout this article, we mainly consider two alternative descriptions. Most naturally, graph states can be regarded as the result of an interaction of particles initially prepared in some product state. Certainly not all imaginable interaction patterns can be represented reasonably by a simple graph. In sec.~\ref{DefOfGS_Int} we introduce the description of graph states in terms of the interaction pattern and show that such a definition is also meaningful if all particles interact with the same Ising-type interaction but possibly for different interaction times. This description generalizes to so called weighted graph states, which are introduced in sec.~\ref{WeightedGS}. The alternative definition proposed in sec.~\ref{DefOfGS_Stab}, on the other hand, is restricted to the class of states that correspond to a simple graph. Such states can be described efficiently in terms of their stabilizer, which is a subgroup of the Pauli group. We briefly address the question of local unitary equivalence, discuss the relation to stabilizer states and illustrate an alternative representation of the stabilizer formalism in terms of its binary representation\footnote{Although the remainder of the article will not be based on the binary representation.}. We sketch a possible extension of the stabilizer formalism to $d$-level systems and finally summarize further alternative approaches to graph states in sec.~\ref{DefOfGS_Alternative}.
\subsection{Interaction pattern}\label{DefOfGS_Int}
In this subsection we give a careful motivation for the concept of graph states in terms of interaction patterns, concluding with a precise definition given at the end of this subsection.
Let us consider a set of $2$-level systems (qubits) that are labeled by the vertices $V$ of a graph $G=(V,E)$. The qubits are prepared in some initial state vector $|\Psi\rangle$ and then are coupled according to some interaction pattern represented by the graph $G$. For each edge $\{a,b\}\in E$ the qubits of the two adjacent vertices $a$ and $b$ interact according to some (non-local) unitary $U_{ab}=e^{-i\varphi_{ab} H_{ab}}$. Here, $H_{ab}$ denotes the interaction Hamiltonian and $\varphi_{ab}$ represents the coupling strength or (with appropriate physical units) the interaction time. The most general of such setups, in which the qubits can interact according to different $2$-body interactions $H_{ab}$, has to be described by graphs, whose edges carry a labeling that specifies both the different unitaries $U_{ab}$ as well as the ordering in which interactions occur.
Under which conditions can the outcome of this interaction pattern be completely specified by a {\it simple} graph $G$? If the graph is to give a sufficient characterization of a large class of interaction patterns, the following constraints must be imposed:
\begin{itemize}
\item[{\bf (1)}] Since the graph $G$ does not provide any ordering of the edges, all two--particle unitaries $U_{ab}$ involved must commute:
\begin{equation}\label{Constraint1} [ U_{ab},U_{bc}] = 0 \hspace{1cm} \forall a,b,c \in V \; .\end{equation}
\item[{\bf (2)}] Because we deal with {\em undirected} graphs\footnote{In a {\em directed} graph the set of edges $E$ is given by ordered pairs $(a,b)$. The order implies that vertices $a$ and $b$ are connected by a directed edge from $a$ to $b$.}, the unitaries must be symmetric:
\begin{equation}\label{Constraint2} U_{ab}=U_{ba} \hspace{1cm} \forall a,b \in V \; .\end{equation}
\item[{\bf (3)}] If the edges are not further specified by weights, all qubits should interact through the same two--particle unitary $\mathbf{U}$:
\begin{equation}\label{Constraint3} U_{ab}=\mathbf{U}^{\{a,b\}} \hspace{1cm} \forall a,b \in V \; . \end{equation}
\end{itemize}
In the case of qubits, condition {\bf (1)} is already sufficient to restrict the analysis to the case where the qubits interact according to the same {\em Ising interaction}\index{Ising interaction $H^I_{ab}$, $U^I_{ab}$}, e.g. $H^I_{ab}=\sigma_z^a\sigma_z^b$. This statement is reflected in the following proposition.
{\proposition[\bf Standard form for commuting interactions] With an appropriate choice of the local basis in each individual qubit system, any set of commuting two--particle unitaries, i.e., unitaries fulfilling {\bf (1)}, contains only interactions of the form \begin{equation}\label{Ising+LU}\varphi_{ab}\, H_{ab} = \varphi_{ab}\, \sigma_z^a\sigma_z^b + \alpha_a\, \sigma_z^a + \alpha_b\, \sigma_z^b \; .\end{equation}
In other words, any interaction pattern in which the qubits interact according to some two--particle unitaries chosen from a set of commuting interactions, is up to local $z$-rotations\footnote{I.e., $V^a=e^{i\beta_a \sigma_z^a}$ to be performed before or after the interaction pattern.} an Ising interaction pattern
\begin{equation}\label{IsingUnitary} U^I_{ab}(\varphi_{ab}):= e^{-i \,\varphi_{ab} \,\sigma_z^a \sigma_z^b} \; .\end{equation}
}
{\em Proof:}
It suffices to consider condition {\bf (1)} for two different unitaries $U=e^{-iH}$ and $\tilde{U}=e^{-i\tilde H}$ in the two settings of three vertices $a$, $b$ and $c$: \\
(i) $U_{ab}=U^{\{a,b\}}$ and $U_{bc}=\tilde U^{\{b,c\}}$: $[ H^{\{a,b\}},\tilde H^{\{b,c\}}] = 0 $,\\
(ii) $U_{ab}= \tilde U^{\{a,b\}}$ and $U_{bc}= U^{\{b,c\}}$: $[ \tilde H^{\{a,b\}},H^{\{b,c\}}] = 0 $.\\
Note that here $H$ and $\tilde H$ denote the complete Hermitian generator that includes the interaction time or coupling strength $\varphi$. We have also used the fact that $[f(A),f(B)]=0$ iff $[A,B]=0$. Every such Hermitian operator $H$ allows for a real decomposition with respect to the basis of Pauli operators $\{\sigma_0, \sigma_x,\sigma_y,\sigma_z\}$, i.e., $H^{\{a,b\}}= \sum_{ij}\, A_{ij}\, \sigma_i^a \sigma_j^b$. Moreover, a local unitary transformation at a single qubit system translates to an orthogonal transformation of the corresponding operator basis $\sigma_i \mapsto \sigma_i'$, i.e., $A'=OAO^T$ for some orthogonal matrix $O$. By local unitaries we can thus diagonalize one of the Hamiltonians, say $H^{\{a,b\}}=\sum_i A_i \sigma_i^a\sigma_i^b$ and represent the other Hermitian matrix $\tilde H$ with respect to this basis, i.e., $\tilde H^{\{b,c\}}= \sum_{jk}\, B_{jk}\, \sigma_j^b \sigma_k^c$. With these decompositions (i) reads
\begin{equation} \sum_{ijk} \,A_i B_{jk} \, \sigma^a_i \otimes [\sigma_i,\sigma_j]^b \otimes \sigma_k^c = 0 \; ,\end{equation} from which
\begin{equation} \forall i,j=1,2,3 \;\text{with}\; i\neq j \; :\hspace{0.7cm} A_i =0 \vee B_{jk}= 0 \;\; \forall k=0,1,2,3 \; \end{equation}
follows. If $H$ corresponds to a non-trivial two-body interaction, up to a (local) change of basis we can assume that $A_3\neq 0$. Rewriting (ii) with these decompositions one essentially arrives at two different cases: If another component, say $A_2\neq 0$, then all components in $B$ except $B_{00}$ have to vanish, which would imply that $\tilde H$ is a trivial interaction. If instead $A_1=A_2=0$, then at least all components in $B$ apart from $B_{00}$, $B_{03}$, $B_{30}$ and $B_{33}$ have to vanish. Since the component $B_{00}$ corresponds to some negligible global phase factor, we thus have shown that any two commuting interaction Hamiltonians have to be of the form eq.~(\ref{Ising+LU}). Any terms due to $B_{03}$ or $B_{30}$ correspond to local $z$-rotations and all these rotations commute with the Ising interaction terms $H^I_{ab}=\sigma_z^a\sigma_z^b$. Thus the interaction pattern can alternatively be described by an interaction pattern with pure Ising interaction according to the same graph and some local $z$-rotations to be applied before or after the coupling operation.
\hfill\fbox\\\medskip
The remainder of this article is largely devoted to the entanglement properties of states that result from an interaction pattern described by a simple or weighted graph. We can omit the $z$-rotations, since they do not change these entanglement properties. In the following we thus consider an interaction pattern of qubits that are coupled only by pure Ising interactions. Note that the Ising interaction $H^I_{ab}=\sigma_z^a\sigma_z^b$ is already symmetric and hence {\bf (2)} does not yield an additional constraint.
Without condition {\bf (3)} the state, which results from the application of the interaction pattern, is determined by (a) the initial state vector $|\Psi\rangle$ and (b) by a weighted graph. This weighted graph identifies the pairs $\{a,b\}$ of qubits which interact together with the interaction strength $\varphi_{ab}$ (interaction time) of the respective interactions.
The resulting states are {\em weighted graph states} \index{weighted graph state \texttt{"|}$G\rangle$} as they are introduced in sec.~\ref{WeightedGS}. However, in the remainder of this section we will restrict to states that can be described by simple graphs. Now {\bf (3)} implies that we have to fix the interaction strength $\varphi$ in eq.~(\ref{IsingUnitary}). For graph states according to simple graphs we will from now on choose $\varphi=\frac{\pi}{4}$. Together with the choice of
\begin{equation} |\Psi\rangle = |+\rangle^V \end{equation} for the initial state, this ensures that this interaction creates maximal entanglement between two qubits in the state with state vector $|+\rangle$, i.e., $U^I_{ab}|+\rangle|+\rangle$ is {\em maximally entangled}\footnote{A state vector $|\Psi\rangle^{ab}$ is maximally entangled\index{maximally entangled} iff the reduced state at one qubit is maximally mixed, i.e., $\text{tr}_a |\Psi\rangle^{ab}\langle \Psi| = \frac{1}{2}\mathbf{1}_b$.}. In sec.~\ref{DefOfGS_Stab} we will see that this choice also allows for an efficient description of the resulting states in terms of their stabilizer.
To further simplify notations we will not use the Ising interaction in eq.~(\ref{IsingUnitary}) but rather the {\em (controlled) phase gate}\index{phase gate $U_{ab}$}
\begin{equation}\label{PhaseGate} U_{ab}(\varphi_{ab}):= e^{-i \varphi_{ab} H_{ab}} \hspace{0.7cm} \text{with} \hspace{0.7cm} H_{ab}:= |1\rangle^a\langle 1| \otimes |1\rangle^b\langle 1|\end{equation}
as the elementary two-qubit interaction. Note that the corresponding interaction strength now is $\varphi_{ab}=\pi$, because from
\begin{equation} H_{ab}=\frac{\mathbf{1}_a-\sigma_z^a}{2}\,\frac{\mathbf{1}_b-\sigma_z^b}{2}= \frac{1}{4}\left(\mathbf{1}_{ab} - \sigma_z^a -\sigma_z^b + H^I_{ab} \right)\end{equation} we find
\begin{equation} U_{ab}(\varphi_{ab})= e^{-i\frac{\varphi_{ab}}{4}}\, e^{i\frac{\varphi_{ab}}{4} \sigma_z^a}\, e^{i\frac{\varphi_{ab}}{4} \sigma_z^b}\, U^I_{ab}(\tfrac{\varphi_{ab}}{4})\; .\end{equation}
In other words, the phase gate corresponds to the Ising interaction up to some additional $\frac{\pi}{4}$--rotations around the $z$-axes at each qubit. For simple graphs, i.e., $\varphi_{ab}=\pi$, we find that
\begin{equation}\label{CPhase}
U_{ab}:=U_{ab}(\pi)=\;P^a_{z,+}\otimes {\mathbbm{1}}^b + P^a_{z,-}\otimes \sigma_z^b\; .
\end{equation}
This gate corresponds to a controlled $\sigma_z$ on qubits $a$ and $b$, i.e.
\begin{equation}\nonumber
U_{ab} \; \stackrel{\cdot}{=} \; \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{array}
\right)\, .
\end{equation}
The choice $\varphi_{ab}=\pi$ ensures not only that the state vector
\begin{equation}\label{Graph_Bell_State} U_{ab} |+\rangle^a |+\rangle^b = \frac{1}{\sqrt{2}}\, \left( |0\rangle^a |+\rangle^b \, +\, |1\rangle^a |-\rangle^b \right) \end{equation}
is maximally entangled ({\em Bell state}) but also that $U^2_{ab}=\mathbf{1}_{ab}$ or $U_{ab}={U_{ab}}^\dagger$. Consequently, the phase gate $U_{ab}$ {\em creates} as well as {\em deletes} the edge $\{a,b\}$ in a graph $G$ depending on whether the edge is already contained in $G$ or not.
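Both properties, the maximal entanglement of eq.~(\ref{Graph_Bell_State}) and the involution property $U_{ab}^2=\mathbf{1}_{ab}$, can be checked numerically; the following is a minimal Python sketch under our own conventions:
\begin{verbatim}
import numpy as np

CZ = np.diag([1., 1., 1., -1.])             # matrix form of the phase gate U_ab
plus = np.ones(2) / np.sqrt(2)              # |+>

assert np.allclose(CZ @ CZ, np.eye(4))      # U_ab^2 = 1: U_ab also deletes edges
bell = CZ @ np.kron(plus, plus)             # U_ab |+>^a |+>^b
M = bell.reshape(2, 2)                      # coefficient matrix of the state
assert np.allclose(M @ M.T, np.eye(2) / 2)  # reduced state of qubit a is 1/2:
                                            # the state is maximally entangled
\end{verbatim}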
We summarize the above findings in our first definition of graph states:
\begin{wrapfigure}[11]{r}{0.4\textwidth}
\vspace{-0.1cm}\includegraphics[width=0.4\textwidth]{Ring1.eps}
\caption{The preparation procedure to obtain a graph state that corresponds to a ring graph with $5$ vertices.}\label{Ring1}
\end{wrapfigure}
\hspace{2.5cm}{\bf Graph states (I)} \\
{\em Let $G=(V,E)$ be a graph. The {\em graph state}\index{graph state \texttt{"|}$G\rangle$} $|G\rangle$ that corresponds to the graph $G$ is the pure state with state vector
\begin{equation} \label{GS_Preparation} |G\rangle = \prod_{\{a,b\}\in E} U_{ab}\, |+ \rangle^{ V} \; .\end{equation}
We will also refer to the state
vector $|G\rangle$ of the pure state as a graph state.
The {\em preparation procedure}\index{preparation procedure for graph states}\index{graph state preparation} reads:
\begin{itemize}
\item[1.] Prepare the qubits at each vertex in the pure state with state vector $|+\rangle$ as eigenvector of $\sigma_x$ with eigenvalue $+1$.
\item[2.] Apply the phase gate $U_{ab}$ to all vertices $a,b$ that are adjacent in $G$.
\end{itemize}
}
Since $U_{ab}$ is the unitary two-qubit operation on the vertices $a,b$, which adds or removes the edge $\{a,b\}$, the {\em initial state} with state vector $|+\rangle^V$ of the preparation procedure can also be regarded as the graph state that corresponds to the {\em empty graph}\index{empty graph (state)}.
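The preparation procedure of eq.~(\ref{GS_Preparation}) translates directly into code; the following Python sketch (under our own conventions: qubit $a$ corresponds to bit $a$ of the basis-state index, with qubit $0$ the least significant bit) exploits the fact that all phase gates are diagonal, so that the order of the product is irrelevant:
\begin{verbatim}
import numpy as np

def graph_state(N, edges):
    """|G> = prod_{{a,b} in E} U_ab |+>^V as a vector of length 2**N."""
    psi = np.ones(2 ** N) / np.sqrt(2 ** N)        # |+>^V: the empty-graph state
    for (a, b) in edges:
        for idx in range(2 ** N):                  # apply the phase gate U_ab:
            if (idx >> a) & 1 and (idx >> b) & 1:  # phase -1 iff both qubits are 1
                psi[idx] *= -1
    return psi

ring5 = graph_state(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
\end{verbatim}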
\subsection{Stabilizer formalism}\label{DefOfGS_Stab}\index{stabilizer formalism|(}
\begin{wrapfigure}[11]{r}{0.45\textwidth}
\vspace{-0.7cm}\hspace{0.0cm}\includegraphics[width=0.45\textwidth]{Ring3.eps}
\caption{The correlation operators for a graph state that
corresponds to a ring with $5$ vertices.}\label{Ring3}
\end{wrapfigure}
It is often more convenient to work with the stabilizer of a
quantum state (or subspace) than with the state (or subspace)
itself. Quantum information theory uses the stabilizer formalism
in a wide range of applications. Among those, quantum error
correcting codes (stabilizer codes) are a very prominent example
\cite{Gottesman}. Here, the {\em stabilizer}\index{stabilizer
$\mathcal{S}$}\footnote{We refer the reader to ref.~\cite{NielsenBook}
for an introduction to the stabilizer formalism.} $\mathcal{S}$ is
a commutative subgroup of the Pauli group $\mathcal{P}^V$ that
does not contain $-\mathbf{1}_V$ (and thus not $\pm i
\mathbf{1}_V$). Apart from the interaction pattern, graph states
can also be defined uniquely in terms of their stabilizer:
\begin{proposition}{\bf Graph states (II)}\label{GS_Def2}
Let $G=(V,E)$ be a graph. A {\em graph state vector }\index{graph state \texttt{"|}$G\rangle$} $|G\rangle$ is the unique, common eigenvector in $({\mathbb{C}}^2)^{ V}$ to the set of independent commuting observables:
\begin{equation}\label{GS_Stabilizer} \nonumber K_{a} = \sigma_x^a \,
\sigma_z^{N_a} := \sigma_x^{a}\prod_{b \in
N_a}\sigma_z^{b},\end{equation} where the eigenvalues to the {\em correlation
operators}\index{correlation operator $K_a$} $K_{a}$ are $+1$ for
all $a\in V$. The Abelian subgroup $\mathcal{S}$ of the local
Pauli--group $\mathcal{P}^V$ generated by the set $\{K_a\, |\, a
\in V\}$ is called the stabilizer of the graph state.
\end{proposition}
{\em Proof:} The fact that $|G\rangle$ is actually uniquely
defined by its correlation operators follows from the subsequent
Proposition~\ref{Graph state basis}, since the set of eigenstates
to all possible eigenvalues for $K_{a}$ is a basis for
$({\mathbb{C}}^2)^{V}$. Nevertheless, Proposition~\ref{GS_Def2}
also provides an alternative definition for graph states. Hence,
we have to prove that this definition is equivalent to the
definition in the previous section. Note that the graph state for
the empty graph actually is stabilized by the set of independent
Pauli matrices $\{\sigma_x^a \,|\, a \in V\}$. Hence, by induction
over the set of edges $E$ it suffices to show the following: Given
a graph state vector $|G\rangle$ stabilized by $K_a$, the application of
the phase gate $U_{ab}$ in eq.~(\ref{CPhase}) leads to a graph
state vector $|G'\rangle$ with a new stabilizer generated by $K'_a$,
which corresponds to the graph $G'$ that is obtained after the
edge $\{a,b\}$ is added (or removed). This certainly holds for all
vertices $c\in V\setminus \{a,b\}$, since $K_{c}$ commutes with
$U_{ab}$. For the remaining vertex $b$, we find \begin{equation}
U_{ab}\,K_a\,{U_{ab}}^\dagger = U_{ab}\, \left( P^a_{z,-}\, +\, P^a_{z,+}\sigma_z^b \right) \; K_a
= \sigma_z^b\,K_a \; , \end{equation} because $\sigma_x P_{z,\pm} =
P_{z,\mp} \sigma_x $. Due to $U_{ab}= U_{ba}$, we similarly obtain
for $a$
\begin{equation}
U_{ab}K_b{U_{ab}}^\dagger =\sigma_z^a\,K_b\; ,
\end{equation}
so that the transformed stabilizer corresponds indeed to a graph
$G'$, where the edge $\{a,b\}$ is added modulo $2$. \hfill\fbox\\\medskip
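For a small graph, the defining eigenvalue equations can also be verified numerically; the following Python sketch (our own, reusing the function graph_state from the preparation sketch above, with qubit $0$ as the least significant bit) does so for the ring with $5$ vertices:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

N, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # ring of fig. Ring3
psi = graph_state(N, edges)

def K(a):
    """Correlation operator K_a = sigma_x^a prod_{b in N_a} sigma_z^b."""
    Na = {b for e in edges for b in e if a in e} - {a}
    ops = [X if c == a else (Z if c in Na else I2) for c in range(N)]
    return reduce(np.kron, ops[::-1])   # reversed: qubit 0 = least significant

assert all(np.allclose(K(a) @ psi, psi) for a in range(N))  # K_a |G> = +|G>
\end{verbatim}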
The generators $K_a$ of the stabilizer $\mathcal{S}$ have a
straightforward interpretation in terms of {\em correlation
operators}: Consider a graph state vector $|G\rangle$ that is measured
according to the measurement pattern given by
$K_a=\sigma_x^a\sigma_z^{N_a}$, i.e., the qubit at vertex $a$ is
measured in the $x$-direction and the qubits at the vertices $b \in N_a$ in the
$z$-direction. Then $K_a$ provides constraints to the {\em
correlations} between the measurement outcomes $m^a_x=\pm1$ and
$m^b_z=\pm1$, namely \begin{equation}\label{Correlations} m_x^a \prod_{b \in
N_a} m_z^b = 1 \; .\end{equation} Since all elements $\sigma \in \mathcal{S}$
stabilize $|G\rangle$, they give rise to different constraints on
the correlations of the spin statistics for graph states.
That the set of correlation operators has a unique common
eigenstate is most easily seen by considering the {\em graph
state basis}:
{\proposition[{\bf Graph state basis}]\label{Graph state
basis}\index{graph state basis \texttt{"|}$U\rangle_G$} \em Given
a graph state vector $|G\rangle$, the set of states \begin{equation}|W\rangle =
\sigma_z^W |G\rangle \end{equation} is a basis for $({\mathbb{C}}^2)^{V}$. The
states $|W\rangle$ are the eigenstates for the correlation
operators $K_a$ according to different eigenvalues $W_a$ where
$W=(W_1,\ldots,W_N)$, i.e., \begin{equation}\label{GS_Basis} K_a |W\rangle
=(-1)^{W_a} \, |W\rangle \;. \end{equation} The projector onto the graph
state\index{graph state \texttt{"|}$G\rangle$} has a direct
representation in terms of the corresponding stabilizer
$\mathcal{S}$: \begin{equation}\label{GS_Projector} |G\rangle\langle G | =
\frac{1}{2^N}\,\sum_{\sigma \in \mathcal{S}} \sigma\; . \end{equation} }
{\em Proof:} For the verification of eq.~(\ref{GS_Basis}) it
suffices to consider a single $\sigma_z^a$ operator at some vertex
$a$. $\sigma_z^b$ commutes with all correlation operators $K_a$
for $a\neq b$ and anti-commutes with $K_b$. For all $a\in V$, we
obtain \begin{equation} K_a |W\rangle = K_a \prod_{b \in W} \sigma_z^b
|G\rangle = (-1)^{\delta_{a\in W}} \prod_{b\in W} \sigma_z^b
|G\rangle = (-1)^{W_a} \, |W\rangle \; , \end{equation}
where $\delta_{a\in W}=1$ if $a\in W$ and zero otherwise. Thus, any two distinct sets $W,W'\subseteq V$ correspond to eigenvectors $|W\rangle, |W'\rangle$ for the set of generators $\{K_a \, |\, a\in V\}$ but with eigenvalues that differ in at least one position $a\in V$. Hence $\langle W |W' \rangle = \delta_{WW'}$, where $\delta_{WW'}=1$ if $W=W'$ and zero otherwise. Since there are $2^N$ possible sets, the eigenvectors $\left\{|W\rangle\right\}_{W\subseteq V}$ form a basis of $({\mathbb{C}}^2)^{V}$.\\
A similar calculation verifies eq.~(\ref{GS_Projector}): \begin{equation}
\langle W |\, \sum_{\sigma \in \mathcal{S}} \sigma\,|W' \rangle
\;=\; 2^N\, \delta_{W\emptyset} \, \delta_{W'\emptyset}\end{equation} for any
basis vectors $|W\rangle$ and $|W'\rangle$. The normalization
constant $\frac{1}{2^N}$ is due to $\text{tr} (|G\rangle\langle
G|) = 1$ and because the number $|\mathcal{S}|$ of stabilizer
elements is $2^N$. \hfill\fbox\\\medskip
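The projector representation eq.~(\ref{GS_Projector}) can be checked along the same lines (a continuation of the previous sketch; K, psi and N are defined there):
\begin{verbatim}
from itertools import combinations

# the stabilizer consists of all products prod_{a in A} K_a with A subseteq V
S_sum = sum(reduce(np.matmul, [K(a) for a in A], np.eye(2 ** N))
            for r in range(N + 1) for A in combinations(range(N), r))
assert np.allclose(S_sum / 2 ** N, np.outer(psi, psi))  # eq. (GS_Projector)
\end{verbatim}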
In the following we will briefly address local equivalence for
the class of graph states and relate this class to the more
general concept of stabilizer states and codes. We also present an
alternative description of the stabilizer of a graph state in
terms of its binary representation and review a possible
generalization of the stabilizer formalism to $d$-level systems.
\subsubsection{\it \small Stabilizer states and codes}\label{Def_Stab_States}
There exists a natural generalization of the description of graph
states within the stabilizer formalism. Each stabilizer
$\mathcal{S}$, i.e., any commutative subgroup of the Pauli group
$\mathcal{P}^V$ that does not contain $-\mathbf{1}_V$, uniquely
corresponds to some {\em stabilized subspace}\index{stabilizer
$\mathcal{S}$} $\mathbf{H}_\mathcal{S}\subseteq
({\mathbb{C}}^2)^{V}$, which is the largest subspace satisfying
$\mathcal{S}\, \mathbf{H}_\mathcal{S} = \mathbf{H}_\mathcal{S}$.
The minimal number
\begin{equation}\label{GroupRank}
r_\mathcal{S} := \text{rank}(\mathcal{S})
= \min\left\{ n\,|\, \langle\{s_1,...,s_n\}\rangle = \mathcal{S}\, ,\; s_i\in \mathcal{S} \right\} \index{rank of a stabilizer}\leq N \end{equation}
of generators for a stabilizer is a well-defined quantity and is
called the {\em rank} of the stabilizer. Thus, a necessary
requirement for some stabilizer $\mathcal{S}$ to represent a graph
state is that it has rank $N$, or, equivalently, that
$\mathcal{S}$ is generated by $N$ independent stabilizer elements.
More generally, any full rank stabilizer ${\cal S}$ stabilizes
exactly one (up to an overall phase factor) pure state, which is
called a \emph{stabilizer state} and which is in short denoted by
$|{\cal S}\rangle$. This stabilizer state is the pure state whose state vector is the unique common
eigenvector with eigenvalue 1 of all elements of ${\cal S}$, which
is denoted by ${\cal S} |{\cal S}\rangle= |{\cal S}\rangle$. Thus,
every graph state is a stabilizer state; however, the class of
stabilizer states is strictly larger than the class of graph
states.
It is clear that an $N-$qubit stabilizer state vector $|{\cal S}\rangle$
is completely determined by a set of $N$ independent generators
$\{s_a\}_{a\in V}$ of ${\cal S}$. Note that, for computational
reasons, it is often much more efficient to deal with such a set
of independent generators rather than with the
complete stabilizer itself.
However, this leads to an ambiguity in the description of a
stabilizer state, since there are many independent generating sets
for every stabilizer. Therefore, the question frequently arises
whether two given sets of generators $\{s_a\}_{a\in V}$ and
$\{s'_a\}_{a\in V}$ generate the same stabilizer $\mathcal{S}$. In
section \ref{BinaryRepr} we will see an efficient approach to
answer this question.
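Anticipating the binary representation of sec.~\ref{BinaryRepr}, this question can be answered by linear algebra over $\mathbb{F}_2$. The following Python sketch (our own, ignoring the sign factors of the generators) compares the row spaces of two generating sets, each Pauli operator being encoded as a binary $(x|z)$ vector:
\begin{verbatim}
import numpy as np

def f2_row_space(rows):
    """Reduced row echelon form over F_2; canonical for the row space."""
    M = np.array(rows, dtype=int) % 2
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return M[:r]

# generators of the path graph P_3, encoded as (x-part | z-part):
g1 = [[1,0,0, 0,1,0], [0,1,0, 1,0,1], [0,0,1, 0,1,0]]   # K_1, K_2, K_3
g2 = [[1,0,0, 0,1,0], [1,1,0, 1,1,1], [0,0,1, 0,1,0]]   # K_2 replaced by K_1*K_2
print(np.array_equal(f2_row_space(g1), f2_row_space(g2)))  # True: same group
\end{verbatim}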
If a stabilizer $\mathcal{S}$ does not have full rank, i.e., $r<N$, it
stabilizes only a $2^{N-r}$-dimensional subspace $\mathcal{C}$ of
$({\mathbb{C}}^2)^{V}$ \cite{Gottesman,NielsenBook}. In principle,
such a subspace corresponds to an $[N,N-r]$--{\em stabilizer
code}\index{stabilizer code} encoding $N-r$ qubits into $N$ qubits. For a
decent stabilizer code the remaining $r$ degrees of freedom are used to
detect possible errors. The main idea is to arrange the
code\index{quantum error correcting code (QEC)} in such a way
that, under the influence of errors, the complete
$2^{N-r}$-dimensional space containing the encoded information is
mapped onto an orthogonal eigenspace of $\mathcal{S}$. The
coherent information encoded in this subspace can
then be maintained by some error correction procedure. More
precisely, suppose that some error operator $\sigma \in
\mathcal{P}^V$ occurs, i.e., the underlying noise process has a
Kraus representation with $\sigma$ as one of its Kraus operators
\footnote{Note that a restriction in the error considerations to
Pauli errors is legitimate, since error correction capabilities of
a code $\mathcal{C}$ can be determined w.r.t. any basis of
operation elements $E_i$ (see e.g. Theorem 10.1 and 10.2 in
ref.~\cite{NielsenBook}).}. Then if $\sigma \in \mathcal{S}$ the
stabilized subspace $\mathcal{C}$, and thus also any encoded
quantum information, is not affected at all. On the other hand, if
$\sigma \in \mathcal{P}^V\setminus \mathbf{N}(\mathcal{S})$, where
$\mathbf{N}(\mathcal{S})=\{ \sigma \in \mathcal{P}^V \, |\, \sigma
\mathcal{S} \sigma^\dagger \subseteq \mathcal{S} \}$ denotes the
{\it normalizer} of the subgroup $\mathcal{S}$, then $\sigma$
anti-commutes with at least one element of the stabilizer
$\mathcal{S}$ and thus transforms the complete subspace
$\mathcal{C}$ into an orthogonal subspace. By measuring a
generating set of stabilizer elements $s_i$ the corresponding
error thus can be detected. Only if the error $\sigma \in
\mathcal{P}^V$ is an element of the normalizer
$\mathbf{N}(\mathcal{S})$ but not of the stabilizer itself, i.e.
$\sigma \in \mathbf{N}(\mathcal{S})\setminus \mathcal{S}$, then
this transformation remains within the codespace $\mathcal{C}$ and
thus cannot be detected. More generally, for a correction of a
set of possible errors $\{ E_i\}\subseteq \mathcal{P}^V$, i.e., a
noise process with Kraus operators $E_j$, the effect of different
errors has to be distinguishable by the error syndrome $\beta_i$
obtained through measuring the stabilizer generators $s_i$, i.e.
$s_i E_j\mathcal{C}=\beta_i E_j\mathcal{C}$. One finds
\cite{Gottesman,NielsenBook} that the set of errors $\{ E_i\}$ is
correctable if $E_jE_k\notin \mathbf{N}(\mathcal{S})\setminus
\mathcal{S}$ for all $j$ and $k$. If there is a unique error $E_j$
associated with a given error syndrome $\beta_i$, the error can be
corrected by applying $E_j^\dagger$. If, however, two errors $E_j$
and $E_k$ correspond to the same error syndrome $\beta_i$, which
implies $E_j\mathcal{C}=E_k\mathcal{C}$, both errors can be
corrected by applying either of the operators $E_j^\dagger$ and
$E_k^\dagger$, since if $E_j$ occurred but $E_k^\dagger$ is
applied for error correction one nevertheless finds $E_k^\dagger
E_j \mathcal{C} = \mathcal{C}$ by assumption.
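As an elementary illustration of these notions (the standard three-qubit bit-flip repetition code, not specific to graph codes), consider for $N=3$ the rank-$r=2$ stabilizer
\begin{equation}
\mathcal{S}=\langle\, \sigma_z^1\sigma_z^2,\; \sigma_z^2\sigma_z^3 \,\rangle \; ,
\end{equation}
which stabilizes the $2^{N-r}=2$-dimensional code space $\mathcal{C}=\text{span}\{|000\rangle,|111\rangle\}$. The error $\sigma_x^1$ anti-commutes with the first generator and commutes with the second, giving the syndrome $(-1,+1)$, which differs from the syndromes $(-1,-1)$ and $(+1,-1)$ of $\sigma_x^2$ and $\sigma_x^3$; any single bit-flip error can thus be detected and corrected. In contrast, $\sigma_x^1\sigma_x^2\sigma_x^3$ commutes with both generators without being an element of $\mathcal{S}$, i.e., it lies in $\mathbf{N}(\mathcal{S})\setminus\mathcal{S}$, and maps $\mathcal{C}$ onto itself without being detectable.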
Supplementing the $r$ generators of the stabilizer by $N-r$
additional elements $Z_1,\ldots,Z_{N-r} \in\mathcal{P}^V$ to form a
full rank stabilizer corresponds to the choice of a basis of
codeword vectors\footnote{Choose
$|W\rangle=|(W_1,\ldots,W_{N-r})\rangle \in \mathcal{C}$ such that
$Z_i|W\rangle = (-1)^{W_i} |W\rangle$.} in the codespace
$\mathcal{C}$. This basis is frequently called the `logical
computational basis'. As we will discuss in
sec.~\ref{Application_QEC}, graph states, or more generally
stabilizer states, also appear as codeword vectors in stabilizer
codes. Together with similarly defined logical $X$-operators
$X_i$, the logical $Z$-operations $Z_i$ allow for concise
manipulations\footnote{ For details we refer the reader to
ref.~\cite{Gottesman,NielsenBook}.}
of the underlying code space such
as error detection and correction, and the concatenation of codes
in order to improve error correction capabilities.
\index{stabilizer formalism|)}
\subsubsection{\it \small Local Clifford group and LC equivalence}\label{Def_LC}
Each graph state vector $|G\rangle$ corresponds {\em uniquely} to a graph
$G$. In other words, two different graphs $G=(V,E)$ and
$G'=(V,E')$ cannot describe the same graph state: the interaction
picture tells us that $|G\rangle=|G'\rangle$ would yield a
contradiction \begin{eqnarray} |+\rangle^V & = & \prod_{\{a,b\}\in
E'}U_{ab}\, |G'\rangle = \prod_{\{a,b\}\in E'}U_{ab}\, |G\rangle
\\ & =& \prod_{\{a,b\}\in E'}U_{ab}\,\prod_{\{a,b\}\in
E}U_{ab}\,|+\rangle^V = \prod_{\{a,b\}\in E +
E'}U_{ab}\,|+\rangle^V \; .\nonumber \end{eqnarray} Here, $E+E'$ denotes the
symmetric difference eq.~(\ref{+}) of the edge sets, which is by
assumption not empty and thus yields a non-vanishing interaction.
However, graph states of two different graphs might be equal up to
some local unitary (LU) operation. We will call two graphs
$G=(V,E)$ and $G'=(V,E')$ {\em LU-equivalent}\index{local unitary
(LU)}\index{equivalence under!local unitaries (LU)}, if there
exists a local unitary $U\in \mathbf{U}(2)^{V}$ such that
\begin{equation}\label{LU_graphs}
|G'\rangle = U\,|G\rangle.
\end{equation}
Locality here refers to the systems associated with vertices of
$G$ and $G'$. Denoting \begin{equation} \label{Sigma}\Sigma':=
U\mathcal{S}U^{\dagger} = \{UsU^{\dagger}\ |\ s\in
\mathcal{S}\},\end{equation} where $\mathcal{S}$ is the stabilizer of the
state vector $|G\rangle$, one finds that $s'|G'\rangle = |G'\rangle$ for
every $s'\in\Sigma'$. In this sense the group $\Sigma'$ is a
`stabilizing subgroup' of the state vector $|G'\rangle$, being a group of
(local) unitary operators that have $|G'\rangle$ as a fixed point;
however, in general $\Sigma'$ is not equal to the stabilizer of
$|G'\rangle$, since in general $\Sigma'$ is not a subgroup of the
Pauli group\footnote{This issue is closely related to the problem
of local unitary versus local Clifford equivalence of graph
states, which is discussed in sec.~\ref{Local_Equivalence}.}.
In view of this observation, it is interesting to consider the
subclass of those local unitary operators $U$ satisfying
$\mathcal{P}^V = U\, \mathcal{P}^V\, U^\dagger $, meaning that $U$
maps the whole Pauli group $\mathcal{P}^V$ onto itself under
conjugation. The set \begin{equation}\label{Def_CliffU}
\mathcal{C}_1^V:=\{U\in\mathbf{U}(2)^{V}\, |\, U \mathcal{P}^{V}
U^\dagger = \mathcal{P}^{V} \}\index{Local Clifford group
$\mathcal{C}_N$} \end{equation} of such unitaries is a group, the so-called
{\em local Clifford group} (on $N$ qubits). If $|G\rangle$ and
$|G'\rangle$ are graph states such that $U|G\rangle=|G'\rangle$
for some $U\in{\cal C}_1^V$, then the group $\Sigma'$ in
(\ref{Sigma}) is equal to the stabilizer of $|G'\rangle$.
Therefore, the action of local Clifford operations on graph states
can entirely be described within the stabilizer formalism -- and
this is one of the main reasons why the local Clifford group is of
central importance in the context of graph states. In the
following, we will call two graph states $|G\rangle$ and
$|G'\rangle$ {\em LC-equivalent}\index{equivalence under!local
Clifford unitaries (LC)} iff they are related by some local
Clifford unitary $U\in\mathcal{C}_1^V$, i.e., $|G'\rangle = U
|G\rangle$.
The local Clifford group is the $N-$fold tensor product of the
one-qubit Clifford group $\mathcal{C}_1$ with itself, where
$\mathcal{C}_1$ is defined by \begin{equation}
\mathcal{C}_1:=\{U\in\mathbf{U}(2)\, |\, U \mathcal{P} U^\dagger =
\mathcal{P}\}.\end{equation} One can show that, up to a global phase factor,
any one-qubit Clifford operation $U\in\mathcal{C}_1$ is a product
of operators chosen from the set $\{H, S\}$, where
\begin{gather}
H=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix}
\;\; \text{(Hadamard gate)}\index{Hadamard gate $H$} \hspace{1cm}
S=\begin{pmatrix} 1&0\\0&i \end{pmatrix} \; \;\text{(single-qubit
phase gate)}.
\end{gather}
The action of the Clifford group $\mathcal{C}_1$ under conjugation
permutes the Pauli matrices $\sigma_1$, $\sigma_2$ and $\sigma_3$
up to some sign $\pm 1$. This can be shown as follows: First, the
matrices $\pm \sigma_0$ and $\pm i \sigma_0$ are left unchanged
under conjugation. Secondly, the set $\{\pm\sigma_1, \pm\sigma_2,
\pm\sigma_3\}$ has to be mapped onto itself, since
$U\sigma_iU^\dagger$ is Hermitian iff $\sigma_i$ is Hermitian.
Because the conjugation is invertible, the conjugation permutes
the matrices $\sigma_1$, $\sigma_2$ and $\sigma_3$ up to some
sign $\pm1$.
Also, note that it suffices to fix the action of $U$ for two
traceless Pauli matrices, say $\sigma_1$ and $\sigma_2$, since the
action for the other matrices follows from linearity of the
conjugation and the relation $\sigma_3=-i\sigma_1\sigma_2$.
If one
disregards the overall phases of its elements, the one-qubit
Clifford group has finite cardinality. In Tab.~\ref{Tab_LC} we
have itemized all $24$ single-qubit Clifford unitaries,
disregarding such global phases. For each unitary we have also
included a possible decomposition in terms of Pauli operators and
the $\frac{\pi}{4}$-rotations \begin{equation}\label{sqrt_sigma} \sqrt{\pm i
\sigma_j} = e^{\pm i \frac{\pi}{4} \sigma_j} \hspace{0.7cm}
j=1,2,3\; ,\end{equation} that we frequently use throughout this article.
These rotations correspond to the elementary permutations
$\{1,2,3\}\mapsto \{1,3,2\}$, $\{1,2,3\}\mapsto \{3,2,1\}$ and
$\{1,2,3\}\mapsto \{2,1,3\}$ that only permute two indices.
Instead of $H$ and $S$ any two of these elementary permutations
can be used to generate the Clifford group $\mathcal{C}_1$.
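That these rotations indeed generate all $24$ elements can be confirmed by a brute-force closure (a Python sketch of our own; each Clifford unitary is represented by its conjugation action on $(\sigma_1,\sigma_2,\sigma_3)$, which quotients out the global phase):
\begin{verbatim}
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]
# sqrt(i sigma_j) = exp(i pi/4 sigma_j), together with their inverses
gens = [np.cos(np.pi/4) * np.eye(2) + 1j * np.sin(np.pi/4) * p for p in sig]
gens += [g.conj().T for g in gens]

def key(U):
    """Hashable fingerprint of the conjugation action of U."""
    return tuple(((U @ p @ U.conj().T).round(6) + 0.).tobytes() for p in sig)

group = {key(np.eye(2, dtype=complex))}
frontier = [np.eye(2, dtype=complex)]
while frontier:
    U = frontier.pop()
    for g in gens:
        V = g @ U
        if key(V) not in group:
            group.add(key(V))
            frontier.append(V)
print(len(group))   # 24, as itemized in the table below
\end{verbatim}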
\begin{table}
\begin{center}
\begin{minipage}{0.9\textwidth}
\begin{tabular}{ccc}
\begin{minipage}{0.45\textwidth}
\begin{tabular}{|ccc|c|}
\hline
$\sigma_1$ & $\sigma_2$ & $\sigma_3$ & Decomposition\\
\hline \hline
$\sigma_1$ & $\sigma_2$ & $\sigma_3$ & $\sigma_0$\\
$\sigma_1$ & $-\sigma_2$ & $-\sigma_3$ & $\sigma_1$\\
$-\sigma_1$ & $\sigma_2$ & $-\sigma_3$ & $\sigma_2$\\
$-\sigma_1$ & $-\sigma_2$ & $\sigma_3$ & $\sigma_3$\\
\hline
$\sigma_1$ & $-\sigma_3$ & $\sigma_2$ & $\sqrt{ i \sigma_1} $\\
$\sigma_1$ & $\sigma_3$ & $-\sigma_2$ & $\sqrt{- i \sigma_1}$\\
$-\sigma_1$ & $-\sigma_3$ & $-\sigma_2$ & $\sigma_3 \sqrt{ i \sigma_1}$\\
$-\sigma_1$ & $\sigma_3$ & $\sigma_2$ & $\sigma_3 \sqrt{- i \sigma_1}$\\
\hline
$\sigma_3$ & $\sigma_2$ & $-\sigma_1$ & $\sqrt{ i \sigma_2} $\\
$-\sigma_3$ & $\sigma_2$ & $\sigma_1$ & $\sqrt{- i \sigma_2}$\\
$\sigma_3$ & $-\sigma_2$ & $\sigma_1$ & $\sigma_3 \sqrt{ i \sigma_2}$\\
$-\sigma_3$ & $-\sigma_2$ & $-\sigma_1$ & $\sigma_3 \sqrt{- i \sigma_2}$\\
\hline
\end{tabular}
\end{minipage} & \hspace{-0.5cm} &
\begin{minipage}{0.45\textwidth}
\begin{tabular}{|ccc|c|}
\hline
$\sigma_1$ & $\sigma_2$ & $\sigma_3$ & Decomposition\\
\hline \hline
$-\sigma_2$ & $\sigma_1$ & $\sigma_3$ & $\sqrt{ i \sigma_3} $\\
$\sigma_2$ & $-\sigma_1$ & $\sigma_3$ & $\sqrt{- i \sigma_3}$\\
$\sigma_2$ & $\sigma_1$ & $-\sigma_3$ & $\sigma_1 \sqrt{ i \sigma_3}$\\
$-\sigma_2$ & $-\sigma_1$ & $-\sigma_3$ & $\sigma_1 \sqrt{- i \sigma_3}$\\
\hline
$-\sigma_2$ & $-\sigma_3$ & $\sigma_1$ & $\sqrt{ i \sigma_3}\sqrt{ i \sigma_1} $\\
$\sigma_2$ & $-\sigma_3$ & $-\sigma_1$ & $\sqrt{ i \sigma_3}\sqrt{- i \sigma_1}$\\
$-\sigma_2$ & $\sigma_3$ & $-\sigma_1$ & $\sqrt{- i \sigma_3} \sqrt{ i \sigma_1}$\\
$\sigma_2$ & $\sigma_3$ & $\sigma_1$ & $\sqrt{- i \sigma_3} \sqrt{- i \sigma_1}$\\
\hline
$\sigma_3$ & $\sigma_1$ & $\sigma_2$ & $\sqrt{ i \sigma_3}\sqrt{ i \sigma_2} $\\
$-\sigma_3$ & $\sigma_1$ & $-\sigma_2$ & $\sqrt{ i \sigma_3}\sqrt{- i \sigma_2}$\\
$\sigma_3$ & $-\sigma_1$ & $-\sigma_2$ & $\sqrt{- i \sigma_3} \sqrt{ i \sigma_2}$\\
$-\sigma_3$ & $-\sigma_1$ & $\sigma_2$ & $\sqrt{- i \sigma_3} \sqrt{- i \sigma_2}$\\
\hline
\end{tabular}
\end{minipage}
\end{tabular}
\end{minipage}
\end{center}
\caption{All $24$ single-qubit Clifford unitaries and their
decomposition into elementary permutations.}\label{Tab_LC}
\end{table}
An important result in the theory of graph states and stabilizer
states is that any stabilizer state is LC-equivalent to some graph
state. This statement was first proven in refs.~\cite{Schlinge02b} and
\cite{Grassl02} for the more general setup of stabilizer codes
over $d$-level systems. {\proposition[{\bf Stabilizer states}] Any
stabilizer state vector $|\mathcal{S}\rangle$ is LC-equivalent to some
graph state vector $|G\rangle$, i.e., $|\mathcal{S}\rangle = U |G\rangle$
for some LC-unitary $U\in \mathcal{C}_1^{V}$. This unitary can be
calculated efficiently. }
{\em Proof:} A proof for the qubit case in terms of the binary
framework (see sec.~\ref{BinaryRepr}) can be found in
ref.~\cite{Nest04a}. \hfill\fbox\\\medskip
A similar statement holds more generally for all stabilizer
codes: Any stabilizer code is LC-equivalent to some {\em graph
code}.
Thus, graph states can be regarded as standard forms\footnote{In ref.~\cite{Auden05} some normal forms for stabilizer states are suggested that do not rely on graph states, but which also allow for an efficient calculation of various (entanglement) properties.} for
stabilizer states, since many properties, such as entanglement,
are invariant under LC operations. Note that this standard form is
however not unique. A stabilizer state vector $|\mathcal{S}\rangle$ can
be LC-equivalent to several graph states $|G_1\rangle= U_1
|\mathcal{S} \rangle$ and $|G_2\rangle = U_2 |\mathcal{S}
\rangle$, whenever these graph states are LC-equivalent
$|G_1\rangle = U_1 U_2^\dagger |G_2\rangle$. Thus, the study of
local equivalence of stabilizer states reduces to that of local
equivalence of graph states.
Note that in general there are $24^N$ different Clifford unitaries
(up to global phases) that could relate two graph states with $N$
vertices. Therefore, the difficulty of deciding whether two graph
states are LC-equivalent seems to increase exponentially with the
number of parties. However, in sec.~\ref{BinaryRepr} we will
briefly mention a method due to ref.~\cite{Nest04b} that scales only
polynomially with the number of vertices.
Interestingly, the action of local Clifford operations on graph
states can be described in terms of a simple graph transformation
rule, called \emph{local complementation} \cite{Bouchet}: letting
$G=(V, E)$ be a graph and $a\in V$, the local complement of $G$ at
$a$, denoted by $\tau_a(G)$, is obtained by complementing the
subgraph of $G$ induced by the neighborhood $N_a$ of $a$ and
leaving the rest of the graph unchanged: \begin{equation} \tau_a:\, G\mapsto
\tau_a(G):=G+N_a \; .\end{equation} \index{local complementation $\tau_a$}
With this notation the following result can be stated \cite{Glynn02,Nest04a}:
\begin{proposition}[{\bf LC-rule}]\label{loc}\index{LC-rule}
By local complementation of a graph $G$ at some vertex $a\in V$ one obtains an LC-equivalent graph state $|\tau_a(G)\rangle$:
\begin{equation} |\tau_a(G)\rangle = U^\tau_a(G)\,|G\rangle \; ,\end{equation} where
\begin{equation}\label{LU_Rule_U} U^\tau_a(G)= e^{-i\frac{\pi}{4}\sigma_x^a}
e^{i\frac{\pi}{4}\sigma_z^{N_a}}\propto \sqrt{K_a} \end{equation} is a local
Clifford unitary. Furthermore, two graph states $|G\rangle$ and
$|G'\rangle$ are LC-equivalent\index{equivalence under!local
Clifford unitaries (LC)} iff the corresponding graphs are related
by a sequence of local complementations, i.e.
$G'=\tau_{a_1}\circ\ldots\circ\tau_{a_n}(G)$ for some
$a_1,\ldots,a_n\in V$.
\end{proposition}
\begin{wrapfigure}[12]{r}{0.45\textwidth}
\vspace{-1cm}\hspace{0.0cm}{\includegraphics[width=0.45\textwidth]{LURule.eps}}
\caption{\label{fig:LUruleExample1} An example for a successive
application of the LC-rule, which exhibits the whole equivalence
class associated with graph No.\ 1. The rule is successively
applied to the vertex, which is colored red in the figure.}
\end{wrapfigure}
Fig.~\ref{fig:LUruleExample1} depicts an example for such a
successive application of the LC-rule. Starting with the first
graph the complete orbit can be obtained by applying the LC-rule
to the vertices in the preceding graph that appear above the arrow
of the following diagram:
\[ \begin{CD}
\text{No.\ }1 @>3>> \text{No.\ }2 @>2>> \text{No.\ }3 @>3>> \\
\text{No.\ }4 @>1>> \text{No.\ }5 @>3>> \text{No.\ }6 @>1>> \\
\text{No.\ }7 @>3>> \text{No.\ }8 @>4>> \text{No.\ }9 @>1>> \\
\text{No.\ }10 @>2>> \text{No.\ }11
\end{CD} \]
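On the level of adjacency matrices over $\mathbb{F}_2$, local complementation is a simple operation, and for small graphs the entire LC orbit can be enumerated by brute force. The following Python sketch illustrates this (the function names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def local_complement(gamma, a):
    """Adjacency matrix of tau_a(G): complement the subgraph induced
    by the neighborhood N_a of vertex a, leave the rest unchanged."""
    g = gamma.copy()
    n_a = np.flatnonzero(g[a])            # neighborhood of vertex a
    for i in n_a:
        for j in n_a:
            if i != j:
                g[i, j] ^= 1              # toggle edge {i, j} (mod 2)
    return g

def lc_orbit(gamma):
    """Byte-encodings of all adjacency matrices reachable from gamma
    by repeated local complementations (feasible for small graphs)."""
    seen, stack = {gamma.tobytes()}, [gamma]
    while stack:
        g = stack.pop()
        for a in range(len(g)):
            h = local_complement(g, a)
            if h.tobytes() not in seen:
                seen.add(h.tobytes())
                stack.append(h)
    return seen

# Example: the ring on 5 vertices.
ring = np.zeros((5, 5), dtype=np.uint8)
for a in range(5):
    ring[a, (a + 1) % 5] = ring[(a + 1) % 5, a] = 1
print(len(lc_orbit(ring)))                # size of the LC orbit
\end{verbatim}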
{\em Proof of Proposition~\ref{loc}:}
Let $G$ be a graph with correlation operators $K_b$ and
$G'=\tau_a(G)$ the corresponding graph under local
complementation at vertex $a$ with correlation operators $K'_b$.
For $c \in V\setminus N_a$ we find
\begin{equation}
U^\tau_aK_c (U^\tau_a)^\dagger=K_c=K'_c \; .
\end{equation}
For $b \in N_a$, we compute \begin{eqnarray} U^\tau_a\,K_b\,(U^\tau_a)^\dagger
& = & \left(- i\sigma_x^a\right) \; \left( i \sigma_z^{b}\right)
\,\sigma_x^b \; \,\sigma_z^a \sigma_z^{N_b \setminus a }
\nonumber\\ & = & \sigma_x^a\, \sigma_z^{N_a}\; \cdot \;
\sigma_x^b\,\sigma_z^{N_b+N_a} \nonumber\\ & = & K'_a\; \cdot \;
K'_b \, .\nonumber \end{eqnarray} Thus the stabilizer $ U^\tau_a\,
\mathcal{S}\,(U^\tau_a)^\dagger $ is generated by $\{K'_c\}_{c\in
V\setminus N_a} \cup \{K'_a K'_b\}_{b\in N_a}$. By multiplication
of the generators $K'_a K'_b$ with $K'_a$ (since $a\in V\setminus
N_a$) it follows that $ U^\tau_a\, \mathcal{S}\,(U^\tau_a)^\dagger
$ is also generated by $\{K'_a\}_{a\in V}$ and therefore
stabilizes the graph state $|G'\rangle$.
This proves that a sequence of LC-rule applications yields an
LC-equivalent graph state. That the action of any unitary within
the Clifford group on graph states can be decomposed into a
sequence of LC-rule applications is however more involved and we
refer to refs.~\cite{Glynn02,Nest04a} for a proof. \hfill\fbox\\\medskip
\subsubsection{\it \small Clifford group}\label{Def_Cliffordgroup}
Whereas the LC group ${\cal C}_1^V$ is defined to consist of all
\emph{local} unitary operators mapping the Pauli group to itself
under conjugation, one is also often interested in the group of
all unitary operators with this property, i.e., the group \begin{equation}
\mathcal{C}_N:=\{U\in\mathbf{U}(2^N)\, |\, U \mathcal{P}^V
U^\dagger = \mathcal{P}^V\},\end{equation} which is called the \emph{Clifford
group} (on $N$ qubits). By definition, Clifford operations map
stabilizer states to stabilizer states. Up to a global phase
factor any Clifford operation $U$ can be decomposed into a
sequence of ${\cal O}(N^2)$ one- and two-qubit gates
\cite{NielsenBook,Gottesman} in the set $\{H, S, CNOT\}$, where
$H$ and $S$ are defined as before and
\begin{gather}
\text{CNOT} = \begin{pmatrix} 1&0&0&0\\0&1&0&0 \\0&0&0&1\\0&0&1&0
\end{pmatrix} \; \;\text{(controlled-NOT gate)}\; .
\end{gather}
For example, the controlled phase gate $U_{ab}$\index{controlled
NOT gate (CNOT)}\index{phase gate $U_{ab}$} can be decomposed into
$U_{ab}=H^b\text{CNOT}_{ab} H^b$.
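This decomposition is easily checked numerically; a minimal NumPy sketch of the $4\times 4$ matrix identity (with qubit $a$ as the first tensor factor):
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])   # control a, target b
U_ab = np.diag([1, 1, 1, -1])                   # controlled phase gate

H_b = np.kron(np.eye(2), H)                     # Hadamard on qubit b only
assert np.allclose(H_b @ CNOT @ H_b, U_ab)      # U_ab = H^b CNOT_ab H^b
\end{verbatim}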
\subsubsection{\it \small Binary representation}\label{BinaryRepr}\index{binary representation|(}
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-0.0cm}
\fbox{
$\left(\mathbf{X}|\mathbf{Z}\right) =\left(
\begin{array}{ccccc|ccccc}
1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0
\end{array} \right)
$ } \caption{The generator matrix for a graph state that
corresponds to a ring with $5$ vertices.}\label{Ring4}
\end{wrapfigure}
We now briefly review an alternative representation of the
stabilizer formalism in terms of its binary representation
\cite{Gottesman,Nest04a}. This description is frequently used in
the literature, since it allows one to treat the properties of the
stabilizer in terms of a symplectic subspace of the vector space
$\mathbb{F}_2^{2N}$.
Any element $u$ of the Pauli group $\mathcal{P}^{V}$ can be
represented uniquely, {\em up to some phase factor}, by a vector
$\mathbf{U}=(\mathbf{U}_x,\mathbf{U}_z)\in \mathbb{F}_2^{2N}$,
where $\mathbf{U}_x,\mathbf{U}_z\in\mathbb{F}_2^N$: \begin{equation}
u=\sigma_x^{\mathbf{U}_x}\sigma_z^{\mathbf{U}_z}\equiv \prod_{a
\in V} \sigma_x^{U_x^a} \, \prod_{a \in V} \sigma_z^{U_z^a}\; ,\end{equation}
where $\mathbf{U}_x^a,\mathbf{U}_z^a\in \mathbb{F}_2$ for every
$a\in V$. At each qubit we have used the following encoding of the
Pauli matrices as pairs of bits:
\begin{equation} \sigma_0 \mapsto (0|0),\hspace{0.7cm} \sigma_x \mapsto (1|0),\hspace{0.7cm} \sigma_y \mapsto (1|1),\hspace{0.7cm} \sigma_z \mapsto (0|1) \; . \end{equation}
For example, the correlation operator $K_1$ of the ring in
fig.~\ref{Ring3} has the binary representation \begin{equation} K_1=\sigma_x^1
\sigma_z^2 \sigma_z^5 \longmapsto
(10000|01001)\in\mathbb{F}_2^{10}\; .\end{equation}
It is important to note that the binary representation captures
the features of a Pauli operator only up to a phase factor.
The binary representation has the following two important
properties: letting $u,w,x \in \mathcal{P}^{V}$ with corresponding
binary vectors $\mathbf{U},\mathbf{W},\mathbf{X} \in
\mathbb{F}_2^{2N}$, one finds that \begin{eqnarray}
\text{(i)} & u\, w \sim x & \ \longleftrightarrow \hspace{0.5cm} \mathbf{U} + \mathbf{W}\, =\, \mathbf{X} \hspace{0.5cm}\text{(mod $2$)}\; , \nonumber \\
\text{(ii)} & [u,w]=0 & \ \longleftrightarrow \hspace{0.5cm}
\mathbf{U}^T\, \mathbf{P}\, \mathbf{W}\, = \,\mathbf{0}
\hspace{0.5cm} \text{(mod $2$)}\; ,\end{eqnarray} where $\sim$ denotes
equality up to a global phase factor and where the $2N\times 2N$
matrix \begin{equation}
\mathbf{P} = \left(\begin{array}{c|c} \mathbf{0} & \mathbf{1} \\
\hline \mathbf{1}& \mathbf{0}
\end{array}\right) \end{equation} defines a \emph{symplectic inner product} on
the binary space $\mathbb{F}_2^{2N}$. Property (i) shows that the
encoding $u\in \mathcal{P}^{V}\mapsto
\mathbf{U}\in\mathbb{F}_2^{2N}$ is a homomorphism of groups. Note
that the multiplicative structure of the group $\mathcal{P}^{V}$
is mapped to the additive structure of $\mathbb{F}_2^{2N}$, where
addition has to be performed modulo $2$. Property (ii) shows that
two Pauli operators commute if and only if the corresponding
binary vectors are orthogonal with respect to the symplectic inner
product.
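Properties (i) and (ii) translate directly into a few lines of code. The following Python sketch (the helper names are ours) encodes Pauli words as binary vectors and tests commutation via the symplectic inner product; it reproduces, e.g., the binary representation of $K_1$ for the ring given above:
\begin{verbatim}
import numpy as np

PAULI_BITS = {'I': (0, 0), 'X': (1, 0), 'Y': (1, 1), 'Z': (0, 1)}

def to_binary(word):
    """Encode a Pauli word, e.g. 'XZIIZ', as (U_x|U_z) in F_2^{2N}."""
    ux, uz = zip(*(PAULI_BITS[p] for p in word))
    return np.array(ux + uz, dtype=np.uint8)

def commute(u, w):
    """Property (ii): u, w commute iff U^T P W = 0 (mod 2), i.e.
    U_x . W_z + U_z . W_x = 0 (mod 2)."""
    n = len(u) // 2
    return int(u[:n] @ w[n:] + u[n:] @ w[:n]) % 2 == 0

k1 = to_binary('XZIIZ')      # K_1 = sigma_x^1 sigma_z^2 sigma_z^5
k2 = to_binary('ZXZII')      # K_2 of the same ring
print(k1)                    # [1 0 0 0 0 0 1 0 0 1] = (10000|01001)
print(commute(k1, k2))       # True: correlation operators commute
\end{verbatim}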
It follows from (i) and (ii) that, within the binary
representation, the stabilizer $\mathcal{S}$ of any stabilizer
state $|\mathcal{S}\rangle$ on $N$ qubits is an
\emph{$N$--dimensional, self-dual linear
subspace}\index{self-orthogonal subspace} $\mathbf{S}$ of
$\mathbb{F}_2^{2N}$. By self-duality it is meant that
\begin{itemize}
\item $\mathbf{U}^T \mathbf{P} \mathbf{V} = 0$ for every
$\mathbf{U}, \mathbf{V}\in \mathbf{S}$, and \item if
$\mathbf{X}\in \mathbb{F}_2^{2N}$ and
$\mathbf{X}^T\mathbf{P}\mathbf{U}=0$ for every
$\mathbf{U}\in\mathbf{S}$, then $\mathbf{X}\in \mathbf{S}$.
\end{itemize}
The subspace $\mathbf{S}$ is usually presented in terms of a {\em
generator matrix} $\left(\mathbf{X}|\mathbf{Z} \right)$ (where
$\mathbf{X}$ and $\mathbf{Z}$ are $N\times N$ matrices), which is
a full rank $N\times 2N$ matrix, the rows of which form a basis of
$\mathbf{S}$; a generator matrix is obtained by assembling the
binary representations $\{\mathbf{S}_a^T\}_{a\in V}$ of a set of
independent stabilizer generators $\{s_a\}_{a\in V}$ as the rows
of an $N\times 2N$ matrix. Note that any generator matrix
$\left(\mathbf{X}|\mathbf{Z} \right)$ of a self-dual subspace
$\mathbf{S}$ satisfies \begin{equation} \left(\mathbf{X}|\mathbf{Z} \right)
\mathbf{P}\left(\mathbf{X}|\mathbf{Z} \right)^T = \mathbf{0}\; \end{equation}
from the self-duality of $\mathbf{S}$.
The generator matrix for a {\em graph state} $|G\rangle$ has the
standard form \begin{equation} \nonumber \left(\mathbf{X}|\mathbf{Z}
\right)=\left(\mathbf{1}|\mathbf{\mathbf{\Gamma}} \right) \; ,\end{equation}
where $\mathbf{\mathbf{\Gamma}}$ is the {\em adjacency matrix} of
the graph $G$. Fig.~\ref{Ring4} displays the generator matrix for
the ring on five qubits (see also fig.~\ref{Ring3}).
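As an illustration, the following sketch builds the standard-form generator matrix $(\mathbf{1}|\mathbf{\Gamma})$ for the five-vertex ring of fig.~\ref{Ring4} and checks the self-duality condition stated above; for the standard form this condition simply reduces to the symmetry of the adjacency matrix, since $(\mathbf{1}|\mathbf{\Gamma})\mathbf{P}(\mathbf{1}|\mathbf{\Gamma})^T = \mathbf{\Gamma}+\mathbf{\Gamma}^T$:
\begin{verbatim}
import numpy as np

N = 5
Gamma = np.zeros((N, N), dtype=np.uint8)   # adjacency matrix of the ring
for a in range(N):
    Gamma[a, (a + 1) % N] = Gamma[(a + 1) % N, a] = 1

XZ = np.hstack([np.eye(N, dtype=np.uint8), Gamma])   # (1|Gamma)

zero, one = np.zeros((N, N), np.uint8), np.eye(N, dtype=np.uint8)
P = np.block([[zero, one], [one, zero]])             # symplectic form

assert np.all((XZ @ P @ XZ.T) % 2 == 0)              # self-duality
\end{verbatim}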
Choosing a different set of generators in $\mathcal{S}$
corresponds to a transformation of the generator matrix of the
form \begin{equation}\label{GenMatrix_Trafo} \left(\mathbf{X}|\mathbf{Z}
\right)\mapsto \left(\mathbf{X'}|\mathbf{Z'} \right) =
\mathbf{R}\left(\mathbf{X}|\mathbf{Z} \right),\end{equation} where
$\mathbf{R}$ is some $\mathbb{F}_2$--invertible $N\times
N$-matrix. From the definition of self-duality, two generator
matrices $\left(\mathbf{X}|\mathbf{Z} \right)$ and
$\left(\mathbf{X'}|\mathbf{Z'} \right) $ correspond to the same
stabilizer iff \begin{equation}\label{Equivalence_of _stab_bin}
\left(\mathbf{X'}|\mathbf{Z'} \right)
\mathbf{P}\left(\mathbf{X}|\mathbf{Z} \right)^T = \mathbf{0}\;
.\end{equation}
In section \ref{Def_Stab_States} we encountered the problem of
recognizing whether two sets of commuting Pauli operators are
generating sets of the same stabilizer. This issue has a simple
solution within the binary representation as follows. First we
must test eq.~(\ref{Equivalence_of _stab_bin}) for the
corresponding generator matrices. In addition we have to compute
the transformation matrix $\mathbf{R}=(R_{ab})$ in
eq.~(\ref{GenMatrix_Trafo}), which can be achieved by Gaussian
elimination over $\mathbb{F}_2$. Finally, we ensure that, at the
level of the actual stabilizer rather than its binary
representation, $\{s_a\}_{a\in V}$ is indeed transformed into
$\{s'_a\}_{a\in V}$, i.e., whether \begin{equation} s_a' \;=\; \prod_{b \in V}
\, (s_b)^{R_{ab}} \hspace{0.5cm} \forall a \in V \; .\end{equation}
The action of (local) Clifford operations on stabilizer states
also has an elegant translation in terms of the binary stabilizer
framework. Let $U \in\mathcal{C}_N$ be an arbitrary (possibly
non-local) Clifford operation and let $f_U:\mathbb{F}_2^{2N}\to
\mathbb{F}_2^{2N}$ be the unique function such that
$f_U(\mathbf{X})$ is the binary representation of $UxU^{\dagger}$
when $x\in{\cal P}^V$ with binary representation
$\mathbf{X}\in\mathbb{F}_2^{2N}$. First, it follows from the
property \begin{equation} Uxx'U^{\dagger}= (UxU^{\dagger})(Ux'U^{\dagger})\end{equation}
for every $x, x'\in{\cal P}^V$,
that \begin{equation} f_U(\mathbf{X}+\mathbf{X'})=
f_U(\mathbf{X})+f_U(\mathbf{X'})\end{equation} for every $\mathbf{X},
\mathbf{X'}\in\mathbb{F}_2^{2N}$. In other words, $f_U$ is a
linear transformation of $\mathbb{F}_2^{2N}$ and we write
$f_U(\mathbf{X})= \mathbf{Q} \mathbf{X}$, for some (nonsingular)
$2N\times 2N$ matrix $\mathbf{Q}$ over $\mathbb{F}_2$. Secondly, the
property $[UxU^{\dagger},Ux'U^{\dagger}] = [x, x']$ implies that
\begin{equation} \mathbf{X}^T\mathbf{Q}^T\mathbf{P}\mathbf{Q}\mathbf{X'}=
\mathbf{X}^T\mathbf{P}\mathbf{X'}\end{equation} for every $\mathbf{X},
\mathbf{X'}\in\mathbb{F}_2^{2N}$, showing that $\mathbf{Q}$ is a
\emph{symplectic transformation}, i.e., $
\mathbf{Q}^T\mathbf{P}\mathbf{Q} = \mathbf{P}.$ One can also prove
the reverse statement, i.e., that every symplectic transformation
can be realized as a Clifford operation.
It follows that conjugation of the stabilizer $\mathcal{S}'= U
\mathcal{S}U^\dagger$ corresponds to the linear transformation \begin{equation}
\left(\mathbf{X}|\mathbf{Z} \right) \mapsto
\left(\mathbf{X}|\mathbf{Z} \right) \mathbf{Q}^T \; .\end{equation}
\index{equivalence under!local Clifford unitaries (LC)} Letting
${\cal S}$, ${\cal S}'$ be full rank stabilizers on $N$ qubits
with generator matrices $\left(\mathbf{X}|\mathbf{Z} \right)$,
$\left(\mathbf{X}'|\mathbf{Z}' \right)$, respectively, it is
straightforward to prove the following chain of equivalent
statements: \begin{eqnarray} & & \mathcal{S}'= U \mathcal{S}U^\dagger
\hspace{0.3cm}\text{for some}
\hspace{0.2cm} U \in \mathcal{C}_N \mbox{ with corresponding symplectic matrix } \mathbf{Q}\hspace{0.3cm} \nonumber \\
& \Leftrightarrow & \left(\mathbf{X'}|\mathbf{Z'} \right) =
\mathbf{R}\left(\mathbf{X}|\mathbf{Z} \right) \mathbf{Q}^T
\hspace{0.3cm} \text{for some invertible}\hspace{0.1cm}
\mathbf{R} \\
& \Leftrightarrow & \left(\mathbf{X'}|\mathbf{Z'} \right)
\mathbf{P}\mathbf{Q} \left(\mathbf{X}|\mathbf{Z} \right)^T =
\mathbf{0} \hspace{0.3cm} \label{LC_Equiv_bin}\; .
\end{eqnarray}
In the special case where $U$ is a local Clifford operation, i.e.,
$U\in{\cal C}_1^V$, the corresponding symplectic matrix
$\mathbf{Q}$ has the following particular structure: \begin{equation}\mathbf{Q}
= \left[ \begin{array}{cc} \mathbf{A}&\mathbf{B}\\
\mathbf{C}&\mathbf{D}\end{array}\right],\end{equation} where $\mathbf{A},
\mathbf{B}, \mathbf{C}, \mathbf{D}$ are diagonal $N\times N$
matrices. The property $ \mathbf{Q}^T\mathbf{P}\mathbf{Q} =
\mathbf{P}$ is then equivalent to
$\mathbf{A}\mathbf{D}+\mathbf{B}\mathbf{C}=\mathbf{I}$. This is in
turn equivalent to stating that the $N$ $2\times 2$ matrices
\begin{equation}\mathbf{Q}_a = \left[ \begin{array}{cc} \mathbf{A}_{aa}&\mathbf{B}_{aa}\\
\mathbf{C}_{aa}&\mathbf{D}_{aa}\end{array}\right]\end{equation} are
nonsingular (over $\mathbb{F}_2$) for every $a\in V$. Note that
the matrices $\mathbf{Q}_a$ correspond to the one-qubit tensor
factors of $U$ and, up to a simultaneous permutation of rows and
columns, the matrix $\mathbf{Q}$ is equal to $ \mathbf{Q}_1 \oplus
\ldots \oplus \mathbf{Q}_N$.
Now, suppose that $G$ and $G'$ are two graphs with adjacency
matrices $\mathbf{\Gamma}$ and $\mathbf{\Gamma}'$, respectively.
Then, from eq.~(\ref{LC_Equiv_bin}), the graph states $|G\rangle$
and $|G'\rangle$ are LC-equivalent iff there exist $N\times N$
diagonal matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}$
satisfying $\mathbf{A}\mathbf{D}+\mathbf{B}\mathbf{C}=\mathbf{I}$,
such that
\begin{equation}\label{LC_linear_system}\mathbf{\Gamma}'\mathbf{B}\mathbf{\Gamma}
+ \mathbf{D}\mathbf{\Gamma} + \mathbf{\Gamma'}\mathbf{A} +
\mathbf{C}= \mathbf{0}.\end{equation}
Thus, in order to check whether $|G\rangle$ and $|G'\rangle$ are LC-equivalent, one has to decide
whether the linear system of equations (\ref{LC_linear_system}),
together with the additional quadratic constraints
$\mathbf{A}\mathbf{D}+\mathbf{B}\mathbf{C}=\mathbf{I}$, has a
solution. This approach leads to an efficient algorithm to
recognize LC-equivalence of graph states \cite{Nest04a, Bouchet}
as follows. First, note that the set ${\cal V}$ of solutions
$(\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D})$ to the linear
equations (\ref{LC_linear_system}), disregarding the
constraints, is a linear vector space. A basis $B=\{b_1,\dots,
b_d\}$ of ${\cal V}$ can be calculated efficiently in ${\cal O
}(N^4)$ time by standard Gaussian elimination over $\mathbb{F}_2$. Then
we can search the space ${\cal V}$ for a vector which satisfies
the constraints. As (\ref{LC_linear_system}) is for large $N$ a
highly overdetermined system of equations, the space ${\cal V}$ is
typically low-dimensional. Therefore, in the majority of cases
this method gives a quick response. Nevertheless, in general one
cannot exclude that the dimension of ${\cal V}$ is of order ${\cal
O}(N)$ and therefore the overall complexity of this approach is
non-polynomial. However, it was shown in ref.~\cite{Bouchet} that it is
sufficient to enumerate the subset \begin{equation}{\cal V'}:= \left\{ b +b' \
|\ b, b'\in B \right\}\subseteq {\cal V}\end{equation}
in order to find a solution which
satisfies the constraints, if such a solution exists, where one
observes that $|{\cal V}'| = {\cal O}(N^2)$. This leads to a
polynomial time algorithm to detect LC-equivalence of graph
states. The overall complexity of the algorithm is ${\cal
O}(N^4)$. We note, however, that it is to date not known whether it is
possible to compute the LC orbit of an arbitrary graph state
efficiently.
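A compact (and entirely unoptimized) version of this procedure is sketched below in Python; the helper names are ours. It assembles the $N^2\times 4N$ coefficient matrix of eq.~(\ref{LC_linear_system}) in the $4N$ diagonal entries of $\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}$, computes a basis of the solution space by Gaussian elimination over $\mathbb{F}_2$, and then searches single basis vectors as well as pairwise sums for one satisfying the quadratic constraints:
\begin{verbatim}
import numpy as np

def nullspace_gf2(M):
    """Basis of the nullspace of M over F_2."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        hits = np.flatnonzero(M[r:, c])
        if hits.size == 0:
            continue
        M[[r, r + hits[0]]] = M[[r + hits[0], r]]    # swap pivot row up
        for rr in range(rows):
            if rr != r and M[rr, c]:
                M[rr] ^= M[r]                        # eliminate column c
        pivots.append(c)
        r += 1
        if r == rows:
            break
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = np.zeros(cols, dtype=np.uint8)
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = M[i, f]
        basis.append(v)
    return basis

def lc_equivalent(G, Gp):
    """Decide LC-equivalence of the graph states |G> and |G'>."""
    N = len(G)
    M = np.zeros((N * N, 4 * N), dtype=np.uint8)
    for i in range(N):
        for j in range(N):
            row = i * N + j
            M[row, j] ^= Gp[i, j]                      # Gamma' A
            for k in range(N):
                M[row, N + k] ^= Gp[i, k] & G[k, j]    # Gamma' B Gamma
            if i == j:
                M[row, 2 * N + i] ^= 1                 # C
            M[row, 3 * N + i] ^= G[i, j]               # D Gamma
    basis = nullspace_gf2(M)
    cands = basis + [(basis[i] + basis[j]) % 2
                     for i in range(len(basis)) for j in range(i)]
    for v in cands:                                    # AD + BC = I ?
        a, b, c, d = v[:N], v[N:2*N], v[2*N:3*N], v[3*N:]
        if np.all((a * d + b * c) % 2 == 1):
            return True
    return False
\end{verbatim}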
\index{binary representation|)}
\subsubsection{\it \small Generalizations to $d$-level systems}\index{qudits|see{d-level system}}\index{d-level system}\label{GS_dlevel}
The stabilizer formalism can be generalized to $d$-level systems, where
$d$ is a prime power, see
refs.~\cite{Schlinge02a,schlinge04,ZZXS03,Hostens04,David}.
In such generalizations to
systems where the Hilbert spaces of the constituents
are ${\mathbbm{C}}^d$, a lot
of the intuition developed for binary systems carries over.
We will here
sketch only the situation in which
the individual constituents have prime
dimension. Ironically, in this more general
framework, the binary stabilizer states even constitute a
special case, which has to be
treated slightly differently from other prime dimensions. Indeed,
the language is in many respects reminiscent of
the setting of `continuous-variable systems' with
canonical coordinates \cite{Gaussian}.
The familiar real phase space in the latter setup is then
replaced by a discrete phase space. Also
the Weyl operators, so familiar
in the quantum optical context, find their equivalent
in the discrete setting.
At the foundation of this construction is the
unique {\it finite field} ${\mathbbm{F}}_d$ of prime
order $d$. All arithmetic
operations are defined modulo $d$.
We may label a basis of ${\mathbbm{C}}^d$ as usual as
$|0\rangle,...,|d-1\rangle$.
The {\it shift operators} and the {\it clock} (or {\it phase} or
{\it multiplier}) {\it operators}
are then defined as
\begin{eqnarray}
U_{x} |m\rangle &:=& | m + x\rangle,\\
V_{p} |m\rangle &:=& e^{i \frac{2\pi}{d} p m }
|m\rangle ,
\end{eqnarray}
for $x,p\in {\mathbbm{F}}_d$. The number
$\omega:= e^{i \frac{2\pi}{d}}$ is a primitive $d$-th root
of unity.
Let us assume that $d$ is prime but
exclude the case $d=2$.
Using the above shift and clock
operators, one can associate with each point
$(x,p)\in{\mathbbm{F}}_d^2$ in
phase space a
{\it Weyl operator} according to
\begin{equation}
w(x,p ) :=
V_p U_x.
\end{equation}
These Weyl operators correspond to translations
in phase space. In analogy to the
previous considerations, we may define the
elementary operators
\begin{equation}
X= U_1,\,\,\, Z=V_1,
\end{equation}
satisfying $X^d={\mathbf{1}}$ and $Z^d={\mathbf{1}}$.
The operators
\begin{equation}
\omega^{v}\omega^{-2^{-1} px}
w(x,p)
\end{equation}
for $(x,p,v)\in {\mathbbm{F}}_d^3$
form a representation of the {\it Heisenberg group}
with its associated group composition law.
The Weyl operators in this sense
can be conceived as generalized
Pauli operators familiar from the binary setting.
They satisfy the {\it Weyl commutation
relations}
\begin{equation}
w(x,p) w(x',p') =
\omega^{ - p' x}w(x+x', p+p'),
\end{equation}
as can be readily verified using the above
definitions,
so the product of two Weyl operators
is up to a number again a Weyl operator.
This is the discrete analog of the familiar
canonical commutation relations for position and
momentum for Weyl operators in continuous
phase space, which takes essentially
the same form. It follows that
two Weyl operators $w(x,p)$ and $w(x',p')$
defined in this way commute if and only if
\begin{equation}
[(x,p),(x',p')] =0,
\end{equation}
so if and only if the {\it standard symplectic
scalar product} vanishes, which is defined as
\begin{equation}
[(x,p), (x',p')] := (x,p)\left(
\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}
\right)
(x',p')^T = p x' - x p'
\end{equation}
for $(x,p)\in{\mathbbm{F}}_d^2$.
This is an antisymmetric bi-linear form in that
\begin{equation}
[(x,p), (x',p')] = - [(x',p'), (x,p)].
\end{equation}
Hence, the discrete phase space is a
symplectic space over a finite field.
In turn,
the linear combinations of all Weyl
operators form an algebra, the full
observable algebra of the system.
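For a small prime $d$ the shift and clock operators are easily written down as $d\times d$ matrices, and the Weyl commutation relations can be verified directly; a minimal Python sketch:
\begin{verbatim}
import numpy as np

d = 5                                        # prime dimension, d != 2
omega = np.exp(2j * np.pi / d)

U = np.roll(np.eye(d), 1, axis=0)            # shift:  U|m> = |m+1>
V = np.diag(omega ** np.arange(d))           # clock:  V|m> = w^m |m>

def w(x, p):
    """Weyl operator w(x,p) = V^p U^x."""
    return np.linalg.matrix_power(V, p) @ np.linalg.matrix_power(U, x)

# Weyl relation: w(x,p) w(x',p') = w^{-p'x} w(x+x', p+p')
x, p, xp, pp = 2, 3, 4, 1
lhs = w(x, p) @ w(xp, pp)
rhs = omega ** (-pp * x) * w((x + xp) % d, (p + pp) % d)
assert np.allclose(lhs, rhs)
\end{verbatim}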
The composition of $N$ constituents of a composite system, each of dimension $d$,
can be incorporated in a natural fashion. We now encounter coordinates in phase space
$(x_1,p_1,...,x_N, p_N)$ with $\mathbf{x}\in {\mathbbm{F}}^N_d$ and $\mathbf{p} \in {\mathbbm{F}}^N_d$.
The above symplectic scalar product is then replaced by the one defined as
\begin{equation}
[(\mathbf{x},\mathbf{p}),(\mathbf{x}',\mathbf{p}')] = (\mathbf{x},\mathbf{p}) \cdot {\pmb \sigma}
\cdot (\mathbf{x}',\mathbf{p}')^T,
\end{equation}
with
\begin{equation}
{\pmb\sigma} := \bigoplus_{j=1}^N
\left(
\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}
\right).
\end{equation}
Similarly, the Weyl operators become
\begin{equation}
\mathbf{w}(\mathbf{x},\mathbf{p}) = w(x_1,p_1)
\otimes ...\otimes w(x_N,p_N),
\end{equation}
and let us set
\begin{equation}
W(\mathbf{x},\mathbf{p}) := \omega^{-\frac{\mathbf{p}\cdot \mathbf{x}}{2} } \mathbf{w}(\mathbf{x},\mathbf{p}).
\end{equation}
We now turn to the actual definition of stabilizer codes and
stabilizer states \cite{Schlinge02a,schlinge04,David}.
At the foundation here is the
definition of an isotropic
subspace. An {\it isotropic space}
$\mathbf{S} \subset {\mathbbm{F}}_d^{2N}$
is a subspace on which the symplectic
scalar product vanishes
for all pairs of its vectors, so where
\begin{equation}
[(\mathbf{x},\mathbf{p}),(\mathbf{x}',\mathbf{p}')]=0
\end{equation}
for all $(\mathbf{x},\mathbf{p}),(\mathbf{x}',\mathbf{p}')\in \mathbf{S}$.
Now, let $\chi$ be a character, i.e., a map from $\mathbf{S}$ into the circle
group (for example, the map sending all elements of $\mathbf{S}$ to
$1$).
Let us denote the dimension of
$\mathbf{S}$ by $k$. Then, the projector onto the stabilizer
code associated with the isotropic subspace $\mathbf{S}$ and
the character $\chi$ can be written as
\begin{equation}
P = \frac{1}{d^k} \sum_{(\mathbf{x},\mathbf{p})\in \mathbf{S}} \chi^*(\mathbf{x},\mathbf{p})
W(\mathbf{x},\mathbf{p}).
\end{equation}
In particular, the state vectors $|\psi\rangle$
from this stabilizer code are exactly those
that satisfy
\begin{equation}
\chi^*(\mathbf{x},\mathbf{p})
W(\mathbf{x},\mathbf{p}) |\psi\rangle = |\psi\rangle ,
\end{equation}
for all $(\mathbf{x},\mathbf{p})\in \mathbf{S}$. In other words, this state vector
$|\psi\rangle$ is an eigenvector of
all of
the operators $\chi^*(\mathbf{x},\mathbf{p}) W(\mathbf{x},\mathbf{p})$ with the same eigenvalue $+1$, as we encountered it in the binary
setting. Again, it is said that $|\psi\rangle$ is {\it stabilized} by these operators. The above Weyl operators are, notably, no longer Hermitian. Hence, they do not per se allow for an interpretation in terms of natural constraints
on the correlations present in the state.
This setting can naturally be generalized to
prime power dimension $d=p^r$ with $p$ being prime
and $r$ being an integer. If, however, the underlying integer
ring is no longer a field, one loses the vector
space structure of ${\mathbbm{F}}_d$, which
demands some caution with respect to the concept
of a basis\footnote{If $d$ contains multiple prime factors the stabilizer, consisting of $d^N$ different elements, is in general no longer generated by a set of only $N$ generators. For the minimal generating set more elements $N\leq m\leq 2N$ of the stabilizer might be needed \cite{Hostens04}.}.
Similarly, a {\it stabilizer code} can be conceived in this picture as
the image of an isotropic subspace $\mathbf{S}$ under the
Weyl representation. If a stabilizer code is
one-dimensional, it is a {\it stabilizer state}.
Again, any stabilizer state can be represented as
a graph state, up to local Clifford operations.
This has been shown in refs.~\cite{Grassl02,Schlinge02b}.
The notion of a {\it Clifford operation}
still makes sense,
as a unitary that maps Weyl operators onto
Weyl operators under conjugation,
\begin{equation}
U W(\mathbf{x},\mathbf{p}) U^\dagger \propto W({\mathbf{Q}}(\mathbf{x},\mathbf{p})),
\end{equation}
where ${\mathbf{Q}}$ is an element of the symplectic group,
i.e., it preserves the above symplectic form.
The respective graph state corresponds to a {\it weighted graph} with
weights ${\bf \Gamma}_{ab}\in {\mathbbm{F}}_d$,
with $a,b$ again being associated with the vertices
of the underlying graph. A graph state is now
a state stabilized by the operators
\begin{eqnarray}
K_a =
U^a_1 \prod_{b\in N_a} (V^b_1)^{{\Gamma}_{ab}}
=
X^a \prod_{b\in N_a} (Z^b)^{{\Gamma}_{ab}}.
\end{eqnarray}
The symmetric adjacency
matrix ${\bf \Gamma}$ contains elements
${\Gamma}_{ab} \in {\mathbbm{F}}_d$ and, thus, no longer has
binary entries as in the case of a simple graph for qubit systems.
The interaction is instead specified by a {\it strength} $r={\bf \Gamma}_{ab} $
given by the weight of the edge $\{a,b\}$ in the weighted
graph. Note, however, that this concept of a graph state
based on a weighted graph is different from the one used in the
remaining part of this review article (see sec.~\ref{WeightedGS}).
When conceiving the preparation of the graph state
via the successive application of {\it phase gates}, the
associated unitary $U_{ab}^r$ acting on the
Hilbert spaces of the systems labeled $a$ and $b$
is given by
\begin{equation} U^r_{ab}\,
|m\rangle^a|n\rangle^b\, = \, \omega^{- r m n}
|m\rangle^a|n\rangle^b \; .\end{equation}
This picture of graph states in discrete Weyl systems,
embodying the case of $d$-dimensional systems, as
well as their processing in the context of the one-way
computer, has been considered in detail in ref.~\cite{schlinge04}.
Quantum error correcting codes in this setting
have also been described in ref.~\cite{schlinge04}.
This language of discrete
Weyl systems provides a clear-cut picture to describe
finite-dimensional systems in phase space.
\subsubsection{\it \small Remarks on harmonic systems}
\index{qudits|see{harmonic systems}}\index{harmonic systems}
\label{GS_harmonic}
Finally, it is worth noting that
the close analogy between discrete and continuous
Weyl systems suggests the existence of
similar structures as graph states in the setting
of quantum systems with canonical coordinates,
so systems in a real phase space with
position and momentum coordinates.
Variants of such an idea have been considered
in a number of publications \cite{HC,Pl04,Frust,Zhang}; to describe
them in detail, however, would be beyond the scope of
this review article. Here, we rather note the
structural similarities to the previous
finite-dimensional setting.
The phase space of a system with
$N$ canonical degrees of freedom -- $N$ harmonic oscillators --
is ${\mathbbm{R}}^{2N}$, equipped with an antisymmetric bi-linear form
defined by
\begin{equation}
{\pmb\sigma} =
\bigoplus_{j=1}^N
\left(
\begin{array}{cc}
0 & 1 \\
-1 & 0\\
\end{array}
\right).
\end{equation}
This form originates from the canonical
commutation relations between the {\it canonical
coordinates} of position and momentum,
which can be collected in a row vector as
$(r_1,..., r_{2N})=
( x_1, p_1,..., x_N, p_N)$. These canonical coordinates satisfy the
{\it canonical commutation relations} between position and momentum,
where these variables can, needless to say, also correspond to
quadratures of field modes. As before (in this now real
phase space) we
may introduce {\it Weyl operators} embodying translations in
real phase space, defined as
\begin{equation}
W(\mathbf{x},\mathbf{p}) = e^{i (\mathbf{x},\mathbf{p})\, {\pmb\sigma}\, (r_1,\ldots,r_{2N})^T}
\end{equation}
for $(\mathbf{x},\mathbf{p})\in {\mathbbm{R}}^{2N}$. This Weyl operator is, in a number
of different conventions, a frequently used tool in quantum
optics under the name of {\it displacement operator}.
These Weyl operators inherit the canonical commutation relations:
it is easy to see that they satisfy the Weyl relations
\begin{equation}
W(\mathbf{x},\mathbf{p}) W(\mathbf{x}',\mathbf{p}') = e^{-i (\mathbf{x},\mathbf{p})\, {\pmb\sigma}\, (\mathbf{x}',\mathbf{p}')^T }
W(\mathbf{x}+\mathbf{x}',\mathbf{p}+\mathbf{p}').
\end{equation}
The structural similarities are obvious.
The {\it characteristic function} is here, just
as in the discrete case, defined as the expectation value of the Weyl operator,
\begin{equation}
f(\mathbf{x},\mathbf{p}) = \text{tr}( W(\mathbf{x},\mathbf{p}) \rho)
\end{equation}
for $(\mathbf{x},\mathbf{p})\in {\mathbbm{R}}^{2N}$.
This is a generally complex-valued function in phase space,
uniquely defining the quantum state. Hence, the description in terms
of Weyl systems serves also as a language appropriate for the
description of both the
discrete and the infinite-dimensional setting.
A certain class of states for which the assessment of entanglement
is relatively accessible is the important class of {\it Gaussian states}.
They
are those quantum states for which the characteristic function is a Gaussian.
Then, the first moments, $d_j =
\text{tr}(r_j\rho)$ and the second moments fully characterize the quantum
state. The {\it second moments}, in turn,
can be embodied in the real symmetric $2N\times 2N$-matrix
${\pmb \gamma}$, the entries of which are given by
\begin{equation}
\gamma_{j,k}=
2 \text{Re}\,
\text{tr}
\left((r_j - d_j)
(r_k - d_k)\rho \right),
\end{equation}
$j,k=1,...,2N$. This matrix is typically referred to as
the {\it covariance matrix} of the state.
Similarly, higher moments
can be defined.
Analogues or `close relatives'
of graph states in the Gaussian setting
now arise in several contexts: (i) They can be thought of as originating
from an interaction pattern, similar to the interaction pattern for Ising
interactions \cite{Zhang}. These interactions may
arise from squeezing and Kerr-like interactions. (ii)
They can also arise as ground states of
Hamiltonians which are specified by a simple graph, in turn
reflecting the interaction terms in the Hamiltonian
\begin{equation}
H= \left(\mathbf{p} \mathbf{p}^T + \mathbf{x} \mathbf{V} \mathbf{x}^T\right)/2,
\end{equation}
where again $\mathbf{p}=(p_1,...,p_N)$, $\mathbf{x}=(x_1,...,x_N)$, and the
real symmetric $N\times N$-matrix $\mathbf{V}$ incorporates
the interaction pattern as the adjacency matrix of a weighted graph
\cite{HC,Pl04,Frust,Gap}. Then, the resulting covariance matrix is
nothing but $\gamma = \mathbf{V}^{-1/2}\oplus \mathbf{V}^{1/2}$, when ordering
the entries in the convention of $(x_1,...,x_N, p_1,...,p_N)$.
(iii) Also, the
direct analog of stabilizer state vectors
(conceived as state vectors
`stabilized by a stabilizer')
in the setting of continuous
Weyl systems still makes sense, yet one has
to allow for singular
states \cite{Infinite} which can no longer be associated
with elements of the Hilbert space of square
integrable functions, but can conveniently be
described in an algebraic language (or within a Gelfand
triple approach).
\subsection{Alternative approaches}\label{DefOfGS_Alternative}
Due to the description in terms of their stabilizer, graph states can be represented, under suitable interpretations, by various mathematical structures, which also connect these objects to other areas of application in classical cryptography and discrete mathematics. For example, a graph code can be described by a {\it self-dual additive code over the field $\mathbb{F}_4=GF(4)$} or by a (quantum) {\it set of lines of the projective space over $\mathbb{F}_2=GF(2)$} \cite{Calderbank98,Glynn02}. Graph states are also equivalent to {\it quadratic boolean functions} \cite{database}, which are used in classical cryptography.
\index{density matrix renormalization group (DMRG)|(}
\index{Valence Bond Solids (VBS)|(}
\begin{wrapfigure}[13]{r}{0.4\textwidth}
\vspace{-0.5cm}
\includegraphics[width=0.4\textwidth]{Ring2.eps}
\caption{\label{Fig:Ring2}A graph state for a ring with $5$ vertices as valence bond solid.}\label{Ring2}
\end{wrapfigure}
In the remainder of this section we focus on another description of graph states in terms of {\it Valence Bond Solids} (VBS), which does not rely on the stabilizer formalism and, hence, can be extended to weighted graph states (see sec.~\ref{WeightedGS}). This representation was recently introduced by Verstraete and Cirac \cite{Ve04} and has already found interesting applications in density-matrix renormalization group (DMRG)\footnote{For a review of these methods we refer the reader to ref.~\cite{Sc05}.} methods. The VBS picture has its roots in the Affleck-Kennedy-Lieb-Tasaki (AKLT) model \cite{AKLT}, which allows one to find exact expressions for the ground states or excited states of some particular Hamiltonians. In DMRG, variational methods are applied to a generalization of these AKLT-states, the so-called {\it matrix-product states} \cite{MPS}, in order to perform numerical studies of various physical systems, especially within the field of condensed matter physics. Lately, VBS states have attracted some attention, since they allow for a clear reformulation of DMRG algorithms leading to improvements of the DMRG methods for the simulation of many-body systems in two or higher dimensions or with periodic boundary conditions \cite{Ve042d,AdvancedDMRG}.
In the context of this article graph states can also be regarded as particular VBS states: Here, the {\em graph state} $|G\rangle$ arises from a set of Bell pairs (bonds) \begin{equation} |B\rangle^{a^ib^j} = U_{a^ib^j} |+\rangle^{a^i} |+\rangle^{b^j}
\end{equation} between some {\em virtual} qubits after some suitable projections
$P$ onto the {\em real} qubits (see fig.~\ref{Fig:Ring2}):
\begin{equation} |G\rangle = \prod_{a \in V} P_a \prod_{\genfrac{}{}{0pt}{}{a^i,b^j}{\{a,b\}\in E}} |B\rangle^{a^ib^j} \; . \end{equation}
More precisely, the graph state can be obtained by the following procedure:
\begin{itemize}
\item[1.] Replace the real qubit at each vertex $a$ by $d_a$ virtual qubits $a^1,\ldots, a^{d_a}$, where $d_a=|N_a|$ denotes the {\em degree} of the vertex $a$.
\item[2.] For each edge $\{a,b\}$ in $G$ create a Bell pair $|B\rangle^{a^ib^j}$ between some virtual qubit $a^i$ at vertex $a$ and some virtual qubit $b^j$ at vertex $b$ by using the Ising interaction $U$.
\end{itemize}
\begin{wrapfigure}[11]{r}{0.55\textwidth}
\vspace{-1cm}{\setlength{\unitlength}{1cm}
\begin{picture}(8,4)
\put(0.5,0){\includegraphics{VBS.eps}}
\put(6.5,1.5){$P$}
\put(4,0.7){$U$}
\put(2,1.5){$P$}
\put(4,2.8){$U$}
\end{picture}}
\caption{\label{CommutingDiagram} The phase gate on the level of the virtual qubits `commutes' with the projection onto the real physical qubits.}
\end{wrapfigure}
\begin{itemize}
\item[3.] Project all virtual qubits at each vertex $a$ into the real qubit (sub-)system by
\begin{equation} P_a := ^a\hspace{-0.05cm}|\tilde 0\rangle\langle 0 |^{a^1}\ldots\langle 0 |^{a^{d_a}} + ^a\hspace{-0.05cm}|\tilde 1\rangle\langle 1 |^{a^1}\ldots\langle 1 |^{a^{d_a}} \, . \end{equation}
\end{itemize}
That this procedure provides an equivalent description of graph states can be shown inductively using the fact that the phase gate $U_{a^ib^j}$ on the level of the virtual qubits `commutes' with the projection onto the real physical qubits, i.e.
\begin{equation} [U_{a^ib^j},P_{c}]=0 \hspace{1cm} \forall a^i,b^j, c=(c^1,\ldots,c^{d_c}) \; .\end{equation} For the twisted four-qubit ring this is depicted by the commutative diagram in fig.~\ref{CommutingDiagram}.
\index{density matrix renormalization group (DMRG)|)}
\index{Valence Bond Solids (VBS)|)}
\section{Clifford operations and classical simulation}\label{Pauli measurements}\index{Clifford operations|(}
The stabilizer formalism is not only suited to describe states (or codes), but also to calculate the action of Clifford operations on these states. {\em Clifford operations} are (possibly non-local) Clifford unitaries $U\in \mathcal{C}_N$ (see eq.~(\ref{Def_CliffU})) and projective measurements of a Pauli operator $s\in \mathcal{P}^V$, which we will call {\em Pauli measurements}. The restriction to projective measurements in the Pauli basis ensures that such measurements performed on stabilizer states [codes] yield again stabilizer states [codes] as measurement results \cite{Gottesman,NielsenBook}.
It is not necessary to consider measurements of arbitrary Pauli operators $s\in \mathcal{P}^V$. Since it is possible to efficiently decompose an arbitrary Clifford unitary $U\in \mathcal{C}_N$ in terms of the one- and two-qubit gates $H$, $S$ and CNOT (see sec.~\ref{Def_LC}), any Clifford operation can be simulated by a sequence of at most ${\cal O}(N^2)$ of these gates together with one Pauli measurement (say $s=\sigma_z$) at a single vertex. Thus a sequence of Clifford operations acting on some stabilizer state can be replaced by a sequence of one- and two-qubit gates $H$, $S$ and CNOT and single-qubit Pauli measurements with only a polynomial overhead ${\cal O}(N^2)$ in the number of gates. In the circuit model for quantum computation an equivalent scheme is often considered. The class of quantum computations that involve only
\begin{itemize}
\item state preparations in the computational basis,
\item the one- and two qubit gates $H$, $S$ and CNOT and\index{Hadamard gate $H$}\index{controlled NOT gate (CNOT)}
\item measurements of observables in the Pauli group $\mathcal{P}$, including the classical control of gates conditioned on the outcome of such measurements,
\end{itemize}
is called the class of {\em stabilizer circuits}\index{stabilizer circuit}. All the states of the `quantum register' in each step of such a stabilizer circuit are stabilizer states. These states can be characterized by their set of stabilizer generators. A formal representation of this set of generators in the memory of a classical computer\footnote{In the binary representation a computer has to store the generator matrix and additional phases at each qubit. The matrix requires ${\cal O}(N^2)$ memory size, whereas for the phases a register of size ${\cal O}(N)$ is sufficient. With this information the complete stabilizer can be recovered (see sec.~\ref{Def_Stab_States}).} allows one to efficiently keep track of all changes by pure classical computation. The effect of the one- and two-qubit gates as well as the one-qubit Pauli measurements on the generating set can be calculated using ${\cal O}(N^3)$\footnote{Note that the update of the stabilizer can be determined in only ${\cal O}(N^2)$, but the determination of the exact measurement result in the case of measuring a Pauli-matrix $\pm\sigma_i \in \mathcal{S}$ seems to require some Gaussian elimination, which needs ${\cal O}(N^3)$ time in practice \cite{Aaronson04}.} steps on a classical computer. In this way, any Clifford operation can be efficiently simulated on a classical computer, which is the content of the {\em Gottesman--Knill theorem}\index{Gottesman--Knill theorem} \cite{Gottesman99,NielsenBook}.
{\proposition[{\bf Gottesman--Knill theorem}]\label{GottKnillTh}
Any stabilizer circuit on a quantum register of $N$ qubits, which consists of $M$ steps, can be simulated on a classical computer using at most ${\cal O}(N^3 M)$ elementary classical operations.
}
\hfill\fbox\\\medskip
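Disregarding the phase bits (which a complete simulator must track as well), the update of the generator matrix $(\mathbf{X}|\mathbf{Z})$ under the gates $H$, $S$ and CNOT amounts to simple column operations; the following Python sketch illustrates this bookkeeping on the preparation of a Bell pair:
\begin{verbatim}
import numpy as np

def apply_H(XZ, a, N):
    """H on qubit a exchanges sigma_x <-> sigma_z: swap the X- and
    Z-column of qubit a (phases not tracked in this sketch)."""
    XZ[:, [a, N + a]] = XZ[:, [N + a, a]]

def apply_S(XZ, a, N):
    """S on qubit a: X -> Y = XZ (up to phase), Z -> Z."""
    XZ[:, N + a] ^= XZ[:, a]

def apply_CNOT(XZ, a, b, N):
    """CNOT (control a, target b): X_a -> X_a X_b, Z_b -> Z_a Z_b."""
    XZ[:, b] ^= XZ[:, a]
    XZ[:, N + a] ^= XZ[:, N + b]

# |0...0> is stabilized by sigma_z^1, ..., sigma_z^N:  (0|1)
N = 2
XZ = np.hstack([np.zeros((N, N), np.uint8), np.eye(N, dtype=np.uint8)])
apply_H(XZ, 0, N)           # stabilizer  <X_1, Z_2>
apply_CNOT(XZ, 0, 1, N)     # Bell pair:  <X_1 X_2, Z_1 Z_2>
print(XZ)                   # [[1 1 0 0]
                            #  [0 0 1 1]]
\end{verbatim}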
\begin{wrapfigure}[16]{r}{0.5\textwidth}
\vspace{-0.0cm}
\includegraphics[width=0.47\textwidth,clip]{xMeasurementExample1.eps}
\caption{\label{fig:xMeasurementExample1}\small Example for a $\sigma_x$-measurement at vertex $1$ in graph No.\ 1, which is followed by a $\sigma_z$-measurement at vertex $2$: In graph No.\ 1 a $\sigma_x$-measurement is performed at the vertex $1$. For the application of the $x$-measurement rule, vertex $2$ was chosen as the special neighbor $b_0$, yielding the graph No.\ 2 up to a local unitary $U_{x,\pm}^{(1)}= (\pm i \sigma_y^{(2)} )^{1/2}$. As stated in Table \ref{tabl}, the subsequent $\sigma_z$-measurement on the new graph state is therefore essentially another $\sigma_x$-measurement, now at vertex $2$ with a single neighbor $b_0=5$. The final graph is then graph No.\ 3.}
\end{wrapfigure}
Although this result has been known for several years \cite{Gottesman99}, only very recently was such a classical simulator implemented \cite{Aaronson04}, which actually requires only ${\cal O}(N^2)$ elementary operations on a classical computer. In the remainder of this section we will see how graph states can provide an alternative algorithm. This algorithm is based on elementary graph manipulations and was proposed and implemented in ref.~\cite{Anders}. Each elementary gate operation requires ${\cal O}(d^2)$ basic steps on a classical computer, where $d$ denotes the maximal degree of the graph representing the quantum register. This proposal is hence advantageous if this maximal degree remains small compared to $N$ for the different register states throughout the computation.
According to sec.~\ref{Def_Stab_States} any stabilizer state $|\mathcal{S}\rangle$ can be represented as a graph state $|G\rangle$ up to some LC-unitaries $U\in\mathcal{C}_1^V$. Thus, in order to keep track\footnote{Usually a stabilizer circuit is required to start in some kind of standard input state $|0\rangle^V$ of the computational basis, which has a trivial representation in terms of graph states, i.e., $|0\rangle^V = H^V |+\rangle^V$. But the following argumentation will also hold for an arbitrary stabilizer state as the input, if one allows for a polynomial overhead at the beginning of classical simulation in order to determine the corresponding graphical representation.} of the different steps $i=1,\ldots,M$ in the stabilizer circuit computation, one has to store a local Clifford unitary $W_i$ as well as the graph $G_i$ of the graph state $|G_i\rangle$, which is LC-equivalent to the actual stabilizer state $|\mathcal{S}_i\rangle$ in the quantum register of step $i$, i.e., $|\mathcal{S}_i\rangle = W_i |G_i\rangle$. Note that the storage of a graph $G_i$ on $N$ vertices requires only $\frac{N(N-1)}{2}$ bits for the entries of the corresponding adjacency matrix. Moreover, as discussed in sec.~\ref{Def_Stab_States}, at each vertex $a$ the single-qubit tensor factor of the LC-unitary $W_i$ can be characterized by one of the $24$ `permutations' $W_i^a$ of the Pauli matrices depicted in Table~\ref{Tab_LC}.
In order to be an efficient representation for the classical simulation of the stabilizer circuit, the graphical description has to be provided with a set of graphical rules that account for the changes of the stabilizer when a one- or two-qubit gate or some single-qubit Pauli measurement is applied to it. One-qubit Clifford unitaries occurring at a vertex can easily be dealt with by updating the corresponding unitary according to some fixed `multiplication table'. For the two-qubit unitaries we will, instead of the CNOT gate, consider the phase gate $U_{ab}$ in eq.~(\ref{CPhase}), since on `pure' graph states it simply acts by adding or deleting the corresponding edge $\{a,b\}$. However, the case where $U_{ab}$ does not act directly on $|G\rangle$ but on $W |G\rangle$ for some LC-unitary $W$ that is non-trivial at the vertices $a$ and $b$, i.e., $W^a\neq \mathbf{1}_a$ or $W^b\neq \mathbf{1}_b$, requires a more careful treatment. Remember that each of the possible single-qubit unitaries has a decomposition in terms of elementary $\frac{\pi}{4}$--rotations given in Table~\ref{Tab_LC}. Since all unitaries $W_a \in \{\mathbf{1}^a, \sigma_z^a, \sigma_x^a, \sigma_y^a, \sqrt{\pm i \sigma_z}^a\}$ can be `commuted through' the phase gate yielding at most some additional $\sigma_z$ on the vertex $a$ or $b$, e.g. $U_{ab}\sigma_x^a = \sigma_x^a \sigma_z^b U_{ab}$, we are left with the analysis of the $14$ additional cases, for which at least one unitary $W_a$ or $W_b$ is of the type $\sqrt{\pm i \sigma_x}$ or $\sqrt{\pm i \sigma_y}$.
The graphical rules for these cases can be obtained using the LC-rule derived in sec.~\ref{Def_LC} in order to remove these unitaries. For example, one finds that for an arbitrary graph $G$
\begin{equation} U_{ab} \,\sqrt{\pm i \sigma_x}^a |G\rangle = \sqrt{\mp i \sigma_z}^{N_a}\, U_{ab} |\tau_a(G) \rangle \; ,\end{equation}
since $\sqrt{+ i \sigma_x}^a = \sqrt{- i \sigma_z}^{N_a} (U^\tau_a)^\dagger$ (similarly for $\sqrt{- i \sigma_x}^a$), where $U^\tau_a$ denotes the LC-unitary in eq.~(\ref{LU_Rule_U}) for the LC-rule.
In fact in ref.~\cite{Anders} it is shown that, in this way, any of the remaining LC-unitaries $W_a$ at some vertex $a$ can be removed by means of at most five local complementations $\tau$ applied at this vertex or at one of its neighbors.
\begin{table}
\begin{center}
\begin{minipage}{0.8\textwidth}
\begin{tabular}{ccc}
\begin{minipage}{0.45\textwidth}
\begin{tabular}{|c|}
\hline
$P_{x,\pm} \sigma_z = \sigma_z P_{x,\mp}$,\\
$P_{y,\pm} \sigma_z = \sigma_z P_{y,\mp},$\\
$P_{z,\pm} \sigma_z = \sigma_z P_{z,\pm},$ \\
\hline
$P_{x,\pm} (-i \sigma_z)^{1/2} = (-i\sigma_z)^{1/2} P_{y,\mp},$ \\
$ P_{x,\pm} (i \sigma_y)^{1/2} = (i \sigma_y)^{1/2} P_{z,\mp}$,\\
$ P_{x,\pm} (- i \sigma_y)^{1/2} = (- i \sigma_y)^{1/2} P_{z,\pm}$,\\
$ P_{x,\pm} (i \sigma_z)^{1/2} = (i\sigma_z)^{1/2} P_{y,\pm},$ \\
\hline
\end{tabular}
\end{minipage}
& \hspace{-1cm} &
\begin{minipage}{0.45\textwidth}
\begin{tabular}{|c|}
\hline
$P_{y,\pm} (-i\sigma_z)^{1/2} = (-i\sigma_z)^{1/2} P_{x,\pm}, $ \\
$P_{y,\pm} (i\sigma_y)^{1/2} = (i\sigma_y)^{1/2} P_{y,\pm}, $ \\
$ P_{y,\pm} (-i\sigma_y)^{1/2} = (-i\sigma_y)^{1/2} P_{y,\pm}, $ \\
$ P_{y,\pm} (i \sigma_z)^{1/2} = (i\sigma_z)^{1/2} P_{x,\mp}, $ \\
\hline
$P_{z,\pm} (-i\sigma_z)^{1/2} = (-i\sigma_z)^{1/2} P_{z,\pm}, $ \\
$P_{z,\pm} (i \sigma_y)^{1/2} = (i \sigma_y)^{1/2} P_{x,\pm},$\\
$P_{z,\pm} (- i \sigma_y)^{1/2} = (- i \sigma_y)^{1/2} P_{x,\mp},$\\
$P_{z,\pm} (i \sigma_z)^{1/2} = (i\sigma_z)^{1/2} P_{z,\pm}, $ \\
\hline
\end{tabular}
\end{minipage}
\end{tabular}
\end{minipage}
\caption{ The relevant commutation relations for Pauli projections and Clifford
operators if a sequence of Pauli measurements is applied to a graph state.} \label{tabl}
\end{center}
\end{table}
Let us finally examine the effect of {\em single-qubit Pauli measurements} in more detail, since the graphical rules will be used in the subsequent sections. We will at first consider the case of a projective measurement of some Pauli operator $\sigma_x$, $\sigma_y$ or $\sigma_z$ at a single vertex $a$ in a graph state without additional LC-unitaries at this vertex and will later mention how to cope with the general case.
For a Pauli measurement of the graph state $|G\rangle$ at a vertex $a$ we find that the graph $G'$ of the resulting graph state $|G'\rangle$ on the remaining unmeasured vertices can be obtained from the initial graph $G$ by means of vertex deletion and local complementation:
\begin{description}\label{MeasRuleList}\index{Pauli measurements}\index{measurement rule}
\item[$\sigma_z:$] deleting the vertex $a$ from $G$;
\item[$\sigma_y:$] inverting $G[N_a]$ and deleting $a$;
\item[$\sigma_x:$] choosing any $b_0 \in N_a$, inverting $G[N_{b_0}]$, applying the rule for $\sigma_y$ and finally inverting $\tilde{G}[N_{b_0}]$ again.
\end{description}
This is the content of the following proposition \cite{He04,schlinge04}.
{\proposition[{\bf Local Pauli measurements}] \label{Pauli_Measurement}
A projective measurement of $\sigma_{x}$, $\sigma_{y}$, or $\sigma_{z}$ on the qubit associated with a vertex $a$ in a graph $G$ yields up to local unitaries $U_{i,\pm}^a$ a new graph state $|G'\rangle$ on the remaining vertices.
More precisely,
\begin{eqnarray}
P^a_{z,\pm} |G\rangle & = & \frac{1}{\sqrt{2}}\, |z,\pm\rangle^a \otimes U_{z,\pm}^a |G-a\rangle ,\\
P^a_{y,\pm} |G\rangle & = & \frac{1}{\sqrt{2}}\, |y,\pm\rangle^a \otimes U_{y,\pm}^a |\tau_a(G)-a\rangle, \\
P^a_{x,\pm} |G\rangle & = & \frac{1}{\sqrt{2}}\, |x,\pm\rangle^a \otimes U_{x,\pm}^a |\tau_{b_0}\left(\tau_a\circ\tau_{b_0} (G)-a\right)\rangle\, ,
\end{eqnarray}
for any choice of some $b_0 \in N_a$, whenever the $\sigma_x$-measurement is not performed at an isolated vertex. If $a$ is an isolated vertex, then the outcome of the $\sigma_x^a$-measurement is $+1$, and the state is left unchanged.
The local unitaries $U_{i,\pm}^a$ are
\begin{eqnarray}
U_{z,+}^a = &\mathbf{1}, & U_{z,-}^a = \sigma_z^{N_a}, \label{uz}\\
U_{y,+}^a = &\sqrt{- i\sigma_z}^{N_a}, & U_{y,-}^a = \sqrt{ + i \sigma_z}^{N_a} \label{uy}\\
U_{x,+}^a = &\sqrt{+ i \sigma_y}^{b_0} \sigma_z^{N_a\setminus(N_{b_0}\cup b_0)}, &
U_{x,-}^a = \sqrt{- i \sigma_y}^{b_0} \sigma_z^{N_{b_0}\setminus(N_{a}\cup a)}.\label{uxm}
\end{eqnarray}
}
For a measurement of $\sigma_{x}$ the local unitary $U_{x,\pm}$ depends on the choice of $b_0$. But the resulting graph states arising from different choices of $b_0$ and $b'_0$ will be equivalent via the LC-unitary $U_{b'_0}U^\dagger_{b_0}\in\mathcal{C}_1^V$.
For a sequence of local Pauli measurements, the local unitaries have to be taken into account, if the measured
qubit is affected by the unitary. We have summarized the necessary commutation relations in Table \ref{tabl}, which denote the transformation of the measurement basis, if a subsequent measurement is applied to a unitarily transformed graph state. Fig.~\ref{fig:xMeasurementExample1} shows two subsequent applications of
the rather complicated $\sigma_x$-measurement. An exhaustive table can also be provided for the general case that the Pauli measurement occurs at some vertex $a$ that has an arbitrary non-trivial LC-unitary $W_a$ attached to it.
{\em Proof} (of Proposition~\ref{Pauli_Measurement}):
The $\sigma_z$-measurement rule follows directly from the definition of graph states in terms of the underlying interaction pattern (see eq.~(\ref{GS_Preparation})):
\begin{eqnarray}
P_{z,\pm}^a |G\rangle & = & P_{z,\pm}^a \prod_{\{a,b\} \in E} U_{ab} \prod_{\genfrac{}{}{0pt}{}{\{c,d\} \in E}{c,d \neq a}} U_{cd} |+\rangle^V \\
& = & P_{z,\pm}^a \,\left(P_{z,+}^a + P_{z,-}^a \sigma_z^{N_a}\right) \,|+\rangle^a \otimes|G\setminus a\rangle^{V\setminus a} \nonumber \\
& = & \frac{1}{\sqrt{2}} \,\left\{ \begin{array}{ll} |z,+\rangle^a \otimes |G\setminus a\rangle^{V\setminus a} & \text{\small if measurement result is}\; m_z^a =+1 \\ |z,-\rangle^a \otimes \sigma_z^{N_a}|G\setminus a\rangle^{V\setminus a } & \text{\small if measurement result is}\; m_z^a =-1 \end{array} \right. \nonumber
\end{eqnarray}
In other words, with probability $\tfrac{1}{2}$ a $\sigma_z$-measurement at a vertex of some graph state gives either $|G\setminus a \rangle$ as the graph state on the remaining vertices if the measurement outcome is $m_z=+1$ or $\sigma_z^{N_a}|G\setminus a \rangle$ as the graph state on the remaining vertices if the measurement outcome is $m_z=-1$.
With the LC-rule at hand (see Proposition~\ref{loc}) one can now derive the measurement rules for a $\sigma_x$- or $\sigma_y$-measurement from this $\sigma_z$-measurement rule.
For this, one can use commutation relations, which are similar to those in Table~\ref{tabl}, in order to show that
\begin{eqnarray} \label{ProofXMeas} P^{a}_{x,\pm} & = & U^\tau_{b_0}(G) P^{a}_{y,\pm} (U^\tau_{b_0}(G))^\dagger \\
\label{ProofYMeas} P^{a}_{y,\pm} & = & U^\tau_{a}(G) P^{a}_{z,\mp} (U^\tau_{a}(G))^\dagger \, \end{eqnarray} where $b_0$ is a neighbor\footnote{Note that if $a$ is an isolated vertex the graph state is $|G\rangle = |+\rangle^a \otimes |G'\rangle^{V\setminus a}$ for some graph on the remaining vertices. In this case a $\sigma_x$-measurement yields $+1$ with probability $1$.} of $a$ in $G$.
With eq.~(\ref{ProofYMeas}) and using $e^{\pm i\frac{\pi}{4}\sigma_k}=\sqrt{\pm i\sigma_k}=\frac{\mathbf{1}\pm i\sigma_k}{\sqrt{2}}$ we can now compute
\begin{eqnarray}
P^{a}_{y,\pm} |G\rangle & = & U^\tau_{a}(G) P^{a}_{z,\mp} |\tau_a(G)\rangle \nonumber \\
& = & U^\tau_{a}(G) \frac{1}{\sqrt{2}} \,\left\{ \begin{array}{ll} |z,-\rangle^a \otimes \sigma_z^{N_a}|\tau_a(G)\setminus a\rangle^{V\setminus a} & \text{\small if }\; m_y^a =+1 \\ |z,+\rangle^a \otimes |\tau_a(G)\setminus a\rangle^{V\setminus a } & \text{\small if}\; m_y^a =-1 \end{array} \right. \nonumber \\
& = & \frac{1}{\sqrt{2}} \,\left\{ \begin{array}{ll} \sqrt{-i\sigma_x^a}|z,-\rangle^a \otimes \sqrt{i\sigma_z^{N_a}} \sigma_z^{N_a}|\tau_a(G)\setminus a\rangle^{V\setminus a} & \text{\small if }\; m_y^a =+1 \\ \sqrt{-i\sigma_x^a} |z,+\rangle^a \otimes \sqrt{i\sigma_z^{N_a}} |\tau_a(G)\setminus a\rangle^{V\setminus a } & \text{\small if }\; m_y^a =-1 \end{array} \right. \nonumber \\
& \propto & \frac{1}{\sqrt{2}} \,\left\{ \begin{array}{ll}
|y,+\rangle^a \otimes \sqrt{-i\sigma_z^{N_a}} |\tau_a(G)\setminus a\rangle^{V\setminus a} & \text{\small if }\; m_y^a =+1 \\
|y,-\rangle^a \otimes \sqrt{i\sigma_z^{N_a}} |\tau_a(G)\setminus a\rangle^{V\setminus a} & \text{\small if }\; m_y^a =-1
\end{array} \right.
\end{eqnarray}
This is the $\sigma_y$-measurement rule, from which the $\sigma_x$-measurement rule now can be derived along the same lines using eq.~(\ref{ProofXMeas}).
\hfill\fbox\\\medskip
\index{Clifford operations|)}
\section{Examples and applications}\label{GS_Examples}
In this section we give some prominent, not necessarily distinct classes of examples of graph states. We also sketch important applications of these states in multi-party quantum communication, quantum computation and quantum error correction. These examples illustrate that graph states not only provide an interesting model for the study of multi-party entanglement, as will be carried out in the following sections, but can also be an important resource for quantum information processing.
\subsection{GHZ--states}\label{GHZ_GS}
\begin{wrapfigure}[9]{r}{0.45\textwidth}
\vspace{-0.5cm}
\includegraphics[width=0.2\textwidth]{CompleteGraphGHZ.eps}\hspace{0.03\textwidth}\includegraphics[width=0.2\textwidth]{StarGraphGHZ.eps}
\caption{\label{fig:GHZ} The GHZ-state is LU-equivalent to the graph state corresponding to a star graph or the complete graph.}
\end{wrapfigure}
We start by considering the $N$--qubit Greenberger-Horne-Zeilinger states\index{GHZ state}
\begin{equation}
|GHZ\rangle=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}),\label{GHZstate}
\end{equation}
which were introduced in ref.~\cite{GHZ89} for the case of three qubits and have since served as a `text book' example for multi-party entangled `Schr\"odinger cat' states. These states are special examples of states that maximally violate multi-partite Bell inequalities \cite{BI} and can, for instance, be used to improve frequency standards \cite{Frequency}. GHZ states have also become an interesting resource for multi-party quantum communication, e.g. in the context of secret sharing and secure function evaluation \cite{Secure}. The multi--party {\em GHZ-state} corresponds to the {\em star graph}\index{star graph (state)|see{GHZ state}} and the {\em complete graph}\index{complete graph (state)}. This is easily seen by applying Hadamard unitaries $H^{V\setminus a}$ to all but one qubit $a$ in the GHZ-state, which yields the star graph state with $a$ as the central qubit. A further application of the LC-unitary $U_a^\tau$ for the LC-rule $\tau_a$ in Proposition~\ref{loc} then transforms the star graph state into the complete graph state. Thus the star graphs for different central vertices as well as the complete graph are LC-equivalent representations of the GHZ-state.
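The LU-equivalence of the GHZ-state and the star graph state can be confirmed numerically in a few lines; a small Python sketch for $N=4$ qubits, with vertex $0$ as the central qubit:
\begin{verbatim}
import numpy as np
from functools import reduce

N = 4
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus = np.ones(2) / np.sqrt(2)

def cz(a, b):
    """Controlled phase gate U_ab on N qubits (qubit 0 first factor)."""
    U = np.eye(2 ** N)
    for idx in range(2 ** N):
        bits = [(idx >> (N - 1 - k)) & 1 for k in range(N)]
        if bits[a] and bits[b]:
            U[idx, idx] = -1
    return U

star = reduce(np.kron, [plus] * N)       # |+>^V ...
for b in range(1, N):
    star = cz(0, b) @ star               # ... entangled along the star

ghz = np.zeros(2 ** N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)        # (|0..0> + |1..1>)/sqrt(2)
H_rest = reduce(np.kron, [np.eye(2)] + [H] * (N - 1))
assert np.allclose(star, H_rest @ ghz)   # H^{V\a}|GHZ> = star graph state
\end{verbatim}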
\subsection{Cluster states and the one-way quantum computer}\label{one-way-QC}\index{one-way quantum computer ($\text{QC}_G$)|(}\index{quantum computation}
The initial resource for quantum computation in the {\em one-way quantum computer} ($\text{QC}_G$), as it was introduced in refs.~\cite{Briegel01,OneWay1,OneWay2,OneWay3,OneWay5}, is the cluster state in two dimensions, which corresponds to a rectangular lattice. More generally, a {\em $d$-dimensional cluster state}\index{cluster (graph) state} is represented by a graph of the form of a $d$-dimensional lattice. But only cluster states of dimension two and higher can serve as a {\em universal resource} for quantum computation \cite{Nielsen05a}.
\begin{figure}
\vspace{-0.3cm}
\begin{center}
\includegraphics[width=0.25\textwidth]{QFT3.eps}\hspace{0.05\textwidth} \includegraphics[width=0.35\textwidth]{QFT1.eps} \\ \includegraphics[width=0.5\textwidth]{QFT2.eps}
\end{center}
\vspace{-0.5cm}
\caption{\label{Fig:QFT3_Appl} --{\em Quantum Fourier Transformation on $3$ qubits}\index{quantum Fourier transformation (QFT)}-- The upper left graph is a $13\times 5$-cluster state and represents the initial resource for implementing a QFT on three qubits in the framework of the one-way quantum computer. The upper right graph is obtained after removing some superfluous vertices by means of $\sigma_z$-measurements, whereas the lower graph is obtained after performing {\em all} Pauli measurements within the local measurement protocol associated with the QFT, except the measurements on the input and output vertices; this graph corresponds to the `non-classical' resource for the QFT. The form and color of the vertices indicate the different measurement directions involved or whether the respective vertex corresponds to an input or output qubit. }
\end{figure}
In the following we consider the example of the quantum Fourier transform (QFT) on three qubits as depicted in fig.~\ref{Fig:QFT3_Appl} to sketch the basic concept of the $\text{QC}_G$ model. In the standard framework of quantum computation\footnote{For details we refer e.g. to ref.~\cite{NielsenBook}.} this QFT is implemented by performing a sequence of elementary ($2$-qubit) gates operating on the $n=3$ input qubits. In the $\text{QC}_G$ model we start instead with the preparation of a highly entangled resource state via the same entangling procedure as for the cluster state, except that a subset of $n$ of these qubits, representing the input qubits, is initially prepared in the desired input state. The actual computation then consists of a sequence of local measurements, which turns the state on the remaining, non-measured qubits into the desired (pure) output state.
In general, the measurement outcomes of the local measurements have to be stored in a classical register ('information flow vector') of $2n$ bits, since the directions of subsequent measurements have to be adapted according to these results \cite{OneWay1,OneWay2,OneWay3,OneWay5}. Moreover, each measurement round requires an update of the information flow vector that contains the `algorithmic information', from which, at the end, the outcome of the quantum computation\footnote{E.g. as it would have been obtained by a corresponding network of quantum gates.} can be read off directly. The different dependencies of the measurement directions upon previous measurement results induce a temporal ordering, grouping the vertices into sets that can be measured simultaneously; the number of such measurement rounds represents the temporal complexity of the quantum computation in the $\text{QC}_G$ model. In a first measurement round all Pauli measurements can be performed at the same time, since these do not depend on previous measurement results. This fraction of the quantum computation corresponds to the part of the network model that solely consists of Clifford operations. According to the Gottesman--Knill theorem\footnote{See Proposition~\ref{GottKnillTh} in sec.~\ref{Pauli measurements}.}\index{Gottesmann--Knill theorem} this Clifford part can be efficiently simulated on a classical computer. Note that stabilizer circuits are not even universal for classical computation \cite{Aaronson04}.
In the framework of the $\text{QC}_G$ the stabilizer circuits can be dealt with by using only a single measurement round. If the input state is some stabilizer state, e.g. some state of the computational basis, then such a Pauli-measurement round still ends up in a stabilizer state, whereas the states obtained after some subsequent measurements can in general no longer be described within the stabilizer formalism. Note that the first measurement round might as well contain the input and output\footnote{If the objective of the quantum computation is not only to compute a (coherent) quantum state on the output qubits but also to perform a read out in order to obtain some classical result.} vertices, which in general are measured in $\sigma_x$- and $\sigma_z$-direction if the quantum computation is carried out with respect to the computational basis. In other words, input and output qubits can also be measured long before the whole quantum computation is completed, since then the actual result of the quantum computation is given by the information flow vector after the last measurement round.
In fig.~\ref{Fig:QFT3_Appl} we have depicted the graphs that are obtained after performing all $\sigma_z$ measurements and after all Pauli measurements except those at the input and output vertices. The corresponding graph states can still be used to implement the QFT. For this the input state is teleported\footnote{Note that this teleportation can be described within the $\text{QC}_G$ picture. For this the qubit $a_1$ holding the input state, some auxiliary qubit $a_2$ in the $|+\rangle$ state and the qubit at the input vertex $b$ are entangled to form a chain attached to the graph at $b$. Then $a_1$ and $a_2$ are measured in $\sigma_x$-direction. The resulting state on the graph vertices now coincides with the state after some teleportation of the input state into the vertex $b$ (up to LC).} into the input vertices and the usual measurement protocol is applied. Alternatively one can regard the graph as a preparation procedure for an initial resource that already incorporates the input state. More precisely, one prepares the input qubits in the input state and the remaining qubits in the $|+\rangle$ state, entangles the qubits according to the interaction pattern given by the graph and finally carries out the measurement protocol as described above.
In \cite{OneWay1,OneWay2,OneWay3,OneWay5} it is shown that any quantum algorithm within the network picture requiring only a polynomial\footnote{In $n$ input qubits.} amount of temporal, spatial and operational resources can be efficiently simulated by a $\text{QC}_G$ that requires (i) a polynomial overhead of elementary operations in the classical preprocessing to derive the equivalent setup and measurement protocol for the $\text{QC}_G$, (ii) a polynomially bounded amount of classical and quantum resources, and finally (iii) a polynomially increasing time cost for the classical and quantum processing during the computation\footnote{The complexity of the quantum processing is given by the number of local measurements. For the classical processing the number of elementary classical steps for the update of the information flow vector and for the determination of subsequent measurement directions is logarithmic in $n$ and proportional to the number of measurement rounds.}. In this way the $\text{QC}_G$ can serve as a {\em universal} model for quantum computation that has quite promising {\em scaling behavior} for those practical implementations,
in which the resource cluster state or more generally the initial graph state can be prepared by a homogeneous Ising interaction, i.e., independently of the system size. Furthermore, the $\text{QC}_G$ is equivalent \cite{Ve04,1QCequiv} to other {\em measurement-based schemes for quantum computations}.
First results towards fault-tolerant quantum computation with the $\text{QC}_G$ were also obtained in refs.~\cite{OneWay1,OneWay2,OneWay3,OneWay5}. For a reasonable noise model, including noisy cluster state preparation as well as erroneous measurements, a quantum computation subjected to decoherence below a certain threshold can be implemented on a $\text{QC}_G$ in a fault-tolerant way. This means that for sufficiently small noise of a specific type the quantum computation can be implemented on a $\text{QC}_G$ with `any' desired accuracy if one allows for a reasonable overhead in the computational resources. But note that the derived error threshold seems rather unrealistic for practical purposes.
Current research focuses on an improvement of this threshold and a generalization of the underlying noise model combining standard concepts of quantum error correction with some intrinsic error detection possibilities of the $\text{QC}_G$ and purification methods for the underlying graph state as the computational resource (see sec.~\ref{CSS_GS}). Further results on fault tolerant quantum computation can also be found in ref.~\cite{FaultTolOneWay}.
\index{one-way quantum computer ($\text{QC}_G$)|)}
\subsection{Quantum error correcting codes}\label{Application_QEC}\index{quantum error correcting code (QEC)|(}\index{stabilizer code}
For {\em quantum error correcting codes} based on stabilizer codes~\cite{Gottesman} the {\em codewords} as well as the {\em encoding procedures} can be represented as graphs \cite{Schlinge02a,Schlinge02b,Grassl02}. The latter can be understood along the lines of the previous subsection, because the graph state is the computational resource for implementing the encoding process in terms of the $\text{QC}_G$ model. The graphs depicted in fig.~\ref{Fig:CodeAppl}, for example, correspond to the encoding procedures for the {\em five-qubit code} and the {\em concatenated $[7,1,3]$-CSS-code} (Steane code)\index{Steane code}\index{CSS code}, which encode a state on one qubit into some state on five and $49$ qubits, respectively.
\begin{wrapfigure}[12]{r}[0.1\textwidth]{0.55\textwidth}
\vspace{-0.5cm}{
\raisebox{0.3cm}{\includegraphics[width=0.2\textwidth]{5code.eps}}\hspace{0.01\textwidth}
\includegraphics[width=0.4\textwidth]{CSS2.eps}
\caption{\label{Fig:CodeAppl}--{\em Five-Qubit-Code and concatenated CSS-Code}\index{CSS code}-- The graphs representing the encoding procedure for the five-qubit and the concatenated $[7,1,3]$-CSS-code with input (red), auxiliary (blue) and output (black) vertices.}}
\end{wrapfigure}
The encoding consists in preparing the qubit at the (red) input vertex in the input state and the remaining qubits in the $|+\rangle$ state, entangling the qubits according to the graph and finally measuring the (blue) auxiliary vertices\footnote{Whereas the five-qubit code does not require any auxiliary qubits to be measured, the auxiliary vertices of the concatenated CSS-code are located at the corners of the inner cube, where the open side indicates the input qubit.} and the input vertex in the $\sigma_x$-direction. In this way the input state is encoded into the state on the remaining (black) qubits. Alternatively one might as well prepare the graph states depicted in the figures as such, teleport the input state into the input vertex and then perform the same measurement protocol.
When applying this procedure to an eigenstate of one of the Pauli matrices $\sigma_x$, $\sigma_y$ or $\sigma_z$, the encoded state is a graph state and can be regarded as a codeword vector for the respective quantum code (see sec.~\ref{Def_Stab_States}). Note that the concatenated $[7,1,3]$-CSS-code is a concatenation of the Steane code, whose encoding graph is depicted in fig.~\ref{Fig:SteaneCodeAppl}, with itself. This means that each of the $7$ output qubits of the first encoding level is encoded again by the same encoding procedure, such that the overall encoding procedure maps some input state into an encoded state on the $7\times 7=49$ output qubits of the second level. This is graphically reflected by the fact that the right graph in fig.~\ref{Fig:CodeAppl} is obtained from the left graph in fig.~\ref{Fig:SteaneCodeAppl} by attaching another cube at each `output'-corner of the initial cube and merging the former output and input vertex into one auxiliary vertex.
\index{quantum error correcting code (QEC)|)}
\subsection{CSS--states and secret sharing}\label{CSS_GS}
\index{bi-partite graph (state)}\index{two-colorable graph (state)}
The class of {\em CSS states}\index{CSS state} corresponds to the class of {\em two-colorable graphs}\index{two-colorable graph (state)|(}, i.e., graphs that allow for a coloring of the vertices with two colors such that no two adjacent vertices have the same color (see sec.~\ref{GS_Notations}). In general a CSS-code is a stabilizer code whose stabilizer can be generated by stabilizer elements $\sigma$ that consist either solely of $\sigma_x$- or solely of $\sigma_z$-matrices, i.e., $\sigma=\sigma_x^U$ or $\sigma=\sigma_z^U$. By applying Hadamard operations to the vertices of one coloring partition, any two-colorable graph state is easily seen to be of CSS type\footnote{I.e., a CSS code with stabilizer of full rank $\text{rank}(\mathcal{S})=N$.}. Conversely, it is shown in ref.~\cite{Lo04} that any CSS-state is also LC-equivalent to some two-colorable graph state.
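As an example, the star graph of sec.~\ref{GHZ_GS} is two-colorable, with the central vertex $1$ forming one color class and the leaves the other. Applying Hadamard unitaries to the leaves maps the correlation operators $K_1=\sigma_x^1\sigma_z^{V\setminus 1}$ and $K_a=\sigma_z^1\sigma_x^a$ ($a\neq 1$) onto
\begin{equation}
\sigma_x^{V} \hspace{0.7cm}\text{and}\hspace{0.7cm} \sigma_z^1\sigma_z^a \;\;(a\neq 1)\, ,
\end{equation}
a set of generators consisting solely of $\sigma_x$- or solely of $\sigma_z$-matrices, i.e., a stabilizer of CSS type (namely that of the GHZ-state).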
\begin{wrapfigure}[10]{r}{0.45\textwidth}
\vspace{-0.2cm}
\includegraphics[width=0.22\textwidth]{SteaneCode1.eps}\hspace{0.01\textwidth}\includegraphics[width=0.22\textwidth]{SteaneCode2.eps}
\caption{\label{Fig:SteaneCodeAppl} The encoding graph for the Steane code is two-colorable.}
\end{wrapfigure}
In \cite{Lo04} CSS states are used to design a `prepare and measure' protocol for quantum cryptography that can be used for {\em conference key agreement}\footnote{This protocol allows several parties to share a secure conference key in the presence of an eavesdropper.}\index{conference key agreement} and {\em quantum sharing of classical secrets}\footnote{This protocol allows a party to share a classical secret with $n$ parties, such that at least $k$ of these parties are needed to reveal the secret (for some fixed $k>\frac{n}{2}$).}\index{quantum secret sharing}.
We remark that not all Pauli measurements preserve the two-colorability of the underlying graph. Whereas local $\sigma_x$- or $\sigma_z$-measurements on two-colorable graphs yield graph states corresponding to two-colorable graphs, $\sigma_y$-measurements on two-colorable graphs can lead to graph states that are not even locally equivalent to two-colorable graph states \cite{He04}. This might be important in the context of the one-way quantum computer, in particular in the context of fault tolerance. There, graph states corresponding to the non-Clifford part of a computation may be purified via multi-particle entanglement purification, where current protocols require two-colorability of the underlying graph.
\index{two-colorable graph (state)|)}
\subsection{Entanglement purification and secure state distribution}
\label{EPP_Blind}
Despite their central use for quantum information processing purposes, in reality multi-partite entangled pure states, such as graph states, will not be available with unit fidelity. The reasons for this are manifold. For instance, the operations to create these states are always noisy, the distribution of these states among distant parties occurs through noisy quantum channels, and the storage of states is subjected to decoherence. For the class of two-colorable graph states, {\em entanglement purification}\index{entanglement purification} procedures are known that maintain or purify these states. Some of these procedures \cite{Du03a} provably apply to the regime of noisy control operations. The basic idea in entanglement purification is to use several copies of low-fidelity entangled states in order to distill a few highly entangled states. Thereby, one copy of the state with low fidelity is measured in order to reveal information about either one copy (recurrence and pumping schemes) or several other copies (hashing protocols) of the imperfect graph state. For the purification step each of the imperfect copies (mixed states) is transformed\footnote{This transformation is done by means of some local twirling operations (see Proposition~\ref{GraphTwirling} in sec.~\ref{GS_decoherence}), i.e., by averaging over an appropriate local symmetry group.} into a mixed state that is diagonal in a graph state basis associated with the two-colorable graph state in question. Given that the initial fidelity is sufficiently high, an ensemble of these graph diagonal states can then be purified. In \cite{Du03a} genuine multi-party entanglement purification protocols based on recurrence or pumping schemes and on hashing procedures (see also \cite{Lo04}) were considered and compared with procedures using bi-partite purification\footnote{Here one e.g. establishes some highly entangled pairs shared between different parties, in order to teleport the desired $N$-particle state prepared at one party to the remaining parties.}. The different purification schemes have their own advantages. But for local noise, multi-party purification protocols are, in most cases, not only more efficient but also provide better achievable fidelities for the distilled states in the case of noisy control operations. For practical purposes it is useful that the purification regime\footnote{I.e., the purification regime is given by the necessary fidelity of the initial low-entangled copies, such that some higher-entangled state can be distilled.} for the recurrence protocols as well as the error threshold for the noisy control operations involved seem not to depend on the number of particles in the graph state but rather on the maximal degree of the underlying graph. Thus two-colorable graph states provide a reservoir of entangled states between a large number of particles, which can be created and maintained even in the presence of decoherence. Entanglement purification schemes will be discussed in more detail in sec.~\ref{EPP}.
These purification schemes were also modified to {\em blind purification protocols}\index{blind purification}. That is, the purification takes place in such a way that the involved parties (except for a central party) neither know the identity of the state they are attempting to purify nor have any possibility to learn it. Such protocols allow for the secure distribution of two-colorable graph states, which remain unknown to the different parties and any potential eavesdropper \cite{Du05c}. Such {\em secure state distribution} is a quantum primitive in a real--world scenario (i.e., taking imperfections in local control operations and channel noise into account) which may be used as a basic building block for both quantum and classical security applications.
\section{Physical implementations}\label{Implementations}\index{graph state preparation}
Let us briefly discuss a few proposals of how to
prepare graph states in real physical systems, aiming at realizing
some of the applications, such as the one-way quantum computer ($\text{QC}_G$).
In general, graph states do not appear as ground states of
physical systems: naturally occurring interactions are predominantly two-body interactions, and graph states cannot be the exact ground states of two-body Hamiltonians \cite{Nielsen05a}.
Following the preparation
procedure in sec.~\ref{DefOfGS_Int},
graph states can nevertheless be obtained
in any physical system that allows one to implement an Ising interaction
$H_{ab}^I = \sigma_z^a \sigma_z^b$. In particular, graph states
can be generated, of course, in all physical devices for {\it universal}
quantum computation. Clearly,
the complexity of the graph state preparation
depends on the respective computational primitives of the
physical realization \cite{Perdrix}.
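For reference, we recall that the preparation procedure of sec.~\ref{DefOfGS_Int} amounts (up to local phases) to
\begin{equation}
|G\rangle \,=\, \prod_{\{a,b\}\in E} U_{ab}\; |+\rangle^{\otimes V} \, , \hspace{1cm} U_{ab} \,=\, e^{\,i\frac{\pi}{4}\left(\mathbf{1}-\sigma_z^a\right)\left(\mathbf{1}-\sigma_z^b\right)}\, ,
\end{equation}
where each phase gate $U_{ab}$ coincides, up to local $\sigma_z$-rotations and a global phase, with an Ising evolution $e^{\,i\frac{\pi}{4}\sigma_z^a\sigma_z^b}$.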
One may distinguish two classes of physical systems where cluster
or graph state preparation very naturally reflects the underlying
architecture. The first one consists of lattice systems, such as
cold atoms in {\it optical lattices}. Here, the neighborhood
relation is inherited from the physical neighborhood of constituents
in the lattice itself, and by means of appropriate switching of
interactions, one can generate cluster states of some dimension
\cite{Briegel-HaenschBand}.
The second one comprises physical
systems where the neighborhood relation can be freely chosen, such
as in purely optical systems or hybrid setups making use of
`flying' optical qubits for the entangling process and `static'
matter qubits for storage of the actual graph state. We will
briefly sketch both approaches, starting with the former.
The preparation of a two- or three-dimensional cluster state (of
`arbitrary' size) can be regarded as a fixed operational step, if
the Ising interaction can be implemented {\it homogeneously} (for
all directions) throughout the whole lattice
\cite{Briegel01,Briegel-HaenschBand}.
This globally tunable Ising interaction can be realized in optical
lattices with ultra-cold atoms via state-selective displacement of
the atoms and controlled cold collisions \cite{Jaksch99} or via
tunneling \cite{Duan03}. For instance, employing the superfluid-to-Mott-insulator quantum phase transition, a Bose-Einstein condensate can be loaded into an optical lattice, achieving almost unit occupation per lattice site \cite{Jaksch98,Greiner02}.
Interference experiments \cite{Greiner03}
indicate that graph states can be obtained within such setups.
Graph states prepared in optical lattices therefore are a
promising resource for scalable quantum computation in the
framework of the one-way quantum computer, since cluster states
can be created with an operational effort that is independent of
the system size. Existing implementations are currently still
facing at least two major challenges: On one hand, the number of
atoms at each lattice site has to be exactly controllable. In
particular, the presence of defects, i.e., empty lattice sites,
might spoil any quantum computation. \index{one-way quantum
computer ($1$-QC)} On the other hand, the one-way computer
requires the atoms at different sites to be {\it individually
addressable} by local measurements.
A number of proposals have already been made to overcome these
obstacles (see, e.g., refs.~\cite{Kay05,Vollbrecht04,Rabl03}), and
one can say that optical lattices remain one of the prime
candidates to study multi-particle entanglement and scalable
quantum computation. Alternative implementations have also been
suggested for physical systems, in which the underlying
Hamiltonian is given by an (isotropic) {\it Heisenberg
interaction} \cite{Clark04,Loss05}. These proposals can be
realized in XY-spin chains formed within an optical lattice of
neutral atoms as well as in solid-state systems, such as quantum
dots.
The second type of physical systems does not directly exploit
immediate adjacency of the respective constituents: In this class
of physical systems the key point is the fact that one introduces
a separation between the act of creating entanglement and the act
of performing the actual computation. This is of central
importance in particular in setups where the elementary gates
function on a probabilistic basis, or where erasure errors such as
photon losses constitute the predominant obstacle that has to be
overcome. This very much applies to architectures of quantum
computing based on {\it linear optics}. Linear optical setups are
attractive, as photons are relatively robust with respect to
decoherence, and accurate state control is possible using linear
optical elements \cite{KLM}. However, in order to achieve
universal quantum computing based on {\it dual rail encoding} --
where logical qubits are encoded in state vectors $|0\rangle |
1\rangle$ and $|1\rangle | 0\rangle$ of two modes -- measurements
are necessary, rendering any scheme probabilistic. Notably, the
scheme of Knill, Laflamme, and Milburn in ref.~\cite{KLM} employs
the non-linear sign shift gate, acting as
\begin{equation}
x_0 |0\rangle + x_1 |1\rangle + x_2 |2\rangle\mapsto
x_0 |0\rangle + x_1 |1\rangle - x_2 |2\rangle
\end{equation}
as a primitive probabilistic gate, realized using additional
modes and measurement. Here, $|0\rangle$, $|1\rangle$,
and $|2\rangle$
denote the state vectors of states containing $0,1,2$ photons.
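To sketch how this probabilistic primitive yields a two-qubit gate (following the construction of ref.~\cite{KLM}): taking from each of two dual rail qubits the mode that is occupied in one of its logical basis states, combining these two modes on a 50:50 beam splitter, applying an NS gate to each output and undoing the beam splitter affects only the component with both modes occupied (up to beam splitter conventions),
\begin{equation}
|1,1\rangle \;\longrightarrow\; \frac{1}{\sqrt{2}}\left(|2,0\rangle - |0,2\rangle\right) \;\longrightarrow\; -\frac{1}{\sqrt{2}}\left(|2,0\rangle - |0,2\rangle\right) \;\longrightarrow\; -|1,1\rangle \, ,
\end{equation}
which amounts to a conditional sign flip, i.e., a controlled-$\sigma_z$ gate on the two dual rail qubits.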
It has been shown that, although these gates operate on a probabilistic basis (only certain measurement outcomes are accepted as being successful), the overall computational scheme can be boosted to almost unit success probability. This is
done using teleportation, based on
appropriate entangled resources which are
in turn built up by invoking these probabilistic gates.
This is a powerful theoretical idea, heavily exploiting
the previously described stabilizer formalism. So in this way,
near-deterministic quantum computation is possible in principle
using linear optical elements, single-photon sources, photon-resolving
detectors, and feedforward, meaning later action may depend on
earlier measurement outcomes \cite{KLM}.
Practically, the key
obstacle is, however, that the required resources are tremendous
(approximately $10^5$ beam splitters required for a single CNOT gate
operating with a failure rate of $10^{-4}$), essentially originating from
the fact that the elementary gates work with such a small probability
of success \cite{NSBound}.
Graph state methods can indeed significantly reduce this daunting amount of required resources (although still giving rise to challenging prescriptions). In the remainder of this
section, we will thus briefly review recent work discussing graph
states and one-way quantum computation in the context of linear
optical setups.
Ref.~\cite{Reznik} is the first work making use of states related
to graph states to reduce the overhead in resources. In
ref.~\cite{Nielsen04}, the presented scheme has explicitly been
phrased in terms of cluster and graph states, making use of the
setting of the scheme by Knill-Laflamme-Milburn.
The number of necessary resources
was further reduced in refs.~\cite{Browne04,Ralph05},
employing two types of
fusion gates that glue pieces of linear one-dimensional clusters or
of two-dimensional graph states together. Essentially, the basic
resource here are Bell states, glued together using non-deterministic {\it parity-check
measurements} \cite{PC}, which involve the combination of photons on polarizing
beam splitters, followed by measurements on the output modes.
The computational basis is again the one of dual
rail encoding, here specifically the basis
\begin{equation}
|{H}\rangle := |0\rangle|1\rangle,\,\,\,\,
|{V}\rangle := |1\rangle|0\rangle
\end{equation}
corresponding
to a horizontally and vertically polarized photon.
A fusion of the first type amounts to mixing the input,
assumed to contain only one photon per spatial mode,
at a polarizing beam splitter, rotating the output by $45^\circ$
and measuring it with a polarization-discriminating photon
detector. This is the parity-check operation considered in ref.~\cite{PC}. In the case when one and only one photon is detected --
occurring with a probability of $1/2$ and considered the
successful event -- the state is
transformed according to a non-trace-preserving
completely positive map with either of the two Kraus
operators,
\begin{eqnarray}
K_1 &=&
\left(|{H}\rangle\langle {H} | \langle {H} | - |{V}\rangle\langle {V} | \langle {V} | \right)/\sqrt{2} ,\\
K_2 &=&
\left( |{H}\rangle\langle {H} | \langle {H} | + |{V}\rangle\langle {V} | \langle {V} |
\right)/\sqrt{2} .
\end{eqnarray}
Given such a successful event, one glues two pieces
of a graph state together \cite{Browne04}. From
such maximally entangled two-qubit resources,
one can build up linear cluster states.
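As a minimal example of such a gluing step: applying $K_1$ to one photon of each of two Bell pairs $\left(|{H}{H}\rangle + |{V}{V}\rangle\right)/\sqrt{2}$ leaves the three remaining qubits, after renormalization, in the state
\begin{equation}
\frac{1}{\sqrt{2}}\left( |{H}{H}{H}\rangle - |{V}{V}{V}\rangle \right) ,
\end{equation}
a three-qubit GHZ-state, which is LU-equivalent to the graph state of the linear chain on three vertices (see sec.~\ref{GHZ_GS}).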
For a first experimental
demonstration, see ref.~\cite{Zhang05}. To build up
higher-dimensional structures, a variant of a
destructive CNOT gate can be used \cite{PC}, referred to in ref.~\cite{Browne04} as fusion of the second type.
This fusion gate of the second type consists of a polarizing beam splitter, polarization rotators of $45^\circ$ at each input and output, followed again by polarization-resolving photon detection.
These gates are still
probabilistic, yet, failures can be tolerated by merely
rebuilding the affected graph section, assuming that failures
are heralded, meaning that one has classical knowledge of
inappropriate measurement outcomes. First steps towards
fault tolerance with respect to photon
losses have been taken \cite{Trees}.
In this way, graph states
can be prepared from essentially probabilistic ingredients, giving
rise to deterministic quantum computation with smaller overhead in
resources.
Quite recently, arbitrary graph states on four qubits
were experimentally generated in an entirely optical system, with
the four qubits being represented by the polarization states of
four photons \cite{Walther05}.
Although the feasibility of the $\text{QC}_G$ was demonstrated through a
set of one- and two-qubit operations, at the present stage such
experiments constitute a proof of principle rather than a clear-cut
route to scalable quantum computation. This is mainly because the
scalability of the preparation by means of parametric down-conversion is currently
bound to only a few photons. Hence, when using such technology as a source of
entangled photons, the overall success probability decreases exponentially with the number of photons.
An alternative route in the second framework is provided by hybrid
solutions, where the advantages of photons providing coupling
mechanisms and of {\it matter systems} serving as long-lived
storage devices are simultaneously exploited. Based on
schemes that allow for the preparation of entangled
states of matter systems in leaking cavities by means of flying optical qubits
\cite{flyingQubits,Cabrillo},
one can immediately construct schemes for
graph state preparation \cite{Sean,Almut,Benjamin}, even in
setups where no further local rotations of the state of the matter qubits
are required in intermediate steps.
The matter qubits may be spatially separated, and the adjacency
pattern of the graph states to be prepared is then, in principle,
completely arbitrary. Such schemes can even be made
essentially deterministic, in that if an entangling operation fails,
it can often be repeated until a successful operation
occurs, without
damage to the nascent graph state
\cite{Almut,Benjamin}.
Using multi-port linear
optical devices, the need for intermediate
local rotations can also be largely eliminated in the
preparation of the graph states.
Also, whole parts of graph states can be fused together,
in an essentially deterministic manner.
In architectures making use of optically active
electron-nuclear
systems, such as N-V-centers in diamond,
one effectively has a ${\mathbbm{C}}^2 \otimes {\mathbbm{C}}^2 $-system
locally available, which allows one to entangle matter qubits in a way
that is more robust with respect to unavoidable photon loss \cite{NV}.
All these schemes have in common that the neighborhood
relation between constituents is fairly arbitrary, and that arbitrary
graph states can -- in principle -- directly be prepared.
One general lesson to be learned from graph state approaches,
from the perspective of implementations,
is that there are probably
no universally valid requirements that a physical system must meet
in order to allow for scalable quantum computation.
Instead, which computational model is advantageous depends
very much on the underlying physical architecture; this can render
otherwise inappropriate architectures useful and promising.
Schemes for quantum computing based on graph states
promise to significantly lessen the challenges
posed by the obvious requirement in any scheme for quantum
computation that one needs extraordinarily good isolation against
decoherence effects and precise state manipulation at the same time.
\section{Reduced states of graph states}\label{Reduced_GS}\index{reduced state $\rho_G^A$|(}
As we will discuss in sec.~\ref{EntanglementGS}, for pure multi-partite states the reduced state obtained after tracing out (forgetting the information in) some part of the system captures many interesting entanglement properties of the state. More precisely, consider a pure state $|\psi\rangle^{AB}$ of a joint system of parties $A$ and $B$. On the one hand the reduced state
\begin{equation} \rho^A:=\text{tr}_B (\rho)\end{equation}
represents the information available to subsystem $A$ if it is not provided with any information corresponding to measurement statistics on party $B$. The study of entanglement accessible to subsystem $A$ in this way is an interesting issue in itself. On the other hand, for pure states $\rho=|\psi\rangle\langle\psi|$ the state $|\psi\rangle$ is {\em entangled} with respect to the partitioning $(A,B)$, i.e., cannot be written as a
product state $|\psi\rangle^{AB}= |\psi\rangle^{A} |\psi\rangle^{B}$, iff the reduced state $\rho^A=\text{tr}_B (\rho)$ is mixed. Moreover, the mixedness of this reduced state $\rho^A$, for example in terms of some entropic measure, allows one to quantify the amount of entanglement contained in $|\psi\rangle^{AB}$ between the parties $A$ and $B$.
The following proposition shows that for graph states the reduced density matrices can be represented efficiently in terms of their stabilizer elements or their adjacency matrix \cite{He04,Nest04c}.
{\proposition[{\bf Reduced state}]\label{reduced_GS} Let $A\subseteq V$ be a subset of
vertices for a graph $G=(V,E)$ and $B=V\setminus A$ the
corresponding complement in $V$. The reduced state
$\rho^A_G:=\text{tr}_B\left(|G\rangle\langle G|\right)$ is given
by \begin{equation}\label{reduced_GS_1} \rho^A_G= \frac{1}{2^{|A|}}\,
\sum_{\sigma \in \mathcal{S}_A} \, \sigma \; ,\end{equation} where \begin{equation}
\mathcal{S}_A:=\{ \sigma \in \mathcal{S}\, | \, \text{supp} (\sigma)
\subseteq A \}\end{equation} denotes the {\em subgroup} of stabilizer
elements $\sigma\in\mathcal{S}$ for $|G\rangle$ with {\em
support}\footnote{The support of a Pauli operator
$\sigma=\sigma_{i_1}^1\otimes \ldots \otimes \sigma_{i_N}^N $ is
the set of all indices $a\in V$ for which $\sigma$ acts
non-trivially, i.e., $i_a\neq 0$.}\index{support $\text{supp}(\sigma)$} on the set of vertices within
$A$. $\rho_G^A$ is up to some factor a projection, i.e., \begin{equation}\label{RhoGProj} \left(\rho_G^A\right)^2 = \frac{|\mathcal{S}_A|}{2^{|A|}}\, \rho_G^A\; .\end{equation} It projects onto the subspace in
$\mathbf{H}^A$ spanned by the vectors \begin{equation} |\mathbf{\Gamma}' B'\rangle_{G[A]} = \sigma_z^{\mathbf{\Gamma}' B'}\, |G[A]\rangle\hspace{1cm}
(B'\subseteq B) \; , \end{equation} where $G[A]=G\setminus B$ is the subgraph
of $G$ induced by $A$ and
$\mathbf{\Gamma}':= \mathbf{\Gamma}^{AB}$ denotes the $|A|\times
|B|$--off--diagonal sub-matrix of the adjacency matrix $\mathbf{\Gamma}$ for
$G$ that represents the edges between $A$ and $B$:
\begin{equation}\label{Gamma for bi-partition}\index{adjacency matrix $\mathbf{\Gamma}$}
\left(\begin{array}{cc}
\mathbf{\Gamma}_{A} & \mathbf{\Gamma}_{AB} \\
\mathbf{\Gamma}_{AB}^T & \mathbf{\Gamma}_{B} \\
\end{array}\right)
= \mathbf{\Gamma} \;.
\end{equation} In this basis, $\rho^A_G$ can be written as \begin{equation} \label{reduced_GS_2}
\rho^A_G = \frac{1}{2^{|B|}}\, \sum_{B'\subseteq B}\, |\mathbf{\Gamma}' B'
\rangle_{G[A]}\langle \mathbf{\Gamma}' B' |\; . \end{equation} }
{\em Proof:} Eq.~(\ref{reduced_GS_1}) immediately follows from
eq.~(\ref{GS_Projector}) and the fact that the partial trace of
$\sigma=\sigma_{i_1}^1\otimes \ldots \otimes \sigma_{i_N}^N
\in\mathcal{S}$
can be taken successively over the different vertices $b\in B$
and gives $\text{tr}_{b} (\sigma_{i_b}^b) = 2 \delta_{i_b 0} $.
$\rho_G^A$ is proportional to a projection, i.e., \begin{equation} \left(\rho_G^A\right)^2
= \frac{1}{4^{|A|}}\, \sum_{\sigma,\sigma' \in \mathcal{S}_A} \,
\sigma \sigma' = \frac{|\mathcal{S}_A|}{4^{|A|}}\, \sum_{\sigma \in \mathcal{S}_A} \,
\sigma = \frac{|\mathcal{S}_A|}{2^{|A|}}\, \rho_G^A \end{equation} follows, because $\mathcal{S}_A$ is a subgroup of $\mathcal{S}$. To show
eq.~(\ref{reduced_GS_2}), the partial trace over $B$ can be taken
in the basis of $B$ given by \begin{equation} |B'\rangle^B_z \equiv
\bigotimes_{b\in B} |B'_b\rangle^b_z := \sigma_x^{B'}|0\rangle^B
\; . \end{equation} This basis decomposition corresponds to successive local
$\sigma_z$-measurements of all vertices in $B$. The set
$B'\subseteq B$ or the corresponding binary vector determines the
measurement outcomes, i.e., if $b \notin B'$, or equivalently if the
corresponding component of the binary vector vanishes ($B'_b=0$), then the
measurement outcome at this vertex is $+1$, and $-1$ otherwise.
According to Proposition~\ref{Pauli_Measurement}, after
measurement of $\sigma_z^{b}$ the state of the remaining vertices
is the graph state vector $| G \setminus b\rangle$ in the case of
the outcome $+1$, and $\sigma_z^{N_b} | G \setminus b\rangle$ if the outcome is $-1$.
This can be summarized to
\begin{equation}
\left(
\sigma_z^{N_b}\right)^{B'_b} | G\setminus b
\rangle,
\end{equation}
since $B'_b \in \{0,1\}$ represents the measurement result $\{+1,-1\}$. Because the subsequent measurements commute with the previous local unitaries, the final state vector according to the result
$B'=(B'_b)_{b\in B}\in {\mathbbm F}_2^B$ is
\begin{equation}
\prod_{b \in B} \left( \sigma_z^{N_b} \right)^{B'_b}\, | G\setminus B\rangle^A \otimes |B'\rangle_z^B
\; =\; \sigma_z^{\left(\sum_{b\in B} N_b B'_b\right)} \, | G[A]\rangle^A \otimes |B'\rangle_z^B
\; = \; \sigma_z^{\mathbf{\Gamma} B'}\, | G[A]\rangle^A \otimes
|B'\rangle_z^B \; ,\end{equation} because the sum is performed modulo 2, and
for all $b\in B$ the binary vector corresponding to $N_b$ is the
$b$-th column of the adjacency matrix $\mathbf{\Gamma}$. Distinguishing the
two parts of the adjacency matrix $\mathbf{\Gamma}_B$ and $\mathbf{\Gamma}_{AB}$
that correspond to edges within $B$, and edges between $A$ and $B$,
one arrives at the state vector associated with the measurement
result given by $B'$
\begin{equation}\label{SchmidtdecompSummands} (-1)^{\langle B' | \mathbf{\Gamma}_{B} B'
\rangle}\, \sigma_z^{\mathbf{\Gamma}_{AB} B'} \,| G[A]\rangle^A \otimes
|B'\rangle^B_z \; =\; (-1)^{\langle B' | \mathbf{\Gamma}_{B} B'
\rangle}\, | \mathbf{\Gamma}_{AB} B'\rangle^A_{G[A]} \otimes |B'\rangle^B_z
\; . \end{equation} Because each of the $2^{|B|}$ possible measurement results $B'$ is attained
with probability $1/2^{|B|}$, this proves eq.~(\ref{reduced_GS_2}). \hfill\fbox\\\medskip
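As a simple example, consider the connected graph on the two vertices $V=\{1,2\}$, for which the stabilizer reads $\mathcal{S}=\{\mathbf{1},\, \sigma_x^1\sigma_z^2,\, \sigma_z^1\sigma_x^2,\, \sigma_y^1\sigma_y^2\}$. For $A=\{1\}$ only the identity has support within $A$, so that $\mathcal{S}_A=\{\mathbf{1}\}$ and, by eq.~(\ref{reduced_GS_1}),
\begin{equation}
\rho_G^{\{1\}} \,=\, \frac{1}{2}\, \mathbf{1} \, ,
\end{equation}
i.e., the single-qubit reduced state of this Bell-type graph state is maximally mixed, in accordance with eq.~(\ref{reduced_GS_2}).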
\index{reduced state $\rho_G^A$|)}
\section{Equivalence classes under local operations}\label{Local_Equivalence}
\index{equivalence under!local Clifford unitaries (LC)|(}
\index{equivalence under!local unitaries (LU)|(}
\index{equivalence under!stochastic local operations and classical
communication (SLOCC)|(}
For a characterization of the entanglement in graph states we now
examine the equivalence classes under local operations. Concerning
locality we restrict to the finest partitioning, i.e., each vertex
$a\in V$ represents a single party and the quantum operations
$\mathcal{E}$ in question are a certain subclass of the class of
all completely positive maps (CPM) that are {\em separable with
respect to this finest partitioning}. Since we are interested in
the equivalence classes of graph states under these local
operations, we can consider the situation in which a pure state
$|\psi_1 \rangle$ is mapped onto another pure state
$|\psi_2\rangle$ by the CPM $\mathcal{E}$ with non-zero
probability. It is generally quite a subtle problem to
characterize the class of all transformations $\mathcal{E}$ that
can be implemented by means of {\em local operations and classical
communication} (LOCC), and in the following we restrict to a
subclass of LOCC-protocols where $\mathcal{E}$ factors out as the
tensor product of a local operator $E_i$ for each party: \begin{equation}
\mathcal{E}(\rho) = E_1^1\otimes \ldots \otimes E_N^N \, \rho \,
(E_1^1)^\dagger\otimes \ldots \otimes (E_N^N)^\dagger \; .\end{equation} This
means that the pure state $|\psi_1 \rangle$ is converted (in
general: stochastically) into the state $|\psi_2 \rangle=
E_1^1\otimes \ldots \otimes E_N^N \, |\psi_1 \rangle $.
In the following we will consider three different classes of local
operations, namely
\begin{itemize}
\item {\bf SLOCC:} invertible {\em stochastic local operations and
classical communication}, i.e., the operation $E_a\in
\text{SL}(2,\mathbb{C})$ at each qubit is an arbitrary invertible
matrix; SLOCC-equivalence occurs typically with nonunit
probability; \item {\bf LU:} arbitrary local {\em unitaries}, i.e.
the operation $E_a\in \text{SU(2)}$ at each qubit is some unitary
operation; LU-equivalence occurs with unit probability; \item {\bf
LC:} {\em local Clifford unitaries}, i.e., the operation $E_a\in
\mathcal{C}_1$ at each qubit is one of the Clifford unitaries
introduced in sec.~\ref{Def_LC}; LC-equivalence occurs with unit
probability;
\end{itemize}
For $\text{X} \in \{\text{LC}, \text{LU}, \text{SLOCC}\}$, two
states $|\psi_1 \rangle$ and $|\psi_2 \rangle$ are called
$\text{X}-$equivalent if there exists an $\mbox{X}-$operator
$\mathbf{X}$ such that $\mathbf{X}|\psi_2 \rangle\sim|\psi_1
\rangle$. We will frequently write \begin{equation} |\psi_1 \rangle
\longleftrightarrow_\text{X} |\psi_2 \rangle \end{equation} if both states
are {\em X-equivalent}.
Let us first give a brief overview of entanglement classes for
arbitrary states, i.e., not necessarily graph or stabilizer
states. We start with the case of two $d$-level systems, i.e., we
consider bi-partite entanglement. Here the {\em Schmidt
decomposition} serves as a well-known standard form, from which
the conditions for different types of interconvertibility can be
read off. A bi-partite state $|\psi_1\rangle$ can be transformed
into the state $|\psi_2\rangle$ \begin{equation} |\psi_1 \rangle
\longrightarrow_\text{X} |\psi_2 \rangle
\hspace{0.7cm}\text{for}\hspace{0.5cm} \text{X} \in \{\text{LU},
\text{LOCC}, \text{SLOCC}\} \end{equation}
by means of (i) SLOCC, (ii) LOCC or (iii) LU operations iff for the
corresponding Schmidt decomposition of
\begin{equation} |\psi_1\rangle^{AB}=\sum_{i=1}^{R_1} \lambda^1_i |i\rangle^A
|i\rangle^B \hspace{0.7cm} \text{and}\hspace{0.7cm}
|\psi_2\rangle^{AB}=\sum_{j=1}^{R_2} \lambda^2_j |j\rangle^A
|j\rangle^B \end{equation} (i) $R_1\geq R_2$, (ii) the list of squared coefficients
$\left((\lambda^1_i)^2\right)_{i=1}^{R_1}$ is majorized by
$\left((\lambda^2_j)^2\right)_{j=1}^{R_2}$, or (iii) the coefficient list
$(\lambda^1_i)_{i=1}^{R_1}$ coincides with
$(\lambda^2_j)_{j=1}^{R_2}$ up to permutations, respectively. It follows that in
the bi-partite case the SLOCC-equivalence leads to $d$ distinct
equivalence classes corresponding to the different values for the
{\em Schmidt rank}\index{Schmidt rank $\text{SR}_A$} $R$. On the
other hand, there is an infinite number of bi-partite
LU-equivalence classes, which are parameterized by the $d$ {\em
Schmidt coefficients} $\lambda_i$\index{Schmidt decomposition}.
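For instance, criterion (ii) allows for the deterministic conversion
\begin{equation}
\sqrt{\tfrac{1}{2}}\,|0\rangle^A|0\rangle^B + \sqrt{\tfrac{1}{2}}\,|1\rangle^A|1\rangle^B \;\longrightarrow_\text{LOCC}\; \sqrt{\tfrac{4}{5}}\,|0\rangle^A|0\rangle^B + \sqrt{\tfrac{1}{5}}\,|1\rangle^A|1\rangle^B \, ,
\end{equation}
since $(\tfrac{1}{2},\tfrac{1}{2})$ is majorized by $(\tfrac{4}{5},\tfrac{1}{5})$, whereas the converse transformation succeeds only stochastically, i.e., under SLOCC.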
Moving to multi-partite entanglement, in the case of three qubits
it was shown in ref.~\cite{Du00b} that there are $6$ orbits under
SLOCC-operations. But true three-qubit entanglement is only
contained in the two distinct classes represented by the GHZ-state
$|GHZ\rangle$ (see sec.~\ref{GHZ_GS}) and the W-state
\index{W-state} \begin{equation} |W\rangle = \frac{1}{\sqrt{3}}\,\left(
|0,0,1\rangle + |0,1,0\rangle + |1,0,0\rangle \right) \; .\end{equation} Note that
all two-qubit states are SLOCC-equivalent to a two-qubit graph
state corresponding to either the empty graph (product state) or
connected graph (Bell state). But the W-state is an example of a
pure state on three qubits that is not SLOCC-equivalent to some
graph state. This is because the GHZ-state $|GHZ\rangle$ is the
only `connected' graph state with three vertices, as we will see
below.
For $N$-qubit systems with $N\geq 4$ the number of orbits under
SLOCC-operations is infinite and is in fact specified by an
exponentially increasing number of parameters. The latter scaling
behavior for the number of required parameters is due to the fact
that a generic orbit is specified (up to some irrelevant complex
constant) by $3N$ complex parameters describing the group
$\text{SL}(2,\mathbb{C})^{\otimes N}$ in question whereas a pure
state on $(\mathbb{C}^2)^{\otimes N}$ is specified by $2^N-1$
complex parameters (neglecting an overall complex phase). For $4$
qubits it was shown in ref.~\cite{Verstraete02a} that there exists a
standard form for generic states under SLOCC-operations that is
determined by $3$ complex parameters and $8$ further standard
forms corresponding to classes of `degenerate' pure states.
Needless to say, the number of LU- or LC-equivalence classes
necessarily has to be specified by even more parameters, since the
corresponding matrix groups are contained in
$\text{SL}(2,\mathbb{C})$.
In \cite{Verstraete03a} it is shown that any multi-partite state
can be transformed by SLOCC-operations into some {\em standard form} that is unique up to LU, while
maximizing all entanglement monotones\footnote{More precisely the
{\em entanglement monotones} in question are linearly homogeneous
positive functions of the state that remain invariant under
determinant-1-SLOCC operations.}. In this way the problem of
deciding, whether two states are SLOCC-equivalent, can be reduced
to the problem of deciding, whether the standard forms of these
states are LU-equivalent. In this formalism all pure states
$|\psi\rangle$ with maximally mixed single-qubit reduced density
matrices \begin{equation} \rho_\psi^a = \text{tr}_{V\setminus a}(|\psi
\rangle\langle \psi |)= \frac{1}{2}\, \mathbf{1}_a \hspace{1cm}
\forall a\in V \end{equation}
are standard forms of this local filtering sequence and
they are {\em maximally entangled}\index{maximally entangled} in
that these states maximize all entanglement monotones.
Let us return to the question of equivalence under local
operations for graph states. We start with SLOCC-equivalence.
According to Proposition~\ref{reduced_GS}, any single-qubit
reduced state $\rho_G^a$ is maximally mixed for all graph states
corresponding to connected graphs. Therefore, `connected' graph
states are already in standard form under SLOCC in the above
sense. Thus, when restricting to connected graph states all
SLOCC-equivalence classes coincide with LU-equivalence classes;
this property can easily be extended to the case of general (not
necessarily `connected') graph states by considering each
connected component separately. We arrive at the following result:
{\proposition[\bf SLOCC- equals LU-equivalence] Two graph states
$|G_1\rangle$ and $|G_2\rangle$ are SLOCC-equivalent iff they are
LU-equivalent \cite{Nest04c}: \begin{equation} |G_1 \rangle
\longleftrightarrow_\text{SLOCC} |G_2 \rangle \hspace{0.2cm}
\Longleftrightarrow \hspace{0.2cm} |G_1 \rangle
\longleftrightarrow_\text{LU} |G_2 \rangle \; .\end{equation} }
This result was first obtained in ref.~\cite{Nest04c}. We remark that
whether for graph states also \begin{equation} |G_1 \rangle
\longleftrightarrow_\text{LU} |G_2 \rangle \hspace{0.7cm}
\Longrightarrow \hspace{0.7cm} |G_1 \rangle
\longleftrightarrow_\text{LC} |G_2 \rangle \end{equation} holds is still an
open question. Note that the backward implication is trivial, since
the group of LC-unitaries is a proper subgroup of all LU. For a
large subset\footnote{A stabilizer element $\sigma \in
\mathcal{S}$ with {\em minimal} support\index{support
$\text{supp}(\sigma)$!minimal} $\text{supp}(\sigma)$, i.e., no
other stabilizer element $\sigma'\in \mathcal{S}$ has a support
$\text{supp}(\sigma')$ that is a proper subset of
$\text{supp}(\sigma)$, is called {\em minimal element} of
$\mathcal{S}$. Let $\mathcal{M}$ denote the subgroup generated by
all minimal elements in $\mathcal{S}$. Now the notion of LU- and
LC-equivalence coincide for all stabilizer states, for which
$\sigma_x$, $\sigma_y$ and $\sigma_z$ occurs at each qubit in
$\mathcal{M}$.} of graph states, however, it was shown in
\cite{Nest04d} that both notions of equivalence coincide. The
hypothesis of a general coincidence for all graph states is
further supported by results \cite{CliffordInvariants} about the
corresponding invariants under these operations, which will be
briefly reviewed later in this section, and the classification of
graph states with up to seven vertices that we will discuss below.
Note that a general coincidence of SLOCC-, LU- and LC-equivalence
for graph states would be particularly advantageous for the
following two reasons: since in this case all three local equivalences
would correspond to LC-equivalence, firstly, checking whether two
given graph states are locally equivalent could then be done
efficiently (see sec.~\ref{BinaryRepr}); secondly, all three local
equivalences would be entirely described by the local
complementation rule, yielding a description of local equivalence
of graph states in simple, purely graph theoretic terms.
To distinguish the different equivalence classes under LU
(or, equivalently, SLOCC) we will first derive a simple (though
not complete) set of invariants that can be efficiently computed
for graph states from the underlying adjacency matrix of the
graph. As mentioned above, for any pure state $|\psi\rangle^{AB}$
in the joint system $\mathbf{H}_A\otimes\mathbf{H}_B$ of two
parties $(A,B)$ with arbitrary dimensionality
$d_A=\text{dim}_{\mathbb{C}} \mathbf{H}_A$ and
$d_B=\text{dim}_{\mathbb{C}} \mathbf{H}_B$ the Schmidt
rank\footnote{Note that from the Schmidt decomposition it follows
that $\text{SR'}^{A}(\psi) =\text{SR'}^{B}(\psi) $.}
$\text{SR'}^{A}(\psi) := \text{rank}\left(\rho_\psi^A \right)$ is
an entanglement monotone with respect to $(A,B)$-local operations.
In the case where $(A, B)$ is a bi-partition of a many-qubit system
we will for simplicity consider not the rank
$\text{SR'}^{A}(\psi)$ of the reduced state itself but rather its
logarithm to base $2$, i.e., \begin{equation}
\text{SR}_{A}(\psi) :=
\text{log}_2\left[\text{rank}\left(\rho_\psi^A \right)\right]\;
.\end{equation} With this notational simplification the Schmidt rank
$\text{SR}_{A}(\psi)$ of a pure state $|\psi\rangle$ has the
transparent interpretation\footnote{This follows straightforwardly
from statement (i) in this section about the interconvertibility
of pure bi-partite states under SLOCC-operations.} in terms of the
maximal number of Bell pairs that
\begin{itemize}
\item are required to prepare $|\psi\rangle$ and \item can be
extracted from $|\psi\rangle$
\end{itemize}
with finite probability of success (i.e., under SLOCC-operations).
\begin{wrapfigure}[20]{r}{0.5\textwidth}
\vspace{-0.5cm}
\hspace{0.25cm}\includegraphics[width=0.5\textwidth]{FIG8.eps}
\caption{\label{fig:LUclassExample2} An example of an equivalence
class which is a proper subset of the class No.\ 4 in List~{\bf A}
but which is not LC-equivalent to any of the graphs depicted in
fig.~\ref{fig:LUruleExample1}. With the LC-rule in
sec.~\ref{Def_LC} it is straightforward to check that all graphs
within this class are LC-equivalent.}
\end{wrapfigure}
We will now study the entanglement in graph states $|G\rangle$ by
considering the Schmidt rank $\text{SR}_{A}(G)$ with respect to
different bi-partitions $(A,B)$. For a bi-partition
$(A,B)$\footnote{I.e., $A\cup B=V$ and $A\cap B=\emptyset$.} of a
graph $G=(V,E)$ we can again use the decomposition for the
adjacency matrix $\mathbf{\Gamma}$ of eq.~(\ref{Gamma for
bi-partition}) into $\mathbf{\Gamma}_A$, $\mathbf{\Gamma}_B$ and
$\mathbf{\Gamma}_{AB}$ according to edges within $A$, edges within
$B$ and those edges between $A$ and $B$. In this notation the
corresponding Schmidt rank $\text{SR}_{A}(G)$ is simply given by
the binary rank (i.e., the rank over GF(2)) of the
$|A|\times|B|$-off-diagonal sub-matrix
$\mathbf{\Gamma}'=\mathbf{\Gamma}_{AB}$.
{\proposition[\bf Schmidt rank]\label{Schmidt_rank} Let $(A,B)$ be
a bi-partition for some graph state $|G\rangle$. Then the Schmidt
rank of the graph state with respect to this bi-partition is given
by \begin{equation}\label{SR1} \text{SR}_{A}(G) = \text{rank}_{\mathbb{F}_2}
\left(\mathbf{\Gamma}'\right) \; .\end{equation} Alternatively, the Schmidt
rank is determined in terms of the rank\footnote{See
eq.~(\ref{GroupRank}) in sec.~\ref{Def_Stab_States}.} of the
subgroup $\mathcal{S}_A$ of stabilizer elements with support in
$A$ by the formula: \begin{equation}\label{SR2} \text{SR}_{A}(G) = |A| -
\text{rank}\,(\mathcal{S}_A)\; .\end{equation} }
{\em Proof:} Because of Proposition~\ref{Graph state basis}, the
linear independence of the vectors
$|\mathbf{\Gamma}'B'\rangle_{G[A]}$\footnote{In the graph state
basis for the graph $G[A]$ of Proposition~\ref{reduced_GS}.} is in
one-to-one correspondence to the linear independence of the
corresponding vectors $\mathbf{\Gamma}'B'$ over $\mathbb{F}_2^A$.
Hence we find for the rank of the reduced state $\rho_G^A$
\begin{eqnarray}
\text{rank}\left(\rho_G^A \right) & = & \text{dim}_\mathbb{C} \,
\text{span}\left\{|\mathbf{\Gamma}'B'\rangle_{G[A]} \,|\,
B'\subseteq B \right\} \nonumber \\ & = &
\text{dim}_{\mathbb{F}_2} \, \text{span}\left\{\mathbf{\Gamma}'B'
\,|\, B'\subseteq B \right\} \nonumber \\ & = &
\text{rank}_{\mathbb{F}_2}\left(\mathbf{\Gamma}'\right) \; .
\end{eqnarray} Taking the $\text{log}_2$ we obtain
eq.~(\ref{SR1}). Eq.~(\ref{SR2}) follows from the fact that
according to eq.~(\ref{RhoGProj})
$\frac{2^{|A|}}{|\mathcal{S}_A|} \rho_G^A$ is a projection and
thus we can compute the rank alternatively as \begin{equation}
\text{rank}\left(\rho_G^A \right) \; = \;
\text{tr}\left(\frac{2^{|A|}}{|\mathcal{S}_A|} \rho_G^A\right) \; = \;
\frac{2^{|A|}}{|\mathcal{S}_A|} \,\text{tr} (\rho_G^A) \; =\;
\frac{2^{|A|}}{|\mathcal{S}_A|} \; .\end{equation} Taking again the
$\text{log}_2$ we obtain eq.~(\ref{SR2}), since the minimal number
of generators for $\mathcal{S}_A$ is given by
$\text{log}_2(|\mathcal{S}_A|)$. \hfill\fbox\\\medskip
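Since eq.~(\ref{SR1}) involves only the binary rank of a sub-matrix of $\mathbf{\Gamma}$, the Schmidt rank of a graph state can be computed efficiently. A minimal Python sketch for illustration (the helper \texttt{gf2\_rank} and the example graph are chosen here for concreteness), applied to the linear chain $1$--$2$--$3$--$4$ with $A=\{1,3\}$, might look as follows:
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    # Rank of a binary matrix over the field F_2, via Gaussian elimination.
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivots = [r for r in range(rank, M.shape[0]) if M[r, c]]
        if not pivots:
            continue
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]  # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                      # eliminate modulo 2
        rank += 1
    return rank

# Adjacency matrix Gamma of the linear chain 1-2-3-4 (0-indexed vertices).
Gamma = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])

A = [0, 2]                                 # bipartition A = {1, 3}
B = [v for v in range(len(Gamma)) if v not in A]
print(gf2_rank(Gamma[np.ix_(A, B)]))       # prints 2, i.e. SR_A(G) = 2
\end{verbatim}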
\begin{table}
\begin{center}
\begin{tabular}{||c|cccccccc||}
\hline
$A$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{4\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{1,4\}$\\
$B=V\setminus A$ & $\{1,2,3,4\}$ & $\{2,3,4\}$ & $\{1,3,4\}$ & $\{1,2,4\}$ & $\{1,2,3\}$ & $\{3,4\}$ & $\{2,4\}$ & $\{2,3\}$\\
\hline \hline
$\text{SR}_A(G_1)$ & 0 & 1 & 1 & 1 & 1 & 1 & 2 & 2 \\
$\text{SR}_A(G_2)$ & 0 & 1 & 1 & 1 & 1 & 2 & 1 & 2 \\
\hline
\end{tabular}
\end{center}
\caption{\label{SR_List_Comparison} The list of Schmidt ranks
$\text{SR}_A(G_1)$ and $\text{SR}_A(G_2)$ for the graphs $G_1$ in
fig.~\ref{fig:LUruleExample1} and for the graphs $G_2$ in
fig.~\ref{fig:LUclassExample2}. }
\end{table}
Thus, for any partition $(A,B)$ the Schmidt rank $\text{SR}_A(G)$
is an invariant under arbitrary local unitaries that can be
formulated in purely graph theoretic terms. We now consider the
list of Schmidt ranks with respect to all possible
bi-partitions\footnote{Since $\text{SR}_A(G)=\text{SR}_B(G)$ the
different bi-partitions are fixed by choosing the smaller
partition, say $A$, of the bi-partition $(A,B)$. This gives
$2^{N-1}$ possible bi-partitions.}. This yields a set of
invariants which has already been considered in graph theory under
the name {\em connectivity function }\index{connectivity function}
\cite{Bouchet}. For example, a comparison of the invariants for
the graphs depicted in fig.~\ref{fig:LUruleExample1} and
fig.~\ref{fig:LUclassExample2} shows that the corresponding lists
of Schmidt ranks within each of these figures coincide but differ
between the two figures (see Tab.~\ref{SR_List_Comparison}). This
implies that the corresponding sets of graph states are
equivalent neither under LC-operations nor under general local
unitaries.
\begin{figure}
\hspace{4cm}\includegraphics[width=0.4\textwidth]{Petersen.eps}
\caption{\label{fig:PetersenGraph} The Petersen Graph. The
depicted labeled graph is not LC-equivalent to the graph which is
obtained from it by exchanging the labels at each end of the five
"spokes", i.e., the graph isomorphism which permutes the vertices
$1,2,3,4$ and $5$ with $6,7,8,9$ and $10$, respectively. However,
the lists of Schmidt ranks (or, equivalently, the connectivity
functions) of these graphs coincide.}
\end{figure}
We note that the Schmidt rank list does \emph{not} provide a {\em
complete set of invariants}\index{local invariants!for graph
states} that would characterize all equivalence classes under
LC-operations. For the {\em Petersen graph}\footnote{This
counter-example was first discovered in ref.~\cite{Flaass96}.}\index{Petersen graph (state)} shown in
fig.~\ref{fig:PetersenGraph} and the isomorphic graph, which is
obtained from it by exchanging the labels at each end of the five
``spokes'', no sequence of local complementations exists that transforms one
graph into the other, although the Schmidt rank lists for both
graphs coincide.
In \cite{CliffordInvariants} a complete set of polynomial LU
invariants for graph states is obtained (in terms of binary
trees). As stated in eq.~(\ref{SR2}) the set of invariants
$\{\text{SR}_A(G)\}_{A\subseteq V}$, corresponding to polynomial
invariants of degree $k=2$, can for stabilizer states be
formulated in terms of the dimension of a subspace of the
stabilizer $\mathcal{S}$. It was shown in
\cite{CliffordInvariants} that, similarly, the invariants of
degree $k\geq 3$ correspond to dimensions of certain subspaces of
the $(k-1)$-fold direct product $\mathcal{S}^{\times (k-1)}$.
Ref.~\cite{CliffordInvariants} also provides a finite complete set
of polynomial invariants for the smaller group $\mathcal{C}_1^N$
of local Clifford unitaries (see sec.~\ref{Def_LC}). For graph
states these LC-invariants of degree $k$ are again given by the
dimension of a subspaces in $\mathcal{S}^{\times (k-1)}$. Note
that the algebra of polynomial LC-invariants is in general larger
than the algebra of polynomial LU-invariants. In
\cite{CliffordInvariants} it is shown that the set of polynomial
LU-invariants for the degree $k=2$ and $k=3$ are equivalent to the
corresponding sets of LC-invariants. These results support the
conjecture (see sec.~\ref{Def_LC} and sec.~\ref{Local_Equivalence})
that for graph states the notion of LU-equivalence and
LC-equivalence coincide. \index{local invariants!polynomial|)}
\index{equivalence classes|(} In the remainder of this section we
will now discuss LU-equivalence for graphs with $N\leq 7$
vertices\footnote{The classification of non-equivalent graph
states naturally generalizes to the case of stabilizer states or
codes, for which the graph represents a particular standard form
(see sec.~\ref{Def_Stab_States} and sec.~\ref{Application_QEC}).
In the quest for good error correcting (self-dual) codes such a
classification has recently attracted some attention. A similar
classification can also be found in refs.~\cite{Glynn02,Hoehn03} and has
been extended to graphs with up to $N=12$ vertices in
refs.~\cite{database,Glynn04}.}.
We have examined the graph states of
all non-isomorphic, connected graphs with up to seven vertices.
More precisely, from the set of all possible graphs with $7$
vertices ($2^{\tbinom{7}{2}} \approx 2\times 10^{6}$
possibilities), we consider the subset of $1252$ graphs on up to
$7$ vertices that are non-isomorphic with respect to graph
isomorphisms. Note that a graph isomorphism physically amounts to
an exchange of particles. We furthermore restrict to those $996$
states that correspond to connected graphs\index{connected graph
(state)}\index{disconnected graph (state)}. This is because a
state $|G\rangle$ corresponding to a disconnected graph $G$ is
simply the tensor product $|G\rangle=|G_1\rangle^{A_1}\otimes
\ldots \otimes |G_M\rangle^{A_M}$ of the graph states
$|G_i\rangle$ corresponding to the connected components $G_i$ of
$G$, where $(A_1,\ldots,A_M)$ $(M\leq N)$ is some partitioning of
the vertex set $V$. Thus all entanglement properties of the
composite state $|G\rangle$ are essentially determined by the
entanglement properties of its components $|G_i\rangle$. In
particular, such a $(A_1,\ldots,A_M)$-product state is LU- or
LC-equivalent to some other graph state $|G'\rangle$ iff
$|G'\rangle$ allows for a decomposition
$|G'\rangle=|G'_1\rangle^{A_1}\otimes \ldots \otimes
|G'_M\rangle^{A_M}$ with respect to the same partitioning
$(A_1,\ldots,A_M)$ with components $|G'_i\rangle$ that are LU- or
LC-equivalent to the respective states $|G_i\rangle$ for
$|G\rangle$. The $996$ isomorphism classes of corresponding
graph states then turn out to split into only $46$ classes that
are inequivalent under local unitary operations.
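For completeness, a minimal Python sketch of the connectivity test used in this restriction step (the actual enumeration in this work was carried out with the MATHEMATICA package mentioned below; the helper here is only an illustrative stand-in):
\begin{verbatim}
def is_connected(adj):
    # Depth-first search from vertex 0; the graph is connected iff
    # every vertex is reached.
    n = len(adj)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for u in range(n):
            if adj[v][u] and u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == n
\end{verbatim}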
\begin{table}
\begin{center}
\begin{tabular}{||c||c|c||c|c||}
\hline
& & non-isomorphic & & non-isomorphic \\
N & non-isomorphic & and non-LC-equivalent & non-isomorphic & and non-LC-equivalent \\
& graphs & graphs & connected graphs & connected graphs\\
\hline \hline
1 & 1 & 1 & 1 & 1 \\
2 & 2 & 2 & 1 & 1 \\
3 & 4 & 3 & 2 & 1 \\
4 & 11 & 6 & 6 & 2 \\
5 & 34 & 11 & 21 & 4 \\
6 & 156 & 26 & 112 & 11 \\
7 & 1,044 & 59 & 853 & 26 \\
8 & 12,346 & 182 & 11,117 & 101 \\
9 & 274,668 & 675 & 261,080 & 440 \\
10 & 12,005,168 & 3,990 & 11,716,571 & 3,132 \\
11 & 1,018,997,864 & 45,144 & 1,006,700,565 & 40,457 \\
12 & 165,091,172,592 & 1,323,363 & 164,059,830,476 & 1,274,068 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.5cm} \caption{\label{numbers_of_GS} In the first column
the number of graph states with $N=1,\ldots,12$ vertices is
listed that are non-isomorphic under graph isomorphisms. The third
column instead contains the corresponding number of non-isomorphic
graph states that correspond to connected graphs. The values for
both columns are taken from \cite{online ency}, where they can be
found under the sequence numbers {\em A000088} and {\em A001349}.
Similarly, columns No.~2 and No.~4 contain the corresponding numbers
of graph states that are not equivalent under graph isomorphisms
and LC-operations. The values in the second column were computed
in ref.~\cite{database} together with a database of representatives for
each equivalence class. By checking the lists of Schmidt ranks we
have shown that the values in the second and fourth column for
$N\leq 7$ vertices coincide with the numbers of non-isomorphic
graph states when considering the larger group of LU- or
SLOCC-operations. The values for $N=8,9,10,11,12$ in both columns
were again taken from ref.~\cite{database}, where also
a database of
representatives for each equivalence class can be found.}
\end{table}
Within each of these classes all graph states are equivalent
modulo local unitaries {\em and} additional graph isomorphisms.
Thus, if we exclude the graph isomorphisms, as e.g. in quantum
communication scenarios, the number of inequivalent classes of
graph states is even larger (see Tab.~\ref{numbers_of_GS}).
In List {\bf A} and {\bf B} of Table~\ref{TablePage} we give a
list of simple representatives of each equivalence class together
with a table summarizing some interesting properties of these
states. For this we have generated the set of $996$ non-isomorphic,
connected graphs with the MATHEMATICA package described in
sec.~\ref{Introduction} and tested for local equivalence
considering only LC-unitaries (see sec.~\ref{BinaryRepr}). By
considering the Schmidt rank with respect to all possible
bi-partitions, the corresponding lists of Schmidt ranks for each
representative turned out to be different even if we allow
arbitrary permutations of the vertices. This shows that the sets
of locally inequivalent graph states found in this way are minimal
even with respect to the larger group of all local unitaries. \vspace{0.5cm}
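The LC-test can be made explicit at the graph level. In the following brute-force Python sketch (the function names are ours), local complementation at a vertex $v$ complements the subgraph induced by the neighborhood of $v$ (cf. Proposition~\ref{loc}), and the LC-equivalence class of a graph is the orbit generated by such operations:
\begin{verbatim}
from collections import deque

def local_complement(adj, v):
    # tau_v(G): complement the subgraph induced by the neighborhood of v.
    n = len(adj)
    nb = [u for u in range(n) if adj[v][u]]
    new = [row[:] for row in adj]
    for i in range(len(nb)):
        for j in range(i + 1, len(nb)):
            a, b = nb[i], nb[j]
            new[a][b] ^= 1
            new[b][a] ^= 1
    return new

def lc_orbit(adj):
    # All adjacency matrices (as tuples) reachable by local
    # complementations; feasible for the small graphs considered here.
    key = lambda g: tuple(tuple(r) for r in g)
    seen = {key(adj)}
    queue = deque([adj])
    while queue:
        g = queue.popleft()
        for v in range(len(g)):
            h = local_complement(g, v)
            if key(h) not in seen:
                seen.add(key(h))
                queue.append(h)
    return seen
\end{verbatim}
Two labeled graphs are then LC-equivalent iff the adjacency matrix of one appears in the orbit of the other; equivalence up to graph isomorphisms additionally quotients out vertex permutations.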
\begin{table}
\begin{tabular}{cc}
\hspace{-1cm}\includegraphics[width=0.45\linewidth]{FIG5.eps} &
\raisebox{4cm}{\footnotesize
\begin{minipage}{0.75\linewidth}
\begin{tabular}{|ccccccccc|}
\hline
No.\ & $|\text{LUclass}|$ & $|V|$ & $|E|$ & $\text{SR}_\text{max}$ & $\text{PP}$ & $RI_3$ & $RI_2$ & $2$-col \\
\hline
1 & 1 & 2 & 1 & 1 & 1 & & & yes \\
2 & 2 & 3 & 2 & 1 & 1 & & & yes \\
3 & 2 & 4 & 3 & 1 & 1 & & (0,3) & yes \\
4 & 4 & 4 & 3 & 2 & 2 & & (2,1) & yes \\
5 & 2 & 5 & 4 & 1 & 1 & & (0,10) & yes \\
6 & 6 & 5 & 4 & 2 & 2 & & (6,4) & yes \\
7 & 10 & 5 & 4 & 2 & 2 & & (8,2) & yes \\
8 & 3 & 5 & 5 & 2 & 3 & & (10,0) & no \\
9 & 2 & 6 & 5 & 1 & 1 & (0,0,10)& (0,15) & yes \\
10 & 6 & 6 & 5 & 2 & 2 & (0,6,4)& (8,7) & yes \\
11 & 4 & 6 & 5 & 2 & 2 & (0,9,1)& (8,7) & yes \\
12 & 16 & 6 & 5 & 2 & 2 & (0,9,1)& (11,4) & yes \\
13 & 10 & 6 & 5 & 3 & 3 & (4,4,2)& (12,3) & yes \\
14 & 25 & 6 & 5 & 3 & 3 & (4,5,1)& (13,2) & yes \\
15 & 5 & 6 & 6 & 2 & 2 & (0,10,0)& (12,3) & yes \\
16 & 5 & 6 & 6 & 3 & 3 & (4,6,0)& (12,3) & yes \\
17 & 21 & 6 & 6 & 3 & 3 & (4,6,0)& (14,1) & yes \\
18 & 16 & 6 & 6 & 3 & 3 & (6,4,0)& (15,0) & yes \\
19 & 2 & 6 & 9 & 3& 4 & (10,0,0)& (15,0) & no \\
\hline
\end{tabular}
\end{minipage}
}
\\
\hspace{-1cm}\includegraphics[width=0.45\linewidth]{FIG6.eps} &
\raisebox{5.3cm}{\footnotesize
\begin{minipage}{0.77\linewidth}
\begin{tabular}{|ccccccccc|}
\hline
No.\ & $|\text{LUclass}|$ & $|V|$ & $|E|$ & $\text{SR}_\text{max}$ & $\text{PP}$ & $RI_3$ & $RI_2$ & $2$-col \\
\hline
20 & 2 & 7 & 6 & 1 & 1 & (0,0,35)& (0,21) & yes \\
21 & 6 & 7 & 6 & 2 & 2 & (0,20,15)& (10,11) & yes \\
22 & 6 & 7 & 6 & 2 & 2 & (0,30,5)& (12,9) & yes \\
23 & 16 & 7 & 6 & 2 & 2 & (0,30,5)& (14,7) & yes \\
24 & 10 & 7 & 6 & 2 & 2 & (0,33,2)& (15,6) & yes \\
25 & 10 & 7 & 6 & 3 & 3 & (12,16,7)& (16,5) & yes \\
26 & 16 & 7 & 6 & 3 & 3 & (12,20,3)& (16,5) & yes \\
27 & 44 & 7 & 6 & 3 & 3 & (12,21,2)& (17,4) & yes \\
28 & 44 & 7 & 6 & 3 & 3 & (16,16,3)& (18,3) & yes \\
29 & 14 & 7 & 6 & 3 & 3 & (20,12,3)& (18,3) & yes \\
30 & 66 & 7 & 6 & 3 & 3 & (20,13,2)& (19,2) & yes \\
31 & 10 & 7 & 7 & 2 & 2 & (0,34,1)& (16,5) & yes \\
32 & 10 & 7 & 7 & 3 & 3 & (12,22,1)& (16,5) & no \\
33 & 21 & 7 & 7 & 3 & 3 & (12,22,1)& (18,3) & no \\
34 & 26 & 7 & 7 & 3 & 3 & (16,18,1)& (18,3) & yes \\
35 & 36 & 7 & 7 & 3 & 3 & (16,19,0)& (19,2) & no \\
36 & 28 & 7 & 7 & 3 & 3 & (20,14,1)& (18,3) & no \\
37 & 72 & 7 & 7 & 3 & 3 & (20,15,0)& (19,2) & no \\
38 & 114 & 7 & 7 & 3 & 3 & (22,13,0)& (20,1) & yes \\
39 & 56 & 7 & 7 & 3 & 4 & (24,10,1)& (20,1) & no \\
40 & 92 & 7 & 7 & 3 & 4& (28,7,0)& (21,0) & no \\
41 & 57 & 7 & 8 & 3 & 4& (26,9,0)& (20,1) & no \\
42 & 33 & 7 & 8 & 3 & 4& (28,7,0)& (21,0) & no \\
43 & 9 & 7 & 9 & 3 & 3 & (28,7,0)& (21,0) & yes \\
44 & 46 & 7 & 9 & 3 & 4& (32,3,0)& (21,0) & no \\
45 & 9 & 7 & 10 & 3 & 4& (30,5,0)& (20,1) & no \\
\hline
\end{tabular}
\end{minipage}
}
\end{tabular}
\begin{minipage}{1.15\linewidth}
\hspace{-1cm} \caption{ \label{TablePage} List {\bf A}: List of
connected graphs with $N=2,3,4,5,6$ vertices that are not equivalent
under LU transformations and graph isomorphisms. List {\bf B}:
List of connected graphs with seven vertices that are not
equivalent under LU transformations and graph isomorphisms. The
corresponding tables list for each equivalence class the number of
vertices $|V|$ and edges $|E|$, the maximal Schmidt rank
$\text{SR}_\text{max}$, the Pauli persistency $\text{PP}$ (see
sec.~\ref{EntMeas_GS}), the rank indices $RI_3$ and $RI_2$ (for
splits with three or two vertices in the smaller partition,
respectively), the number of non-isomorphic but LU-equivalent
graphs $|\text{LUclass}|$ and the two-colorable property ($2$-col).}
\end{minipage}
\end{table}
We have also listed the sizes of the corresponding equivalence
classes under LU and graph isomorphisms, as well as whether
two-colorable representatives exist. With the rank indices given in
List {\bf A} and {\bf B} of Table~\ref{TablePage} we simply
compress the information contained in the Schmidt rank list with
respect to all bi-partite splittings, counting how many times a
certain rank occurs in splittings with either two or three
vertices in the smaller partition. For example, the rank index
$RI_3=(20,12,3)$ of graph number $29$ means that the rank $3$
occurs 20 times in all possible $3$-$4$-splits, the rank $2$
twelve times and the rank $1$ only three times. Similarly, because
of $RI_2=(18,3)$ the rank $2$ ($1$) occurs $18$ ($3$) times in
all $2$-$5$-splits of graph number $29$. As can be seen
from Tab.~\ref{SR_List_Comparison}, although the classes of graph
states in fig.~\ref{fig:LUruleExample1} and
fig.~\ref{fig:LUclassExample2} have different Schmidt rank lists
and thus are non-LU-equivalent, both classes have the same rank
index $RI_2=(2,1)$, since the rank indices are invariant under
arbitrary permutations of the vertices. Thus no graph in
fig.~\ref{fig:LUruleExample1} is locally equivalent to any graph
in the equivalence class represented in
fig.~\ref{fig:LUclassExample2}. But both belong to the same
equivalence class represented by graph No.~4 in List~{\bf A} when
considering both local unitary transformations {\em and} graph
isomorphisms. In fact a permutation of vertices $2$ and $3$ maps
graph No.~4 in fig.~\ref{fig:LUclassExample2} onto graph No.~1 in
fig.~\ref{fig:LUruleExample1}.
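In terms of the helper \texttt{schmidt\_rank} sketched earlier, the rank indices are simple occurrence counts. A hedged Python sketch (the tuple lists the counts from the largest possible rank $k$ down to rank $1$; the matrix \texttt{adj29} is hypothetical and not reproduced here):
\begin{verbatim}
from collections import Counter
from itertools import combinations

def rank_index(adj, k):
    # RI_k: how often each Schmidt rank occurs among the k-vs-(N-k)
    # splits, listed from rank k down to rank 1.  (For k = N/2 each
    # bi-partition would be counted twice; in the text k < N/2.)
    n = len(adj)
    counts = Counter(schmidt_rank(adj, A) for A in combinations(range(n), k))
    return tuple(counts.get(r, 0) for r in range(k, 0, -1))

# For the representative of graph No. 29 one should recover
# rank_index(adj29, 3) == (20, 12, 3) and rank_index(adj29, 2) == (18, 3).
\end{verbatim}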
In Tab.~\ref{numbers_of_GS} we have summarized the number of
equivalence classes for connected and all (i.e., possibly
disconnected) graphs and compared them with the corresponding
values when disregarding LC-equivalence. From a quantum
information point of view, which only considers states up to
LU-equivalence, the classification into non-equivalent graph
states provides a significant reduction of the set of all graph
states. The table nevertheless shows that the obtained set of
non-LU-equivalent graph states is still sufficiently rich to form
an interesting subclass of states to serve as a starting point for
the study of multi-party entanglement. Besides the question of
local equivalence, we will see in the remainder of this thesis
that many other interesting entanglement properties of graph
states have a concise and efficient translation in terms of the
underlying graph. This allows for an exemplary study of
multi-party entanglement in the regime of many parties.
\index{equivalence classes|)} \index{equivalence under!local
Clifford unitaries (LC)|)} \index{equivalence under!local
unitaries (LU)|)} \index{equivalence under!stochastic local
operations and classical communication (SLOCC)|)}
\section{Entanglement in graph states}\label{EntanglementGS}
As discussed in the previous sections, graph states provide an interesting class of multi-partite states that are relatively easy to survey even in the regime of many parties. Since the graph essentially encodes a preparation procedure of the state, we will now examine the question of how the entanglement in a graph state is related to the topology of its underlying graph. More precisely, we address the issue of quantifying and characterizing the entanglement of graph states.
We start in sec.~\ref{Bell&Witness} with a review of results from \cite{Gue04,To04} about the `non-classicality' of graph states and how the entanglement present in these states can be experimentally verified by considering Bell inequalities or entanglement witnesses. Then, classical correlations and entanglement between pairs of particles are discussed in sec.~\ref{Corr}. The main part of this section is finally devoted to the quantification of entanglement in graph states in terms of the Schmidt measure, which will be introduced in sec.~\ref{EntMeas_GS}. We present bounds and rules that render the evaluation of this measure feasible for some interesting classes of graph states and discuss some examples of practical interest.
\subsection{Bell inequalities and entanglement witnesses}\label{Bell&Witness}
The notion of entanglement in quantum mechanics as it was posed in refs.~\cite{EPR35,schroedinger35} by Einstein, Podolsky, Rosen and Schr\"odinger in the year 1935 has -- since Bell's reformulation \cite{Bell64} in 1964 -- frequently been used synonymously with `{\em non-classical correlations}'\index{non-classical correlations}, although today it is well-known \cite{We89,Po95} that one has to consider a finer distinction. Up to some so-called `detection loophole', first experiments \cite{As81} were able to verify that some quantum states can indeed reveal correlations which cannot be predicted by {\em local hidden variable}\index{local (realistic) hidden variable (LHV) model} (LHV) models. In these models any observable has a predetermined value ({\em realism}), regardless of whether it is measured or not. Moreover, the choice of which observable is measured does not `affect the other parties' ({\em locality}). The two constraints can be phrased in terms of {\em Bell inequalities}, which bound the possible correlations that can be explained within these LHV models. For a precise formulation of the concepts of LHV descriptions and the derivation of the corresponding Bell inequalities in the multi-party setting, we refer the reader to refs.~\cite{Me90,We01a,Zu01} and for a brief review to refs.~\cite{Per99,We01b}. Different Bell inequalities can also be regarded as {\em entanglement witnesses} \cite{Ter00,Lew00} for different types of entanglement in a multi-party entangled state. These witnesses can be quite useful to detect entanglement in the vicinity of graph states. In the following we will shortly review the results of \cite{Gue04,To04} on Bell inequalities and entanglement witnesses for graph states.
But let us first consider an extension of the GHZ-argument \cite{GHZ89}, which rules out an LHV description of the spin statistics for GHZ-states, to the case of general graph states \cite{Sc04, Gue04}: The non-trivial graph state with two qubits is LC-equivalent to the singlet state and thus violates the original Bell inequality proposed by Bell in
ref.~\cite{Bell64}. For any connected graph state on more than two vertices, any connected subgraph on three vertices $a,b,c$ gives rise to a contradiction within any possible explanation of the observed correlations between spin measurements at different particles. Consider, for example, the case where all three vertices $a,b,c$ are pairwise adjacent, i.e., $\{a,b\}, \{b,c\}, \{a,c\} \in E$. Due to the {\em non-commutativity} of the spin observable algebra one easily computes that the product of the corresponding correlation operators $K_a=\sigma_x^a\sigma_z^{N_a}$, $K_b=\sigma_x^b\sigma_z^{N_b}$ and $K_c=\sigma_x^c\sigma_z^{N_c}$ yields
\begin{equation} K_a K_b K_c \,=\, - \sigma_x^a \sigma_x^b \sigma_x^c \sigma_z^{N_a+N_b+N_c} \; .\end{equation}
\begin{wrapfigure}[9]{r}{0.3\textwidth}
\vspace{-1.2cm}
\includegraphics[width=0.3\textwidth]{NonLocFig.eps}
\caption{\label{NonLocFig} Any connected subgraph on three vertices gives rise to a violation of local realism.}
\end{wrapfigure}
As discussed in sec.~\ref{DefOfGS_Stab} these stabilizer elements provide constraints to the four different measurement settings
\begin{eqnarray}
(I)\hspace{0.2cm} m_x^a m_z^{N_a} = 1 & & (III)\hspace{0.2cm} m_x^c m_z^{N_c} = 1 \nonumber \\
(II)\hspace{0.2cm} m_x^b m_z^{N_b} = 1 & & (IV) \hspace{0.2cm} - m_x^a m_x^b m_x^c m_z^{N_a+N_b+N_c} = 1\; ,\nonumber
\end{eqnarray}
where $m_x^a=\pm 1$ and $m_z^a=\pm 1$ denote the measurement outcomes if the qubit at vertex $a$ is measured in spin-$x$- or $z$-direction. More precisely, any LHV model, which assigns predetermined\footnote{That means independently of the chosen measurement setting in (I)-(IV).} values $m_x$, $m_z$ to the outcomes of $x$- and $z$-measurements at the different vertices in $N_a\cup N_b\cup N_c$ with some probability, must be such that these assignments obey all four equations (I)-(IV). Due to the {\em commutativity} of the multiplication of the measurement results, the product of eqs.~(I), (II) and (III) gives
\begin{equation} m_x^a m_x^b m_x^c m_z^{N_a+N_b+N_c} = 1\end{equation}
and thus a contradiction with eq.~(IV). A similar argument holds for the case where only two pairs of the three vertices $a,b,c$ are adjacent. Thus we have obtained \cite{Sc04, Gue04}:
{\proposition[\bf Non-classicality of graph states]\index{GHZ argument} Any graph state corresponding to a connected graph violates local realism. More precisely, for a connected graph state with more than two vertices any connected subgraph on three vertices $a,b,c$ yields a contradiction when trying to explain the correlations between the different Pauli-spin-observables present in the reduced state $\rho_G^{A}$ (on the subset $A=N_a\cup N_b\cup N_c$) by means of some LHV model.
}
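The contradiction between the settings (I)-(IV) can also be checked by brute force over all deterministic value assignments. A minimal Python sketch for the special case of an isolated triangle, where $\sigma_z^{N_a+N_b+N_c}$ reduces to the identity (the function name is ours):
\begin{verbatim}
from itertools import product

def lhv_assignments_satisfying_all():
    # Deterministic LHV assignments m_x, m_z = +/-1 for the three
    # pairwise adjacent vertices a, b, c of an isolated triangle.
    sols = []
    for mxa, mxb, mxc, mza, mzb, mzc in product((+1, -1), repeat=6):
        ok = (mxa * mzb * mzc == 1 and         # (I)
              mza * mxb * mzc == 1 and         # (II)
              mza * mzb * mxc == 1 and         # (III)
              -mxa * mxb * mxc == 1)           # (IV)
        if ok:
            sols.append((mxa, mxb, mxc, mza, mzb, mzc))
    return sols

assert lhv_assignments_satisfying_all() == []  # no LHV model exists
\end{verbatim}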
More quantitatively, one can compare (convex) functions of the correlations in a state obtained for different measurement settings with the results which can be found for these settings when considering all LHV models. Upper bounds to these functions give rise to so-called Bell inequalities\index{Bell inequality|(}, which the correlations arising from different measurement settings have to obey if they can be explained by an LHV model. In fact, for $N$ parties and two dichotomic\footnote{I.e., the respective observables have two measurement outcomes $\pm 1$.} observables at each party the set of states that allow for an LHV description can be characterized by only one inequality \cite{We01a,Zu01}, which is a generalization of the CHSH inequality \cite{CHSH69} for the two-party setting. For the multi-party setting the different Bell inequalities can in general capture different types of entanglement according to different partitionings. For any non-trivial graph state, Bell inequalities with three dichotomic measurements per site have been derived in ref.~\cite{Gue04} that are maximally violated by this graph state:
{\proposition[\bf Bell inequalities for graph states] A (connected) graph state $|G\rangle$ with stabilizer $\mathcal{S}$ maximally violates\footnote{I.e., $\left|\sum_{\sigma \in\mathcal{S}} \langle G | \sigma | G\rangle \right| = 2^N$. } the Bell inequality \begin{equation}\label{BellIneq} \left|\sum_{\sigma \in\mathcal{S}} \langle \sigma\rangle \right| \leq \mathcal{C} \; ,\end{equation} where $\mathcal{C}$ is the maximum of the absolute value of the mean value $\langle \sum_{\sigma \in\mathcal{S}} \sigma \rangle$ taken over all deterministic\footnote{Due to convexity it suffices to consider deterministic LHV models, which assign definite values $\pm 1$ to all observables (with unit probability).} LHV models. In the case that $\mathcal{C}\geq 2^{N-1}$ the Bell inequality detects only states that are NPT with respect to every partitioning. \index{partial transposition $\rho^{T_A}$}\index{NPT (negative partial transpose)}}
Ref.~\cite{Gue04} also provides general rules for the computation of the value $\mathcal{C}$, which a priori demands checking an exponentially (with $N$) increasing number of LHV models. Numerical results for graphs with up to $10$ vertices show that the Bell inequalities for rings and chains give rise to a large relative violation $\frac{2^N}{\mathcal{C}}$ for these states, while the Bell inequalities for the corresponding GHZ states yield a small relative violation.\index{Bell inequality|)}\index{entanglement criterion!Bell inequality}
This Bell inequality can be expressed in terms of an entanglement witness\footnote{An entanglement witness $\mathcal{W}$ is an observable with a positive or zero expectation value $\text{tr}(\mathcal{W}\rho)\geq 0$ for {\em all} separable states and a negative expectation value $\text{tr}(\mathcal{W}\rho)< 0$ for {\em some} entangled states. In a measurement of this witness the entanglement present in the latter states can thus be detected (`witnessed').} of the type \begin{equation}\label{Witness0} \mathcal{W}\,:=\, \frac{\mathcal{C}}{2^N}\mathbf{1}_V-|G\rangle\langle G|\; ,\end{equation} i.e., a state $\rho$ violates eq.~(\ref{BellIneq}) iff \begin{equation} \langle \mathcal{W} \rangle = \text{tr}(\mathcal{W}\rho) < 0\; .\end{equation}
To detect entanglement itself, and not `non-classicality' with respect to LHV descriptions, the constant $\mathcal{C}$ can be chosen according to a maximization over the smaller set of separable states only\footnote{Note that there exist non-separable states that still allow for an LHV description \cite{We89}.}.
A concrete procedure to measure this witness in an experiment can be found by decomposing $\mathcal{W}$ into a sum of locally measurable operators. But, for an arbitrary graph state, the witness in eq.~(\ref{Witness0}) seems to be decomposable only into an exponentially (with $N$) increasing number of local measurement settings \cite{To04}.
T\'oth and G\"uhne therefore proposed more practical entanglement witnesses and Bell inequalities for graph states that in many cases, such as CSS states, can be evaluated in only two measurement settings \cite{To04,To05,To05b}\index{two-colorable graph (state)}\index{CSS state}\index{bi-partite graph (state)}:
{\proposition[\bf Entanglement witnesses for graph states]\index{entanglement witness}\index{entanglement criterion!witness} Let $|G\rangle$ be a graph state corresponding to a connected graph. Then
\begin{equation}\label{Witness1} \mathcal{W}_1^{ab}\, :=\,\mathbf{1}_V- K_a - K_b \end{equation}
is an entanglement witness for the graph state $|G\rangle$ that detects entanglement in the reduced state $\rho_G^{A}$ ($A=N_a\cup N_b \cup\{a,b\}$) with only two measurement settings and thus can rule out {\em full separability} of the total graph state. The entanglement witness \begin{equation}\label{Witness2} \mathcal{W}_2 \,:= \,(N-1) \mathbf{1}_V- \sum_{a\in V} K_a \end{equation} detects {\em genuine multi-party entanglement}. If $G$ is $M$-colorable, then the evaluation of the witness in eq.~(\ref{Witness2}) requires at most $M$ local measurement settings.\index{coloring}
}
For further entanglement witnesses that are particularly robust against global white noise and that can be derived for some special cases such as GHZ states and linear cluster states, we refer the reader again to refs.~\cite{To04,To05}.
In practice it is quite useful that the witness in eq.~(\ref{Witness2}) for genuine entanglement also gives lower bounds to the fidelity $\langle G | \rho | G \rangle$ with the ideal graph state, where $\rho$ denotes the outcome of an actual preparation procedure for the graph state. This provides an efficient method to verify that, in a given experiment, the entanglement is really present in a form that is sufficiently close to a desired graph state. An alternative approach to detect multi-partite entanglement, which is particularly suited for implementations in optical lattices and magnetic micro-traps, can be found in ref.~\cite{MA04}.
\subsection{Two-particle correlations and localizable entanglement}\label{Corr}
In this section we consider the entanglement properties of the reduced state $\rho_G^{\{a,b\}}$ of two qubits $a$ and $b$ that is obtained after tracing out or disregarding the information about the remaining particles $c\in V\setminus\{a,b\}$ in a graph state $|G\rangle$. As discussed in sec.~\ref{Reduced_GS} the reduced state
\begin{equation} \rho_G^{\{a,b\}}=\frac{1}{4}\sum_{\sigma\in\mathcal{S}_{\{a,b\}}} \sigma \end{equation}
is essentially given by those stabilizer elements $\sigma\in\mathcal{S}_{\{a,b\}}$ that act non-trivially only on the qubits $a$ and $b$. Let us first examine the {\em classical correlations}\index{correlation function $\text{Q}^{ab}_{ij}$} between two non-isolated vertices $a$ and $b$ in a graph state $|G\rangle$
\begin{equation}\label{CorrelationFunc} Q^{ab}_{ij} := \langle G|\sigma_i^a\otimes \sigma_j^b|G \rangle \, - \, \langle G|\sigma_i^a |G \rangle \, \langle G| \sigma_j^b|G \rangle \hspace{0.7cm} i,j=1,2,3\; .\end{equation}
Note that these {\em correlation functions} $\text{Q}^{ab}_{ij}$ only depend on the reduced state $\rho_G^{\{a,b\}}$ of the particles $a$ and $b$ and are given by
\begin{equation}\label{CorrelationFunc_1}
\text{Q}^{ab}_{ij}=\text{tr}\left(\rho_G^{\{a,b\}} \sigma_i^a \sigma_j^b\right)
= \left\{ \begin{array}{ccc} 1 & \text{if} & \sigma_i^a \sigma_j^b \in \mathcal{S} \\ 0 & \text{if} & \sigma_i^a \sigma_j^b \notin \mathcal{S} \end{array} \right.\; ,
\end{equation}
since the expectation values, e.g. $\langle G|\sigma_i^a |G \rangle = \text{tr}\left(\rho_G^{a} \sigma_i^a \right) = \text{tr}\left(\frac{1}{2}\mathbf{1}_a \sigma_i^a \right)$, vanish for both (non-isolated!) vertices $a$ and $b$.
For the {\em maximal classical correlation}\index{maximal classical correlation $\text{Q}_\text{max}^{ab}$} between two vertices $a$ and $b$ in a graph state $|G\rangle$
\begin{equation}\label{MaxCorrelation} \text{Q}_\text{max}^{ab} = \max_{i,j = 1,2,3} \, |\text{Q}_{ij}^{ab}| \end{equation}
we find that it vanishes whenever the neighborhoods $N_a\setminus b$ and $N_b\setminus a$ of the two vertices with respect to the remaining graph are non-empty and distinct.
{\proposition[{\bf Two-party classical correlation}]\index{two-particle correlations}\label{2partyclassCorr}
For two non-isolated\footnote{If one vertex is isolated all correlation functions vanish, since the corresponding state is a product state e.g. $|G\rangle=|+\rangle^a|G\setminus a\rangle^{V \setminus a}$.} vertices $a,b \in V$ in some graph $G=(V,E)$ we have
\begin{equation}
\text{Q}_\text{max}^{ab}\, =\, \left\{ \begin{array}{ccl} 0 & \text{if} & (N_a\setminus b),( N_b \setminus a) \neq \emptyset \; \text{and} \; (N_a\setminus b)\neq( N_b \setminus a)\\ 1 & & \text{otherwise} \end{array} \right. \; .
\end{equation}
}
{\em Proof:} According to eq.~(\ref{CorrelationFunc_1}) we have to show that all stabilizer elements $\sigma \in \mathcal{S}$ have support $\text{supp}(\sigma)$ on more than the two vertices $a$ and $b$ iff $(N_a\setminus b)\neq( N_b \setminus a)$ and $(N_a\setminus b),( N_b \setminus a) \neq \emptyset$. Since all stabilizer elements are generated by different combinations of the correlation operators $\prod_{c\in C} K_c$ ($C\subseteq V$), the sufficiency can be derived as follows. In order to generate a stabilizer element $\sigma=\prod_{c\in C} K_c$ with $\text{supp}(\sigma)\subseteq \{a,b\}$, at most the two correlation operators on vertices $a$ and $b$ can be considered, i.e., $C \subseteq \{a,b\}$, because any other correlation operator $K_c$ for $c\in V\setminus \{a,b\}$ leads to a non-vanishing Pauli operator $\sigma_x^c$ or $\sigma_y^c$ on the vertex $c$ outside of $\{a,b\}$. Moreover, any support $\text{supp}(K_a)\setminus \{a,b\} \neq \emptyset$ or $\text{supp}(K_b)\setminus \{a,b\} \neq \emptyset$ of the correlation operators $K_a$ and $K_b$ outside of $\{a,b\}$ can only be compensated if both vertices have the same neighbors outside of $\{a,b\}$, i.e., $(N_a\setminus b)=( N_b \setminus a)$. In this case we are left with the two possibilities
\begin{equation} K_a K_b = \left\{ \begin{array}{ccc} \sigma_y^a \sigma_y^b & \text{if} & \{a,b\}\in E \\ \sigma_x^a \sigma_x^b & \text{if} & \{a,b\}\notin E \end{array} \right.\; .
\end{equation}
\hfill\fbox\\\medskip
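Proposition~\ref{2partyclassCorr} translates into a one-line neighborhood test; a minimal Python sketch (the function name is ours):
\begin{verbatim}
def q_max(adj, a, b):
    # Maximal classical correlation between two non-isolated vertices
    # a and b, following the proposition on two-party correlations.
    n = len(adj)
    Na = {u for u in range(n) if adj[a][u]} - {b}
    Nb = {u for u in range(n) if adj[b][u]} - {a}
    return 0 if (Na and Nb and Na != Nb) else 1
\end{verbatim}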
Although a graph state might contain some non-vanishing {\em classical} two-party correlations, it generally does not contain any entanglement between any two qubits, unless -- as discussed below -- the remaining parties are allowed to assist in revealing such entanglement.
{\proposition[{\bf Two-party `quantum correlation'}]\index{non-classical correlations}
For any vertices $a,b \in V$ the reduced state $\rho_G^{\{a,b\}}$ of some graph state $|G\rangle$ is separable unless the graph contains the isolated edge\footnote{If $G$ contains an isolated edge $\{a,b\}$ the state $|G\rangle$ decomposes into a pure Bell state on the vertices $\{a,b\}$ and some other graph state on the remaining vertices.} $\{a,b\}$.
}
{\em Proof:}
If $G$ does not contain the edge $\{a,b\}$ as an isolated edge then according to Proposition~\ref{reduced_GS} the reduced state $\rho_G^{\{a,b\}}$ is either the rank-$2$-projector $\rho_G^{\{a,b\}}=\frac{1}{2}\left(P_{12} +\sigma_z^{C} P_{12} \sigma_z^{C}\right)$ ($P_{12}:= |G[\{1,2\}]\rangle\langle G[\{1,2\}]|$) for some set $C= \{1\},\{2\},\{1,2\}$ or it is a rank-$4$-projector and thus the maximally mixed state $\rho_G^{\{a,b\}}=\frac{1}{4}\mathbf{1}_{ab}$. In the latter case the above statement is trivial. That also the rank-$2$-projectors correspond to separable states can be derived from the fact that they are PPT (which means that their partial transpose is positive) according to eq.~(\ref{PTofa}), which is a sufficient condition for separability in $2\times 2$-systems.
\hfill\fbox\\\medskip
Although there is {\em per se} no entanglement between arbitrary two particles in a connected graph state, such entanglement can be revealed between any two parties $a$ and $b$ if the remaining parties are allowed to perform local measurements \cite{Briegel01}. More generally, the notion of {\em localizable entanglement}\index{localizable entanglement $\text{LE}^{ab}$} $\text{LE}^{ab}(\rho)$ was introduced in refs.~\cite{Frank,Verstraete04b,Popp04} for multi-spin states $\rho$, defined as the maximal amount\footnote{E.g. in terms of its concurrence.} of entanglement that can be created (or {\em localized}), on average, between two spins at positions $a$ and $b$ by performing local measurements on the other spins. For a general state $\rho$ it has been shown in ref.~\cite{Frank} that the localizable entanglement $\text{LE}^{ab}$ is related to the maximal classical correlation $Q_\text{max}$ and the {\em entanglement of assistance}\footnote{The entanglement of assistance extends the concept of localizable entanglement in that it allows also joint measurements to be performed on the other spins.} $\text{AE}^{ab}$\index{entanglement of assistance $\text{AE}^{ab}$}, as measured by the concurrence \cite{DiVincenzo98,Laustsen03}:
\begin{equation} \text{Q}_\text{max}^{ab}(\rho)\;\leq\; \text{LE}^{ab}(\rho)\;\leq \;\text{AE}^{ab}(\rho) \; .\end{equation}
Despite the separability of the reduced states, measurements on the remaining particles can nevertheless create maximal entanglement between any two vertices in a `connected' graph state, which corresponds to maximal localizable entanglement in this case.
{\proposition[{\bf Localizable entanglement}]\label{locEntGS}
Consider any two vertices $a,b \in V$ in a graph state corresponding to a {\em connected} graph $G=(V,E)$\index{connected graph (state)}.
In all measurement branches of the following protocol \cite{Briegel01} a maximally entangled state is created between the vertices $a$ and $b$:
\begin{enumerate}
\item Choose any path $(a_0=a,a_1,\ldots,a_{n-1},a_n=b)$ connecting the vertices $a$ and $b$.
\item Measure the spin of all vertices not on this path in $z$-direction.
\item Measure the spin of all vertices $a_i$ for $i=1,\ldots, n-1$ in $x$-direction.
\end{enumerate}
Thus the {\em localizable entanglement $\text{LE}^{ab}$} of a `connected' graph state is maximal.
}
{\em Proof:}
If the graph is connected there exists a path connecting any two vertices $a,b \in V$.
According to Proposition~\ref{Pauli_Measurement}, the $\sigma_z$-measurements in the second step simply remove all vertices but those on the path $(a_0=a,a_1,\ldots,a_{n-1},a_n=b)$. Note that the local unitaries corresponding to the different $\sigma_z$-measurement outcomes consist only of $\sigma_z$-operators and thus do not alter the measurement direction of the subsequent $\sigma_x$-measurements in the third step. The same proposition also implies that a sequence of $\sigma_x$-measurements on the inner vertices $a_i$ ($i=1,\ldots, n-1$) of this path removes these inner vertices but keeps the connectivity between the two neighboring particles $a_{i-1}$ and $a_{i+1}$ in this chain. E.g. a $\sigma_x$-measurement at vertex $a_1$ of the initial chain $G_0$ yields the shorter chain $G_1=(G_0\setminus a_1) \cup \{a_0,a_2\}$ on the remaining vertices $(a_0=a,a_2,\ldots,a_{n-1},a_n=b)$. The LC-unitaries corresponding to the different measurement outcomes again do not rotate the subsequent measurement directions. Thus we `inductively' arrive at the final graph $G_{n-1}$ corresponding to the maximally entangled state $|G_{n-1}\rangle$ of eq.~(\ref{Graph_Bell_State}).
\hfill\fbox\\\medskip
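At the level of the underlying graph the protocol of Proposition~\ref{locEntGS} can be traced directly. The following Python sketch (names ours) applies the two measurement rules used in the proof -- a $\sigma_z$-measurement deletes a vertex with its edges, and a $\sigma_x$-measurement at an inner chain vertex removes it while reconnecting its two neighbors -- assuming the chosen path is a shortest (hence chordless) path:
\begin{verbatim}
def localize_bell_pair(adj, path):
    # Returns the graph left after the protocol: a single edge {a, b}.
    n = len(adj)
    g = {v: {u for u in range(n) if adj[v][u]} for v in range(n)}

    def delete(v):                     # sigma_z-measurement rule
        for u in g.pop(v):
            g[u].discard(v)

    for v in [u for u in g if u not in path]:
        delete(v)                      # step 2: clear off-path vertices
    for v in path[1:-1]:               # step 3: contract inner vertices
        u, w = sorted(g[v])            # the two chain neighbors of v
        delete(v)
        g[u].add(w)
        g[w].add(u)
    return g

# Example: on a path graph with 5 vertices a Bell pair is left on {0, 4}.
path5 = [[1 if abs(i - j) == 1 else 0 for j in range(5)] for i in range(5)]
print(localize_bell_pair(path5, [0, 1, 2, 3, 4]))
\end{verbatim}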
Concluding, the results of this section indicate that the entanglement present in graph states is not based on {\em bi-partite} `quantum correlations' but is rather {\em delocalized} among all particles.
\subsection{Quantifying entanglement}\label{EntMeas_GS}
We have seen that graph states are entangled quantum states that exhibit complex structures of genuine
multi-particle entanglement. The main aim of this section is to apply the quantitative theory of multi-particle entanglement to the study of correlations in graph states.
Needless to say, despite considerable research effort, there is no known computable entanglement measure that grasps all aspects of multi-particle entanglement in an appropriate manner, if there is any way to fill such a phrase with meaning. Several entanglement measures for multi-particle systems have been suggested and their properties studied \cite{Schmidt,Wei03,Tangle,Plenio,Meyer,Barnum,Fat04}.
In this section the underlying measure of entanglement is taken to be the Schmidt measure \cite{Schmidt}, which is a proper multi-particle entanglement monotone that is tailored to the characterization of such states. As holds true for any known measure of multi-particle entanglement, its computation is exceedingly difficult for general states, yet for graph states this task becomes feasible to a very high extent. We present various upper and lower bounds for the Schmidt measure in graph theoretical terms, which largely draw from stabilizer theory. These bounds allow for an evaluation of the Schmidt measure for a large number of graphs of practical importance.
\index{Schmidt measure $\text{E}_S$|(}\index{entanglement measure}
The Schmidt measure has been employed to quantify the degree of entanglement, as a generalization of the Schmidt rank in the bi-partite setting \cite{Schmidt}. This measure is sufficiently coarse to be accessible for systems consisting of many constituents and to allow for an appropriate discussion of multi-particle entanglement in graph states.
Any state vector $|\psi\rangle\in {\bf H}_{1} \otimes ...\otimes {\bf H}_{N}$
of a composite quantum system with $N$ components
can be represented as
\begin{equation}\label{SchmidtM}
|\psi \rangle = \sum_{i=1}^R \xi_i |\psi_i^{1}\rangle \otimes\ldots\otimes |\psi_i^{N}\rangle,
\end{equation}
where $\xi_i\in{\mathbbm{C}}$ for $i=1,...,R$, and $|\psi_i^{n}\rangle \in {\bf H}_{n}$ for $n=1,...,N$.
The {\em Schmidt measure}\index{Schmidt measure $\text{E}_S$} associated with a state vector $|\psi\rangle$ is then defined as
\begin{equation}
\text{E}_S(|\psi\rangle ) = \log_2 (R_\text{min}),
\end{equation}
where $R_\text{min}$ is the minimal number $R$ of terms in the sum of eq.~(\ref{SchmidtM}) over all linear decompositions into product states. It can be extended to the entire state space (and not only the extreme points)
via a convex roof extension\footnote{Note that every positive function $E_\text{pure}$ defined on the set of pure states that vanishes exactly on product states (see property (i)) and is non-increasing under SLOCC (see property (ii)) can be extended to the entire state space by \[E(\rho):=\text{inf} \,\{\,\sum_i \lambda_i E_\text{pure}(|\psi_i\rangle) \; |\; \rho = \sum_i \lambda_i |\psi_i\rangle \langle\psi_i| \,\text{with} \, \lambda_i\geq 0,\, \sum_i \lambda_i=1\}\;. \] Note that the infimum of the average entanglement can be regarded as being taken with respect to all preparation procedures of the mixed state. The entanglement measure $E$ can be proven to (a) vanish exactly on all separable states and to be (b) convex as well as (c) non-increasing under SLOCC operations.}. It should be noted that the Schmidt measure $ \text{E}_S$ is {\em discrete}, e.g. in the case of two-level systems ${\bf H}_{n}=\mathbb{C}^2$ it can only take the values \begin{equation} \text{E}_S(|\psi\rangle ) \in \{\text{log}_2(m) \,|\, m=1,\ldots,2^N\}\; ,\end{equation}
and thus fails to be {\em continuous}, which requires some care when extending the measure to general mixed states via the convex roof construction. However, since the set of graph states is discrete itself, the case does not occur that two graph states become arbitrarily close in terms of some distance measure but not with respect to their Schmidt measure. In any case the Schmidt measure is a general entanglement monotone with respect to general local operations and classical communication (LOCC), which typically leave the set of graph states.
If the Schmidt measure of a state vector $|\psi\rangle$ is evaluated with respect to a partitioning $(A_1,...,A_M) $, it will be appended,
\begin{equation}
\text{E}_S^{(A_1,...,A_M) }(|\psi\rangle),
\end{equation}
in order to avoid confusion. For a graph $G=(V,E)$, the partitioning with $M=N$ and $A_n=\{n\}$ will be referred to as {\em finest partitioning}. If no upper index is appended to the Schmidt measure, the finest partitioning will be implicitly assumed.
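As a simple illustration of the definition, consider the $N$-qubit GHZ state: it is a sum of two product terms,
\begin{equation}
|\text{GHZ}_N\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}\right)\; ,
\end{equation}
so that $R_\text{min}\leq 2$ with respect to the finest partitioning; since the state is entangled, property {\bf (i)} below excludes $R_\text{min}=1$, and hence $\text{E}_S(|\text{GHZ}_N\rangle)=\log_2 2 = 1$, in agreement with Proposition~\ref{E_SExamples} below.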
Among the properties that are important for the rest of this section are the following \cite{Schmidt,MarcPhD}:
{\proposition[Schmidt measure]\index{entanglement measure} $ $
{\em
\begin{itemize}
\item [{\bf (i)}] $ \text{E}_S$ {\em vanishes on product states} (only), i.e.
\begin{equation}\label{E of product states}
\text{E}_S(| \psi\rangle)= 0 \hspace{1cm}\Longleftrightarrow \hspace{1cm}|\psi \rangle =|\psi^{1}\rangle \otimes \ldots \otimes |\psi^{N}\rangle \; .
\end{equation}
\item [{\bf (ii)}]\index{entanglement measure!monotonicity} $ \text{E}_S$ is {\em non-increasing under SLOCC} \cite{Schmidt}, i.e.
\begin{equation}\label{E under SLOCC}
|\psi\rangle \longrightarrow_{\text {SLOCC}} |\psi'\rangle \hspace{1cm}\Longrightarrow \hspace{1cm} \text{E}_S(|\psi'\rangle ) \leq \text{E}_S(|\psi\rangle ) \; .
\end{equation}
Similarly for the LU-equivalence we find
\begin{equation}\label{E under LU}
|\psi\rangle \longleftrightarrow_{\text {LU}} |\psi'\rangle \hspace{1cm}\Longrightarrow \hspace{1cm} \text{E}_S(|\psi'\rangle ) = \text{E}_S(|\psi\rangle ) \; .
\end{equation}
\item [{\bf (iii)}] $ \text{E}_S$ is {\em non-increasing under a coarse graining of the partitioning}, i.e.
\begin{eqnarray} \label{E under coarse graining}
(A_1,...,A_M)\leq (B_1,...,B_{M'}) \;\Longrightarrow \; \text{E}_S^{(A_1,...,A_M)}(|\psi\rangle) \geq \text{E}_S^{(B_1,...,B_{M'})}(|\psi\rangle) \; .
\end{eqnarray}
Thus if two components are merged in order to form a new component, then the Schmidt measure can only decrease.
\item[{\bf (iv)}]\index{entanglement measure!subadditivity} $ \text{E}_S$ is {\em sub-additive}, i.e.,
\begin{equation}\label{E sub-additivity}
\text{E}_S^{(A_1,...,A_M,B_1,...,B_{M'})} \left( | \psi_1 \rangle \otimes | \psi_2 \rangle \right) \leq \text{E}_S^{(A_1,...,A_M)} \left( | \psi_1 \rangle \right) + \text{E}_S^{(B_1,...,B_{M'})} \left( | \psi_2 \rangle \right)\; .
\end{equation}
\item[{\bf (v)}] {\em For any bi-partition $(A,B)$} $ \text{E}_S$ coincides with the {\em Schmidt rank}\index{Schmidt rank $\text{SR}_A$} $ \text{E}_S(|\psi\rangle) = \text{SR}_{A} (|\psi\rangle)= \log_2({\text{rank}} (\text{tr}_A[|\psi\rangle\langle\psi|])) $. In particular, $ \text{E}_S$ is additive within a given bi-partitioning, i.e., if $A=A_1\cup A_2$ and $B=B_1 \cup B_2$,
then
\begin{equation}\label{E additivity for bi-partitions}
\text{E}_S^{(A,B)}(|\psi_1\rangle \otimes |\psi_2\rangle ) = \text{E}_S^{(A_1,B_1)}(|\psi_1\rangle) + \text{E}_S^{(A_2,B_2)}(|\psi_2\rangle) \;.
\end{equation}
\end{itemize}
}}
It should be noted that for general pure states of multi-partite quantum systems the Schmidt
measure is -- as any other measure of multi-partite entanglement -- exceedingly difficult to compute \cite{He04}. In the following we will provide lower and upper bounds for the Schmidt measure of graph states in graph theoretic terms, which will coincide in many cases, and will then apply these rules to calculate the Schmidt measure for some graphs and graph classes that are of interest for applications.
\index{Schmidt measure $\text{E}_S$|)}
\index{Schmidt measure $\text{E}_S$!for graph states|(}
\index{Schmidt measure $\text{E}_S$!lower bound}\index{maximal Schmidt rank $\text{SR}_\text{max}$|(}\index{Schmidt rank $\text{SR}_A$}
Let us first derive a {\em lower bound} to the Schmidt measure, namely the {\em maximal Schmidt rank} $\text{SR}_\text{max}$
\begin{equation} \text{SR}_\text{max}(G)\, :=\, \max_{A\subseteq V}\,\text{SR}_A (G) \; .\end{equation}
If one maximizes over all bi-partitionings $(A,B)$ of a graph $G=(V,E)$, then according to eq.~(\ref{E under coarse graining}) one obtains a lower bound for the Schmidt measure with respect to the finest partitioning.
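With the helper \texttt{schmidt\_rank} sketched earlier, this lower bound is a direct maximization; a minimal Python sketch:
\begin{verbatim}
from itertools import combinations

def sr_max(adj):
    # Maximal Schmidt rank over all bi-partitions (A, B); it suffices
    # to scan the smaller partition A with |A| <= N/2.
    n = len(adj)
    return max(schmidt_rank(adj, A)
               for k in range(1, n // 2 + 1)
               for A in combinations(range(n), k))
\end{verbatim}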
Since the Schmidt ranks $\text{SR}_A (\psi)$ for the different bi-partitions are already entanglement monotones with respect to $(A,B)$-local SLOCC-operations, it is straightforward to see that $\text{SR}_\text{max}$ is a proper though also discrete entanglement measure\footnote{$\text{SR}_\text{max}$ has properties {\bf (i)} -- {\bf (iv)}.} that captures the notion of maximal bi-partite entanglement contained in a multi-party entangled state. In fact this measure was also considered in refs.~\cite{Vidal03,Vidal04} in the context of an efficient simulation of a quantum algorithm on a classical computer. There it was shown that, if throughout a pure-state quantum computation all underlying pure states of the $N$-qubit quantum register have polynomially bounded `Schmidt rank', i.e., $2^{\text{SR}_\text{max}}\leq \text{poly}(N)$, then this quantum computation can be {\em efficiently simulated by a classical algorithm}\index{quantum computation} requiring only polynomially increasing memory space and computational time. The classical algorithm allowing for a simulation of quantum computations with an only {\em slightly entangled} quantum register uses an efficient decomposition of the pure state as in eq.~(\ref{SchmidtM}) that allows one to directly read off and manipulate the Schmidt coefficients for different bi-partitionings throughout the quantum computation. This algorithm proved to be very useful also for an efficient simulation of the dynamics of one-dimensional quantum many-body systems due to its tight connection to existing methods such as the {\em density matrix renormalization group (DMRG)}\index{density matrix renormalization group (DMRG)} \cite{Sc05,Daley04}, and it opens a way to even simulate systems in higher dimensions, where these methods so far have not seemed to be very suitable \cite{AdvancedDMRG,Ve042d}.
For graph states the maximal Schmidt rank $\text{SR}_\text{max}$ actually coincides with continuous entanglement measures such as the {\em entropy of entanglement}\index{entropy of entanglement ${S}_{A}$}\index{entanglement measure!entropy ${S}_{A}$} or the {\em purity of the reduced density matrices}\index{purity}\index{entanglement measure!purity}. From eq.~(\ref{reduced_GS_2}) one can compute that the entropy or the purity of the reduced density matrices for $|G\rangle$ according to a bi-partition $(A,B)$ gives
\begin{equation} \text{SR}_A(G) \,=\,- \text{tr} [\rho^A_G\log_2(\rho^A_G)] \,=\, -\log_2(\text{tr} [(\rho^A_G)^2]) \;. \end{equation} This again expresses the fact that, for a non-empty graph, $|G\rangle$ is a `maximally' $(A,B)$-entangled
state vector with $2^{ \text{E}_S^{(A,B)}}$ Schmidt coefficients.
Finally, the Schmidt rank of a graph state is {\em closely related to error correcting properties of a corresponding graph code}\index{stabilizer code}\index{quantum error correcting code (QEC)} (see sec.~\ref{Application_QEC}). Let $A$ be a partition according to which $|G \rangle $ has maximal
Schmidt rank. Then, according to ref.~\cite{Schlinge02a}, choosing a subset $X\subseteq A$, the graph code, which encodes an input on vertices $X$ into an output on vertices $Y=V\setminus X$ according to $G$, detects the error configuration $E=A\setminus X$, i.e., any errors occurring on only one half of the vertex set $E$ can be corrected.
In particular, all {\em strongly error correcting graph codes} in ref.~\cite{Schlinge02a} must have Schmidt measure $N/2$. The following proposition gives at least a sufficient condition \cite{He04,MarcPhD} for when a partition has maximal Schmidt rank with respect to the corresponding bi-partite split.
{\proposition[\bf Maximal Schmidt rank]\label{sufficient crit for max rank}
A {\em sufficient criterion} for a bi-partite split $(A,B)$ to have maximal Schmidt rank is that the graph $G_{AB}$ corresponding to the edges between the partitions contains no cycles, and that the smaller partition contains at most one leaf with respect to the subgraph $G_{AB}$. If $G_{AB}$ is not connected, then it is sufficient that the above criterion holds for every connected component of $G_{AB}$.
}
Note that a {\em leaf}\index{leaf in a graph} is a vertex of degree 1, i.e., a vertex to which exactly one edge is incident \cite{Graph}.
\index{maximal Schmidt rank $\text{SR}_\text{max}$|)}
\index{persistency|(}
\index{Pauli persistency $\text{PP}$|(}
\index{Schmidt measure $\text{E}_S$!upper bound}
For the upper bound we consider a sequence of local projective measurements that finally completely disentangles the state vector $| \psi \rangle$ in each of the measurement branches. Let $m$ denote the number of measurement outcomes with non-zero probability. Clearly, any of the states resulting from the different measurement outcomes is a product state, and thus the whole measurement procedure gives rise to a decomposition of the initial state $| \psi \rangle$ as in eq.~(\ref{SchmidtM}). Thus we obtain the upper bound
\begin{equation}\label{Persistency}
\text{E}_S(|\psi \rangle) \leq \log_2 (m)\; .
\end{equation}
In particular, for any sequence of measurements in the Pauli basis $\sigma_x$, $\sigma_y$ or $\sigma_z$ that yields an
empty graph, the number of local measurements in this sequence gives an upper bound on the Schmidt measure of the corresponding graph state. This is because -- apart from the trivial case of a $\sigma_x$-measurement at an isolated vertex -- both measurement results $\pm1$ of a local Pauli measurement are attained with probability $1/2$ and yield locally equivalent graph state vectors $|G'\rangle$ and $|G''\rangle$. More generally we find
\begin{equation}\label{E under projective measurement}
\text{E}_S(|G'\rangle ) \leq \text{E}_S(|G\rangle ) \leq \text{E}_S(|G'\rangle ) \,+\, 1\; .
\end{equation}
In the following we will call the minimal number of local Pauli measurements needed to disentangle a graph state its {\em Pauli persistency} $\text{PP}(G)$. The notion of {\em persistency} was introduced in ref.~\cite{Briegel01} in the context of general projective measurements in order to study the stability of entanglement in cluster states with respect to local measurements. Since each $\sigma_z$-measurement simply deletes all edges incident to a vertex, any subset $V'\subseteq V$ of vertices in a graph $G$ to which every edge of $G$ is incident allows for a disentangling sequence of local measurements. In graph theory such vertex subsets are called {\em vertex covers}\index{vertex cover}\index{minimal vertex cover $\text{VC}$}, and we denote the size of a minimal vertex cover by $\text{VC}(G)$.
Thus we have found the upper bounds
\begin{equation} \mathbf{ \text{E}_S}(|G\rangle)\,\leq\, \text{\bf PP}(G) \,\leq\, \text{\bf VC}(G) \; .\end{equation}
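The upper bound $\text{VC}(G)$ can likewise be evaluated by brute force for the small graphs considered here; a minimal Python sketch (exponential in $N$, since minimal vertex cover is NP-hard in general; the function name is ours):
\begin{verbatim}
from itertools import combinations

def min_vertex_cover_size(adj):
    # Smallest k such that some set of k vertices touches every edge.
    n = len(adj)
    edges = [(u, v) for u in range(n)
                    for v in range(u + 1, n) if adj[u][v]]
    for k in range(n + 1):
        for cover in combinations(range(n), k):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return k
\end{verbatim}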
\begin{wrapfigure}[16]{r}{0.5\textwidth}
\vspace{-0.8cm}\includegraphics[width=0.5\textwidth]{FIG2.eps}
\caption{\label{fig:LUclassExample3} A single $\sigma_y$-measurement at an arbitrary vertex in the complete graph No.
7 suffices to disentangle the corresponding state. Similarly, a single $\sigma_z$-measurement at the central vertex in the graphs No.\ 1--6 or a single $\sigma_x$-measurement at the non-central vertices is a disentangling measurement. This is due to the fact that all graphs (No.\ 1--7) are locally equivalent by LC-unitaries, which transform the measurement basis correspondingly.}
\end{wrapfigure}
For graphs with many edges a combination of $\sigma_z$ and $\sigma_y$ will give better bounds than restricting to $\sigma_{z}$ measurements only. For example, according to the measurement rules in Proposition~\ref{Pauli_Measurement}, any complete graph (in which all vertices are adjacent) can be disentangled by just one $\sigma_y$-measurement at any vertex (see fig.~\ref{fig:LUclassExample3}). As we have seen, this corresponds to the fact that these graph states are LC-equivalent to the GHZ-type graph states (see also sec.~\ref{GHZ_GS}), in which every vertex is adjacent to the same central vertex.
\index{persistency|)}
\index{Pauli persistency $\text{PP}$|)}
Let us briefly summarize the relevant bounds for our further considerations in a proposition.
{\proposition[\bf Bounds to the Schmidt measure]
For any graph state $|G\rangle$ the Schmidt measure $\text{E}_S$ is bounded from below by the maximal Schmidt rank $\text{SR}_\text{max}$ and from above by the Pauli persistency $\text{PP}$ or the size of a minimal vertex cover $\text{VC}$, i.e.
\begin{equation}\label{Bounds_SM} \text{\bf SR}_\text{\bf max}(G)\,\leq\, \text{\bf E}_S(|G\rangle)\,\leq\, \text{\bf PP}(G) \,\leq\, \text{\bf VC}(G) \; .\end{equation}
}
An application of the LC-rule (see Proposition~\ref{loc}), of course, does not change the Schmidt measure. But also other local changes to the graph, such as the deletion of edges or vertices, have only a bounded effect on the Schmidt measure \cite{He04}:
{\proposition[\bf Edge-/Vertex rule]\label{vertexr}\index{Schmidt measure $\text{E}_S$!vertex rule}
\index{Schmidt measure $\text{E}_S$!edge rule}
\hspace{5cm}
\begin{itemize}
\item By {\em deleting or adding edges} between two vertices of a graph $G$ the Schmidt measure of the resulting graph $G'$ can at most decrease or increase by $1$;
\item If a {\em vertex is deleted}, the Schmidt measure of the resulting graph $G'$ decreases, but at most by $1$.
\end{itemize}
}
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.44\textwidth]{FIG13a.eps}\hspace*{0.03\textwidth}\includegraphics[width=0.44\textwidth]{FIG13b.eps}
\end{center}
\vspace{-0.5cm}
\caption{\label{fig:3DExampleEven} An example for the $(4,5,3)$-cluster state and the graph corresponding to the adjacency matrix $\mathbf{\Gamma}^{AB}$ (see eq.~(\ref{Gamma for bi-partition})) for a bipartitioning $(A,B)$ with maximal Schmidt rank $\text{SR}_A = \text{SR}_\text{max}$. Here the vertices in $A$ are depicted by small boxes $\vrule height4pt width3pt depth0pt$ . }
\end{figure}
Let us now apply these findings to evaluate the Schmidt measure for some important classes of graph states:
{\proposition[\bf Examples]\label{E_SExamples}
\hspace{5cm}
\begin{itemize}
\item The Schmidt measure of any multi-partite {\bf GHZ state} is $1$.
\item The Schmidt measure of a {\bf 1D-, 2D-, 3D-cluster} state is $\lfloor \frac{N}{2} \rfloor$.
\item The Schmidt measure of an entangled {\bf ring} with an even number $N$ of vertices is given by
$N/2$.
\item The Schmidt measure of a {\bf tree} is the size of its minimal vertex cover $\text{VC}$.
\end{itemize}
}
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.85\textwidth]{Tree.eps}
\end{center}
\vspace{-0.5cm}
\caption{\label{fig:TreeExample1} The graph No.\ 1 represents a tree. Its bi-partitioning $(A,B)$, for which in graph
No.\ 2 the vertices in $A$ are depicted by discs, neither is a minimal vertex cover nor yields maximal partial rank. Instead the set of vertices $A$, represented by large discs in graph No.\ 3, is a minimal vertex cover with maximal partial rank.}
\end{figure}
In all these cases the lower and upper bounds, i.e., the maximal Schmidt rank and the Pauli persistency, coincide\footnote{Since the maximal Schmidt rank for any state can be at most $\lfloor \frac{N}{2} \rfloor$, the Schmidt measure of those cases, where upper and lower bound coincide, is also bounded by this number. Thus all graph states which allow for a determination of the Schmidt measure along these lines have Schmidt measure of at most $\lfloor \frac{N}{2} \rfloor$.}. Although we refer to ref.~\cite{He04} for a detailed proof, for an even ring and for a cluster state with at least one side of `even length' we have indicated the partition $A$ with maximal Schmidt rank $\text{SR}_A=\text{SR}_\text{max}$ (see Proposition~\ref{sufficient crit for max rank}) in fig.~\ref{fig:EvenRingExample} and fig.~\ref{fig:3DExampleEven}, respectively.
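Assuming the helpers \texttt{sr\_max} and \texttt{min\_vertex\_cover\_size} sketched above, this coincidence of the bounds can be checked directly for small instances, e.g. for an even ring:
\begin{verbatim}
# Even ring on N = 6 vertices: lower and upper bounds coincide at N/2.
ring6 = [[1 if (i - j) % 6 in (1, 5) else 0 for j in range(6)]
         for i in range(6)]
assert sr_max(ring6) == min_vertex_cover_size(ring6) == 3
\end{verbatim}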
\begin{wrapfigure}[12]{r}{0.5\textwidth}
\vspace{-0.7cm}
\includegraphics[width=0.45\textwidth]{FIG15.eps}
\caption{\label{fig:EvenRingExample} Graph No.~1 is an entangled ring on $18$ vertices.
Graph No.~2 represents the graph corresponding to the adjacency matrix $\mathbf{\Gamma}^{AB}$ (see eq.~(\ref{Gamma for bi-partition})) for a bipartitioning $(A,B)$ with maximal Schmidt rank $\text{SR}_A = \text{SR}_\text{max}$. The vertices of partition $A$ are depicted by boxes.}
\end{wrapfigure}
Fig.~\ref{fig:TreeExample1} gives an example of a tree for which
the Schmidt measure does not coincide with the size of the smaller
bi-partition, the upper bound according to Proposition~\ref{E_SExamples}.
\index{equivalence classes}
We have also computed the lower and upper bounds to the Schmidt measure, i.e., the maximal partial rank and the Pauli persistency, for the non-equivalent graphs in List {\bf A} and {\bf B} in Table~\ref{TablePage}. They are listed in the corresponding tables in sec.~\ref{Local_Equivalence}.
For connected graphs the Schmidt rank $0$ cannot occur for any bi-partite splitting $(A,B)$, since this would correspond to an empty graph $G_{AB}$ between the partitions. Because the rank index is invariant under permutations of the partitions induced by graph isomorphisms, it provides information about whether two graph states can be equivalent under local unitaries {\em plus} graph isomorphisms, as treated in sec.~\ref{Local_Equivalence}. But note
that the graphs number $40$, $42$ and $44$ are examples of non-equivalent graphs with the same rank index.
Nevertheless, comparing the lists of Schmidt ranks with respect to all bi-partitions in detail shows that no
permutation of the vertex set exists (in particular none induced by a proper graph isomorphism) that would map the rank lists onto each other, so that no two of these graphs can be locally equivalent.
For $295$ of the $995$ non-isomorphic graphs the lower and upper bounds differ; note that in these cases the Schmidt measure can also take non-integer values in $\log_2\{1,\ldots,2^{N}\}$. Moreover, note that only the graphs number $8$ and $19$ have maximal Schmidt rank with respect to all bi-partite splits. Entanglement here is distributed symmetrically between all parties, which makes it ``difficult'' to disentangle the state by few measurements. From this one can understand why the gap between the lower and upper bound occurs in such cases. As discussed above, of all graph codes with fewer than $7$ vertices only these two are candidates for the strongly error detecting graph codes introduced in ref.~\cite{Schlinge02a}.
\begin{wrapfigure}[13]{r}{0.4\textwidth}
\vspace{-0.5cm}
\includegraphics[width=0.4\textwidth]{FIG16.eps}
\caption{\label{fig:bee} \footnotesize Resource graph state for the concatenated $[7,1,3]$-CSS code.}
\end{wrapfigure}
In the remainder we will briefly discuss two examples which were already introduced in sec.~\ref{GS_Examples} in the context of quantum error correction and the one-way model for quantum computation, namely the resource for the concatenated $[7,1,3]$-CSS code and the graph that is used to realize the QFT on three qubits in the one-way quantum computer. The vision behind this is to flesh out the notion of entanglement as an algorithmic resource, as it has been put forward in refs.~\cite{OneWay1,OneWay2,OneWay3,OneWay5}.
\vspace{0.1cm}
{\em Example 1: Concatenated $[7,1,3]$-CSS-code.}\index{CSS code}\index{quantum error correcting code (QEC)}\index{stabilizer code}
As discussed in sec.~\ref{Application_QEC} the graph $G$ depicted in Fig.~\ref{fig:bee} represents an
encoding procedure for the concatenated $[7,1,3]$-CSS-code. The corresponding graph state has Schmidt measure $28$. For encoding, seven $\sigma_x$-measurements, at all vertices of the inner square except $\circ$, have to be performed. The resulting graph $G'$, obtained without measuring the vertex $\circ$, represents the resource for the
alternative encoding procedure. It has maximal Schmidt measure $25$, whereas the corresponding $0$ and $1$ code words have Schmidt measure $24$. They can be obtained with probability $1/2$ from $|G'\rangle$ by a $\sigma_z$-measurement at the vertex $\circ$.
{\em Example 2: Quantum Fourier Transform (QFT) on $3$ qubits.}\index{quantum Fourier transformation (QFT)}
\begin{wrapfigure}[21]{r}{0.4\textwidth}
\vspace{-0.8cm}
\begin{center}
\includegraphics[width=0.4\textwidth]{FIG17New.eps}
\end{center}
\caption{\label{fig:3QFT}\footnotesize Graphs associated with the QFT on 3 qubits in the one-way quantum computer. The boxes denote the input (left) and output (right) vertices.
}
\end{wrapfigure}
The graph No.\ 1 in fig.~\ref{fig:3QFT} is a
simple example of an entangled graph state as it occurs in the
one-way computer of refs.~\cite{OneWay1,OneWay2,OneWay3,OneWay5}. This specific
example represents the initial resource (part of a cluster)
required for the quantum Fourier transform (QFT) on 3 qubits
\cite{OneWay1,OneWay2,OneWay3,OneWay5}. It has Schmidt measure $15$, where the partition
\begin{equation}
A=\{2, 4, 7, 9, 11, 13, 15, 18, 20, 22, 24, 26, 28, 30, 32\}
\nonumber
\end{equation}
is a minimal vertex cover with maximal Schmidt rank. In the process
of performing the QFT, all vertices except the output vertices
$5,16,33$ are measured locally. During this process, the
entanglement of the resource state (with respect to every
partitioning) can only decrease. Similarly to the graph state vector
$|G'\rangle$ obtained from fig.~\ref{fig:bee}, graph No.\ 2
represents the input-independent resource needed for the essential
(non-Clifford) part of the QFT protocol \cite{OneWay1,OneWay2,OneWay3,OneWay5}. It has
Schmidt measure $5$, where the partition $A=\{2, 9, 10, 11, 15\}$
now provides a minimal vertex cover with maximal Schmidt rank.
\index{Schmidt measure $\text{E}_S$!for graph states|)}
\section{Weighted graph states}\label{WeightedGS}\index{weighted graph state \texttt{"|}$G\rangle$|(}\index{weighted graph}
In this section we extend the concept of a graph describing some state to the class of weighted graph states \cite{He04,Du05a}. We return to the definition of graph states in terms of the underlying interaction pattern. In sec.~\ref{DefOfGS_Int} it was shown that any interaction pattern for which the temporal order of the interactions between the particles is irrelevant, and which thus can be described by a graph, can only contain interactions $H_{ab}$ that are, up to local $z$-rotations, Ising-type interactions, or likewise are given by the phase gate Hamiltonian
$H_{ab} = |1\rangle^a\langle1|\otimes |1\rangle^b\langle1|$. In order to describe the state by a simple graph, we fixed the interaction time for the particles to a phase $\pi$ in the previous sections.
We will now allow the particles to interact according to the same Hamiltonian $H_{ab}$ but for different interaction times $\varphi_{ab}$. This corresponds to the situation of a disordered system as it occurs e.g. in a spin glass or semi--quantal Boltzmann gas as described below. The interaction pattern is now summarized by a weighted graph, in which every edge is specified by a phase $\varphi_{ab}$ corresponding to the time the particles $a$ and $b$ have interacted.
The {\em weighted graph state} $|G\rangle$ is thus given by
\begin{equation}
|G\rangle = \prod_{\{a,b\} \in E} U_{ab} |+\rangle^{V} \label{wgraph}
\end{equation}
where the operations $U_{ab}$ depend on the interaction phases $\varphi_{ab}$:
\begin{equation}
U_{ab}:= e^{-i \varphi_{ab} H_{ab}} = e^{-i \frac{\varphi_{ab}}{4} \left(\mathbf{1}^{a}-\sigma_z^{a}\right)\otimes\left(\mathbf{1}^{b}-\sigma_z^{b}\right)}.
\end{equation}\index{Ising interaction $H^I_{ab}$, $U^I_{ab}$}
The corresponding adjacency matrix $\mathbf{\Gamma}$\index{adjacency matrix $\mathbf{\Gamma}$} for this weighted graph collects the weights $\Gamma_{ab}=\varphi_{ab}=\varphi_{ba}$ and gives rise to a concise decomposition of the weighted graph states with respect to the standard basis $|W\rangle_z=|W_1\rangle^1\cdots|W_N\rangle^N$ ($W=(W_1,\ldots,W_N)\in \mathbb{F}_2^N$) \cite{CDHB05}:
\begin{equation}\label{WGSdecomp} |G\rangle \,= \, 2^{-\frac{N}{2}}\, \sum_{W\subseteq V} \prod_{\{a,b\} \in E} U_{ab} |W\rangle_z \, = \, 2^{-\frac{N}{2}}\, \sum_{W\subseteq V} e^{i \frac{1}{2} W \cdot \mathbf{\Gamma} \cdot W} |W\rangle_z\; .\end{equation}
This easily follows for each basis state $|W\rangle_z$ by induction over the involved interaction unitaries $U_{ab}$ using the fact that \begin{equation} U_{ab}|W_a\rangle^a|W_b\rangle^b=e^{i\varphi_{ab}W_aW_b}|W_a\rangle^a|W_b\rangle^b=e^{i\frac{1}{2}\left(W_a\varphi_{ab}W_b + W_b\varphi_{ba}W_a\right)}|W_a\rangle^a|W_b\rangle^b\; . \end{equation}
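As a concrete numerical check of eq.~(\ref{WGSdecomp}), the state vector of a small weighted graph state can be assembled directly from the matrix of interaction phases. This is a minimal sketch of ours (the helper name and the example phases are hypothetical):
\begin{verbatim}
import numpy as np
from itertools import product

def weighted_graph_state(Gamma):
    """|G> = 2^{-N/2} sum_W exp(i W.Gamma.W / 2) |W>_z, eq. (WGSdecomp).
    Gamma is the symmetric matrix of interaction phases phi_ab."""
    N = Gamma.shape[0]
    W = np.array(list(product([0, 1], repeat=N)))     # all 2^N labels W
    phases = 0.5 * np.einsum('wi,ij,wj->w', W, Gamma, W)
    return np.exp(1j * phases) / 2 ** (N / 2)

# Three qubits interacting for unequal times (phases 0.7 and 2.1)
Gamma = np.array([[0.0, 0.7, 0.0],
                  [0.7, 0.0, 2.1],
                  [0.0, 2.1, 0.0]])
psi = weighted_graph_state(Gamma)
print(np.vdot(psi, psi).real)     # norm: 1.0
\end{verbatim}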
We remark that, for any weighted graph state $|G\rangle$, the set of states \begin{equation}|W\rangle = \sigma_z^W |G\rangle
\end{equation} forms a basis for $({\mathbb{C}}^2)^{V}$. This can most easily be seen by realizing that $\sigma_z^W$ commutes with $U_{ab}$, and that $\sigma_z^W|+\rangle^{V}$ gives rise to an orthonormal basis consisting of pure product states. The application of $\prod_{\{a,b\} \in E} U_{ab}$ transforms this orthonormal basis into a basis of weighted graph states, where all basis states have equivalent entanglement properties, since they are connected by local unitary operations.
In contrast to a straightforward extension of the interaction picture to the case of weighted graph states, no such generalization of the stabilizer formalism in terms of generators within the Pauli group is possible. This implies that many results obtained in the previous sections no longer hold for weighted graph states. Nevertheless, one is still able to show \cite{CDHB05} that entanglement in a weighted graph state is closely related to the connectivity properties of the underlying graph. More precisely, the weighted graph state is entangled (non-separable) with respect to the bi-partition $(A,B)$ iff there are non-vanishing interaction phases $\varphi_{ab}\neq 0$ for some $a\in A$ and $b\in B$ between these partitions\index{connected graph (state)}. This is because the rank of the reduced density matrix as computed in Proposition~\ref{ReducedRhoWGS} below is larger than one\footnote{More precisely, for any $\varphi_{ab}\neq 0$ the rows according to index $\emptyset$ and $a\equiv\{a\}$ differ: We find for the matrix elements $\frac{C_{\emptyset \emptyset}}{C_{a \emptyset}} \neq \frac{C_{\emptyset a}}{C_{a a}}$, since $C_{\emptyset \emptyset}=C_{aa}=1$ and $C_{\emptyset a}= C_{a \emptyset} = e^{i\sum_{b\in N_a}\varphi_{ab}/2}\prod_{b\in N_a}\cos[\frac{\varphi_{ab}}{2}] $.} in this case. Moreover, between two arbitrary parties $A$ and $B$ entanglement can be created (localized) by performing local operations on the remaining vertices (i.e., in $V\setminus (A\cup B)$) iff the parties are connected by a path with non-vanishing interaction phases\index{localizable entanglement $\text{LE}^{ab}$}. The localizable entanglement is not zero in these cases, since the protocol in Proposition~\ref{locEntGS} of sec.~\ref{Corr} can also be applied to weighted graph states in order to reveal entanglement between some vertices $a\in A$ and $b\in B$ \cite{CDHB05}.
Moreover for the following analysis of entanglement present in weighted graph states it is crucial that the reduced density matrices of these states can still be determined efficiently:
{\proposition[{\bf Reduced state for weighted graph states}]\label{ReducedRhoWGS}\index{reduced state $\rho_G^A$}
Let $A\subseteq V$ be a subset of vertices for a weighted graph $G$ and $B=V\setminus A$ the corresponding complement in $V$. Then the reduced state $\rho_G^A =\text{tr}_B(|G\rangle\langle G|)$ is given by \begin{equation}\label{rhoversustilderho} \rho_G^A = \prod_{\{a,b\} \in E_A} U_{ab} \tilde \rho_G^A U_{ab}^\dagger\; ,\end{equation} where $E_A=E\cap (A\times A)$ denotes the set of edges within $A$ and
\begin{equation}\label{ReducedWGS} \tilde\rho_G^A = \frac{1}{2^{|A|}}\,\sum_{A_1,A_2\subseteq A} C_{A_1A_2} |A_1\rangle_z^A\langle A_2| \;.\end{equation}
The matrix elements are
\begin{eqnarray}\label{CorrReducedWGS} C_{A_1A_2} &=& \frac{1}{2^{|B|}}\sum_{B'\subseteq B} e^{i (A_1-A_2) \cdot \mathbf{\Gamma}'\cdot B'} \nonumber \\
&=& e^{i \frac{1}{2} \sum_{b \in B} (A_1-A_2)\cdot \Gamma_b'}\prod_{b \in B} \cos\left[\frac{1}{2}(A_1-A_2)\cdot \Gamma_b'\right] \; .\end{eqnarray}
Here $\Gamma_b'$ denotes the $b$-th column of the matrix $\mathbf{\Gamma}'=\mathbf{\Gamma}_{AB}$ representing the edges $\{a,b\}$ with weights $\varphi_{ab}$ between the partitions $A$ and $B$ (see eq.~(\ref{Gamma for bi-partition})).
}
{\em Proof:}
From elementary facts\footnote{I.e., $\text{tr}_B [C^A\otimes U^B \rho D^A\otimes (U^B)^\dagger ] = C \,\text{tr}_B(\rho)\, D$ for arbitrary unitaries $U$ and matrices $C$, $D$.} about the partial trace $\rho_G^A =\text{tr}_B(|G\rangle\langle G|)$ it follows that, for the computation, we might as well apply the interaction unitaries $U_{aa'}$ in eq.~(\ref{wgraph}) acting solely on vertices in $A$ after the partial trace and neglect all unitaries acting only on $B$. Thus $\tilde\rho_G^A $ in eq.~(\ref{rhoversustilderho}) is the reduced density matrix corresponding to the graph $G_{AB}$ with adjacency matrix $\mathbf{\Gamma}'=\mathbf{\Gamma}_{AB}$. In order to determine $\tilde\rho_G^A $ we split up the basis vectors $|W\rangle$ in eq.~(\ref{WGSdecomp}), e.g. $|W_1\rangle^V=|A_1\rangle^A|B_1\rangle^B$, according to the partitioning $(A,B)$ and compute
\begin{eqnarray} \tilde\rho_G^A & = & \frac{1}{2^N} \, \text{tr}_B\,\left[ \sum_{W_1,W_2 \subseteq V} e^{i \frac{1}{2}\left( W_1 \cdot \mathbf{\Gamma}' \cdot W_1 - W_2 \cdot \mathbf{\Gamma}' \cdot W_2\right)} |W_1\rangle_z\langle W_2|\right]\nonumber \\ &=& \frac{1}{2^{|A|}}\,\sum_{A_1,A_2\subseteq A} \left(\frac{1}{2^{|B|}}\sum_{B'\subseteq B} e^{i (A_1-A_2) \cdot \mathbf{\Gamma}'\cdot B'}\right) |A_1\rangle_z^A\langle A_2| \; .\end{eqnarray}
The matrix element $C_{A_1A_2}$ can be further simplified as stated in eq.~(\ref{CorrReducedWGS}):
\begin{eqnarray} C_{A_1A_2} & = &\frac{1}{2^{|B|}}\sum_{B'\subseteq B} \prod_{b \in B'} e^{i (A_1-A_2) \cdot \mathbf{\Gamma}'_b} = \frac{1}{2^{|B|}}\prod_{b \in B} \left( 1 + e^{i (A_1-A_2) \cdot \mathbf{\Gamma}'_b}\right) \\
& = & e^{i \frac{1}{2} \sum_{b \in B} (A_1-A_2)\cdot \Gamma_b'}\prod_{b \in B} \cos\left[\frac{1}{2}(A_1-A_2)\cdot \Gamma_b'\right]
\end{eqnarray}
using $\frac{1}{2}(1+e^{i\phi})=e^{i\frac{\phi}{2}}\cos(\frac{\phi}{2})$.
\hfill\fbox\\\medskip
Since the entanglement properties between the partitions $(A,B)$ are invariant under local unitary operations, we can disregard the unitaries remaining on $A$ and thus directly examine $\tilde\rho_G^A$. From eq.~(\ref{CorrReducedWGS}) one sees that the total effect of the interactions with particles in $B$ on the matrix elements (coherences) $C_{A_1A_2}$ consists in multiplying the effects for each individual particle $b\in B$. Thus $\tilde\rho_G^A$ can alternatively be obtained as the Hadamard product\footnote{I.e., componentwise matrix multiplication of matrices written in the standard basis.}\index{Hadamard product} of the reduced states $\tilde\rho_{G_b}^A$ due to the sole effect of particle $b\in B$, i.e., according to the induced graph $G_b:=G[A\cup b]$ on the vertex $b$ and all vertices within $A$.
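For concreteness, eq.~(\ref{CorrReducedWGS}) can be turned into a few lines of code. The sketch below (our own illustration; names are hypothetical) computes $\tilde\rho_G^A$ from the phase matrix $\mathbf{\Gamma}_{AB}$ alone, which suffices for all entanglement properties between $A$ and $B$, since the unitaries omitted in eq.~(\ref{rhoversustilderho}) act locally on $A$:
\begin{verbatim}
import numpy as np
from itertools import product

def reduced_state(gamma_ab):
    """tilde-rho_G^A of a weighted graph state via eq. (CorrReducedWGS).
    gamma_ab: (|A|, |B|) array of phases phi_ab between A and B = V minus A.
    The cost grows as 4^|A| but only linearly with |B|."""
    nA, nB = gamma_ab.shape
    subsets = np.array(list(product([0, 1], repeat=nA)))  # all A1 in A
    rho = np.zeros((2 ** nA, 2 ** nA), dtype=complex)
    for i, A1 in enumerate(subsets):
        for j, A2 in enumerate(subsets):
            phi = (A1 - A2) @ gamma_ab   # (A1 - A2).Gamma'_b for each b in B
            C = np.exp(0.5j * phi.sum()) * np.cos(phi / 2.0).prod()
            rho[i, j] = C / 2 ** nA      # 1/2^|A| normalisation
    return rho

# |A| = 2 spins coupled to |B| = 1000 spins with random phases
rng = np.random.default_rng(1)
rho = reduced_state(rng.uniform(0.0, np.pi, size=(2, 1000)))
print(np.trace(rho).real)                # 1.0
\end{verbatim}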
Note that the computational effort to calculate the reduced density matrix scales exponentially in $|A|$ but only linearly in the number $|B|$ of particles in the remaining system, while for general pure states the computation time and memory cost increase exponentially with the total number of particles $N$. Hence all quantities that depend on the reduced density operator of a few particles can be determined efficiently. For instance, from $\rho_G^A$ of one and two qubits, we can calculate all two-point (and also higher order) correlation functions $\text{Q}_{ij}^{ab}$,
\begin{equation}
\text{Q}_{ij}^{ab}=\langle\sigma_i^{a}\sigma_j^{b}\rangle- \langle\sigma_i^{a}\rangle\langle\sigma_j^{b}\rangle \hspace{1cm} i,j=1,2,3,
\end{equation}
the entanglement of formation\footnote{For a two-qubit mixed state $\rho^{ab}$ the entanglement of formation $\text{E}_F(\rho)=f(C(\rho))$ is related to the concurrence $C(\rho):=\max\{0,\lambda_1-\lambda_2-\lambda_3-\lambda_4\}$, where $\lambda_i$ are the eigenvalues of the Hermitian matrix $(\sqrt{\rho}(\sigma_y^a\sigma_y^b\rho^*\sigma_y^a\sigma_y^b)\sqrt{\rho})^{1/2}$
in decreasing order, by some monotonically increasing function $f$ \cite{Wooters98}.} between pairs of particles, as well as lower and upper bounds on the localizable entanglement $\text{LE}^{ab}$ \cite{Frank} that was already discussed in sec.~\ref{Corr} for simple graph states. Note that the maximal classical correlation $\text{Q}_{\max}^{ab}$\index{maximal classical correlation $\text{Q}_\text{max}^{ab}$}\index{correlation function $\text{Q}^{ab}_{ij}$} between two particles is given by the largest singular value of the matrix $\text{Q}_{ij}^{ab}$ \cite{Frank}. Recall that the localizable entanglement $\text{LE}^{ab}$ is the maximum amount of entanglement that can be established between a pair of particles $a,b$, on average, by performing local measurements on all other particles. Moreover the relation $\text{Q}_\text{max}^{ab}\leq \text{LE}^{ab}\leq \text{AE}^{ab}$ holds \cite{Frank}, where $\text{AE}^{ab}$ is the concurrence of assistance\index{entanglement of assistance $\text{AE}^{ab}$} \cite{Laustsen03}.
With an efficient calculation of the reduced state\index{reduced state $\rho_G^A$} we can also determine the {\em entropy}\footnote{Note that for pure bi-partite states the entanglement of formation and distillation both are given by the entropy of entanglement $S_A$. } {\em of bi-partite entanglement}\index{entropy of entanglement ${S}_{A}$}\index{entanglement measure!entropy ${S}_{A}$} \begin{equation} {S}_{A}(|G\rangle) := S(\rho^A_G)\equiv - \text{tr} [\rho^A_G\log_2(\rho^A_G)] \end{equation} between a small number of vertices $A$ and the rest, as well as the multi-partite entanglement measure $E_{\rm MW}$\index{entanglement measure!Meyer Wallach $\text{E}_{\rm MW}$} proposed in ref.~\cite{Meyer,Br03}. This measure $\text{E}_{\rm MW}$ is given by $\text{E}_{\rm MW} = 2[1-\frac{1}{N}\sum_{a\in V} \text{tr}({\rho^a_G}^2)]$ \cite{Meyer,Br03}.
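Both quantities are immediate once the reduced states are available; the following continues the sketch above (again our own illustration, reusing the hypothetical \texttt{reduced\_state} helper):
\begin{verbatim}
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -tr[rho log2 rho]."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def meyer_wallach(Gamma):
    """E_MW = 2 [1 - (1/N) sum_a tr(rho_a^2)] for a weighted graph state."""
    N = Gamma.shape[0]
    purities = []
    for a in range(N):
        rest = [b for b in range(N) if b != a]
        rho_a = reduced_state(Gamma[[a]][:, rest])   # single vertex, |A| = 1
        purities.append(np.trace(rho_a @ rho_a).real)
    return 2.0 * (1.0 - np.mean(purities))
\end{verbatim}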
\index{Valence Bond Solids (VBS)|)}
This method for calculating reduced density matrices can be readily extended to the case in which the particles are initially prepared in an arbitrary product state $\bigotimes_{a\in V} \left(\alpha_a|0\rangle + \beta_a|1\rangle\right)$ instead of $|+\rangle^V$, or even to slightly entangled\footnote{I.e., pure states with small Schmidt rank with respect to the respective bi-partition.} initial states. Similarly the VBS--picture can be generalized to all states produced by the interaction Hamiltonian $H_{ab}$ acting on arbitrary product input states $|\phi_1\rangle^1\cdots |\phi_N\rangle^N$. However, further modifications of the involved states are required. The (unnormalized) VBS--like state is of the form $|\tilde \Psi\rangle=\bigotimes_{a,b}|\chi_{a^ib^j}\rangle$ with $|\chi_{a^ib^j}\rangle = U_{a^ib^j}|\sqrt[d_a]{\phi_a}\rangle^{a^i}|\sqrt[d_b]{\phi_b}\rangle^{b^j}$, where $|\phi\rangle =\alpha|0\rangle+\beta|1\rangle$ and $|\sqrt[n]{\phi}\rangle:=\sqrt[n]{\alpha}|0\rangle + \sqrt[n]{\beta}|1\rangle$. But, for the sake of simplicity, in the remainder of this article we restrict ourselves to weighted graph states, i.e., states arising from input states $|+\rangle^{V}$.
Since the class of weighted graph states comprises particles interacting for different interaction times, it provides an interesting model for the study of entanglement dynamics in many--particle systems. In the remainder we will briefly review a few results about the static and dynamic entanglement properties of spin lattices \footnote{The investigation of entanglement properties of strongly interacting many body systems has proven to be a fruitful approach.
Clearly, the ground
states of interacting many-body systems at zero temperature are
correlated. These correlations are not only reflected by
scaling laws for two point correlation functions:
In fact, it turns out that characteristic scaling
laws concerning ground state entanglement can be established,
reminiscent of the behavior of such
two point correlation functions
\cite{Nielsen,Osterloh,HC,Latorre,Frank,Vi03,Pl04}.
This observation refers on the one hand to
entanglement properties
of two distinguished constituents of a many-body system.
On the other hand, it holds for
{\it block entanglement} of a number $L$ of
consecutive constituents and the rest of an
infinite chain \cite{Latorre,HC}. Notably, the specifics of
the scaling of entanglement were shown to indicate
quantum phase transitions
\cite{Nielsen,Osterloh,Latorre,Frank,Vi03}.
Long--range correlations can even be found in systems with gapped Hamiltonians in the sense of a divergent characteristic length
of the {\it localizable entanglement}
\cite{Verstraete04b,Ve03b}.}. For studies on spin
gases, see refs.~\cite{Du05a,CDHB05}; these results are based on the efficient method to calculate the reduced density operators of a small number
$L\leq 10$ of arbitrary spins as described above.
In harmonic systems \cite{HC,Pl04}, it may be remarked,
also higher-dimensional
systems and systems of non-integer dimension
can be studied, leading in particular to
`area-theorems', i.e., statements on the relationship between
the degree of entanglement of a distinguished region of
the full lattice system and its boundary area. The statement that gapped harmonic
systems indeed imply the validity of an area theorem holds
true even for harmonic systems
defined on general graphs \cite{Gap}.
Let us start our considerations with the case where the graph has some lattice structure\index{lattice graph (state)}. Thus we consider $N$ spin 1/2 systems (qubits) with pairwise interactions, described by an Ising--type Hamiltonian (see eq.~(\ref{PhaseGate}))
\begin{equation}\label{InteractionHamiltonian}
H=\sum_{a < b} f(a,b) \frac{1}{4}(\mathbf{1}-\sigma_z^a)\otimes(\mathbf{1}-\sigma_z^b).
\end{equation}
We assume that the spins are arranged on a $d$--dimensional lattice with fixed geometry and are initially completely polarized in $x$--direction, i.e., $|\Psi_0\rangle = |+\rangle^{V}$. As indicated in sec.~\ref{WeightedGS} the methods we develop can also describe disordered systems with random coefficients $f(a,b)$ and can take arbitrary (product) input states into account. We are interested in (entanglement) properties of the state
\begin{equation}\label{UnitaryEvolution}
|\Psi_t\rangle := e^{-itH}|\Psi_0\rangle.
\end{equation}
We consider the situation where the coupling between spins obeys a certain distance law, in the sense that the coefficients $f(a,b)$, describing the strength of the coupling, only depend on the distance $r_{ab}:= \|a-b\|$ between particles $a$ and $b$, i.e., $f(a,b)=f(r_{ab})$. In the example of ions stored in microtraps \cite{Ci00b,Ja02} one finds for instance $f(r_{ab})=r_{ab}^{-3}$ \cite{Ja02}.
In the following the (bi-partite) entanglement between a block of a small number $L\leq 10$ of neighboring spins and the rest of the system, as measured by the entropy of entanglement, will be denoted by $S_L=S(\rho_L)$. Clearly, $0 \leq S_L \leq L$, where $S_L=L$ indicates maximal entanglement between the block and the remaining system\index{entropy of entanglement ${S}_{A}$}.
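Before summarizing the findings, we note that such block entropies are easily evaluated numerically. The fragment below is our own sketch (reusing the hypothetical \texttt{reduced\_state} and \texttt{entropy} helpers from above), which is legitimate here because neither the couplings within the block nor those within the remainder can change $S_L$:
\begin{verbatim}
import numpy as np

def block_entropy(N, L, alpha, t=1.0):
    """S_L for the first L spins of an N-spin chain evolved under eq.
    (InteractionHamiltonian) with f(r) = r**(-alpha), phases t*f(r_ab)."""
    r = np.abs(np.subtract.outer(np.arange(L), np.arange(L, N)))
    gamma_ab = t * r.astype(float) ** (-alpha)
    return entropy(reduced_state(gamma_ab))

for alpha in (0.5, 1.5, 3.0):
    print(alpha, [round(block_entropy(60, L, alpha), 3) for L in (1, 2, 3, 4)])
\end{verbatim}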
For different distance laws, we have investigated the scaling of block-wise entanglement $S_L$ in ref.~\cite{Du05a} and observed the following:
\begin{itemize}
\item The maximal two point correlations in a spin chain decay more slowly than exponentially for all power laws $f(r_{ab})\propto r_{ab}^{-\alpha}$. Therefore, the {\em correlation length}\index{correlation length} $\xi$ and also the {\em entanglement length}\index{entanglement length} $\xi_E$ {\em diverge} \cite{Frank}. This indicates long--range quantum correlations for all power laws, as we find that only exponential fall--off functions $f(a,b)=e^{-\kappa r_{ab}}$ lead to a finite correlation length.
\item In the limit $N\to \infty$ and $L \to \infty$, the entropy $S_L$ saturates as a function of $L$ for power laws $f(r_{ab}) = r_{ab}^{-\alpha}$ with $\alpha>1$. This result generalizes to $d$--dimensional lattices. When considering blocks of $L$ particles contained in a $d$--dimensional ball, $S_L$ can at most grow like the volume of that ball, whereas we find that for $\alpha > d$ the upper bound on $S_L$ grows at most like the surface of the ball.
\item With the results of the previous section about the Schmidt rank one can also examine the special case of simple graph states analytically. Here, the interaction Hamiltonian has a fixed interaction length $\lambda$ and constant interaction strength, $f(r_{ab}) = 1$ if $r_{ab} \leq \lambda$ and zero otherwise. One finds for the resulting states $|\Psi_{\pi}\rangle$, i.e., for $t=\pi$, that the entropy $S_L$ equals $L$ if the radius of the hypersphere is smaller than $\lambda$. Otherwise, $S_L$ scales essentially like the volume of a surface shell with thickness $\lambda$, that is, $S_L \propto \lambda L^{(d-1)/d}$.
\item One can also consider the {\em dynamics} of entanglement, that is, the change of entanglement and correlations of the state $|\Psi_t\rangle$ with time. The scaling of the entropy with the block size $L$ is essentially still governed by the specific form of the distance dependence for any finite $t$, because infinitely remote regions still influence a subsystem in a similar way as discussed before. For large times $t$, more and more of the interaction phases $\varphi_{ab}=f(a,b)t$ start to oscillate (as they are effectively taken modulo $\pi$) and approach in the limit of large $t$ a (quasi)--random distribution. In the limit of an infinite chain and $t \to \infty$, the entropy of the reduced density operator of any finite group $A$ of particles is maximal, $S(\rho_A)=|A|$. This can be seen by considering the off-diagonal elements of reduced density operators, which all contain infinite products of cosines of (sums of) {\em random} angles. All these products tend to zero for $N\to \infty$, leading to a maximally mixed state.
\end{itemize}
\begin{wrapfigure}[9]{r}{0.45\textwidth}
\vspace{-0.3cm}
\includegraphics[width=0.45\textwidth]{BoltzmannGas_picture1.eps}
\caption{\label{Fig:BoltzmannGas} Schematic drawing of a semi-quantal Boltzmann gas \cite{CDHB05}. }
\end{wrapfigure}
Finally we will slightly change the physical setup and consider a {\em spin gas}, that is, a system of interacting spins with coupling strengths that are now {\em stochastic} functions in time \cite{CDHB05}. One example is given by a {\em semi-quantal Boltzmann gas} of particles, where each particle follows a classical trajectory but carries a quantum degree of freedom that is affected whenever two particles collide. During a collision of two particles, the internal degrees of freedom interact and can become entangled. This model was introduced in ref.~\cite{He04} and closely analyzed in ref.~\cite{CDHB05}. There an ideal gas was considered in thermal equilibrium with elastic collisions, whose mean free path is comparable to the size of the enclosing volume. For a fixed interaction pattern of the particles the time dependence of the weighted graph state $|\Psi_t\rangle$, representing the internal degrees of freedom, is given by the adjacency matrix $\mathbf{\Gamma}(t)$ as a function of time. From this the corresponding statistical state can be obtained through an averaging process by assigning a probability to every collision history and assuming that the colliding particles acquire an interaction phase $\varphi_{ab}$ inversely proportional to their relative velocities. Thus the weighted graph state, corresponding to the resulting statistical scenario, is specified by the density, the volume and the temperature of the underlying Boltzmann gas, as well as by the initial spatial distribution of the particles, which was chosen to be homogeneous. Whereas for low temperatures $T$ the initial entanglement generation rate is proportional to $\sqrt{T}$, in the case of high temperatures the entanglement generation is governed by the slow collision events and proportional to $\sqrt{T^{-1}}$ \cite{CDHB05}. Moreover, in the long time limit and for sufficiently many particles the state $|\Psi_\infty \rangle$ (equilibrium state) is maximally entangled with respect to all bi-partitions $(A,B)$, i.e., $S_A\simeq N_A$.
In \cite{CDHB05} another model is also discussed, where the gas particles hop randomly between different sites of a lattice. This {\em semi-quantal lattice gas}\index{lattice gas} allows the (numerical) study of highly correlated classical trajectories. This framework provides microscopic models to compare Markovian and non-Markovian as well as correlated and uncorrelated decoherence\index{decoherence} processes. Non-Markovian decoherence\footnote{Here the decoherence of the state for a set of particles is due to collisions with the remaining particles, after tracing out the environmental degrees of freedom.} of a set of particles is dominated by (repeated) collisions with the same set of environmental particles (high density), whereas Markovian noise is mainly driven by independent collisions with different particles (dilute gas). Similarly, correlated decoherence processes can be modeled by collisions of some particles with the same particle from the environment.
We also remark that the fact that for weighted graph states reduced density operators of (few) particles can be calculated efficiently can be used to develop a novel method for ground state approximation of strongly correlated quantum systems \cite{APDB05}. In particular, (superpositions of) locally transformed weighted graph states can be used to approximate the ground state of a strongly correlated system in any spatial geometry or dimension. As expectation values of local observables, including the energy, can in this case be efficiently calculated, a (numerical) variation over weighted graph states can be performed and a good approximation to the ground state may be obtained. The fact that weighted graph states exhibit rich entanglement features, including infinite correlation and entanglement length, maximal localizable entanglement and maximal block--wise entanglement, may even allow for a successful treatment of critical systems or systems in dimension $d\geq 2$.
\section{Graph states in the presence of decoherence}\label{GS_decoherence}
\index{graph diagonal state|(}
Let us now return to the case of simple graphs. Real implementations of graph states are subject to decoherence\index{decoherence}\index{noisy graph states}. Thus the corresponding real state $\rho$ is in general some mixed state that is, depending on the quality of the preparation procedure, more or less close\footnote{E.g. in terms of the fidelity $\langle G|\rho|G\rangle$.} to the ideal pure state $|G\rangle$.
First, we study the stability of entanglement in graph states when they are exposed to noise and discuss the lifetime of entanglement with respect to the number of particles in the system.
Finally, a method to overcome the effect of decoherence is introduced, namely {\it entanglement purification}, which was already mentioned in sec.~\ref{EPP_Blind}, where we also indicate possible applications to fault tolerant quantum computation (see also sec.~\ref{one-way-QC}) and to secret sharing. In this section we will focus on one of these applications and show how the purification protocols can be modified such that the distribution of the (two-colorable) graph states can remain unknown to the different parties ({\it blind purification}).
\index{graph state preparation}
Let us start with a few remarks on the noise models that we consider in the following.
An arbitrary noise process acting on an $N$-qubit system, which ideally is prepared in the pure graph state $\rho_G=|G\rangle\langle G|$, is frequently represented as a completely positive map (CPM) $\mathcal{D}$ \index{completely positive maps (CPM)} that can be decomposed in terms of Pauli matrices acting from left and right:
\begin{equation}\label{ArbitrayDecohNqubits} \rho\; =\;\mathcal{D}(\rho_G) \;:=\; \sum_{\genfrac{}{}{0pt}{}{k_i,l_i =0}{(i=1,\ldots,N)}}^3\,\mathbf{E}_{\genfrac{}{}{0pt}{}{k_1,\ldots, k_N }{l_1,\ldots,l_N}}\, \sigma^1_{k_1} \cdots\sigma_{k_N}^N \,\rho_G\, \sigma^1_{l_1} \cdots\sigma_{l_N}^N \;.\end{equation}
Under certain conditions on the noise process \cite{GKS}, the completely positive maps $\mathcal{D}=\mathcal{D}_t$ are the solutions to a master equation describing the dynamics of the noise process in time. But, in the following, we keep our considerations on the level of CPMs, whose Kraus coefficient matrix may be time dependent, i.e., $\mathbf{E}_{\genfrac{}{}{0pt}{}{k_1,\ldots, k_N }{l_1,\ldots,l_N}}=\mathbf{E}_{\genfrac{}{}{0pt}{}{k_1,\ldots, k_N }{l_1,\ldots,l_N}}(t)$, if some dynamical description\footnote{For details we refer e.g. to ref.~\cite{Du05b}.} is imposed.
In \cite{Du05b} it is shown that by randomly choosing some Pauli matrix for each individual particle and applying them {\em before} and {\em after} the actual decoherence process occurs, an arbitrary noise process can be `depolarized' such that the overall noise process is described by a tensor $\mathbf{E}_{\genfrac{}{}{0pt}{}{k_1,\ldots, k_N}{l_1,\ldots, l_N}}$ that is diagonal, i.e.
\begin{equation}\label{PauliDecohNqubits} \rho'\; =\;\mathcal{D}'(\rho_G) \;=\; \sum_{\genfrac{}{}{0pt}{}{k_i=0 }{(i=1,\ldots,N)}}^3\,\mathbf{E}_{k_1,\ldots, k_N }\, \sigma^1_{k_1} \cdots\sigma_{k_N}^N \,\rho_G\, \sigma^1_{k_1} \cdots\sigma_{k_N}^N \;. \end{equation}
We refer to such channels $\mathcal{D}'$, which are diagonal when decomposed with respect to the Pauli matrices, as ($N$-party) {\em Pauli channels}\index{Pauli channel}. For graph states we can make use of the following relations\footnote{These relations follow directly from the stabilizing properties of the correlation operators $K_a$ corresponding to the graph state $|G\rangle$.}
\begin{equation} \sigma_x^a |G\rangle \;=\; \sigma_z^{N_a} |G\rangle \hspace{2cm} \sigma_y^a |G\rangle \;=\; \sigma_z^{N_a+a} |G\rangle\; \end{equation} in order to rewrite all $\sigma_x$- and $\sigma_y$-operators in the decomposition eq.~(\ref{PauliDecohNqubits}) in terms of $\sigma_z$-operators. In this way one verifies that the resulting state $\rho'$ in eq.~(\ref{PauliDecohNqubits}) is diagonal in the graph state basis $|U\rangle_G =\sigma_z^U |G\rangle$\index{graph state basis \texttt{"|}$U\rangle_G$} (see Proposition~\ref{Graph state basis}). In the following we call such states {\em graph diagonal states}. In order to achieve the simplified standard form for the map describing the decoherence process, i.e., for arbitrary input states, one has to perform some operation {\em before} the actual noise process has affected the state. Considering noisy preparation procedures for graph states, a `before' does not make much sense and thus the resulting state $\mathcal{D}(\rho_G)$ can hardly be regarded as some standard form for an imperfectly prepared graph state $\rho$. But it was shown in ref.~\cite{Du03a} that the resulting standard form can nevertheless be obtained by performing another twirling operation {\em after} the (noisy) preparation procedure that gives rise to the same depolarization for the corresponding mixed state $\rho$ \cite{Du03a,MarcPhD}:
{\proposition[{\bf Graph-diagonal states as standard forms for mixed states}]\label{GraphTwirling}\index{twirling!in graph state basis} Any mixed state $\rho$ of $N$ qubits can be depolarized into some {\em graph diagonal state} \begin{equation} \label{RhoG}
\rho' \;= \; \sum_{U\subseteq V} \, \lambda_{U}\, |U\rangle_G \langle U | \end{equation} for some graph $G$ with $N$ vertices by uniformly choosing the $2^N$ stabilizer elements $\sigma\in \mathcal{S}$ and applying them to the state, i.e., the corresponding twirling protocol is
\begin{equation} \rho' \;=\; \frac{1}{2^N}\, \sum_{\sigma \in \mathcal{S}}\, \sigma \rho \, \sigma \; .\end{equation}
}
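This twirl is easy to check numerically for a small graph. The sketch below (ours; all helper names are hypothetical) averages a random mixed state over all $2^N$ stabilizer elements and verifies that the result commutes with every correlation operator $K_a$, i.e., that it is diagonal in the graph state basis:
\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def kron_all(ops):
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

def stabilizer(Gamma):
    """All 2^N products of the generators K_a = X_a prod_{b in N_a} Z_b."""
    N = Gamma.shape[0]
    Ks = [kron_all([X if c == a else (Z if Gamma[a, c] else I2)
                    for c in range(N)]) for a in range(N)]
    elems = []
    for bits in product([0, 1], repeat=N):
        s = np.eye(2 ** N)
        for a, b in enumerate(bits):
            if b:
                s = s @ Ks[a]
        elems.append(s)
    return elems

def twirl(rho, S):
    """rho' = 2^{-N} sum_{sigma in S} sigma rho sigma (proposition above)."""
    return sum(s @ rho @ s for s in S) / len(S)

Gamma = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # chain on 3 vertices
S = stabilizer(Gamma)
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)); rho = M @ M.T; rho /= np.trace(rho)
rho_p = twirl(rho, S)
print(max(np.abs(s @ rho_p - rho_p @ s).max() for s in S))   # ~ 1e-15
\end{verbatim}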
For the rest of this section, we restrict ourselves to decoherence that arises if all qubits are {\em individually} (or independently) affected by noise described by the same Pauli channel
\begin{equation}\label{Dec_Pauli}
{\cal D} (\rho) = \sum_{i=0}^3 p_i\sigma_i \rho \sigma_i \hspace{0.5cm} \text{with} \hspace{0.5cm} \sum_{i=0}^3 p_i =1\, .
\end{equation}
These decoherence models are of particular interest in quantum information theory, especially in the study of fault-tolerant quantum computation, and contain for example:
\begin{itemize}
\item[1.] for \; $p_0=\frac{1+3p}{4}$\; and\; $p_1=p_2=p_3=\frac{1-p}{4}$\; the {\em depolarizing channel}\index{depolarizing channel}
\begin{equation} \mathcal{D}(\rho) \;=\; p \, \rho + (1-p)\,\frac{1}{2}\mathbf{1} \; ;\end{equation}
\item[2.] for \; $p_0=\frac{1+p}{2}$,\, $p_1=p_2=0$\; and \;$p_3= \frac{1-p}{2}$\; the {\em dephasing channel}\index{dephasing channel}
\begin{equation} \mathcal{D}(\rho) \;=\; p \, \rho + \frac{1-p}{2}\,(\rho + \sigma_z \rho \sigma_z) \; ;\end{equation}
\item[3.] for \;$p_0=\frac{1+p}{2}$, $p_2=p_3=0$\; and \;$p_1=\frac{1-p}{2} $\; the {\em bitflip channel}\index{bitflip channel}
\begin{equation} \mathcal{D}(\rho) \;=\; p \, \rho + \frac{1-p}{2}\,(\rho + \sigma_x \rho \sigma_x) \; .\end{equation}
\end{itemize}
The corresponding coefficients $\lambda_U$ are given by the following proposition \cite{Du04b}.
{\proposition[{\bf Effect of individual Pauli channels on a graph state}]\label{IndivPauliChGS}
Under decoherence described by the same individual Pauli channel $\mathcal{D}$, the graph state $|G\rangle$ transforms into a mixed state $\rho = \prod_{a \in V} {\cal D}^{a}(\rho_G)\; $ that is diagonal in the graph state basis $|U\rangle_G $ .
The diagonal elements $\lambda_U$ in eq.~(\ref{RhoG}) can be computed to be of the form\footnote{In both expressions we have made use of the notational simplifications described in sec.~\ref{GS_Notations}. For example $\mathbf{\Gamma} U'$ denotes both the set and the binary vector that is obtained by the multiplication (modulo $2$) of the adjacency matrix $\mathbf{\Gamma}$ with the binary vector corresponding to the set $U'$.
}
\begin{equation}
\label{PauliLambda}
\lambda_U \; = \; p_0^{|V|} \, \sum_{U'\subseteq V} \, q_1^{|U'\setminus (\mathbf{\Gamma} U' + U)|}\, q_2^{|U'\cap (\mathbf{\Gamma} U' + U)|} \,q_3^{| (\mathbf{\Gamma} U' +U)\setminus U'|}\; ,
\end{equation}
where $q_i:=\frac{p_i}{p_0}$ for $i=1,2,3$.
In the case of the depolarizing channel $(q:=q_1=q_2=q_3=\frac{1-p}{3p+1})$ this simplifies to
\begin{equation}
\label{DepolLambda}
\lambda_U = p_0^{|V|} \, \sum_{U'\subseteq V} \, q^{|U'\,\cup\, (\mathbf{\Gamma} U' +U)|}\; .
\end{equation}
}
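Eq.~(\ref{PauliLambda}) is straightforward to evaluate for small graphs. The following Python fragment is a sketch of ours (names hypothetical); it computes the coefficients $\lambda_U$ for a ring of four vertices under individual depolarizing noise and checks the normalization $\sum_U\lambda_U=1$:
\begin{verbatim}
import numpy as np
from itertools import product

def graph_diagonal_spectrum(Gamma, p):
    """lambda_U of eq. (PauliLambda) for individual Pauli channels
    with probabilities p = (p0, p1, p2, p3) acting on the graph state."""
    p0, p1, p2, p3 = p
    q1, q2, q3 = p1 / p0, p2 / p0, p3 / p0
    N = Gamma.shape[0]
    V = np.array(list(product([0, 1], repeat=N)))
    lam = {}
    for U in V:
        s = 0.0
        for Up in V:
            W = (Gamma @ Up + U) % 2                 # Gamma U' + U over F_2
            n1 = int(((Up == 1) & (W == 0)).sum())   # |U' minus W|
            n2 = int(((Up == 1) & (W == 1)).sum())   # |U' cap W|
            n3 = int(((Up == 0) & (W == 1)).sum())   # |W minus U'|
            s += q1 ** n1 * q2 ** n2 * q3 ** n3
        lam[tuple(U)] = p0 ** N * s
    return lam

pdep = 0.9                                  # depolarizing channel, p = 0.9
p = ((1 + 3 * pdep) / 4,) + 3 * ((1 - pdep) / 4,)
Gamma = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])  # ring on 4
lam = graph_diagonal_spectrum(Gamma, p)
print(sum(lam.values()))                    # normalisation: 1.0
\end{verbatim}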
In the next subsection, we will analyze the time dependence of the entanglement properties of the decohered state $\rho(t)$ for different initial graph states $\rho(0)=|G\rangle\langle G |$. Although the results of the following subsections can be generalized to more general decoherence models, we will restrict our review to the special case of noise that affects each particle in the graph state individually. In other words, the evolution at each qubit is described by the map ${\cal D}^a$ given by eq.~(\ref{Dec_Pauli}) with Pauli operators $\sigma_j$ acting on qubit $a$. We will be interested in the evolution of a given pure graph state of $N$ qubits under this decoherence model. That is, the initial state $|G\rangle$ suffers from decoherence and evolves in time to a mixed state $\rho(t)$ given by
\begin{equation}\label{decoh}
\rho(t) = {\cal D}^1 {\cal D}^2 \ldots {\cal D}^N |G\rangle\langle G| \; .
\end{equation}
The depolarizing channel with $p(t)=e^{-\kappa t}$ is of particular interest, since the decohered state due to an arbitrary noise channel can be further depolarized to some state, which might
also be obtained directly by some depolarizing channel. Moreover, among the stated noise models the depolarizing channel is the only channel that is basis independent, i.e., invariant under unitary transformations.
We will frequently use the Pauli channel and will describe the entanglement properties of $\rho(t)$ in terms of the parameters $p_i$. Nevertheless one has to keep in mind that the time dependence itself is already included in the parameters $p_i=p_i(t)$.
\subsection{Stability of entanglement}\label{StabilityOfGS}
Let us now examine the stability of entanglement in graph states under the influence of decoherence. As we have seen, multi-particle entanglement is a central property for many practical applications in quantum information. For all these applications it is therefore of great interest to determine the lifetime of entanglement when it is exposed to noise. A second motivation points in a more fundamental direction. With the technologies available nowadays it is possible to prepare and observe entanglement on microscopic scales; but it is also often argued that this task might become exceedingly difficult when considering a macroscopic number of particles. So this subsection addresses the question of whether multi-particle entanglement can be stable on a macroscopic level.
For the lifetime of entanglement it is not only necessary to specify the underlying decoherence model, but also the very notion of multi-particle entanglement\footnote{The lifetime discussed in this chapter differs conceptually from the often used $T_1$- or $T_2$-decoherence (or -relaxation) rates in that the latter are related to the stability of quantum coherences and classical correlations rather than to the stability of entanglement. For some interrelations between these notions see e.g. ref.~\cite{Tolkunov04}.} itself. This is mainly due to the fact that multi-party entanglement is a subtle issue in quantum information theory (see e.g. \cite{Du99,multi-party}). Apart from some special cases, the existence of an entanglement measure that is satisfying for information theoretic purposes as well as applicable and calculable for mixed states is still an open problem\footnote{We note that for systems with only a small number of qubits ($N\leq7$), quite recently \cite{Carvalho04} the effect of decoherence on GHZ- and W-states was studied in terms of an entanglement measure which is a generalization of the concurrence.}.
In the following we will therefore concentrate on the discussion of two qualitative entanglement criteria. Throughout the chapter we will consider $N$ two--level systems (qubits) with corresponding Hilbert space ${\cal H} =(\mathbb{C}^2)^{\otimes N}$. The $N$ particles are distributed among $N$ parties $1,\ldots ,N$. Starting with a pure GHZ or graph state we will consider the $N$-party separability and distillability properties of the decohered state $\rho(t)$ (see eq.~(\ref{decoh}))\index{lifetime of entanglement}:
On the one end of the scale the state $\rho(t)$ can still be
{\em $N$-party distillable entangled}\index{distillable!N-party entangled}, as is the case for the corresponding pure states in question. Here we call $\rho(t)$ $N$-party distillable if any
other true $N$-party entangled state $|\Phi\rangle$ can be obtained (distilled) asymptotically from multiple copies of $\rho$ under local operations and classical communication (LOCC) \cite{Du99,Th02}:
\begin{equation}\label{Ndistillable}
\rho^{\otimes k} \;\longrightarrow_{\text{LOCC}} \; |\Phi\rangle\langle \Phi|\; .
\end{equation}
We remark that in the multi--copy case all true $N$--party entangled states are equivalent, since they can be transformed into each other by LOCC. That is, the condition that any true $N$--party entangled state can be created can be replaced by the condition that some $N$--party entangled state, e.g. the initial pure state, can be created. Disregarding the practicability of the underlying distillation protocol, the state $\rho(t)$ is then as useful as any other entangled state and therefore can in principle be regarded as a universal resource for quantum information processing such as quantum communication.
On the other end of the scale, $\rho(t)$ might have also become completely separable or classical in the sense that it can be described by a classical mixture of product states, i.e., $\rho$ is {\em $N$-party separable}\index{separable states!N-party (completely)}, if
\begin{equation}\label{Nseparable}
\rho (t) = \sum_k\, p_k\, \rho_k^{1}\otimes \rho_k^{2}\otimes \ldots \otimes \rho_k^{N} \; .
\end{equation}
If a state is completely separable, it is no longer entangled with respect to any partitioning. In between these two extremal cases, $\rho(t)$ can contain different types of {\em blockwise entanglement}\index{blockwise entanglement}.
We can consider different partitionings of particles into $M$ groups ($M\leq N$), where each group forms a subsystem with a higher dimensional state space and consists of several particles. {\em $M$-party distillability [separability]}\index{separable states!M-party} can then be defined {\em with respect to a given partitioning} in a similar way, where the notion of {\em local} operation has to be adapted accordingly. We will call $\rho(t)$ {\em $M$-party distillable}\index{distillable!M-party entangled}, if there exists at least one partitioning, with respect to which $\rho(t)$ is $M$-party distillable.
Based on the notion of $M$--party separability and distillability, one can define the lifetime of entanglement. An $N$--party state $|\Psi\rangle\langle\Psi|$ which is subjected to decoherence for a time $t$ evolves into a mixed state $\rho(t)$. The lifetime of $N$--party distillable entanglement is given by the time after which the state $\rho(t)$ loses the property of $N$--party distillability. This implies that lower bounds on the lifetime of distillable entanglement can be obtained by showing that the state $\rho(t)$ is distillable, while an upper bound can be obtained by proving non--distillability of $\rho(t)$. When considering partitions of the system into $M$ groups, the lifetime of $M$--party entanglement with respect to a given partition is defined accordingly. We refer to the lifetime of $M$--party entanglement as the time after which $\rho(t)$ is non--distillable with respect to {\em all} $M$--party partitions. In a similar way, one can define a lifetime with respect to the separability properties of $\rho(t)$.
In order to determine entanglement properties of the mixed states in question, we will repeatedly make use of the partial transposition criterion\index{partial transposition $\rho^{T_A}$|textbf} \cite{Peres96,Ho97}, an entanglement criterion which provides necessary conditions for distillability and separability. The partial transposition is defined for bi-partite systems only, while a system can in general consist of several parties. Making use of the concept of partitionings of the system, in particular considering all bi-partitionings, one can use the partial transposition criterion also for multi-partite states. Let $A$ denote a subset of $m$ parties $a_1, \ldots ,a_m$. In general, given an operator $X$ acting on $\mathbb{C}^{d_A}\otimes\mathbb{C}^{d_B}$, we define the partial transpose of $X$ with
respect to the first subsystem in the basis
$\{|1\rangle,|2\rangle,\ldots,|d_A\rangle\}$, $X^{T_A}$, as follows:
\begin{equation}
X^{T_A} := \sum_{i,j=1}^{d_A}\sum_{k,l=1}^{d_B}
\langle i,k|X|j,l\rangle \; |j,k\rangle\langle i,l|.
\end{equation}
A Hermitian operator $X$ has a non--positive [positive] partial transpose\index{PPT (non-negative partial transpose)}\index{NPT (negative partial transpose)}
(NPT) [(PPT)] if $X^{T_A}$ is not positive [positive], respectively. That is, $X^{T_A}$ is NPT if there exists some
$|\Psi\rangle$ such that $\langle\Psi|X^{T_A}|\Psi\rangle <0$.
The positivity of the operator $\rho^{T_A}$ gives a necessary criterion for separability\index{entanglement criterion!partial transposition}, whereas the non-positivity of $\rho^{T_A}$ is necessary for the distillability of the density operator $\rho$. In particular, if a bi-partite density operator is PPT, then it is certainly not distillable \cite{Ho97}. This implies \cite{Du99} that if a multi-particle density operator $\rho$ is PPT with respect to at least one bi-partite partition, then $\rho$ is certainly not $N$--party distillable. On the other hand, positivity with respect to all bi-partite partitions is a necessary condition for $N$--party separability. In the case of two dimensional systems $\mathbb{C}^2\otimes\mathbb{C}^2$ the PPT [NPT] criterion is necessary {\it and} sufficient for separability [distillability] \cite{Peres96,Ho96}. A detailed discussion of the application of the partial transposition criterion to multi-partite systems can be found in ref.~\cite{Du99}.
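For a generic density matrix the criterion is evaluated by explicitly transposing the indices of subsystem $A$ and diagonalizing; a minimal sketch of ours (hypothetical names) in Python:
\begin{verbatim}
import numpy as np

def partial_transpose(X, dA, dB):
    """X^{T_A} on C^{dA} (x) C^{dB}: exchange the two A-indices of X."""
    X4 = X.reshape(dA, dB, dA, dB)           # indices (i, k, j, l)
    return X4.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

# The Bell state (|00> + |11>)/sqrt(2) is NPT: min eigenvalue is -1/2
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)
print(np.linalg.eigvalsh(partial_transpose(rho, 2, 2)).min())
\end{verbatim}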
Although the computation of the spectrum for the partial transposition of a general density matrix requires a numerically demanding diagonalization procedure\footnote{Note that the size of the spectrum grows exponentially with the number of particles.}, for graph diagonal states this spectrum can be directly determined from the spectrum $(\lambda_U)_{U\subseteq V}$ of the graph diagonal state itself \cite{Du04b}:
{\proposition[{\bf Partial transposition for graph diagonal states}]\index{partial transposition $\rho^{T_A}$}\label{PTofGSLemma}
For any graph diagonal state $\rho$ (\ref{RhoG}) the {\em partial transposition} $\rho^{T_A}$ with respect to some partition $A$ is again diagonal in the (same) graph state basis $|U\rangle_G$.
In order to compute the corresponding eigenvalues, let $\mathbf{\Gamma}'=\mathbf{\Gamma}_{AA^c}$ denote the adjacency matrix of the graph between the partition $A$ and its complement $A^c$ (see eq.~(\ref{Gamma for bi-partition})).
Then:
\begin{eqnarray}
\label{PTofRhoG}
\rho^{T_A} & = & \sum_{U \subseteq V} \, \lambda'_U \, |U \rangle_G\langle U| \; \; \text{with}\\
\lambda'_U & = & \frac{|\text{ker}\,\mathbf{\Gamma}'|}{2^{|A|}}\, \sum_{\genfrac{}{}{0pt}{}{(X,Y) \in }{ (\text{ker}\,\mathbf{\Gamma}')^{\bot} \times (\text{Im}\,\mathbf{\Gamma}')}} \,
(-1)^{\langle X , A_Y \rangle} \, \lambda_{\left( U + X + Y\right)} \nonumber \; ,
\end{eqnarray}
where $A_Y \subseteq A$ is any set with $\mathbf{\Gamma}' A_Y = Y$, and the kernel $\text{ker}$ or the orthocomplement $\bot$ are taken with respect to the subspace $\mathcal{P}(A)$ spanned by the sets in $A$.
}
Let us give some examples for formula (\ref{PTofRhoG}):
If $\mathbf{\Gamma}'$ is invertible, then $\text{ker}\, \mathbf{\Gamma}' =\{0\}$ and $(\text{ker}\, \mathbf{\Gamma}')^\bot =\mathcal{P}(A)$ holds.
Moreover $(\ref{PTofRhoG})$ can be simplified by parameterizing $\text{Im}\,\mathbf{\Gamma}'$ with $Y=\mathbf{\Gamma}' A_2$, where $A_2 \subseteq A$:
\begin{equation}
\lambda'_U = \frac{1}{2^{|A|}}\, \sum_{A_1,A_2 \subseteq A} \,
(-1)^{\langle A_1 , A_2 \rangle} \, \lambda_{\left( U + A_1 + \mathbf{\Gamma}' A_2\right)} \; .\end{equation}
If $A=\{a\}$ for a non-isolated vertex $a \in V$ the eigenvalues of the partial transposition with respect to $A$ are
\begin{equation}\label{PTofa}
\lambda'_U = \frac{1}{2}\, \left( \lambda_U + \lambda_{U + N_a} + \lambda_{U + a} - \lambda_{U + N_a + a}\right) \; .
\end{equation}
Similarly for the partial transposition with respect to the split $A=\{a,b\}$ versus rest, where $a,b \in V$ are two non-adjacent vertices
with linearly independent neighbor sets $N_a$ and $N_b$, one obtains:
\begin{equation}\label{PTofab}
\lambda'_U = \frac{1}{4}\, \left( \sum_{X \in \mathcal{M}_+} \, \lambda_{U+X} - \sum_{X \in \mathcal{M}_-} \, \lambda_{U+X} \right) \; ,
\end{equation}
where
\begin{eqnarray}
\mathcal{M}_+ & = & \{ \emptyset, a, b, a+b, N_a, N_b, N_a+N_b, a+N_b, b+N_a, a+b+N_a+N_b\}\; \;\text{and} \cr
\mathcal{M}_- &= &\{ a+ N_a,b+ N_b, a+N_a+N_b, b+N_a+N_b, a+b+N_a, a+b+N_b\} \; .\nonumber
\end{eqnarray}
If $a$ and $b$ are adjacent the same formula holds but with neighbor sets $N'_a=N_a\setminus b$ and $N'_b=N_b\setminus a$ restricted to $A^c$.
This procedure to compute the eigenvalues of the partial transposition does not require the diagonalization of a $2^N\times2^N$-matrix and therefore allows the evaluation of the PPT criterion with respect to different partitions, as long as the vector consisting of the initial eigenvalues $\lambda_U$ (which is already of length $2^N$) is small enough to be stored and, in the case that it occurs as the result of a Pauli channel, as long as this vector can also be computed fast enough.
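Continuing the sketch after Proposition~\ref{IndivPauliChGS}, eq.~(\ref{PTofa}) yields the spectrum of the partial transposition for one-versus-rest splits directly from the coefficients $\lambda_U$ (our own illustration; \texttt{lam} and \texttt{Gamma} are taken from that sketch):
\begin{verbatim}
import numpy as np

def min_pt_eigenvalue_single_vertex(lam, Gamma, a):
    """Smallest eigenvalue of rho^{T_A} for A = {a}, via eq. (PTofa):
    lambda'_U = (lam_U + lam_{U+N_a} + lam_{U+a} - lam_{U+N_a+a}) / 2."""
    N = Gamma.shape[0]
    Na = Gamma[a] % 2                     # neighbourhood of a (0/1 vector)
    ea = np.eye(N, dtype=int)[a]
    worst = np.inf
    for U in lam:
        U = np.array(U)
        lp = 0.5 * (lam[tuple(U)]
                    + lam[tuple((U + Na) % 2)]
                    + lam[tuple((U + ea) % 2)]
                    - lam[tuple((U + Na + ea) % 2)])
        worst = min(worst, lp)
    return worst

# < 0 here: the one-versus-rest split is still NPT at p = 0.9
print(min_pt_eigenvalue_single_vertex(lam, Gamma, a=0))
\end{verbatim}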
\begin{wrapfigure}[15]{r}{0.5\textwidth}
\vspace{-0.1cm}
\includegraphics[width=0.5\textwidth]{DepolRing.eps}
\caption{\label{fig:DepolRing} \small For particles in rings of size $N\leq 10$, which individually decohere according to the same depolarizing channel with parameter $p$: the critical value $p_{\text{crit}}$ after which the first [last] partition becomes PPT, $\triangle$ [$\Box$], together with the lower and upper bounds according to ref.~\cite{Du04b}. }
\end{wrapfigure}
Consider, for illustration, rings up to size $N=10$ suffering from decoherence due to the depolarizing channel, which are examined with the help of the partial transpose with respect to all possible partitions. Fig.~\ref{fig:DepolRing} depicts the critical value for $p$, after which the state $\rho$ first becomes PPT with respect to some partition, which implies that at this point the state $\rho$ is certainly no longer $N$-party distillable. For fig.~\ref{fig:DepolRing} the critical value $p_{\text{crit}}$ has also been computed, after which the state $\rho$ has become PPT with respect to all partitions, i.e., after which $\rho$ contains at most
bound entanglement with respect to any partition. In contrast to the case of $N$-party GHZ states,
for which the one-versus-rest partition is the first to become PPT, the numerical results for small $N$ indicate that in rings this split seems to be the most
stable against decoherence due to noise described by individual depolarizing channels, and that the smallest eigenvalue of the partial transposition with respect to these one-versus-rest splits $\{a\}$ is given by $\lambda_{N_a+a}$.
If the initial state is a GHZ state, e.g. in the star graph representation with center at qubit $1$, a direct decomposition with respect to the logical basis $(|0\rangle,|1\rangle)$ is sometimes more advantageous for the computation of the partial transposition and its eigenvalues.
In particular, a certain subclass of such GHZ diagonal states\index{GHZ diagonal state}\index{GHZ state}, namely the one where $\lambda_{U}=\lambda_{U+1}$ for all $U\subseteq V$ except $\emptyset$ and $\{1\}$, allows for a very precise determination\footnote{\label{GHZDfootnote}More precisely, starting with a pure star graph state $|G\rangle$ this class of states $\rho$ is given by \begin{equation} \rho \;=\; \lambda_0 |G\rangle\langle G| + \lambda_{1} |1\rangle_G\langle 1| + \sum_{U\subseteq V\setminus 1} \lambda_U \left( |U\rangle_G\langle U| + |U+1\rangle_G\langle U+1| \right) \nonumber \end{equation} with $\Delta=\lambda_0-\lambda_1\geq0$ and thus determined by $2^{N-1}$ parameters. In \cite{Du99} it is shown that (i) any mixed state $\rho'$ can be depolarized to this class of states $\rho$ by means of random local operations (leaving $\lambda_0\equiv \lambda'_\emptyset$, $\lambda_1\equiv \lambda'_{\{1\}}$ and all $\lambda_U=1/2 (\lambda_{U} +\lambda_{U+1})$ invariant), (ii) $\rho$ is {\em PPT with respect to the bi-partite split $(A,B)$}\index{PPT (non-negative partial transpose)} iff $\Delta\leq 2\lambda_{A\setminus 1}$, (iii) $\rho$ is {\em separable with respect to the partition $(A_1,\ldots,A_M)$} iff all bi-partite splits $(A,B)\geq (A_1,\ldots,A_M)$ are PPT, (iv) a maximally entangled state is distillable between the pair $a$ and $b$ iff all bi-partite splits $(A,B)$ such that $a\in A$ and $b\in B$ are NPT\index{NPT (negative partial transpose)} and (v) a maximally entangled $M$-party GHZ state is distillable between the vertices $a_1,\ldots, a_M$ iff all bi-partite splits $(A,B)$, such that neither all $a_1,\ldots, a_M\in A$ nor all $a_1,\ldots, a_M\in B$, are NPT.} of its entanglement properties \cite{Du99}.
This class of GHZ-diagonal states is, for example, naturally obtained if the qubits of the initial GHZ state are affected individually by white noise. In this case, most of the above entanglement properties of the decohered mixed state $\rho(t)$ can be determined analytically \cite{Si02,Du04b,Band04}:
{\proposition[\bf GHZ states in the presence of individual white noise]\hspace{4cm}
Consider a GHZ state that is exposed to decoherence described by individual depolarizing channels $\mathcal{D}_t$ with parameter $p(t)=e^{-\kappa t}$. Then the resulting mixed state $\rho(t)=\mathcal{D}_t^V\,|GHZ\rangle\langle GHZ|$ is $N$-party distillable [separable] iff the partial transposes $\left(\rho(t)\right)^{T_A}$ with respect to all possible partitions $A$ are non-positive [positive]. More precisely,
\begin{itemize}
\item[1.] $\rho(t)$ remains {\em $N$-party distillable entangled} as long as the {\em most fragile} splits,\\ $1$-versus-{\small $(N-1)$} particles, remain NPT.
\item[2.] $\rho(t)$ becomes {\em $N$-party separable} as soon as the {\em most stable} splits,\\ $\lfloor \frac{N}{2} \rfloor$-versus-$\lceil \frac{N}{2} \rceil$ particles, become PPT.
\item[3.] The {\em lifetime of $N$-party distillable entanglement} {\bf decreases} as the number of particles $N \rightarrow \infty$.
\end{itemize}
The {\em $M$-party distillability [separability]} of the resulting mixed state $\rho(t)$ with respect to a fixed partitioning is determined by the subsystem that contains the smallest number of particles, since the corresponding partial transposition is the first one to become positive. In particular:
\begin{itemize}
\item[4.] The maximum number $M$ of subsystems that remain entangled, i.e., the effective size of {\em $M$-party distillable entanglement}, {\bf decreases} in time.
\item[5.] As $N \rightarrow \infty$ any partitioning into groups of size $m$ leads to vanishing [finite] {\em lifetime of the corresponding $M=\frac{N}{m}$-party entanglement}, if the size $m$ of each group is finite [tends to $\infty$].
\end{itemize}
By {\em encoding} the qubits of a GHZ state with some quantum error correcting code and performing error correction at the end of the noise process, the lifetime of [blockwise] entanglement between the {\em logical qubits} can be {\bf increased} effectively. Thus the lifetime of $M$-party entanglement in encoded GHZ states can also remain finite on a macroscopic level, as long as the level of encoding (e.g. the number of concatenations) is sufficiently large.
}
Thus, we find that without error-correction the lifetime of $N$-party entanglement in GHZ-states vanishes on a macroscopic level. This result can also be extended to more general decoherence models \cite{Du04b,Band04}. In the remainder of this subsection we now discuss the lifetime of $N$-party entanglement in graph states and show\footnote{An analysis of long-range entanglement in the thermal state of a three-dimensional cluster state can be found in ref.~\cite{Raussen04}.} that for a significant subclass, such as the cluster states, the lifetime of distillable entanglement is, in contrast to GHZ-states, essentially independent of $N$.
To this aim, we establish a lower bound on the lifetime of graph states by considering an explicit entanglement distillation protocol. The distillability of an $N$--party entangled state can be shown by providing a procedure that allows one to generate maximally entangled pairs shared between any pair of neighboring particles. This is sufficient, as any $N$--party entangled state can then be produced by local operations from these entangled pairs. The distillability of such pairs serves only as a tool to show $N$--party distillability, and no conclusions about the nature of entanglement contained in the state can be drawn. In particular, one should not conclude that the entanglement contained in a cluster state is in some sense only bi-partite.
For decoherence of individual particles described by Pauli channels, one can make use of the following facts \cite{Du04b}: (i) measuring all but two particles $a,b$ in the eigenbasis of $\sigma_z$ results in the creation of another graph state with only a single edge $\{a,b\}$ (see Proposition~\ref{Pauli_Measurement} in sec.~\ref{Pauli measurements}); (ii) the action on a specific graph state of any operator $O$ which is a tensor product of Pauli operators can be equivalently described by an operator $O'$ consisting of only $\sigma_z$ operators acting on the same graph state. This implies that a Pauli--diagonal map ${\cal D}^k$ acting on qubit $k$ of a graph state $|G\rangle$ can be described by a map ${\cal M}$ whose Kraus operators contain only $\sigma_z$ operators acting on qubit $k$ and its neighbors. This follows from $\sigma_x^{a}|U\rangle_G = (-1)^{U_a}\sigma_x^{a} K_a |U\rangle_G$, where $S_x^{a} := \sigma_x^{a} K_a$ is an operator which contains only products of $\sigma_z$ operators at neighboring particles of particle $a$, and similarly for $\sigma_y$.
Hence measurements of $\sigma_z$ on all particles but $a$ and $b$ commute with the action of maps describing the decoherence process. The resulting state $\rho_{ab}$ of particles $a,b$ is only influenced by noise acting on particles $a,b$ and their neighbors $N_a,N_b$. The measurements effectively decouple particles $a,b$ from the remaining particles. Distillability of $\rho_{ab}$ can be determined by employing the partial transposition criterion, where for a two-qubit system negative partial transposition already ensures distillability. As $\rho_{ab}$ is only determined by noise operators acting on particles $a,b$ and their neighbors, this already implies that for all graph states with constant degree (e.g. cluster states), the distillability of $\rho_{ab}$ will {\em not} depend on the size of the system $N$. In fact, one finds a threshold value for parameters describing the decoherence process that only depends on the local degree of the graph. To be precise, the influence of independent neighbors and joint neighbors is slightly different. We remark that in certain cases, e.g. for GHZ states, the local degree itself may depend on $N$. If this is, however, not the case, the lower bound on the lifetime of distillable entanglement obtained in this way is constant and shows no scaling with the size of the system $N$.
\begin{wrapfigure}[16]{r}[0.1\textwidth]{0.5\textwidth}
\vspace{-0.4cm}
\includegraphics[width=0.5\textwidth]{LowerBounds.eps}
\caption{\label{fig:Lower} \small Under individual coupling due to the same depolarizing channel, the lower bounds on $\kappa t$ for the lifetime of distillable $N$-party entanglement for the 1D-, 2D- and 3D-cluster states remain constant for arbitrary system sizes $N$. For the $N$-party GHZ state, the lower bound, as well as the exact value of $\kappa t$ until which the GHZ state remains distillable entangled, strictly decreases and goes to zero as $N\rightarrow \infty$. }
\end{wrapfigure}
For the local depolarizing channel, one finds for instance that $\rho_{ab}$ is distillable entangled if $p^{|N_a|+1} + p^{|N_a+N_b|} + p^{|N_b|+1} > 1$ \cite{Du04b}. Solving this polynomial inequality yields the threshold values $\kappa t = \ln(2)/N$ for GHZ states, in agreement with the analytic results, and $\kappa t = 0.3331, 0.1886, 0.1318$ for 1D, 2D and 3D cluster states, respectively. The results are summarized in fig.~\ref{fig:Lower}.
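For illustration, the cluster-state values can be recovered numerically. The short Python sketch below is our addition; it assumes $|N_a|=|N_b|=d$ with $d$ the lattice degree ($d=2,4,6$ for the 1D, 2D and 3D cluster states) and interprets $|N_a+N_b|$ as the size $2d$ of the joint neighborhood of two adjacent vertices with otherwise disjoint neighborhoods.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def lifetime(d):
    # solve 2 p^(d+1) + p^(2d) = 1 for p and return kappa*t = -ln(p)
    f = lambda p: 2 * p ** (d + 1) + p ** (2 * d) - 1
    return -np.log(brentq(f, 1e-9, 1.0))

for name, d in [("1D", 2), ("2D", 4), ("3D", 6)]:
    print(name, "cluster:", round(lifetime(d), 4))
# expected thresholds: 0.3331 (1D), 0.1886 (2D), 0.1318 (3D)
\end{verbatim}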
The results can be extended to any noise model with some finite range, i.e., all Kraus operators act non--trivially only on a finite, localized number of qubits. Also in this case, for graph states corresponding to graphs with a constant degree there exists a lower bound on the lifetime of distillable entanglement that is finite and constant, independent of the total number of particles $N$.
Several {\em upper bounds} on the distillable entanglement in general graph states can also be provided \cite{Du04b}, which again do not depend on the size of the system as long as the maximal degree remains constant. These bounds are obtained by employing the partial transposition criterion, or by considering the noisy interactions resulting from interactions between adjacent particles in the graph and phase noise acting on them. In the second case, an upper bound on the lifetime is obtained by ensuring that the resulting CPMs are not capable of generating entanglement. Concerning blockwise entanglement, a detailed analysis for general graph states has not been accomplished yet. Nevertheless, the scaling behavior of $M$-party entanglement is restricted to a range between the upper and lower bounds, which in the case of cluster and similar graph states are independent of the number $N$ of particles. In this sense, also the scaling behavior of blockwise entanglement in these states must be essentially independent of the size of the system.
Finally, most of the above results can also be extended to the class of weighted graph states (see sec.~\ref{WeightedGS}).
To summarize, we found that under quite general assumptions about the underlying noise model the lifetime of true $N$-party entanglement in GHZ states decreases with the size of the system, whereas for cluster and similar graph states the lifetime of $N$-party entanglement is essentially independent of the size of the system. These results suggest {\em a remarkable robustness of certain classes of macroscopic entangled states --namely all graph states with constant degree-- under various decoherence processes.}
\subsection{Entanglement purification}\label{EPP}\index{entanglement purification}
Although the results of the previous section suggest a robust scaling behavior of the entanglement in graph states with respect to system size, we nevertheless need methods to protect these states from noise. Many applications make use of pure graph states, and hence a method to generate high--fidelity approximations to such pure graph states is of significant importance. The reasons that in a realistic situation one ends up with mixed states rather than pure states are manifold. For instance, the qubits constituting the graph state may interact with uncontrollable degrees of freedom of the environment, leading to decoherence; in distributed scenarios which have to be considered in the context of certain multi-party communication settings, the multi-particle entangled states are distributed through noisy quantum channels. In this section, we will discuss ways to maintain or recover the entanglement of such (noisy) graph states. In particular, we will describe {\em multi-party entanglement purification protocols} that allow one to create, from many identical copies of noisy entangled graph states, few copies with increased fidelity. The fidelity can, under the idealized assumption of perfect local control operations, be made arbitrarily close to one. Even in the presence of noisy local control operations, a significant enhancement of the fidelity is possible. The entanglement purification protocols show in fact a remarkable robustness against noise, where channel errors of the order of several tens of percent are correctable, and errors in local control operations of the order of percent are tolerable.
In ref.~\cite{Du03a} multi-particle entanglement purification
protocols for all two--colorable graph states have been developed.
These protocols are generalizations of a protocol for GHZ--state
purification introduced in ref.~\cite{Mu98} and further developed in
ref.~\cite{Ma01}. In the following we will briefly discuss the recurrence protocols
\index{recurrence protocol} for the purification of general
two--colorable graph states \index{two colorable graph states}. We remark that hashing and breeding protocols, which operate jointly on many copies and allow for purification with a finite yield, were developed in refs.~\cite{Du03a, Lo04}, but are not discussed here. Given a noisy graph state $\rho'$
corresponding to a two colorable graph $G$, one can always transform
$\rho'$ to a standard form $\rho$ diagonal in the graph states basis
corresponding to $|G\rangle$ without changing the diagonal elements,
\begin{equation}
\rho= \sum_W \lambda_W |W\rangle\langle W|,
\end{equation}
where $|W\rangle =\sigma_z^{W}|G\rangle$ and $\lambda_W ={\rm tr} (|W\rangle\langle W| \rho')= {\rm tr} (|W\rangle\langle W| \rho)$ (recall that $\{|W\rangle\}$ form a basis). The transformation to this standard form is achieved by applying randomly the local operations corresponding to the correlation operators $K_a$ of $|G\rangle$ \cite{Du03a}. For notational convenience, we distinguish the two sets $A$ and $B$ corresponding to the two colors in the two--coloring of the graph, with indices $W_A$ and $W_B$, and identify $|W_A,W_B\rangle \equiv |W\rangle$.
We consider a recurrence purification protocol that consists of two sub--protocols, $P1$ and $P2$, each of which acts on two identical copies $\rho_1=\rho_2=\rho$, $\rho_{12}:=\rho_1\otimes \rho_2$. The basic idea consists in transferring (non--local) information about the first pair to the second, and reveal this information by measurements. In sub--protocol $P1$, all parties who belong to the set $A$ apply local CNOT operations to their particles, with the particle belonging to $\rho_2$ as source, and $\rho_1$ as target. Similarly, all parties belonging to set $B$ apply local CNOT operations to their particles, but with the particle belonging to $\rho_1$ as source, and $\rho_2$ as target. The action of such a multilateral CNOT operation on two graph states can be determined within the stabilizer formalism and is given by \cite{Du03a}
\begin{eqnarray}
|W_A,W_B\rangle|V_A,V_B\rangle \longmapsto |W_A,W_B + V_B\rangle |V_A + W_A,V_B\rangle,
\end{eqnarray}
where $W_B + V_B$ again denotes bitwise addition modulo 2. A measurement of all particles of $\rho_2$ follows, where the particles belonging to set $A$ [$B$] are measured in the eigenbasis $\{|0\rangle_x,|1\rangle_x\}$ of $\sigma_x$ [$\{|0\rangle_z,|1\rangle_z\}$ of $\sigma_z$] respectively. The measurements in sets $A$ [$B$] yield results $(-1)^{\xi_a}$ [$(-1)^{\zeta_b}$], with $\xi_a,\zeta_b \in\{0,1\}$. Only if the measurement outcomes fulfill $(\xi_a+\sum_{b\in N_a}\zeta_b){\rm mod}2=0 ~\forall a$ \,--which implies $W_A + V_A={\bf 0}$--\, the first state is kept. In this case, one finds that the remaining state is again diagonal in the graph--state basis, with new coefficients
\begin{eqnarray}
\tilde\lambda_{U_A,U_B}=\sum_{\{(W_B,V_B)|W_B+ V_B = U_B\}}\frac{1}{2K} \lambda_{U_A,W_B}\lambda_{U_A,V_B}.
\end{eqnarray}
Here $K$ is a normalization constant such that ${\rm tr}(\tilde \rho)=1$, indicating the probability of success of the protocol.
In sub-protocol $P2$ the roles of sets $A$ and $B$ are exchanged, and a similar expression for the action of the protocol on initial graph diagonal states can be derived \cite{Du03a}. The total purification protocol consists in a sequential application of sub--protocols $P1$ and $P2$. While sub--protocol $P1$ serves to gain information about $W_A$, sub--protocol $P2$ reveals information about $W_B$. Typically, sub--protocol $P1$ increases the weight of all coefficients $\lambda_{{\bf 0},W_B}$, while $P2$ amplifies coefficients $\lambda_{W_A,{\bf 0}}$. In total, this leads to the desired amplification of $\lambda_{{\bf 0},{\bf 0}}$, where the fixed point of the protocol is $\lambda_{{\bf 0},{\bf 0}}=1$, i.e., iterative applications lead to the distillation of states with fidelity arbitrarily close to unity.
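For concreteness, the recurrence can be iterated directly on the vector of graph-diagonal coefficients $\lambda_{W_A,W_B}$. The Python sketch below is our illustration; it assumes a two-colorable graph with $|A|=1$ and $|B|=2$ qubits (e.g., a three-qubit GHZ graph with the center in $A$) and takes for $P2$ the analogous expression with the roles of $A$ and $B$ exchanged.
\begin{verbatim}
import numpy as np
from itertools import product

nA, nB = 1, 2  # qubits in the two color classes A and B

def P1(lam):
    # sub-protocol P1: XOR convolution over the B indices at fixed U_A
    new = np.zeros_like(lam)
    for UA, WB, VB in product(range(2**nA), range(2**nB), range(2**nB)):
        new[UA, WB ^ VB] += lam[UA, WB] * lam[UA, VB]
    return new / new.sum()  # normalization plays the role of 1/(2K)

def P2(lam):
    # sub-protocol P2: roles of A and B exchanged (assumed analogous form)
    new = np.zeros_like(lam)
    for WA, VA, UB in product(range(2**nA), range(2**nA), range(2**nB)):
        new[WA ^ VA, UB] += lam[WA, UB] * lam[VA, UB]
    return new / new.sum()

lam = np.full((2**nA, 2**nB), 0.3 / 7)  # graph-diagonal noisy state with
lam[0, 0] = 0.7                         # initial fidelity lambda_{0,0} = 0.7
for _ in range(10):
    lam = P2(P1(lam))
print(lam[0, 0])  # approaches 1 inside the purification regime
\end{verbatim}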
A numerical analysis of the purification regime of these recurrence protocols was performed in ref.~\cite{Du03a}. Results analogous to the robustness of entanglement under decoherence are found for the purification regime of the protocols. That is, for GHZ states the acceptable amount of channel noise (per particle), such that purification is still possible, decreases with increasing particle numbers. For cluster states, or more generally all two colorable graph states with constant degree, this threshold value turns out to be independent of the particle number $N$. Channel errors of the order of several tens of percent are correctable.
The applicability and performance of the protocol was also analyzed when taking noisy local control operations into account. In this case, no maximally entangled states can be produced, but still the fidelity of the states can be increased provided the local operations are not too noisy. The reachable fidelity and fixed point is determined by the noise in local control operations. Again, for GHZ states the threshold value for local control operations such that purification is possible is more stringent for higher particle numbers, while errors of the order of several percent are tolerable in the case of cluster--state purification of arbitrary size \cite{Du03a}. This remarkable robustness of entanglement purification schemes opens the way for applications based on high fidelity graph states in real world scenarios.
\subsection{Multipartite secure state distribution}\index{secure state distribution}
Based on the multi-party entanglement purification protocol described in the previous section, a modification of the scheme was found that allows one to purify two--colorable graph states in such a way that all (but one) of the involved parties do not know ---and have no means to learn--- which state they are trying to purify. Such a process is the basic tool to achieve secure state distribution (see \cite{Du05c}) and constitutes a novel quantum primitive with possible applications in distributed secure applications. The goal is to generate a high fidelity approximation to a secretly chosen graph state among spatially separated agents. This is done by distributing multi-party entangled states from a central station via noisy quantum channels to local agents, and increasing the fidelity of the states via entanglement purification. It is important that at no stage of the protocol, the involved local agents should be able to learn about the identity of the state they are processing, and at the end they should be able to hand over a high fidelity approximation of this state to end--users (who have placed the order for this state). The secrecy in this context is of particular importance, as different graph states represent different resources for security (and other) applications, and the end--user may want to keep the resources unknown to the other parties or to potential eavesdroppers.
The basic idea of the modification is to keep the identity of the state at any stage of the protocol secret, and generate effectively states that are --from the point of view of the local agents-- described by completely mixed states. To this aim, random local basis changes using Pauli operations are independently applied to the individual copies before distributing the states and hence before purification starts. This ensures not only that initial states are randomized (recall that already application of $\sigma_z$ operations at different locations produces all possible basis states), but also that measurement outcomes and measurement statistics are randomized. The purification protocol can be adapted to account for these additional basis changes. There are a number of technical details regarding the exact protocol to ensure unconditional security, and we refer the interested reader to ref.~\cite{Du05c} for details. We remark that two alternative ways to achieve the secure distribution of two--colorable graph states were also proposed in ref.~\cite{Du05c}, one based on the purification of Bell pairs with subsequent teleportation, the other based on the purification of enlarged graph states where randomization (and secrecy) is achieved by additional entanglement with some central station. The three different protocols are applicable in different regimes, where generally direct multi-particle entanglement purification turns out to allow for higher target fidelities than schemes based on bi-partite purification.
\section{Summary}
Graph states form a rich class of entangled states that exhibit
key aspects of multi-partite entanglement. At the same time, they
can be described by a number of parameters that grows only
moderately with the system size. They have a variety of
applications in quantum information theory, most prominently as
algorithmic resources in the context of the one-way quantum
computer, but also in other fields such as quantum error
correction and multi-partite quantum communication, as well as in
the study of foundational issues such as non-locality and
decoherence.
In this review, we have given a tutorial introduction into the
theory of graph states. We have introduced various equivalent ways
to define graph states, and discussed the basic notions and
properties of these states. The focus of this review has been on
their entanglement properties. These include aspects of
non-locality, bi-partite and multi-partite entanglement and its
classification in terms of the Schmidt measure, the distillability
properties of mixed entangled states close to a pure graph state,
as well as the robustness of their entanglement under decoherence.
We have also reviewed some of the known applications of graph
states, as well as proposals for their experimental
implementation. Some of the latter material, specifically that
concerning implementations, should be taken as preliminary,
reflecting only the current state of research.
\section*{Acknowledgements}
We would like to thank a number of colleagues for fruitful
discussions in the context of this work during the past
few years. The (most likely incomplete) list of people includes
S. Anders, H. Aschauer, S. Benjamin, J. Calsamiglia, D. Browne, J.~I. Cirac,
J. Dehaene, D. Gross, O. G\"uhne, E. Hostens, L. Hartmann,
A. Miyake, M.~B. Plenio, E. Rains, D. Schlingemann, S. Perdrix,
T. Stace, G. T\'oth, F. Verstraete, and P. Zoller,
as well as other people with whom we had the pleasure
to exchange our ideas on conferences or meetings.
This work was supported by the Austrian Science Foundation (FWF); by the
European Union through projects QUPRODIS, PROSECCO, OLAQUI, SCALA and QAP;
by the \"Osterreichische Akademie der Wissenschaften through project APART;
by the Deutsche Forschungsgemeinschaft (DFG); by EPSRC; by the Microsoft
Research Foundation; by the European Research Councils (EURYI Scheme); by
MURI under grant No.~DE-FG03-92-ER40701; by the NSF under contract number
PHY-0456720; and by the Flemish fund for scientific research
(FWO) through projects G.0120.03 and G.0452.04.
\addcontentsline{toc}{section}{\protect\numberline{} References}
\section{Introduction}
Researchers are looking for collaborations more than ever~\citep{beaver2001reflections, freeman20151,lariviere2015team,viana2013time}. Over the past few decades, team sizes have been increasing across many research fields in science, with scholars seeking collaborations to increase productivity, enrich knowledge by sharing experiences with colleagues, access new resources, and attain higher impact, among many other reasons~\citep{bukvova2010studying, abramo2009research}. Projects bridging two or more fields are usually undertaken by larger teams with researchers from different disciplines. However, this is a relatively recent notion in science and may not have spread uniformly across different disciplines, as some areas incorporate collaborations in the research process more naturally than others~\citep{qin1997types}.
Collaborations can emerge in different contexts. Sometimes, they happen between graduate students, researchers affiliated to the same institutions, supervisors and supervisees, etc. In general, it is expected that authors have few collaborators at the beginning of their academic life, working predominantly with their supervisors. At the same time, different research fields may have different forms of measuring success (e.g., through productivity, citations, etc.)~\citep{abramo2013individual,abramo2014you}. In addition, even when similar metrics are used, such comparisons can be challenging, since different disciplines also present different dynamics in how authors collaborate and in how academic careers begin.
In the literature, we can find various approaches aiming to quantify scholars' academic productivity, contribution and impact~\citep{garcia2012extension, abramo2014you, fenner2014altmetrics,correa2017patterns,brito2021associations}. However, these characterizations rarely take into account the nuances of authors' collaborations, for example, how citations are distributed across different team sizes. It is not trivial to quantify the individual impact on authors' success given such a variety of patterns and scales of collaborations. In this work, we aim to characterize authors' productivity taking into account their collaborations, and to compare these patterns across different disciplines.
We focus our analyses on understanding the effect of collaboration ties in different fields of study for established authors, in particular researchers who have already published at least a minimum number of papers. We consider the co-authorship of papers as a proxy for collaboration ties between authors. The analysis is then performed by means of author-level success metrics grouped by research field of study. We use metrics such as the h-index, the number of citations and the number of publications, and verify how they vary with the presence or absence of the authors' most prolific collaborator (or top collaborator). In this context, the top collaborator of an author is the collaborator whose coauthored papers received the most citations. Our analysis is guided by the following research questions:
\begin{enumerate}
\item How does collaboration strength change in different fields of study?
\item What is the impact of the top collaborator on authors' productivity?
\end{enumerate}
We found that the influence of the top collaborator on author metrics follows different patterns depending on the field of study. In particular, three main patterns were observed across disciplines. First, humanities disciplines were found to have authors predominantly publishing single-authored papers or with a few collaborators. In this case, the top collaborator does not seem to have a strong influence on individual metrics. The second type of pattern was observed mostly for natural and applied disciplines, with authors having many collaborators but with the top collaborator still not strongly influencing their success metrics. The third group of patterns comprises formal and applied sciences. These disciplines display author metrics highly influenced by their top collaborators. Interestingly, these tendencies are different for the specific case of highly cited authors, since their success metrics are not highly influenced by their top collaborators. This pattern was evident across all studied fields of study.
Our results may shed light on the understanding of the importance of collaborations to researchers' visibility.
\section{Related works}
\label{sec:related}
\subsection{Comparing productivity in different disciplines}
In order to measure individual contributions, \cite{batista2006possible} normalized the traditional h-index by the average number of authors co-authoring papers in the h-set. The study compared Brazilian researchers in four disciplines, namely Physics, Chemistry, Biology and Mathematics.
Lists comprising the top 10 Brazilian authors were generated according to the traditional and the proposed normalized h-index. The overlap between these two lists was found to be field dependent. The highest overlap occurred in Mathematics (90\%), which indicates that the normalization barely impacts the rank. Conversely, Physics presented the lowest overlap value, 10\%. The results suggest that the consideration of the number of collaborators can highly impact the ranking of authors. \cite{batista2006possible} advocates, however, that the proposed normalization allows comparing h-indexes across fields.
\cite{ajiferuke2010citer} and \cite{ajiferuke2010comparison} proposed the use of \emph{citers} instead of citations to quantify academic influence. The idea is to count the number of individuals influenced by the author's work.
In addition to using traditional citation metrics such as citation counts, they also proposed the ch-index. This measurement is analogous to the traditional h-index, but instead of counting citations, the authors count the number of \emph{citers} of publications. Therefore, the ch-index equals $x$ if $x$ is the highest integer such that the author has $x$ publications that are each cited by at least $x$ different \emph{citers}.
Significant correlations were found between the ch-index and traditional citations metrics. However, the ch-index seems to play a complementary and relevant role to investigate \emph{citers} and citations variations in different fields or to compare individuals with lower citation counts.
\cite{ajiferuke2010comparison} extended the study of \cite{ajiferuke2010citer} in order to compare \emph{citer} patterns across disciplines. In addition to the already mentioned metrics, they introduced the \emph{reciter} rate, which is defined as the number of unique authors citing a paper in a recurrent way. \cite{ajiferuke2010comparison} analyzed 90 highly cited authors from three different disciplines: Social Sciences, Mathematical/Engineering Sciences, and Biological/Medical Sciences. In general, \emph{citer} and citation measurements were found to be correlated. However, some different patterns were found in different disciplines.
In Social Sciences, the number of authors per paper and reciter rates are lower than in the other fields. The authors conclude that \emph{citer}-based metrics seem to be useful when the number of collaborations in the field is low. Conversely, in Mathematical, Engineering, Biological and Medical Sciences, \emph{citer} and \emph{citation} metrics may present distinct values, even though they are correlated.
Visibility analysis based on \emph{citer/reciter} counts also makes it possible to identify whether a large number of citations corresponds to a large number of authors being influenced.
\cite{ioannidis2016multiple} proposed a composition of indexes to quantify authors' visibility. Their analysis considers a large list of citation metrics: number of citations, number of citations to single- or first-authored papers, citations to single-, first- or last-authored papers, h-index and Schreiber's co-authorship adjusted hm-index. The combination of these indexes, referred to as the Composite Score, was used to identify the top $1000$ researchers in the considered dataset.
The authors found that a significant number of notorious authors appear in their list while being absent from the list of top cited researchers. \cite{ioannidis2020updated} provided an updated version of the list of most influential researchers according to the Composite Score.
\subsection{Collaborations and productivity}
\cite{abramo2009research} advocates that the relationship between collaboration and productivity is not trivial. The authors studied the Italian academic system across eight disciplines. To measure productivity they used the total number of papers. The relevance (referred to as quality index) of the research conducted by a group of researchers was computed by considering not only the number of papers published, but also the journals' impact factor. The study found that the way collaborations influence productivity depends on the research discipline. A strong correlation was observed for industrial and information engineering. In both the mathematics and computer science fields, a correlation between collaboration intensity and the average quality of the publications was found. For most of the disciplines investigated, the average quality index turned out to be positively affected by inter-university collaborations. This work also found that interdisciplinary fields usually display a more collaborative behavior.
\cite{lariviere2015team} investigated papers published in two broad fields of study: natural/medical sciences and social sciences/humanities. They studied collaboration at the author level. Inter-institutional and country collaborations were also investigated. The results showed that all three types of collaborations are increasing in both fields. Single authored papers were found to be less usual, mainly in the natural and medical sciences. Most of the papers were found to be result of inter-institutional collaborations.
Most importantly, \cite{lariviere2015team} showed that scientific impact seems to be influenced by the number of authors of the papers. They found no correlation between the diversity of the authors' countries and impact.
\cite{abramo2015relationship} conducted a similar investigation for the Italian case, and most of their findings confirmed the results obtained by \cite{lariviere2015team}. The main conclusions pointed to the positive correlation between the number of authors of the papers and the number of citations received. Surprisingly, the correlation was even stronger for Social Sciences, Art, and Humanities.
\section{Methodology}
\label{sec:meth}
\subsection{Dataset and fields of study}
The Microsoft Academic Graph (MAG) is a large dataset containing scientific publications, citations, publications metadata such as authors and their institutions, journals, conferences, and fields of study~\citep{wang2020microsoft}. In this paper, we use papers published between 1950 and 2020. Because we are mostly interested in analyzing the collaborative patterns of mid-career and senior researchers, we considered only authors with at least $10$ papers and 200 citations in the dataset. We also disregarded papers co-authored by more than 10 authors, since the actual collaboration between authors in these cases is not trivial to measure. This same procedure has been applied in similar studies on collaboration networks~\citep{li2019reciprocity}.
Figure \ref{fig:num-authors} of the Supplementary Information (SI) shows the distribution of the number of authors per publication. After this filtering step, the resulting numbers of authors and papers were 2,630,275 and 243,030,343, respectively.
Because we are also interested in collaboration patterns arising from diverse fields, one must first classify authors in fields of study. Here we employ the MAG fields of study, which are defined for each paper. Fields and subfields are organized in a hierarchy represented by a directed acyclic graph. However, because some subfields can be too specific, we did not use them directly. For example, the topic \emph{Katz centrality} is a child of \emph{Network science}, which in turn is a child of \emph{Complex Networks}. Instead, we map papers to fields at a hierarchy level corresponding to the major knowledge fields, such as Mathematics, Computer Science and Physics. This is accomplished by first mapping all the subfields to \emph{top fields} by walking upwards (towards the root) across the hierarchy until we reach a field of study at the desired level (major areas of knowledge). For instance, papers associated to the \emph{Katz centrality} field of study are mapped both to \emph{Mathematics} and \emph{Computer Science}. To account for multiple fields of study associated to a paper (and subfields associated to multiple top fields), we also employ a weight $w_P(f_{top})$ indicating the relevance of a top field of study $f_{top}$ to a paper $P$, which is computed as:
\begin{equation}
w_P(f_{top}) = \frac{1}{|F_P|} \sum_{f\,\in\,F_P} \frac{|\textrm{Parents}(f) \cap \{f_{top}\}|}{|\textrm{Parents}(f)|},
\end{equation}
where $F_P$ is the set of subfields of paper $P$ and $\textrm{Parents}(f)$ is the set of top fields obtained after mapping a subfield $f$ upwards across the hierarchy.
The next step is assigning fields to authors based on their published papers. Here we considered each paper contribution as having the same relevance to define the author's field (and respective weight). Thus the weight $w_A'(f_{top})$ associating an author $A$ to a top field $f_{top}$ can be given by:
\begin{equation}
w_A'(f_{top}) = \sum_{p\,\in\,P_A} w_p(f_{top}),
\end{equation}
where $P_A$ is the list of papers published by author $A$.
Finally, the field of study of an author $A$ is the field of study with the highest weight:
\begin{equation}
F(A) = \argmax_{f_{top}}~w_A'(f_{top}).
\end{equation}
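A compact sketch of this mapping is given below (our illustration; the toy hierarchy and all names are hypothetical, and \texttt{PARENTS} stands in for the child-to-parents relation of the MAG hierarchy).
\begin{verbatim}
from collections import defaultdict

# toy hierarchy: subfield -> direct parents; top fields have no entry
PARENTS = {"katz centrality": ["network science"],
           "network science": ["mathematics", "computer science"]}

def top_fields(f):
    # walk upwards through the DAG until top-level fields are reached
    if f not in PARENTS:
        return {f}
    tops = set()
    for parent in PARENTS[f]:
        tops |= top_fields(parent)
    return tops

def paper_weights(F_P):
    # w_P(f_top) for a paper with subfield set F_P
    w = defaultdict(float)
    for f in F_P:
        tops = top_fields(f)
        for t in tops:
            w[t] += 1 / (len(F_P) * len(tops))
    return w

def author_field(papers):
    # field of study with the highest accumulated weight w'_A(f_top)
    w = defaultdict(float)
    for F_P in papers:
        for t, wt in paper_weights(F_P).items():
            w[t] += wt
    return max(w, key=w.get)

print(author_field([["katz centrality"], ["network science"]]))
\end{verbatim}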
The obtained distribution of fields of study at the author level is shown in Figure \ref{fig:authorsdist}. As expected, field sizes are diverse. While Medicine and Biology together comprise almost half of all authors, Environmental Sciences encompasses less than $0.1\%$ of the universe of considered researchers.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\textwidth]{authors_distribution_2020.pdf}
\caption{Number of authors per field of study considering papers published between 1950 and 2020. We also show after each bar the corresponding fraction of authors in the respective field.
}
\label{fig:authorsdist}
\end{figure}
\subsection{Network of influences}
We propose a directed and weighted network to quantify the importance of collaborations. All nodes and edges are established as in a traditional collaboration network~\citep{amancio2015topological,amancio2012use,pelacho2021analysis,viana2013time}. Two authors are linked if they co-authored at least one paper. We now aim at quantifying, for an author $A$, how important collaborator $B$ is. Here we \emph{illustrate} the importance of collaborations in terms of the citations accrued by scientific works produced in the collaboration. As we shall show, any relevance index can be used to define this weight.
If $c_{AB}$ is the number of citations accrued by papers co-authored by $A$ and $B$, and $c_A$ is the number of citations received by $A$, then the weight $w_{AB}$ is computed as
\begin{equation} \label{eq:besos}
w_{AB} = \frac{c_{AB}}{c_A}.
\end{equation}
Note that $w_{AB}$ can be interpreted as the importance of the collaboration between $A$ and $B$ for researcher $A$, and that $w_{AB}$ may not be equal to $w_{BA}$. For this reason, we obtain a directed network of influence. In our analysis, we assign the weight $w_{AB}$ to the edge with $A$ and $B$ as source and target nodes, respectively. Thus,
out-going edge weights are normalized, i.e.
$\sum_X w_{AX} = 1,$
where $X$ is co-author of $A$. The \emph{top-collaborator} of $A$, $T(A)$, is defined as:
\begin{equation} \label{eq:topa}
T(A) = \argmax_X w_{AX}.
\end{equation}
Figure \ref{fig:toynet} illustrates an example of the network of influence with edge weights defined according to equation \ref{eq:besos}. $A$ and $B$ accrued $c_A = 200$ and $c_B = 300$ citations, respectively. Papers co-authored by $A$ and $B$ received $c_{AB} = 50$ citations. Taking $A$ as reference, the importance of $B$ is $w_{AB} = 0.25$, i.e. $B$ contributes to $25\%$ of all citations received by $A$. Note that, in this example, $A$ contributes to only 17\% of the citations received by $B$, i.e. $w_{BA} = 0.17$. $B$ benefits more from collaborations with $D$, since 50\% of the citations to $B$ come from papers co-authored with $D$, i.e. $w_{BD} = 0.50$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{toynet.pdf}
\caption{A toy example of a network of influences. Authors $A$ and $B$ received $c_A = 200$ and $c_B = 300$ citations. Considering only the set of papers co-authored by $A$ and $B$, they received $c_{AB} = 50$ citations. For this reason, the importance of $B$ for author $A$ is $w_{AB} = 50/200 = 0.25$. Analogously, the importance of $A$ for $B$ is $w_{BA} = 50/300 = 0.17$. }
\label{fig:toynet}
\end{figure}
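The quantities in the toy example translate directly into code. The Python sketch below is our illustration; the pairwise counts $c_{AB}$ and the totals $c_A$ are assumed to be given (the values for $C$ and $D$ are hypothetical).
\begin{verbatim}
c_total = {"A": 200, "B": 300, "C": 250, "D": 300}           # c_A (toy values)
c_pair = {("A", "B"): 50, ("B", "C"): 100, ("B", "D"): 150}  # c_AB (toy values)

def weight(a, b):
    # w_ab = c_ab / c_a: importance of collaborator b for author a
    c_ab = c_pair.get((a, b), c_pair.get((b, a), 0))
    return c_ab / c_total[a]

def top_collaborator(a, coauthors):
    # T(a) = argmax_X w_aX over the co-authors X of a
    return max(coauthors, key=lambda b: weight(a, b))

print(weight("A", "B"), weight("B", "A"))      # 0.25 and about 0.17
print(top_collaborator("B", ["A", "C", "D"]))  # D, since w_BD = 150/300 = 0.5
\end{verbatim}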
In addition to the number of citations, we also computed the weight in equation \ref{eq:besos} in terms of the productivity (i.e. number of published papers). Both productivity and visibility were also considered by using the h-index. Because we are interested in ranking authors, the traditional h-index is not enough because many authors may share the same h-index value. For this reason,
we used an extended version of the h-index that considers the distribution of citations in the $h$-set~\citep{garcia2012extension}. The extended version of the h-index ($h^{(E)}$) is computed as a sequence of $h$'s, i.e. $h^{(E)} = \{h_1, h_2,h_3\ldots\}$. To compute $h^{(E)}$, papers are sorted in decreasing order of citations, as shown in Figure \ref{fig:extendedhindex}. $h_1$ corresponds to the traditional h-index. $h_2$ denotes the $h$-index computed in the $h_1$ set, but considering that every paper in the $h_1$ set has $h_1$ less citations than it actually has. This process is then recursively applied to compute $h_2, h_3\ldots$. A graphical illustration of this recursive process is shown in Figure \ref{fig:extendedhindex}.
\begin{figure}[h]
\centering
\includegraphics[width=0.55\textwidth]{hindex.pdf}
\caption{Extended h-index example. The traditional h-index ($h_1$) is computed by using the whole set of papers (green square). $h_1$ is the side length of the largest square fitting within the distribution. The square has its lower-left vertex located at the position $(0,0)$.
A recursive idea is used to compute $h_2$ and $h_3$. To compute $h_2$, we detect the side length of the largest square fitting within the distribution (see red square). In this case, the considered square has its lower-left vertex located at the position $(0,h_1)$. The same procedure is applied to compute $h_3$, but now with the lower-left vertex located at the position $(0, h_1+h_2)$. }
\label{fig:extendedhindex}
\end{figure}
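The recursive construction translates into a short routine; the sketch below is our illustration and returns the sequence $(h_1, h_2, h_3, \ldots)$ up to a chosen depth.
\begin{verbatim}
def extended_h(citations, depth=3):
    # extended h-index (h1, h2, ...): stack squares on the citation profile
    cits = sorted(citations, reverse=True)
    hs, offset = [], 0
    for _ in range(depth):
        h = 0
        for i, c in enumerate(cits):
            if c - offset >= i + 1:  # paper i still supports the next square
                h = i + 1
        if h == 0:
            break
        hs.append(h)
        cits = cits[:h]   # restrict to the current h-set, where each paper
        offset += h       # now counts h fewer citations
    return hs

print(extended_h([10, 8, 5, 4, 3]))  # [4, 2, 2]
\end{verbatim}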
\section{Results} \label{sec:results}
\subsection{Top-collaborator influence across fields}
Here we aim at understanding the importance of top collaborators to authors' productivity and visibility. Because some patterns may arise due to the different characteristics of the disciplines, we first analyze the main features of our dataset to better understand whether the patterns we find can be trivially explained by simple field characteristics. For this reason, we analyze the distribution of the number of citations, the number of papers and the authors' birth years in each field.
Figure \ref{fig:r1} shows some basic statistics extracted from our database. The shape of author citation distribution is similar across fields (Figure \ref{fig:r1}(a)), with differences in the number of citations received by different areas.
Typically, authors from Biology and Medicine are the ones receiving the most citations, while other areas are much less cited (see e.g. History, Art and Geography). The number of citations received seems to be related to productivity, as shown in Figure \ref{fig:r1}(b). As expected, different areas have different productivity and visibility patterns.
Figure \ref{fig:r1}(c) shows the cumulative distribution of authors' birth years. The birth year here is defined as the year of the author's first publication. We show the median value as a dashed horizontal line. Once again the differences across fields are evident, even though the shapes are similar. Apart from History, Art, Philosophy and Population, all other fields have authors with median birth year between 1990 and 2000. Surprisingly, a difference of almost 30 years is observed when comparing the median birth years of Chemistry and History authors.
We investigated the influence of the top-collaborator (as defined in equation \ref{eq:topa}) in different subfields. We analyzed the weight associated with the top-collaborator of A (i.e. $\max_X w_{AX}$).
The median values of the distribution of $\max_X w_{AX}$, when weights are defined in terms of publication citation counts, are shown in Table \ref{tab:my_label} of the SI.
In Figure \ref{fig:sub4}(a), we show the cumulative distribution of authors for $\max_X w_{AX}$, where weights are defined in terms of authors' productivity. Equivalently, the figure illustrates the cumulative distribution of the fraction of papers published with the top collaborator. Interestingly, there are significant differences across fields. The median top collaborator influence (denoted by the dashed horizontal line) ranges from 7\% to roughly 50\%. History, Art, Philosophy and Sociology are the areas displaying the lowest influence of top collaborators on publication counts. Conversely, Material Sciences, Chemistry, Biology and Medicine are the areas with a high influence of top collaborators. For instance, for half of all Chemistry authors, the top collaborator accounts for roughly 50\% of publication counts.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{papers_cits_dist.pdf}
\caption{Basic author statistics. Each curve represents authors in a specific field. Gray dashed horizontal lines indicate the median. \textbf{(a)} Distribution of the number of papers. \textbf{(b)} Distribution of citation counts. \textbf{(c)} Cumulative distribution of authors' birth years.
}
\label{fig:r1}
\end{figure}
Figure \ref{fig:sub4}(b) shows the cumulative density of top-collaborators influence when considering citation counts. The shapes are very similar to the ones found for publication counts. However, when considering citations, the influence of top-collaborators seems to be stronger. The median influence can increase by a margin of $20\%$ in some disciplines. In Material Sciences, the median influence based on citation reaches $66\%$.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{top_colab_perc.pdf}
\caption{Cumulative distributions of top-collaborator influence in different fields of study. The grey line indicates the median. The top-collaborator influence is measured in terms of (a) the number of papers; and (b) citation counts. }
\label{fig:sub4}
\end{figure}
We note that there is no trivial correlation between the features of areas (as shown in Figure \ref{fig:r1}) and influence indexes found in Figure \ref{fig:sub4}. While Biology and Medicine are the disciplines with the largest number of papers and citations, the top-collaborator seems to be more influential in both Chemistry and Material Sciences. Similarly, Art is the area with the lowest number of papers and citations, however, the lowest influence of top-collaborators is observed in History.
We could also observe that authors' birth year cannot fully explain top-collaborator influence. While Computer Science seems to be one of the youngest fields, the median dependence is much higher in fields such as Biology and Chemistry. These results suggest that the influence of top-collaborators might be related to additional factors other than productivity/citation behavior or the authors' age distribution.
An alternative way to investigate the influence of top-collaborators concerns the analysis of how top-collaborators can affect authors' visibility metrics. In this context, we analyzed the variation of authors' visibility indexes when the contribution of the top-collaborator is removed. In this analysis, we should expect that, if a single top collaborator accounts for most of the success of an author, disregarding the contribution (i.e., co-authored papers) of that collaborator affects the visibility of the author by a large margin. Conversely, if collaborations are diverse in quantity and quality, one should expect that disregarding top-collaborator contributions would cause only a minor decrease in the considered visibility indexes.
In Figure \ref{fig:cits} we analyze the robustness of author metrics when the contributions of top-collaborators are disregarded. More specifically, we analyze how \emph{citation counts} are affected when papers with the top-collaborators are disregarded from the analysis. A similar study analyzing the effect on \emph{publication counts} is available in Figure \ref{fig:papers} of the SI.
For each subpanel, the x-axis corresponds to the original author citation counts, and the y-axis denotes the number of citations accrued by authors in papers \emph{not co-authored with their top-collaborators}. While there are noteworthy differences across fields, it is possible to identify that, for authors with low citations, removing the top-collaborator can strongly affect citation performance. This is not a surprising result, since this might be the case of young researchers who mostly collaborated with their supervisors. However, this effect might not be restricted to younger researchers, since only a small percentage of authors have birth year after 2010 (as shown in Figure \ref{fig:r1}(c)). In fact, a correlation analysis between authors' age and top-collaborator influence showed that there is no strong relationship between the two variables (see Figure \ref{fig:agecor} of the SI).
The influence of top-collaborators on citation counts is field dependent. In particular, a small dependence can be observed in History, Art, Philosophy, Sociology, Population and Environmental Sciences. Even for highly cited authors, removing the top-collaborator does not affect citation counts by a large margin. Conversely, it is interesting to note that, for some fields, removing the top-collaborator may cause a large impact on author citations. This pattern emerges in several fields, including Engineering, Computer Science, Physics, Medicine, Biology, Chemistry and Material Sciences. It is also worth noting that even highly-cited authors can be strongly affected by removing top-collaborator publications. In Medicine, authors with more than $4,000$ citations can share more than $3,000$ citations with their single top-collaborators. In Economics, Mathematics and Psychology, the removal of top-collaborators from the analysis can also significantly impact the citation counts of highly-cited authors. However, this effect is not as frequent as observed in Medicine and Chemistry.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{fos_hist2d_before_after_2020_citations.pdf}
\caption{Total number of citations (x-axis) vs. citations obtained disregarding papers co-authored with top-collaborators (y-axis). While in History, Art, Philosophy and Sociology top-collaborators account for a small percentage of citations, in other disciplines such as Medicine and Chemistry, top-collaborators may play an important individual role to authors citation counts.
}
\label{fig:cits}
\end{figure}
In Figure \ref{fig:rankhindex} we conducted a similar experiment by considering the \emph{generalized h-index} as the visibility metric. The x-axis corresponds to the authors' rank when sorting authors according to their \emph{h-indexes}. The y-axis is the analogous ranking, but disregarding the contributions of top-collaborators. The best ranked authors are located at the top left region of each subpanel.
We observe again that the importance in terms of h-indexes is clearly field dependent.
Once again, the main differences across areas arise for the most visible (best ranked) researchers. In History, Art, Philosophy, Sociology, Population and Environmental Sciences, the original rank is affected mostly for the authors with the lowest h-index values. The h-indexes of the best ranked authors seem not to be strongly dependent on the contributions of top-collaborators in these disciplines.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{fos_hist2d_hrank_2020.pdf}
\caption{Analysis of how $h$-based authors ranking is affected by top-collaborators contributions. The x-axis represents the ranking obtained with the generalized h-index and the y-axis corresponds to the ranking of the same metrics but disregarding top-collaborator contributions.
}
\label{fig:rankhindex}
\end{figure}
The disciplines most dependent on top-collaborators are Engineering, Computer Science, Physics, Medicine, Biology, Chemistry and Material Sciences. For some authors in these areas, the ranking can be strongly impacted if top-collaborator contributions are disregarded. Even top-ranked researchers can be affected by removing top-collaborator papers. This result suggests that a single, recurrent collaboration may play a prominent role in establishing a high visibility level in these disciplines.
\subsection{Relationship between number of co-authors and top-collaborators influence}
The previous section showed that top-collaborator influence based on visibility metrics is field dependent. It thus becomes interesting to investigate whether these patterns can be explained by the differences in the typical number of collaborators in each field. This is an issue worth studying because, when authors publish several single-authored papers, the lack of top-collaborators can be trivially explained. As we shall show, the predominance of single-authored papers in the humanities could be one reason for the low influence of top-collaborators in disciplines such as History, Art and Philosophy~\citep{sabharwal2013comparing}.
Figure \ref{fig:ncolabs} shows the distribution of the number of collaborators for each discipline. We observe that for some disciplines the low influence of top-collaborators may be correlated with the typically low number of collaborators. This is exactly the case of humanities disciplines. A large dependence on top-collaborators might also be correlated with a large number of collaborations, and this is the case of Medicine and Biology. However, not all influence patterns can be totally explained by the number of collaborations. Geology and Psychology have similar top-collaborator influence profiles (see Figure \ref{fig:rankhindex}). However, their typical numbers of collaborators are very distinct: Geology has roughly 75\% more collaborators than Psychology. In a similar fashion, Geology and Biology share a similar typical number of collaborators, but top-collaborator influence is clearly stronger in the Biology field.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{number_of_colabs_per_fos_2020_percentile95.pdf}
\caption{Distribution of the number of collaborators per field of study. The average value is calculated ignoring values above the 95th percentile.
}
\label{fig:ncolabs}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{parallel_rank_metrics.pdf}
\caption{Parallel coordinates illustrating the relationship between the number of collaborators, the influence of top-collaborators on citations and the influence of top-collaborators on publication counts. Metrics are normalized by the maximum quantity in each axis. }
\label{fig:comparison}
\end{figure}
Figure \ref{fig:comparison} ranks the disciplines according to different author metrics: the average number of collaborators and the median influence of the top-collaborator on citations and on the number of papers. The values on each axis are normalized by the maximum value observed for the specific variable. The non-normalized median values of top-collaborator influence are displayed in Table \ref{tab:my_label} of the SI. The figure confirms that the relationship between top-collaborator influence and the number of collaborators is not trivial. Medicine is the discipline with the highest number of collaborations, but the influence of top-collaborators is not as high as in other disciplines, such as Material Sciences and Chemistry. The curve observed for Geology is similar to the Medicine curve. An interesting behavior is also observed for Business. Despite being an area with a relatively low number of collaborators, the average top-collaborator influence on citations is high.
All in all, our analysis showed that more collaborative disciplines are more likely to have top-collaborators with a higher level of influence on authors' productivity and visibility. In some cases, however, top-collaborator influence is relevant even in disciplines characterized by a lower number of collaborators. It remains to be investigated, therefore, which factors other than the number of collaborators may affect top-collaborator influence.
\section{Conclusions}
\label{sec:conc}
In scientific research, collaborations play a fundamental role in promoting and spreading diverse ideas. Despite the large number of works conducted to study collaboration networks, the study of the influence of a single collaborator on authors' productivity and visibility has been limited to a few works. In this paper, we analyzed the influence of the most relevant collaborator (referred to as the \emph{top-collaborator}) on author metrics. The influence of top-collaborators on the number of papers, citations and the h-index was studied across several disciplines.
The influence of the top-collaborator was defined as the fraction of papers/citations obtained in studies co-authored with the top-collaborator. Several interesting patterns distinguishing disciplines were found. We identified that disciplines with a typically low number of collaborators (such as Philosophy, History and Art) tend to have a low top-collaborator importance. In this sense, the number of papers and citations is not highly affected if top-collaborator contributions are disregarded. A second group of disciplines is formed by authors with more collaborators and a moderate influence of the top collaborator. This group includes e.g. Economics and Mathematics. A third group of disciplines comprises those displaying an inversely proportional relationship between the number of collaborators and the influence of the top-collaborator. Medicine and Geology are the two most collaborative disciplines; however, the influence of top-collaborators there is surpassed by other disciplines. In an opposite behavior, Business is typically a discipline with a relatively low number of collaborations, but the influence of top-collaborators on citations is roughly the same as that observed for Geology, which is a much more collaborative discipline.
Our results showed that the top-collaborator may play an important role in particular disciplines, even for highly cited authors. This implies that disentangling an individual's success from that of their collaborative teams has become a challenging task, and it will get more complex as science becomes even more collaborative. In addition, we found different patterns of collaboration influence among different disciplines. Thus, any new metric to measure individual research impact should be versatile enough to account for the specificities of each discipline, which is also challenging as science has become more interdisciplinary over the past years.
In future works, we are interested in using top-collaborator influence metrics to investigate individual careers. We are particularly interested in investigating if collaboration patterns with top-collaborators could predict a higher future visibility or other relevant author metrics~\citep{brito2021associations}.
In this case, we could use the proposed network representation to measure the effective number of collaborators via accessibility or symmetry metrics~\citep{correa2017patterns,tohalino2018extractive}. Another interesting topic worth studying is the contribution performed by top-collaborators~\citep{correa2017patterns}. In other words, we could investigate, e.g., whether top-collaborators are more likely to contribute to a specific task, such as writing or research design.
\newpage
\section*{Acknowledgments}
A.C.M.B. acknowledges financial support from São Paulo Research Foundation (FAPESP Grant no. 2020/14817-2) and Capes-Brazil for sponsorship. D.R.A. acknowledges financial support from FAPESP (grant no. 20/06271-0) and CNPq-Brazil (grant no. 304026/2018-2).
\bibliographystyle{abbrvnat}
\section{Introduction}
The tit-for-tat strategy has been studied extensively in repeated games, where two agents playing tit-for-tat adjust their behavior to match that of their opponent in the past. For example, in the repeated Prisoner's dilemma, the choice of a player is between cooperation and retaliation. An agent playing tit-for-tat will cooperate as long as their opponent cooperates as well, and will subsequently copy the previous action of the other agent. More generally, a tit-for-tat strategy prescribes proportional retaliation to avoid escalation of conflict. Axelrod~\cite{axelrod_book} studies tit-for-tat to explain how high levels of cooperation can be achieved in groups of animals and human societies even when each individual player is selfish. Tit-for-tat can also be used to study economic policies such as setting tariffs between countries, where an increase in tariffs by one country is followed by corresponding increases in tariffs by other countries.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{fractions_n=50_v17.png}
\includegraphics[scale=0.5]{fractions_n=50_v6.png}
\caption{{\textit{Tit-for-tat in two markets with $n=50$ players. Each player $i$ starts with an initial amount $x_i(0)$ of good $i$ and a way of distributing its good given by a vector $\vec{y_i}(0)$, where $y_{i,j}(0)$ is the fraction player $i$ gives to player $j$ (from good $i$) at round $0$. The players repeatedly exchange goods, produce their good from the bundles acquired, and then update the fractions according to the tit-for-tat rule. The picture shows the fractions $y_{j,i}(t)$, which have large oscillations.}}}
\end{figure}
In production economies there is a set of $n$ players, each of which starts with some amount of an eponymous good and a production recipe. The players can produce their own good given as input a bundle of various amounts of goods from the market.
In the simple case of additive production, there is a matrix $\vec{a}$ so that player $i$ can make $a_{i,j}$ units of its good given as input one unit of good $j$.
The production economy model was studied first by Von Neumann~\cite{vNeumann46} and Gale~\cite{gale1976linear} at the steady state, where the economy expands by the same factor as time progresses. Local dynamics in this type of market were recently studied in~\cite{BMN18}, where the players start with some arbitrary way of investing in goods and repeatedly adapt their investments based on past performance.
\bigskip
\noindent \textbf{Contribution}. Our contribution is to study the tit-for-tat dynamic in production markets with additive production, where the worth of a good is the same to everyone; that is, the input of player $j$ has value $a_{i,j} = v_j$ for any player $i$.
In tit-for-tat, each player decides in what fractions to split its good among its neighbours, and then updates the fractions in the next round proportionally to the contribution of each good in its production.
Examples from the simulation of economies for two and three players are given in Figure~\ref{fig:n=2} and Figure~\ref{fig:n=3}.
\begin{figure}[h!]
\centering
\subfigure[Fractions $y_{j,i}(t)$ for all $i,j$ over $ 70$ rounds.]
{
\includegraphics[scale=0.5]{fractions_n=2_short.png}
}
\subfigure[Fractions $y_{j,i}(t)$ for all $i,j$ over $250$ rounds.]
{
\includegraphics[scale=0.5]{fractions_n=2.png}
}\\
\subfigure[Amount of good $1$ over time.]
{
\includegraphics[scale=0.5]{amounts_n=2_player1.png}
}
\subfigure[Amount of good $2$ over time.]
{
\includegraphics[scale=0.5]{amounts_n=2_player2.png}
}
\caption{\textit{Tit-for-tat dynamic in an economy with $n=2$ players, where $v = [0.91, 1.186]$. Each player $i$ starts with an initial amount $x_i(0)=1$ of good $i$. The initial fractions are $\vec{y}(0)= [[0.95, 0.05], [0.55, 0.45]]$, where $y_{j,i}(t)$ is the fraction received by player $i$ from good $j$ at time $t$. The players repeatedly exchange goods, produce from the bundles acquired, and then update the fractions according to the tit-for-tat rule.}}
\label{fig:n=2}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[Fractions $y_{j,i}(t)$ over $T = 200$ rounds.]
{
\includegraphics[scale=0.5]{fractions_n=3.png}
}
\subfigure[Amounts over $T = 30$ rounds. The amount of good $1$ shown in black, good $2$ in blue, and good $3$ in red.]
{
\includegraphics[scale=0.5]{amounts_n=3.png}
}
\caption{\textit{Tit-for-tat dynamic in an economy with $n=3$ players, where $v = [1.61, 0.03, 1.51]$, the initial amounts are $x_i(0)=1$, and the initial fractions are $\vec{y}(0)= [[0.54, 0.2, 0.26], [0.32, 0.31, 0.37], [0.07, 0.54, 0.39]]$, where $y_{j,i}(t)$ is the fraction received by player $i$ from good $j$ at time $t$. The players repeatedly exchange goods, produce from the bundles acquired, and then update the fractions according to the tit-for-tat rule. Note the fractions $y_{j,i}(t)$ in subfigure (a) have large oscillations initially.}}
\label{fig:n=3}
\end{figure}
\newpage
We study the phase transitions and give an exact characterization of which players grow or vanish over time. In particular, if $v^*$ is the maximum value of any good in the economy, then the amounts of all the players $i$ with
$v_i > 1/ v^*$ grow in the limit. On the other hand, the players $i$ with $v_i < 1 / v^*$ have vanishing amounts in the limit. If $v_i = 1 /v^*$ then the amount of player $i$ stays in a bounded region throughout time.
We also study how the fractions of their investments evolve in the long term, showing that in the limit the players invest their good only on the players with optimal production capacity.
\subsection{Related Work}
\noindent \textbf{Tit-for-tat}.
The tit-for-tat dynamic was studied before in pure exchange markets (without production) by Wu and Zhang~\cite{WZ07}, where there is a set of players, such that each player $i$ owns one unit of good $i$. The players have additive valuations given by a matrix $\vec{a}$, where $a_{i,j}$ is the value of player $i$ for a unit of good $j$.
Each player repeatedly brings to the market one unit of a good in order to exchange it for other goods that are potentially more valuable.
In tit-for-tat each player decides in what fractions to split its good among its neighbours, and then updates the fractions in the next round proportionally to the utility received from each neighbour.
The main result in~\cite{WZ07} is that when the valuations are symmetric\footnote{Symmetric valuations are such that the value of any player $i$ for a good $j$ is $a_{i,j} = v_j$.}, the dynamic converges to market equilibria for all non-degenerate starting configurations. The work in~\cite{BDR19} further showed that the tit-for-tat dynamic cycles in exchange economies with non-symmetric valuations.
\medskip
\noindent \textbf{Proportional response}. A related dynamic studied in markets is proportional response, in which every player starts with some amount of good and an initial budget of money that it can use for acquiring goods. In each round, the players split their budget into bids on the goods, then each good is allocated to each player in proportion to the bid amount and the seller collects the money made from selling.
In the exchange economy, each player updates their bids on the goods in proportion to the contribution of each good in their utility; there, the dynamic converges to market equilibria~\cite{BDR19} for any economy with additive valuations.
In Fisher markets, proportional response converges to market equilibria, which was shown for additive valuations in~\cite{Zhang11} and for all CES valuations by~\cite{CCT18}. In Fisher markets with additive valuations, the proportional response dynamic is equivalent to gradient descent~\cite{BDX11}.
In production markets with additive production, the proportional response dynamic was shown in~\cite{BMN18} to
lead to universal growth of the market, where the amount of goods produced
grows over time (whenever growth is possible), but also to growing inequality between the players on the most efficient
production cycle and the rest. In particular, the dynamic learns through local interactions a global feature of the exchange
graph---the cycle with the highest geometric mean.
\medskip
\noindent \textbf{Tatonnement}. Another central dynamic studied in markets is tatonnement, which has been analyzed in a series of papers on markets for players with additive, CES (constant elasticity of substitution), and Leontief valuations~\cite{CCT18,cheung2019tatonnement,cole2008fast}. Tatonnement does not prescribe how allocations may be formed, but defines how prices are adapted depending on the demand in the previous round.
The Shapley-Shubik game~\cite{shapley1977trade}, which forms the basis of the proportional response dynamic, was studied in static Fisher and exchange settings for its equilibrium qualities~\cite{johari2004efficiency,feldman2005price,BGM17}.
\section{Model}
There is a set $N = \{1, \ldots, n\}$ of players, so that each player $i$ can make good $i$ using as ingredients various amounts of goods in the system. The production recipe is given by a vector $\vec{v}$, so that player $i$ can make $v_{j}$ units of its good given one unit of good $j$.\footnote{Note that in the most general case of additive production, there is a matrix $\vec{a}$ so that player $i$ can make $a_{i,j}$ units of good $i$ given one unit of good $j$.}
The tit-for-tat dynamic is defined next. Note that there is no money involved; instead, each player decides how to allocate its good among the players in the economy, and then updates its fractions depending on what it received from the other players.
\begin{definition}[Tit-for-tat Dynamic] \label{def:tft}
Each good $i$ is worth $v_i$ to every player $j$. The initial amount of good $i$ is $x_i(0)$. Each player $i$ starts by allocating its good in initial fractions $\vec{y}_i(0)$.
At each time $t$, the following steps take place:
\begin{itemize}
\item \textbf{\emph{Exchange.}} Every player $i$ receives an amount $w_{i,j}(t) = y_{j,i}(t) \cdot x_{j}(t)$ of each good $j$, where $y_{j,i}(t)$ is the fraction given by player $j$ to player $i$ (from good $j$) in round $t.$
\item \textbf{\emph{Production.}} Each player $i$ produces its good from the bundle acquired:
$$
x_i(t+1) = \sum_{j=1}^{n} v_j \cdot w_{i,j}(t)\,.
$$
\item \textbf{\emph{Fractions update.}} Each player $i$ updates the fraction it gives to player $j$ according to the contribution of good $j$ in its production:
$$
y_{i,j}(t+1) = \frac{v_{j} \cdot w_{i,j}(t)}{x_i(t+1)}\,.
$$
\end{itemize}
\end{definition}
The starting configuration is non-degenerate: $x_i(0) > 0$ and $y_{j,i}(0) > 0$ for all $i,j$.
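To make the dynamic concrete, the following minimal Python sketch simulates Definition~\ref{def:tft}; the vector $\vec{v}$, the initial amounts, the uniform initial fractions, and all names are our illustrative assumptions rather than part of the model.
\begin{verbatim}
# Minimal simulation sketch of the tit-for-tat dynamic; v, x0 and the
# uniform (non-degenerate) initial fractions are illustrative choices.
import numpy as np

def tit_for_tat(v, x0, T):
    v = np.asarray(v, dtype=float)
    n = len(v)
    x = np.asarray(x0, dtype=float)   # x[i]: amount of good i
    y = np.full((n, n), 1.0 / n)      # y[i, j]: fraction of good i given to j
    for _ in range(T):
        w = y.T * x                   # exchange: w[i, j] = y[j, i] * x[j]
        x_new = w @ v                 # production: x_i' = sum_j v_j * w[i, j]
        y = (w * v) / x_new[:, None]  # update: y[i, j]' = v_j * w[i, j] / x_i'
        x = x_new
    return x, y

# v* = 1.5: player 3 has v_3 * v* = 0.75 < 1 and vanishes,
# while players 1 and 2 grow.
x, y = tit_for_tat(v=[1.5, 1.0, 0.5], x0=[1.0, 1.0, 1.0], T=100)
\end{verbatim}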
\begin{definition}[Growth and decay]
A player $i$ grows if $\lim_{t \to \infty} x_i(t) = \infty$ and vanishes if $\lim_{t \to \infty} x_i(t) = 0$.
\end{definition}
\begin{definition}[Optimal good]
A good $i$ is valuable if $v_i >1$ and
optimal if $v_i \geq v_j$ for all $j \in [n]$. Define $v^* = \max_{i \in [n]} v_i$.
\end{definition}
\begin{figure}[h!]
\centering
\subfigure[Fractions $y_{j,i}(t)$ in a 5 player economy.]
{
\includegraphics[scale=0.5]{fractions_n=5_v7.png}
}
\subfigure[Fractions $y_{j,i}(t)$ in a 10 player economy.]
{
\includegraphics[scale=1.35]{fractions_n=10_v2_v2.png}
}
\subfigure[Fractions $y_{j,i}(t)$ in a 25 player economy.]
{
\includegraphics[scale=0.5]{fractions_n=25_v6.png}
}
\subfigure[Fractions $y_{j,i}(t)$ in a 50 player economy.]
{
\includegraphics[scale=0.5]{fractions_n=50_v19.png}
}
\caption{\textit{Fractions $y_{j,i}(t)$, for all $i,j$, in four markets with $n=5$ players (a), $n=10$ players (b), $n=25$ players (c), and $n=50$ players (d) over time. Recall $y_{j,i}(t)$ is the fraction received by player $i$ from good $j$ at time $t$; each color represents one trajectory of $y_{j,i}(t)$ for some pair $(i,j)$. Note some trajectories have very large fluctuations before converging and so appear as a region.}}
\end{figure}
\section{Long Term Behavior}
We characterize the rate of growth for each player as follows.
\begin{theorem} \label{thm:char}
Consider an economy with $n$ players following the tit-for-tat dynamic. Recall $v^* = \max_{j \in [n]} v_j$. Then for each player $i$ there exist constants $c_i, d_i > 0$ so that the amount of good $i$ satisfies
$$
c_i \cdot (v_i \cdot v^*)^{\lfloor t/2 \rfloor} \leq x_{i}(t) \leq d_i \cdot (v_i \cdot v^*)^{\lfloor t/2 \rfloor}, \; \mbox{for all} \; t \in \mathbb{N}
$$
In particular, the amount of good $i$ grows over time if $v_{i} > 1 / v^*$, vanishes if $v_{i} < 1/v^*$, and stays in a bounded region throughout time if $v_{i} = 1/v^*$.
\end{theorem}
\begin{proof}
From the update rule in Definition \ref{def:tft}, it follows that for each pair of players $(i,j)$ we have
\begin{align} \label{eq:def_id}
\frac{y_{i,j}(t+1)}{y_{j,i}(t)} = v_j \cdot \frac{x_j(t)}{x_i(t+1)} \; \mbox{ and }\;
\frac{y_{j,i}(t)}{y_{i,j}(t-1)} = v_i \cdot \frac{x_i(t-1)}{x_j(t)}
\end{align}
Multiplying the two identities in (\ref{eq:def_id}) gives
\begin{align} \label{eq:useful_id}
& \frac{y_{i,j}(t+1)}{y_{i,j}(t-1)} = (v_i \cdot v_j) \cdot \frac{x_i(t-1)}{x_i(t+1)}
\end{align}
We consider two cases. If $t = 2 \ell+1$, then from (\ref{eq:useful_id}) we get
\begin{align} \label{eq:odd_id}
\prod_{m=1}^{\ell} \frac{y_{i,j}(2m+1)}{y_{i,j}(2m-1)} = \prod_{m=1}^{\ell} (v_i \cdot v_j) \cdot \frac{x_{i}(2m-1)}{x_{i}(2m+1)} \implies \frac{y_{i,j}(2\ell+1)}{y_{i,j}(1)} = (v_i \cdot v_j)^{\ell} \cdot \frac{x_i(1)}{x_i(2\ell+1)}
\end{align}
Rewriting (\ref{eq:odd_id}) we obtain
\begin{align} \label{eq:odd_id_nice}
x_{i}(2\ell+1) \cdot y_{i,j}(2\ell+1) = (v_i \cdot v_j)^{\ell} \cdot x_i(1) \cdot y_{i,j}(1)
\end{align}
Let $i^*$ be a player whose good has maximum value: $v_{i^*} = v^*$. Recall $y_{i,j}(t) \leq 1$ for all $i,j$. By taking $j = i^*$ in equation (\ref{eq:odd_id_nice}) we get
\begin{align}
x_i(2\ell+1) = \frac{(v_i \cdot v_{i^*})^{\ell} \cdot x_i(1) \cdot y_{i,i^*}(1)}{y_{i,i^*}(2\ell+1)} \geq (v_i \cdot v_{i^*})^{\ell} \cdot x_i(1) \cdot y_{i,i^*}(1)
\end{align}
Thus there is a constant $\widetilde{c}_i > 0$ so that
\begin{align} \label{eq:odd_lb}
x_i(2\ell+1) \geq \widetilde{c}_i \cdot (v_i \cdot v^*)^{\ell} \; \mbox{ for each } \; \ell \in \mathbb{N}
\end{align}
To upper bound the amount of good $i$, recall that $\sum_{j=1}^n y_{i,j}(t) = 1$.
By summing over all $j$ in equation (\ref{eq:odd_id_nice}) we get:
\begin{align} \label{eq:odd_ub}
x_{i}(2\ell+1) & = \sum_{j=1}^n x_{i}(2\ell+1) \cdot y_{i,j}(2\ell+1) = \sum_{j=1}^n (v_i \cdot v_j)^{\ell} \cdot x_i(1) \cdot y_{i,j}(1) \notag \\
& \leq (v_i \cdot v^*)^{\ell} \cdot x_i(1) \cdot \sum_{j=1}^n y_{i,j}(1) = x_i(1) \cdot (v_i \cdot v^*)^{\ell}
\end{align}
By taking $\widetilde{d}_i = x_i(1) > 0$, we get
$x_{i}(2\ell+1) \leq \widetilde{d}_i \cdot (v_i \cdot v^*)^{\ell}$ for each $\ell$. Together with
(\ref{eq:odd_lb}), this implies that for each player $i$ there exist constants $\widetilde{c}_i, \widetilde{d}_i > 0$ so that
\begin{align}
\widetilde{c}_i \cdot (v_i \cdot v^*)^{\ell} \leq x_{i}(2\ell+1) \leq \widetilde{d}_i \cdot (v_i \cdot v^*)^{\ell}
\end{align}
If $t = 2\ell$, then the identity in (\ref{eq:useful_id}) gives
\begin{align} \label{eq:even_id}
\frac{y_{i,j}(2\ell)}{y_{i,j}(0)} = (v_i \cdot v_j)^{\ell} \cdot \frac{x_i(0)}{x_i(2\ell)}
\end{align}
From (\ref{eq:even_id}) there exist constants $c_i',d_i'$ so that $c_i' \cdot (v_i \cdot v^*)^{\ell} \leq x_i(2\ell) \leq d_i' \cdot (v_i \cdot v^*)^{\ell}$ for all $\ell \in \mathbb{N}$. By combining the cases of even and odd $t$ it follows that there exist $c_i, d_i > 0$ so that
\begin{align} \label{eq:bound_amounts}
c_i \cdot (v_i \cdot v^*)^{\lfloor t/2 \rfloor} \leq x_{i}(t) \leq d_i \cdot (v_i \cdot v^*)^{\lfloor t/2 \rfloor}, \; \mbox{for all} \; t \in \mathbb{N}
\end{align}
This gives the required long term behavior for the sequence $x_i(t)$.
\end{proof}
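As a quick numerical sanity check (our own, reusing the \texttt{tit\_for\_tat} sketch given after Definition~\ref{def:tft}), the normalized amounts $x_i(t)/(v_i \cdot v^*)^{\lfloor t/2 \rfloor}$ indeed remain within constant bounds as $t$ grows:
\begin{verbatim}
# Numerical check of the bounds in the theorem above: the ratio
# x_i(t) / (v_i * v_star)**(t // 2) stays bounded away from 0 and
# infinity for every player i.
v, x0 = [1.5, 1.0, 0.5], [1.0, 1.0, 1.0]
v_star = max(v)
for t in (20, 40, 80):
    x, _ = tit_for_tat(v, x0, t)
    print(t, [x[i] / (v[i] * v_star) ** (t // 2) for i in range(len(v))])
\end{verbatim}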
\begin{figure}[h!]
\centering
\subfigure[Fractions $y_{j,i}(t)$ in a 100 player economy.]
{
\includegraphics[scale=0.5]{fractions_n=100_v7.png}
}
\subfigure[Fractions $y_{j,i}(t)$ in a 100 player economy.]
{
\includegraphics[scale=0.5]{fractions_n=100_v1.png}
}
\caption{\textit{Fractions $y_{j,i}(t)$, for all $i,j$, in two economies with $n=100$ players over 50 rounds (a) and 300 rounds (b). Recall $y_{j,i}(t)$ is the fraction received by player $i$ from good $j$ at time $t$. Note the fractions have large oscillations.}}
\end{figure}
\begin{remark}
Every player with a valuable good grows in the limit. This follows from Theorem~\ref{thm:char}.
It can also be shown directly by considering the potential function $g_i(t) = x_i(t) \cdot y_{i,i}(t)$ for each player $i$ and noting that $g_i(t) = v_i^t \cdot g_i(0)$. Then $x_i(t) = v_i^t \cdot g_i(0) / y_{i,i}(t)$, and so for $v_i > 1$ we get $\lim_{t \to \infty} x_i(t) = \infty$.
\end{remark}
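For completeness, the potential identity used in the remark follows in one line from the update rules in Definition~\ref{def:tft}:
$$
g_i(t+1) = x_i(t+1) \cdot y_{i,i}(t+1) = v_i \cdot w_{i,i}(t) = v_i \cdot y_{i,i}(t) \cdot x_i(t) = v_i \cdot g_i(t),
$$
so $g_i(t) = v_i^t \cdot g_i(0)$ by induction. Since $y_{i,i}(t) \leq 1$, we get $x_i(t) \geq v_i^t \cdot g_i(0)$, which diverges for $v_i > 1$.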
Note, however, that players may grow even if their production coefficient is less than one, as long as it is above the threshold $1/v^*$. Thus the dynamic supports some diversity in the set of players that survive in the long run.
\newpage
We also show that in the limit, each player sends its good only to those players with optimal production.
\begin{corollary}
Let $i$ be any player and $j$ any player with sub-optimal production, i.e., with $v_j < v^*$. Then the fraction of its good that player $i$ gives to player $j$ vanishes in the limit: $\lim_{t \to \infty} y_{i,j}(t) = 0$.
\end{corollary}
\begin{proof}
Using identity (\ref{eq:odd_id_nice}) and the lower bound in (\ref{eq:bound_amounts}) from Theorem \ref{thm:char}, for each player $i$ we get
\begin{align} \label{eq:y_ij_ub}
y_{i,j}(2\ell+1) & = (v_i \cdot v_j)^{\ell} \cdot \frac{x_i(1) \cdot y_{i,j}(1)}{x_i(2\ell+1)} \notag \\
& \leq (v_i \cdot v_j)^{\ell} \cdot \frac{x_i(1) \cdot y_{i,j}(1)}{c_i \cdot (v_i \cdot v^*)^{\ell}} \notag \\
& = \frac{x_i(1) \cdot y_{i,j}(1)}{c_i} \cdot \Bigl(\frac{v_j}{v^*}\Bigr)^{\ell}
\end{align}
For any $j$ with $v_j < v^*$, taking the limit $\ell \to \infty$ in (\ref{eq:y_ij_ub}) gives $\lim_{\ell \to \infty} y_{i,j}(2\ell+1) = 0$. We can similarly use (\ref{eq:even_id}) and the lower bound in (\ref{eq:bound_amounts}) to show that $y_{i,j}(2\ell) \leq \frac{x_i(0) \cdot y_{i,j}(0)}{c_i} \cdot \Bigl(\frac{v_j}{v^*}\Bigr)^{\ell}$, which gives $\lim_{\ell \to \infty} y_{i,j}(2\ell) = 0$. Thus $\lim_{t \to \infty} y_{i,j}(t) =0$.
\end{proof}
Note that even though all the fractions $y_{i,j}(t)$ eventually converge to $0$ or $1$, in the examples shown there are large fluctuations in the initial periods.
\bibliographystyle{alpha}
\section{Introduction}
The internet makes an unprecedented variety of opportunities
available to people. Whether looking for a place to go for vacation,
an apartment to rent, or a PC to buy, the potential customer is
faced with countless possibilities. Most people have difficulty
finding exactly what they are looking for, and the current tools
available for searching for desired items are widely considered
inadequate. Artificial intelligence provides powerful techniques
that can help people address this essential problem. Search engines
can be very effective in locating items if users provide the correct
queries. However, most users do not know how to map their
preferences to a query that will find the item that most closely
matches their requirements.
{\em Recommender
systems}~\shortcite{Resnick94grouplens,Adomavicius05,ref:burke-survey}
address this problem by mapping explicit or implicit user
preferences to items that are likely to fit these preferences. They
range from systems that require very little input from the users to
more user-involved systems. Many collaborative filtering
techniques~\shortcite{ref:collaborative-filtering} infer user preferences
from their past actions, such as previously purchased or rated
items. On the other hand, popular comparison
websites\footnote{E.g., www.shopping.com} often require that users
state at least some preferences on desired attribute values before
producing a list of recommended digital cameras, portable computers,
etc.
In this article, we consider tools that provide recommendations based on explicitly
stated preferences, a task that we call preference-based search. In particular, the problem is defined as:
\begin{quote}
Given a collection ${\cal O} = \{o_1,..,o_n\}$ of $n$ options,
preference-based search (PBS) is an interactive process that helps
users identify the most preferred option, called the {\em target}
option $o_t$, based on a set of preferences that they have stated on
the attributes of the target.
\end{quote}
Tools for preference-based search face a tradeoff between two
conflicting design goals:
\begin{itemize}
\item decision accuracy, measured as the percentage of time that the user
finds the target option when using the tool, and
\item user effort, measured as the number of interaction cycles or task time that
the user takes to find the option that she believes to be the target
using the tool.
\end{itemize}
By target option, we refer to the option that a user prefers most
among the available options. To determine the accuracy of a product
search tool, we measure whether the target option a user finds with
the tool corresponds to the option that she finds after reviewing
all available options in an offline setting. This procedure, also
known as the switching task, is used in consumer decision making
literature~\cite{ref:decision-accuracy}.
Notice that such a procedure is only used to measure the accuracy of a
system. We do not suggest that it models human decision
behavior.
In one approach, researchers focus purely on accuracy in order to
help users find the most preferred choice. For example, Keeney and
Raiffa~\citeyear{ref:keeney-maut} suggested a method to obtain a
precise model of the user's preferences.
This method, known as the value function assessment procedure, asks
the user to respond to a long list of questions. Consider the case
of search for an ideal apartment. Suppose the decision outcome
involves trading off some preferred values of the size of an
apartment against the distance between the apartment and the city
center. A typical assessment question is in the form of ``All else
being equal, which is better: 30 sqm at 60 minutes distance or 20
sqm at 5 minutes distance?'' Even though the results obtained in
this way provide a precise model to determine the most preferred
outcome for the user, this process is often cognitively arduous. It
requires the decision maker to have a full knowledge of the value
function in order to articulate answers to the value function
assessment questions. Without training and expertise, even
professionals are known to produce incomplete, erroneous, and
inconsistent answers~\cite{Tversky1974}. Therefore, such techniques
are most useful for well-informed decision makers, but less so for
users who need the help of a recommender system.
Recently, researchers have made significant improvements to this
method. \citeA{ref:chajewska} consider a prior probability
distribution of a user's utility function and only ask questions
having the highest value of information on attributes that will give
the highest expected utility. Even though it was developed for
decision problems under uncertainty, this adaptive elicitation
principle can be used for preference elicitation for product search
which is often modeled as decision with multiple objectives (see in
the related work section the approach
of~\citeR{ref:price-messinger}). \citeA{Boutilier2002} and
\citeA{ref:boutilier-ijcai05} further improved this method by taking
into account the value assigned to future preference elicitation
questions in order to further reduce user effort by modeling the
maximum possible regret as a stopping
criterion.
In another extreme, researchers have emphasized providing
recommendations with as little effort as possible from the users.
Collaborative filtering
techniques~\cite{ref:collaborative-filtering}, for example, infer an
implicit model of a user's preferences from items that they have
rated. An example of such a technique is Amazon's ``people who
bought this item also bought..." recommendation. However, users may
still have to make a significant effort in assigning ratings in
order to obtain accurate recommendations, especially as a new user
to such systems (known as the new user problem). Other techniques
produce recommendations based on a user's demographic
data~\cite{Rich79,krulwich97}.
\subsection{Mixed Initiative Based Product Search and Recommender Systems}
In between these two extremes, mixed-initiative dialogue systems
have emerged as promising solutions because they can flexibly scale
user's effort in specifying their preferences according to the
benefits they perceive in revealing and refining preferences already
stated. They have been also referred to as utility and
knowledge-based recommender systems according to
Burke~\citeyear{ref:burke-survey}, and utility-based decision
support interface systems (DSIS) according to Spiekermann and
Paraschiv \citeyear{Spiekermann02}. In a mixed-initiative system,
the user takes the initiative to state preferences, typically in
reaction to example options displayed by the tool. Thus, the user
can provide explicit preferences as in decision-theoretic methods,
but is free to flexibly choose what information to provide, as in
recommender systems.
The success of these systems depends not only on the AI techniques
in supporting the search and recommending task, but also on an
effective user-system interaction model that motivates users to
state complete and accurate preferences. It must strike the right
compromise between the recommendation accuracy it offers and the
effort it requires from the users. A key criterion to evaluate these
systems is therefore the accuracy vs. effort framework which favors
systems that offer maximum accuracy while requiring the same or less
user effort. This framework was first proposed by~\citeA{Payne1993}
while studying user behaviors in high-stake decision making settings
and later adapted to online user behaviors in medium-stake decision
making environments by Pu and Chen~\citeyear{ref:pu-ec05} and Zhang
and Pu~\citeyear{Zhang06}.
In current practice, a mixed-initiative product search and recommender system computes its display set
(i.e., the items presented to the user) based on the closeness of
these items to a user's preference model. However, this set of items
is not likely to provide for diversity and hence may compromise on
the decision accuracy. Consider for example a user who is looking
for a portable PC and gives a low price and a long battery life as
initial preferences. The best matching products are all likely to be
standard models with a 14-inch display and a weight around 3
kilograms. The user may thus never get the impression that a good
variety is available in weight and size, and may never express any
preferences on these criteria. Including a lighter
product in the display set may greatly help a user identify her true
choice and hence increase her decision accuracy.
Recently, the need for recommending not only the best matches,
called the {\em candidates}, but also a diverse set of other items,
called {\em suggestions}, has been recognized. One of the first to
recognize the importance of suggestive examples was
ATA~\cite{linden97interactive}, which explicitly generated examples
that showed the extreme values of certain attributes, called {\em
extreme} examples. In case-based recommender systems, the strategy
of generating both similar and diverse cases was used
\cite{McSherry02,Smyth2003ijcai}. \citeA{ref:diversity-aaai05}
investigated algorithms for generating similar and diverse solutions
in constraint programming, which can be used to recommend
configurable
products. The complexity
of such algorithms was further analyzed.
So far, the suggestive examples only aim at providing a diverse set
of items without analyzing more deeply whether variety actually
helps users make better decisions. One exception is the
compromise-driven diversity generation strategy by
McSherry~\citeyear{ref:McSherry-compromise} who proposes to suggest
items which are representative of all possible compromises the user
might be prepared to consider. As Pu and Li~\citeyear{ref:pu-ec05}
pointed out, tradeoff reasoning (making compromises) can increase
decision accuracy, which indicates that the compromise-driven
diversity might have a high potential to achieve better decision
quality for users. However, no empirical studies have been carried
out to prove this.
\subsection{Contribution of Our Work}
We consider a mixed-initiative framework with an explicit preference
model, consisting of an iterative process of showing examples,
eliciting critiques and refining the preference model. Users are
never forced to answer questions about preferences they do not yet
possess. On the other hand, their preferences are {\em volunteered and constructed}, not directly
asked. This is the key difference between navigation-by-proposing used in the mixed-initiative user interaction model as opposed to value assessment-by-asking used in traditional decision support systems.
With a set of simulated and real-user involved experiments, we
argue that including diverse suggestions among the examples shown
by a mixed initiative based product recommender is a significant
improvement in the state-of-the-art in this field. More specifically,
we show that the model-based suggestion techniques that we have developed
indeed motivate users to express more preferences and help
them achieve a much higher level of decision accuracy without
additional effort.
The rest of this article is organized as follows. We first describe
a set of {\em model-based} techniques for generating suggestions in
preference-based search. The novelty of our method includes: 1) it
expands a user's current preference model, 2) it generates a set of
suggestions based on an analysis of the likelihood of the missing
attributes, and 3) it displays {\em suggested options} whose
attractiveness stimulates users' preference expression. To validate
our theory, we then examine how suggestion techniques help users
identify their target choice in both simulation environments and
with real users. We base the evaluation of these experiments on two
main criteria. Firstly, we consider the completeness of a user's
preference model as measured by preference enumeration, i.e., the
number of features for which a user has stated preferences. The
higher the enumeration, the more likely a user has considered all
aspects of a decision goal, and therefore the decision is more
likely to be rational. Secondly, we consider decision accuracy as
measured by the contrary of the switching rate, which is the number
of users who did not find their target option using the tool and
chose another product after reviewing all options in detail. The
smaller the switching rate, the more likely a user is content with
what she has chosen using the tool, and thus the higher decision
accuracy.
The success of the suggestion techniques is confirmed by
experimental evaluations. An online evaluation was performed with
real users exploring a student housing database. A supervised user
study was additionally carried out with 40 users, performed in a
within-subject experiment setup that evaluated the quantitative
benefits of model-based suggestion. The results demonstrate that
model-based suggestion increased decision accuracy by up to $78\%$,
while the user's effort is about the same as using the
example-critiquing search tool without suggestions. Such user
studies which consider the particular criteria of accuracy vs.
effort have never been carried out by other researchers for
validating suggestion strategies or optimal elicitation procedures.
Finally, we end by reviewing related works followed by a conclusion.
\section{Example-critiquing}
In many cases, users searching for products or information are not
very familiar with the available items and their characteristics.
Thus, their preferences are not well established, but {\em
constructed} while learning about the
possibilities~\cite{Payne1993}. To allow such construction to take
place, a search tool should ask questions with a complete and
realistic context, not in an abstract way.
\begin{figure}
\centerline{\psfig{file=pics/example-critiquing.eps,width=8cm}}
\caption{\small Example-critiquing interaction. The dark box is the
computer's action, the other boxes show actions of the user.}
\label{fig:example-critiquing}
\end{figure}
A good way to follow this principle is to implement an {\em example
critiquing} interaction (see Figure~\ref{fig:example-critiquing}).
It shows examples of available options and invites users to state
their critique of these examples. This allows users to better
understand their preferences.
Example-critiquing has been proposed by numerous
researchers in two main forms: systems without and with explicit preference
models:
\begin{itemize}
\item in systems without preference models, the user proceeds by {\em tweaking}
the current best example (``I like this but cheaper'',``I like this
but French cuisine'') to make it fit with his or her preferences
better. The preference model is represented implicitly by the
currently chosen example and the interaction is that of
navigation-by-proposing. Examples of such systems are the FindMe
systems~\cite{Burke97findme,ref:burke-findme-journal}, the
ExpertClerk system~\cite{ShimazuExpertClerk}, and the dynamic
critiquing systems~\cite{ReillyMMS04}.
\item in systems with preference models, each critique is added to an explicit preference
model that is used to refine the query. Examples of systems with
explicit preference models include the ATA
system~\cite{linden97interactive}, SmartClient~\cite{pu00enriching},
and more recently incremental
critiquing~\cite{ref:incremental-critiquing}.
\end{itemize}
In this article, we focus on example-critiquing with an explicit
preference model for the advantage of effectively resolving users'
preference conflicts. Moreover, this approach not only helps users
make a particular choice, but also obtains an accurate preference
model for future purchases or cross-domain recommendations.
\subsection{Example}
As a simple example consider a student looking for housing. Options
are characterized by the following 4 attributes:
\begin{enumerate}
\item rent in Swiss Francs;
\item type of accommodation: room in a shared apartment,
studio, or apartment
\item distance to the university in minutes;
\item furnished/unfurnished.
\end{enumerate}
Assume that the choice is among the following options:
\begin{center}
\begin{tabular}{rllll}
& rent & type-of-accommodation & distance-to-university & furnished
\\ \hline
$o_1$ & 400 & room & 17 & yes \\
$o_2$ & 500 & room & 32 & yes \\
$o_3$ & 600 & apartment & 14 & no \\
$o_4$ & 600 & studio & 5 & no \\
$o_5$ & 650 & apartment & 32 & no \\
$o_6$ & 700 & studio & 2 & yes \\
$o_7$ & 800 & apartment & 7 & no \\
\end{tabular}
\end{center}
Assume that the user initially only articulates a preference for the
lowest price. She also has hidden preferences for an unfurnished
accommodation, and a distance of less than 10 minutes to the
university. None of the options can satisfy all of these
preferences, so the most suitable option requires the user to make a
{\em tradeoff} among her preferences. Let us assume that the
tradeoffs are such that option $o_4$ would be the user's most
preferred option. We call this the {\em target} option.
The user may start the search with only the first preference (lowest
price), and the tool would show the $k$ best options according to
the order shown in the table. Here, let $k = 1$ so that only
option $o_1$ is shown.
In an example-critiquing tool without a preference model, the user
indicates a {\em critique} of the currently shown example, and the
system then searches for another example that is as similar as
possible to the current one while also satisfying the critique. In
this case, the user might critique $o_1$ for being furnished, and
the tool might then show $o_3$, the most similar option that is
unfurnished. The user might add the critique that the option should be at
most 10 minutes from the university, and the system would then
return $o_7$ as the most similar option that satisfies this
critique. The user might again critique this option as being too
expensive, in which case the system would return to $o_3$ as
the most similar cheaper option. As there is no memory of earlier
critiques, the process is stuck in a cycle, and the user can never
discover the target $o_4$.
In a tool with a preference model, the user is able to state her
preference for an unfurnished option, making $o_3$ the best option.
Next, she might add the additional preference for a distance of less
than 10 minutes to the university, ending up with $o_4$ which is her
target choice. This illustrates how an explicit preference model
ensures the convergence of the process. In fact, decision theory
shows that when all preferences have been expressed, a user will
always be able to identify the target choice. Note however that more
complex scenarios might require explicit tradeoffs among preferences
to locate the right target choice~\cite{ref:pu-ec04}.
A popular approach to obtain a preference model is to {\em elicit}
it by asking questions to the user. However, this can lead to {\em
means objectives}~\cite{ref:keeney-1992} that distract from the true
target choice. As an example, the tool might first ask the user
whether she prefers a room, a studio or an apartment. If the user
truly has no preference, she might try to translate her preference
for an unfurnished option into a preference for an apartment, since
this is most likely to be unfurnished. However, this is not her true
preference and will shift the best tradeoff from $o_4$ to $o_3$ or
even $o_7$. This illustrates the importance of a mixed-initiative
approach where the user can state preferences in any order on her
own initiative.
The example-critiquing framework raises issues of how to model
preferences, how to generate the solutions shown to the user, and
how to efficiently implement the process. We now briefly summarize
the results of our previous work addressing these issues.
\subsection{Preference Modeling}
When a tool forces users to formulate preferences using particular
attributes or a particular order, they can fall prey to {\em means
objectives}~\cite{ref:keeney-1992} because they do not have the
catalog knowledge to relate this to their true intentions. Means
objectives are objectives that a person believes to correlate
positively to the true objectives. For example, a manufacturer with
a reputation for good quality may become an objective when it is
impossible to state an objective on the quality itself.
To avoid such means objectives, we require a preference model that
allows users to state preferences incrementally using any attribute,
in any order they wish. Furthermore, the preference model must be
easy to revise at each critiquing cycle by adding or removing
preferences.
This rules out techniques such as question-answer dialogues or
selection from a fixed set of preferences, which are commonly used
on the web today.
An effective formalism that satisfies these criteria is to formulate
preferences using soft constraints. A soft constraint is a function
from an attribute or a combination of attributes to a number that
indicates the degree to which the constraint is violated. More
generally, the values of a soft constraint can be elements of a
semiring~\cite{Bistarelli1997}. When there are several soft
constraints, they are combined into a single preference measure.
Examples of combination operators are summing or taking the maximum.
The overall preference order of outcomes is then given by this
combined measure.
For example, for an attribute that can take values a, b and c, a
soft constraint indicating a preference for value b could map a and
c to 1, and b to 0, thus indicating that only b does not violate the
preference. A preference for the surface area to be at least 30
square meters, where a small violation of up to 5 square meters
could be acceptable, can be expressed by a piecewise linear
function:
\begin{eqnarray*}
1 & & \mbox{if } x < 25 \\
0.2 (30 -x) & & \mbox{if } 25 \leq x \leq 30 \\
0 & & \mbox{if } x > 30
\end{eqnarray*}
In example-critiquing, each critique can be expressed as a soft
constraint, and the preference model is incrementally constructed by
simply collecting the critiques. Note that it is also possible for a
user to express several preferences involving the same attributes,
for example to express in one soft constraint that the surface area
should be at least 30 square meters (as above), and in another soft
constraint that it should be no more than 50 square meters. If the
soft constraints are combined by summing their effects, this results
in the piecewise linear function:
\begin{eqnarray*}
1 & & \mbox{if } x < 25 \\
0.2 (30 -x) & & \mbox{if } 25 \leq x \leq 30 \\
0 & & \mbox{if } 30 < x < 50 \\
0.2 (x -50) & & \mbox{if } 50 \leq x \leq 55 \\
1 & & \mbox{if } x > 55
\end{eqnarray*}
Thus, soft constraints allow users to express relatively complex
preferences in an intuitive manner. This makes soft constraints a
useful model for example-critiquing preference models. Furthermore,
there exist numerous algorithms that combine branch-and-bound with
constraint consistency techniques to efficiently find the most
preferred options in the combined order. More details on how to use
soft constraints for preference models are provided by Pu \&
Faltings~\citeyear{ref:constraint-journal-04}.
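As a concrete illustration, the two surface-area constraints above and their additive combination can be written directly as violation-cost functions. The following Python snippet is our own minimal sketch with hypothetical names, not code from any cited system:
\begin{verbatim}
# Our illustrative sketch of the two soft constraints above,
# combined by summing violation degrees (lower is better).
def at_least_30(x):          # prefer >= 30 sqm, 5 sqm tolerance
    if x < 25:  return 1.0
    if x <= 30: return 0.2 * (30 - x)
    return 0.0

def at_most_50(x):           # prefer <= 50 sqm, 5 sqm tolerance
    if x <= 50: return 0.0
    if x <= 55: return 0.2 * (x - 50)
    return 1.0

def combined(x):             # additive combination of the two
    return at_least_30(x) + at_most_50(x)

# e.g. combined(27) ~ 0.6, combined(40) == 0.0, combined(53) ~ 0.6
\end{verbatim}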
Importantly, soft constraints are a technique that allows a user to
partially and incrementally specify her preferences. The advantage
over utility functions is that it is not necessary to elicit a
user's preference for every attribute. Only attributes whose values
concern the current decision context are elicited. For example, if a
user is not interested in a certain brand of notebooks, then she
does not have to concern herself with stating preferences on those
products. This parsimonious approach is similar to the adaptive
elicitation method proposed by Chajewska et
al.~\citeyear{ref:chajewska}. However, in example-critiquing for
preference-based search, user's preferences are {\em volunteered} as
reactions to the displayed examples, not elicited; users are never
forced to answer questions about preferences without the benefit of
a concrete decision context.
\subsection{Generating Candidate Choices}
In general, users are not able to state each of their preferences
with numerical precision. Instead, a practical tool needs to use an
approximate preference model where users can specify their
preferences in a qualitative way.
A good way to implement such a preference model is to use
standardized soft constraints where numerical parameters are chosen
to fit most users. Such models will necessarily be inaccurate for
certain users. However, this inaccuracy can be compensated by
showing not just one, but a set of $k$ best candidate solutions. The
user then chooses the most preferred one from this set, thus
compensating for the preference model's inaccuracy. This technique
is commonly used in most search engines.
We have analyzed this technique for several types of preference
models: weighted soft constraints, fuzzy-lexicographic soft
constraints, and simple dominance
relations~\cite{Faltings2004solution_generation}.
A remarkable result is that for both weighted and
fuzzy-lexicographic constraint models, assuming a bound on the
possible error (deviation between true value and the one used by the
application) of the soft constraints modeling the preferences, the
probability that the true most preferred solution is within $k$
depends only on the number of the preferences and the error bound of
the soft constraints but not on the overall size of the solution
set. Thus, it is particularly suitable when searching a very large
space of items.
We also found that if the preference model contains many different
soft constraints, the probability of finding the most preferred
option among the $k$ best quickly decreases. Thus, compensating
model inaccuracy by showing many solutions is only useful when
preference models are relatively simple. Fortunately, this is often
the case in preference-based search, where people usually lack the
patience to input complex models.
As a result, the most desirable process in practice might be a
two-stage process where example-critiquing with a preference model
is used in the first stage to narrow down the set of options from a
large (thousands) space of possibilities to a small (20) most
promising subset. The second phase would use a tweaking interaction
where no preference model is maintained to find the best choice. Pu
and Chen~\citeyear{ref:pu-ec05} have shown tradeoff strategies in a
tweaking interaction that provide excellent decision accuracy even
when user preferences are very complex.
\subsection{Practical Implementation}
Another challenge for implementing example-critiquing in large scale
practical settings is that it requires solutions to be computed
specifically for the preference model of a particular user. This may
be a challenge for web sites with many users.
However, it has been
shown~\cite{Torrens1998,ref:constraints-journal-2002} that the
computation and data necessary for computing solutions can be coded
in very compact form and run as an applet on the user's computer.
This allows a completely scaleable architecture where the load for
the central servers is no higher than for a conventional web site.
Torrens, Faltings \& Pu~\citeyear{ref:constraints-journal-2002}
describe an implementation of example-critiquing using this
architecture in a tool for planning travel arrangements. It has been
commercialized as part of a tool for business
travelers~\cite{pu00enriching}.
\section{Suggestions}
In the basic example-critiquing cycle, we can expect users to state
any additional preference as long as they perceive it to bring a
better solution. The process ends when users can no longer see
potential improvements by stating additional preferences and have
thus reached an optimum. However, since the process is one of
hill-climbing, this optimum may only be a local optimum.

Consider again the example of a user looking for a notebook computer
with a low price range. Since all of the presented products have
about the same weight, say around 3 kg, she might never bother to
look for lighter products. In marketing science literature, this is
called the anchoring effect~\cite{Tversky1974}. Buyers are likely to
make comparisons of products against a reference product, in this
case the set of displayed heavy products. Therefore, a buyer might
not consider the possibility of a lighter notebook that might fit
her requirements better, and accept a sub-optimal result.
Just as in hill-climbing, such local optima can be avoided by
randomizing the search process. Consequently, several authors have
proposed including additional examples selected in order to educate
the user about other opportunities present in the choice of
options~\cite{linden97interactive,ShimazuExpertClerk,McSherry02,ref:smyth-diversity}.
Thus, the displayed examples would include:
\begin{itemize}
\item {\bf candidate} examples that are optimal for the current
preference query, and
\item {\bf suggested} examples that are chosen to stimulate the
expression of preferences.
\end{itemize}
Different strategies for suggestions have been proposed in
literature. Linden~\citeyear{linden97interactive} used extreme
examples, where some attribute takes an extreme value. Others use
diverse examples as
suggestions~\cite{ref:smyth-diversity,Smyth2003ijcai,ShimazuExpertClerk}.
Consider again the example of searching for
housing mentioned in the previous section. Recall that
the choice is among the following options:
\begin{center}
\begin{tabular}{rllll}
& rent & type-of-accommodation & distance-to-university & furnished
\\ \hline
$o_1$ & 400 & room & 17 & yes \\
$o_2$ & 500 & room & 32 & yes \\
$o_3$ & 600 & apartment & 14 & no \\
$o_4$ & 600 & studio & 5 & no \\
$o_5$ & 650 & apartment & 32 & no \\
$o_6$ & 700 & studio & 2 & yes \\
$o_7$ & 800 & apartment & 7 & no \\
\end{tabular}
\end{center}
In the initial dialogue with the system, the user has stated the
preference of lowest price. Consequently, the options
are ordered
$o_1 \succ o_2 \succ o_3 \sim o_4 \succ o_5 \succ o_6 \succ o_7$.
Assume that the system shows only one candidate, which is the
most promising option according to the known preferences: $o_1$.
What other options should be shown as suggestions to motivate
the user to express her remaining preferences?
Linden et al.~\citeyear{linden97interactive} proposed using {\em
extreme} examples, defined as examples where some attribute takes an
extreme value. For example, consider the distance: $o_6$ is the
example with the smallest distance. However, it has a much higher
price, and being furnished does not satisfy the user's other hidden
preference. Thus, it does not give the user the impression that a
closer distance is achievable without compromising her other
preferences. Only when the user wants a distance of less than 5
minutes can option $o_6$ be a good suggestion, otherwise $o_4$ is
likely to be better. Another problem with extreme examples is that
we need two such examples for each attribute, which is usually more
than the user can absorb.
Another strategy~\cite{ref:smyth-diversity,McSherry02,ref:McSherry-compromise,Smyth2003ijcai,ShimazuExpertClerk}
is to select suggestions to achieve a certain diversity, while also
observing a certain goodness according to currently known preferences.
As the tool already shows $o_1$ as the optimal example, the most different
example is $o_5$, which differs in all attributes but does not have an
excessive price. So is $o_5$ a good suggestion? It shows the user
the following opportunities:
\begin{itemize}
\item apartment instead of room: however, $o_3$ would be a cheaper way
to achieve this.
\item distance of 32 instead of 17 minutes: however, $o_2$ would be a
cheaper way to achieve this.
\item unfurnished instead of furnished: however, $o_3$ would be a cheaper
way to achieve this.
\end{itemize}
Thus, while $o_5$ is very diverse, it does not give the user an
accurate picture of what the true opportunities are. The problem is
that diversity does not consider the already known preferences, in
this case price, and the dominance relations they imply on the
available options. While this can be mitigated somewhat by combining
diversity with similarity measures, for example by using a linear
combination of
both~\cite{ref:smyth-diversity,ref:McSherry-compromise}, this does
not solve the problem as the effects of diversity should be limited
to attributes without known preferences while similarity should only
be applied to attributes with known preferences.
We now consider strategies for generating suggestions based on the current preference model. We
call such strategies {\em model-based} suggestion strategies.
We assume that the user is minimizing his or
her own effort and will add preferences to the model only when he or
she expects them to have an impact on the solutions. This is the
case when:
\begin{itemize}
\item the user can see several options that differ in a possible preference, and
\item these options are relevant, i.e. they could be acceptable choices, and
\item they are not already optimal for the already stated preferences.
\end{itemize}
In all other cases, stating an additional preference is irrelevant:
when all options would evaluate the same way, or when the preference
only has an effect on options that would not be eligible anyway or
that are already the best choices, stating it would be wasted
effort. On the contrary, upon display of a suggested outcome
whose optimality becomes clear only if a particular preference is
stated, the user can recognize the importance of stating that
preference. This seems to be confirmed by our user studies.
This has led us to the following principle, which we call the {\em
look-ahead} principle, as a basis for model-based suggestion
strategies:
\begin{quote}
Suggestions should not be optimal under the current preference
model, but should provide a high likelihood of optimality when an
additional preference is added.
\end{quote}
We stress that this is a heuristic principle based on assumptions
about human behavior that we cannot formally prove. However, it is
justified by the fact that suggestion strategies based on the
look-ahead principle work very well in real user studies, as we
report later in this article.
In the example, $o_4$ and $o_3$ have the highest probability of
satisfying the look-ahead principle: both are currently dominated by
$o_1$. $o_4$ becomes Pareto-optimal when the user wants a studio,
an unfurnished option, or a distance of less than 14 minutes.
$o_3$ becomes Pareto-optimal when the user wants an apartment, an
unfurnished option, or a distance of less than 17 minutes.
Thus, they give a good illustration of what is possible within
the set of examples.
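To make this concrete, the following sketch (ours, restricted for brevity to the two numeric attributes rent and distance, with lower values preferred once a preference is stated) recomputes the Pareto-optimal set before and after a distance preference is added:
\begin{verbatim}
# Our sketch of the look-ahead reasoning on the housing example,
# restricted to two numeric attributes (rent, distance); lower is
# preferred on each stated attribute.
options = {'o1': (400, 17), 'o2': (500, 32), 'o3': (600, 14),
           'o4': (600, 5),  'o5': (650, 32), 'o6': (700, 2),
           'o7': (800, 7)}

def dominates(a, b, prefs):
    # a Pareto-dominates b over the stated preference indices
    return (all(a[i] <= b[i] for i in prefs) and
            any(a[i] <  b[i] for i in prefs))

def pareto_optimal(prefs):
    return {name for name, o in options.items()
            if not any(dominates(p, o, prefs)
                       for other, p in options.items() if other != name)}

print(pareto_optimal([0]))     # rent only: {'o1'}
print(pareto_optimal([0, 1]))  # rent and distance: {'o1', 'o4', 'o6'}
\end{verbatim}
With the rent preference alone, only $o_1$ is Pareto-optimal; adding the distance preference makes $o_4$ (and $o_6$) Pareto-optimal as well. Option $o_3$ becomes Pareto-optimal only for the attributes this sketch omits, such as a preference for an apartment or for unfurnished options.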
We now develop our method for computing suggestions and show how
it can generate these suggestions.
\label{sec:definitions}\subsection{Assumptions about the Preference
Model}
To further show how to implement model-based suggestion strategies,
we have to define preference models and some minimal assumptions
about the shape that user preferences might take. We stress that
these assumptions are only made for generating suggestions. The
preference model used in the search tool could be more diverse or
more specific as required by the application.
We consider a collection of options ${\cal O} = \{o_1,..,o_n\}$ and
a fixed set of $k$ {\em attributes} $A=\{A_1,..,A_k \}$, associated
with domains $D_1,..,D_k$. Each option $o$ is characterized by the
values $a_1(o),...,a_k(o)$; where $a_i(o)$ represents the value that
$o$ takes for attribute $A_i$.
A {\bf qualitative domain} (e.g., the color or the name of the
neighborhood) consists of an enumerated set of possibilities; a {\bf numeric
domain} has numerical values (e.g., price or distance to the center), either
discrete or continuous. For numeric domains, we consider a function
$range(Att)$ that gives the range on which the attribute domain is
defined. For simplicity we call qualitative (respectively numeric)
attributes those with qualitative (numeric) domains.
The user's {\em preferences} are assumed to be
independent and
defined on individual attributes:
\begin{definition}
A {\em preference} $r$ is an order relation $\preceq_{r}$ of the
values of an attribute $a$; $\sim_{r}$ expresses that two values are
equally preferred. A {\em preference model} R is a set of
preferences $\{r_1,..,r_m\}$.
\end{definition}
Note that $\preceq_{r}$ might be a partial or total order.
If there can be preferences over a combination of attributes, such
as the total travel time in a journey, we assume that the model
includes additional attributes that model these combinations so that
we can make the assumption of independent preferences on each
attribute.
The drawback is that the designer has to know the preferential
dependence in advance. However, this is required for designing the
user interface anyway.
As a preference $r_i$ always applies to the same attribute $a_i$, we
simplify the notation and apply $\preceq_{r_i}$ and $\sim_{r_i}$ to the
options directly: $o_1 \prec_{r_i} o_2$ iff $a_i(o_1) \prec_{r_i}
a_i(o_2)$. We use $\prec_{r_i}$ to indicate that $\preceq_{r_i}$ holds
but not $\sim_{r_i}$.
Depending on the formalism used for modeling preferences, there are
different ways of combining the order relations given by the
individual preferences $r_i$ in the user's preference model $R$ into
a combined order of the options. For example, each preference may be
expressed by a number, and the combination may be formed by summing
the numbers corresponding to each preference or by taking their
minimum or maximum.
Any rational decision maker will prefer an option to another if the
first is at least as good in all criteria and better for at least
one. This concept is expressed by the {\em Pareto-dominance} (also
just called dominance), that is a partial order relation of the
options.
\begin{definition}
An option $o$ is {\em Pareto-dominated} by an option $o'$ with respect to $R$ if and only if
for all $r_i \in R$, $o \preceq_{r_i} o'$ and for at least one $r_j \in R$,
$o \prec_{r_j} o'$. We write $o \prec_{R} o'$ (equivalently we can say that
$o'$ Pareto-dominates $o$ and write $o' \succ_{R} o$).
We also say that $o$ is {\em dominated} (without specifying $o'$).
\end{definition}
Note that we use the same symbol $\prec$ for both individual
preferences and sets of preferences. We will do the same with
$\sim$, meaning that $o \sim_{R} o'$ if $\forall r \in R, o \sim_{r}
o'$.
In the following, the only assumption we make about this combination
is that it is {\em dominance-preserving} according to this
definition of Pareto-dominance. Pareto dominance is the most general
order relation that can be defined based on the individual
preferences. Other forms of domination can be defined as extensions
of Pareto dominance. In the following, whenever we use ``dominance''
without further specification, we refer to Pareto-dominance.
\begin{definition}
A preference combination function is {\em dominance-preserving} if
and only if whenever an option $o'$ dominates another option $o$ in all
individual orders, then $o'$ dominates $o$ in the combined order.
\end{definition}
Most of the combination functions used in practice are
dominance-preserving. An example of a combination that is not
dominance-preserving is the case where the preferences are
represented as soft constraints and combined using {\tt Min()}, as
in fuzzy CSP~\cite{Ruttkay94}. In this case, two options with the
constraint valuations
\begin{itemize}
\item[$o_1$] (0.3, 0.5, 0.7)
\item[$o_2$] (0.3, 0.4, 0.4)
\end{itemize}
will be considered equally preferred by the combination
function as $Min(0.3, 0.5, 0.7)= 0.3 = Min(0.3, 0.4, 0.4)$, even
though $o_2$ is dominated by $o_1$.
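To make the failure of dominance-preservation concrete, the following
is a minimal sketch in Python; the encoding of valuations as tuples is
ours, purely for illustration:
\begin{verbatim}
# o1 Pareto-dominates o2, yet combining the fuzzy-CSP valuations
# with min() ranks the two options as equally preferred.
o1 = (0.3, 0.5, 0.7)   # satisfaction degrees under three preferences
o2 = (0.3, 0.4, 0.4)

def pareto_dominates(a, b):
    # a is at least as good everywhere and strictly better once.
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

print(pareto_dominates(o1, o2))   # True
print(min(o1), min(o2))           # 0.3 0.3 -> tied in the combined order
\end{verbatim}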
\subsection{Qualitative Notions of Optimality}
The model-based suggestion strategies we are going to introduce are
based on the principle of selecting options that have the highest
chance of becoming optimal. This is determined by considering
possible new preferences and characterizing the likelihood that they
make the option optimal. Since we do not know the weight that a new
preference will take in the user's perception, we must evaluate this
using a qualitative notion of optimality. We present two qualitative
notions, one based only on Pareto-optimality and another based on
the combination function used for generating the candidate
solutions.
We can obtain suggestion strategies that are valid with any
preference modeling formalism, using qualitative optimality criteria
based on the concept of {\em Pareto-dominance} introduced before.
\begin{definition}
An option $o$ is {\em Pareto-optimal} (PO) if and only if it is not
dominated by any other option.
\label{def:pareto-opt}
\end{definition}
Since dominance is a partial order, Pareto optimal options can be
seen as the maximal elements of $O$. Pareto-optimality is useful
because it applies to any preference model as long as the
combination function is dominance-preserving.
For any dominance-preserving combination function, an option $o^*$
that is most preferred in the combined preference order is
Pareto-optimal, since any option $o'$ that dominates it would be
more preferred. Therefore, only Pareto-optimal solutions can be
optimal in the combined preference order, no matter what the
combination function is. This makes Pareto-optimality a useful
heuristic for generating suggestions independently of the true
preference combination in the user's mind.
In example-critiquing, users initially state only a subset $R$ of
their eventual preference model $\overline{R}$. When a preference is
added, options that are dominated with respect to $R$ can become
Pareto-optimal. On the other hand, no option can lose its
Pareto-optimality when preferences are added, except that an option
that was equally preferred with respect to all the stated
preferences can become dominated.
Note that one can also consider this as using {\em weak
Pareto-optimality} as defined by Chomicki \citeyear{ref:chomicki},
as we consider that all options are equal with respect to attributes
where no preference has been stated.
We now introduce the notions of \emph{dominating set} and
\emph{equal set}:
\begin{definition}
The dominating set of an option $o$ with respect to a set of
preferences $R$ is the set of all options that dominate $o$:
$O^{>}_{R}(o) = \{ o' \in O : o' \succ_{R} o \}$. We write
$O^{>}(o)$, without specifying the set of preferences $R$, when it
is clear from the context.
The equal set of an option $o$ with respect to $R$ is the set of
options that are equally preferred to $o$: $O^{=}_{R}(o) = \{ o' \in
O : o' \sim_{R} o \}$. We also use $O^{\geq}$ for $O^{>} \cup
O^{=}$.
\end{definition}
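As an illustration, both sets can be computed directly from their
definitions. The following Python sketch assumes each preference is
encoded as a cost function (lower cost meaning more preferred); this
encoding and all names are ours, not part of the formal model:
\begin{verbatim}
def dominates(a, b, preferences):
    # Pareto-dominance of option a over option b.
    costs = [(p(a), p(b)) for p in preferences]
    return (all(ca <= cb for ca, cb in costs) and
            any(ca < cb for ca, cb in costs))

def dominating_set(o, options, preferences):
    # O^>(o): all options that Pareto-dominate o.
    return [x for x in options if dominates(x, o, preferences)]

def equal_set(o, options, preferences):
    # O^=(o): options equally preferred to o under every preference.
    return [x for x in options
            if x is not o and all(p(x) == p(o) for p in preferences)]

# Example: preferences on (rent, distance), both "smaller is better".
options = [(400, 17), (500, 32), (600, 14)]
prefs = [lambda o: o[0], lambda o: o[1]]
print(dominating_set((500, 32), options, prefs))   # [(400, 17)]
\end{verbatim}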
The following observation is the basis for evaluating the likelihood
that a dominated option will become Pareto-optimal when a new
preference $r_i$ is stated.
\begin{proposition}\label{prop:pareto-dom}
A dominated option $o$ with respect to $R$ becomes Pareto-optimal
with respect to $R \cup r_i$ if and only if $o$ is
\begin{itemize}
\item strictly better
with respect to $r_i$ than all options that dominate it with respect
to $R$ and
\item not worse with respect to $r_i$ than all options that are equally preferred
with respect to $R$.
\end{itemize}
\end{proposition}
\begin{proof}
Suppose there was an option $o'$ that dominates $o$ with respect to
$R$ and that $o$ is not strictly better than $o'$ in the new
preference $r_i$; then $o'$ would still dominate $o$, so $o$ could
not be Pareto-optimal. Similarly, suppose that $o$ is equally
preferred to $o''$ and $o''$ is strictly better than $o$ with
respect to $r_i$; then $o''$ would dominate $o$, so $o$ could not be
Pareto-optimal.
\end{proof}
Thus, the options in the dominating set $O^{>}$ and the equal set
$O^{=}$ of a given option are the potential dominators to consider
when a new preference is added.
\paragraph{Utility-dominance}
We can consider other forms of dominance as long as they imply
Pareto-dominance. In particular, we might use the total order
established by the combination function defined in the preference
modeling formalism, such as a weighted sum. We call this {\em
utility-domination}, and the utility-optimal option is the most
preferred one.
We may ask when an option can become utility-optimal. A weaker form
of Proposition~\ref{prop:pareto-dom} holds for utility domination:
\begin{proposition}\label{prop:utility-dom}
For dominance-preserving combination functions, a utility-dominated
option $o'$ with respect to $R$ {\em may} become utility-optimal
with respect to $R \cup r_i$ only if $o'$ is strictly better with
respect to $r_i$ than all options that currently utility-dominate it
and not worse than all options that are currently equally preferred.
\end{proposition}
\begin{proof}
Suppose there was an option that became utility-optimal without
being more preferred according to the new preference; then there
would be a violation of the assumption that the combination function
was dominance-preserving.
\end{proof}
Even though it is not a sufficient condition,
Proposition~\ref{prop:utility-dom} can be used as a heuristic to
characterize an option's chance to become utility-optimal.
\subsection{Model-based Suggestion Strategies}
We propose model-based suggestion strategies that can be
implemented with both Pareto- and utility-dominance.
They are based on the look-ahead principle discussed earlier:
\begin{quote}
suggestions should not be optimal under the current preference model,
but have a high likelihood of becoming optimal when an
additional preference is added.
\end{quote}
We assume that the system knows a subset $R$ of the user's
preference model $\overline{R}$. An ideal suggestion is an option
that is optimal with respect to the full preference model
$\overline{R}$ but is dominated in $R$, the current (partial)
preference model. To be optimal in the full model, from
Propositions~\ref{prop:pareto-dom} and~\ref{prop:utility-dom} we
know that such suggestions have to break the dominance relations
with their dominating set. Model-based strategies order possible
suggestions by the likelihood of breaking those dominance relations.
\subsubsection{Counting Strategy}
The first suggestion strategy, the {\em counting strategy}, is based
on the assumption that dominating options are independently
distributed. From Proposition~\ref{prop:pareto-dom} we can compute
the probability that a dominated option $o$ becomes Pareto-optimal
through a currently hidden preference as:
\begin{displaymath}
p_{opt}(o) = \prod_{o^{'} \in O^{>}(o)} p_d(o,o^{'}) \prod_{o' \in
O^{=}(o)} p_{nw}(o,o')
\end{displaymath}
where $p_d$ is the probability that a new preference makes $o$
escape the domination relation with a dominating option $o'$, i.e.\
that $o$ is preferred over $o'$ according to the new preference;
similarly, $p_{nw}$ is the probability that $o$ is not worse than an
equally preferred option $o'$.
Evaluating this probability requires the exact
probability distribution of the possible preferences, which is in
general difficult to obtain.
The strategy assumes that $p_d = p_{nw}$ is constant for all
dominance relations, so that
\begin{eqnarray*}
p_{opt}(o) & = & \prod_{o^{'} \in O^{\geq}(o)} p_d\\
& = & p_d^{|O^{\geq}(o)|}
\end{eqnarray*}
Since $p_d \leq 1$, this probability is largest for the smallest set
$O^{\geq}(o)$. Consequently, the best suggestions are those with the
lowest value of the following counting metric:
\begin{equation}
F_C(o) = |O^{\geq}(o)|
\end{equation}
The counting strategy selects the option with the lowest value of
this metric as the best suggestion.
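A minimal sketch of this strategy in Python, reusing the
dominating_set and equal_set helpers sketched earlier (the function
names are ours):
\begin{verbatim}
def counting_metric(o, options, preferences):
    # F_C(o) = |O^>=(o)| = |O^>(o)| + |O^=(o)|
    return (len(dominating_set(o, options, preferences)) +
            len(equal_set(o, options, preferences)))

def counting_suggestion(options, preferences):
    # Suggestions are drawn from the currently dominated options only.
    dominated = [o for o in options
                 if dominating_set(o, options, preferences)]
    return min(dominated,
               key=lambda o: counting_metric(o, options, preferences))
\end{verbatim}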
\subsubsection{Probabilistic Strategy}
The \emph{probabilistic strategy} uses a more precise estimate of
the chance that a particular solution will become Pareto-optimal.
\paragraph{General assumptions}
We assume that each preference $r_i$ is expressed by a cost function
$c_i$. In order to have a well-defined interface, these cost functions
will usually be restricted to a family of functions parameterized by
one or more parameters. Here we assume a single parameter $\theta$,
but the method can be generalized to handle cases of multiple parameters:
\begin{displaymath}
c_i = c_i(\theta, a_i(o)) = c_i(\theta,o)
\end{displaymath}
We assume that the possible preferences are characterized by the
following probability distributions:
\begin{itemize}
\item $P_{a_i}$, the probability that the user has a preference
over an attribute $a_i$;
\item $p(\theta)$, the probability distribution of the parameter
associated with the cost function of the considered attribute.
\end{itemize}
In the user experiments in the last section, we use a uniform
distribution for both. The probability that a preference on
attribute $i$ makes $o_1$ preferred to $o_2$ can be computed by
integrating over the values of $\theta$ for which the cost of $o_1$
is less than that of $o_2$. This can be expressed using the Heaviside step function $H(x)\equiv\mbox{\bf if } (x>0) \mbox{ \bf then } 1 \mbox{ \bf else } 0$:
\begin{displaymath}
\delta_i(o_1,o_2)=\int_{\theta} H(c_i(\theta,o_2)-c_i(\theta,o_1))
p(\theta) d\theta
\end{displaymath}
For a qualitative domain, we iterate over $\theta$ and sum up the
probability contribution of the cases in which the value of $\theta$
makes $o_1$ preferred over $o_2$:
\begin{displaymath}
\delta_i(o_1,o_2)=\sum_{\theta \in D_i}
H(c_i(\theta,o_2)-c_i(\theta,o_1)) p(\theta)
\end{displaymath}
To determine the probability of simultaneously breaking the
dominance relation with all dominating or equal options in $O^{\geq}$, a first
possibility is to assume independence between the options, and thus
calculate $\delta_{i}(o,O^{\geq}) = \prod_{o' \in O^{\geq}}
\delta_i(o,o')$, where $\delta_i$ is the chance of breaking one
single domination when the preference is on attribute $i$.
A better estimate can be defined that does not require the
independence assumption, and directly considers the distribution of
all the dominating options. For breaking the dominance relation with
all the options in the dominating set through $a_i$, all dominating
options must have a less preferred value for $a_i$ than that of the
considered option.
For numeric domains, we have to integrate over all possible values of
$\theta$, check whether the given option $o$ has lower cost than all
its dominators in $O^{>}$ and weigh the probability of that
particular value of $\theta$.
\begin{displaymath}
\delta_i(o,O^{>})=
\int [\prod_{o' \in O^{>}}H(c_i(\theta,o')-c_i(\theta,o)) ]
p(\theta)d\theta
\end{displaymath}
For qualitative domains, we replace the integral with a
summation over $\theta$.
We also need to consider the second condition of
Proposition~\ref{prop:pareto-dom}, namely that no new dominance relations
with options in the equal set should be created. This can be done by adding
a second term into the integral:
\begin{equation}
\delta_i(o,O^{\geq})=
\int[\prod_{o' \in O^{>}}H(c_i(\theta,o')-c_i(\theta,o)) \prod_{o'' \in O^{=}}H^{*}(c_i(\theta,o'')-c_i(\theta,o))]
p(\theta)d\theta \label{eq:general-breakall-dom}
\end{equation}
where $H^{*}$ is a modified Heaviside function that assigns value 1 whenever
the difference of the two costs is 0 or greater ($H^{*}(x)\equiv\mbox{\bf if }
(x\geq0) \mbox{ \bf then } 1 \mbox{ \bf else } 0$).
We consider the overall probability of becoming Pareto-optimal when
a preference is added as the combination of the event that the new
preference is on a particular attribute and the chance that a
preference on this attribute will make the option preferred over
all the dominating options:
\begin{equation}
F_P(o) = 1 - \prod_{a_i \in A_u} (1 - P_{a_{i}}
\delta_i(o,O^{\geq}))
\end{equation}
where $A_u$ denotes the set of attributes on which the user has not
yet stated a preference. If we assume that the user has only one
hidden preference, we can use the following simplification:
\begin{equation}
F_P(o) = \sum_{a_i \in A_u} P_{a_{i}} \delta_{i}(o,O^{\geq})
\end{equation}
which is also a good approximation when the probabilities for
additional preferences are small.
In both cases, we select the options with the highest values as
suggestions.
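When no closed form is available for a given family of cost
functions, the integral in Equation~(\ref{eq:general-breakall-dom})
and the combination $F_P$ can be approximated numerically. The
following Python sketch discretizes $\theta$ on a grid with a uniform
$p(\theta)$; all names are illustrative:
\begin{verbatim}
def delta_i(o, dominators, equals, cost, thetas):
    # Chance that a preference on this attribute makes o strictly
    # better than every dominator and no worse than every equal option.
    hits = sum(1 for theta in thetas
               if all(cost(theta, o) < cost(theta, d) for d in dominators)
               and all(cost(theta, o) <= cost(theta, e) for e in equals))
    return hits / len(thetas)

def F_P(deltas, p_attr):
    # F_P(o) = 1 - prod_i (1 - P_{a_i} * delta_i(o, O^>=))
    prod = 1.0
    for p_a, d in zip(p_attr, deltas):
        prod *= 1.0 - p_a * d
    return 1.0 - prod
\end{verbatim}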
The computation depends on the particular choice of preference
representation and in many cases it can be greatly simplified by
exploiting properties of the cost functions. In general, the
designer of the application has to consider what preferences the
user can express through the user interface and how to translate
them into quantitative cost functions. A similar approach is taken
by Kiessling \citeyear{KiesslingK02} in the design of {\tt
PREFERENCE SQL}, a database system for processing queries with
preferences.
We now consider several examples of common preference functions and show
how the suggestions can be computed in these cases.
\paragraph{Preference for a single value in a qualitative domain}
Let $\theta$ be the value preferred by the user; the function
$c_i(\theta,x)$ assigns a penalty to every value of attribute $a_i$
except $\theta$. This allows expressing statements like ``I
prefer German cars'', meaning that cars manufactured in Germany are
preferred to cars manufactured in any other country.
\begin{displaymath}
c_i(\theta, x) \equiv \mbox{ \bf if } a_i(x)=\theta \mbox{ \bf then
} 0 \mbox{ \bf else } 1.
\end{displaymath}
The probability of breaking a dominance relation between option
$o_1$ and $o_2$ simplifies to the probability that the value of
option $o_1$ for attribute $i$ is the preferred value, when it
differs from the value of $o_2$.
\begin{displaymath}
\delta_i(o_1,o_2)= \left \{
\begin{array}{ll}
p[\theta=a_i(o_1)] & \mbox{ {\bf if} } a_i(o_1) \neq a_i(o_2)\\
0 & \mbox{{\bf otherwise}} \\
\end{array}
\right .
\label{eq-calcdelta-qual-semplified}
\end{displaymath}
Assuming a uniform distribution $p(\theta)=\frac{1}{|D_i|}$ for any
$\theta$ (meaning that any value of the domain is equally likely to
be the preferred value), the probability becomes $1/|D_i|$ when
$a_i(o_1) \neq a_i(o_2)$, and 0 otherwise.
The probability of breaking all dominance relations with a set of dominators
without creating new dominance relations is the same as that for a single
dominator, as long as all these options have a different value for $a_i$:
\begin{eqnarray}
\delta_{i}(o,O^{\geq}) = \left \{
\begin{array}{ll}
1/|D_i| & \mbox{{\bf if}}\, (\forall o' \in O^{>})\; a_i(o) \neq a_i(o') \\
0 & \mbox{{\bf otherwise}} \\
\end{array}
\right .
\end{eqnarray}
Note that, given the structure of the preference,
$\delta_{i}(o,O^{\geq})=\delta_{i}(o,O^{>})$, because an option $o$
can only break the dominance relations if $a_i(o)$ takes the preferred
value and in that case, no other option can be strictly better
with respect to that preference.
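The resulting rule is simple enough to state directly in code; a
sketch under a uniform prior over the preferred value (names ours):
\begin{verbatim}
def delta_single_value(o_val, dominator_vals, domain_size):
    # 1/|D_i| if o's value differs from every dominator's value, else 0.
    if all(o_val != v for v in dominator_vals):
        return 1.0 / domain_size
    return 0.0
\end{verbatim}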
\paragraph{Directional preferences}
\begin{figure}
\centerline{\psfig{file=pics/naturalpreferences.eps,width=8cm}}
\caption{\small In a directional preference, the cost function is
a monotone function of the attribute value. In the case shown here,
smaller values are preferred.} \label{fig:naturalpreferences}
\end{figure}
A particular case of preferences in numeric domains is when the
preference order can be assumed to have a known direction, such as for price (cheaper is always preferred, everything else being equal).
In this case, $\delta(o_1,o_2)$ can be computed by
simply comparing the values that the options take on that attribute
(Figure~\ref{fig:naturalpreferences}).
\begin{eqnarray}
\delta_i(o_1,o_2) = \left \{
\begin{array}{ll}
\mbox{{\bf if} } a_i(o_1) < a_i(o_2) \mbox{ {\bf then} }1\mbox{ {\bf else} }0 & \mbox{ }a_i \mbox{ numeric, natural preference }< \\
\mbox{{\bf if} } a_i(o_1) > a_i(o_2) \mbox{ {\bf then} }1\mbox{ {\bf else} }0 &\mbox{ }a_i \mbox{ numeric, natural preference }>\\
\end{array}
\right .
\end{eqnarray}
For a set of options $O^{\geq}$ whose values on
$a_i$ lie between $l_i$ and $h_i$ we have
\begin{eqnarray}
\delta_i(o,O^{\geq}) = \left \{
\begin{array}{ll}
1 \,\mbox{ {\bf if} } a_i(o) < l_i \\
0 \,\mbox{ {\bf otherwise}} \\
\end{array}
\right .
\end{eqnarray}
when smaller values are always preferred, and
\begin{eqnarray}
\delta_i(o,O^{\geq}) = \left \{
\begin{array}{ll}
1 \,\mbox{ {\bf if} } a_i(o) > h_i \\
0 \,\mbox{ {\bf otherwise}} \\
\end{array}
\right .
\end{eqnarray}
when larger values are always preferred. Note that in both cases,
the expressions are independent of the shape of the cost function as
long as it is monotonic.
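A sketch of this case in Python (names ours); only the position of
the option relative to the dominators' value range matters:
\begin{verbatim}
def delta_directional(o_val, dominator_vals, smaller_is_better=True):
    # 1 if o lies strictly outside the dominators' range on the
    # preferred side, 0 otherwise.
    if smaller_is_better:
        return 1.0 if o_val < min(dominator_vals) else 0.0
    return 1.0 if o_val > max(dominator_vals) else 0.0
\end{verbatim}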
\paragraph{Threshold preferences in numeric domains}
\begin{figure}
\centerline{\psfig{file=pics/steppreferences.eps,width=8cm}}
\caption{\small When the preference {\tt LessThan($\theta$)}
is represented by a step function, an option is preferred over a
set of options with minimum value $l_i$ if the reference value $\theta$
falls in between the values of the given option and $l_i$.}
\label{fig:steppreferences}
\end{figure}
\begin{figure}
\centerline{\psfig{file=pics/monotonicpreferences.eps,width=8cm}}
\caption{\small When the preference {\tt LessThan($\theta$)}
is represented by a graded step function, an option is preferred over a
set of options with minimum value $l_i$ if the reference value $\theta$
falls in the interval between $a_i(o)-t$ and
$l_i$, where $t = 1/\alpha$.} \label{fig:monotonic-preferences}
\end{figure}
Another commonly used preference expression in numeric domains is
to define a smallest or largest acceptable threshold, i.e. to express
a preference {\tt LessThan($\theta$)} (the value
should be lower than $\theta$) or {\tt GreaterThan($\theta$)} (the
value should be greater than $\theta$). Such a preference is most
straightforwardly expressed by a cost function that follows a step curve~(Figure~\ref{fig:steppreferences}).
To express the fact that there is usually some tolerance for small violations,
more generally a graded step function, where the cost gradually increases,
might be used~(Figure~\ref{fig:monotonic-preferences}).
A possible cost function for {\tt LessThan} might be the following:
\begin{equation}
c_{less-than}(\theta, x) = \left \{
\begin{array}{ll}
\mbox{Min}(1, \alpha*(x-\theta)) & \mbox{ {\bf if} } x > \theta\\
0 & \mbox{{\bf otherwise}} \\
\end{array}
\right .
\label{eq-sample-continous-penalty}
\end{equation}
assigning a penalty when the option takes a value greater than the
reference value $\theta$; the cost is the difference between the
value and the reference, up to a maximum of 1. $\alpha$ is a
parameter that expresses the degree to which violations can be
tolerated; for the following computations it is convenient to use
the length of the ramp from 0 to 1, $t = 1/\alpha$.
In this case the computation of $\delta(o_1,o_2)$ will be, if
$a_i(o_1)<a_i(o_2)$:
\begin{displaymath}
\delta_i(o_1,o_2)=\int_{a_i(o_1)- t}^{a_i(o_2)}
p(\theta)d\theta = p[(a_i(o_1)-t) < \theta < a_i(o_2)];
\end{displaymath}
and 0 otherwise (since lower values are preferred in Equation
~\ref{eq-sample-continous-penalty}).
When the transition phase from 0 to 1 is small (the cost function
approximates a step function as in
Figure~\ref{fig:steppreferences}), $\delta_i(o_1,o_2)\simeq
p[a_i(o_1) -t < \theta < a_i(o_2)]$, approximating the probability of
the reference point falling between the two options. Assuming
uniform distribution, the probability evaluates to $(a_i(o_2) -
a_i(o_1) + t)/range(a_i)$, where $range(a_i)$ is the difference
between the largest and smallest values of $a_i$.
The reasoning is illustrated by Figure~\ref{fig:monotonic-preferences}.
The probability computed is conditioned on the knowledge of the polarity
of the user's preference ({\tt LessThan} in this case), and
needs to be weighted by the probability of that polarity. Below,
we assume that both polarities are equally likely, and use a weight of 1/2.
All the dominance relations can be broken simultaneously only if the
considered option has a value for that attribute that is smaller or
bigger than that of all the options in the dominating set. To estimate the
probability that the reference value for the new preference falls in
such a way that all the dominance relations are broken, it is
sufficient to consider the extrema of the values that the dominating
options take on the considered attribute:
\begin{itemize}
\item $h_i=max_{o' \in O^{>}}a_i(o')$
\item $l_i=min_{o' \in O^{>}}a_i(o')$
\end{itemize}
If the value of the current option lies outside the interval
$[l_i, h_i]$, we can consider the probability of breaking all the
relations as in the single-dominance case. It will be proportional
to the difference between the current option value and the
minimum/maximum, scaled by the range of values for $a_i$:
\begin{eqnarray}
\delta_{i}(o,O^{\geq}) = \left \{
\begin{array}{ll}
\frac{a_i(o) - h_i + t}{2 \, range(a_i)} &
\mbox{if $a_i(o) > h_i$ }\\
&\\
\frac{l_i - a_i(o) + t}{2 \, range(a_i)} &
\mbox{if $a_i(o) < l_i$ } \\
&\\
0 & \mbox{otherwise} \\
\end{array}
\right .
\end{eqnarray}
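A sketch of this case analysis in Python, with ramp length $t$, a
uniform $\theta$, and the two polarities weighted $1/2$ as above
(names ours):
\begin{verbatim}
def delta_threshold(o_val, dominator_vals, t, attr_range):
    h, l = max(dominator_vals), min(dominator_vals)
    if o_val > h:    # only a GreaterThan preference can favour o
        return (o_val - h + t) / (2 * attr_range)
    if o_val < l:    # only a LessThan preference can favour o
        return (l - o_val + t) / (2 * attr_range)
    return 0.0
\end{verbatim}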
\paragraph{Peaked preferences for numeric domains}
\begin{figure}
\centerline{\psfig{file=pics/peakedpreferences.eps,width=8cm}}
\caption{\small An example of peaked preferences. $g_i$ is the
greatest value below $a_i(o)$ of $a_i$ for any option in $O^{\geq}(o)$,
$s_i$ is the smallest value above $a_i(o)$.
$m_1=(a_i(o)+g_i)/2$, $m_2=(a_i(o)+s_i)/2$ are the two midpoints
between $a_i(o)$ and $g_i$, $s_i$. To make $o$ be preferred over all
options in $O^{\geq}(o)$, $\theta$ has to fall between $max(m_1,a_i(o)-t)$
and $min(m_2,a_i(o)+t)$. As it can be seen graphically, in this case
the interval is $]m_1, a_i(o)+t[$.} \label{fig:peakedpreferences}
\end{figure}
Another common case is to have preferences for a particular
numerical value $\theta$, for example {\em ``I prefer to arrive around 12am''}.
To allow some tolerance for deviation, a cost function might have a
slope in both directions:
\begin{displaymath}
c_{peak}(\theta,o) = \alpha*|a_i(o)-\theta|.
\end{displaymath}
In this case, an option is preferred to another one if it is closer
to $\theta$. For example, letting $m$ be the midpoint between
$a_i(o_1)$ and $a_i(o_2)$ and supposing $a_i(o_1) < a_i(o_2)$, we
have
\begin{displaymath}
\delta(o_1,o_2) = p[\theta < m]
\end{displaymath}
For calculating the probability of simultaneously breaking all the
dominance relations without generating new ones, we define $g_i$ as the greatest value of $a_i$ among the dominating or equal options that is less than $a_i(o)$, and $s_i$ as the smallest such value greater than $a_i(o)$. As option $o$ is more preferred whenever $a_i(o)$ is closer to $\theta$,
and the interval for $\theta$ where this is the case is one half of the
interval between $g_i$ and $s_i$, we have:
\[
\delta(o,O^{\geq})=\frac{s_i-g_i}{2 \, range(a_i)}
\]
A more realistic cost function would include a ``saturation point'' from which the cost always evaluates to 1, as shown in Figure~\ref{fig:peakedpreferences}:
\begin{equation}
c_{peak-with-saturation}(\theta,o) = \mbox{Min}(1,
\alpha*|a_i(o)-\theta|).
\end{equation}
Let $t=1/\alpha$ be the tolerance of the preference to either side,
$g_i$ be the greatest value below $a_i(o)$ of $a_i$ for any option in $O^{\geq}(o)$, and $s_i$ be the smallest value above $a_i(o)$.
We define two midpoints $m_1=(a_i(o)+g_i)/2$ and $m_2=(a_i(o)+s_i)/2$,
and we then have:
\begin{displaymath}
\delta(o,O^{\geq})=p[\mbox{max}(m_1, a_i(o)-t)<\theta<\mbox{min}(m_2,
a_i(o)+t)]
\end{displaymath}
If the reference point is uniformly distributed, this evaluates to:
\begin{equation}
\delta(o,O^{\geq})= \frac{\min(m_2,a_i(o)+t) - \max(m_1, a_i(o)-t)}{range(a_i)}
\end{equation}
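A sketch of this computation in Python (names ours), directly
mirroring the interval from $\max(m_1,a_i(o)-t)$ to
$\min(m_2,a_i(o)+t)$:
\begin{verbatim}
def delta_peaked(a, g, s, t, attr_range):
    # a = a_i(o); g, s = nearest dominating/equal values below and
    # above a; t = 1/alpha is the tolerance of the preference.
    m1, m2 = (a + g) / 2.0, (a + s) / 2.0
    lo, hi = max(m1, a - t), min(m2, a + t)
    return max(0.0, hi - lo) / attr_range
\end{verbatim}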
\subsection{Example}
The following table shows the relevant values for the example
shown earlier. Recall that we had earlier identified $o_4$ and
$o_3$ as the most attractive suggestions.
\begin{center}
\begin{scriptsize}
\begin{tabular}{rlllllllll}
& $O^+$ & rent & type & $\delta_2$ & distance &
$\delta_3$ & furnished & $\delta_4$ & ${\bf p_{opt}}$ \\
& & ($a_1$) & ($a_2$) & & ($a_3$) & & ($a_4$) & & \\
\\ \hline
$o_1$ & - & 400 & room & - & 17 & - & yes & - & - \\
$o_2$ & $o_1$ & 500 & room & 0 & 32 & 0.25 & yes & 0 & 0.125 \\
$o_3$ & $o_1,o_2$ & 600 & apartment & 0.5 & 14 & 0.05 & no & 0.5 & 0.451 \\
$o_4$ & $o_1,o_2$ & 600 & studio & 0.5 & 5 & 0.20 & no & 0.5 & 0.494 \\
$o_5$ & $o_1-o_4$ & 650 & apartment & 0 & 32 & 0 & no & 0 & 0 \\
$o_6$ & $o_1-o_5$ & 700 & studio & 0 & 2 & 0.05 & yes & 0 & 0.025\\
$o_7$ & $o_1-o_6$ & 800 & apartment & 0 & 7 & 0 & no & 0 & 0 \\
\end{tabular}
\end{scriptsize}
\end{center}
In the counting strategy, options are ranked according to the size of the
set $O^+$. Thus, we have $o_2$ as the highest ranked suggestion, followed
by $o_3$ and $o_4$.
In the probabilistic strategy, attribute values of an option are
compared with the range of values present in its dominators. For
each attribute, this leads to the $\delta$ values as indicated in
the table. If we assume that the user is equally likely to have a
preference on each attribute, with a probability of $P_{a_i} = 0.5$,
the probabilistic strategy scores the options as shown in the last
column of the table. Clearly, $o_4$ is the best suggestion, followed
by $o_3$. $o_2$ and also $o_6$ follow further behind.
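As a check, the score of $o_4$ in the last column follows directly
from the formula for $F_P$, with $P_{a_i} = 0.5$ and the $\delta$
values in its row:
\begin{displaymath}
p_{opt}(o_4) = 1 - (1 - 0.5 \cdot 0.5)(1 - 0.5 \cdot 0.2)(1 - 0.5 \cdot 0.5)
= 1 - 0.75 \cdot 0.9 \cdot 0.75 \approx 0.494.
\end{displaymath}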
Thus, at least in this example, the model-based strategies are successful
at identifying good suggestions.
\subsection{Optimizing a Set of Several Suggestions}
The strategies discussed so far only concern generating single
suggestions. However, in practice it is often possible to show a set
of $l$ suggestions simultaneously. Suggestions are interdependent,
and it is likely that we can obtain better results by choosing
suggestions in a diverse way. This need for diversity has also been
observed by others~\cite{ShimazuExpertClerk,ref:smyth-diversity}.
More precisely, we should choose a group $G$ of suggested options by
maximizing the probability $p_{opt}(G)$ that at least one of the suggestions
in the set $G$ will become optimal through a new user preference:
\begin{equation}
p_{opt}(G) = 1 - \prod_{a_i \in A_u} (1 - P_{a_{i}} (1 - \prod_{o'
\in G} (1 - \delta_i(o',O^{\geq}(o')))))
\end{equation}
Explicitly optimizing this measure would lead to combinatorial
complexity. Thus, we use an algorithm that adds suggestions one by
one in the order of their contribution to this measure given the
already chosen suggestions. This is similar to the algorithm used by
Smyth and McClave~\citeyear{ref:smyth-diversity} and by Hebrard et
al.~\citeyear{ref:diversity-aaai05} to generate diverse solutions.
The algorithm first chooses the best single suggestion as the first
element of the set $G$. It then evaluates each option $o$ as to how
much it would change the combined measure $p_{opt}(G)$ if it were
added to the current $G$, and adds the option with the largest increment.
This process repeats until the desired size of set $G$ is reached.
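A sketch of this greedy procedure in Python, assuming the per-option,
per-attribute chances $\delta_i(o,O^{\geq}(o))$ have been precomputed
into a table delta[o][i] (all names ours):
\begin{verbatim}
def p_opt_group(group, delta, p_attr):
    # p_opt(G) = 1 - prod_i (1 - P_i (1 - prod_{o in G} (1 - delta[o][i])))
    prod_attrs = 1.0
    for i, p_a in enumerate(p_attr):
        none_breaks = 1.0
        for o in group:
            none_breaks *= 1.0 - delta[o][i]
        prod_attrs *= 1.0 - p_a * (1.0 - none_breaks)
    return 1.0 - prod_attrs

def greedy_suggestions(options, delta, p_attr, l):
    group = []
    for _ in range(l):
        best = max((o for o in options if o not in group),
                   key=lambda o: p_opt_group(group + [o], delta, p_attr))
        group.append(best)
    return group
\end{verbatim}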
\subsection{Complexity}
Let $n$ be the number of options, $k$ the number of attributes,
$m$ the number of preferences, $d$ the number of dominators, and $A_u$
the set of attributes on which the user did not state any preference.
All three model-based strategies are based on the dominating set of
an option. We use a straightforward algorithm that computes this as
the intersection of the set of options that are better with respect
to individual preferences. There are $m$ such sets, each with at
most $n$ elements, so the complexity of this algorithm is $O(n^2
m)$. In general, the dominating set of each option is of size
$O(n)$, so the output of this procedure is of size $O(n^2)$; it is
therefore unlikely that a much better algorithm can be found.
Once the dominating sets are known, the counting strategy has
complexity $O(nd)$, while the attribute and probabilistic strategies
have complexity $O(ndk_{u})$, where $k_{u}=| A_u |$ and $k_{u}<k$.
In general, $d$ depends on the data-set. In the worst case it can be
proportional to $n$, so the resulting complexity is $O(n^2)$.
When utility is used as the domination criterion, the dominating set
consists of the options that are higher in the ranking. Therefore
the process of computing the dominating set is greatly simplified
and can be performed while computing the candidates. However, the
algorithm still has overall worst-case complexity $O(n^2)$: the
last option in the ranking has $n-1$ dominators, so $d=O(n)$.
When several examples are selected according to their diversity, the
complexity increases since the metrics must be recomputed after
selecting each suggestion.
In comparison, consider the \emph{extreme} strategy, proposed
initially by Linden et al. in ATA~\citeyear{linden97interactive}. It
selects options that have either the smallest or the largest value
for an attribute on which the user did not initially state any
preference. This strategy needs to scan through all available
options once. Its complexity is $O(n)$, where $n$ is the number of
options (the \emph{size} of the catalog). Thus, it is significantly
more efficient, but does not appear to provide the same benefits as
a model-based strategy, as we shall see in the experiments.
Another strategy considered for comparison, that of generating a
maximally diverse set of
options~\cite{ref:diversity-aaai05,ref:smyth-diversity}, has a
complexity exponential in the number of available options. However,
greedy approximations~\cite{ref:diversity-aaai05} have a complexity
of only $O(n^2)$, similar to our model-based strategies.
The greedy algorithm we use for optimizing a set of several suggestions
does not add to the complexity; once the distances $\delta_i$ have
been computed for each attribute, the greedy algorithm for computing the set
of suggestions has a complexity proportional to the product of the number of
options, the number of attributes, and the square of the number of
suggestions to be computed.
We suspect that an exact optimization would be NP-hard in the number
of suggestions, but we do not have a proof of this.
\section{Experimental Results: Simulations}
\begin{figure}
\centering
\includegraphics[width=9.5cm]{pics/sim_pareto_unildbase.eps}
\caption{\small Simulation results on a database of actual apartment
offers. For 100 simulated users, each with a randomly chosen
preference model of 6 hidden preferences, we plot the number of
times that the simulation discovered at least the number of
preferences shown on the abscissa. The higher the curve, the more
preferences were discovered on average.}
\label{fig:simulation-results-unildb}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9.5cm]{pics/sim_pareto_randdbase.eps}
\caption{\small Simulation results for randomly generated catalogs.
For 100 simulated users, each with a randomly chosen preference
model of 8 hidden preferences, we plot the number of times that the
simulation discovered at least the number of preferences shown on
the abscissa. The higher the curve, the more preferences were
discovered on average.} \label{fig:simulation-results-random}
\end{figure}
The suggestion strategies we presented are heuristic, and it is not
clear which of them performs best under the assumptions underlying
their design. Since evaluations with real users can only be carried
out for a specific design, we first select the best suggestion
strategy by simulating the interaction of a computer generated user
with randomly generated preferences. This allows us to compare the
different techniques in much greater detail than would be possible
in an actual user study, and thus select the most promising
techniques for further development. This is followed by real user
studies that are discussed in the next section.
In the simulations, users have a randomly generated set of $m$
preferences on the different attributes of items stored in a
database. As a measure of accuracy, we are interested in whether the
interaction allows the system to obtain a complete model of the
user's preferences. This tests the design objective of the
suggestion strategies (to motivate the user to express as many
preferences as possible) given that the assumptions about user
behavior hold. We verify that these assumptions are reasonable in
the study with real users reported in the next section.
The simulation starts by assigning the user a set of randomly
generated preferences and selecting one of them as an initial
preference. At each stage of the interaction, the simulated user is
presented with 5 suggestions.
We implemented 6 different strategies for suggestions, including the
three model-based strategies described above as well as the
following three strategies for comparison:
\begin{itemize}
\item the {\em random} strategy suggests randomly chosen options;
\item the {\em extremes} strategy suggests options where attributes
take extreme values, as proposed by
Linden~\citeyear{linden97interactive};
\item the {\em diversity} strategy computes the 20 best solutions
according to the current model and then generates a maximally
diverse set of 5 of them, following the proposal of
McSherry~\citeyear{McSherry02}.
\end{itemize}
The simulated user behaves according to an opportunistic model by
stating one of its hidden preferences whenever the suggestions
contain an option that would become optimal if that preference were
added to the model with the proper weight. The interaction continues
until either the preference model is complete, or the simulated user
states no further preference. Note that when the complete preference
model is discovered, the user finds the target option.
We first ran a simulation on a catalog of student accommodations
with 160 options described using 10 attributes. The simulated user
was shown 5 suggestions, and had a randomly generated model of 7
preferences, of which one is given by the user initially. The
results are shown in Figure~\ref{fig:simulation-results-unildb}. For
each value of $x$, it shows the percentage of runs (out of 100) that
discover at least $x$ of the 6 hidden preferences in the complete
model. Using random suggestions as the baseline, we see that the
extremes strategy performs only slightly better, while diversity
provides a significant improvement. The model-based strategies give
the best results, with the counting strategy being about equally
good as diversity, and the probabilistic strategies providing
markedly better results.
In another test, we ran the same simulation for a catalog of 50
randomly generated options with 9 attributes, and a random
preference model of 9 preferences, of which one is known initially.
The results are shown in Figure~\ref{fig:simulation-results-random}.
We can see that there is now a much more pronounced difference
between model-based and non model-based strategies. We attribute
this to the fact that attributes are less correlated, and thus the
extreme and diversity filters tend to produce solutions that are too
scattered in the space of possibilities. Also, the two implementations
of the probabilistic strategy (assuming the attribute values
independent or not) give very close results.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline \#P / \#A & random & extreme & diversity & counting & prob1 & prob2 \\
\hline 6/6 & 0.12 & 0.09 & 0.23 & 0.57 & 0.59 & 0.64 \\
\hline 6/9 & 0.12 & 0.12 & 0.27 & 0.65 & 0.63 & 0.67 \\
\hline 6/12 & 0.11 & 0.13 & 0.24 & 0.62 & 0.64 & 0.63 \\
\hline
\end{tabular}
\caption{\small The fraction of preferences that are correctly
discovered as a function of the number of attributes, keeping
constant the number of preferences (6) to be discovered. All
attributes have integer domains.} \label{tab:nattrib-comparison}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline \#P / \#A & random & extreme & diversity & counting & prob1 & prob2 \\
\hline 3/9 & 0.25 & 0.36 & 0.28 & 0.70 & 0.71 & 0.71 \\
\hline 6/9 & 0.11 & 0.12 & 0.11 & 0.67 & 0.68 & 0.68 \\
\hline 9/9 & 0.041 & 0.17 & 0.05 & 0.66 & 0.70 & 0.73 \\
\hline
\end{tabular}
\caption{\small The fraction of preferences that are correctly
discovered (on average) as a function of the number of preferences
to be discovered. All attributes have integer domains.}
\label{tab:npref-comparison}
\end{center}
\end{table}
We investigated the impact of the number of preferences, the number
and type of attributes, and the size of the data set on random data
sets. In the following, {\em prob1} refers to the probabilistic
strategy with the independence assumption, {\em prob2} to the
probabilistic strategy without that assumption.
Surprisingly we discovered that varying the number of attributes
only slightly changes the results. Keeping the number of preferences
constant at 6 (one being the initial preference), we ran
simulations with the number of attributes equal to 6, 9 and 12. The
average fraction of discovered preferences varied for each strategy
and simulation scenario by no more than $5\%$, as shown in
Table~\ref{tab:nattrib-comparison}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline domain type & random choice & extreme & diversity & counting & prob1 & prob2 \\
\hline mixed & 0.048 & 0.30 & 0.18 & 0.81 & 0.87 & 0.86 \\
\hline integer & 0.04 & 0.17 & 0.05 & 0.66 & 0.70 & 0.72 \\
\hline
\end{tabular}
\caption{\small The fraction of preferences that are correctly
discovered as a function of the different kinds of attribute
domains: integer domains against a mix of 5 integer, 2 discrete
domains and 2 domains with a natural order. We ran 100 simulations
with 9 attributes and 9 preferences.} \label{tab:domain-comparison}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline data size & random choice & extreme & diversity & counting & prob1 & prob2 \\
\hline 50 & 0.25 & 0.50 & 0.56 & 0.89 & 0.94 & 0.93\\
\hline 75 & 0.16 & 0.42 & 0.54 & 0.88 & 0.97 & 0.95\\
\hline 100 & 0.11 & 0.29 & 0.57 & 0.90 & 0.96 & 0.97\\
\hline 200 & 0.05 & 0.22 & 0.54 & 0.86 & 0.91 & 0.93\\
\hline
\end{tabular}
\caption{\small The fraction of preferences that are correctly
discovered as a function of the database size. We ran 100
simulations with 9 attributes and 9 preferences (mixed domains).}
\label{tab:size-comparison}
\end{center}
\end{table}
The impact of the variation of the number of preferences to discover
is shown in Table~\ref{tab:npref-comparison}. All of our model-based
strategies perform significantly better than random choice,
suggestions of extrema, and maximization of diversity. This shows
the importance of considering the already known preferences when
selecting suggestions.
Performance is higher with mixed domains than with all-numeric
domains (Table~\ref{tab:domain-comparison}). This is easily
explained by the larger outcome space in the latter case.
Interestingly, as the size of the item set grows, the performance of
random and extreme strategies significantly degrades while the
model-based strategies maintain about the same
performance~(Table~\ref{tab:size-comparison}).
In all simulations, it appears that the probabilistic suggestion
strategy is the best of all, sometimes by a significant margin. We
thus chose to evaluate this strategy in a real user study.
\section{Experimental Results: User Study}
The strategies we have developed so far depend on many assumptions
about user behavior and can only be truly tested by evaluating them
on real users. However, because of the many factors that influence
user behavior, only testing very general hypotheses is possible.
Here, we are interested in verifying that:
\begin{enumerate}
\item using model-based suggestions leads to more complete preference
models.
\item using model-based suggestions leads to more accurate decisions.
\item more complete preference models tend to give more accurate decisions,
so that the reasoning underlying the model-based suggestions is
correct.
\end{enumerate}
We measure decision accuracy as the percentage of users that find
their most preferred choice using the tool. The most preferred
choice was determined by having the subjects go through the entire
database of offers in detail after they finished using the tool.
This measure of decision accuracy, also called the switching rate,
is the commonly accepted measure in marketing
science~\cite<e.g.,>{ref:decision-accuracy}.
We performed user studies using FlatFinder, a web application for
finding student housing that uses actual offers from a university
database that is updated daily. This database was ideal because it
contains enough offers (about 200) to present a real search problem,
while at the same time being small enough that it is feasible
to go through the entire list and determine the best choice in less than an hour.
We recruited student subjects who
had an interest in finding housing and thus were quite motivated to
perform the task accurately.
We studied two settings:
\begin{itemize}
\item in an unsupervised setting, we monitored user behavior on a publicly
accessible example-critiquing search tool for the listing. This
allowed us to obtain data from over a hundred different users;
however, it was not possible to judge decision accuracy since we
were not able to interview the users themselves.
\item in a supervised setting, we had 40 volunteer students use the
tool under supervision. Here, we could determine decision accuracy
by asking the subjects to carefully examine the entire database of
offers to determine their target option at the end of the procedure.
Thus, we could determine the switching rate and measure decision
accuracy.
\end{itemize}
There are 10 attributes: type of accommodation (room in a family
house, room in a shared apartment, studio apartment, apartment),
rent, number of rooms, furnished (or not), type of bathroom (private
or shared), type of kitchen (shared, private), transportation
available (none, bus, subway, commuter train), distance to the
university and distance to the town center.
For numerical attributes, a preference consists of a relational
operator (less than, equal, greater than), a threshold value and an
importance weight between 1 and 5; for example, ``price less than 600
Francs'' with importance 4. For qualitative attributes, a preference
specifies that a certain value is preferred, with a certain
importance weight. Preferences are combined by summing the weights
of all satisfied preferences, and options are ordered so that the
highest value is the most preferred.
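A minimal sketch of this weighted-sum ranking in Python (the
encodings of options and preferences are ours, not FlatFinder's
actual data structures):
\begin{verbatim}
def score(option, preferences):
    # preferences: (predicate, weight) pairs, e.g.
    # (lambda o: o["rent"] < 600, 4) for "rent less than 600", importance 4.
    return sum(w for predicate, w in preferences if predicate(option))

def ranked(options, preferences):
    return sorted(options, key=lambda o: score(o, preferences), reverse=True)
\end{verbatim}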
Users start by stating a set $P_I$ of initial preferences, and then
they obtain options by pressing a \emph{search} button.
Subsequently, they go through a sequence of {\em interaction cycles}
where they refine their preferences by critiquing the displayed
examples. The system maintains their current set of preferences, and
the user can state additional preferences, change the reference
value of existing preferences, or even remove one or more of the
preferences. Finally, the process finishes with a final set of
preferences $P_F$, and the user chooses one of the displayed
examples.
The increment of preferences $|P_F - P_I|$ is the number of
extra preferences stated and represents the degree to which the
process stimulates preference expression.
The search tool was made available in two versions:
\begin{itemize}
\item {\bf C}, only showing a set of 6 candidate apartments without
suggestions, and
\item {\bf C+S}, showing a set of 3 candidate apartments and 3
suggestions selected according to the probabilistic strategy with a
utility-dominance criterion.
\end{itemize}
We now describe the results of the two experiments.
\subsection{Online User Study}
\begin{table}
\centerline{
\begin{tabular}{|c|c|c|}
\hline & tool without suggestions & tool with suggestions \\ \hline
number of critiquing cycles & 2.89 & 3.00 \\
initial preferences & 2.39 & 2.23 \\
final preferences & 3.04 & 3.69 \\
increment & 0.64 & 1.46 \\
\hline
\end{tabular}
} \caption{\small Average behavior of users of the on-line
experiment. We collected logs of real users looking for a student
accommodation with our tool, hosted on the laboratory website.}
\label{tab:online-experiment-results}
\end{table}
FlatFinder has been hosted on the laboratory web-server and made
accessible to students looking for accommodation during the winter
of 2004-2005. For each user, it anonymously recorded a log of the
interactions for later analysis. The server presented users with
alternate versions of the system, i.e. with (\emph{C+S}) and without
(\emph{C}) suggestions. We collected logs from 63 active users who
went through several cycles of preference revision.
In analyzing the results of these experiments, whenever we present a
hypothesis comparing users of the same group, we show its
statistical significance using a paired test. For all hypotheses
comparing users of different groups, we use the unpaired Student's
t-test to indicate statistical significance. In both cases, we
indicate significance by $p$, the probability of obtaining the
observed data under the condition that the null hypothesis is true.
Values of $p < 0.05$ are considered significant, $p < 0.01$ highly
significant, and $p < 0.001$ very highly significant.
We first considered the increment from initial preference
enumeration $|P_I|$ to final preference enumeration
$|P_F|$, as shown in Table~\ref{tab:online-experiment-results}.
This increment was on average 1.46 for the tool with suggestions
\emph{C+S} and only 0.64 for the tool \emph{C} ($128\%$ increase),
showing the higher involvement of users when they see suggestions.
This hypothesis was confirmed with p = 0.002.
It is interesting to see that in both groups the users interacted
for a similar number of cycles (average of 2.89 and 3.00; p = 0.42,
the null hypothesis cannot be rejected), and that the number of
initial preferences is also close (average of 2.39 and 2.23, null
hypothesis cannot be rejected with p = 0.37), meaning that the
groups are relatively unbiased.
The result of the test (Table~\ref{tab:online-experiment-results})
shows clearly that users are more likely to state preferences when
suggestions are present, thus verifying Hypothesis 1. However, as
this is an online experiment, we are not able to
measure decision accuracy. In order to obtain these measures, we
also conducted a supervised user study.
\subsection{Supervised User Study}
\begin{table}
\begin{center}
\begin{tabular}{|cc|c|}
\hline
\multicolumn{2}{|c|}{Characteristics} & Participants \\ \hline \hline
Gender & Male & 31 \\
& Female & 9 \\ \hline \hline
Age & 10s & 2 \\
& 20s & 36 \\
& 30s & 2 \\ \hline \hline
Education & Undergraduate & 36 \\
& PhD & 4 \\ \hline \hline
\multicolumn{2}{|l|}{Familiar with online apartment search} & \\
& Yes & 26 \\
& No & 14 \\ \hline \hline
\multicolumn{2}{|l|}{Familiar with apartments in the area} & \\
& Yes & 27 \\
& No & 13 \\ \hline \hline
\end{tabular}
\end{center}
\caption{\small Demographic characteristics of participants for the
supervised user study.} \label{table:demographic}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{} &Interaction with &Interaction with \\
\multicolumn{2}{|c|}{} &first interface &second interface \\
\hline
group 1 & Tool version & \emph{C} & \emph{C+S} \\ \cline{2-4}
({\bf C} first) & Decision Accuracy (mean) & 0.45 & 0.80 \\ \cline{2-4}
& Preference Enumeration (mean) & 5.30 & 6.15 \\
\cline{2-4}
& Interaction cycles (mean) & 5.60 & 4.55 \\ \cline{2-4}
& Interaction time (min:sec, mean) & 8:09 & 4:33 \\
\hline
\hline \hline
group 2 & Tool version & \emph{C+S} & \emph{C} \\ \cline{2-4}
({\bf C+S} first) & Decision Accuracy (mean) & 0.72 & 0.67 \\ \cline{2-4}
& Preference Enumeration (mean) & 5.44 & 4.50 \\ \cline{2-4}
& Interaction cycles (mean) & 4.05 & 6.25 \\ \cline{2-4}
& Interaction time (min:sec, mean) & 7:39 & 3:33 \\ \hline
\end{tabular}
\end{center}
\caption{\small Results for the supervised experiment. Decision
accuracy and preference enumeration (the number of preferences
stated) are higher when suggestions are provided (interface
\emph{C+S}, showing 3 candidates and 3 suggestions) rather than when
suggestions are not provided (interface \emph{C}, 6 candidates). }
\label{table:supervised-results}
\end{table}
The supervised user study used the same tool as the online user
study but users were followed during their interaction.
To measure improvement of accuracy, we instructed all of the users
to identify their most preferred item by searching the database
using interface 1. This choice was recorded and was called $c_1$.
Then the users were instructed to interact with the database using
interface 2 and indicate a new choice ($c_2$) if the latter was an
improvement on $c_1$ in their opinion. To evaluate whether the
second choice was better than the initial one, we instructed the
users to review all apartments (100 apartments in this case) and
tell us whether $c_1$, $c_2$, or a completely different one truly
seemed best.
Thus, the experiment allowed us to measure decision accuracy, since
we obtained the true target choice for each user. If users stood by
their first choice, it indicated that they had found their target
choice without further help from the second interface. If users
stood by their second choice, it indicated that they had found their
target choice with the help of the second interface. If users chose
yet another item, it indicated that they had not found their target
choice even though they performed search with both interfaces.
40 subjects, mostly undergraduate students, with 9 different
nationalities took part in the study. Most of them (27 out of 40)
had searched for an apartment in the area before and had used online
sites (26 out of 40) to look for accommodations. Table
\ref{table:demographic} shows some of their demographic
characteristics. The subjects were motivated by the interest of
finding a better apartment for themselves, which meant that they
treated the study seriously.
To overcome bias due to learning and fatigue, we divided the users
in two groups, who were asked to interact with the versions in two
different orders:
\begin{itemize}
\item group $1$ used tool \emph{C} (step 1) and then \emph{C+S} (step 2)
\item group $2$ used tool \emph{C+S} (step 1) and then \emph{C} (step 2)
\end{itemize}
Both groups then examined the entire list to find the true most
preferred option. For each version of the tool and each group, we
recorded the fraction of subjects where the final choice made using
that interface was equal to the target option as decision accuracy.
For both groups, we refer to accuracy of interface 1 as $acc_1$, and
accuracy of interface 2 as $acc_2$.
We expected that the order of presenting the versions would be
important. Once users have realized their own preferences and found a
satisfactory option, they are likely to stay consistent with it.
Therefore, we expected $acc_2 > acc_1$ in both cases. However, we
also expected that average accuracy would significantly increase with
suggestions, so that the results would show $acc_2 \gg acc_1$ in the
first group and $acc_2$ only slightly higher than $acc_1$ in group
2.
Table~\ref{table:supervised-results} shows the results. In the following
paragraphs we verify Hypothesis 2 (decision accuracy improves
with suggestions) and Hypothesis 3 (preference enumeration improves accuracy).
Finally, we check whether a mediation phenomenon is present
(meaning that the improvement in accuracy is entirely explained by
the fact that suggestions lead to an increase in stated preferences).
\paragraph{Decision Accuracy improves with suggestions}
\begin{figure}
\begin{center}\mbox{
\subfigure[{\small For group 1, accuracy dramatically increased when they
used the version with suggestions (\emph{C+S}).}]{
\includegraphics[width=0.49\columnwidth]{pics/group1_accuracy.eps}
}
\hspace{0.1cm}
\subfigure[{\small For group 2, accuracy is already very high when they
use the version with suggestions (\emph{C+S}). Further interaction
cycles with the tool \emph{C} showing 6 candidates does not
increase accuracy any further.}]{
\includegraphics[width=0.49\columnwidth]{pics/group2_accuracy.eps}
}}
\mbox{
\subfigure[{\small For group 1, users needed fewer interaction cycles to make a choice
when using the interface with suggestions (\emph{C+S}).}]{
\includegraphics[width=0.49\columnwidth]{pics/cyclesG1.eps}
}
\hspace{0.1cm}
\subfigure[{\small For group 2, the number of interaction cycles significantly increased
when they used the version without suggestions (\emph{C}).}]{
\includegraphics[width=0.49\columnwidth]{pics/cyclesG2.eps}
}}
\caption{\small Decision accuracy and interaction cycles for both groups of users of the
supervised experiment.}
\label{fig:supervised-experiment-results}
\end{center}
\end{figure}
Figure~\ref{fig:supervised-experiment-results} shows the variation of decision
accuracy and the number of interaction cycles for the two groups.
For group 1, after interaction with tool \emph{C}, the
average accuracy is only 45\%, but after interaction with
\emph{C+S}, the version with suggestions, it goes up to 80\%. This
confirms the hypothesis that suggestions improve accuracy with p =
0.00076. 10 of the 20 subjects in this group switched to another
choice between the two versions, and 8 of them reported that the new
choice was better. Clearly, the use of suggestions significantly
improved decision accuracy for this group.
Users of group 2 used \emph{C+S} straight away and achieved an
average accuracy of 72\% at the outset. We expected that the
subsequent use of tool \emph{C} would have a small positive effect on
accuracy, but in reality the accuracy decreased to 67\%. 10 subjects
changed their final choice using the tool without suggestions, and 6
of them said that the newly chosen option was only equally good as the
one they originally chose. The fact that accuracy does not drop
significantly in this case is not surprising because users remember
their preferences from using the tool with suggestions and will thus
state them more accurately independently of the tool. We can
conclude from this group that improved accuracy is not simply the
result of performing the search a second time, but due to the
provision of suggestions in the tool. Also, the closeness of the
accuracy levels reached by both groups when using suggestions can be
interpreted as confirming the effect of the suggestions.
We also note that users of interface \emph{C+S} needed fewer cycles
(and thus less effort) to make decisions (average of 4.15) than
interface \emph{C} (5.92).
Interestingly, the price of the chosen apartment
increased for the first group (average of 586.75 for \emph{C} to
612.50 for \emph{C+S}; p = 0.04, statistically significant), whereas
it decreased for the second group (average of 527.20 for \emph{C+S}
to 477.25 for \emph{C}; p = 0.18, the decrease is not statistically
significant). We believe that subjects in the first group did not
find a good choice, and thus paid a relatively high price to get an
apartment with which they would feel comfortable. Conditioned on
this high price, they were then willing to spend even more as they
discovered more interesting features through suggestions. On the
other hand, subjects in group 2 already found a good choice in the
first use of the tool, and were unwilling to accept a high price
when they did not find a better choice in the second search without
suggestions.
Thus, we conclude that Hypothesis~2 is confirmed: suggestions indeed
increase decision accuracy.
\paragraph{Preference enumeration improves accuracy}
In this study, we notice that when suggestions are present, users
state a higher number of preferences (average of 5.8 preferences vs.
only 4.8 without suggestions, p = 0.021), so that Hypothesis~1 is
again confirmed.
To validate Hypothesis 3, that a higher preference enumeration also
leads to more accurate decisions, we can compare the average size of
the preference model for those users who found their target solution
with the first use of the tool and those who did not. In both
groups, users who did find their target in the first try stated on
average 5.56 preferences (5.56 in group 1 and 5.57 in group 2) while
users who did not find their target stated only an average of 4.88
preferences (5.09 in group 1 and 4.67 in group 2). This shows that
increased preference enumeration indeed improves accuracy but
unfortunately we did not find this statistically significant (p =
0.17). In fact, there is a chance that this correlation is due to
some users being more informed and thus making more accurate
decisions and stating more preferences.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
 & $\Delta |P|\leq 0$ & $\Delta |P|>0$ \\
\hline found & 0.45 & 0.83 \\
\hline still not found & 0.55 & 0.17 \\
\hline
\end{tabular}
\caption{\small For users who did not find their target in the first
use of the tool, the table shows the fraction that did and did not
find their target in the next try, depending on whether the size of
their preference model did or did not increase. ($\Delta |P|$ is the
variation of the number of stated preferences $|P|$ between the two
uses of the tool).} \label{table:pref-accuracy2}
\end{center}
\end{table}
As an evaluation independent of users' a priori knowledge, we
considered only those users who did not find their target in the
first try. As a measure of the correlation between preference
enumeration and accuracy, we considered how often an increase in the
number of stated preferences led to finding the most preferred
option on the second try. Table~\ref{table:pref-accuracy2} shows
that among users whose preference model did not grow in size, only
$45\%$ found their target, whereas of those that increased their
preference model, $83\%$ found their target. Again, we see a
significant confirmation that higher preference enumeration leads to
a more accurate decision with real users (p = 0.038251).
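As an illustration, the following Python sketch shows how such a
comparison could be tested for significance; since the raw counts are
not reported here, the counts below are hypothetical values chosen
only to be consistent with the fractions in
Table~\ref{table:pref-accuracy2}.
\begin{verbatim}
# A minimal sketch, assuming hypothetical counts consistent with the
# reported fractions (5/11 = 0.45, 5/6 = 0.83); not the study's data.
from scipy.stats import fisher_exact

#                   dP <= 0   dP > 0
table = [[5, 5],  # found target on the second try
         [6, 1]]  # still not found

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
\end{verbatim}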
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
 & $\Delta |P|<0$ & $\Delta |P|=0$ & $\Delta |P|>0$ \\
\hline
$\Delta acc>0$ & 0.23 & 0.14 & 0.38 \\
\hline
$\Delta acc=0$ & 0.62 & 0.71 & 0.62 \\
\hline
$\Delta acc<0$ & 0.15 & 0.14 & 0.00 \\
\hline
\end{tabular}\caption{\small Variation of accuracy against variation of the
number of stated preferences $|P|$ between the two uses of the
tool.} \label{table:pref-accuracy1}
\end{center}
\end{table}
Finally, a third confirmation can be obtained by considering the
influence that variations in the size of the preference model have
on decision accuracy, shown in Table~\ref{table:pref-accuracy1}.
Each column corresponds to users whose preference model decreased,
stayed the same, or increased in size, and each row gives the
fraction of those users for whom the accuracy increased, stayed the
same, or decreased (note that when accuracy is 1 at the first step,
it cannot increase further). We can see that a significant increase in
accuracy occurs only when the size of the preference model
increases. In all other cases there are some random variations but
no major increases. The statistical test shows that the hypothesis
that an increase in preference enumeration causes an increase in
accuracy is confirmed with p = 0.0322.
Thus, we conclude that Hypothesis~3 is also validated by the user
study; a more complete preference model indeed leads to more
accurate decisions.
\paragraph{Mediation analysis}
Since our three hypotheses are verified, the presence of suggestions
leads to an increase in the number of stated preferences and
consequently to an increase in accuracy. With a 3-step mediation
analysis we checked whether there is a mediation phenomenon, meaning
that the increase in accuracy is entirely explained by the increase
in stated preferences.
However, a Sobel test did not show statistical significance
(p = 0.14), so we cannot conclude that the increase in preference
enumeration is a ``mediator''. Our interpretation is that
suggestions influence decision accuracy not only through the number
of stated preferences, but also by making users state better
preferences.
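For reference, the standard Sobel statistic is
$z = ab/\sqrt{b^2 s_a^2 + a^2 s_b^2}$, where $a$ and $b$ are the two
regression coefficients of the mediation path and $s_a$, $s_b$ their
standard errors. A minimal Python sketch (with placeholder values,
not our estimates) is:
\begin{verbatim}
# Sobel test sketch: a = effect of suggestions on the number of stated
# preferences, b = effect of preferences on accuracy controlling for
# suggestions. The numbers below are placeholders.
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Return the Sobel z statistic and the two-sided p-value."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 2 * (1 - norm.cdf(abs(z)))

z, p = sobel_test(a=1.0, se_a=0.4, b=0.05, se_b=0.03)
print(f"Sobel z = {z:.2f}, p = {p:.3f}")
\end{verbatim}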
\subsection{Other Observations}
A more objective measure of satisfaction is the price that people
are willing to pay for the chosen option, since they would only pay
more if the choice satisfied them more on the other attributes. For
the 40 subjects, the average rent of the chosen housing with
suggestions was CHF 569.85, an increase of about $7\%$ from the
average without suggestions, which was CHF 532.00. In fact, we can
observe a general correlation
between price and accuracy, as 9 out of the 10 subjects that did not
find their target in the first interaction finally chose an
apartment with higher rent.
Subjects generally liked the interaction (average rating of 4.1 out
of 5), with no significant difference between the versions.
We asked the subjects which version they considered more productive.
The majority of them, 22 out of 40, preferred the version with
suggestions, while 13 preferred the version with more candidates and
5 had no opinion.
Another indication that suggestions are helpful is the average time to
complete the decision task: while it took subjects an average of
8:09 minutes to find their target without suggestions, the version
with suggestions took only 7:39 minutes on average. Thus, with
suggestions users take less time yet reach a more accurate decision.
\section{Related Work}
\paragraph{Example-based search tools}
Burke et al.~\citeyear{Burke97findme} were among the first
to recognize the challenge of developing intelligent tools for
preference-based search. Their approach, called {\em assisted
browsing}, combines searching and browsing with knowledge-based
assistance and recognizes that users are an integral part of the
search process.
They developed the {\em FindMe approach}, consisting of a family of
prototypes that implement the same intuition in a variety of domains
(restaurants, apartments, cars, videos, etc.). The main features are
similarity-based retrieval (look for a restaurant
similar to this one, but in San Francisco), support for tweaking
(look for something bigger, nicer, closer to the centre, etc.),
abstraction of high-level features (users might look for a restaurant
with a {\em casual} look, where look is not defined in the database
directly, but decoupled into a few basic features), and multiple
similarity metrics. The display follows a hierarchical sort where the
preferences (described by goals: minimize price, find a seafood
cuisine) have a fixed priority. The restaurant advisor was tested
on-line for several years.
Another early and similar work is the {\em ATA} system of Linden et
al.~\citeyear{linden97interactive}. ATA is a tool for planning
travel itineraries based on a user's constraints. It followed the
so-called candidate-critiquing cycle, where users could post
constraints on their travel and would be shown the three best matching
flights from the database. ATA was tested on-line for several
months.
In more recent work, Shearin and
Lieberman~\citeyear{shearin01intelligent} describe {\em
AptDecision}, an example-critiquing interface where the user is able
to guide the search by giving feedback on any feature (in the form
of either positive or negative weights) at any time. All of these
critiques are stored in a profile that is displayed at the bottom
part of the interface and can be modified or stored for later use.
Instead of providing feedback manually, the user might prefer to let
AptDecision learn his or her profile weights by comparing two sample
examples. However, they did not investigate strategies for
suggestions.
\paragraph{Improving example selection}
Techniques to induce users to state their preferences more
accurately have been proposed in various recommender systems.
Suggestion mechanisms include extreme values, diversity, and
compound critiques.
The ATA system of Linden et al.~\citeyear{linden97interactive}
included a suggestion strategy of showing extreme examples in the
air travel domain, for example the first and last flight
of the day. In our simulations, we compared our model-based
techniques to this strategy.
Several researchers
\cite{ref:bridge-diversity,ref:smyth-diversity,McSherry02,McGinty2003iccbr,Smyth2003ijcai,ref:McSherry-compromise}
have studied the issue of achieving a good compromise between
generating similar and diverse results in case-based retrieval. They
consider the problem of finding cases that are most similar to a
given query case, but at the same time maximize the diversity of
the options proposed to the user. Smyth et
al.~\citeyear{Smyth2003ijcai} improve on the common query \emph{show me
more like this}: their adaptive search algorithm alternates between
a strategy that privileges similarity and one that privileges
diversity (\emph{refocus}). McSherry~\citeyear{McSherry02} took this
idea further and provided selection algorithms that maximize
diversity and similarity at the same time.
McSherry~\citeyear{ref:McSherry-compromise} proposes a technique
where retrieved cases are associated with a set of {\em like} cases
that share identical differences with the query case. The like cases
are not displayed among the examples, but accessible to users on
demand. Thus, the retrieval set can be more diverse.
Reilly et al.~\citeyear{ReillyMMS04} also use a mixture of
similarity and diversity, with the goal of providing standardized
critiques to support trade-off analysis in an e-commerce
environment. In this context, a critique is a modification of the
user's current preferences that narrows down the search or indicates
a trade-off. Users can select either unit critiques
which revise preferences on individual attributes, or compound
critiques which revise preferences on multiple attributes. The
compound critiques are organized into categories and displayed in
natural language form, for example {\em more memory and larger and
heavier}. One of the innovations in their work is the automatic
generation of sensible critiques involving several features based on
available items using the {\em Apriori} algorithm. Both simulated
and real user studies have shown that compound critiques
significantly reduce the number of interaction cycles.
All of these approaches, however, differ from ours in the sense that
they do not have an explicit preference model. The recent work of
Hebrard et al.~\citeyear{ref:diversity-aaai05} has investigated the
computational problem of generating diverse solutions to constraint
satisfaction problems.
\paragraph{Dialogue-based approaches}
Many other related works try to simulate human conversation in order
to guide the customer through the decision making process.
Shimazu~\citeyear{ShimazuExpertClerk} describes ExpertClerk, an
agent system that imitates a human salesclerk. In the first phase,
the agent tries to narrow down the possibilities by asking
questions. An optimal discrimination tree is built using information
gain (as in the ID3 algorithm), where each node represents a specific
question to the user, and the user's answer leads into a specific
portion of the subtree. In fact, each node is equivalent to a crisp
constraint, and the problem of getting to a node with no compatible
examples may occur. In the second phase, the agent proposes three
possible items, chosen to be one in the central and two in the
opposite extreme region of the available product space. It is shown
that an intelligent use of both strategies (asking and proposing) is
more efficient than either of the two strategies alone.
\citeA{Thompson04} also propose a conversational, dialogue-based
approach in {\em ADAPTIVE PLACE ADVISOR}, a conversational
recommendation system for restaurants in the Palo Alto area. Their
approach mimics a conversation that proceeds with questions like
{\em What type of food would you like?}; the user might either
give a specific answer such as {\em Chinese}, say that he or
she does not care about this aspect, or ask the advisor about the
possible choices. User preferences obtained during the current
conversation are treated as crisp constraints and only items that
satisfy them are considered. When there are no items that satisfy
all preferences, the system may ask the user whether he or she is
willing to relax some constraints.
The tool also develops a long-term user model that keeps track of
preferences expressed in previous interactions. It is used to sort
the results that are shown to the user.
\paragraph{Using prior knowledge}
It is also possible to optimize the set of examples given an
expectation of the user's preferences, without actually asking the
users to state their own preferences. This is the approach described
by Price and Messinger~\citeyear{ref:price-messinger}. This work
differs from ours in that they do not consider preferences of an
individual user, but average preferences for a group of users.
Preference elicitation can be optimized using prior distributions of
possible preferences. This approach was proposed by Chajewska
et~al.~\citeyear{ref:chajewska} to produce a more efficient
preference elicitation procedure. The elicitation is a
question-answering interaction where the questions are selected to
maximize the expected value of information.
Boutilier~\citeyear{Boutilier2002} has extended this work by taking
into account values of future questions to further optimize decision
quality while minimizing user effort. He views the elicitation
procedure itself as a decision process and uses a partially
observable Markov decision process (POMDP) to obtain an elicitation
strategy.
Such approaches require that users are
familiar enough with the available options to answer any question about
value functions without the benefit of example outcomes to assess
them. In contrast, in a mixed-initiative system as described here the
user is free to furnish only the information she is confident about.
It is also questionable whether one can assume a prior distribution
on preferences in personalized recommendation systems where users
may be very diverse.
\section{Conclusion}
We considered AI techniques used for product search and recommender
systems based on a set of preferences explicitly stated by users.
One of the challenges recognized in this field is the elicitation of
an accurate preference model from each user. In particular, we face
the dilemma of accuracy at the cost of user effort.
Some systems may introduce severe errors into the model because
users cannot expend the amount of effort required to state
preferences, while others may require little effort but provide very
general recommendations because the preference model was never
completely established. The ideal solution is one that provides
users with accurate recommendations while minimizing their effort in
stating preferences. Therefore, this article also examined user
interaction issues and emphasized models that motivate users to
state more complete and accurate preferences, while requiring the
least amount of effort from the user.
We conjectured that the benefit of discovering attractive
recommendations presents a strong motivation for users to state
additional preferences. Thus, we developed a model-based approach
that analyzes the user's current preference model and potential
hidden preferences in order to generate a set of suggestions that
would be attractive to a rational user. This suggestion set is
calculated based on the look-ahead principle: a good suggestion is
an outcome that becomes optimal when additional hidden preferences
have been considered. Through simulations, we demonstrated the
superior performance of these model-based strategies in comparison
to the other proposed strategies.
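To make the look-ahead principle concrete, the following Python
sketch (an illustration, not our actual implementation) shows the
Pareto-dominance test on which the suggestion strategies are built,
assuming each option is scored on a ``higher is better'' scale for
every stated preference:
\begin{verbatim}
# An option qualifies as a suggestion candidate if it is dominated
# under the stated preferences but can become Pareto-optimal once an
# additional (hidden) preference is taken into account.
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a weakly beats b everywhere and strictly beats it somewhere."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def pareto_optimal(options):
    """Indices of options not dominated by any other option."""
    return [i for i, a in enumerate(options)
            if not any(dominates(b, a)
                       for j, b in enumerate(options) if j != i)]

# With two stated preferences, option 1 is dominated by option 0 ...
print(pareto_optimal([(0.9, 0.8), (0.8, 0.7), (0.4, 0.9)]))  # [0, 2]
# ... but a third, previously hidden, preference can make it optimal,
# which is exactly what makes it a good suggestion.
print(pareto_optimal([(0.9, 0.8, 0.1), (0.8, 0.7, 0.9),
                      (0.4, 0.9, 0.2)]))                     # [0, 1, 2]
\end{verbatim}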
We further validated our hypothesis that such strategies are highly
likely to stimulate users to express more preferences through a
within-subject user study involving 40 real users. We
measured decision accuracy, defined as the percentage of users who
actually found their most preferred option with the tool, for an
example-critiquing tool with and without suggestions.
The study showed that users are able to achieve a significantly higher level
of decision accuracy with an example-critiquing tool with suggestions than without suggestions, increasing from
45 to 80$\%$, while the effort spent on both tools is comparable. This shows that there is significant potential
for improving the tools that are currently in use.
It is important to note that this performance is obtained with users
who are not bound to a particular dialogue, but are free to interact
with the system on their own initiative.
This process particularly supports preference expression for users
who are unfamiliar with the domain, and typically for decisions
which require low to medium financial commitments. For highly
important decisions where users understand their preferences well,
other preference elicitation
techniques~\cite{ref:keeney-maut,ref:boutilier-ijcai05} are likely
to provide superior results.
As the strategies are based on the very general notion of Pareto-optimality,
they can be applied to a broad range of preference modeling formalisms,
including utility functions, soft constraints~\cite{Bistarelli1997}, and
CP-networks~\cite{Boutilier2004}. This will greatly strengthen the
performance of example-critiquing systems in applications ranging from
decision support to e-commerce.
\section{Acknowledgements}
The authors would like to thank Vincent Schickel-Zuber for his
significant contribution in the development of the web based
interface for FlatFinder, and Jennifer Graetzel for her insightful
suggestions on the various draft versions of this manuscript to
improve its readability. This work was supported by the Swiss
National Science Foundation under contract No. 200020-103421.
\section{Keywords}
Aluminum Nitride, Single Photon Source, Quantum Optics, Room Temperature
\section{Introduction}
Recently, there has been renewed interest in the group III-nitrides as platforms for quantum optics. In the last few years, point-like single photon sources have been reported in hexagonal boron nitride\cite{Tran2016}, gallium nitride (GaN)\cite{Berhane2017,Zhou2018,Nguyen2019}, and very recently in aluminum nitride (AlN)\cite{Xue2020,Lienhard2017}. By virtue of their deep confinement energies, these color centers demonstrate anti-bunching even at room temperature, adding to a select group of solid state materials that host high temperature quantum emitters, such as diamond \cite{Gruber1997,Kurtsiefer2000,Naydenov2011} and silicon carbide (SiC)\cite{Falk2014,Lohrmann2015,Widmann2015}. The commercial applications of the nitrides mean there is considerable expertise in processing and the possibility of epitaxial deposition of complex heterostructures, paving the way to cavity enhancement and optoelectronic devices.
AlN is a semiconductor with a large band gap of \SI{6.2}{\electronvolt}, making it a promising platform for integrated quantum photonic applications \cite{Pernice2012,Xiong2012,Lu2018,Wan2019} due to its transparency in the visible spectrum, strong second-order nonlinearity\cite{Pernice2012}, low-loss high-speed opto-electric phase modulation\cite{Xiong2012}, mature device fabrication and available high-purity substrates. In its wurtzite phase AlN has a hexagonal unit cell, shown in Fig.\ref{fig:Introductory}(a), that lacks inversion-symmetry along the [0001] crystallographic axis. This, along with the finite dipole moment associated with the aluminum-nitrogen bond, leads to an internal electric polarisation and a piezoelectric effect along [0001]. Recently, AlN has been predicted to host atomic-like defects with optically-addressable spin states \cite{Tu2013,Seo2016,Varley2016,Bowes2019}, which would be ideal for quantum technologies that require an interface between flying photonic qubits and stationary trapped spin qubits. The emitters we observe here have a narrower distribution in zero-phonon line energy and a prominent phonon sideband, unlike those in the report of Xue et al.\ \cite{Xue2020}. This suggests the emitters in our sample all arise from the same crystal defect, and have some physical properties consistent with the one emitter reported in the abstract by Lienhard et al.\ \cite{Lienhard2017}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\linewidth]{Figures/Fig1-Introductory.pdf}
\caption{Confocal mapping of point like emitters in AlN. a) Crystal structure of aluminum nitride. The polar crystal axis [0001] is indicated. b) Optical image of the sample with titanium markers. c) Intensity scan map of the sample. The three emitters, C1-3, studied in this work are labelled.}
\label{fig:Introductory}
\end{figure}
\section*{Results and Discussion}
The sample studied within this work is a \SI{1}{\micro\meter} thick epilayer of AlN-on-sapphire. A confocal scanned optical image of the sample shown in Fig.\ref{fig:Introductory}(c) reveals the presence of point-like emitters. A titanium mask on the sample surface is visible, which enables identification of an emitter's position. The emitter density is estimated to be \SI{1.25}{\micro\meter^{-2}}, which is sufficiently low for individual emitters to be optically addressed with a diffraction limited optical spot.
Three color centers, C1-C3, are labelled on the scan map and subsequently studied in detail in Fig.\ref{fig:Statistical}. Spectra of the three emitters shown in Fig.\ref{fig:Statistical}(a) have an obvious spectral peak, labelled as a zero phonon line (ZPL), at \SI{2.08}{}, \SI{2.12}{} and \SI{2.09}{\electronvolt}, respectively. This is attributed to the optical excited-to-ground state transition of the color centers without high energy phonon emission, by analogy with the ZPL observed in the NV$^-$ centre in diamond, which has a room temperature ZPL at \SI{1.95}{\electronvolt} \cite{Gruber1997}. In addition to the ZPL peak, each emitter's spectrum demonstrates coupling to high-energy phonons, apparent as a broad phonon side band (PSB) spanning about \SI{0.6}{\electronvolt} to the red. Such a broad, prominent PSB has not been reported in other nitride emitters \cite{Berhane2017,Nguyen2019} but qualitatively resembles that of the NV$^-$ in diamond \cite{Kurtsiefer2000}.
The room temperature FWHMs of the ZPLs are \SI[trapambigerr=false]{8.3\pm0.3}{}, \SI[trapambigerr=false]{11.7\pm0.2}{} and \SI[trapambigerr=false]{9.4\pm0.2}{\milli\electronvolt}, respectively. Broadening of the ZPL at room temperature is consistent with coupling to low energy acoustic phonon modes and spectral jitter. The ZPL contains \SI{3.2}{\%} of the total intensity on average. However, enhancement of the light emitted into the ZPL of a phonon-broadened emitter by weak coupling to an optical cavity has been demonstrated in other materials \cite{Faraon2011}. The spectra for the emitters C2 and C3 have significantly different PSBs. The change in spectral shape may result from emitters being located in different strain fields or in proximity to crystal dislocations and impurities.
The quantum nature of the emitters is proven in Fig.\ref{fig:Statistical}(a), which shows the results of Hanbury Brown and Twiss auto-correlation measurements under \SI{532}{\nano\meter} excitation at \SI{350}{\micro\watt} at room temperature. The emission was filtered between \SI{1.91}{} and \SI{2.33}{\electronvolt} to suppress the fluorescence signal from the Cr$^{3+}$ impurity in the sapphire substrate \cite{Jardin1996}. Anti-bunching below the $g^{(2)}(\tau)<0.5$ limit is observed. $g^{(2)}(0)$ for the three emitters C1-3 is \SI[trapambigerr=false]{0.17\pm 0.01}{}, \SI[trapambigerr=false]{0.20\pm 0.01}{} and \SI[trapambigerr=false]{0.23\pm 0.02}{}, respectively.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/Fig2-Statistical.pdf}
\caption{Room temperature spectroscopy of color centers in AlN. a) Spectral and autocorrelation measurements of the three emitters, labelled C1, C2 and C3 in the intensity scan map in Fig.\ref{fig:Introductory}(c). b) Histogram showing the energy at which the spectrum of each of 20 emitters has fallen to half its maximum intensity, on the high energy side. The orange data represents emitters where an obvious zero phonon line cannot be identified. The spectrum for C1 is overlaid, with phonon-shifted energies of the zero phonon line labelled. c) Raman spectrum showing phonon modes in the AlN-on-sapphire substrate.}
\label{fig:Statistical}
\end{figure}
The phonon energies available within AlN can be determined from a Stokes-shifted Raman measurement, as shown in Fig.\ref{fig:Statistical}(c). The measurement provides insight into the Raman-active phonon modes: it is possible to identify a number of vibrational energies from both the AlN and the sapphire \cite{Davydov1998,McNeil1993}. The response of the sapphire substrate is to be expected given the limited axial resolution of the confocal microscope with respect to the AlN thickness. Using the relation $\Delta E=E_{ZPL}-E_{PX}$, where $E_{PX}$ is the peak energy of peaks P1-5 in the spectrum of emitter C1, it is possible to correlate the Raman shifts of the AlN vibrational modes with the peak locations. Ignoring the contributions due to vibrational modes from the sapphire, the Raman-shifted transitions $\Delta E$ for $E2^{(LOW)}$, $A1(TO)$, $E2^{(HIGH)}$ and $A1(LO)$ are \SI{31.2\pm0.1}{}, \SI{76.7\pm0.3}{}, \SI{81.7\pm0.01}{} and \SI[trapambigerr=false]{110.3\pm0.1}{\milli\electronvolt} respectively. We therefore hypothesise that the peaks P1 and P3 arise from phonon-assisted replicas involving coupling to E2$^{(LOW)}$ and E2$^{(HIGH)}$ respectively, and peaks P2 and P5 from the corresponding two-phonon processes. In addition, P4 may be described by a mixed two-phonon emission involving $E2^{(LOW)}+E2^{(HIGH)}$. These phonon-replica peaks are illustrated in Fig.\ref{fig:Statistical}(b). We hypothesise that the higher-energy peaks arise from multiple-phonon-assisted transitions and that their broadening stems from the small defect size, which couples to a large range of the phonon density of states \cite{Davydov1998} across the Brillouin zone. Comparable peaks are not as pronounced in the spectra from C2 and C3, as the contributions merge into a single PSB. The reduced dimensionality of the h-BN system results in a strikingly different phonon spectrum, consisting of replicas of the ZPL spaced by optical phonon frequencies, with the fraction of the total emission via the ZPL as high as 0.82 \cite{Tran2016}. Conversely, previous reports on color centers in GaN\cite{Berhane2017,Nguyen2019} and AlN\cite{Xue2020} do not report high-energy phonon sidebands, so the emitters reported here bear a closer resemblance to the NV center in diamond \cite{Kurtsiefer2000}.
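As a simple illustration (not the analysis pipeline used here), candidate replica positions can be generated by subtracting one- and two-phonon combinations of the Raman-measured mode energies from the ZPL, for comparison with the observed peaks P1-P5:
\begin{verbatim}
# Predicted sideband positions E_ZPL - sum(phonon energies) for the
# two E2 modes; values in eV, taken from the ZPL of C1 and the Raman
# shifts quoted above.
from itertools import combinations_with_replacement

E_ZPL = 2.08                    # C1 zero-phonon line (eV)
modes = {"E2(low)": 0.0312, "E2(high)": 0.0817}

for n in (1, 2):                # one- and two-phonon processes
    for combo in combinations_with_replacement(modes, n):
        shift = sum(modes[m] for m in combo)
        print(f"{'+'.join(combo):18s} -> {E_ZPL - shift:.3f} eV")
\end{verbatim}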
A histogram illustrating the energies of 20 emitters, with a bin width of \SI{50}{\milli\electronvolt}, is shown in Fig.\ref{fig:Statistical}(b). All emitters showed a broad emission shape, with a ZPL not always being resolved. Therefore, a half max (HM) value is defined, which corresponds to the energy at which each spectrum, on the higher energy side, has fallen to half its highest intensity. The same HM value was defined for spectra that have an obvious ZPL (purple) as well as for those with no obvious ZPL (orange). The histogram illustrates that the majority of the emitters have their HM between \SI{2.10}{} and \SI{2.25}{\electronvolt}, corresponding to ZPLs between \SI{2.00}{} and \SI{2.15}{\electronvolt}. This represents a smaller spectral distribution compared to color centers recently discovered in GaN, where the ZPL energy between emitters within the same sample varies over \SI{0.4}{\electronvolt}\cite{Berhane2017}. The smaller variation in the ZPL energy for these AlN emitters provides an obvious advantage for their exploitation in photonic and/or optoelectronic devices coupled to narrowband cavities or antennae. In addition, it suggests a common origin for all the emitters we have observed in AlN, with energy shifts between emitters resulting from differences in strain, local dislocation density, impurities or point defects.
The scan image in Fig.\ref{fig:Introductory}(c) shows that some defects are unstable on a time scale comparable to the \SI{10}{\milli\second} dwell time of each pixel. This is also apparent in the confocal scan around the color centre C1 shown in Fig.\ref{fig:E1Spectroscopy}(a). Intensity slices along both the X and Y axes are presented, which demonstrate the diffraction-limited size of the emission from the color centre C1. The excitation power ($P$) dependence of the emission intensity of C1 is shown in Fig.\ref{fig:E1Spectroscopy}(b). The data is fit with the relation $I = I_\infty \times P/(P+P_{sat})$, yielding a saturation intensity $I_\infty = \SI{157}{kcps}$ at $P_{sat} = \SI{1.44}{\milli\watt}$, which demonstrates the brightness of the emitters at room temperature. The emitter brightness, $>$10\textsuperscript{5} counts $s^{-1}$ at powers above saturation, is consistent with other III-nitride emitters in a high-refractive-index bulk \cite{Berhane2017,Xue2020}.
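A minimal sketch of the saturation fit (an assumed workflow with placeholder data, not our measurement records) is:
\begin{verbatim}
# Fit of I(P) = I_inf * P / (P + P_sat); the arrays are placeholders
# roughly consistent with the values quoted for C1.
import numpy as np
from scipy.optimize import curve_fit

def saturation(P, I_inf, P_sat):
    return I_inf * P / (P + P_sat)

P = np.array([0.1, 0.3, 0.6, 1.0, 1.5, 2.5, 3.5])           # mW
I = np.array([10.0, 27.0, 46.0, 64.0, 80.0, 100.0, 111.0])  # kcps

(I_inf, P_sat), _ = curve_fit(saturation, P, I, p0=[150.0, 1.5])
print(f"I_inf = {I_inf:.0f} kcps, P_sat = {P_sat:.2f} mW")
\end{verbatim}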
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figures/Fig3-E1Spectroscopy.pdf}
\caption{Photostability, autocorrelation, polarisation and power dependence of C1 at room temperature. a) Confocal scan over the emitter, with accompanying X and Y slices. b) Power dependent measurement of intensity, showing saturation at high pump laser power. c) Auto-correlation measurements at pump powers of 30$\mu$W (orange), 350$\mu$W (blue) and 3.2mW (purple), demonstrating both room temperature anti-bunching and bunching. d) Time resolved stability measurement. e) Polarisation measurement in both excitation and collection. f) Illustration of the three energy level model used to fit the data in c).}
\label{fig:E1Spectroscopy}
\end{figure}
Intensity auto-correlation measurements as a function of excitation power are presented in Fig.\ref{fig:E1Spectroscopy}(c), vertically offset for clarity. Anti-bunching on a \SI[trapambigerr=false]{4.1\pm 0.2}{\nano\second} timescale is observed in parallel with bunching on a \SI[trapambigerr=false]{208.5\pm 0.7}{\nano\second} timescale at the highest pump power. The amplitude of the bunching increases with excitation power, indicative of an internal energy structure that is more complex than a two-level emitter. We use rate equations \cite{Kurtsiefer2000} to obtain a good fit, by including a meta-stable state which ``shelves'' the excitation for a time longer than the recombination time of the excited-to-ground state transition. An illustration of the energy level system is shown in Fig.\ref{fig:E1Spectroscopy}(f), with the shelving rate given by $\tau_{12}$. The purity of the emission is given by $g^{(2)}(0) = 0.19\pm0.01$ at $P = \SI{30}{\micro\watt}$. Photon bunching at room temperature is also observed in diamond NV centers \cite{Kurtsiefer2000}, SiC \cite{Widmann2015}, h-BN \cite{Tran2016} and GaN color centers \cite{Berhane2017,Zhou2018,Nguyen2019,Xue2020}, with antibunching timescales of the same order of magnitude.
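A commonly used closed form of the three-level rate-equation solution (see e.g. Kurtsiefer et al.\ \cite{Kurtsiefer2000}) is $g^{(2)}(\tau) = 1 - (1+a)e^{-|\tau|/\tau_1} + a e^{-|\tau|/\tau_2}$; the sketch below fits this form to placeholder data and is not necessarily identical to our exact model:
\begin{verbatim}
# tau1 sets the antibunching dip, tau2 the bunching shoulder and a
# its amplitude; data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def g2_three_level(tau, a, tau1, tau2):
    t = np.abs(tau)
    return 1.0 - (1.0 + a) * np.exp(-t / tau1) + a * np.exp(-t / tau2)

tau = np.linspace(-500, 500, 201)                  # delay (ns)
g2 = g2_three_level(tau, 0.4, 4.1, 208.5) \
     + np.random.default_rng(0).normal(0, 0.02, tau.size)

(a, tau1, tau2), _ = curve_fit(g2_three_level, tau, g2,
                               p0=[0.3, 5.0, 150.0])
print(f"a = {a:.2f}, tau1 = {tau1:.1f} ns, tau2 = {tau2:.0f} ns")
\end{verbatim}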
A stability measurement, in which the number of photons incident on an avalanche photo-diode is recorded every \SI{10}{\milli\second} for a total of 60 seconds, is presented in Fig.\ref{fig:E1Spectroscopy}(d). The plot shows a 10 second snapshot of the detected intensity, showing intermittent switching between two fluorescence intensities, one centred at \SI{63}{kcps} and the other at \SI{55}{kcps}. A statistical analysis of the intensities observed over the whole 60 second measurement is presented, whereby the intensities are binned into \SI{2}{kcps} bins. The data is fit with two Gaussian distributions of the same width, where the fitted width is consistent with the shot noise $\sqrt{n}$ associated with detecting $n$ photons per sampling event. For the more frequently occurring fluorescent state, an average of 650 counts per \SI{10}{\milli\second} are detected, giving a noise value of \SI{2.5}{kcps} in the presented measurement. The occurrence of the less intense fluorescence state is \SI{9}{\%}. This instability is observed in a number of other emitters, but not all, and is reported for other III-nitride color centers. We hypothesise that this instability is not caused by shelving the carrier into the metastable state, as the switching takes place on a time scale orders of magnitude longer than the bunching observed in the $g^{(2)}(\tau)$ measurement in Fig.\ref{fig:E1Spectroscopy}(c), but rather is caused by an additional noise source such as a nearby impurity periodically charging and discharging.
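A minimal sketch of the two-state histogram analysis (placeholder data, not the recorded trace) is:
\begin{verbatim}
# Two Gaussians sharing a single width; the fitted width can then be
# compared with the shot-noise expectation sqrt(n).
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, a2, mu2, sigma):
    g = lambda mu: np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return a1 * g(mu1) + a2 * g(mu2)

rng = np.random.default_rng(1)
counts = np.concatenate([rng.normal(63, 2.5, 5400),  # bright state
                         rng.normal(55, 2.5, 600)])  # dim state (9%)
hist, edges = np.histogram(counts, bins=np.arange(45, 75, 2))
centres = 0.5 * (edges[:-1] + edges[1:])

p, _ = curve_fit(two_gaussians, centres, hist,
                 p0=[800, 63, 100, 55, 2])
print(f"mu1 = {p[1]:.1f}, mu2 = {p[3]:.1f}, sigma = {p[4]:.2f} kcps")
\end{verbatim}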
A measurement of the emission intensity from C1 as the linear polarisation of the excitation or collection beam is rotated in the plane of the sample is presented in Fig.\ref{fig:E1Spectroscopy}(e). For the collection measurement, the excitation polarisation is aligned to its maximum. Both excitation and collection data demonstrate dipole-like emission patterns. The intensity ratio between the maximum and minimum is 5:1 for the excitation (absorption) and 70:1 for the collection (emission). The linear polarisation observed for all emitters suggests their excitonic dipole is always orientated in the plane of the sample. An angular difference between the maxima in absorption and emission polarisations is consistent with other reports on III-nitride color centers \cite{Berhane2017,Zhou2018,Xue2020}. We hypothesise that this is due to a multi-level energy structure with excitation and emission transitions having orthogonal polarisation selection rules in the plane of the sample. The absorption and emission dipoles for C1 and C3 are orientated parallel to the [$\bar{1}2\bar{1}0$] plane and the m-plane [$10\bar{1}0$] respectively. In addition, polarisation measurements for C2 demonstrate linearly-polarised collection offset by \SI{35}{\degree} from the m-plane, along the [$11\bar{2}0$] plane.
To gain insight into the temperature dependent properties of the emitters, the sample was cooled to \SI{4}{\kelvin} using a closed-cycle helium cryostat. A spectrum of emitter C1 at \SI{4}{\kelvin} is presented in Fig.\ref{fig:LowTemp}(a). A sharpening of the ZPL is observed, as coupling with phonon modes is reduced at lower temperature. The intensity of the emission decreases by a factor of 2 on reducing the temperature from 300 to \SI{4}{\kelvin}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figures/Fig4-LowTempSpectra.pdf}
\caption{Temperature dependence, spectral and autocorrelation measurements of C1 at low temperature. a) Photoluminescence spectrum of C1 at \SI{4}{\kelvin}. b) Temperature dependence of the zero phonon line energy and full width half maximum between \SI{4}{\kelvin} and \SI{300}{\kelvin}. c) Autocorrelation measurement at \SI{4}{\kelvin} under 532 nm excitation at \SI{4}{\milli\watt}.}
\label{fig:LowTemp}
\end{figure}
The ZPL FWHM at \SI{4}{\kelvin} is \SI[trapambigerr=false]{2.4\pm0.1}{\milli\electronvolt}, which is an order of magnitude above the resolution limit of the spectrometer. The temperature dependence of the FWHM (orange) and $E_{ZPL}$ (purple) of the ZPL is presented in Fig.\ref{fig:LowTemp}(b). The FWHM follows a Bose-activated broadening from 10 to \SI{300}{\kelvin}, described by $\gamma = \gamma_0 + \beta/(e^{E/k_B T}-1)$, where $\gamma_0$ is a temperature-independent broadening, $E$ is the activation energy and $\beta$ is the coupling coefficient. A good fit to the data is achieved with the parameters $\gamma_0$ = \SI[trapambigerr=false]{2.9\pm0.04}{\milli\electronvolt}, $\beta$ = \SI[trapambigerr=false]{29.7\pm0.4}{\milli\electronvolt} and $E$ = \SI[trapambigerr=false]{52.0\pm0.3}{\milli\electronvolt}. The second order correlation measurement in Fig.\ref{fig:LowTemp}(c), taken at \SI{4.0}{\milli\watt} excitation power, confirms that the photon statistics at low temperature are comparable to the behaviour measured under ambient conditions (Fig.\ref{fig:E1Spectroscopy}(c)).
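A minimal sketch of the linewidth fit (placeholder data) is:
\begin{verbatim}
# gamma(T) = gamma0 + beta / (exp(E / (kB*T)) - 1), with kB in meV/K
# so that E comes out in meV.
import numpy as np
from scipy.optimize import curve_fit

KB = 0.08617  # Boltzmann constant (meV/K)

def bose_broadening(T, gamma0, beta, E):
    return gamma0 + beta / (np.exp(E / (KB * T)) - 1.0)

T = np.array([10., 50., 100., 150., 200., 250., 300.])   # K
fwhm = bose_broadening(T, 2.9, 29.7, 52.0) \
       + np.random.default_rng(2).normal(0, 0.1, T.size) # meV

(g0, beta, E), _ = curve_fit(bose_broadening, T, fwhm,
                             p0=[3.0, 30.0, 50.0])
print(f"gamma0 = {g0:.1f} meV, beta = {beta:.1f} meV, E = {E:.1f} meV")
\end{verbatim}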
\section{Conclusion}
Quantum emission from the sample is attributed to point-like defects embedded deep within the band gap of AlN. The below-band-gap excitation used here suggests that the defect states are being excited directly. The nature of the emitters is unknown, but several theoretical studies have predicted and studied spin-dependent defects in AlN \cite{Tu2013,Seo2016,Varley2016,Bowes2019}. Secondary ion mass spectrometry (SIMS) measurements from the manufacturer (Dowa Electronics Materials Co.) show trace levels of hydrogen, carbon, oxygen and silicon in the sample. In the future, controlled introduction of other impurities via direct growth or implantation may enable us to engineer desirable spin systems in AlN. Control of the spin states may be possible via the piezoelectric effect, similar to what has been achieved for emitters in SiC \cite{Falk2014}, or through resonant optical fields. The cross-polarised maxima in emission and absorption present an ideal arrangement for efficient polarisation-filtered resonant control, which conventionally limits the efficiency to \SI{50}{\%} due to the perpendicular excitation and detection optics required to isolate the laser \cite{Wang2019}. Owing to significant existing investment in AlN transducers and sensors, this material may be able to compete with diamond and SiC as a viable platform for quantum technologies if it can be shown to host spin-dependent emission from the color centers: in these other materials, optical manipulation and read-out of color centre spin states \cite{Neumann2010,Naydenov2011} has enabled sensitive nanoscale sensing \cite{Maze2008,Kraus2014} and promising room temperature qubits.
\section*{Methods}
Room temperature measurements were taken on a confocal microscope with a 0.9NA microscope objective. The collection efficiency is calculated to be \SI{5.1}{\percent} into the first lens \cite{Plakhotnik1995}, assuming a dipole orientated in the plane and a refractive index $n_{AlN}=2.15$ at \SI{600}{\nano\meter}. Low temperature and temperature dependent measurements were taken in an AttoDry 1000 closed-cycle helium cryostat with an internal 0.68NA aspheric lens and a custom optical head. The collection efficiency into the first lens is \SI{2.5}{\percent}. Single-mode fibre is used to couple the excitation and collection into and out of both the room temperature and cryostat microscopes.
Room temperature confocal images were obtained by scanning the laser using dual-axis galvanometric mirrors within an optical 4F system. Polarisation measurements were performed using a linear polariser followed by a half-wave plate in the excitation path and a thin-film polariser in the collection path.
Time resolved measurements were taken using two SPCM-AQRH silicon avalanche photo-diodes, with unbalanced dark count rates of \SI{50}{} and \SI{520}{cps} respectively. Autocorrelation measurements were taken using a 45:55 fibre beamsplitter. The y-axis offset at time zero in the autocorrelation measurements can be accounted for by including the unbalanced dark-count rates, $B_{1}$ and $B_{2}$, of the two detectors, as shown in Eq.\ref{eq:DarkCounts}. $S_{1}$ and $S_{2}$ are the signal count rates on the detectors, \SI{4700}{} and \SI{3650}{cps} respectively, and $C_N(\tau)$ is the coincidence amplitude normalised at large delay times. After correction, for C1 under 30$\mu$W excitation, we find $g^{(2)}(0) = 0.06$.
\begin{equation}
g^{(2)}(\tau) = C_N(\tau)\frac{S_1 S_2 + 2S_1B_2+2S_2B_1+B_1B_2}{S_1S_2}
- \frac{2S_1B_2+2S_2B_1+B_1B_2}{S_1S_2}
\label{eq:DarkCounts}
\end{equation}
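Evaluating Eq.\ref{eq:DarkCounts} with the count rates above gives the size of this correction; in the short sketch below the raw zero-delay amplitude is illustrative only:
\begin{verbatim}
# Dark-count correction of the zero-delay amplitude, following
# Eq. (DarkCounts) with the stated signal and dark-count rates.
S1, S2 = 4700.0, 3650.0   # signal rates (cps)
B1, B2 = 50.0, 520.0      # dark-count rates (cps)

def g2_corrected(C_N):
    noise = 2 * S1 * B2 + 2 * S2 * B1 + B1 * B2
    return C_N * (S1 * S2 + noise) / (S1 * S2) - noise / (S1 * S2)

# e.g. a raw normalised amplitude of about 0.28 corrects to ~0.06:
print(f"{g2_corrected(0.28):.3f}")
\end{verbatim}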
A frequency doubled continuous-wave \SI{532}{\nano\meter} Nd:YAG laser was used for excitation in all measurements. Optical filtering was achieved using an ultra-steep \SI{532}{\nano\meter} long-pass filter, to enable Raman spectroscopy, and a \SI{650}{\nano\meter} short-pass filter, to isolate the color centre emission from the Cr$^{3+}$ emission (\SI{1.78}{\electronvolt}) of the sapphire \cite{Berhane2017}. For spectral measurements beyond \SI{650}{\nano\meter}, as presented in Fig.\ref{fig:Statistical}(a), the short-pass filter is removed and we background-correct using a spectrum taken from an area close to the emitter, which represents emission from the substrate. In the future, the study of free-standing AlN or AlN-on-silicon samples would eliminate the need for this spectral filtering.
\begin{acknowledgement}
The authors gratefully acknowledge the financial support provided by the Sêr Cymru National Research Network in Advanced Engineering and Materials, the European Union's Horizon 2020 research and innovation programme, and the Royal Society for Research Grant RGS\textbackslash R1\textbackslash 191251 and EPSRC grant EP/T017813/1.
\end{acknowledgement}
Since submission of this manuscript, the authors have become aware of a report of single photon emission from defects in AlN by Xue et al.\ \cite{Xue2020}.
\section{Keywords}
Aluminum Nitride, Single Photon Source, Quantum Optics, Room Temperature
\section{Introduction}
Recently, there has been renewed interest in the group III-nitrides as platforms for quantum optics. In the last few years, point-like single photon sources have been reported in hexagonal boron nitride\cite{Tran2016}, gallium nitride (GaN)\cite{Berhane2017,Zhou2018,Nguyen2019}, and very recently in Aluminum Nitride \cite{Xue2020,Lienhard2017}. By virtue of their deep confinement energies, these color centers demonstrate anti-bunching even at room temperature, adding to a select group of solid state materials that host high temperature quantum emitters, such as diamond \cite{Gruber1997,Kurtsiefer2000,Naydenov2011} and silicon carbide (SiC)\cite{Falk2014,Lohrmann2015,Widmann2015}. The commercial applications of the nitrides means there is considerable expertise in processing and the possibility of epitaxial deposition of complex heterostructures, paving the way to cavity enhancement and optoelectronic devices.
AlN is a semiconductor with a large band gap of \SI{6.2}{\electronvolt}, making it a promising platform for integrated quantum photonic applications \cite{Pernice2012,Xiong2012,Lu2018,Wan2019} due to its transparency in the visible spectrum, strong second-order nonlinearity\cite{Pernice2012}, low-loss high-speed opto-electric phase modulation\cite{Xiong2012}, mature device fabrication and available high-purity substrates. In its wurtzite phase AlN has a hexagonal unit cell, shown in Fig.\ref{fig:Introductory}(a), that lacks inversion-symmetry along the [0001] crystallographic axis. This, along with the finite dipole moment associated with the aluminum-nitrogen bond, leads to internal electric polarisation and piezoelectric effect along [0001]. Recently, AlN has been predicted to host atomic like defects with optically-addressable spin states \cite{Tu2013,Seo2016,Varley2016,Bowes2019}, which would be ideal for quantum technologies that require an interface between flying photonic qubits and stationary trapped spin qubits. The emitters we observe here have a narrower distribution in zero-phonon line energy and a prominent phonon sideband, unlike those in the report of Xue $et$ $al$ \cite{Xue2020}. This suggests the emitters in our sample all arise from the same crystal defect, and have some physical properties consistent with the one emitter reported in the abstract by Lienhard $et$ $al$ \cite{Lienhard2017}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\linewidth]{Figures/Fig1-Introductory.pdf}
\caption{Confocal mapping of point like emitters in AlN. a) Crystal structure of aluminum nitride. The polar crystal axis [0001] is indicated. b) Optical image of the sample with titanium markers. c) Intensity scan map of the sample. The three emitters, C1-3, studied in this work are labelled.}
\label{fig:Introductory}
\end{figure}
\section*{Results and Discussion}
The sample studied within this work is a \SI{1}{\micro\meter} thick epilayer of AlN-on-sapphire. A confocal scanned optical image of the sample shown in Fig.\ref{fig:Introductory}(c) reveals the presence of point-like emitters. A titanium mask on the sample surface is visible, which enables identification of an emitters position. The emitter density is estimated to be \SI{1.25}{\micro\meter^{-2}}, which is sufficient for individual emitters to be optically addressed with a diffraction limited optical spot.
Three color centers, C1-C3, are labelled on the scan map and subsequently studied in detail in Fig.\ref{fig:Statistical}. Spectra of the three emitters shown in Fig.\ref{fig:Statistical}(a) have an obvious spectral peak labelled as a zero phonon line (ZPL) at \SI{2.08}{}, \SI{2.12}{} and \SI{2.09}{\electronvolt}, respectively. This is attributed to the optical excited-to-ground state transition of the color centers without high energy phonon emission, by analogy with the ZPL observed in the NV$^-$ centre in diamond, which has a room temperature ZPL at \SI{1.95}{\electronvolt} \cite{Gruber1997}. In addition to the ZPL peak, each emitters' spectrum demonstrates coupling to high-energy phonons apparent as a broad phonon side band (PSB), spanning about \SI{0.6}{\electronvolt} to the red. Such a broad, prominent, PSB has not been reported in other nitride emitters \cite{Berhane2017,Nguyen2019} but qualitatively resembles that of the NV$^-$ in diamond \cite{Kurtsiefer2000}.
The room temperature FWHM of the ZPLs are \SI[trapambigerr=false]{8.3\pm0.3}{}, \SI[trapambigerr=false]{11.7\pm0.2}{} and \SI[trapambigerr=false]{9.4\pm0.2}{\milli\electronvolt}, respectively. Broadening of the ZPL at room temperature is consistent with coupling to low energy acoustic phonon modes and spectral jitter. The ZPL contains \SI{3.2}{\%} of the total intensity on average. However, enhancement of the light emitted into the ZPL in a phonon-broadened emitter by weak coupling to an optical cavity has been demonstrated in other materials \cite{Faraon2011}. The spectra for the emitters C2 and C3 have significantly different PSBs. The cause of the change in spectral shapes may result from emitters located in different strain fields or in proximity to crystal dislocations and impurities.
The quantum nature of the emitters is proven in Fig.\ref{fig:Statistical}(a) which shows the result of Hanbury Brown and Twiss auto-correlation measurements, under \SI{532}{\nano\meter} excitation at \SI{350}{\micro\watt} at room temperature. The emission was filtered between \SI{1.91}{} and \SI{2.33}{\electronvolt}, to suppress the fluorescence signal from the Cr$^3+$ impurity in the sapphire substrate \cite{Jardin1996}. Anti-bunching below the $g^{(2)}(\tau)<0.5$ limit is observed. $g^{(2)}(0)$ for the three emitters C1-3 are \SI[trapambigerr=false]{0.17\pm 0.01}, \SI[trapambigerr=false]{0.20\pm 0.01} and \SI[trapambigerr=false]{0.23\pm 0.02}, respectively.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/Fig2-Statistical.pdf}
\caption{Room temperature spectroscopy of color centers in AlN. a) Spectral and autocorrelation measurements of the three emitters, labelled C1,C2 and C3 in the intensity scan map in Fig.\ref{fig:Introductory}(c). b) Histogram showing the energy in which the spectra for 20 emitters has fallen to half its maximum intensity, on the high energy side. The orange data represents emitters where an obvious zero phonon line cannot be identified. The spectrum for C1 is overlaid, with phonon-shifted energies of the zero phonon line labelled. c) Raman spectrum showing phonon modes in the AlN on sapphire substrate. }
\label{fig:Statistical}
\end{figure}
The phonon energies available within AlN can be determined from a Stokes-shifted Raman measurement, as is shown in Fig.\ref{fig:Statistical}(c). The measurement can provide an insight into Raman-active phonon modes: it is possible to identify a number of vibrational energies from the both AlN and sapphire \cite{Davydov1998,McNeil1993}. The response of the sapphire substrate is to be expected given to the limited axial resolution of the confocal microscope with respect to the AlN thickness. Using the relation $\Delta E=E_{ZPL}-E_{PX}$, where $E_{PX}$ is peak energy for P1-5 in the spectrum for emitter C1, it is possible to correlate the Raman shift due to the vibrational modes from the AlN with the peak locations. Ignoring the contributions due to vibrational modes from the Sapphire, the Raman shifted transitions $\Delta E$; $E2^{(LOW)}$, $A1(TO)$, $E2^{(HIGH)}$ and $A1(LO)$ are given as \SI{31.2\pm0.1}{}, \SI{76.7\pm0.3}{}, \SI{81.7\pm0.01}{} and \SI[trapambigerr=false]{110.3\pm0.1}{\milli\electronvolt} respectively. Therefore, we hypothesise that the peaks P1 and P3 arise due to phonon assisted replicas from coupling to E2$^{(LOW)}$ and E2$^{(HIGH)}$ respectively, as well as peaks P2 and P5 from the corresponding two-phonon processes. In addition, P4 may be described by a mixed two phonon emission involving $E2^{(LOW)}+E2^{(HIGH)}$. These phonon-replica peaks are illustrated in Fig.\ref{fig:Statistical}(b). We hypothesise the higher energy peaks arise from interactions with multiple-phonon-assisted transitions and the broadening from the small defect size, coupling to a large range of the phonon density of states \cite{Davydov1998} in the brillouin zone. Comparable peaks are not as pronounced in the spectra from C2 and C3, due to the contributions merging into a single PSB. The reduced dimensionality of the h-BN system results in a strikingly different phonon-spectrum consisting of replicas of the ZPL spaced by optical photon frequencies with the fraction of the total emission via the ZPL as high as 0.82 \cite{Tran2016}. Conversely previous reports on color centers in GaN\cite{Berhane2017,Nguyen2019} and AlN\cite{Xue2020} do not report high energy phonon sidebands, where the emitters reported here bare a closer resemblance to the NV center in diamond \cite{Kurtsiefer2000}.
A histogram illustrating the energies of 20 emitters, with a bin width of \SI{50}{\milli\electronvolt}, is shown in Fig.\ref{fig:Statistical}(b). All emitters showed a broad emission shape with a ZPL not always being resolved. Therefore, a half max (HM) value is defined which corresponds to the energy at which each spectrum, at the higher energy side, has fallen to half its highest intensity. The same HM value was defined for spectra that have an obvious ZPL (purple) as well as for no obvious ZPL (orange). The histogram illustrates that the majority of the emitters have their HM between \SI{2.10}{} and \SI{2.25}{\electronvolt}, corresponding to ZPLs between \SI{2.00}{} and \SI{2.15}{\electronvolt}. This represents a smaller spectral distribution as compared to color centers recently discovered in GaN, where the ZPL energy between emitters within the same sample varies over \SI{0.4}{\electronvolt}\cite{Berhane2017}. The smaller variation in the ZPL energy for these AlN emitters provides obvious advantage for their exploitation in photonic and/or optoelectronic devices coupled to narrowband cavities or antennae. In addition, it suggests a common origin for all the emitters we have observed in AlN, with energy shifts between emitters resulting from differences in strain, local dislocation density, impurities or point defects.
The scan image in Fig.\ref{fig:Introductory}(c) shows that some defects are unstable on a time scale comparable to the 10ms dwell time of each pixel. This is also apparent in the confocal scan around the color centre C1 shown in Fig.\ref{fig:E1Spectroscopy}(a). Intensity slices in both the X and Y axis are presented which demonstrate the diffraction limited size of the emission from the color centre C1. The excitation power (P) dependence of the emission intensity of C1 is shown in Fig.\ref{fig:E1Spectroscopy}(b). The data is fit with the relation $I = I_\infty \times P/(P+P_{sat})$. The saturation intensity $I_{sat}=$\SI{157}{kcps} at $P_{sat} =$ \SI{1.44}{\milli\watt}, which demonstrates the brightness of the emitters at room temperature. The emitter brightness, $>$10\textsuperscript{5} counts $s^{-1}$ at high saturation power, is consistent with other III-Nitride emitters in a high-refractive index bulk \cite{Berhane2017,Xue2020}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figures/Fig3-E1Spectroscopy.pdf}
\caption{Photostability, autocorrelation, polarisation and power dependence of C1 at room temperature. a) Confocal scan over the emitter, with an accompanying X and Y slice. b) Power dependent measurement of intensity showing saturation of intensity at high pump laser power. c) Auto-correlation measurement at a pump power of; 30$\mu$W (orange), 350$\mu$W (blue) and 3.2mW (purple), demonstrating both room temperature anti-bunching and bunching. d) Time resolved stability measurement. e) Polarisation measurement in both excitation and collection. f) Illustration of the three energy level model used to fit the data in c).}
\label{fig:E1Spectroscopy}
\end{figure}
Intensity auto-correlation measurements as a function of excitation power are presented in Fig.\ref{fig:E1Spectroscopy}(c), vertically offset for clarity. Anti-bunching on a \SI[trapambigerr=false]{4.1\pm 0.2}{\nano\second} timescale is observed in parallel with bunching on a \SI[trapambigerr=false]{208.5\pm 0.7}{\nano\second} nanosecond timescale at the highest pump power. The amplitude of the bunching is greater at increasing excitation power, indicative of an internal energy structure that is more complex than a 2-level emitter. We use rate equations \cite{Kurtsiefer2000} to obtain a good fit, by including a meta-stable state, which "shelves" the excitation for a finite time greater than the recombination rate of the excited-to-ground state transition. An illustration of the energy level system is shown in Fig.\ref{fig:E1Spectroscopy}(f), the shelving rate being given by $\tau_{12}$. The purity of the emission is given by $g^{(2)}(0)$ = 0.19$\pm$0.01 at P= 30$\mu$W. Photon bunching at room temperature is also observed in diamond NV centers \cite{Kurtsiefer2000}, SiC \cite{Widmann2015}, h-BN\cite{Tran2016} and GaN color centers\cite{Berhane2017,Zhou2018,Nguyen2019,Xue2020}, with antibunching timescales having the same order of magnitude.
A stability measurement, where the number of photons incident on an avalanche photo-diode is recorded every \SI{10}{\milli\second} for a total of 60 seconds, is presented in Fig.\ref{fig:E1Spectroscopy}(d). The plot shows a 10 second snapshot of the intensity detected, showing intermittent variations between two fluorescence intensities, one centred at \SI{63}{kcps} and the other at \SI{55}{kcps}. A statistical analysis of the intensities seen over the whole 60 second measurement is presented, whereby the intensities are binned into \SI{2}{kcps} bins. The data is fit with two Gaussian distributions with the same width, where the $HWHM\times e^{-1/2}$ variance represents the shot noise of $ \sqrt{n}$ associated with detecting $n$ photons per sampling event. For the more frequently occuring fluorescent state, an average of 650 counts per \SI{10}{\milli\second} are detected, giving a noise value of \SI{2.5}{kcps} in the presented measurement. The occurrence of the less intense fluorescence state is \SI{9}{\%}. This instability is observed in a number of other emitters, but not all, and is reported for other III-Nitride color centers We hypothesise that this instability is not caused by shelving the carrier into the metastable state, as the switching has a time scale orders of magnitude greater than the bunching observed in the $g^{(2)}(t)$ measurement in Fig.\ref{fig:E1Spectroscopy}(c), but rather is a caused by an additional noise source such as a nearby impurity periodically charging and discharging.
A measurement of the emission intensity from C1 as the linear polarisation of the excitation or collection beam are rotated in the plane of the sample is presented in Fig.\ref{fig:E1Spectroscopy}(e). For the collection measurement, the excitation polarisation is aligned to its maximum. Both excitation and collection data demonstrate dipole-like emission patterns. The ratio in intensity between the maximum and minimum is 5:1 and 70:1 for both the excitation and collection, in the absorption and emission, respectively. This linear polarisation observed for all emitters suggests their excitonic dipole is always orientated in the plane of the sample. An angular difference between the maxima in absorption and emission polarisations is consistent with other reports on III-Nitride color centers \cite{Berhane2017,Zhou2018,Xue2020}. We hypothesise that this is due to a multi-level energy structure with excitation and emission transitions having orthogonal polarisation selection rules in the plane of the sample. The absorption and emission dipoles for C1 and C3 are orientated parallel to the [$\bar{1}2\bar{1}0$] plane and m-plane [$10\bar{1}0$] respectively. In addition, polarisation measurements for C2 demonstrate linearly-polarised collection \SI{35}{\degree} offset to the m-plane, along the [$11\bar{2}0$] plane.
To gain insight into the temperature dependent properties of the emitters, the sample was cooled to \SI{4}{\kelvin} using a closed-cycle helium cryostat. A spectrum of emitter C1 at \SI{4}{\kelvin} is presented in Fig.\ref{fig:LowTemp}(a). A sharpening of the ZPL is observed, as coupling with phonon modes is reduced at lower temperature. The intensity of the emission decreases by a factor of 2 on reducing the temperature from \SI{300}{\kelvin} to \SI{4}{\kelvin}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figures/Fig4-LowTempSpectra.pdf}
\caption{Temperature dependence, spectral and autocorrelation measurements of C1 at low temperature. a) Photoluminescence spectrum of C1 at \SI{4}{\kelvin}. b) Temperature dependence of the zero phonon line energy and full width half maximum between \SI{4}{\kelvin} and \SI{300}{\kelvin}. c) Autocorrelation measurement at \SI{4}{\kelvin} under \SI{532}{\nano\meter} excitation at \SI{4}{\milli\watt}.}
\label{fig:LowTemp}
\end{figure}
The ZPL FWHM at \SI{4}{\kelvin} is \SI[trapambigerr=false]{2.4\pm0.1}{\milli\electronvolt}, which is an order of magnitude above the resolution limit of the spectrometer. The temperature dependence of the FWHM (orange) and $E_{ZPL}$ (purple) of the ZPL is presented in Fig.\ref{fig:LowTemp}(b). The FWHM follows a Bose-activated broadening from \SI{10}{\kelvin} to \SI{300}{\kelvin}, described by $\gamma = \gamma_0 + \beta/(e^{E/k_B T}-1)$, where $\gamma_0$ is a temperature-independent broadening, $E$ is the activation energy and $\beta$ is the coupling coefficient. A good fit to the data is achieved with the parameters $\gamma_0$ = \SI[trapambigerr=false]{2.9\pm0.04}{\milli\electronvolt}, $\beta$ = \SI[trapambigerr=false]{29.7\pm0.4}{\milli\electronvolt} and $E$ = \SI[trapambigerr=false]{52.0\pm0.3}{\milli\electronvolt}. The second order correlation measurement in Fig.\ref{fig:LowTemp}(c), taken at \SI{4.0}{\milli\watt} excitation power, confirms that the photon statistics at low temperature are comparable to the behaviour measured under ambient conditions (Fig.\ref{fig:E1Spectroscopy}(c)).
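As an illustration of the fitting procedure (synthetic data, illustrative parameter names):
\begin{verbatim}
# Minimal sketch of the Bose-activated broadening fit,
# gamma(T) = gamma0 + beta/(exp(E/kB*T) - 1); data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def linewidth(T, gamma0, beta, E_meV):
    # all energies in meV; kB = 0.08617 meV/K
    kT = 8.617e-2 * T
    return gamma0 + beta / (np.exp(E_meV / kT) - 1.0)

T = np.linspace(10, 300, 30)          # temperature in K
data = linewidth(T, 2.9, 29.7, 52.0)  # hypothetical measurement
popt, pcov = curve_fit(linewidth, T, data, p0=(3.0, 30.0, 50.0))
\end{verbatim}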
\section{Conclusion}
Quantum emission from the sample is attributed to point-like defects embedded deep within the band gap of AlN. The below band gap excitation used here suggests that the defect states are being excited directly. The nature of the emitters is unknown, but several theoretical studies have predicted and studied spin-dependent defects in AlN \cite{Tu2013,Seo2016,Varley2016,Bowes2019}. Secondary ion mass spectrometry (SIMS) measurements from the manufacturer (Dowa Electronics Materials Co.) show trace levels of hydrogen, carbon, oxygen and silicon in the sample. In the future, controlled introduction of other impurities via direct growth or implantation may enable us to engineer desirable spin systems in AlN. Control of the spin states may be possible via the piezoelectric effect, similar to what has been achieved in emitters in SiC \cite{Falk2014}, or through resonant optical fields. The cross-polarised maxima in emission and absorption dipoles present an ideal arrangement for efficient polarisation-filtered resonant control, which is conventionally limited to \SI{50}{\percent} efficiency due to the perpendicular excitation and detection optics required to isolate the laser \cite{Wang2019}. Owing to significant existing investment in AlN transducers and sensors, this material may be able to compete with diamond and SiC as a viable platform for quantum technologies if it can be shown to host spin-dependent emission from the color centers: in these other materials, optical manipulation and read-out of color centre spin states \cite{Neumann2010,Naydenov2011} has enabled sensitive nanoscale sensing \cite{Maze2008,Kraus2014} and promising room temperature qubits.
\section*{Methods}
Room temperature measurements were taken on a confocal microscope with a 0.9NA microscope objective. The collection efficiency is calculated to be \SI{5.1}{\percent} into the first lens \cite{Plakhotnik1995}, assuming a dipole orientated in the plane of the sample and a refractive index $n_{AlN}=2.15$ at \SI{600}{\nano\meter}. Low temperature and temperature dependent measurements were taken in an AttoDry 1000 closed-cycle helium cryostat with an internal 0.68NA aspheric lens and a custom optical head. The collection efficiency into the first lens is \SI{2.5}{\percent}. Single mode fibre is used to couple the excitation and collection into/out of both the room temperature and cryostat microscopes.
Room temperature confocal images were obtained by scanning the laser using dual-axis galvanometric mirrors within an optical 4f system. Excitation and collection polarisation measurements were performed using a linear polariser followed by a half-wave plate in the excitation path and a thin film polariser in the collection path.
Time resolved measurements were taken using two SPCM-AQRH silicon avalanche photo-diodes, with unbalanced dark count rates of \SI{50}{cps} and \SI{520}{cps} respectively. Autocorrelation measurements were taken using a 45:55 fiber beamsplitter. The y-axis offset at time zero in the autocorrelation measurements can be accounted for by including the unbalanced dark-count rates, $B_{1}$ and $B_{2}$, on the two detectors, as shown in Eq.\ref{eq:DarkCounts}. $S_{1}$ and $S_{2}$ are the signal count rates on the detectors, \SI{4700}{cps} and \SI{3650}{cps} respectively, and $C_N(\tau)$ is the normalised coincidence amplitude at large delay times. After correction, for C1 under \SI{30}{\micro\watt} excitation, we find $g^{(2)}(0) = 0.06$.
\begin{equation}
g^{(2)}(\tau) = C_N(\tau)\frac{S_1 S_2 + 2S_1B_2+2S_2B_1+B_1B_2}{S_1S_2}
- \frac{2S_1B_2+2S_2B_1+B_1B_2}{S_1S_2}
\label{eq:DarkCounts}
\end{equation}
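For reference, Eq.\ref{eq:DarkCounts} transcribes directly to code; the default rates below are those quoted above, and \texttt{cn} is the measured normalised coincidence amplitude at the delay of interest:
\begin{verbatim}
# Direct transcription of Eq. (DarkCounts); rates in counts/s.
def g2_corrected(cn, s1=4700, s2=3650, b1=50, b2=520):
    cross = 2*s1*b2 + 2*s2*b1 + b1*b2   # ~5.3e6 for the quoted rates
    return cn * (s1*s2 + cross) / (s1*s2) - cross / (s1*s2)
\end{verbatim}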
A frequency doubled continuous-wave \SI{532}{\nano\meter} Nd:YAG laser was used for excitation in all measurements. Optical filtering was achieved using an ultra-steep \SI{532}{\nano\meter} longpass filter to enable Raman spectroscopy and a \SI{650}{\nano\meter} shortpass filter to isolate the color centre emission from the Cr$^{3+}$ emission (\SI{1.78}{\electronvolt}) from the sapphire \cite{Berhane2017}. For spectral measurements beyond \SI{650}{\nano\meter}, as presented in Fig.\ref{fig:Statistical}(a), the \SI{650}{\nano\meter} shortpass filter is removed and we background-correct using a spectrum taken from an area close to the emitter, which represents emission from the substrate. In the future, the study of free-standing AlN or AlN-on-silicon samples would eliminate the need for this spectral filtering.
\begin{acknowledgement}
The authors thank the financial support provided by the Sêr Cymru National Research Network in Advanced Engineering and Materials, the European Union's Horizon 2020 research and innovation program and the Royal Society for Research Grant RGS\textbackslash R1\textbackslash 191251 and EPSRC grant EP/T017813/1.
\end{acknowledgement}
Since submission of this manuscript, the authors were made aware of a report of single photon emission from defects in AlN by Xue \textit{et al.} \cite{Xue2020}.
\section{On MACs for sensor authentication}
\label{constr:notmacs}
One might suggest using MAC authentication in our scheme instead of one-time hash-based signatures: MACs might be slightly more efficient in terms of the computation cost of generating a signature, are simpler to use and do not expire. The question of whether it is preferable to use symmetric primitives in resource-constrained IoT devices instead of public key cryptography has been raised in academic works~\cite{DBLP:conf/ccs/OzmenY17}, and several motivations for providing efficient public key cryptography techniques in such devices were outlined, most of which also apply to our system, as follows.
Firstly, signatures provide non-repudiation, which, as discussed previously, is a required security property (Section \ref{sec-nonrepud}). Although a way to achieve non-repudiation through MACs could be to use a separate MAC key for each sensor, each key would need to be shared with each group aggregator separately, since all aggregators should be able to verify data from all sensors in the group. This would increase the attack surface, since an attacker compromising any aggregator could also send bogus data for all sensors. Considering also that aggregators might have to verify data from a great number of sensors, our hash-based verification cost (which involves one hash operation) is cheaper than one MAC operation. Although for sensors a MAC operation is cheaper than a hash-based signature, as we show in Section \ref{measurements}, a hash-based signature, which involves a few hashes and a Quicksort operation, is still relatively efficient even for the weakest types of sensors.
Secondly, our chain hash-based scheme has built-in ``replay protection" against an attacker, since each signature is by definition valid only once. A MAC scheme would require extra layers of protection (nonces and/or timers) against replay attacks.
Lastly, by using our hash-based signature scheme we enable public verifiability of signed sensor data on the blockchain, even by entities not authorized to participate in the system.
\section{Chain-Based Hash Signatures}
\label{apdx:chainSign}
\subsection{Digital Signatures.}
A digital signature scheme consists of the following algorithms \cite{Katz:2014:IMC:2700550}:
\begin{itemize}
\item $\signgen{\publickey{}{}}{\secretkey{}{}}$: Outputs a pair of keys $(\publickey{}{},\secretkey{}{})$.
\item $\sign{\secretkey{}{}}{m}{\sigma}$: Takes as input a private key $\secretkey{}{}$ and a message $m$ and outputs a signature $\sigma$.
\item $\svrfy{\publickey{}{}}{m}{\sigma}$: Takes as input a public key $\publickey{}{}$, a message $m$ and a signature $\sigma$, and outputs a bit $b$ where $b=1$ indicates successful verification.
\end{itemize}
A digital signature is considered secure if an adversary $\mathcal{A}$ cannot forge a signature on a message even after adaptively receiving signatures on messages of its choice. To formalize the security definition we first describe the following experiment $\mathsf{SigForge(\lambda)}$:
\begin{enumerate}
\item $\signgen{\publickey{}{}}{\secretkey{}{}}$
\item $\mathcal{A}$ on input $(\publickey{}{})$ queries the signing oracle a polynomial number of times $q$. Let $Q: [m_{i},\sigma_{i}]_{i=1}^{q}$ be the set of all such queries.
\item $\mathcal{A}$ outputs $(m^{*},\sigma^{*})$.
\item $\mathcal{A}$ wins if $\mathsf{SVrfy}(\publickey{}{},m^{*},\sigma^{*}) = 1$ where $m^{*} \notin [m_{i}]_{i=1}^{q}$, in which case $\mathsf{SigForge}$ outputs ``1"; else it outputs ``0".
\end{enumerate}
\begin{defn}
A digital signature scheme is existentially unforgeable under an adaptive chosen-message attack, if for all PPT $\mathcal{A}$, $\pr{\mathsf{SigForge(\lambda) = 1}}$ is negligible in $\lambda$.
\end{defn}
\subsection{One-time signatures} A digital signature scheme that can be used to sign only one message per key pair is called a one-time signature (OTS) scheme.
\begin{defn}
\label{prel:onetime-sigs-defn}
A one-time digital signature scheme is existentially unforgeable under an adaptive chosen-message attack, if for all PPT $\mathcal{A}$ and for $q\leq 1$, $\pr{\mathsf{SigForge(\lambda) = 1}}$ is negligible in $\lambda$.
\end{defn}
\subsection{Hash functions.}
An (unkeyed) hash function $y:=h(m)$ on input of a message $m$ outputs a digest $y$.
A cryptographic hash function is considered secure if the probability of finding collisions is negligible (i.e. it is \emph{collision resistant}). More formally, we consider the following experiment $\mathsf{HashColl}$ \cite{Katz:2014:IMC:2700550}:
\begin{enumerate}
\item $\mathcal{A}$ picks values $x, x' \in \{0,1\}^{*}$ s.t. $x \neq x'$.
\item $\mathcal{A}$ wins if $h(x) = h(x')$ and $\mathsf{HashColl}$ outputs ``1".
\end{enumerate}
\begin{defn}
\label{prel:hash-secdef}
A hash function $h(): \{0,1\}^* \rightarrow \{0,1\}^\lambda$ is collision resistant if for all PPT $\mathcal{A}$, $\pr{\mathsf{HashColl = 1}}$ is negligible.
\end{defn}
A weaker notion for security of a hash function is preimage-resistance. We consider the following experiment $\mathsf{PreIm}(\lambda,y)$:
\begin{enumerate}
\item $\mathcal{A}$ is given $y \in \{0,1\}^\lambda$
\item $\mathcal{A}$ outputs $x$.
\item $\mathcal{A}$ wins if $h(x)=y$. If $\mathcal{A}$ wins, $\mathsf{PreIm}$ outputs ``1", else it outputs ``0".
\end{enumerate}
\begin{defn}
\label{prel:hash-preim}
A hash function $h(): \{0,1\}^* \rightarrow \{0,1\}^\lambda$ is preimage resistant if for all PPT $\mathcal{A}$ and for all $y \in \{0,1\}^\lambda$,
$\pr{\mathsf{PreIm}(\lambda,y) = 1}$ is negligible in $\lambda$.
\end{defn}
\begin{corr}
A collision resistant hash function is also preimage resistant.
\end{corr}
\subsection{Definition and Security proof}
\label{apdx:proofs}
We first define the API of a chain based signature for a fixed number of messages $n$.
\begin{itemize}
\item $\otkeygen{\publickey{}{}}{\secretkey{n}{}}{s_{0}}{n}$: Outputs a pair of keys $\publickey{}{},\secretkey{n}{}$ and an initial state $s_{0}$, where $\publickey{}{} = h^{n}(\secretkey{n}{})$ and $h()$ is a collision resistant hash function.
\item $\otsign{\secretkey{i}{}}{\secretkey{i-1}{}}{s_{i}}{s_{i-1}}{m}{\sigma}$: Takes as input the system state $s_{i-1}$, a private key $\secretkey{i-1}{}$ and a message $m$, generates a signature $\sigma$ and updates the signer's private key to $\secretkey{i}{}$ where $\secretkey{i-1}{} = h(\secretkey{i}{})$ and his state to $s_{i}$ where $i \leq n$.
\item $\otverify{\publickey{}{}}{m}{\sigma}$: Takes as input a public key $\publickey{}{}$, a message $m$ and a signature $\sigma$, and outputs a bit $b$ where $b=1$ indicates successful verification.
\end{itemize}
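For concreteness, the following is a minimal Python sketch of this API, instantiating $h()$ with SHA256 and following the signing rule $\sigma_{i} = h(m_{i}||h^{n-i+1}(y))||h^{n-i}(y)$ used in the proof below. It stores the full chain for simplicity (a constrained signer would store pebbles instead), folds the state into the message index, and all function and variable names are illustrative:
\begin{verbatim}
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def ot_keygen(seed, n):
    # Build the chain seed = sk_n, h(seed), ..., h^n(seed) = pk.
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain[-1], chain                   # (pk, chain)

def ot_sign(chain, i, n, msg):
    # i-th signature (1-indexed): bind msg to the current chain head
    # h^{n-i+1}(seed) and reveal the next element h^{n-i}(seed).
    return H(msg + chain[n - i + 1]) + chain[n - i]

def ot_verify(pk_cur, msg, sig):
    # pk_cur is the verifier's current chain head; on success the
    # revealed preimage becomes the head for the next signature.
    tag, revealed = sig[:32], sig[32:]
    if H(revealed) != pk_cur or H(msg + pk_cur) != tag:
        return None
    return revealed

n, seed = 8, b"\x01" * 32
pk, chain = ot_keygen(seed, n)
head = pk
for i, msg in enumerate([b"reading-1", b"reading-2"], start=1):
    head = ot_verify(head, msg, ot_sign(chain, i, n, msg))
    assert head is not None
\end{verbatim}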
To formalize security for chain-based signatures with length of chain $n$, we describe the following experiment $\mathsf{OTSigForge}(\lambda,n)$:
\begin{enumerate}
\item $\otkeygen{\publickey{}{}}{\secretkey{n}{}}{s_{0}}{n}$
\item $\mathcal{A}$ on input $(\publickey{}{},n)$ makes up to $q \leq n$ queries to the signing oracle. Let $Q: [m_{i},\sigma_{i}]_{i=1}^{q}$ be the set of all such queries, where $m_{i}$ is the queried message and $\sigma_{i}$ is the signature returned for $m_{i}$.
\item $\mathcal{A}$ outputs $(m_{q+1},\sigma_{q+1})$.
\item $\mathcal{A}$ wins if $\mathsf{OTVerify}(\publickey{}{},m_{q+1},\sigma_{q+1}) = 1$ and $h^{i}(\secretkey{i}{}) \neq \publickey{}{}$ $\forall i \leq q$, in which case $\mathsf{OTSigForge}$ outputs ``1"; else it outputs ``0".
\end{enumerate}
Note that in the above experiment, the condition $h^{i}(\secretkey{i}{}) \neq \publickey{}{}$ $\forall i \leq q$ prevents $\mathcal{A}$ from winning the game by reusing a secret key $\secretkey{i}{}$ that exists in the chain within distance $q$ of the public key $\publickey{}{}$.
\begin{defn}
\label{defn:OTS}
A chain-based one-time digital signature scheme is existentially unforgeable under an adaptive chosen-message attack, if for all PPT $\mathcal{A}$, $\pr{\mathsf{OTSigForge}(\lambda,n) = 1}$ is negligible in $\lambda$.
\end{defn}
Given the formal definition above we now prove the security of Construction \ref{prel:onetime-sigs-constr}.
\begin{thm}
Let $h: \{0,1\}^{*} \rightarrow \{0,1\}^{\lambda}$ be a preimage resistant hash function.
Then Construction \ref{prel:onetime-sigs-constr} is an existentially unforgeable chain-based one-time signature scheme.
\end{thm}
\begin{proof}
Let $\mathcal{A}$ be an adversary who wins the $\mathsf{OTSigForge}$ game described in Section \ref{prel:onetime-sigs} and therefore can forge signatures using the above scheme with non-negligible probability $p(\lambda)$. That is, $\exists \ \mathcal{A}$ which after performing $q$ queries $\{m_{i}\}_{i=1}^{q}$ where $q \leq n$, can output a signature $\sigma_{q+1}$ for a message $m_{q+1}$ where $\mathsf{OTVerify}(\publickey{}{},m_{q+1},\sigma_{q+1})$ = 1 and $h^{i}(\secretkey{i}{}) \neq \publickey{}{}$ $\forall i \leq q$.
Then, an algorithm $\mathcal{B}$ running the $\mathsf{PreIm}$ experiment would use $\mathcal{A}$ to break preimage resistance of $h$ as follows: On input $(\lambda,y)$, $\mathcal{B}$ would generate a hash chain of length $n$ with seed $y$ as $(y, h(y), ... ,h^{n}(y))$ and forward $(h^{n}(y),n)$ to $\mathcal{A}$.
Then $\mathcal{A}$ makes up to $q \leq n$ queries to $\mathcal{B}$. When $\mathcal{A}$ queries for $m_{i}$ (where $i$ denotes the query number), $\mathcal{B}$ returns $\sigma_{i} = h(m_{i}||h^{n-i+1}(y))||h^{n-i}(y)$ to $\mathcal{A}$. If $q = n$ and $\mathcal{A}$ does not output a forgery, $\mathcal{B}$ returns $\bot$ and starts over.
If $\mathcal{A}$ eventually outputs a forgery $(m_{q+1},\sigma_{q+1})$ to $\mathcal{B}$ and $q < n$, then $\mathcal{B}$ returns $\bot$ as the output of the $\mathsf{PreIm}$ experiment and starts over; else if $q=n$, $\mathcal{B}$ parses $\sigma_{n+1} = \sigma^{A}||\sigma^{B}$ and returns $\sigma^{B}$. Assuming a uniform probability distribution of the number of queries $q$, $\pr{\mathsf{PreIm}(\lambda,y) = 1} = \frac{\pr{\mathsf{OTSigForge}(\lambda,n) = 1}}{n} = \frac{p(\lambda)}{n}$, which is non-negligible.
\end{proof}
\subsection{Evaluation comparison with modified SPHINCS}
\label{apdx:modsphincs}
As discussed in Section \ref{prel:onetime-sigs}, the modified SPHINCS scheme tailored for resource-constrained devices~\cite{PKC:HulRijSch16} could be a candidate scheme for our system. Here we make a direct comparison between modified SPHINCS and our proposed scheme for our system's purposes.
Assuming a hash chain length of $2^{26}$ elements (which as discussed is only exhausted after 21 years, assuming generation of one signature every 10 seconds), signature generation only requires 27 hashing operations in the worst case, which according to our measurements on an 8-bit 16MHz Arduino device, outlined in Section \ref{measurements-sign-verif}, would only need 50 ms on average.
On the other hand, modified SPHINCS' evaluation, performed on a resource-constrained device (a 32-bit 32MHz Cortex M3, which is more powerful than our Arduino Uno R3), needs 22.81 seconds for signature generation. Also, our signature size (excluding the payload) is only 64 bytes and the program storage requirement is 1082 bytes, while modified SPHINCS generates a 41KB signature, streamed serially. Our only additional requirement is an initial precomputation phase using a powerful device, which has to pre-compute the $2^{26}$ hash chain elements and then send the ``pebbles" to the resource-constrained device.
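To illustrate the time-memory trade-off behind this precomputation, the following is a simplified checkpointing sketch; it is an assumption for exposition only, not the actual pebbling scheme, whose logarithmic costs explain the 27-hash worst case above:
\begin{verbatim}
# Simplified checkpointing sketch (not the deployed pebbling scheme):
# store every stride-th chain element, recompute the rest on demand.
import hashlib

H = lambda x: hashlib.sha256(x).digest()

def make_checkpoints(seed, n, stride):
    # Store h^k(seed) for k = 0, stride, 2*stride, ...
    cps, cur = {}, seed
    for k in range(n + 1):
        if k % stride == 0:
            cps[k] = cur
        cur = H(cur)
    return cps

def chain_element(cps, stride, k):
    # Recompute h^k(seed) from the nearest stored checkpoint below k,
    # costing at most stride-1 hash evaluations.
    base = (k // stride) * stride
    cur = cps[base]
    for _ in range(k - base):
        cur = H(cur)
    return cur

n, stride = 2**16, 2**8
cps = make_checkpoints(b"\x00" * 32, n, stride)  # 257 stored elements
elem = chain_element(cps, stride, 12345)         # <= 255 extra hashes
\end{verbatim}
With $n = 2^{26}$ this simple strategy would need far more storage than an Arduino's RAM allows, which is why the logarithmic-memory pebbling approach is used instead.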
\subsection{Collision probability analysis}
\label{measurements-coll-prob}
Although we assumed a collision resistant hash function in our hash-based signature construction, given the length of the hash chain (typical length $2^{26}$) there is an increased likelihood of a collision along that chain through the birthday paradox (especially for lower levels of security where the output size of the hash function is small), which would result in ``cycles" of hashes. If such cycles occur, an adversary could then trivially break the security of our scheme and sign bogus sensor data.
Assume a hash chain of length $2^{n}$ and a security parameter $\lambda$. From the birthday paradox, the probability of a collision on the hash chain is approximated by $p = 1 - e^{\frac{-2^{n}(2^{n}-1)}{2^{\lambda+1}}}$. In Figure \ref{coll_prob_secpar} we show that, given a chain length of $2^{26}$ as previously discussed, the output size of the hash function $h()$ should be at least 64 bits; SHA256, which we used in our evaluations, satisfies this requirement.
Nevertheless, if birthday attacks become an issue for a small security parameter, we can apply the technique from \cite{ACNS:HuJakPer05}, where the chain index is used as salt to prevent such attacks for a small overhead in cost. However, since we showed that the collision probability is negligible, we prefer to keep the costs as low as possible.
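The bound above can be evaluated directly; a minimal sketch:
\begin{verbatim}
# Evaluate p = 1 - exp(-N(N-1)/2^{lambda+1}) for a chain of N = 2^26.
import math

def collision_prob(n_log2, lam):
    N = 2 ** n_log2
    return -math.expm1(-N * (N - 1) / 2 ** (lam + 1))

for lam in (32, 48, 64, 128, 256):
    print(f"lambda={lam:3d}: p={collision_prob(26, lam):.3e}")
# lambda=32 gives p ~ 1, lambda=64 gives p ~ 1.2e-4, and larger
# digests make p negligible, matching the discussion above.
\end{verbatim}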
\begin{figure}[]
\centering
\resizebox{0.48\textwidth}{!}{
\input{evaluation/coll_prob_secpar.pgf}
}
\caption{Collision probability for hash chain length $2^{26}$.}
\label{coll_prob_secpar}
\end{figure}
\section{Consensus}
\label{apdx:consensus}
\subsection{Definitions}
\begin{defn}
\label{prel:cons_def}
Let $[P_{i}]_{i=1}^{n}$ be parties, each having a view of the blockchain $\mathsf{BC}(i)$ and receiving a common sequence of messages in rounds $[1..j..]$.
A protocol solves the ledger consensus problem if the following properties hold:
\begin{enumerate}[label=\roman*.]
\item Consistency: An honest node's view of the blockchain on some round $j$ is a prefix of an honest node's view of the blockchain on some round $j+\ell, \ell>0$, or $\mathsf{BC}(i)_{j}||\block{j+1}||...||\block{j+\ell} = \allowbreak\mathsf{BC}(i')_{j+\ell},\allowbreak \forall P_{i},P_{i'},j,\ell$.
\item Liveness: An honest party $P_{i}$ on input of an operation (or transaction) $\mathsf{tx}$, after a certain number of rounds will output a view of the blockchain $\mathsf{BC}(i)$ that includes $\mathsf{tx}$.
\end{enumerate}
\end{defn}
The protocol can be augmented with the existence of a $\trustedparty{}$ assigning membership credentials (which requires an additional trusted setup phase) resulting in an \emph{Authenticated} ledger consensus protocol~\cite{Lamport:1982:BGP:357172.357176}\cite{EPRINT:LinLysRab04}. Such a protocol consists of the following algorithms:
\begin{enumerate}
\item $(\mathsf{pp}_{}) \leftarrow \mathsf{TPSetup}(1^{\lambda})$: A trusted party $\trustedparty{}$ generates the system parameters $\mathsf{pp}_{}$.
\item $(\publickey{i}{},\secretkey{i}{}) \leftarrow \mathsf{PartyGen}(\mathsf{pp}_{})$: Each party $i$ generates a public-private key pair.
\item $\mathsf{TPMembers}(\widehat{[\publickey{}{}]}) := [\publickey{}{}]$: The $\trustedparty{}$ chooses the protocol participants from a list of public keys $\widehat{[\publickey{}{}]}$, and outputs a list of authenticated participant public keys $[\publickey{}{}]$.
\item $\consensus ([[\publickey{j}{}]_{j=1}^{n},\secretkey{i}{},\nstate{i},\mathsf{BC}(i)]_{i=1}^{n}) := \mathsf{BC}'$: All system participants, each with state $\nstate{i}$ and a view of the blockchain $\mathsf{BC}(i)$, agree on a new blockchain $\mathsf{BC}'$.
\end{enumerate}
\subsection{Byzantine Generals Problem}
One of the first studies attempting to achieve agreement in a distributed synchronous system was proposed by Leslie Lamport in 1982 \cite{Lamport:1982:BGP:357172.357176}. It proposed an ``Oral message" solution to achieve binary consensus among $n = 3f + 1$ nodes, where a leader sends a proposed binary value in a fully connected network and the honest nodes then propagate that value using the same protocol, acting as leaders themselves. Using authentication further improves the resilience of the protocol to $n = 2f + 1$, as Byzantine faults can be tolerated under the assumption that dishonest parties cannot forge the leader's signature. In both cases however, the communication complexity is $O(n^{2})$, which is not considered scalable.
\subsection{PBFT}
The Practical Byzantine Fault Tolerance consensus protocol \cite{Castro:1999:PBF:296806.296824} was an exemplary protocol for many consensus protocols to follow. It assumes a partially asynchronous setting (i.e. offering \emph{eventual synchrony}), where a message is assumed to arrive at the destination node in the network after some unknown but bounded time $t$. In each round, a leader orders messages and propagates them to all nodes in the network in three phases. The leader is defined by a sequence of ``views", and the backup nodes can propose a leader (or view) change if it exhibits faulty behavior (timeout exceeded). The protocol tolerates $f$ faulty nodes among $n = 3f + 1$, and has communication complexity $O(n^{2})$, which again is in practice not scalable beyond 20 nodes \cite{DBLP:conf/ifip11-4/Vukolic15}. It also assumes static membership of participating nodes.
\subsection{PBFT with dynamic clients}
Chondros et al. \cite{DBLP:conf/middleware/ChondrosKR12} suggest modifying the original PBFT protocol by adding ``Join" and ``Leave" system requests. The ``Join" request, equivalent to a ``client" request in PBFT, triggers the execution of the original PBFT protocol between the primary and the replicas, which reply with a challenge to prevent DoS attacks. The client wishing to join replies with a message containing its credentials and a nonce, which are then associated in an array containing the protocol participants. The protocol is based on fully synchronous assumptions. While the protocol offers dynamic membership, it has high communication overhead like PBFT, therefore it is not a suitable candidate for our \textsc{BBox-IoT}\xspace system.
\subsection{System membership tracking consensus}
Rodrigues et al. \cite{DBLP:journals/tdsc/RodriguesLCLS12} propose a ``Membership service" or a ``Configuration management service" for achieving dynamic membership in the permissioned consensus setting. Following a ``transaction endorsement - consensus nodes decoupling" paradigm, only a chosen subset of the participating nodes executes a BFT-family protocol, while also being responsible for handling ``add" or ``remove" requests, pinging (or probing) for failed or crashed nodes and removing them after some time (or epochs), and keeping track of the epochs. Since these are only a small subset of the total nodes, the overall system remains scalable. Additions and removals are signed by a TP, which is compatible with our \textsc{BBox-IoT}\xspace architecture.
\subsection{RSCoin}
Designed as a cryptocurrency framework, RSCoin~\cite{NDSS:DanMei16} deviates from the common practice of decentralizing the monetary supply, which is found in other cryptocurrencies. In RSCoin's architecture, a Trusted Party (the Bank) delegates its authority for validating transactions to known and semi-trusted ``mintettes", which in turn are each responsible for interacting with a subset of the cryptocurrency's addresses, forming ``shards". A basic consensus protocol based on Two-Phase Commit is executed between a group of mintettes and a client, which involves collecting signatures from the majority of the responsible mintettes and then sending back the transaction to be included in a block.
Also, since there is no direct communication between all mintettes, but they rather communicate indirectly through the clients, the communication complexity is very low, which enables high scalability and performance. RSCoin could also potentially be used as a ``consensus" replacement for \textsc{BBox-IoT}\xspace, however it would require extensive modifications because of its cryptocurrency-oriented architecture.
\subsection{Hyperledger Frameworks}
In the following paragraphs we summarize the properties of additional Hyperledger frameworks \cite{Hyperledger-architecture-vol1}:
\begin{itemize}
\item \textbf{Hyperledger Indy} uses the \textit{Redundant Byzantine Fault Tolerance (RBFT)} consensus algorithm \cite{6681599}, which is voting-based. As its name suggests, it is based on the PBFT consensus algorithm, modified to execute many protocol instances in parallel for improving performance in the case of a faulty leader. It provides Byzantine fault tolerance and reaches consensus very fast, however the time scales with the number of nodes ($O(n^{2})$ as in PBFT). An additional requirement is that the nodes must be totally connected.
\item \textbf{Hyperledger Iroha} uses the \textit{Sumeragi} consensus algorithm, based on a reputation system. As with RBFT, it provides Byzantine fault tolerance reaching consensus in very short time, but that time scales with the number of nodes, and the nodes must be totally connected.
\item \textbf{Hyperledger Sawtooth} uses the novel \textit{Proof of Elapsed Time (PoET)} consensus algorithm, which is lottery-based. It can be categorized in the ``Proof-of-X" consensus family, replacing proof of computational work with a proof of elapsed time, using trusted hardware (an Intel SGX enclave). Each protocol participant requests a wait time from its trusted hardware, and the one with the shortest wait time is elected as the leader, by providing a proof that it indeed had the shortest wait time and that the new block alongside the proof was not broadcast until that time had expired \cite{DBLP:journals/corr/abs-1711-03936}. While the algorithm is scalable and Byzantine fault tolerant, there is a possibility of delayed consensus due to forks. Also, this algorithm would not suit our \textsc{BBox-IoT}\xspace system, since we do not require our blockchain ``maintainers" to have trusted hardware capabilities.
\end{itemize}
\subsection{Additional Consensus properties}
\label{apdx:consensus-properties}
\begin{table*}[]
\begin{tabular}{|p{27mm}|p{17mm}|p{17mm}|p{18mm}|p{17mm}|p{17mm}|p{30mm}|}
\hline
Algorithm & Adversarial model & Byzantine tolerant & Dynamic membership & Scalable & DoS resistant & Notes\\
\hline
PBFT & $3f+1$ & \cmark & \xmark & \xmark & Semi & Hyperledger Fabric v0.6\\
Kafka & $2f+1$ & \xmark & \cmark & \cmark & \xmark & Hyperledger Fabric v1.4 \\
BFT-SMaRt & $3f+1$ & \cmark & \cmark & Semi &\cmark & \\
Nakamoto consensus & $2f+1$ & \cmark & Permissionless & \cmark & \cmark & \\
\hline
\end{tabular}
\caption{Consensus algorithm comparison}
\label{prelims:consensus-comparison}
\end{table*}
We outline the following additional consensus properties, which are desirable in our setting but not strictly required:
\begin{enumerate}[label=\roman*.]
\item Byzantine Fault Tolerant: As previously discussed, the crash tolerant model of consensus does not take Byzantine behavior of nodes into account~\cite{DBLP:journals/corr/CachinV17}. Although our system considers the consensus algorithm running among a closed, controlled set of nodes, it might be deployable in a more uncontrolled environment, where Byzantine behavior is possible (Byzantine consensus) \cite{Lamport:1982:BGP:357172.357176}.
\item No synchronicity assumptions: Due to the impossibility result of \cite{DBLP:journals/jacm/FischerLP85}, which excludes deterministic protocols from reaching consensus in a fully asynchronous system, we would have to either use a randomized protocol or assume ``eventual synchrony" (i.e. the protocol finishes within a fixed but unknown time bound) \cite{CCS:MXCSS16}.
\item Incentive-compatible: Protocol keeps nodes motivated to participate in the system and follow its rules \cite{DBLP:journals/corr/abs-1711-03936}.
\item Minimal setup assumptions: No need for a trusted setup phase (as required by Authenticated Byzantine agreement discussed above).
\item Weaker adversarial model: While most classical consensus protocols assume a 33\% adversarial model (which we believe should be sufficient for our purposes), some protocols tolerate a stronger adversary of up to 49\% adversarial power. However, this usually comes at the cost of sacrificing Byzantine tolerance (as we discussed above) or scalability in terms of operations per second \cite{Nakamoto:bitcoin}.
\end{enumerate}
\subsection{An instantiation for Consensus algorithm}
\label{constr-pluggedconsens}
In the generic construction of our scheme, we assumed a ``pluggable" consensus algorithm, decoupled from our construction, similar to the original Hyperledger architecture. Recall that this algorithm, which is executed among all orderers $\orderer{i}$, on input of a blockchain $\mathsf{BC}$ and some orderer state $\nstate{i}$, outputs an agreed new updated blockchain $\mathsf{BC}'$. Here we provide a concrete instantiation of a consensus algorithm for the modified Hyperledger used in \textsc{BBox-IoT}\xspace that matches the PBFT consensus protocol~\cite{Castro:1999:PBF:296806.296824} as follows (note though that PBFT would not satisfy all of the required system properties, as discussed in Section \ref{prel:consensus}):
\begin{enumerate}
\item $\orderer{i}$ parses $TXL$ from its $\nstate{\orderer{i}}$ extracting a set of transactions $\{\mathsf{tx}_{i}\}$.
\item $\orderer{i}$ based on the current $\mathsf{BC}$ and $\{\mathsf{tx}_{i}\}$ constructs a new block $\block{i}$ which would create $ \mathsf{BC} ||\block{i} \rightarrow \mathsf{BC}'$.
\item $\orderer{i}$ computes $\sigma := \mathsf{Sign}(\secretkey{\orderer{i}}{} , \block{i})$ and sends $\sigma$ to all orderers in $\mathcal{O}$ (equivalent to the ``pre-prepare" phase in PBFT).
\item All other orderers $\orderer{x} \in \mathcal{O}$ parse $(\mathsf{OL}_{\mathsf{BC}})$ from the output of $\mathsf{ReadConfig} (\mathsf{BC})$ and check that $\publickey{\orderer{i}}{} \in \mathsf{OL}_{\mathsf{BC}}$ and $\mathsf{SVrfy}(\publickey{\orderer{i}}{},\sigma,\block{i})=1$. Each then verifies that the proposed block was formed correctly (i.e., it is a valid extension of the current blockchain $\mathsf{BC}$). If all verifications pass, it computes $\sigma_{x} := \mathsf{Sign}(\secretkey{\orderer{x}}{} , \block{i})$ and sends $\sigma_{x}$ to all orderers in $\mathcal{O}$ (equivalent to the ``prepare" phase in PBFT).
\item Each $\orderer{x} \in \mathcal{O}$ (including $\orderer{i}$) checks if $\mathsf{SVrfy}(\publickey{\orderer{x}}{},\sigma_{x},\block{i})=1$. If it collects a sufficient number of signatures (specific to each consensus protocol), it computes $\sigma_{x}' := \mathsf{Sign}(\secretkey{\orderer{x}}{} , \block{i},1)$ and sends $\sigma_{x}'$ to all orderers in $\mathcal{O}$ (equivalent to the ``commit" phase in PBFT).
\item Each $\orderer{x} \in \mathcal{O}$ (including $\orderer{i}$) checks if \\ $\mathsf{SVrfy}(\publickey{\orderer{x''}}{},\sigma_{x}',\block{i},1)=1$. If it receives a sufficient number of signatures (specific to each consensus protocol), it updates its state to $\nstate{\orderer{x}}'$ and outputs ``1". It outputs ``0" in all other cases.
\end{enumerate}
The above instantiation satisfies the basic consensus properties in Definition \ref{prel:cons_def}.
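For illustration only, the following is a toy sketch of the quorum logic in this instantiation, with signatures replaced by (orderer, digest) pairs and networking by direct set manipulation; it is not the Hyperledger implementation, and all names are illustrative:
\begin{verbatim}
# Toy sketch of the three-phase quorum logic (no real crypto/network).
import hashlib

def digest(block):
    return hashlib.sha256(block).hexdigest()

def quorum(n):
    f = (n - 1) // 3        # tolerated Byzantine orderers when n = 3f+1
    return 2 * f + 1

def run_round(n, block):
    d = digest(block)
    # 'pre-prepare': the proposing orderer sends the signed block to all;
    # each honest orderer validates it and broadcasts a 'prepare' vote.
    prepares = {(x, d) for x in range(n)}
    # an orderer sends 'commit' once it sees a quorum of matching prepares
    commits = {(x, d) for x in range(n)
               if sum(1 for (_, dd) in prepares if dd == d) >= quorum(n)}
    # the block is appended once a quorum of commits is observed
    return len(commits) >= quorum(n)

assert run_round(4, b"block-1")     # n = 4 = 3f+1 with f = 1
\end{verbatim}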
\begin{table*}[t]\centering
\caption{Hash-based schemes concrete comparison, 256-bit security}
\label{ots_concrete_comparison}
\begin{tabular}{|l|l|l|l|l|l|l|p{25mm}|}
\hline
Scheme & Stateful & Public key (bytes) & Secret key (bytes) & Signature (bytes) & Sign (msec) & Verify (msec) & Remarks \\ \hline
XMSS & Yes & 68 & & 4963 &610 & 160 & Cortex M3 32MHz 32-bit 16KB RAM \cite{EPRINT:KamFlu17,EPRINT:HulRauBuc17} \\ \hline
SPHINCS & No &1056 &1088 & 41000 & 18410 & 513 & Cortex M3 32MHz 32-bit 16KB RAM \cite{PKC:HulRijSch16} \\ \hline
Our scheme & Yes & 32 & 32 & 32 (64) & 52 & 0.035 & ATmega328P 16MHz 8-bit 2KB RAM \\ \hline
\end{tabular}
\end{table*}
\input{const-algs}
\section{Evaluation details}
\label{apdx:evaldetails}
Algorithms \ref{measurements-sensor-pseudocode} and \ref{measurements-aggregator-pseudocode} show the pseudocode for our evaluations on the sensor and aggregator side respectively. We denote by timer1 the signature computation time, by timer2 the verification time and by timer3 the total verification time, as previously shown in Table \ref{measurements-table}.
\input{pseudocode}
\section{Conclusions}
\label{conclusions}
In this paper we designed and implemented \textsc{BBox-IoT}\xspace, a blockchain-inspired
approach for Industrial IoT sensors, aiming at offering a
transparent and immutable system for sensing and control information
exchanged between IIoT sensors and aggregators. Our approach
guarantees blockchain-derived properties even to low Size, Weight and
Power (SWaP) devices. Moreover, \textsc{BBox-IoT}\xspace acts as a ``black-box" that
empowers the operators of any IoT system to detect data and sensor
tampering, ferreting out attacks even against SWaP devices. We posit
that enabling data auditing and security at the lowest sensing level
will be highly beneficial to critical infrastructure environments with
sensors from multiple vendors.
Finally, we envision that our approach will be implemented during the
sensor manufacturing stage: having industrial sensors shipped with
pre-computed pebbles and their key material labeled using a QR code on
the sensor body will allow for a seamless and practical deployment of
\textsc{BBox-IoT}\xspace.
\section{Construction algorithms}
\label{sec-model-defs}
For our construction we assume an existentially unforgeable signature scheme $(\mathsf{SignGen},\mathsf{Sign},\mathsf{SVrfy})$
and an unforgeable one-time chain based signature scheme as defined in Section \ref{prel:onetime-sigs} $(\mathsf{OTKeyGen}$, $\mathsf{OTSign}$, $\mathsf{OTVerify})$. We also assume an authenticated blockchain consensus scheme $(\mathsf{TPSetup}, \mathsf{PartyGen}, \mathsf{TPMembers}, \consensus)$ satisfying the properties outlined in Section \ref{prel:consensus:fundam-properties}.
\begin{enumerate}
\item $\mathsf{SystemInit} (1^{\lambda},\mathsf{LL}, \mathsf{OL}, \mathsf{PL}, \mathsf{Pol})$
lets the $\mathsf{MSP}$ initialize the $\textsc{BBox-IoT}\xspace$ system. The initialization is optionally based on predetermined initial system participants, where $\mathsf{LL},\mathsf{OL}, \mathsf{PL}$ are lists containing public keys for local administrators, orderers and peers respectively, as well as a preselected system policy $\mathsf{Pol}$.
\begin{enumerate}
\item $\mathsf{MSP}$ sets as $\mathsf{pp}$ the system parameters for the signature and the hash function,
as well as the consensus algorithm by running $\mathsf{TPSetup}$.
\item Computes a random key pair $\signgen{\allowbreak\publicmsp{}}{\allowbreak\secretmsp{}}$.
\item Assembles and outputs the genesis block $\block{0}$ (serving as the initial configuration block) by copying $\publicmsp{}, \mathsf{pp}$ and $[\mathsf{LL}, \mathsf{OL}, \mathsf{PL}, \mathsf{Pol}]$ from the algorithm inputs.
\item Initializes empty lists in $\mathsf{MSP}$ memory \\ $[\mathsf{LL}_{\mathsf{MSP}}, \mathsf{OL}_{\mathsf{MSP}}, \mathsf{PL}_{\mathsf{MSP}},\mathsf{Pol},\mathsf{oper}]$ where $\mathsf{oper}$ denotes a pending revoke operation list.
\item Copies $\mathsf{Pol}$ to $\mathsf{Pol}_{\mathsf{MSP}}$.
\end{enumerate}
The genesis block $\block{0}$ (as the blockchain $\mathsf{BC}$ in general) is public, while the rest of the outputs remain private to $\mathsf{MSP}$. For the following algorithms and protocols we assume that the security parameter and the system parameters are a default input.
\item $\mathsf{ConfigUpdate} ( \mathsf{BC}, \secretmsp{}, \nstate{\mathsf{MSP}})$ enables $\mathsf{MSP}$ to read system configuration information from its memory that is pending to be updated, and construct a new configuration block to make the new system configuration readable and valid in the blockchain by all system participants.
\begin{enumerate}
\item $\mathsf{MSP}$ parses $\nstate{\mathsf{MSP}}$ as $[\mathsf{LL}_{\mathsf{MSP}}, \mathsf{OL}_{\mathsf{MSP}}, \mathsf{PL}_{\mathsf{MSP}}, \mathsf{Pol}_{\mathsf{MSP}}]$.
\item Assembles a configuration update transaction
$\mathsf{tx}_{u} = \allowbreak[\mathsf{LL}_{\mathsf{MSP}},\allowbreak \mathsf{OL}_{\mathsf{MSP}}, \mathsf{PL}_{\mathsf{MSP}}, \mathsf{Pol}_{\mathsf{MSP}}]$.
\item Parses $\mathsf{ReadConfig} (\mathsf{BC})$ as $\mathsf{PL}_{\mathsf{BC}}$.
\item Sends signed transaction $\sigma_{\mathsf{MSP}}(\mathsf{tx}_{u})$ to all $\aggr{i}{} \in \mathsf{PL}_{\mathsf{BC}}$.
\end{enumerate}
Since $\mathsf{Pol}$ does not apply to transactions signed by $\mathsf{MSP}$, the configuration update transaction is promptly appended to $\mathsf{BC}$ by all aggregators, resulting in public output $\mathsf{BC}'$.
\item $\mathsf{PolicyUpdate} ( \nstate{\mathsf{MSP}},\mathsf{Pol})$ enables $\mathsf{MSP}$ to update system policy parameters.
On input of a new system policy $\mathsf{Pol}$, $\mathsf{MSP}$ copies it to $\nstate{\mathsf{MSP}}[\mathsf{Pol}]$, overwriting the previous policy. The algorithm outputs the new updated $\nstate{\mathsf{MSP}}'$.
\item $\mathsf{ReadConfig} (\mathsf{BC})$ can be run by any system participant to recover the current system configuration.
\begin{enumerate}
\item Parses $\mathsf{BC}$ as a series of blocks $\block{i}$.
\item From the set of blocks marked as ``configuration" blocks where $\block{i}[type="C"]$, selects the block $\block{c}$ with the greatest height $c$.
\item Parses and outputs $\block{c}$ as $([\mathsf{LL}_{\mathsf{BC}}, \mathsf{OL}_{\mathsf{BC}}, \mathsf{PL}_{\mathsf{BC}}],\mathsf{Pol}_{\mathsf{BC}})$.
\end{enumerate}
\item $\mathsf{OrdererSetup} ()$ is run by an orderer $\orderer{i}$ initializing its credentials and state. It computes and outputs signing keys as $\signgen{\publickey{\orderer{i}}{}}{\secretkey{\orderer{i}}{}}$ and initializes an signed transaction list in memory $\nstate{\orderer{i}}[TXL]$.
\item $\mathsf{OrdererAdd} \{ \orderer{i}(\publickey{\orderer{i}}{}, \secretkey{\orderer{i}}{}) \leftrightarrow \\ \mathsf{MSP}(\publicmsp{},\secretmsp{},\nstate{\mathsf{MSP}}[\mathsf{OL}_{\mathsf{MSP}}],\mathsf{BC}) \} $ is an interactive protocol between an orderer $\orderer{i}$ and the system $\mathsf{MSP}$ in order to add that orderer in the system:
\begin{enumerate}
\item $\orderer{i}{}$ first creates a physical identity proof $\pi$, then submits $\pi$ and $\publickey{\orderer{i}}{}$ to $\mathsf{MSP}$.
\item $\mathsf{MSP}$ verifies $\pi$. Then it parses $(\mathsf{OL}_{\mathsf{BC}})$ from the output of $\mathsf{ReadConfig} (\mathsf{BC})$. Check that $(\publickey{\orderer{i}}{} \notin \mathsf{OL}_{\mathsf{MSP}}) \land (\publickey{\orderer{i}}{} \notin \mathsf{OL}_{\mathsf{BC}})$. If all verifications hold, add $\publickey{\orderer{i}}{}$ to its local orderer list $\mathsf{OL}_{\mathsf{MSP}}$ and return ``1" to $\orderer{i}{}$, else return ``0" with an error code.
\end{enumerate}
\item $\mathsf{LAdminSetup} () $ is an algorithm run by a $\localadmin{}$ to initialize its credentials and state and create a new device group $\iotGroup{}$.
A Local Administrator computes and outputs signing keys as $\signgen{\publickey{\localadmin{i}}{}}{\secretkey{\localadmin{i}}{}}$.
Allocates memory for storing group aggregators' and sensors' public keys as $\nstate{\localadmin{i}}[ \mathsf{AL},\mathsf{SL}]$.
\item $\mathsf{LAdminAdd} \{ \localadmin{i} (\publickey{\localadmin{}}{}, \secretkey{\localadmin{}}{}) \leftrightarrow \\
\mathsf{MSP}(\publicmsp{},\secretmsp{}, \nstate{\mathsf{MSP}}[\mathsf{LL}_{\mathsf{MSP}}],\mathsf{BC}) \}$ is an interactive protocol between a local group administrator $\localadmin{i}$ and $\mathsf{MSP}$ in order to add $\localadmin{i}$ in the system.
\begin{enumerate}
\item $\localadmin{i}$ creates a physical identity proof $\pi$, then submits $\pi$ and $\publickey{\localadmin{}}{}$ to $\mathsf{MSP}$.
\item $\mathsf{MSP}$ verifies $\pi$. Then it parses $(\mathsf{LL}_{\mathsf{BC}})$ from the output of $\mathsf{ReadConfig} (\mathsf{BC})$. Check that $(\publickey{\localadmin{i}}{} \notin \mathsf{LL}_{\mathsf{BC}}) \land (\publickey{\localadmin{i}}{} \notin \mathsf{LL}_{\mathsf{MSP}})$. If all verifications hold, add $\publickey{\localadmin{i}}{}$ to $\mathsf{LL}_{\mathsf{MSP}}$ in $\nstate{\mathsf{MSP}}$ and return ``1" to $\localadmin{i}{}$, else return ``0" with an error code.
\end{enumerate}
\item $\mathsf{AggrSetup} \{\localadmin{i} (\publickey{\localadmin{i}}{}, \secretkey{\localadmin{i}}{},
\nstate{\localadmin{i}}) \leftrightarrow \aggr{i}{j}() \}$ is an interactive protocol between an $\localadmin{i}$ and an aggregator $\aggr{i}{j}$ wishing to join group $\iotGroup{i}$.
\begin{enumerate}
\item $\aggr{i}{j}$ computes signing keys as $\signgen{\publicagg{i}{j}}{\secretagg{i}{j}}$ and initializes pending and write transaction sets $\pset{i}\rightarrow \emptyset, \txset{i}\rightarrow \emptyset$ in its $\nstate{\aggr{i}{j}}$.
\item $\aggr{i}{j}$ creates a physical identity proof $\pi$, then submits $\pi$ and $\publicagg{i}{j}$ to $\localadmin{i}$.
\item $\localadmin{i}$ verifies $(\publicagg{i}{j} \notin \mathsf{AL})$ and $\pi$. If these verifications hold, it invokes $\mathsf{AggrAdd}(\publicagg{i}{j})$ with $\mathsf{MSP}$. If $\mathsf{MSP}$ outputs ``1", it adds $\publicagg{i}{j}$ to $\mathsf{AL}$, sends an updated copy of $\mathsf{AL}$ to all $\aggr{i}{j} \in \mathsf{AL}$ and $\publickey{\localadmin{i}}{}
$ to $\aggr{i}{j}$. In all other cases it returns ``0".
\item $\aggr{i}{j}$ copies $\publickey{\localadmin{i}}{}
$ in its memory in $\nstate{\aggr{i}{j}}$.
\end{enumerate}
\item $\mathsf{AggrAdd} \{\localadmin{i} (\publickey{\localadmin{i}}{}, \secretkey{\localadmin{i}}{}, \publicagg{i}{j}) \leftrightarrow \\ \mathsf{MSP}(\publicmsp{},\secretmsp{},\nstate{\mathsf{MSP}}[\mathsf{PL}_{\mathsf{MSP}}],\mathsf{BC}) \}$ is an interactive protocol between a local administrator $\localadmin{i}$ wishing to add an aggregator to the system and $\mathsf{MSP}$.
\begin{enumerate}
\item $\localadmin{i}$ computes $\sign{\secretkey{\localadmin{i}}{}}{\publicagg{i}{j}}{\sigma}$. Send $\sigma$ to $\mathsf{MSP}$.
\item $\mathsf{MSP}$ computes $\svrfy{\publickey{\localadmin{i}}{}}{\publicagg{i}{j}}{\sigma}$. Checks that $(\publickey{\localadmin{i}}{} \in \mathsf{LL}_{\mathsf{MSP}}) \land b \land (\publicagg{i}{j} \notin \mathsf{PL}_{\mathsf{MSP}}) == 1$. If the verification holds, it parses $(\mathsf{PL}_{\mathsf{BC}})$ from the output of $\mathsf{ReadConfig} (\mathsf{BC})$. Check that $(\publicagg{i}{j} \notin \mathsf{PL}_{\mathsf{BC}})$. If the verification holds, add $\publicagg{i}{j}$ to $\mathsf{PL}_{\mathsf{MSP}}$ and returns ``1" to $\localadmin{i}$. It returns ``0" in all other cases.
\end{enumerate}
\item $\mathsf{AggrUpd} \{\localadmin{i}(\secretkey{\localadmin{}}{},
\publicsens{}{}) \leftrightarrow \aggr{i}{j}(
\nstate{\aggr{i}{j}}[\mathsf{CL}]) \}$ is an interactive protocol between $\localadmin{i}$ and an aggregator $\aggr{i}{j}$, both belonging to Group $i$. It is used when $\localadmin{i}$ wants to add a sensor public key $\publicsens{}{}$ to $\aggr{i}{j}$ and update its sensor list $\mathsf{CL}$.
\begin{enumerate}
\item $\localadmin{i}$ computes $\sigma := \mathsf{Sign}(\secretkey{\localadmin{i}}{},\publicsens{}{})$. Send $\sigma$ to $\aggr{i}{j}$.
\item $\aggr{i}{j}$ computes $\svrfy{\publickey{\localadmin{i}}{}}{\publicsens{}{}}{\sigma}$. Checks that $(\publicsens{}{} \notin \nstate{\aggr{i}{j}}) \land b == 1$. If the verification holds, it adds $\publicsens{}{}$ to $\mathsf{CL}$\footnote{$\localadmin{i}$ should run the protocol with every aggregator in the group, however we present this with one aggregator for simplicity.} and returns ``1" to $\localadmin{i}$. It returns ``0" in all other cases.
\end{enumerate}
\item $\mathsf{SensorJoin} \{\localadmin{i}(
\publickey{\localadmin{i}}{},\secretkey{\localadmin{i}}{},\nstate{\localadmin{i}}[\mathsf{SL}]) \leftrightarrow \sens{i}{j}(n) \}$ is an interactive protocol between $\localadmin{i}$ of Group $i$ and a sensor $\sens{i}{j}$ wishing to join the system.
\begin{enumerate}
\item $\sens{i}{j}$ using the one-time signature scheme described in Section \ref{prel:onetime-sigs}:
\begin{enumerate}
\item Samples $k \leftarrow (1^{\lambda})$ and stores it in $\nstate{\sens{i}{j}}$.
\item Runs $\otkeygen{\publicsens{i}{j}}{\secretsens{i}{j}}{\ell = 1}{n}$\footnote{This step is typically computed by a powerful device.}
\item Stores $\ell = 1$ to $\nstate{\sens{i}{j}}$ where $\ell$ denotes the current ``index" in the hash chain.
\item Creates a physical identity proof $\pi$
\item Sends $(\pi,\publicsens{i}{j})$ to $\localadmin{i}$
\end{enumerate}
\item $\localadmin{i}$ checks $\mathsf{Vrfy}(\pi) \land (\publicsens{i}{j} \notin \mathsf{SL} ) \\ \land \mathsf{AggrUpd}(\secretkey{\localadmin{i}}{},
\publicsens{i}{j}) ==1$ $\forall \aggr{i}{j} \in \iotGroup{i}$. $\localadmin{i}$
adds $\publicsens{i}{j}$ to $\mathsf{SL}$, else it outputs ``0".
\end{enumerate}
\item $\mathsf{SensorSendData} \{\sens{i}{j}(\publicsens{i}{j},\secretsens{i}{j},
m,\nstate{\sens{i}{j}}) \leftrightarrow \\ \aggrset{x}(
\nstate{\aggr{}{}}[\mathsf{CL},\pset{},\txset{}]) \}$ is a protocol between sensor $\sens{i}{j}$ broadcasting data and a subset of aggregators $\aggrset{x} \subseteq \{\aggrset{i}\}$ (where $\{\aggrset{i}\}$ is the aggregator set in $\iotGroup{i}$).
\begin{enumerate}
\item For sending data $m$, $\sens{i}{j}$ computes $\otsign{\secretsens{}{}}{\secretsens{}{}}{\nstate{\sens{i}{j}}'}{\nstate{\sens{i}{j}}}{m}{\sigma}$
\item $\sens{i}{j}$ broadcasts $\sigma$ to $\aggrset{x}$.
\item $\aggr{k}{}$ runs $\otverify{\publicsens{i}{j}}{m}{\sigma}$.\footnote{To avoid redundancy, the protocol can be improved by deterministically defining a ``responsible" aggregator for each transaction as discussed previously in this section.} If $b == 1$ it runs $\mathsf{AggrAgree}$ with all other aggregators in the group. If no ``alarm'' message $m^{A}$ from some other aggregator is received within some time $\delta$, it adds $m,\sigma$ to $\pset{i}$. If at least one ``alarm'' message is received, it outputs $\bot$.
\end{enumerate}
\item $\mathsf{AggrAgree} \{ \aggr{k}{}(\secretagg{k}{},\mathsf{AL},\publicsens{i}{j},m, \sigma) \leftrightarrow \aggrset{}([\secretagg{i}{},\publicsens{i}{j},m'])\}$ is a protocol between an aggregator in $\iotGroup{i}$ and all other aggregators in the group. Its purpose is to detect MITM attacks, by verifying that no aggregator in the group has received a message $m',\sigma'$ from $\publicsens{i}{j}$ where $m \neq m'$ (a toy sketch of this check is given after this list)\footnote{This protocol does not require that all other aggregators in the group are reachable, therefore it does not require a reply from all aggregators to complete.}.
\begin{enumerate}
\item $\aggr{k}{}$ for payload $\mu = (\publicsens{i}{j},m, \sigma)$ computes $s = \mathsf{Sign}(\secretagg{k}{},\mu)$ using an EU-CMA signature scheme and sends $s,\mu$ to all $\aggr{i}{} \in \aggrset{}$.
\item Each $\aggr{i}{}$ checks if it received a message $m'$ with signature $\sigma$ from sensor $\publicsens{i}{j}$ where $m \neq m'$. If there's no such message, it outputs $\bot$. Else it sends an ``alarm'' message $m^{A}$ and respective signature $s$ to $\aggr{k}{}$ and keeps a record in its log.
\end{enumerate}
\item $\mathsf{AggrSendTx} \{\aggrset{}([\publicagg{i}{},\secretagg{i}{},\nstate{\aggr{i}{}},\mathsf{BC}]) \leftrightarrow \mathcal{O}(\publickey{\orderer{j}}{}, \secretkey{\orderer{j}}{},\nstate{j}) \}$ is an interactive protocol between all aggregators $\aggr{i}{} \in \aggrset{}$ and all orderers $\orderer{j} \in \mathcal{O}$. It is initiated when an aggregator wishes to submit a transaction for validation in the system and eventually store it in the blockchain.
\begin{enumerate}
\item An $\aggr{i}{} \in \aggrset{}$ parses $\pset{i}$ in $\nstate{\aggr{i}{}}$ as a set of transactions $\{\mathsf{tx} \}$.
\item $\aggr{i}{}$ samples a nonce $n \leftarrow (1^{\lambda})$ and appends it to $\{\mathsf{tx} \}$.
\item $\aggr{i}{}$ computes $\sign{\secretagg{i}{}}{\{\mathsf{tx} \}}{\sigma}$. Send $\sigma$ to all other $\aggr{j}{} \in \aggrset{x}$.
\item Each $\aggr{j}{}$, parses $(\mathsf{PL}_{\mathsf{BC}})$ from the output of $\mathsf{ReadConfig} (\mathsf{BC})$. Computes $\svrfy{\publicagg{i}{}}{\{\mathsf{tx} \}}{\sigma}$. If $(\publicagg{i}{} \in \mathsf{PL}_{\mathsf{BC}}) \land b==1$, compute\\ $\sign{\secretagg{j}{}}{\{\mathsf{tx} \}}{\sigma_{j}}$. Send $\sigma_{j}$ to $\aggr{i}{}$.
\item $\aggr{i}{}$ parses $\mathsf{ReadConfig}(\mathsf{BC}) \rightarrow \mathsf{Pol}_{\mathsf{BC}} \rightarrow \tau$ where $\tau$ the minimum required number of signatures for a transaction to be submitted on the blockchain, as defined by policy $\mathsf{Pol}$.
\item If $| \{ \sigma_{j} \} | > \tau$, select a reachable orderer $\orderer{}$, send $\{ \sigma_{j} \}$, copy $\{\mathsf{tx} \} \rightarrow \txset{}$ and set $\pset{} \rightarrow \emptyset$.
\item The orderer $\orderer{}$ parses $\nstate{j} \rightarrow TXL$,\\ $\{\mathsf{tx} \} \rightarrow n$, $\mathsf{ReadConfig}(\mathsf{BC}) \rightarrow \mathsf{PL}_{\mathsf{BC}}, \mathsf{Pol}_{\mathsf{BC}}$ and $\mathsf{Pol}_{\mathsf{BC}} \rightarrow \tau$ then checks:
\begin{enumerate}
\item $| \{ \sigma_{j} \} | > \tau$
and $n \notin TXL$
\item Compute $ \svrfy{\publicagg{j}{}}{\{\mathsf{tx} \}}{\sigma_{j}}, \forall j$ then \\ $\prod b_{j} ==1$
\item $\prod_{j}^{}(\{\publicagg{j}{}\} \in \mathsf{PL}_{\mathsf{BC}}) ==1$
\end{enumerate}
If the checks are valid, stores $| \{ \sigma_{j} \} |$ in its $\nstate{\orderer{i}}$ and replies ``1" to $\aggr{i}{}$ as a confirmation, else it replies ``0".
\item If $\orderer{i}$ has created a new block containing ordered transactions, it runs $\consensus$ to update the blockchain.
\item If $\consensus$ succeeds, it runs $\mathsf{UpdateBC}$ with all aggregators to update to the new $\mathsf{BC}'$.
\end{enumerate}
\item $\consensus ([[\publickey{\orderer{j}}{}]_{j=1}^{n}, \secretkey{\orderer{i}}{},\nstate{i},\mathsf{BC}]_{i=1}^{n}) := \mathsf{BC}'$\\
The exact protocol functionality is described in the system parameters $\mathsf{pp}$\footnote{This is equivalent to Hyperledger's ``pluggable" consensus, which is defined in the genesis block.} and follows the definition provided in Section \ref{prel:consensus}. In general this protocol is executed among all orderers $\orderer{i} \in \mathcal{O}$ where they agree on a new updated blockchain $\mathsf{BC}'$. In Appendix \ref{constr-pluggedconsens} we provide a concrete instantiation of a consensus algorithm for our construction.
\item $\mathsf{UpdateBC} \{\orderer{i}(\publickey{\orderer{i}}{}, \secretkey{\orderer{i}}{} ,\nstate{\orderer{i}},\mathsf{BC}')\leftrightarrow\\
\aggrset{}([\publicagg{x}{},\secretagg{x}{},\nstate{\aggr{x}{}},\mathsf{BC}]) \}$ is initiated by an orderer $\orderer{i}$ to append a new block in the blockchain.
\begin{enumerate}
\item $\orderer{i}$ parses its $\nstate{\orderer{i}}$ to retrieve the agreed blockchain update signature set $\{\sigma_{x}' \}$
\item $\orderer{i}$ computes $\sign{\secretkey{\orderer{i}}{}}{(\block{i},\{\sigma_{x}' \})}{\sigma}$ where $\mathsf{BC}' := \mathsf{BC}||\block{i}$ and sends $\sigma$ to all $\aggr{x}{} \in \aggrset{}$.
\item Each $\aggr{x}{}$ computes $\svrfy{\publickey{\orderer{i}}{}}{\block{i}||\{\sigma_{x}' \}}{\sigma}$ and checks if $b==1$. Then it parses $\sigma$ as a transaction set $\{\mathsf{tx}\}$ and removes these from $\txset{x}$. Then it updates $\mathsf{BC}$ to $\mathsf{BC}'$, else it outputs $\bot$.
\end{enumerate}
\item $\mathsf{NodeRevoke}(\publickey{i}{},\sigma,\nstate{\mathsf{MSP}}, \mathsf{BC}) $ is initiated by $\mathsf{MSP}$ to revoke credentials of any system participant.
$\mathsf{MSP}$ parses $\mathsf{ReadConfig}(\mathsf{BC})$ as $[\mathsf{LL}_{\mathsf{BC}}, \mathsf{OL}_{\mathsf{BC}}, \mathsf{PL}_{\mathsf{BC}}]$. It verifies $\sigma$ (if the remove operation was initiated by a $\localadmin{}$) and checks if $\publickey{i}{}$ exists in $[\mathsf{LL}_{\mathsf{BC}}, \mathsf{OL}_{\mathsf{BC}}, \mathsf{PL}_{\mathsf{BC}}]$ or in its $[\mathsf{LL}_{\mathsf{MSP}}, \mathsf{OL}_{\mathsf{MSP}}, \mathsf{PL}_{\mathsf{MSP}}] \in \nstate{\mathsf{MSP}}$ in case participation privileges for $\publickey{i}{}$ have not yet been updated on the $\mathsf{BC}$ through $\mathsf{ConfigUpdate}$. If it finds a match in the blockchain lists, it creates a remove operation $R := (\publickey{i}{}, ``rm")$ and adds $R$ to $\mathsf{oper}$, else if it finds a match in its state lists it removes it from the respective list, else it outputs $\bot$. If $\publickey{i}{} \in \mathsf{PL}_{\mathsf{BC}} \lor \publickey{i}{} \in \mathsf{PL}_{\mathsf{MSP}}$, it also informs $\localadmin{i}$.
\item $\mathsf{GroupRevoke} (\publickey{i}{},\nstate{\localadmin{}}[ \mathsf{AL},\mathsf{SL}])$ is initiated by a Local Administrator to revoke credentials of an aggregator or sensor in its group. $\localadmin{}$ checks if $\publickey{i}{} \in [ \mathsf{AL},\mathsf{SL}]$. If it finds a match and $\publickey{i}{} \in \mathsf{AL}$, it $\localadmin{i}$ computes $\sign{\secretkey{\localadmin{i}}{}}{\publicagg{i}{},"R"}{\sigma}$. Sends $(\sigma,\publicagg{i}{},"R")$ to $\mathsf{MSP}$. On receiving successful removal from $\mathsf{MSP}$ (after it invokes $\mathsf{NodeRevoke}$), $\localadmin{}$ removes $\publickey{i}{}$ from $\mathsf{AL}$. If $\publickey{i}{} \in \mathsf{SL}$, it invokes $\mathsf{AggRevokeSensor}$ with all $\aggr{}{} \in \mathsf{AL}$. After successful completion, it removes $\publickey{i}{}$ from $\mathsf{SL}$.
\item $\mathsf{AggRevokeSensor} \{\localadmin{i}(\secretkey{\localadmin{}}{},
\publicsens{}{}) \leftrightarrow \aggr{i}{j}(
\nstate{\aggr{i}{j}}[\mathsf{CL}]) \}$ is initiated by a Local Administrator as a subroutine of $\mathsf{GroupRevoke}$ to revoke credentials of a sensor in its group.
\begin{enumerate}
\item $\localadmin{i}$ computes $\sigma := \mathsf{Sign}(\secretkey{\localadmin{i}}{},\publicsens{}{}``R")$. Send $\sigma$ to $\aggr{i}{j}$.
\item $\aggr{i}{j}$ computes $\svrfy{\publickey{\localadmin{i}}{}}{\publicsens{}{}}{\sigma}$. Checks that $(\publicsens{}{} \in \nstate{\aggr{i}{j}}[\mathsf{CL}]) \land b == 1$. If the verification holds, it removes $\publicsens{}{}$ from $\mathsf{CL}$ and returns ``1" to $\localadmin{i}$. It returns ``0" in all other cases.
\end{enumerate}
\end{enumerate}
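As referenced in the $\mathsf{AggrAgree}$ description above, the following toy sketch illustrates the conflict check only; aggregator state is an in-memory dictionary, EU-CMA signatures are omitted, and all names are illustrative:
\begin{verbatim}
# Toy sketch of the AggrAgree conflict check (no real signatures).
def aggr_agree(records, sensor_pk, msg, sig):
    # records maps (sensor_pk, sig) -> msg as seen by this aggregator.
    # Returns 'alarm' if the same signature was already seen with a
    # different message (possible MITM), else stores the record.
    key = (sensor_pk, sig)
    if key in records and records[key] != msg:
        return "alarm"
    records[key] = msg
    return "ok"

a1 = {}
assert aggr_agree(a1, b"pk", b"t=21C", b"sig1") == "ok"
# a tampered copy of the same signed packet arrives later:
assert aggr_agree(a1, b"pk", b"t=99C", b"sig1") == "alarm"
\end{verbatim}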
\section{Constructions}
\label{constr}
\label{mitm}
\input{iot-primitive}
\subsection{Overall \textsc{BBox-IoT}\xspace Construction}
\label{bbox-costr}
Our \textsc{BBox-IoT}\xspace system consists of the following components, shown in Figure \ref{secmodelfig-intro}, which illustrates our modifications to the Hyperledger Fabric architecture.
\begin{figure}
\includegraphics[width=0.47\textwidth]{secmodel-intro.png}
\caption{\textsc{BBox-IoT}\xspace construction overview} \label{secmodelfig-intro}
\iffull
\else
\vspace{-0.15in}
\fi
\end{figure}
\begin{itemize}[leftmargin=*]
\item A (trusted) Membership Service Provider\footnote{The MSP also
includes the system administrator.} $\mathsf{MSP}$, which resembles a
Trusted Party, and is responsible for authorizing participation in
the system. The $\mathsf{MSP}$ bootstraps the system and forms the genesis
block, which contains hardcoded information on its public key and
the consensus algorithm. The genesis block also initializes the
authorized system participants and the system policy (denoted by
$\mathsf{Pol}$), both of which can be changed later.
\item A permissioned blockchain $\mathsf{BC}$, which consists of
normal ``transaction" blocks and special ``configuration" blocks.
\item A configuration $\mathsf{Config}$ for $\mathsf{BC}$, containing
membership information for local administrators, orderers and
aggregators, as well as system policy data. As in Hyperledger
Fabric, $\mathsf{Config}$ is stored in the configuration blocks.
\item A set of orderer nodes $\mathcal{O}: \{\orderer{1},
\orderer{2},...,\orderer{\ell} \}$, responsible for achieving
consensus on forming new blocks. These nodes are assumed static,
although the system can be extended to handle dynamic membership.
\item A set of device groups $\mathcal{G}: \{\iotGroup{1},
\iotGroup{2},...,\iotGroup{n} \}$. On each group $\iotGroup{i}$
there exist:
\begin{itemize}
\item A local administrator $\localadmin{i}$, responsible for its group membership, which includes a set of aggregators and sensors. For $\localadmin{i}$ to add or remove an aggregator in the system, it must also have consent from the $\mathsf{MSP}$; however, it does not need permission to handle sensor membership.
\item A set of aggregators $\aggrset{i} : \{\aggr{i}{1}, \aggr{i}{2}, ... , \aggr{i}{m}\}$, which also have the role of \emph{peers} in Hyperledger Fabric. We assume aggregators can perform regular cryptographic operations and aggregate data received from sensors. As discussed in our modified Hyperledger design, they also briefly take the role of a ``client".
\item A set of sensors $\sensset{i} : \{\sens{i}{1}, \sens{i}{2}, ... ,\sens{i}{k} \} $, which are assumed to be resource-constrained devices. These would be the equivalent of \emph{clients} in the original Hyperledger Fabric architecture, but here they are assumed to only broadcast their data to nearby group aggregators, without expecting a confirmation. The only step where interaction occurs is during initial setup, where they exchange their public key and other initialization data with the group administrator. We also assume that sensors can only perform basic cryptographic operations (i.e., hashing), meaning they cannot perform public-key operations that require exponentiations.
\end{itemize}
\end{itemize}
We first describe the initialization process for the system's $\mathsf{MSP}$
and genesis block $\block{0}$. After generating its keys, $\mathsf{MSP}$
bootstraps the system with pre-populated participation whitelists of
orderers, local group administrators, and aggregators (denoted by
$\mathsf{OL}, \mathsf{LL}$ and $\mathsf{PL}$ respectively) and a
pre-defined system policy. Sensors do not need to be tracked by the
$\mathsf{MSP}$, as participation authorization for sensors is delegated to the
group local administrators. Local administrators control authorization
privileges with a respective sensor whitelist denoted by
$\sensorList{}$, and they also keep a whitelist of group aggregators
denoted by $\mathsf{AL}$.
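As a concrete (hypothetical) illustration, the genesis configuration can be thought of as a simple record; all field names below are our own shorthand rather than the exact on-chain encoding.
\begin{verbatim}
# Hypothetical shape of the genesis configuration; field names
# are illustrative, not the exact encoding.
genesis = {
    "msp_pk":    "<MSP public key>",    # hardcoded
    "consensus": "<consensus alg id>",  # hardcoded
    "OL": ["<orderer pk>"],             # orderer whitelist
    "LL": ["<local admin pk>"],         # local admin whitelist
    "PL": ["<aggregator pk>"],          # aggregator whitelist
    "policy": {"threshold": 3},         # e.g., required signatures
}
\end{verbatim}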
Furthermore, we detail the functionality of reading or updating the system's
configuration, including the permissioned participants and the system
policy. Orderers and local administrators can only be authorized for
participation by the $\mathsf{MSP}$, while aggregators need their local
administrator's approval as well. As discussed above, sensor
participation is handled by the local administrators; however, group
aggregators also keep track of group participation for sensors in a
passive manner. The local administrators are also responsible for
revoking participation rights for aggregators and sensors belonging to
their group. In general, granting or revoking participation privileges
is equivalent to adding or removing the participant's public key from
the respective whitelist. \iffull Note that membership verification
can also be handled by accumulators~\cite{EPRINT:BCDLRS17,C:CamLys02}
instead of whitelists to achieve lists of constant size, however we
keep whitelists for simplicity purposes. \fi
Furthermore, on a high-level, sensors ``blindly'' broadcast their data
as signed transactions. Nearby aggregators (belonging to the same
device group) receive and verify the data and collect the required
amount of signatures from other aggregators in the system (as defined
by the system policy), and then submit the signed transaction to the
ordering service. The orderers then run the consensus protocol to
``package'' the collected transactions into a blockchain block.
Finally, the block is sent back to the aggregators, who, as the
blockchain ``maintainers'', append it to the blockchain. The core
system functionalities are shown in Construction \iffull
\ref{constr:algorithms}, and we provide a detailed description of all
system algorithms and protocols in Appendix
\ref{sec-model-defs}. \else \ref{constr:algorithms}. \fi
\renewcommand{\figurename}{Construction}
\setcounter{figure}{1}
\begin{figure}
\fbox{\begin{minipage}{\linewidth}
\begingroup
\fontsize{8pt}{12pt}\selectfont
\begin{itemize}[leftmargin=*]
\item[] $\mathsf{SensorJoin} $
\begin{itemize}
\item Sensor generates a seed uniformly at random and generates
  the hash chain through the $\mathsf{OTKeyGen}$ algorithm (the
  computation is outsourced to a powerful device)
\item Sensor stores the hash chain ``pebbles'' in its memory and
  outputs the last element of the chain as its public key to the
  $\localadmin{}$
\end{itemize}
\item[] $\mathsf{SensorSendData} $
\begin{itemize}
\item Sensor computes signature $\sigma$ for broadcasted data $m$ using $\mathsf{OTSign}$ algorithm
\item $\sens{i}{j}$ broadcasts $\sigma$ to aggregators in group.
\item Each aggregator, after verifying the signature through $\mathsf{OTVerify}$, checks if any other aggregator received a conflicting message. It adds the message-signature pair to its local state, pending blockchain submission.
\end{itemize}
\item[] $\mathsf{AggrSendTx}$
\begin{itemize}
\item Aggregator parses its local state for pending blockchain operations as a transaction.
\item Aggregator computes a signature on the transaction and sends it to other aggregators.
\item Each aggregator, after verifying the signature and the sender's membership in the system, signs the transaction.
\item The sending aggregator submits the signed transaction to the ordering service after collecting the necessary number of signatures, as dictated by the system policy.
\item Each orderer, after verifying the signatures, runs the consensus algorithm, which outputs a blockchain update operation.
\item The blockchain update is received by the aggregators, who update the blockchain state.
\end{itemize}
\item[] $\mathsf{SensorTransfer}$
\begin{itemize}
\item Aggregator encrypts the state for the sensor under the receiving aggregator's $\publickey{}{}$ (i.e., the most recently received $\secretkey{i}{}$) and submits it to the blockchain using $\mathsf{AggrSendTx}$. Sensor is removed from the device group and is transferred to the new group.
\item Receiving aggregator decrypts state from the blockchain and resumes verification of received data from sensor.
\end{itemize}
\end{itemize}
\endgroup
\end{minipage}}
\caption{\textsc{BBox-IoT}\xspace core algorithms and protocols}
\iffull
\else
\vspace{-0.15in}
\fi
\label{constr:algorithms}
\end{figure}
\renewcommand{\figurename}{Figure}
\setcounter{figure}{3}
\emph{Sensor join:} Defined by the $\mathsf{SensorJoin}()$ protocol between a
sensor and a local administrator. This is the only phase where a
sensor interacts with the system, as the $\localadmin{}$ generates a new
hash chain and its associated pebbles on a powerful device. The
pebbles are then loaded onto the sensor, and $\localadmin{}$ updates the
group aggregators with the new sensor's public key.
\emph{Sensor broadcast:} Defined by the $\mathsf{SensorSendData}()$ protocol
between a sensor and group aggregators. For some data $m$, the sensor
computes the one-time hash-based signature using $\mathsf{OTSign}()$,
and the signed data $m,\sigma$ is broadcast to all group aggregators.
If any aggregator receives a different signed message $m',\sigma'$,
the message is discarded; otherwise it remains in the
aggregator's pending memory for processing.
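This conflict check can be illustrated with a short Python sketch; the dictionary keying and function name are our own simplification of the aggregator's pending state.
\begin{verbatim}
# Hedged sketch: remember the message seen under each one-time
# key; a different message verifying under the same key signals
# a possible MITM, so the data is discarded.
pending = {}  # one-time key k_i -> message m

def accept(k_i, m):
    if k_i in pending and pending[k_i] != m:
        del pending[k_i]  # conflicting broadcast: drop
        return False
    pending[k_i] = m      # pending blockchain submission
    return True
\end{verbatim}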
\emph{Aggregator transaction:} Defined by the $\mathsf{AggrSendTx}()$ protocol
between aggregators and orderers. For an aggregator to submit
aggregated data to the blockchain, it first needs to collect the
required signatures from other aggregators. It then submits the signed
transaction to the ordering service, which in turn executes the
$\consensus()$ algorithm to construct a block from a set of signed
transactions. Finally, the block is transmitted to the aggregators,
who, as the blockchain maintainers, append it.
\emph{Sensor transfer:} Defined by the
$\mathsf{SensorTransfer}$ algorithm, executed when a sensor is
transferred to a new location or device group. The handing-over
aggregator saves its state of our signature scheme w.r.t. that sensor
and encrypts it on the blockchain under the receiving aggregator's
public key. After the sensor transfer, the receiving aggregator decrypts
that state and resumes message verification.
Optionally in our construction, a symmetric group key $\groupKey{}$
can be shared between each group's local administrator, aggregators
and sensors for confidentiality purposes. However, the additional
encryption operations have an impact mainly on sensors, which have
constrained computational and storage resources. Note that using such
a key for authentication or integrity would be redundant, since these
properties are satisfied using public keys existing in the appropriate
membership lists, and revocation operations can still be performed at
an equivalent cost using those lists.
\subsection{Security Analysis}
\label{constr:sec-analysis}
\iffull
\begin{thm} [informal]
The construction in Section \ref{bbox-costr} satisfies
participant authentication (\ref{partic-auth}), sensor health \ref{sensor-health} and device group safety properties (\ref{partic-malic}) assuming $(\mathsf{SignGen}$, $\mathsf{Sign}$, $\mathsf{SVrfy})$ is an existentially unforgeable under a chosen-message attack signature scheme, $(\mathsf{OTKeyGen},\mathsf{OTSign},\mathsf{OTVerify})$ is an unforgeable one-time chain based signature scheme, $\mathsf{MSP}$ is honest and not compromised and the consensus scheme $(\mathsf{TPSetup}$, $\mathsf{PartyGen}$, $\mathsf{TPMembers}, \consensus)$ satisfies the consistency property.
\end{thm}
\noindent \textit{Proof Sketch.}
We now provide a proof sketch arguing about the security of our scheme.
\noindent \textit{\ref{partic-auth} Participant Authentication.}
We require that only authenticated participants can participate in the different functions of our protocol. We argue that an adversary who breaks the participant authentication property can be used to break unforgeability of the underlying signature scheme.
Specifically:
\begin{enumerate}
\item For property \ref{partic-auth-a}, in protocol $\mathsf{OrdererAdd}$ (coupled with $\mathsf{ConfigUpdate}$) the use of an unforgeable signature scheme guarantees that no one but the MSP can authenticate orderers, while in protocol $\consensus$ the same scheme guarantees that only the authenticated orderers can perform this core functionality. Also, recall that an adversary able to authenticate orderers could break the immutability property.
\item For property \ref{partic-auth-b}, in protocol $\mathsf{LAdminAdd}$ (coupled with $\mathsf{ConfigUpdate}$) the use of an unforgeable signature scheme guarantees that no one but the MSP can authenticate Local Administrators, while $\mathsf{AggrSetup}, \mathsf{AggrAdd}, \mathsf{AggrUpd}$, $\mathsf{SensorSendData}$ and $\mathsf{GroupRevoke}$ guarantee that only the authenticated local administrators can perform these functionalities.
\item For property \ref{partic-auth-c}, in protocol $\mathsf{AggrAdd}$ (coupled with $\mathsf{ConfigUpdate}$) the use of an unforgeable signature scheme guarantees that no one but the MSP can authenticate Aggregators, while in $\mathsf{AggrUpd}$ and $\mathsf{AggrSendTx}$ the same scheme guarantees that only the authenticated aggregators can perform these functionalities.
\end{enumerate}
\noindent \textit{\ref{sensor-health} Sensor health.} In order for an
adversary to impersonate/clone a sensor, it would either have
to break the unforgeability of our signature scheme, or launch a MITM
attack which is a potential attack vector as discussed in Section
\ref{our-primitive}.
As discussed in Section \ref{threatmdl}, we consider jamming attacks
at the physical layer outside the scope of this paper. Given the
nature of our setting where a sensor's broadcast has typically short
range, we consider MITM and message injection attacks hard and
unlikely to launch but we still consider them as part of our threat
model. Even in these unlikely scenarios, MITM attacks can be easily
mitigated in \textsc{BBox-IoT}\xspace. A first approach for detecting such attacks is
to leverage blockchain properties: aggregators can compare data
received from a sensor at the blockchain level. Our assumption here is
that sensor data can be received by more than one aggregator in the
vicinity of the sensor, which is a reasonable scenario for typical dense
IoT deployments. If there is even one dissenting aggregator, possibly
the victim of a MITM attack, all the associated data are considered
compromised and disregarded, and the operator is notified of the
detected data discrepancy. The above approach, while simple, still
permits a MITM attacker to ``eclipse'' a sensor from the system using
a jamming attack.
An alternative approach is to perform a proactive check at the group
level, where each aggregator verifies the validity of its received data
by comparing it with other aggregators before even submitting it to
the blockchain. In both strategies, the attacker's work
increases significantly, because he would need to launch simultaneous
MITM attacks between the sensor and all aggregators in the vicinity. We
adopt the second approach in our Construction in Appendix
\ref{sec-model-defs}.
The above properties and strategies ensure that only data broadcasted
by authenticated sensors are accepted by aggregators in
$\mathsf{SensorSendData}$.
\noindent \textit{\ref{partic-malic} Device group safety.} An
adversary wanting to break device group safety would either
have to add or revoke aggregators or sensors in an existing group
through $\mathsf{AggrAdd}$, $\mathsf{AggrUpd}$ or $\mathsf{GroupRevoke}$, thus breaking unforgeability
of the signature scheme used in these protocols, or interfere with
existing authenticated sensors in a group through $\mathsf{SensorSendData}$, by
breaking unforgeability of the one-time chain-based signature scheme.
\qed
Integrity, non-repudiation and data provenance requirements
(\ref{sec-nonrepud}) are core properties of
any digital signature scheme and are thus directly satisfied in \textsc{BBox-IoT}\xspace.
Additionally, we argue that our system is DoS-resilient (\ref{sec-dos-resil}) in the following scenarios:
\begin{itemize}
\item $\mathsf{MSP}$ offline or not available: The core system functionality is not affected, although there can be no configuration changes in the system. All algorithms and protocols (except those involving adding or revoking orderers, local administrators or aggregators or those involving system policy changes) perform authentication through the configuration blocks and not the $\mathsf{MSP}$ itself.
\item Orderers unavailable: Reduces to the tolerance properties of the consensus algorithm.
\item $\localadmin{}$ unavailable: The core system functionality is not affected, although there can be no administrative operations in the respective group.
\item $\aggr{}{}$ unavailable: Only the respective groups' transactions are not processed. However, if more than $\tau$ aggregators are unavailable, as required in $\mathsf{AggrSendTx}$, no transactions can be processed in the whole system.
\end{itemize}
Also, an adversary might attempt to flood an aggregator by broadcasting messages and arbitrary signatures. In this scenario, the aggregator could be overwhelmed: by running $\mathsf{OTVerify}$ for each message-signature pair separately, it would have to check the signature against all hash chain values up to the first public key. To mitigate this, we propose checking only a few hashes back in the chain (defined by a system parameter ``maxVerifications" as shown in Algorithm \ref{measurements-aggregator-pseudocode} in the Appendix). This parameter can be set by the local administrator but should be carefully selected. A small value might result in frequent re-initializations for the sensors: if a long network outage occurs between a sensor and an aggregator and they lose ``synchronization", the local administrator has to reinitialize the sensor in the device group. On the other hand, a large value would amplify the impact of DoS attacks.
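A minimal sketch of this bounded verification, assuming the signature format of our chain-based scheme (Section \ref{our-primitive}) with SHA-256, could look as follows; caching the most recently verified key as an ``anchor" is our own simplification.
\begin{verbatim}
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

# Hash back at most max_verifications steps from the disclosed
# key to the cached anchor, capping attacker-induced work.
def bounded_verify(anchor, m, sigma, max_verifications):
    sigma1, k_i = sigma[:32], sigma[32:]
    if h(m + h(k_i)) != sigma1:
        return None
    x = k_i
    for _ in range(max_verifications):
        x = h(x)
        if x == anchor:
            return k_i  # new anchor for the next message
    return None  # out of sync: sensor must be re-initialized
\end{verbatim}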
Policy and configuration security (\ref{sec-pol-conf}) is ensured by algorithms $\mathsf{ConfigUpdate}$ and $\mathsf{ReadConfig}$, as the first algorithm creates a special configuration transaction signed by the $\mathsf{MSP}$ and the second returns configuration data originating from such a transaction.
Revocation (\ref{sec-revoc}) is made possible by $\mathsf{NodeRevoke}$ and $\mathsf{GroupRevoke}$ (in conjunction with the whitelists used throughout all system protocols and algorithms). Also, the unforgeability of the underlying signature scheme ensures that only the $\mathsf{MSP}$ (and, for aggregators and sensors, the respective $\localadmin{}$) can revoke these credentials.
\noindent \textbf{Remark.} One might suggest to use MACs instead of our proposed signature scheme for sensor authentication. We discuss this in Appendix \ref{constr:notmacs}.
\else
Given the threat model discussed in Section \ref{threatmdl}, most of
the security properties (all but \ref{sensor-health} and
\ref{sec-dos-resil}) rely on the security of the underlying signature
scheme and consensus properties. As it is straightforward to prove
security for these, we focus on the \textit{sensor health} security
property (\ref{sensor-health}), which includes resilience to MITM
attacks, and on DoS resilience (\ref{sec-dos-resil}).
In order for an adversary $\mathcal{A}$ to impersonate/clone a sensor, it
would either have to break the unforgeability of our signature scheme,
or launch a MITM attack which is a potential attack vector as
discussed in Section \ref{our-primitive}.
As discussed in Section \ref{threatmdl}, we consider jamming attacks
at the physical layer outside the scope of this paper. Given the
nature of our setting where a sensor's broadcast has typically short
range, we consider MITM and message injection attacks hard and
unlikely to launch but we still consider them as part of our threat
model. Even in these unlikely scenarios, MITM attacks can be easily
mitigated in \textsc{BBox-IoT}\xspace. A first approach for detecting such attacks is
to leverage blockchain properties: aggregators can compare data
received from a sensor at the blockchain level. Our assumption here is
that sensor data can be received by more than one aggregator in the
vicinity of the sensor, which is a reasonable scenario for typical dense
IoT deployments. If there is even one dissenting aggregator, possibly
the victim of a MITM attack, all the associated data are considered
compromised and disregarded, and the operator is notified of the
detected data discrepancy. The above approach, while simple, still
permits a MITM attacker to ``eclipse'' a sensor from the system using
a jamming attack.
An alternative approach is to perform a proactive check at the group
level, where each aggregator verifies the validity of its received data
by comparing it with other aggregators before even submitting it to
the blockchain. In both strategies, the attacker's work
increases significantly, because he would need to launch simultaneous
MITM attacks between the sensor and all aggregators in the vicinity.
Additionally, we argue that our system is DoS-resilient (\ref{sec-dos-resil}) in the following scenarios:
\begin{itemize}[leftmargin=*]
\item $\mathsf{MSP}$ offline or not available: The core system functionality is not affected, although there can be no configuration changes in the system. All algorithms and protocols (except those involving adding or revoking orderers, local administrators or aggregators or those involving system policy changes) perform authentication through the configuration blocks and not the $\mathsf{MSP}$ itself.
\item Orderers unavailable: Reduces to tolerance properties of the consensus algorithm.
\item $\localadmin{}$ unavailable: The core system functionality is not affected, although there can be no administrative operations in the respective group.
\item $\aggr{}{}$ unavailable: Only the respective groups' transactions are not processed. \iffull However, if more than $\tau$ aggregators are unavailable, as required in $\mathsf{AggrSendTx}$, \else However, if the number of unavailable aggregators exceeds a certain threshold, \fi no transactions can be processed in the whole system.
\end{itemize}
Also, an adversary might attempt to flood an aggregator by broadcasting messages and arbitrary signatures. In this scenario, the aggregator could be overwhelmed: by running $\mathsf{OTVerify}$ for each message-signature pair separately, it would have to check the signature against all hash chain values up to the first public key. To mitigate this, we propose checking only a few hashes back in the chain (defined by a system parameter ``maxVerifications" as shown in Algorithm \ref{measurements-aggregator-pseudocode}). This parameter can be set by the local administrator but should be carefully selected. A small value might result in frequent re-initializations for the sensors: if a long network outage occurs between a sensor and an aggregator and they lose ``synchronization", the local administrator has to reinitialize the sensor in the device group. On the other hand, a large value would amplify the impact of DoS attacks.
\input{pseudocode}
\fi
\section{Introduction}
The commercial success of low Size Weight and Power (SWaP) sensors and
IoT devices has given rise to new sensor-centric applications
transcending traditional industrial and closed-loop
systems~\cite{zou2018towards,derhamy2015survey}. In their most recent
Annual Internet Report~\cite{cisco2020}, CISCO estimates that there
will be 30 billion networked devices by 2023, which is more than three
times the global population. While very different in terms of their
hardware and software implementations, Industrial IoT (IIoT) systems
share common functional requirements: they are designed to collect
data from a large number of low-SWaP sensor nodes deployed at the
edge. These nodes, which we refer to as edge \emph{sensors}, are
resource-constrained devices used in volume to achieve a broader
sensing coverage while maintaining low cost. Thus, while capable of
performing simple operations, low-SWaP sensors usually depend on
battery power, are equipped with limited storage, and have low
processing speed~\cite{8534563}.
In practice, edge sensors are usually controlled by and report to more
powerful gateway devices (which we refer to as \emph{aggregators})
that process and aggregate the raw sensory data. For instance, in an
Industrial IoT (IIoT) environment, sensors such as temperature sensors
broadcast their measurements to the network router, which in turn
submits them to the cloud through the Internet. Until
recently, due to processing and storage constraints, many IoT designs
were geared towards direct to cloud aggregation and data
processing. However, latency, bandwidth, autonomy and data privacy
requirements for IoT applications keep pushing the aggregation and
processing of data towards the edge \cite{lin2019computation}. In
addition, in most use cases, IoT devices need to be mutually
\emph{authenticated} to maintain system integrity, and the data origin
has to be verified to prevent data pollution
attacks~\cite{DBLP:journals/sj/LiuXG14,5638628} and ``model
poisoning'' attacks, where an attacker who has compromised a number of
cooperating nodes aims to reduce the accuracy of, or even inject
backdoors into, the resulting analysis
models~\cite{DBLP:conf/icml/BhagojiCMC19,gu2019badnets}.
The use of distributed, immutable ledgers has been proposed as a
prominent solution in the IoT setting allowing rapid detection of
inconsistencies in sensory data and network communications, providing
a conflict resolution mechanism without relying on a trusted
authority~\cite{DBLP:conf/icse/BellLBS17}. A number of relevant schemes have been proposed in the
literature~\cite{DBLP:conf/lcn/ProfentzasAL19,Shafagh:2017:TBA:3140649.3140656},
which \iffull as we discuss in Section~\ref{relwork}, \fi propose
various ways to integrate distributed ledgers (commonly referred to as
\emph{Blockchain}) with IoT.
\noindent \textbf{The Challenge:} One of the main roadblocks for using
Blockchain-based systems as ``decentralized'' databases for sharing
and storing collected data is their dependency on asymmetric
authentication techniques. Typically, to produce authenticated data
packets, sensors have to digitally sign the data by performing public
key cryptographic operations, which are associated with expensive sign
and verification computations and large bandwidth
requirements. Although some high-end consumer sensor gateways and
integrated sensors might be capable of performing cryptographic
operations, a large number of edge sensors have limited computational
power, storage and energy~\cite{boyer2017distributed,KARRAY201889}.
To make matters worse, sensors try to optimize their power consumption
by entering a ``sleep'' state, resulting in intermittent
network connectivity and lack of synchronicity. Given such tight
constraints, an important challenge is allowing low-SWaP devices that
are extremely constrained both in terms of computational power and
memory (categorized as Class 0 in RFC 7228~\cite{RFC7228}; see
Section~\ref{measurements-scenario}) to authenticate to and utilize
a blockchain-based data-sharing infrastructure.
\noindent \textbf{Our Contributions:} We design and implement
\textsc{BBox-IoT}\xspace, a complete blockchain-based system for Industrial IoT
devices aimed to create a decentralized, immutable ledger of sensing
data and operations while addressing the sensor and data
authentication challenge for extremely constrained devices. We aim to
use our system as a ``black-box" that empowers operators of an
IIoT enclave to audit sensing data and operational information such as
IIoT communications across all IIoT devices.
To perform sensor and data authentication operations \emph{without}
relying on heavy cryptographic primitives, we introduce a novel
hash-based digital signature scheme that uses a one-time hash chain of
signing keys. While our design is inspired by the TESLA broadcast
authentication protocol~\cite{848446,Perrig02thetesla}, our approach
\emph{does not} require any timing and synchronicity assumptions
between signer and verifier. Overcoming the synchronicity
requirement is critical for low-SWaP devices since their internal
clocks often drift out of synchronization (especially those using low
cost computing
parts)~\cite{DBLP:journals/sensors/Tirado-AndresRA19,DBLP:journals/tii/ElstsFDOPC18}.
\iffull Our signature construction is proven secure assuming a
pre-image resistant hash function. Also, we achieve logarithmic
storage and computational (signature/verification) costs using
optimizations~\cite{jakobsson2002fractal,RSA:YSEL09}. \fi Our
proposed scheme further benefits from the broadcast nature of
wireless communication. Indeed, in combination with the immutable
blockchain ledger, we are able to ferret out man-in-the-middle attacks
in all scenarios where we have more than one aggregator in the
vicinity of the sensors. To bootstrap the authentication of sensor
keys, we assume an operator-initiated device bootstrap protocol that
can include either physical contact or wireless pairing using an
operator-verified ephemeral code between sensors and their receiving
aggregators. Our bootstrap assumptions are natural in the IoT setting,
where sensors often ``report" to specific aggregators, and allow us to
overcome the requirement for a centralized PKI. Note that our
signature scheme is of independent interest, in line with recent
efforts by NIST for lightweight cryptography~\cite{turan2019status}.
For the blockchain implementation where a \emph{consensus} protocol is
needed, we consider a \emph{permissioned} setting, where a trusted
party authorizes system participation at the aggregator level. Our
system supports two main types of IoT devices: low-SWaP sensors that
just broadcast data and self-reliant aggregators that collect the data
and serve as gateways between sensors and the blockchain. While our
system is initialized by a trusted operator, the operator is not
always assumed present for data sharing and is only required for
high-level administrative operations including adding or removing
sensors from the enclave. We build the consensus algorithms for
\textsc{BBox-IoT}\xspace using a modified version of Hyperledger Fabric
\cite{DBLP:journals/corr/abs-1801-10228}, a well known permissioned
blockchain framework, and leverage blockchain properties for
constructing our protocols tailored for constrained-device
authentication. However, \textsc{BBox-IoT}\xspace operations are designed to be
lightweight and do not use public key cryptography based on the RSA or
discrete logarithm assumptions, which are common, basic building
blocks of popular blockchain implementations. We describe our system
in detail, considering interactions between all participants, and argue
about its security.
We implemented and tested a \textsc{BBox-IoT}\xspace prototype in an IIoT setting
comprising extremely constrained sensors (Class 0 per RFC 7228). We
employed 8-bit sensor nodes with 16 MHz microcontrollers and 2 KB RAM,
broadcasting data every 10 seconds to a subset of aggregators (e.g.,
IIoT gateways), which in turn submit aggregated data to a cloud
infrastructure. The evaluation shows that the IIoT sensors can compute
our 64-byte signature in 50 ms, making our signature scheme practical
even for the least capable of IIoT platforms. Our evaluation section
shows results considering a sensor/gateway ratio of 10:1. When
compared with ECDSA signing operations, our scheme is significantly
more efficient, offering two and three orders of magnitude
speedups for signing and verification, respectively. Our theoretical
analysis and implementation show that we can achieve strong chained
signatures at half the signature size, which permits accommodating more
operations in the same blockchain environment. \textsc{BBox-IoT}\xspace is also over
50 times more energy-efficient, which makes our system ideal for
cost-efficient but energy-constrained edge IIoT devices and applications.
Finally, we adopt the same evaluation for Hyperledger Fabric considered in previous work~\cite{DBLP:journals/corr/abs-1801-10228} and estimate the end-to-end costs of \textsc{BBox-IoT}\xspace when running on top of our Hyperledger modification, showing it is deployable in our considered use-cases.
\subsection{Hyperledger}
One of the most promising examples of permissioned blockchains is the
Hyperledger project, established by the Linux Foundation and actively
supported by companies such as IBM and
Intel \cite{Hyperledger-architecture-vol1}. The Hyperledger project
aims to satisfy a wide range of business blockchain requirements, and
has developed several frameworks with different implementation
approaches, such as Hyperledger Fabric, Indy, Iroha and Sawtooth. Each
framework uses a different consensus algorithm, or even supports
``pluggable" (rather than hardcoded) consensus like Hyperledger
Fabric~\cite{DBLP:journals/corr/abs-1801-10228}.
To make pluggable consensus possible, Hyperledger Fabric introduces
the ``execute-order-validate" paradigm in its architecture, instead of
the traditional
``order-execute"~\cite{DBLP:journals/corr/abs-1801-10228}. In this
paradigm, the ``maintainer" participants are decoupled from the
``consensus" participants (called \emph{Peers} and \emph{Orderers}
respectively as we will see below), which eventually leads to
satisfying dynamic membership and scalability.
In the following we focus on Hyperledger Fabric, which fits our
system best, and provide a high-level description of its
architecture (shown in Figure \ref{HLF-fig}).
Its main components are categorized as follows:
\begin{enumerate}
\item \textbf{Clients} are responsible for creating a transaction and submitting it to the peers for signing. After collecting a sufficient number of signatures (as defined by the system policy), they submit their transaction to the orderers for including it in a block. Client authentication is delegated to the application.
\item \textbf{Peers} are the blockchain maintainers, and are also responsible for endorsing clients' transactions. Notice that in the context of Hyperledger, ``Endorsing'' corresponds to the process of applying message authentication.
\item \textbf{Orderers} collectively form the ``ordering service" for the system. After receiving signed transactions from the clients, the service establishes \emph{consensus} on the total order of a collected transaction set. The ordering service then delivers blocks to the peers, ensuring the consistency and liveness properties of the system.
\item The \textbf{Membership Service Provider} ($\mathsf{MSP}$) is responsible for granting participation privileges in the system.
\end{enumerate}
In its initial version (v0.6), Hyperledger Fabric used the Byzantine fault-tolerant PBFT consensus algorithm~\cite{Hyperledger-fabric-consensus-v0.6}, which only supports static membership for the ordering service participants. Its current version (v2.2) offers the \textit{Raft}~\cite{DBLP:conf/usenix/OngaroO14} consensus algorithm, which provides crash fault tolerance but not Byzantine fault tolerance, and thus cannot guarantee consensus in the presence of malicious nodes. Hyperledger Fabric could potentially use BFT-SMART in the future~\cite{DBLP:conf/dsn/BessaniSA14,DBLP:conf/dsn/SousaBV18}.
Regarding scalability, although the original version of Hyperledger Fabric has several potential bottlenecks in its architecture, proposals exist to improve its overall performance in terms of operations per second~\cite{DBLP:journals/corr/abs-1901-00910}. These proposals suggest storing blocks in a distributed peer cluster for further scalability improvements. Also, while operations (or transactions) per second is one scalability aspect in our setting, the other is the actual number of peers the system can practically support without heavily impacting its performance. To the best of our knowledge, this has only been experimentally shown for up to 100 peers~\cite{DBLP:journals/corr/abs-1801-10228}. However, this experiment indicates a low impact of the number of peers, especially if the network latency is low. Our setting allows such an assumption, since our use-case scenarios are all instantiated in a specific geographical location, as later discussed in Section \ref{measurements}.
\else
\subsection{Modifying Hyperledger Fabric}
\label{mod:hyperledger}
Hyperledger \cite{Hyperledger-architecture-vol1}\iffull \else, a well-known open-source blockchain platform in the permissioned model, \fi satisfies a wide range of business blockchain requirements, and has developed several frameworks, supporting different consensus algorithms or even ``pluggable" (rather than hardcoded) consensus like Hyperledger Fabric~\cite{DBLP:journals/corr/abs-1801-10228}.
Its main components are categorized as follows:
\begin{enumerate}[leftmargin=*]
\item \textbf{Clients} are responsible for creating a transaction and submitting it to the peers for signing. After collecting a sufficient number of signatures (as defined by the system policy), they submit their transaction to the orderers for including it in a block. Client authentication is delegated to the application.
\item \textbf{Peers} are the blockchain maintainers, and are also responsible for endorsing clients' transactions. Notice that in the context of Hyperledger, ``Endorsing'' corresponds to the process of applying message authentication.
\item \textbf{Orderers}, after receiving signed transactions from the clients, establish \emph{consensus} on the total order of a collected transaction set, deliver blocks to the peers, and ensure the consistency and liveness properties of the system.
\item The \textbf{Membership Service Provider} ($\mathsf{MSP}$) is responsible for granting participation privileges in the system.
\end{enumerate}
\fi
\iffull
\subsection{Modifying Hyperledger Fabric}
\label{mod:hyperledger}
While Hyperledger Fabric's \emph{execute-order-validate} architecture offers several advantages discussed previously, we cannot directly use it in our \textsc{BBox-IoT}\xspace system, since we assume that lightweight devices (which for Hyperledger Fabric would have the role of ``clients") are limited to only broadcasting data without being capable of receiving and processing.
\else
Directly using Hyperledger in \textsc{BBox-IoT}\xspace is not possible, since we assume that lightweight devices (which for Hyperledger Fabric would have the role of ``clients") are limited to only broadcasting data without being capable of receiving and processing.
\fi
In Hyperledger Fabric, clients need to collect signed transactions and send them to the ordering service, which is an operation that lightweight devices are typically not capable of performing.
\noindent \textbf{Our modification.} To address this issue, we propose a modification in the Hyperledger architecture. In our modified version, as shown in Figure \ref{HLF-fig-mod}, a client broadcasts its ``transaction" message to all nearby peer nodes. However, the transaction is handled by a \emph{specific} peer (which is equivalent to an aggregator, as we discuss in the next section), while peers not ``responsible" for that transaction disregard it. That specific peer then simultaneously assumes the role of the ``client" in the original Hyperledger architecture, while continuing to function as a peer node. As a client, it is responsible for forwarding this transaction to other peers and collecting the respective signatures, as dictated by the specified system policy, in a similar fashion to original Hyperledger Fabric.
It would then forward the signed transaction to the ordering service, and wait for it to be included in a block. The ordering service would send the newly constructed block to all peers, which would then append it to the blockchain.
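The following Python-style sketch summarizes this modified flow; all method and attribute names are illustrative, not actual Hyperledger Fabric APIs.
\begin{verbatim}
# Hedged sketch of the modified flow; names are illustrative.
def on_sensor_broadcast(peer, tx):
    if not peer.is_responsible_for(tx.sensor_id):
        return                      # other peers ignore it
    sigs = [peer.sign(tx)]          # peer now acts as "client"
    for other in peer.other_peers():
        s = other.endorse(tx)       # per system policy
        if s is not None:
            sigs.append(s)
        if len(sigs) >= peer.policy.threshold:
            break
    if len(sigs) >= peer.policy.threshold:
        peer.ordering_service.submit(tx, sigs)
    # the resulting block is delivered back to all peers,
    # which append it to the blockchain
\end{verbatim}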
\noindent \textbf{Security of our modifications.}
The proposed modifications of Hyperledger do not affect the
established security properties (i.e. Consistency and Liveness\iffull
as we define them in Appendix \ref{apdx:consensus} and interpreted
in \cite{DBLP:journals/corr/abs-1801-10228}\fi), since a peer node
simultaneously acting as a client can only affect the signing process
by including a self-signature in addition to other peers'
signatures. However, because the signing requirements are dynamically
dictated by the system policy, these could be easily changed to
require additional signatures or even disallow self-signatures to
prevent any degradation in security.
We also note that while this modified version of Hyperledger effectively becomes agnostic to the original client, which otherwise has no guarantees that its broadcasted transaction will be processed honestly, our threat model discussed in the next section captures the above trust model.
\subsection{Our Hash-based Signature Scheme}
\label{our-primitive}
\iffull Our construction is inspired by Lamport passwords~\cite{Lamport:onetime} and TESLA~\cite{848446,Perrig02thetesla} but \else Our construction is a digital signature scheme that only requires hashing as the main operation. While inspired by Lamport passwords~\cite{Lamport:onetime} and TESLA~\cite{848446,Perrig02thetesla}, it \fi \emph{avoids the need for any synchronization} between senders and receivers, which is a strong assumption in the IoT setting.
Instead, we assume the existence of a constant-sized state for both the sender and receiver between signing operations. Our scheme allows for a fixed number of messages to be signed, and has constant communication and logarithmic computation and storage costs
under the following requirements and assumptions:
\begin{itemize}[leftmargin=*]
\item There is \emph{no} requirement for time synchronization, and a verifier should only need to know the original signer's $\publickey{}{}$.
\item The verifier should immediately be able to verify the authenticity of the signature (i.e., without the ``key disclosure delay" required in the TESLA family of \iffull protocols, described in more detail in Section \ref{prel:onetime-sigs}). \else protocols). \fi
\item Network outages, interruptions or ``sleep'' periods can be resolved by requiring computational work from the verifier, proportional to the length of the outage.
\item We do not protect against Man-in-the-Middle attacks at the signature level; instead, we use the underlying blockchain to detect and mitigate such attacks, as we discuss later in Section \ref{constr:sec-analysis}.
\item The signer has very limited computation, power and storage capabilities, but can outsource a computationally-intensive pre-computation phase to a powerful system.
\end{itemize}
\renewcommand{\figurename}{Construction}
\setcounter{figure}{0}
\begin{figure}
\fbox{\begin{minipage}{0.96\linewidth}
\begingroup
\fontsize{8pt}{12pt}\selectfont
Let $h: \{0,1\}^{*} \rightarrow \{0,1\}^{\lambda}$ be a preimage resistant hash function.
\begin{itemize}[leftmargin=*]
\item[] $\otkeygen{\publickey{}{}}{\secretkey{n}{}}{s_{0}}{n}$
\begin{itemize}
\item sample a random ``private seed" $k_{n} \leftarrow\{0,1\}^{*}$
\item generate hash chain $\publickey{}{} = k_{0}= h(k_{1})= h(h(k_{2})) = ... = h^{i}(k_{i}) = h^{i+1}(k_{i+1}) = ...= h^{n-1}(k_{n-1}) = h^{n}(k_{n})$
\item hash chain creates $n$ pairs of $(\publickey{i}{},\secretkey{i}{})$ where:
\\ $(\publickey{1}{},\secretkey{1}{}) = (k_{0},k_{1}) = (h(k_{1}),k_{1})$,
\\ $(\publickey{2}{},\secretkey{2}{}) = (k_{1},k_{2}) = (h(k_{2}),k_{2})$,
\\ ... ,
\\ $(\publickey{i}{},\secretkey{i}{}) = (k_{i-1},k_{i}) = (h(k_{i}),k_{i})$,
\\ ...,
\\ $(\publickey{n}{},\secretkey{n}{}) = (k_{n-1},k_{n}) = (h(k_{n}),k_{n})$
\item initialize a counter $\mathsf{ctr}=0$, store $\mathsf{ctr}$ and pairs as $[(\publickey{i}{},\secretkey{i}{})]_{1}^{n}$ to initial state $s_{0}$
\item output $(\publickey{}{}=\publickey{1}{},\secretkey{n}{},s_{0})$.
\end{itemize}
\textbf{Note:} Choosing to store only $(\publickey{}{},\secretkey{n}{})$ instead of the full key lists introduces a storage-computation trade-off, which can be amortized by the ``pebbling" technique we discuss in this section.
\item[] $\otsign{\secretkey{i}{}}{\secretkey{i-1}{}}{s_{i}}{s_{i-1}}{m}{\sigma}$
\begin{itemize}
\item parse $s_{i-1}$ and read $\mathsf{ctr} \rightarrow i-1$
\item compute one-time private key $\secretkey{i}{} = k_{i}$ from $n-i$ successive applications of the hash function $h$ on the private seed $k_{n}$ (or read $k_{i}$ from $[\secretkey{}{}]_{1}^{n}$ if storing the whole list)
\item compute $\sigma = h(m||\publickey{i}{})||\secretkey{i}{} = h(m||k_{i-1})||k_{i} = h(m||h(k_{i}))||k_{i}$
\item increment $\mathsf{ctr} \rightarrow \mathsf{ctr} + 1$, store it to updated state $s_{i}$
\end{itemize}
\item[] $\otverify{\publickey{}{},n}{m}{\sigma}$
\begin{itemize}
\item parse $\sigma = \sigma_{1}||\sigma_{2}$ to recover $\sigma_{2}=k_{i}$
\item Output $b = (\exists j<n: h^{j}(k_{i}) =\publickey{}{}) \land (h(m||h(k_{i})) = \sigma_{1})$
\end{itemize}
\textbf{Note:} The verifier might choose to only store the most recent $k_{i}$ which verified correctly, and replace $\publickey{}{}$ with $k_{i}$ above resulting in fewer hash iterations.
\end{itemize}
\endgroup
\end{minipage}}
\caption{$n$-length Chain-based Signature Scheme}
\label{prel:onetime-sigs-constr}
\iffull
\else
\vspace{-0.05in}
\fi
\end{figure}
\renewcommand{\figurename}{Figure}
\setcounter{figure}{1}
\begin{figure}
\begin{tikzpicture}[>=stealth']
{[start chain]
\node[on chain] (A) {$k_{0}$};
\node[on chain,join=by {<-,"$h$"},right=of A] (B) {$k_{1}$};
\node[on chain,join=by {<-,"$h$"},right=of B] (C) {$k_{2}$};
\node[on chain,join=by {<-,"$h$"},right=of C] (D) {$k_{3}$};
\node[on chain,join=by {<-,"$h$"},right=of D] (E) {$k_{4}$};
\node[on chain,join=by {<-,"$h$"},right=of E] (F) {$k_{5}$};
}
\end{tikzpicture}
\caption{Key generation for $n = 5$ and seed $k_5$. The first signature uses $\publickey{}{} = k_{0}$ and $\secretkey{}{} = k_{1}$.}
\label{prel:onetime-sigs-ex}
\iffull
\else
\vspace{-0.15in}
\fi
\end{figure}
Our scheme, presented in Construction~\ref{prel:onetime-sigs-constr}, is a chain-based one-time signature scheme\iffull, secure under an adaptive chosen-message attack as formally defined in Definition~\ref{defn:OTS} in the Appendix\fi, where each key is derived from its predecessor as $k_{i} \leftarrow h(k_{i+1})$, $i \in \{n-1, n-2, \ldots, 0\}$, and $h$ is a preimage resistant hash function.
The keys, when used in pairs $(k_{i},k_{i-1})$, can be viewed as public-private key pairs of a one-time signature scheme, forming a one-way hash chain under consecutive applications of $h$. The key $k_{n}$ serves as the ``private seed" for the entire key chain. In the context of integrity, a signer with ``public key" $k_{i-1} = h(k_{i})$ has to use the ``private key" $k_{i}$ to sign its message. Since each key can only be used once, the signer then uses $k_{i} = h(k_{i+1})$ as its ``public key" and $k_{i+1}$ as its ``private key", and continues in this fashion until the key chain is exhausted.
For example as shown in Figure \ref{prel:onetime-sigs-ex}, we can construct a hash chain from seed $k_{5}$. For signing the 1st message $m_1$, the signer would use $(\publickey{1}{},\secretkey{1}{}) = (k_{0},k_{1})$ and output signature $\sigma = h(m_1||k_{0})||k_{1}$. Similarly, for the 2nd message he would use $(\publickey{2}{},\secretkey{2}{}) = (k_{1},k_{2})$ and for the 5th message $(\publickey{5}{},\secretkey{5}{}) = (k_{4},k_{5})$.
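For concreteness, the following Python sketch instantiates Construction~\ref{prel:onetime-sigs-constr} with SHA-256; for clarity it stores the whole chain instead of pebbles and omits the counter/state bookkeeping.
\begin{verbatim}
import hashlib, os

def h(x):
    return hashlib.sha256(x).digest()

def keygen(n):
    # chain[i] == k_i with k_{i-1} = h(k_i); pk = k_0
    chain = [os.urandom(32)]        # k_n (private seed)
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()
    return chain[0], chain          # (pk, state)

def sign(chain, i, m):
    # i-th signature: sigma = h(m || k_{i-1}) || k_i
    return h(m + chain[i - 1]) + chain[i]

def verify(pk, n, m, sigma):
    sigma1, k_i = sigma[:32], sigma[32:]
    if h(m + h(k_i)) != sigma1:
        return False
    x = k_i                         # walk back to pk
    for _ in range(n):
        x = h(x)
        if x == pk:
            return True
    return False

pk, state = keygen(5)
assert verify(pk, 5, b"m1", sign(state, 1, b"m1"))
\end{verbatim}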
Constructing the one-way hash chain described above, given the seed $k_{n}$, requires $O(n)$ hash operations to compute $k_{0} = h^{n}(k_n)$, which might be a significant computational cost for resource-constrained devices, as the length of the hash chain $n$ is typically large to offset the constraint of single-use keys. While we could pre-compute all the keys, reducing each access to an $O(1)$ lookup, we would then require $O(n)$ space, which is also limited on such devices.
Using efficient algorithms~\cite{jakobsson2002fractal,RSA:YSEL09}, we can achieve logarithmic storage and computational costs by placing ``pebbles" at positions $2^{j}$, $j = 1, \ldots, \left\lceil \log_2 n \right\rceil$, which as shown in Section \ref{measurements-sign-verif} makes our construction practical for resource-constrained devices. The verifier's cost is $O(1)$ when storing the most recently used $k_{i}$.
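The sketch below illustrates this storage-computation trade-off with \emph{static} pebbles at power-of-two indices; note that this simplified variant achieves the $O(\log n)$ storage, but not the amortized per-signature time, of the full fractal traversal algorithm~\cite{jakobsson2002fractal}, whose pebbles move as the chain is consumed.
\begin{verbatim}
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def make_pebbles(seed, n):
    # store k_j for j = n and all powers of two <= n,
    # where k_{j-1} = h(k_j) and k_n = seed
    positions = {1 << j for j in range(n.bit_length())
                 if (1 << j) <= n}
    pebbles, k = {n: seed}, seed
    for idx in range(n - 1, 0, -1):
        k = h(k)                    # k == k_idx
        if idx in positions:
            pebbles[idx] = k
    return pebbles

def get_key(pebbles, i):
    # recover k_i from the nearest pebble at index j >= i
    j = min(p for p in pebbles if p >= i)
    k = pebbles[j]
    for _ in range(j - i):
        k = h(k)
    return k
\end{verbatim}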
\iffull
\else
In the full version of the paper~\cite{fullversion} we present formal definitions of chain-based signatures and prove unforgeability of our scheme.
\fi
\begin{table}[]\centering
\caption{Hash-based scheme comparison.
}
\label{onewaychain_comparison}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|}
\hline
Scheme & Architecture & NoSync & NoDelay \\ \hline
TESLA~\cite{848446,Perrig02thetesla} & Chain & \xmark & \xmark \\ \hline
$\mu$TESLA 2-level chain \cite{NDSS:LiuNin03} & Chain & \xmark & \xmark \\ \hline
Sandwich, 1-level, light chain \cite{ACNS:HuJakPer05} & Chain & \xmark & \xmark \\ \hline
Comb Skipchain \cite{ACNS:HuJakPer05} & Chain & \cmark & \xmark \\ \hline
Short Hash-Based Signatures~\cite{CANS:DahKra09} & Chain & \cmark & \cmark \\ \hline
XMSS~\cite{PQCRYPTO:BucDahHul11} & Tree & \cmark & \cmark \\ \hline
BPQS \cite{EPRINT:CBHLNS18} & Chain & \cmark & \cmark \\ \hline
SPHINCS \cite{EC:BHHLNP15} & Tree & \cmark & \cmark \\ \hline
Our construction & Chain & \cmark & \cmark \\ \hline
\end{tabular}
}
\iffull
\else
\vspace{-0.15in}
\fi
\end{table}
\begin{table*}[]\centering
\caption{Hash-based scheme comparison for 256-bit messages and 256-bit security parameter. Sizes in bytes. $\mathbb{M}$, $\mathbb{F}$ and $\mathbb{H}$ denote MAC, PRF and hash operations respectively. $n$ denotes the length of chain-based schemes.}
\label{hashchain_comparison_concrete}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Scheme & $|\sigma |$ & $|\publickey{}{}|$ & $|\secretkey{}{}|$ & $\mathsf{Sign()}$ & $\mathsf{Verify()}$ \\ \hline \cline{6-6}
Short Hash-Based Signatures \cite{CANS:DahKra09} & $128 + \log_2 n$ & $32$ & $64(\left\lceil{\log_2(n)} \right \rceil+1)$ & $(\left\lceil{\log_2(n)} \right \rceil +3) \mathbb{H} + 3\mathbb{F} $ & $\left\lceil{\log_2(n)} \right \rceil$ \\ \hline
XMSS \cite{PQCRYPTO:BucDahHul11} & 2692 (4963) & 1504 (68) & 64 & 747$\mathbb{H}$ + 10315$\mathbb{F}$ & 83$\mathbb{H}$ + 1072$\mathbb{F}$ \\ \hline
BPQS \cite{EPRINT:CBHLNS18} & 2176 & 68 & 64 & 1073 $\mathbb{H}$ & 1073 $\mathbb{H}$ \\ \hline
SPHINCS \cite{EC:BHHLNP15} & 41000 & 1056 & 1088 & 386$\mathbb{F}$, 385 PRGs, 167519 $\mathbb{H}$ & 14060 $\mathbb{H}$ \\ \hline
Our Construction & 32(64) & 32 & 32 & $\left\lceil{\log_2(n)} \right \rceil \mathbb{H}$ & 1 $\mathbb{H}$ \\ \hline
\end{tabular}
\end{table*}
\textbf{Comparison and Discussion.}
Our scheme is directly comparable with the TESLA broadcast message authentication protocol~\cite{848446,Perrig02thetesla}, which follows a similar chain-based paradigm but requires some synchronicity between the sender and receiver, and the receiver can only verify a message after some delay. Several other chain-based schemes have been proposed \cite{NDSS:LiuNin03,ACNS:HuJakPer05,CANS:DahKra09}, forming a ``hierarchy'' of chains aiming to improve efficiency in various aspects. However, most of them do not avoid the synchronicity requirement and delayed verification; in fact, some even introduce additional requirements, e.g., special ``commitment distribution'' messages \cite{NDSS:LiuNin03}, without which a verifier cannot verify a long series of signatures if those messages are lost.
As our scheme is hash-based, we also compare with the family of hash-based signature schemes that follow a tree structure, e.g., XMSS~\cite{PQCRYPTO:BucDahHul11} and SPHINCS~\cite{EC:BHHLNP15}. While these schemes do not make any synchronicity assumptions, their performance is not suited for the low-SWaP sensors we consider (even with resource-constrained device optimizations \cite{PKC:HulRijSch16}\iffull, which we compare in detail in Appendix \ref{apdx:modsphincs}\fi).
In Table \ref{onewaychain_comparison} we compare with other hash-based schemes in terms of properties (i.e., no synchronicity or delays, denoted NoSync and NoDelay respectively). In Table \ref{hashchain_comparison_concrete} we provide a concrete comparison with the remaining schemes satisfying the above properties. In Section \ref{relwork} we discuss some of the above schemes in more detail.
The caveat of our scheme is that it is susceptible to Man-in-the-Middle attacks. Specifically, an attacker might intercept a signature packet in transit (thus learning the ``ephemeral'' private key) and replace it with an arbitrary message and signature. Nevertheless, such attacks are unlikely to be successful in our setting, as discussed later in Section \ref{constr:sec-analysis}.
\section{Performance Evaluation \& Measurements}
\label{measurements}
\subsection{The IIoT Setting With Constrained Devices}
\label{measurements-scenario}
IIoT environments are complex systems comprising heterogeneous devices that can be tracked at different organizational layers, namely the (a) computational, (b) network, and (c) sensor/edge layers \cite{wu2020convergence}. Devices at the higher levels are powerful servers dedicated to the analysis of data, storage, and decision making. They frequently reside outside the factory premises, i.e., in cloud infrastructures.
\begin{table}
\caption{Classes of Constrained Devices in terms of memory capabilities according to RFC 7228.}
\label{tab:constrcapab}
\resizebox{0.5\columnwidth}{!}{
\begin{tabular}{lll}
\hline
Name & RAM & Flash \\ \hline
Class 0 & $\ll$10 KiB & $\ll$100 KiB \\
Class 1 & $\sim$10 KiB & $\sim$100 KiB \\
Class 2 & $\sim$50 KiB & $\sim$250 KiB \\ \hline
\end{tabular}
}
\iffull
\else
\vspace{-0.15in}
\fi
\end{table}
On the other hand, on-site and at the edge layer reside a myriad of low-SWaP devices such as sensors and actuators, assigned the tasks of posting their data or reconfiguring their status based on received instructions. On typical real-life IIoT deployments, the processing speed of such devices ranges from tens of MHz (e.g., the Atmel AVR family) to hundreds of MHz (e.g., higher-end models of the ARM Cortex-M series). Diving even deeper, at the lower end of the spectrum one may observe sensor-like devices that are severely constrained in memory and processing capabilities.
Such extremely constrained devices have been considered by RFC 7228~\cite{RFC7228}
which underlines that ``most likely they will not have the resources required to communicate directly with the Internet in a secure manner''. Thus, the communication of such nodes must be facilitated by stronger devices acting as gateways that reside at the network layer. In Table \ref{tab:constrcapab} we provide a taxonomy of constrained devices residing at the edge of IIoT according to RFC 7228.
In this work, we consider a generic IIoT application scenario that involves Class 0 devices which are connected to more powerful IoT gateways in a sensor/gateway ratio of 10:1.
The chosen platforms and all experimental decisions were made to provide a realistic scenario under the following assumptions: (a) devices severely constrained in terms of computational power and memory resources (Class 0), and (b) moderately demanding communication frequency (i.e., one transmission every 10 seconds).
\subsection{Evaluation Setup}
Our testbed consists of Arduino UNO R3~\cite{arduino-uno-rev3} open-source microcontroller boards, equipped with an ATmega328P 16 MHz microcontroller and 2 KB SRAM, fitted with a Bluetooth HC-05 module. These devices are severely constrained and represent the minimum capabilities of the IoT sensors utilized in our experimental scenarios (Class 0 in Table \ref{tab:constrcapab}).
For the gateways, we use Raspberry Pi 3 Model B devices equipped with a Quad Core 1.2GHz BCM2837 64bit CPU and 1GB RAM.
We first focus on evaluating our system at the device group level\footnote{Our code is available at \url{https://github.com/PanosChtz/Black-Box-IoT}}.
We use the one-time signature scheme outlined in Construction~\ref{prel:onetime-sigs-constr} and SHA256 as the hash function $h()$.
The length of the hash chain \iffull as defined in section \ref{prel:onetime-sigs} \fi sets the upper bound on the number of one-time signatures each sensor $\sens{i}{}$ can generate. In the case where the sensor's available signatures are depleted, it enters an ``offline" state, and the Local Administrator $\localadmin{}$ needs to manually renew its membership in the system through the $\mathsf{SensorJoin}$ protocol. In a large-scale deployment of our system, however, frequent manual interventions are not desirable, so our goal is to pick a sufficiently large $n$ such that the available one-time signatures last for the sensor's lifetime.
As discussed above, and taking similar schemes' evaluations into account~\cite{TCHES:AmiCurZbi18}, we consider a frequency of one signing operation per 10 seconds for simplicity. We consider sensor lifetimes between 4 months as a lower and 21 years as an upper estimate (as shown in Table \ref{measurements-table}), which imply hash chains of $2^{20}$ and $2^{26}$ elements, respectively.
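For concreteness, at one signature per 10 seconds, a 4-month lifetime consumes roughly $4 \cdot 30 \cdot 24 \cdot 3600 / 10 \approx 1.04 \cdot 10^{6} \approx 2^{20}$ signatures, while 21 years consume roughly $21 \cdot 365 \cdot 24 \cdot 3600 / 10 \approx 6.6 \cdot 10^{7} \approx 2^{26}$.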
In the setup phase, we pre-compute the hash chain as needed by the pebbling algorithm~\cite{jakobsson2002fractal} and load the initial pebble values into the sensor. We first measure the actual needed storage on the sensor for various values of $n$. Note that for $n=2^{26}$, the lower bound for the needed storage using a 256-bit hash function is $26 \cdot 256 = 6656$ bits, i.e., 832 bytes of memory. Then we set the sensor device to communicate with the aggregator through Bluetooth in broadcast-only mode and measure the maximum number of signing operations that can be transmitted to the aggregator for various values of $n$, as well as the verification time needed on the aggregator side, since it will need to verify a large number of sensor messages. The fact that we are able to run \textsc{BBox-IoT}\xspace on Class 0 devices demonstrates the feasibility of our approach for all low-SWaP sensors.
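To make the mechanism concrete, the following is a minimal Python sketch of the hash-chain traversal underlying our measurements. The chain generation and the verify-by-traversal loop follow the description above; the message-binding tag and all names are illustrative placeholders rather than the exact Construction~\ref{prel:onetime-sigs-constr}, and a real sensor would store only the logarithmically many pebbles of~\cite{jakobsson2002fractal} instead of the full chain.
\begin{verbatim}
import hashlib


def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def make_chain(seed: bytes, n: int) -> list:
    # Build the one-way chain k_n -> ... -> k_0 with k_{i-1} = H(k_i).
    # For exposition we keep the whole chain in memory; a real sensor
    # stores only the O(log n) pebbles of the pebbling algorithm.
    chain = [b""] * (n + 1)
    chain[n] = seed
    for i in range(n, 0, -1):
        chain[i - 1] = H(chain[i])
    return chain  # chain[0] acts as the public commitment


def sign(chain: list, i: int, msg: bytes):
    # Illustrative message binding (a placeholder, NOT the paper's
    # exact Construction): a tag over the i-th secret key and the
    # message, together with the disclosed key itself.
    k_i = chain[i]
    return (i, H(k_i + msg), k_i)


def verify(pub: bytes, pub_idx: int, sig, msg: bytes) -> bool:
    # Hash the disclosed key back to the last key we know (pub at
    # position pub_idx). The loop length equals the number of missed
    # packets, which is why outage cost grows linearly.
    i, tag, k_i = sig
    if i <= pub_idx or tag != H(k_i + msg):
        return False
    x = k_i
    for _ in range(i - pub_idx):
        x = H(x)
    return x == pub


# Toy run with a short chain (the paper uses n up to 2**26).
chain = make_chain(b"\x00" * 32, 1000)
sig = sign(chain, 3, b"temp=21C")  # packets for i = 1, 2 were "lost"
assert verify(chain[0], 0, sig, b"temp=21C")
\end{verbatim}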
\subsection{Signing and Verification}
\label{measurements-sign-verif}
We run our experiments multiple times under different scenarios. Our evaluation results, shown in Table \ref{measurements-table}, represent the statistical average across all measurements. Note that for measuring the average signature verification time on the aggregator side, we assume that the aggregator is able to receive all the data broadcasted by the sensor. If a network outage occurs between them (and the sensor keeps transmitting during the outage), the aggregator, after reestablishing the connection, would have to verify the signature by traversing the hash chain back up to the last received secret key, which incurs additional computation time (in Figure \ref{hash-vs-ecdsa-outage} we show the associated verification cost on such occasions). As expected, the verification time is relatively constant in all measurements, about 0.031 ms on average. This suggests that such an aggregator could easily handle $10^{5}$ sensors transmitting data for verification (as we considered one transmission every 10 seconds for each sensor).
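To make the headroom explicit, a back-of-the-envelope check:
\begin{equation*}
10^{5}\ \text{sensors} \times \frac{1\ \text{signature}}{10\ \text{s}} \times 0.031\ \text{ms} \approx 0.31\ \text{s}
\end{equation*}
of verification work per second, i.e., verification occupies roughly a third of a single core even at that scale.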
Table \ref{measurements-table} shows that the pebble data structure consumes most of the required memory storage in our implementation, while the remaining program requires a constant amount of memory for any number of pebbles.
We also observe a slight impact of the number of pebbles on the total verification time, which is mainly determined by the sensor's capability to compute the signature on its message and the next secret key. For example, the sensor needs 50 ms to compute the next signature with $n=2^{26}$ and 49.95 ms for $n=2^{24}$. Also, by comparing the total verification time with the signature computation time, we conclude that the extra 14.3 msec is the cost of transmitting the signature.
\begin{table}[]
\caption{Evaluation for sensor-aggregator protocol - Average verification times}
\label{measurements-verif-timers}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|l|l|l|l|l|l|l|l|}
\cline{2-9}
& \multicolumn{4}{c|}{T2 ($\mu$sec)} & \multicolumn{4}{c|}{T3 (msec)} \\ \hline
\multicolumn{1}{|l|}{maxV} & 20 & 22 & 24 & 26 & 20 & 22 & 24 & 26 \\ \hline
\multicolumn{1}{|l|}{100} & 28.12 & 31.18 & 31.34 & 28.95 & 42.83 & 42.84 & 42.91 & 43.08 \\ \hline
\multicolumn{1}{|l|}{500} & 30.78 & 31.94 & 30.31 & 30.63 & 51.25 & 51.23 & 51.37 & 51.39 \\ \hline
\multicolumn{1}{|l|}{1000} & 31.39 & 30.96 & 31.14 & 30.74 & 55.27 & 55.35 & 55.36 & 55.41 \\ \hline
\multicolumn{1}{|l|}{2500} & 30.57 & 30.97 & 32.39 & 30.86 & 60.61 & 60.65 & 60.7 & 60.78 \\ \hline
\multicolumn{1}{|l|}{5000} & 33.26 & 31.7 & 31.66 & 31.43 & 64.66 & 64.74 & 64.79 & 64.83 \\ \hline
\multicolumn{1}{|l|}{10000} & 33.34 & 33.38 & 33.6 & 31.41 & 68.68 & 68.75 & 68.78 & 68.86 \\ \hline
\end{tabular}
}
\iffull
\else
\vspace{-0.15in}
\fi
\end{table}
In Table \ref{measurements-verif-timers} we provide a series of measurement results for the average verification time of one signature on the aggregator. By T2 we denote the verification time of a signature and by T3 the total verification time on the aggregator \iffull (we provide detailed algorithms for our measurements in Appendix \ref{apdx:evaldetails}). \else (as shown in Algorithm \ref{measurements-aggregator-pseudocode}).
\fi
The average total verification time (T3) increases noticeably as we require more verification operations (maxV) from the Arduino device. This happens because of dynamic memory fragmentation on the sensor as the pebbling algorithm updates the pebble values.
\paragraph{Comparison with ECDSA} We compare our lightweight scheme with ECDSA, which is commonly used in many blockchain applications. We assume IoT data payloads between 50 and 220 bytes, which can accommodate common data such as timestamps, attributes, source IDs and values. In Table \ref{sign-verify-costs} we show that our scheme is more efficient than ECDSA by two and three orders of magnitude for signing and verification, respectively. Even when considering larger payload sizes, which impact hash-based signature operations, our scheme remains much more efficient. However, the verification cost of our scheme increases linearly during network outages and, as shown in Figure \ref{hash-vs-ecdsa-outage}, it might become more expensive than ECDSA when more than 2400 signature packets are lost.
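The break-even point can be roughly estimated from the measured averages (a back-of-the-envelope estimate on our part; the effective per-hash traversal cost is inferred rather than measured in isolation):
\begin{equation*}
n_{\mathrm{lost}} \cdot t_{\mathrm{hash}} \gtrsim t_{\mathrm{ECDSA}} \approx 42.55\ \text{ms}
\quad\Rightarrow\quad
n_{\mathrm{lost}} \gtrsim \frac{42.55\ \text{ms}}{t_{\mathrm{hash}}},
\end{equation*}
so a per-hash cost in the $0.02$ ms range places the break-even in the low thousands of lost packets, consistent with the figure of about 2400 quoted above.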
Another metric we consider is energy efficiency, which is of particular importance in IoT applications that rely on a battery as the power source. Our experiments, depicted in Figure \ref{energy-efficiency}, show that our ATmega328P microcontroller can perform more than 50$\times$ as many hash-based signing operations as equivalent ECDSA operations for the same amount of power.
Finally, while our hash-based signature normally has a size of 64 bytes (as shown in Table \ref{measurements-table}), we can ``compress'' consecutive signatures along a hash chain to 32 bytes by only publishing the most recent $k_{i}$. The verifier can then regenerate the previous hash chain values at minimal computational cost. This makes it possible to store more authenticated data in the blockchain, as we show below.
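Since the chain is deterministic in the backward direction, a verifier holding the most recent $k_i$ can cheaply recover every earlier key in the run. A hypothetical helper illustrating this (the function name and interface are ours, assuming the same SHA256 chain as in the sketch above):
\begin{verbatim}
import hashlib


def expand(k_i: bytes, i: int, j: int) -> list:
    # Regenerate k_{i-1}, ..., k_j from the most recent disclosed key
    # k_i, using k_{t-1} = SHA256(k_t). A run of consecutive
    # signatures can thus be stored as the single 32-byte value k_i.
    keys, x = [], k_i
    for idx in range(i - 1, j - 1, -1):
        x = hashlib.sha256(x).digest()
        keys.append((idx, x))
    return keys
\end{verbatim}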
\begin{table}[t]
\caption{Evaluation for sensor-aggregator protocol (average values for 5000 verifications)}
\label{measurements-table}
\begin{tabular}{|p{4cm}|c|c|c|c|}
\hline
& & & & \\[-2.5ex]
Hash Chain length $n$ \xdef\tempwidth{\the\linewidth} & $2^{20}$ & $2^{22}$ & $2^{24}$ & $2^{26}$ \\ \hline
\multicolumn{1}{|m{\tempwidth}|}{Sensor lifetime for 1sig/10sec (m: months, y: years)} & 4 m & 16 m & 5 y & 21 y \\
\thickhline \thickhline
Pebble Gen time (seconds) & 1.62 & 6.49 & 24.57 & 95.33 \\ \hline
\multicolumn{1}{|m{\tempwidth}|}{Verification time per signature (msec)} & \multicolumn{4}{c|}{0.031} \\ \thickhline \thickhline
Signature size (bytes) & \multicolumn{4}{c|}{$64 + |m|$} \\ \hline
\multicolumn{1}{|m{\tempwidth}|}{Total dynamic memory usage (bytes)} & 1436 & 1520 & 1604 & 1678 \\ \hline
\multicolumn{1}{|m{\tempwidth}|}{Pebble struct memory usage (bytes)} & 840 & 924 & 1008 & 1082 \\ \hline
Program memory usage (bytes) & \multicolumn{4}{c|}{596} \\ \hline
\multicolumn{1}{|m{\tempwidth}|}{Signature computation time (msec)} & 49.82 & 49.88 & 49.95 & 50.00 \\ \hline
\multicolumn{1}{|m{\tempwidth}|}{Average total verification time per signature (msec)} & 64.15 & 64.25 & 64.26 & 64.32 \\ \hline
Communication cost (msec) & \multicolumn{4}{c|}{14.3} \\ \hline
\end{tabular}
\iffull
\else
\vspace{-0.15in}
\fi
\end{table}
\subsection{Consensus Performance}
\label{measurements-consensus}
Considering the use-case scenario discussed in Section \ref{measurements-scenario}, we discuss the performance of our \textsc{BBox-IoT}\xspace system as a whole. We show that the most important metric in the system is the transaction throughput, which heavily depends on the ability of the low-SWaP sensors to transmit data in a group setting. Of course, the overall scalability of the system is also directly proportional to the number of active participants it can support simultaneously.
\smallskip
\noindent \textit{Sensors.} Our measurements indicate that the aggregator - which is a relatively powerful device - is not the bottleneck in the protocol execution. Based on the measurements in Table \ref{measurements-table}, we can safely assume that a single aggregator can verify over a thousand sensors' data being continuously broadcasted, since the signature computation time by a sensor is three (3) orders of magnitude larger than the verification time by an aggregator. This is still a pessimistic estimation, since we previously assumed that a sensor broadcasts (and signs) data every 10 seconds, which implies that the aggregator can accommodate even more sensors.
\smallskip
\noindent \textit{Orderers.} Since orderers only participate in the consensus protocol to sign blocks, we only need a few orderers such that our system remains resilient to attacks at the consensus level
should a subset of orderers become compromised. Orderers can be strategically distributed over a geographical area to minimize the network latency between an aggregator and the ordering service, controlled by the main organization (which also controls the $\mathsf{MSP}$). Evaluations performed in previous works have shown that by having 3 orderers, 3000 transactions/second can be easily achieved using the consensus protocol used in the current version of Hyperledger Fabric (with a potential of further improvement in a future adoption of BFT-SMART), and even considering up to 10 orderers in the system does not greatly affect its performance~\cite{DBLP:journals/corr/abs-1801-10228,DBLP:conf/dsn/SousaBV18}.
\smallskip
\noindent \textit{Aggregators.}
The expected number of aggregators in the system depends on the use case. As discussed in Section \ref{measurements-scenario}, where gateways play the role of \textsc{BBox-IoT}\xspace aggregators, we consider a sensor/gateway ratio of 10:1 for our evaluation purposes.
To our knowledge, no evaluation of Hyperledger Fabric has considered such a large number of peers, as doing so would require a great amount of resources. However, by adopting the evaluation performed in~\cite{DBLP:journals/corr/abs-1801-10228}, which measured throughput for up to 100 peers (which, as discussed, are the aggregators in our system), we can extrapolate to the order of thousands: with the aid of a ``peer gossip" protocol, the system remains scalable as long as the peers are in the same approximate geographical area, which implies low average network latency.
\smallskip
\noindent \textit{Blockchain operations.} As discussed, aggregators' role is to aggregate sensor data into blockchain transactions. Assuming that aggregators perform no ``lossy" operations (such as averaging techniques), they would just package many collected sensor data along with the respective signatures into a transaction which in turn would be submitted to the ordering service. If we assume as in \cite{DBLP:journals/corr/abs-1801-10228} a block size of 2MB, we can estimate how much signed sensor data a block can hold.
Given the discussion in Section \ref{measurements-sign-verif}, a Hyperledger block could hold (at most) about 15800 signed sensor data using our hash-based scheme vs. 12700 using ECDSA.
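As a sanity check on these capacity figures (our arithmetic, assuming negligible per-entry framing overhead):
\begin{equation*}
\frac{2\times 10^{6}}{15800} \approx 127\ \text{bytes/entry},
\qquad
\frac{2\times 10^{6}}{12700} \approx 157\ \text{bytes/entry},
\end{equation*}
so the per-entry gap of about 30 bytes is essentially the difference between a compressed 32-byte hash-based signature and a 64-byte ECDSA signature.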
\smallskip
\noindent \textit{Latency.} We also wish to estimate the time from a value being proposed by an aggregator until consensus has been reached on it (assuming the block contains a single transaction). Again we can adopt previous evaluations in Hyperledger Fabric~\cite{DBLP:journals/corr/abs-1801-10228}, which show an average of 0.5 sec for the complete process.
Finally, considering that the previous evaluations mentioned above were all performed on the original Hyperledger Fabric (while our architecture requires a slight modification, as discussed in Section \ref{mod:hyperledger}), we assume that the expected performance of aggregators (which are essentially Hyperledger peers that also have client application functionalities) is not affected by this additional functionality, since the main factor that can potentially become a bottleneck for the scalability of the whole system is network latency rather than computational power.
\begin{table}[t]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|c|c|c|c|}
\cline{2-5}
& \multicolumn{2}{c|}{\textsc{BBox-IoT}\xspace} & \multicolumn{2}{c|}{ECDSA} \\ \hline
\multicolumn{1}{|l|}{Message length} & \multicolumn{1}{l|}{Sensor Sign} & \multicolumn{1}{l|}{Aggr Vrfy} & \multicolumn{1}{l|}{Sensor Sign} & \multicolumn{1}{l|}{Aggr Vrfy} \\ \hline
\multicolumn{1}{|l|}{50} & 50.43 & 0.0339 & \multirow{5}{*}{4200} & \multirow{5}{*}{42.55} \\ \cline{1-3}
\multicolumn{1}{|l|}{100} & 53.47 & 0.0349 & & \\ \cline{1-3}
\multicolumn{1}{|l|}{150} & 56.40 & 0.0357 & & \\ \cline{1-3}
\multicolumn{1}{|l|}{202} & 59.33 & 0.03687 & & \\ \cline{1-3}
\multicolumn{1}{|l|}{218} & 60.06 & 0.0369 & & \\ \hline
\multicolumn{1}{|l|}{Signature size} & \multicolumn{2}{c|}{32} & \multicolumn{2}{c|}{64} \\ \hline
\end{tabular}
}
\caption{Signing and verification costs (in milliseconds) compared with message and signature sizes (in bytes). Note that we assume hash-based signatures are compressed as discussed in Section \ref{measurements-sign-verif}. The signer is an ATmega328P microcontroller and the verifier an RPi 3.}
\label{sign-verify-costs}
\end{table}
\begin{figure}[t]
\centering
\resizebox{0.48\textwidth}{!}{
\input{evaluation/hash-vs-ecdsa-outage.pgf}
}
\caption{Aggregator verification costs in network outages. \textsc{BBox-IoT}\xspace is more expensive when more than about 2400 signature packets are lost.}
\label{hash-vs-ecdsa-outage}
\end{figure}
\definecolor{red2}{HTML}{D7191C}
\definecolor{orange2}{HTML}{FDAE61}
\definecolor{green2}{HTML}{ABDDA4}
\definecolor{blue2}{HTML}{2B83BA}
\begin{figure}[t]
\centering
\resizebox{0.48\textwidth}{!}{
\begin{tikzpicture}[x={(.002,0)}]
\foreach \l/\x/\c[count=\y] in {\textsc{BBox-IoT}\xspace/3592/blue2,
ECDSA /70/orange2}
{\node[left] at (0,\y) {\l};
\fill[\c] (0,\y-.4) rectangle (\x,\y+.4);
\node[right] at (\x, \y) {\x};}
\draw (0,0) -- (4000,0);
\foreach \x in {1000, 2000, ..., 4000}
{\draw (\x,.2) -- (\x,0) node[below] {\x};}
\draw (0,0) -- (0,2.6);
\end{tikzpicture}
}
\caption{Number of signing operations for a 20 mWh battery.
}
\label{energy-efficiency}
\iffull
\else
\vspace{-0.1in}
\fi
\end{figure}
\section{Background \& Preliminaries}
\label{backgrnd-prelims}
\subsection{Blockchain System Consensus}
\label{prel:consensus}
\iffull In distributed ledgers, including Blockchains, we can
categorize the participants into: a) Blockchain maintainers, also called
\emph{miners}, who are collectively responsible for continuously
appending valid data to the ledger, and b) clients, who are reading
the blockchain and posting proposals for new data. While clients are
only utilizing the blockchain in a read-only mode, the blockchain
maintainers who are responsible for ``book-keeping" must always act
according to a majority's agreement to prevent faulty (offline) or
Byzantine (malicious) behavior from affecting its normal
functionality. Reaching agreement requires the existence of a
\emph{consensus} protocol among these maintainers.
Moreover, the type of consensus protocol can be permissioned or
permissionless. In permissionless blockchains, like
Bitcoin~\cite{Nakamoto:bitcoin} and
Ethereum~\cite{buterin2014ethereum}, anyone can participate in the
consensus protocol and sybil attacks are prevented by new consensus
mechanisms such as Proof-of-Work or
Proof-of-Stake~\cite{C:KRDO17}. Although the open nature of
permissionless blockchains seems attractive, it does not really fit
the membership and access control requirements for IoT
deployments. Instead, operators would like to exert control over IoT
sensors and aggregators by authenticating all system participants.
Focusing on permissioned blockchains, consensus is computationally
easier to achieve even if nodes are of limited
capabilities. Distributed consensus mechanisms such as Practical
Byzantine Fault Tolerance (PBFT)~\cite{Castro:1999:PBF:296806.296824}
are relatively efficient, as long as the number of maintainers is
relatively small: Byzantine Fault Tolerant consensus is considered
scalable up to 20
nodes~\cite{DBLP:conf/ifip11-4/Vukolic15,DBLP:conf/middleware/ChondrosKR12}.
These distributed consensus mechanisms use inexpensive cryptographic
operations such as message authentication codes (MACs). Alternatively,
a trusted party $\trustedparty{}$ decides who should be given
participating access by issuing a private credential to each node.
In the following paragraphs we define consensus in the permissioned
setting and establish the properties needed for our system.
In Appendix \ref{apdx:consensus} we present an overview of consensus algorithms in the permissioned setting
inspired by
\cite{DBLP:journals/corr/abs-1711-03936,DBLP:journals/corr/CachinV17,RSA:GarKia20},
and discuss how each one would fit to \textsc{BBox-IoT}\xspace.
\textbf{Fundamental consensus properties:}
\label{prel:consensus:fundam-properties} Informally, the
consensus problem considers a set of state machines (also referred to
as \emph{replicas} or \emph{parties}) starting with the same
initial state. Then given a common sequence of messages, each correct
replica, by performing its private computation, should reach a state
consistent with other correct replicas in the system, despite any
possible network outages or other replica
failures~\cite{DBLP:journals/corr/abs-1711-03936}.
Since our system uses a blockchain, we focus on the notion of
\emph{ledger} consensus~\cite{RSA:GarKia20}, where a number of parties
receive a common sequence of messages and append their outputs on a
public ledger. As a special case, an \emph{Authenticated} ledger
consensus
protocol~\cite{Lamport:1982:BGP:357172.357176}\cite{EPRINT:LinLysRab04}
only permits participation through credentials issued by a
$\trustedparty{}$. The two fundamental properties of a ledger
consensus protocol are \cite{RSA:GarKia20}: (a) \emph{Consistency}: An
honest node's view of the ledger on some round $j$ is a prefix of an
honest node's view of the ledger on some round $j+\ell, \ell>0$. (b)
\emph{Liveness}: An honest party on input of a value $x$, after a
certain number of rounds will output a view of the ledger that
includes $x$. We provide formal definitions in Appendix
\ref{apdx:consensus}.
\else In distributed ledgers (or Blockchains), we can categorize the
participants as follows: a) Blockchain maintainers (called also
\emph{miners}), who are collectively responsible for continuously
appending valid data to the ledger, and b) clients, who are reading
the blockchain and posting proposals for new data. While clients are
only utilizing the blockchain in a read-only mode, the blockchain
maintainers who are responsible for ``book-keeping" must always act
according to a majority's agreement to prevent faulty (offline) or
Byzantine (malicious) behavior from affecting its normal
functionality. This assumes that a \emph{consensus}
protocol takes place behind the scenes among these maintainers; such
protocols are distinguished as permissioned or permissionless,
according to their participation controls.
Although the open nature of permissionless blockchains seems
attractive, it does not really fit the membership and access control
requirements for IoT deployments. In such settings, operators prefer to control
the participation of IoT sensors and aggregators by means of
authenticating them. Moreover, in permissioned settings consensus is computationally cheaper
and thus better suited to nodes with limited capabilities.
\textbf{Fundamental Consensus Properties:}
\label{prel:consensus:fundam-properties} Informally, the ledger consensus problem \cite{RSA:GarKia20} considers a number of parties receiving a common sequence of messages, appending their outputs on a public ledger. The two basic properties of a ledger consensus protocol are:
(a) \emph{Consistency}: An honest node's view of the ledger on some round $j$ is a prefix of an honest node's view of the ledger on some round $j+\ell, \ell>0$.
(b) \emph{Liveness}: An honest party on input of value $x$, after a certain number of rounds outputs a ledger view that includes $x$.
\fi
\input{iot-consensus}
\section{\textsc{BBox-IoT}\xspace System properties}
\label{properties}
In \textsc{BBox-IoT}\xspace there are five main types of participants, most of them inherited from Hyperledger Fabric: the $\mathsf{MSP}$, orderers, local administrators, aggregators and sensors. Aggregators are equivalent to \emph{peers} and sensors to \emph{clients} in our modified Hyperledger Fabric architecture discussed in the previous section. We provide a high-level description of each participant's role in the system and include detailed definitions in \iffull Appendix \ref{sec-model-defs} \else the full version of our paper~\cite{fullversion}\fi.
\begin{itemize}[leftmargin=*]
\item The $\mathsf{MSP}$ is a trusted entity who grants or revokes authorization for orderers, local administrators and aggregators to participate in the system, based on their credentials. It also initializes the blockchain and the system parameters and manages the system configuration and policy.
\item Orderers (denoted by $\orderer{}$) receive signed transactions from aggregators. After verifying the transactions as dictated by the system policy, they package them into blocks. An orderer who has formed a block invokes the consensus algorithm, which runs among the set of orderers $\mathcal{O}$. On successful completion, the block is transmitted back to the aggregators with the appropriate signatures.
\item Local administrators (denoted by $\localadmin{}$) are lower-level system managers with delegated authority from the $\mathsf{MSP}$. Each $\localadmin{}$ is responsible for creating and managing a local device group $\iotGroup{}$, which includes one or more aggregators and sensors. A local administrator grants authorization for aggregators to participate in the system with the permission of the $\mathsf{MSP}$, and is solely responsible for granting or revoking authorization for sensors in his group, using aggregators to store their credentials.
\item Aggregators (denoted by $\aggr{}{}$) are the blockchain maintainers. They receive blocks from orderers and each of them keeps a copy of the blockchain. They store the credentials of sensors belonging in their group and they pick up data broadcasted by sensors. Then they create blockchain ``transactions" based on their data (after possible aggregation), and periodically collect signatures for these transactions from other aggregators in the system, as dictated by the system policy. Finally, they send signed transactions to the ordering service, and listen for new blocks to be added to the blockchain from the orderers.
\item Sensors (denoted by $\sens{}{}$) are resource-constrained devices. They periodically broadcast signed data blindly without waiting for any acknowledgment. They interact with local administrators during their initialization, while their broadcasted data can potentially be received and authenticated by multiple aggregators.
\end{itemize}
We then define the security
and operational
properties of \textsc{BBox-IoT}\xspace, in accordance with evaluation principles adopted in \cite{coindesk2018,DBLP:journals/corr/DorriKJ16,10.1007/978-3-319-28472-9-9,Shafagh:2017:TBA:3140649.3140656}.
\subsection{Threat model \& Assumptions}
\label{threatmdl}
\noindent \textbf{Physical layer attacks and assumptions.} While our
system cannot prevent physical tampering with sensors that might
affect data correctness, any data discrepancies can be quickly
detected through comparisons with adjacent sensors given the
blockchain immutability
guarantees\iffull~\cite{EPRINT:WusGer17}\fi. Similarly, any malicious
or erroneous data manipulation by an aggregator will result in
detectable discrepancies even when one of the aggregators is not
compromised simultaneously. Of course, if all aggregators become
compromised instantaneously, which is hard in a practical setting, our
system will not detect any discrepancies. This raises the bar
significantly for an adversary who might not be aware or even gain
access to all aggregator nodes at the same time. Finally, attacks such as flooding/jamming and broadcast interception
are out of scope in this paper.
\noindent \textbf{Trust Assumptions.} We assume that $\mathsf{MSP}$ is honest
during system bootstrapping only, and that device group participants
(Local administrators, aggregators and sensors) may behave unreliably
and deviate from protocols. For instance, they might attempt to
statically or dynamically interfere with operations of honest system
participants (e.g. intercept/inject own messages in the respective
protocols), even colluding with each other to do so. Such behavior is
expected, and our system is designed to detect and thwart it.
\noindent \textbf{Consensus Assumptions.} As in Hyperledger, we
decouple the security properties of our system from the consensus
ones. For reference, this implies tolerance for up to 1/3 Byzantine
orderer nodes, with a consensus algorithm satisfying at least the
fundamental and additionally required properties discussed in Section
\ref{backgrnd-prelims}.
Given the above adversarial setting, we define the following security properties\iffull\footnote{We do not consider data confidentiality in our system, however as discussed later our model could be extended to satisfy confidentiality as well.}\fi:
\newlist{UR}{enumerate}{1}
\setlist[UR]{label=S-\arabic*,ref=S-\arabic*}
\begin{UR}
\item \label{partic-auth}
Only authenticated participants can participate in the system. Specifically:
\begin{enumerate}[label=\alph*.,ref=\ref{partic-auth}\alph*.]
\item \label{partic-auth-a} An orderer non-authenticated by the $\mathsf{MSP}$ is not able to construct blocks (i.e., successfully participate in the consensus protocol). The ordering service can tolerate up to $f$ malicious (byzantine) orderers.
\item \label{partic-auth-b} An $\localadmin{}$ non-authenticated by the $\mathsf{MSP}$ is not able to form a device group $\iotGroup{}$.
\item \label{partic-auth-c} If an aggregator is not authenticated by the $\mathsf{MSP}$, then its signatures on transactions cannot be accepted or signed by other aggregators.
\end{enumerate}
\item \label{sensor-health} \textit{Sensor health:}
Sensors are resilient to the following types of attacks:
\begin{enumerate}[label=\alph*.,ref=\ref{sensor-health}\alph*.]
\item Cloning attacks: A non-authenticated sensor cannot impersonate an existing sensor and perform operations that will be accepted by aggregators.
\item Message injection - MITM attack: A malicious adversary cannot inject or modify data broadcasted by sensors.
\end{enumerate}
\item \label{partic-malic} \textit{Device group safety:}
Authenticated participants in one group cannot tamper with other groups in any way, i.e.:
\begin{enumerate}[label=\alph*.]
\item An $\localadmin{}$ cannot manage another group, i.e. add or revoke participation of an aggregator or sensor in another device group, or interfere with the functionalities of existing aggregators or sensors at any time.
\item An aggregator (or a coalition of aggregators) cannot add or remove any sensor in device group outside of their scope, or interfere with the functionalities of existing aggregators or sensors at any time.
\item A sensor (or a coalition of sensors) cannot interfere with the functionalities of existing aggregators or other sensors at any time.
\end{enumerate}
\item \label{sec-nonrepud} \textit{Non-repudiation and data provenance:} No \textsc{BBox-IoT}\xspace node can deny having sent data it signed. For all data stored in \textsc{BBox-IoT}\xspace, the source must be identifiable.
\item \label{sec-dos-resil} \textit{DoS resilience:} \textsc{BBox-IoT}\xspace continues to function even if the $\mathsf{MSP}$ is offline and unavailable, or if an adversary prevents communication with up to a number of orderers (as dictated by the consensus algorithm), a number of aggregators (as dictated by the system policy), and up to all but one sensor. Also, an adversary is not able to deny service to any system node (except through the physical-layer attacks discussed before).
\item \label{sec-pol-conf} \textit{System policy and configuration security:} \textsc{BBox-IoT}\xspace policy and configuration can only be changed by $\mathsf{MSP}$.
\item \label{sec-revoc} \textit{Revocation:} The system is able to revoke authentication for any system participant, and a system participant can have its credentials revoked only by designated system participants.
\end{UR}
\section{Related work}
\label{relwork}
We now discuss a number of works that connect IoT to the blockchain setting or works which build cryptographic primitives to optimize different parts of computation for resource-constrained IoT devices. Note that none of these works addresses the problem of authentication for extremely constrained (Class 0) devices.
\subsection{IoT and Blockchain}
Shafagh et al.~\cite{Shafagh:2017:TBA:3140649.3140656} presented an architecture aiming to handle IoT data in a decentralized manner while achieving confidentiality, authenticity and integrity. The proposed system describes itself as ``IoT compatible": it is append-only by a single writer and can be accessed by many readers. It consists of a layered design on top of an existing public blockchain, storing access permissions and hash pointers for data on-chain while storing the actual data off-chain using decentralized P2P storage techniques. Other approaches~\cite{8704309,8306880,8621042} also used a similar ``layering" paradigm. While these approaches are simpler than ours, they ultimately rely heavily on the performance and properties of the underlying public blockchain and are not specifically tailored to handle resource-constrained IoT devices.
Dorri, Kanhere, and Jurdak~\cite{DBLP:journals/corr/DorriKJ16} considered a ``local" private blockchain maintained by a capable device, managed by the on-site owner and containing the local IoT device transactions. These lower-tier elements would be overlaid by a shared blockchain that can handle hashed data originating from the local blockchain and stored in a cloud storage service, and can enable access to local data. The above approach also offers confidentiality and integrity for submitted data and is suitable for resource-constrained IoT devices; however, it is more complex than \textsc{BBox-IoT}\xspace and requires managing and replicating data over several points in the system.
More recently, AlTawy and Gong~\cite{DBLP:journals/popets/AlTawyG19} presented a blockchain-based framework in the supply chain setting using RFIDs. This model considered blockchain smart contracts interacting with an overlay application on the RFID readers and a centralized server that handles membership credentials. This framework offers anonymity for the participating entities, which prove their membership in zero-knowledge, while their anonymity remains revocable by the server. It also provides confidentiality for its transactions and enforces a notion of ``forward secrecy" which enables future product owners in the supply chain to access its entire history. \textsc{BBox-IoT}\xspace differs from the above work in several ways, since it is tailored to handle resource-constrained devices. Our work does not have confidentiality or anonymity as a main goal, although it can be added as an option using symmetric keys. We also do not require any smart contract functionality from the blockchain, and we operate exclusively in the permissioned setting.
IoTLogBlock~\cite{DBLP:conf/lcn/ProfentzasAL19} shares a common goal with our work: enabling the participation of low-power devices in a distributed fashion; it similarly uses Hyperledger as a ``cloud service'' in an IoT setting. The crucial difference from our work is that IoTLogBlock is evaluated on a Class 2 device using ECDSA signatures, which are far more expensive than our proposed hash-based signatures, could not have been supported at all by a Class 0 device, and have much larger power consumption (Figure \ref{energy-efficiency}). Our proposed signature scheme is a key component for efficient implementations of blockchain-based systems in the IIoT setting.
Several more approaches have been presented which augment an IoT infrastructure with a blockchain, focusing on providing two-factor authentication~\cite{8390280}, managing or improving communication among IoT devices~\cite{8029217,8378971}, implementing a trust management system in vehicular networks~\cite{8358773}, providing edge computing services~\cite{8436042}, data resiliency~\cite{8170858}, providing secure and private energy trade in a smart-grid environment~\cite{7589035}, and implementing hierarchical blockchain storage for efficient industrial IoT infrastructures~\cite{DBLP:conf/blockchain2/WangSNH19}, all of which are orthogonal to our work. We point the reader to~\cite{DBLP:journals/comsur/AliVPDAR19,DBLP:journals/iotj/FerragDMDMJ19} for extensive reviews of the related literature.
\iffull
\subsection{Hash-based Signatures}
\label{prel:onetime-sigs}
Early works such as Lamport's One-Time Signatures (OTS)~\cite{Lamport79} allowed the use of a hash function to construct a signature scheme. Apart from being one-time, however, this scheme suffered from large key sizes. Utilizing tree-based structures such as Merkle trees~\cite{C:Merkle87} made it possible to sign many times while keeping a constant-sized public key as the Merkle root. Winternitz OTS and later WOTS+~\cite{EPRINT:BDEHR11}\cite{AFRICACRYPT:Hulsing13} introduced a way of trading space for computation in the underlying OTS, by signing messages in groups. XMSS~\cite{PQCRYPTO:BucDahHul11} further optimized the Merkle tree construction using Winternitz OTS as the underlying OTS. Other works such as HORS~\cite{ACISP:ReyRey02} enabled signing more than once, and more recently SPHINCS and SPHINCS+~\cite{EC:BHHLNP15,CCS:BHKNRS19} enabled signing without the need to track state.
Using HORS~\cite{ACISP:ReyRey02} as a primitive combined with a hash chain, Time Valid One-Time Signatures (TV-HORS)~\cite{DBLP:conf/infocom/WangKHN09} improve signing and verification efficiency, but assume ``loose'' time synchronization between the sender and the verifier.
All of the above scheme families, while only involving hash-based operations, still incur large computational and/or space costs, and cannot be implemented on the Class 0 resource-constrained devices we consider. Follow-up work exists on implementing SPHINCS on resource-constrained devices~\cite{PKC:HulRijSch16}, which we discuss later in this section and compare against in Appendix \ref{apdx:modsphincs}.
The TESLA Broadcast Message Authentication Protocol~\cite{848446,Perrig02thetesla} follows a ``one-way'' chain-based approach for constructing a hash-based message authentication scheme. Based on a ``seed'' value, it generates a one-way chain of $n$ keys, which elements are used to generate temporal MAC keys for specified time intervals. The protocol then discloses each chain element with some time delay $\Delta$, then authenticity can be determined based on the validity of the element in the chain as well as the disclosure time.
The ``pebbling'' algorithms~\cite{jakobsson2002fractal,RSA:YSEL09} enable logarithmic storage and computational costs, as discussed in Section \ref{our-primitive}.
TESLA's main drawback, however, is that it requires ``loose'' time synchronization between the sender and the receiver for distinguishing valid keys. In an IoT setting this would require the frequent execution of an interactive synchronization protocol, since IoT devices are prone to clock drifting \cite{DBLP:journals/sensors/Tirado-AndresRA19,DBLP:journals/tii/ElstsFDOPC18}. Moreover, we assume in Section \ref{properties} that IoT devices function in a broadcast-only mode, which would not allow the execution of such an interactive protocol in the first place. Furthermore, TESLA introduces a ``key disclosure delay'' which might be problematic in certain IoT applications, and gives up the non-repudiation property of digital signatures.
Several modifications and upgrades to the TESLA protocol have been proposed, with most of them maintaining its ``key disclosure delay'' approach which is also associated with the loose time synchronization requirement \cite{NDSS:LiuNin03,ACNS:HuJakPer05}. A notable paradigm is the ``hierarchical'' (or two-dimensional) one-way chain structure, where the elements of a ``primary'' hash chain serve as seeds for ``secondary'' chains in order to reduce communication costs. \cite{ACNS:HuJakPer05} includes several such proposals.
For instance, its Sandwich-chain uses two separate one-way chains. The first one-way chain is used as a ``primary'' chain, which generates intermediate ``secondary'' chains using the elements of the second one-way chain as salts. However to maintain efficiency, it still assumes some weak time synchronicity between the signer and the verifier by disclosing each element of the ``primary'' chain with some time delay (else the verifier in case of a network outage would have to recompute all the previous secondary chains as well which would defeat its efficiency gains). More importantly however, this construction has much larger storage requirements than ours.
In the same work, the Comb Skipchain construction is asymptotically more efficient in signing costs than our scheme and does not require time synchronicity, but has worse concrete storage requirements, which are prohibitive for low-end IoT devices, and still suffers from delayed verification. That work includes other interesting modifications, such as ``light'' chains, where the secondary chains are generated using a lower security parameter, and a standard one-dimensional TESLA variant that does not require a MAC.
\else
\subsection{Hash-based Signatures}
\label{prel:onetime-sigs}
Lamport's One-Time Signature (OTS) scheme~\cite{Lamport79} was the first to use a hash function to construct a signature scheme. Then, Winternitz OTS and WOTS+~\cite{EPRINT:BDEHR11}\cite{AFRICACRYPT:Hulsing13} enabled a time-memory tradeoff by signing messages in groups, used in turn by XMSS~\cite{PQCRYPTO:BucDahHul11} in a Merkle tree construction. Other works such as HORS~\cite{ACISP:ReyRey02} enabled signing more than once, and more recently SPHINCS and SPHINCS+~\cite{EC:BHHLNP15,CCS:BHKNRS19} enabled signing without the need to track state.
Using HORS~\cite{ACISP:ReyRey02} as a primitive combined with a hash chain, Time Valid One-Time Signatures (TV-HORS)~\cite{DBLP:conf/infocom/WangKHN09} improve signing and verification efficiency, but assume ``loose'' time synchronization between the sender and the verifier. All of the aforementioned schemes, while only involving hash-based operations, still incur large computational and/or space costs and cannot be implemented on the Class 0 resource-constrained devices we consider.
TESLA~\cite{848446,Perrig02thetesla} constructs a ``one-way'' hash chain to generate temporal MAC keys for specified time intervals, disclosing each chain element with some time delay $\Delta$.
While ``pebbling'' algorithms~ \cite{jakobsson2002fractal,RSA:YSEL09} enable logarithmic storage and computational costs as discussed in Section \ref{our-primitive}, it requires ``loose'' time synchronization between the sender and the receiver for distinguishing valid keys. In an IIoT setting this would require the frequent execution of an interactive synchronization protocol since such devices are prone to clock drifting \cite{DBLP:journals/sensors/Tirado-AndresRA19,DBLP:journals/tii/ElstsFDOPC18}.
Several modifications and upgrades to TESLA have been proposed, but most of them still require time synchronization \cite{NDSS:LiuNin03,ACNS:HuJakPer05}.
\fi
\subsection{Cryptographic Operations in IoT} In the context of improving cryptographic operations in the IoT setting, Ozmen and Yavuz~\cite{DBLP:conf/ccs/OzmenY17} focused on optimizing public key cryptography for resource-constrained devices. This work exploited Elliptic Curve scalar multiplication techniques optimized for such devices and presented practical evaluations of their scheme on a low-end device. Even though the device used in that work can be classified as Class 1 or Class 2, signing with our construction is more efficient in terms of both computation cost and storage by at least an order of magnitude.
\iffull As discussed above, \fi H{\"u}lsing, Rijneveld and Schwabe \cite{PKC:HulRijSch16} showed a practical evaluation of the SPHINCS hash-based signature scheme \cite{EC:BHHLNP15} on a Class 2 device. At first glance this implementation could also serve our purposes; however, our proposed construction, while stateful, is much cheaper in terms of runtime, storage and communication costs, without requiring additional assumptions. \iffull We directly compare with their scheme in Appendix \ref{apdx:modsphincs}.\fi
Kumar et al.~\cite{DBLP:journals/corr/abs-1905-13369} propose an integrated confidentiality and integrity solution for large-scale IoT systems, which relies on an identity-based encryption scheme that can distribute keys in a hierarchical manner. This solution also uses techniques similar to ours for signature optimization on resource-constrained devices; however, it requires synchronicity between the system participants. Portunes~\cite{DBLP:conf/smartgridcomm/LiDN14} is tailored for preserving privacy (which is not among our main goals in our setting) and requires multiple rounds of communication (while we consider a ``broadcast-only" setting).
\iffull
Wander et al.~\cite{Wander:2005:EAP:1048930.1049786} quantified the energy costs of RSA and Elliptic Curve operations as public key cryptography algorithms in resource-constrained devices. In a similar context, Potlapally et al.~\cite{1231830} performed a comprehensive analysis of several cryptographic algorithms for battery-powered embedded systems. However as discussed in Section \ref{prel:onetime-sigs}, we consider hash-based algorithms that are lighter and more efficient.\fi
Finally, we mention an extensive IoT authentication survey~\cite{DBLP:journals/sensors/El-hajjFCS19}. Within that survey's classification, our authentication scheme is comparable to~\cite{DBLP:conf/esweek/BamasagY15}, which utilizes hashing for one-way authentication in a distributed architecture; however, our scheme is more storage-efficient and suited for low-SWaP (Class 0) sensors.
\section{Introduction }
The two sides of the Penrose diagram of an eternal AdS black hole are connected by an Einstein-Rosen bridge (ERB). The ERB grows with time: classically it grows forever. On the other hand, the dual boundary theories very quickly come to thermal equilibrium. All evolution seems to stop at the scrambling time $t_{\ast}.$ The scrambling time is a short time, only logarithmically greater than the light-transit-time across the black hole. This leads to a puzzle: if the quantum state of the two sides stops evolving, how can the continuing growth of the ERB, over long periods of time, be described in the dual theory\footnote{Another way to put the question is: are there properties of the gauge theory wave function that can serve as clocks, and for how long can they continue to record the time? There are a number of more or less standard answers. The $\sim N^2$ part of the vertical entanglement (see section \ref{ERB length}) can be used as a clock, but it reaches its maximum after a multiple of the light-crossing time \cite{Hartman:2013qma}\cite{Liu:2013iza}. The decay of local correlation between the left and right CFTs may also be used \cite{Louko:2000tp}\cite{Kraus:2002iv}\cite{Fidkowski:2003nf}. These correlations exponentially decay for a time of order $S$, but then become noisy with an amplitude of order $e^{-S}.$ It is a special feature of quantum mechanics that there are properties of the wave function that are monotonic for much longer times.}?
The answer, of course, is that the quantum state does not stop evolving. Subtle quantum properties continue to equilibrate long after a system is scrambled. These properties can be summarized in a quantity called computational complexity, or just complexity. In the sense that we will use the term, complexity of classical systems cannot get very large. Consider a system of $K$ classical bits in the initial state $(00000....).$ Suppose our goal is to get to some other state. The number of simple operations (one or two c-bit operations) required to accomplish the task will never be larger than $K.$ $K$ is also the maximum entropy of the c-bit system.
In quantum mechanics the situation is different. Entropy is only the tip of a gigantic complexity iceberg. For $K$ qubits the maximum entropy is still $K.$ But for almost all states the number of 2-qubit gates needed to achieve the state is exponential in $K$ \cite{Knill}. Until recently this difference between classical and quantum complexity has not played a large role in formulating physical principles, but this may be changing (see also \cite{Harlow:2013tf}).
In \cite{Susskind:2014rva}, a conjecture was made that the complexity of a state is proportional to the length of the ERB. Here we will refine this conjecture slightly: we propose that the total complexity, measured in gates, is
\begin{equation} \label{conjecture1}
{\cal{C}}(t_L,t_R) = \frac{V(t_L,t_R)}{G_N l_{AdS}}
\end{equation}
where $V$ is the spatial volume of the ERB. This volume is defined using the maximum volume codimension one surface bounded by the CFT spatial slices at times $t_L,t_R$ on the two boundaries. Equation (\ref{conjecture1}) should be understood up to an order one factor of proportionality, since we do not know how to define gate complexity more precisely than this.
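As a quick consistency check on units (anticipating the relations collected in the conventions list below), the right-hand side is dimensionless, as a gate count must be:
\begin{equation*}
{\cal{C}} \sim \frac{V}{G_N\, l_{AdS}} \sim \frac{A\, d}{l_p^{D-2}\, l_{AdS}} \sim S\, \frac{d}{l_{AdS}},
\end{equation*}
using $V \sim A\, d$, $A \sim S\, l_p^{D-2}$, and $G_N \sim l_p^{D-2}$.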
A simple check on this proposal can be carried out using the time evolution of the thermofield double state $|TFD\rangle$ corresponding to the analytic eternal two-sided black hole \cite{Maldacena:2001kr}. In particular, in section \ref{ERBsection}, we will see that this formula has the right time dependence and scaling with temperature.
In section \ref{shocksection}, we will examine the conjecture in a less trivial setting. Building on \cite{Dray:1985yt}\cite{vanRaamsdonk}, references \cite{Shenker:2013pqa}\cite{Shenker:2013yza} constructed a wide class of shock wave geometries dual to perturbations of the thermofield double state. The complexity of such states is easy to estimate, so we are able to check (\ref{conjecture1}) for a large class of spherically symmetric states. A preliminary version of this check was carried out in \cite{Susskind:2014ira}.
The conjecture (\ref{conjecture1}) was partially inspired by the connection \cite{Hartman:2013qma} between time evolution, the length of the Einstein-Rosen bridges, and the tensor network description of quantum states. There should be a close connection between the circuit complexity defined in this paper and the minimal size of the tensor network description of a state. We will comment briefly about this after reviewing properties of complexity in section \ref{complexitysection}.
For the convenience of the reader we list some assumptions, conventions, and notations that occur throughout the paper.
\begin{itemize}
\item $D$ refers to the space-time dimension of the bulk theory.
\item We will often work in units such that the AdS radius $l_{AdS}$ is equal to unity.
\item Our discussion of precursors will be limited to the case of black holes with Schwarzschild radius of order the AdS scale $R \sim l_{AdS}$. For such black holes, the temperature of the black hole is of order $T\sim 1/l_{AdS}.$
\item Earlier papers \cite{Susskind:2014rva}\cite{Susskind:2014ira} focused on the length of the Einstein-Rosen bridge denoted by $d$. In this paper the focus is on the spatial volume of the ERB called $V$.
For long, symmetric wormholes, $V$ and $d$ are related by the cross-sectional area $A$ of the ERB. The area is of order the entropy of the black hole in Planck units. Thus, up to factors of order $1$, we have
$V = A\, d = S\, l_p^{D-2}\, d.$
\item Complexity can refer to an operator or to a state. In either case it is measured in gates. It is denoted by ${\cal{C}}.$
\item As usual, the two sides of the eternal black hole will be called left and right. This notation also refers to the two CFT's describing the boundaries.
\item The Killing time in a two-sided black hole is denoted $\tau.$ In the right side $\tau$ increases from past to future in the usual way. On the left side $\tau$ increases from future to past. In the Einstein-Rosen bridge $\tau$ is space-like and increases from left to right.
\item The boundary time $t$ increases from past to future on both sides.
\item The notation $r_m$ indicates a certain value of the (time-like) radial Schwarzschild coordinate inside the black hole where the function (to be defined) $r^{D-2} \sqrt{|f(r)|}$ has a maximum.
\item We will use $v_D$ to represent the value of this maximum: $v_D = \omega_{D-2}r_m^{D-2}\sqrt{|f(r_m)|}$ where $\omega_{D-2}$ is the volume of a unit $D-2$ sphere.
\item The symbol $t_{\ast}$ is used for the time $\frac{1}{2\pi T}\log S$. This is the scrambling time for black holes with $T\sim 1$.
\item We use $t_f$ to denote the folded time interval associated to a state, defined below.
\end{itemize}
\section{Einstein-Rosen bridges}\label{ERBsection}
\subsection{Black hole geometry}
The metric of a Schwarzschild AdS black hole has the form
\begin{equation}
ds^2 = -f(r) d\tau^2 +f(r)^{-1} dr^2 + r^2 d\Omega_{D-2}^2
\label{metric}
\end{equation}
where $f(r)$ is given by
\begin{equation}
f(r) =r^2+1 - \frac{\mu}{r^{D-3}}.
\label{fD}
\end{equation}
For $D>3$, the parameter $\mu$ is determined by the mass as $\mu = 16\pi G_N M/\big[(D-2)\,\omega_{D-2}\big]$, with $\omega_{D-2}$ the volume of a $(D-2)$-sphere. In the $D = 3$ case, corresponding to a BTZ black hole, the function is $f(r) = r^2 - 8G_N M$.
In equation (\ref{metric}) the time coordinate $\tau$ runs from past to the future on the right side of the Penrose diagram and from future to past on the left side. We will introduce a boundary time $t$ which strictly runs from
past to future on both sides. Thus
\begin{eqnarray}
t &=& \tau \ \ \ \ \ \ \ \ \rm right \ side \it \cr \cr
t&=&-\tau \ \ \ \ \ \ \ \ \rm left \ side. \it
\end{eqnarray}
Let's review the boundary-bulk duality for wave functions. In the dual CFT the system is described by a single state that depends on two times, $t_L, \ t_R. $ The subscripts $L, \ R$
represent left, right. The instantaneous state is written
$$|\Psi(t_L, t_R)\rangle.$$
There are two commuting Hamiltonians that generate independent time-translations,
\begin{eqnarray}
i\partial_{t_L} |\Psi\rangle &=& H_L |\Psi\rangle \cr \cr
i\partial_{t_R} |\Psi\rangle &=& H_R |\Psi\rangle
\end{eqnarray}
The TFD is an eigenvector of $H_R - H_L$ with eigenvalue zero, but it evolves nontrivially with $H_R + H_L.$
From the bulk viewpoint $|\Psi\rangle$ represents a Wheeler-DeWitt wave function covering a patch of the space-time geometry. The patch contains
all spacelike surfaces which terminate on the boundaries at times $t_L, \ t_R$ \cite{Maldacena:2013xja}. This is illustrated in figure \ref{A}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.9]{A.pdf}
\caption{The yellow region is the Wheeler-DeWitt patch for the times $t_L, t_R.$ The brown curve indicates a space-like
surface connecting the two boundaries. }
\label{A}
\end{center}
\end{figure}
We will be interested in the volume, $V(t_L, t_R),$ of the ERB that connects the two boundaries at the times $t_L$ and $t_R.$
\subsection{The size of an ERB }\label{ERB length}
The size of an ERB is not a self-evident idea. In \cite{Susskind:2014rva} and \cite{Susskind:2014ira} a naive definition was proposed. The construction involved connecting the boundaries by $(D-2)$-dimensional surfaces similar to the Ryu-Takayanagi surfaces \cite{Ryu:2006bv}\cite{Hubeny:2007xt} used by Hartman and Maldacena \cite{Hartman:2013qma} to study the evolution of vertical entanglement.\footnote{For the definition of vertical entanglement, see \cite{Susskind:2014rva}\cite{Susskind:2014ira}.} In the special case of BTZ this reduces to geodesics connecting the two boundaries.
In retrospect this was not a good idea for a number of reasons. One serious defect is that existence is not guaranteed. The extremal $(D-2)$ dimensional surfaces are saddle points, and it is possible to conceive of long asymmetric wormholes with no extremal surface connecting the two asymptotic regions. Concretely, because such surfaces are not topologically stable, they could slip off the transverse $(D-2)$-sphere when the ERB becomes long. Or they could run into the singularity. Also, the proposal fails to scale properly with temperature when the black hole is continued away from the Hawking-Page transition point.
There is, however, another simple candidate for the definition of the ERB size, which does not suffer from these problems. Consider the brown curve in figure \ref{A} connecting $t_L$ and $t_R.$ Taken literally, each point along the curve in the Penrose diagram represents a $(D-2)$-sphere. The curve itself is a $(D-1)$ dimensional spatial volume. Surfaces of this type fill the spatial volume of the ERB. Moreover, the extremal surfaces are maxima of the volume, not saddle points. For AdS black holes in Einstein gravity, we believe that the volume of such surfaces is always bounded from above (apart from a UV divergence near the boundary), so the existence of a maximum volume surface is guaranteed.\footnote{In Gauss-Bonnet, there is no global maximum, but for small Gauss-Bonnet parameter, there is still a local maximum.}
\subsubsection{Infinite time}
Consider the black hole geometry described in (\ref{metric}). The first step in constructing a maximal volume surface connecting $t_L$ and $t_R$ is to understand an especially simple limit in which the two boundary times are taken to infinity. This is illustrated in figure \ref{erb} where the blue curve represents the limiting configuration. The simplifying feature is the $\tau$-translation symmetry of the geometry.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.9]{erb.pdf}
\caption{Maximum volume surface for infinite $t_{L,R}.$ }
\label{erb}
\end{center}
\end{figure}
For the infinite-time ERB the volume extends over an infinite range of $\tau,$ and is therefore translationally invariant. Moreover the system is also rotationally invariant. It follows that the surface of maximum volume is located at a fixed value of $r.$ Its volume per unit $\tau$ is equal to
\begin{equation}
\frac{dV}{d\tau} = \omega_{D-2} r^{D-2}\sqrt{|f(r)|}.
\end{equation}
To find the maximal volume surface, we must maximize the RHS over $r$ between $r = 0$ and the horizon radius. From (\ref{fD}), it is easy to see that the function has a maximum at some $r_m$. We denote the maximum by $v_D$:
\begin{equation}
v_D = \omega_{D-2} r_m^{D-2}\sqrt{|f(r_m)|}.\label{F}
\end{equation}
For an AdS-scale black hole with $\mu \sim 1$, $r_m$ is also of order unity. For a high-temperature black hole with $\mu \gg 1$, we find $r_m^{D-2}\sqrt{|f(r_m)|} = \mu/2$ and thus (restoring $l_{AdS}$)
\begin{equation}\label{hight}
v_D = \frac{8\pi G_Nl_{AdS}}{D-2}M \hspace{20pt} (\text{high $T$}).
\end{equation}
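For completeness, the high-temperature value follows from a one-line maximization (dropping the subleading $+1$ in $f$ inside the horizon):
\begin{equation*}
g(r) \equiv \omega_{D-2}^{2}\, r^{2(D-2)}\, |f(r)| \approx \omega_{D-2}^{2}\left(\mu\, r^{D-1} - r^{2D-2}\right),
\qquad
g'(r_m)=0 \;\Rightarrow\; r_m^{D-1} = \frac{\mu}{2},
\end{equation*}
so that $v_D = \sqrt{g(r_m)} = \omega_{D-2}\,\mu/2$, which together with $\mu = 16\pi G_N M/\big[(D-2)\,\omega_{D-2}\big]$ reproduces (\ref{hight}).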
\subsubsection{Finite time}
In the appendix to this paper we present formulae for the volumes of symmetric ERBs connecting the boundaries at finite time, and we show how to compute them using a geodesic equation. Here we will give a simplified argument that captures the main features. The basic idea, articulated in a closely related setting by \cite{Hartman:2013qma}, is that if the ERB is long, the maximum volume surface tends to hug the surface defined by the infinite-time limit. In the case of the unperturbed TFD-state it stays close to the infinite-time limit until $\tau_L$ is approximately equal to $-t_L $ and $\tau_R \approx t_R.$ It follows that the regularized volume of the bridge for $|t_L + t_R| \gg \beta$ is given by
\begin{equation}
V(t_L, t_R) = v_{D} |t_L + t_R| \ \ \ \ \ \ \ \rm ( large \ \it t_L+t_R \rm). \it
\label{grow linear}
\end{equation}
Let us now perform a sanity check of the conjecture (\ref{conjecture1}). Using (\ref{hight}), and noting that $M \propto ST$, we find that the complexity of a high-temperature TFD state increases as
\begin{equation}\label{tfdcomplex}
{\cal{C}}(t_L,t_R) \propto S T |t_L + t_R|
\end{equation}
This is precisely the behavior one would expect based on a quantum circuit model of complexity \cite{Hayden:2007cs}\cite{Susskind:2014rva}: the rate of computation measured in gates per unit time is proportional to the product $ST.$ The entropy appears because it represents the width of the circuit and the temperature is an obvious choice for the local rate at which a particular qubit interacts.
We note that $V$ is a function of $(t_L + t_R)$ as a consequence of the symmetry generated by the difference of Hamiltonians $(H_R-H_L).$ Time reversal symmetry of the TFD geometry implies that it should be an even function. For early time, i.e., $t_L+t_R \ll \beta,$ the volume is quadratic in $(t_L +t_R)$. A useful formula to have in mind is the length of geodesics in BTZ. These are not codimension-one surfaces, but the qualitative behavior of the length is similar to the higher-dimensional volume, and the exact formula can be worked out (here, for horizon radius $ = l_{AdS}$):
\begin{equation}
d(t_L + t_R) = 2 \log \left[\cosh \frac{t_L +t_R}{2} \right].
\label{d-cosh}
\end{equation}
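As a quick check on the claimed behavior, expanding (\ref{d-cosh}) gives
\begin{equation*}
d(t_L+t_R) \approx \frac{(t_L+t_R)^2}{4} \quad (|t_L+t_R| \ll 1), \qquad
d(t_L+t_R) \approx |t_L+t_R| - 2\log 2 \quad (|t_L+t_R| \gg 1),
\end{equation*}
so the length is quadratic at early time and grows linearly at late time, mirroring the expected behavior of the volume.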
\section{Computational complexity}\label{complexitysection}
\subsection{Properties of complexity}\label{properties}
Although it is not essential, we will assume that the black hole can be modeled as a collection of $2K$ qubits. The Hilbert space of the two-sided system is ${\cal{H}}_L \otimes {\cal{H}}_R$ with each factor having dimension $2^K.$
We take $K=S.$
Any unitary operator $V$ in the $2^K$ dimensional Hilbert space of the left-side black hole has a computational complexity ${\cal{C}}_V,$ which was defined\footnote{Patrick Hayden has pointed out that a better definition of complexity may be the minimal depth of a quantum circuit, rather than the minimum number of gates, needed to generate $V.$ The minimum depth would be identified with the complexity per qubit. There are some cases where it is important to use circuit-depth rather than number of gates. An example is the complexity needed to scramble. There are circuits with only $K$ gates which can scramble. However, despite the small number of gates the depth of these circuits is $\log K.$ This subtlety is not important in this paper.
A smoother but closely related notion of complexity was defined by Nielsen and collaborators \cite{Dowling}, based on geodesic length in a Riemannian ``complexity geometry''. We thank Nathaniel Thomas for helpful explanations of complexity geometry.}
to be the number of 2-qubit gates in the smallest quantum circuit that approximately generates $V$ \cite{Susskind:2013aaa}\cite{Susskind:2014rva}\cite{Susskind:2014ira}. For example, the unit operator has zero complexity. A simple product of one-qubit operators, one acting on each qubit, has complexity of order $K.$
Operators of complexity ${\cal{C}} = K \log K$ can scramble an initial product state.
Quantum complexity can grow far beyond the scrambling complexity. However it is bounded by an exponential of $K.$
The maximum complexity satisfies
\begin{equation}
\log {\cal{C}}_{max} \sim K.
\label{cmax}
\end{equation}
Generating an operator with exponential complexity requires an exponential time. But in a technical sense, exponential complexity is not rare. It can be shown \cite{Knill} that almost all unitary operators have complexity satisfying (\ref{cmax}). Nevertheless, our discussion will be restricted to shorter time-scales during which the complexity is much smaller than maximal.
If $U_L(t)$ and $U_R(t)$ are the time evolution operators for the two sides, their complexity will increase with $|t|.$ Typically, apart from a short transient at early time, the complexity will grow linearly, the proportionality factor being $K$ or the entropy:
\begin{equation}
{\cal{C}}(t) = Kt.
\label{linear}
\end{equation}
We expect this behavior to continue until the complexity reaches ${\cal{C}}_{max}$.
Then it fluctuates around ${\cal{C}}_{max}$ for an extremely long, doubly exponential, quantum recurrence time.
So far, we have discussed the complexity of operators. We can also define the complexity of a quantum state. To do that we need to define a fiducial state of zero complexity. For the $2K$ qubit system, we can take it to be the state
$$|0\rangle \equiv |00000000...00\rangle$$
The complexity of a general state $|\psi\rangle$ is defined as the complexity of the least complex unitary operator which will give $|\psi\rangle$ when applied to $|0\rangle.$
The TFD state is close to being maximally entangled. To an approximation that we will discuss later, it can be identified with a product of Bell pairs in which each pair is shared between the left and right systems. This is illustrated in figure \ref{TFD}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.7]{TFD.pdf}
\caption{Qubit model for the TFD state. The TFD consists of a product of Bell pairs shared between the
left and right sides. }
\label{TFD}
\end{center}
\end{figure}
A single gate acting on a pair of qubits in the state $|00\rangle$ suffices to turn it into a Bell pair. To create $K$ Bell pairs therefore requires $K$ gates, so the complexity of the TFD is $K$.
After an early-time transient period, the
complexity of the state
\begin{equation}
|\Psi(t_L, t_R) \rangle = U(t_L) U(t_R)|TFD\rangle
\end{equation}
will grow like
\begin{equation}
{\cal{C}}(t_L,t_R) = K|t_L + t_R|
\label{c-form}
\end{equation}
until it becomes maximal. Identifying the temperature as the conversion between time in the CFT and time in the analog qubit system, we find that this agrees with the complexity derived from the volume of the ERB in (\ref{tfdcomplex}). The numerical coefficient of proportionality is ambiguous (and possibly dimension-dependent), because complexity itself is ambiguous up to a numerical factor. However, the relative normalization of complexity and ERB volume can be fixed once and for all by comparing (\ref{c-form}) and (\ref{grow linear}). We will use this below.
\subsection{Complexity of a precursor}\label{precursorSec}
In this section, and in most of the rest of the paper, we will restrict our attention to black holes with temperature of order the AdS scale.
A precursor is a unitary operator of the form,
\begin{equation}
W(t) = U^{\dag}(t)WU(t).
\label{precursor}
\end{equation}
Although it has the form of a Heisenberg operator, we think of it as an operator in the Schr\"odinger picture. As $t$ grows, either positively or negatively, the complexity of $W(t)$ grows. This requires some explanation. If $W$ is the unit operator then the complexity does not grow since the $U$ and $U^{\dag} $
cancel. In \cite{Susskind:2014rva} it was explained that the chaotic nature of the dynamics destroys the cancelation when an operator $W$ is inserted even if $W$ itself is very simple; say a single qubit. The reason of course is the butterfly effect. This was illustrated in figure 1 in \cite{Susskind:2014rva} which we reproduce as figure \ref{butterfly}. The insertion of $W$ disrupts the time-reversed evolution described by $U^{\dag}$ and quickly causes the trajectories to diverge. For this reason it was argued that for $t\gg t_{\ast}$ the complexity of $W(t)$ is just twice the complexity of $U(t).$ A more refined guess would take into account the partial cancelation that should take place until the butterfly effect kicks in. As we will see the time-scale for the delay is the scrambling time. Therefore a refinement of the estimate would be that the
complexity of $W(t)$ for $t > t_*$ is given (to accuracy of order $K$) by
\begin{equation}
{\cal{C}}_W(t) = 2K(t - t_*).
\label{cancel}
\end{equation}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{butterfly.pdf}
\caption{In the left panel the operation $U^\dag(t)\, I\, U(t)$ is illustrated. The letters $i$ and $f$ represent initial and final states. The backtracking trajectories illustrate the cancelation of $U$ and $U^{\dag}.$ In the right panel the unit insertion is replaced by the insertion of $W.$ The backtracking of trajectories takes place for a limited time until the butterfly effect kicks in at the scrambling time. }
\label{butterfly}
\end{center}
\end{figure}
We can illustrate the behavior in (\ref{cancel}) in the Hayden--Preskill circuit model \cite{Hayden:2007cs}. The reader is referred to \cite{Susskind:2014ira} for the details of the model. Let's begin with a very simple state of $K$ qubits, namely
\begin{equation}
|\psi(0)\rangle = |00000....\rangle.
\end{equation}
We focus on a particular qubit labeled $W_a.$ After $n< \log_2 K$ parallel time steps the qubit $W_a$ will have interacted, either directly or indirectly, with $2^n$ qubits. Let us call that subset $A.$ The remaining $K-2^n$ qubits have had no contact with $W_a.$ The evolution operator is a product of gates. It factors into an operator for the subset $A$ and another factor for the complement of $A,$ which we call $B.$
\begin{equation}
|\psi(n)\rangle = U_B(n) U_A(n)|00000....\rangle.
\end{equation}
The operators $U_A$ and $U_B$ are built out of non-overlapping sets of qubits and commute with each other.
Next act with the qubit operator $W_a.$ Then run the system back with the operator $U_A^{\dag}U_B^{\dag}.$ The $B$ operators cancel and the result is
\begin{equation}
|\psi(2n)\rangle = U_A^{\dag}(n)W_a U_A(n)|00000....\rangle.
\end{equation}
This resulting state is a tensor product of $A$ and $B$ states. The $A$ factor is scrambled, but the $B$ factor has all qubits
in the state $0.$ For example, when $n = \log_2 K-3$ at least seven-eighths of the qubits are unaffected by the evolution. Obviously not much complexity has been generated during this time. However as soon as $n=\log_2 K$ the entire system becomes fairly scrambled. This happens rather suddenly. Once that point has passed, the complexity begins to grow linearly with time. Thus we see that the growth of complexity is delayed by $2\log_2 K,$ i.e., twice the scrambling time. This is the circuit analog of (\ref{cancel}).
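The delayed spread can be seen in a toy simulation. The following sketch is ours, not part of the Hayden--Preskill construction: the pairing rule is random rather than the precise model of \cite{Hayden:2007cs}. It tracks how many qubits lie in the ``light cone'' of a single initial insertion; the count roughly doubles each step and saturates after about $\log_2 K$ steps.
\begin{verbatim}
import random

def lightcone_sizes(K, steps, seed=0):
    # Toy parallel circuit on K qubits: each step pairs the qubits at
    # random and applies a 2-qubit gate to every pair.  We track which
    # qubits have interacted, directly or indirectly, with qubit 0.
    rng = random.Random(seed)
    touched = {0}
    sizes = []
    for _ in range(steps):
        qubits = list(range(K))
        rng.shuffle(qubits)
        for i in range(0, K - 1, 2):
            a, b = qubits[i], qubits[i + 1]
            if a in touched or b in touched:
                touched.update((a, b))
        sizes.append(len(touched))
    return sizes
\end{verbatim}
Until the size saturates, essentially no complexity is generated outside the light cone, which is the delay discussed above.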
\subsection{A note on tensor networks}
Hartman and Maldacena \cite{Hartman:2013qma} have proposed a tensor network (TN) picture to illustrate the evolution of ERBs.\footnote{The Hartman--Maldacena tensor networks cannot resolve distances on scales smaller than $l_{AdS}.$ The tensors do not act on the space of a single qubit but rather on the entire Hilbert space of an $N \times N$ matrix theory. The TN picture makes the most sense for black holes of radius $R\gg l_{AdS}.$}
Figure \ref{TN}
shows the evolution of the Hartman--Maldacena TN for the ERB in the case of a 1+1 dimensional boundary theory (D = 3). As time increases, more layers along the $\tau$ direction are added to the TN. Near the left and right boundaries the network shows the familiar scale-invariant pattern associated with the geometry of AdS \cite{Swingle:2009bg}. At the horizon the pattern changes to reflect the $\tau$-translation invariance of a long ERB. The wave function of the boundary state, obtained by contracting all the internal indices in the TN, evolves because the network keeps growing. In many ways the evolution of the network resembles the evolution of a quantum circuit, the width of the circuit being the number of layers in the $\theta$ direction and the depth being the number of layers in the $\tau$ direction.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.33]{TN.pdf}
\caption{Evolution of the ERB tensor network. The red curves depict the RT surface for computing vertical entanglement. The tensor network fills the volume of the ERB.}
\label{TN}
\end{center}
\end{figure}
For a given boundary state the associated TN is not unique (this is also true of quantum circuits).
It is tempting to identify the complexity of the boundary state with the number of nodes of the smallest tensor network that can generate the state.
There is an upper limit on the complexity of a state of a system of $K$ qubits, exponential in $K.$ What happens when the depth of the tensor network exceeds some exponential length? The bounded nature of complexity implies that the state generated by the TN can be constructed by a smaller TN. In some sense there is an upper bound on the size of the TN. This implies a breakdown in the classical geometric description of an ERB when the time becomes exponential in the entropy $S.$
\section{Complexity and shock wave geometries}\label{shocksection}
In what follows we will test the conjecture ${\cal{C}} \propto V$ by considering evolutions more general than those generated by $H_L$ and $H_R.$ In particular we consider perturbed geometries generated by applying thermal-scale operators $W_L(t_L)$ as discussed in \cite{Shenker:2013pqa}\cite{Shenker:2013yza}. In the qubit model the $W$-operators may be thought of as one-qubit traceless Pauli operators with complexity of order unity.
Let $t_1, t_2, t_3,..., t_n$ be a series of left-side times, typically not in time-order. We consider the state
\begin{equation}
|\Psi(t_L, t_R) \rangle = U_R(t_R) U_L(t_L) W_L(t_n)W_L(t_{n-1})\cdots W_L(t_1)|TFD\rangle.
\end{equation}
Using the fact that $H_L - H_R$ annihilates $|TFD\rangle$, we can re-write this using $L$ operators only, as
\begin{equation}\label{perturbedstate}
|\Psi(t_L, t_R) \rangle = U_L(t_L) W_L(t_n)W_L(t_{n-1})\cdots W_L(t_1)U_L^\dag(-t_R)|TFD\rangle.
\end{equation}
Since the operators are generally out of time order, the evolution can be represented by a time-fold \cite{Heemskerk:2012mn}\cite{Susskind:2013lpa} as shown in figure \ref{B}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.9]{B.pdf}
\caption{Time-fold with six insertions. The insertions at $t_1, t_2, t_3, t_4$ and $t_6$ occur at switchback points.
The insertion at $t_5$ does not. }
\label{B}
\end{center}
\end{figure}
Note that there are two kinds of insertions illustrated in figure \ref{B}. Some insertions---the ones at $t_1, t_2, t_3, t_4$ and $t_6$---occur at fold points or ``switchbacks''. Others, like the one at $t_5,$ are ``through-going''.
This time fold diagram represents a recipe for making the state $|\Psi(t_L,t_R)\rangle$: beginning with the TFD, evolve forwards and backwards with the Hamiltonian, inserting local operators at the locations of the red dots. This recipe gives us an upper bound on the complexity of the state $|\Psi(t_L,t_R)\rangle$, namely the total folded time interval $t_f$:
\begin{equation}
\frac{{\cal{C}}(\Psi)}{K}\le t_f \equiv |t_1+t_R| + |t_2-t_1| + |t_3-t_2|+\cdots+ |t_L - t_n|.
\end{equation}
How tight do we expect this bound to be? If the time $t_f$ is less than exponential in the entropy, we expect the bound to be tight, except for the partial cancelations described in section \ref{precursorSec}. These cancelations occur at each switchback. We therefore have
\begin{equation}\label{conjform}
\frac{{\cal{C}}(\Psi)}{K} = t_f - 2 n_{sb}t_*
\end{equation}
where $n_{sb}$ is the number of switchbacks, and we have assumed that $|t_i - t_{i-1}| > t_*$.
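For bookkeeping, the folded time interval and the estimate (\ref{conjform}) are easy to compute from a list of insertion times. The following sketch is our illustration only (the function name and the direction-reversal test for switchbacks are ours); it returns $t_f$ and the estimated complexity, ignoring the $O(K)$ effect of through-going insertions discussed below.
\begin{verbatim}
def folded_time_and_complexity(ts, tL, tR, t_star, K):
    # Endpoints of the folded time axis: the recipe starts at -tR,
    # visits the insertion times t_1, ..., t_n in order, and ends at tL.
    path = [-tR] + list(ts) + [tL]
    t_f = sum(abs(path[i + 1] - path[i]) for i in range(len(path) - 1))
    # An insertion is a switchback iff the fold reverses direction there.
    n_sb = sum(1 for i in range(1, len(path) - 1)
               if (path[i] - path[i - 1]) * (path[i + 1] - path[i]) < 0)
    return t_f, K * (t_f - 2 * n_sb * t_star)
\end{verbatim}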
In the next section, we will use this formula to check the conjecture of \cite{Susskind:2013aaa}\cite{Susskind:2014rva}\cite{Susskind:2014ira}. This conjecture, with our refinement, states that the complexity is proportional to the volume of the ERB. References \cite{Shenker:2013pqa}\cite{Shenker:2013yza} showed how to construct geometries dual to perturbed TFD states (\ref{perturbedstate}), so checking ${\cal{C}} \propto V$ amounts to a concrete maximal surface problem in these geometries. Before we begin this calculation, we will emphasize three points:
\begin{itemize}
\item We can normalize the relationship between complexity and volume using the pure TFD state as discussed in section ~\ref{properties}. This allows us to check the agreement for states of the form (\ref{perturbedstate}) including the coefficient of proportionality.
\item We are restricting our attention to black holes with temperature of order the AdS scale. For such black holes, thermal scale perturbations are approximately as large as the entire system. This partially justifies our use of spherically symmetric shock wave geometries. It is simple to generalize (\ref{conjform}) for localized perturbations of a large spatially extended system. However, the corresponding maximal surface problem becomes significantly harder, and will be left to future work \cite{tobecontinued}.
\item We will verify the relationship ${\cal{C}} \propto V$ for large $|t_i - t_{i-1}|$, ignoring corrections that are $O(1)$ in the time differences, i.e. $O(K)$ in terms of the total complexity. We will, however, retain terms involving $t_*$, which are $\sim K \log K$.
\end{itemize}
\setcounter{equation}{0}
\subsection{Finding the maximal surface}
It is convenient to write the metric (\ref{metric}) in Kruskal coordinates,
\begin{align}\label{eternal}
ds^2&=-\frac{4 f(r)}{f'(R)^2}e^{-f'(R)r_*(r)}dudv + r^2 d\Omega_{D-2}^2\\
uv &= -e^{f'(R) r_*(r)}, \hspace{20pt} u/v = -e^{-f'(R)t},\label{defofkruskal}
\end{align}
where $R$ is the horizon radius and $dr_* = f^{-1}dr$ is a tortoise coordinate. Adding a thermal-scale perturbation far in the past on the left leads to a null shock wave along the $u = 0$ horizon \cite{Shenker:2013pqa}. Adding a perturbation far in the future leads to a shock along $v = 0$. A sequence of perturbations as in (\ref{perturbedstate}) leads to a geometry with a long wormhole crossed by intersecting null shocks. These geometries were worked out in \cite{Shenker:2013yza}. A folded time axis with $n$ folds leads to a geometry with $n$ alternating shocks. A sample Kruskal diagram is shown in figure \ref{folded}.\footnote{Notice that we are sending in all shocks along either $u = 0$ or $v = 0$. In reality, they will be at finite $u,v$, but if the relative boost between adjacent shocks is large, we can take them to be along the horizon.}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale = .7]{6.pdf}
\caption{Kruskal diagram for an ERB dual to the TFD state perturbed by out-of-time-order operators at the left boundary. The blue curve represents the maximal-volume surface crossing the ERB. Each point of intersection with a shock has two sets of $(u,v)$ coordinates: one in the patch the left, and one in the patch to the right. These are related by null shifts determined by the strength of each shock.}\label{folded}
\end{center}
\end{figure}
This geometry is obtained by pasting together portions of the eternal AdS black hole metric across the horizons $u = 0$ or $v = 0$ with null shifts in the $v$ or $u$ directions of magnitude
\begin{equation}
\alpha_i=2\exp{\left[ -\frac{2\pi}{\beta}(t_*\pm t_i)\right]}.
\end{equation}
Here, the sign depends on whether the shock is left-moving or right-moving. We will take this as the precise definition of $t_*$, but we note that it leads to $t_* \approx \frac{\beta}{2\pi}\log N^2$.
A maximal volume surface connecting $t_L$ and $t_R$ is also drawn in figure \ref{folded}. It is formed from $n+1$ pieces of maximal surface in the unperturbed geometry, connected at the $n$ locations where the surface crosses a shock. To each intersection point, we assign two different Kruskal coordinates. One is the location in the Kruskal system to the right of the shock, and the other is the location in the Kruskal system to the left. These are related by null shifts of magnitude $\alpha_i$.
Let us begin by considering a folded time axis with only switchback insertions, i.e. no through-going insertions. To keep the notation simple, we will also focus on a specific case with an odd number $n$ of total insertions, with $t_1 < -t_R$ and $t_n < t_L$. This is the case drawn in figure \ref{folded}. For odd $i$ we have the ``+'' sign in the definition of $\alpha$, and for even $i$ we have the ``-'' sign. The combined volume of the $n+1$ segments is
\begin{align}
V = V(t_R,v_1) + V(v_1+\alpha_1,u_2) + \cdots + V(u_{n-1} - \alpha_{n-1},v_n) + V(t_L,v_n+\alpha_n).
\end{align}
Using the formulas of appendix \ref{appendix}, and assuming all volumes are large (i.e. $|t_{i+1}-t_i|>t_*$), we find
\begin{align}
\frac{2\pi V}{\beta v_D} = &\log(- v_1e^{-2\pi t_R/\beta}) + \log\big[u_2(v_1+\alpha_1)\big]+\cdots \\
\end{align}
Here and below, $O(1)$ represents a contribution that is $O(1)$ in terms of the various time variables. To ensure that the piecewise-maximal surface is actually maximal, we extremize this formula over the intersection points. This leads to $v_i = -\alpha_i/2$ for odd $i$ and $u_i = \alpha_i/2$ for even $i$. Plugging in the definition of $\alpha_i$, we find
\begin{equation}\label{int}
V = v_D(-t_R - 2t_1 + 2t_2 - \cdots - 2t_n + t_L - 2nt_*) + O(1).
\end{equation}
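To see where the extremum comes from, note that $v_1$ enters the volume only through $\log(-v_1) + \log(v_1+\alpha_1)$; setting the $v_1$-derivative $1/v_1 + 1/(v_1+\alpha_1)$ to zero gives $v_1 = -\alpha_1/2$, and the other intersection points work out the same way.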
For the configuration of times specified above, (\ref{int}) is simply
\begin{equation} \label{volumeresult}
V = v_D (t_f - 2 n t_*) + O(1)
\end{equation}
in precise agreement with the conjecture (\ref{conjecture1}) and the formula (\ref{conjform}). Different configurations of the times $\{t_i\}$ lead to different $\pm$ assignments for $\alpha_i$. The formula (\ref{int}) changes, but (\ref{volumeresult}) remains valid.
What is the effect of through-going insertions? We have seen that the shocks associated to switchback insertions already lead to an ERB volume that agrees with the complexity. In order for ${\cal{C}} \propto V$ to be correct, through-going insertions must not significantly change the volume of the ERB.
In fact, this is the case. The shock sourced by a through-going insertion will run parallel to the shock associated with one of the adjacent switchback points; its only effect will be to slightly increase the strength of this adjacent shock. Let us illustrate this in the simplest case, with two shocks at times $t_1,t_2$. Suppose that we want to compute the volume of the bridge at $t_L = t_R = 0$, and the times satisfy $t_1 < t_2 < 0$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale = .65]{8.pdf}
\caption{Folded time axis and geometry for two shocks with $t_1< t_2 < t_L$. The $t_2$ insertion is through-going, and the associated shock runs right beside the stronger $t_1$ shock.}\label{twotwo}
\end{center}
\end{figure}
In the region where the maximal volume surface crosses the shocks, and in the boost frame appropriate to that surface, the shocks are very close together and running parallel. Parallel shocks superpose, so we can simply add the metrics together, obtaining a single effective shock with null shift $\alpha_1 + \alpha_2$. The volume is therefore proportional to $\log (\alpha_1 + \alpha_2) = \log \alpha_1 + O(1)$. The effect of the through-going insertion is therefore to increase the complexity by a small amount, at most of order $K$.
\subsection{A maximal entanglement reflection principle}\label{reflection}
There are features of the behavior of $V(t_L,t_R)$ which surprised us when we first discovered them. They also happen to be features of the geodesic distance $d(t_L, t_R)$ for BTZ black holes. In the case of geodesic distance we can write down a simple analytic formula for $d(t_L, t_R)$ in shock wave geometries. We will illustrate the points for the case of a single shock wave created at (negative) time $t_1$:
\begin{equation}
d(t_L, t_R) = 1 + 2 \log \left[ \cosh \frac{t_L+ t_R}{2} + qe^{(2|t_1| +t_L -t_R)/2} \right].
\label{d(tL,tR)}
\end{equation}
The first point is seen by setting $t_L=t_R.$ We note that the result is an even function of $t_L+t_R,$ i.e.,
\begin{equation}
d(t_L+t_R) = d(-t_L -t_R).
\end{equation}
The second surprising feature can be seen by fixing $t_L,$ say at $t_L=0,$ and noting that $d(0,t_R)$ decreases with $t_R$ for a fairly long period of time. The same two features are also found in the volume function $V(t_L,t_R).$
This first feature is surprising because the insertion of the shock wave at $t_1$ explicitly breaks the time-reversal symmetry. It is not obvious why the complexity should be an even function of $(t_L + t_R).$ The second feature is even more surprising when we interpret $V$ as complexity; why should the complexity decrease as a function of $t_R?$
Neither of these features are accidental. They are related to properties of the TFD state. The maximally entangled model for the TFD is a product of $K$ Bell pairs. Such a state has the property that acting with any unitary operator on the left side is equivalent to acting with a reflected operator on the right side. Thus, if the TFD were maximally entangled, acting with $W_L$ on the left at $t=0$, would be equivalent to acting with the corresponding $W_R$ on the right side at $t=0:$
\begin{equation}
W_L(t=0) |TFD\rangle = W_R(t=0) |TFD\rangle.
\end{equation}
If we use the symmetry of $|TFD\rangle$ under transformations generated by $H_R - H_L$ we can generalize this to
\begin{equation}
W_L(t_1) |TFD\rangle = W_R(-t_1) |TFD\rangle.
\label{reflect}
\end{equation}
This ``reflection principle'' is illustrated in figure \ref{D}.
The TFD state is not exactly maximally entangled, but for shock wave geometries generated by very low-mass perturbations with very large time separations between them, it seems that maximal entanglement is a good approximation. From the bulk geometry, this is clear: in figure \ref{D}, if we focus on the geometry near the $t = 0$ slice, very early shocks sent in from the left are almost indistinguishable from very late shocks sent in from the right.
Note that the formula (\ref{reflect}) can be used to move operators from $L$ to $R$, or vice versa, in multi-shock states. This means that all multi-shock states can be represented, in the approximation described above, in terms of perturbations purely on the left. This is why we have focused on such perturbations throughout the paper.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.7]{D.pdf}
\caption{For a maximally entangled state there is an equivalence between acting with unitary operators
on the left and right. The two shock wave geometries shown in the figure would be equivalent.}
\label{D}
\end{center}
\end{figure}
Let us consider the two surprising features described above, in light of the reflection principle. First, take the state
\begin{equation}
U_R(t)U_L(t) W_L(t_1) |TFD\rangle
\label{state t}
\end{equation}
corresponding to $t_L = t_R = t.$ Using (\ref{reflect}), we write
\begin{equation}
U_R(t)U_L(t) W_L(t_1) |TFD\rangle = U_R(t)U_L(t) W_R(-t_1) |TFD\rangle.
\label{symmetry}
\end{equation}
By time reversal and left-right interchange this is equal to
\begin{equation}
U_R(-t)U_L(-t) W_L(t_1) |TFD\rangle.
\label{state -t}
\end{equation}
Thus, comparing (\ref{state t}) and (\ref{state -t}), it would follow that $V$ is a symmetric function of $t$ even though the time reversal symmetry is broken by the insertion of $W(t_1).$
Now let us consider the second feature: the decrease of complexity with increasing $t_R$ for a certain period of time. According to (\ref{reflect}) the state $U_R(t_R) W_L(t_1)|TFD\rangle$ satisfies
\begin{equation}
U_R(t_R) W_L(t_1)|TFD\rangle = U_R(t_R) W_R(-t_1)|TFD\rangle.
\end{equation}
A left-right flip and a time reversal relate this state to
\begin{equation}
U_R(t_R) W_R(-t_1)|TFD\rangle \to U_L(-t_R) W_L(t_1)|TFD\rangle.
\end{equation}
The decrease of complexity with $t_R$ in the state $U(t_R) W_L(t_1)|TFD\rangle$ is thus mapped to an increase of complexity with $t_L$ in the flipped state. This increase with $t_L,$ following the action of $W_L(t_1),$ is the expected behavior.\footnote{Note that the decrease with $t_R$ only lasts until $t_R = |t_1|-2t_*.$ One can see an example of this in (\ref{d(tL,tR)}). At $t_R = |t_1|-2t_*$ there is a crossover between the two terms and $d$ begins to increase with $t_R.$ This is also the expected behavior from the reflection principle.}
\setcounter{equation}{0}
\section{Conclusion}
Quantum computational complexity---thought of as a property of the state of a system---is an extremely subtle quantity; given a state, there are very few tools to compute its complexity. All the ordinary quantities that we are familiar with stop evolving by the scrambling time when the system reaches local equilibrium. Nevertheless the computational complexity of a state is well defined and continues to increase long after ordinary equilibrium is reached. It only saturates at the classical recurrence time $\sim e^S$.
For ordinary purposes computational complexity is far too subtle to be relevant for any real experiment on a chaotic system. However it appears to play a fundamental role in encoding properties of the interiors of black holes. More generally it may be important for describing phenomena behind any event horizon, including cosmic horizons.
The assumption that ERB volume, $V(t_L, t_R),$ is determined by the complexity of the dual CFT state, together with some assumptions about the growth of complexity with time, leads to a detailed conjecture for how $V(t_L, t_R)$
behaves in spherically symmetric shock wave geometries. This conjecture was checked for all such geometries in all dimensions.
One might wonder whether the equivalence between folded time and ERB volume is simply a geometric fact having nothing to do with chaos and complexity. The smoking gun implicating these properties is the partial cancelation occurring at switchback points. Quantum circuits allow us to see that the complexity of a precursor $W(t)$ is overestimated by the sum of the complexities of the evolution operators in (\ref{precursor}). The Hayden--Preskill circuit model \cite{Hayden:2007cs} gives a precise value for the overestimate. The value agrees with our guess in (\ref{cancel}), and more importantly, it agrees with the calculation of ERB volumes in shock wave geometries.
The occurrence of the scrambling time in the formula is a clear indication that the effect is connected with chaos and complexity. The fact that the same cancelation occurs---in just the right way---for the length of ERBs is quite remarkable.
\section*{Acknowledgements}
We are grateful to Patrick Hayden and Steve Shenker for discussions. We thank Dan Roberts for drawing our attention to a difficulty in reconciling the area of codimension two surfaces and the complexity of localized perturbations of the TFD state \cite{tobecontinued}. This was part of our motivation for considering codimension one surfaces.
Support for this research came through NSF grant Phy-1316699 and the Stanford Institute for Theoretical Physics.
\section{Introduction}
\label{sec:intro}
This paper considers a fully discrete
finite difference scheme for the Benjamin--Ono (BO) equation. The
BO equation models the evolution of weakly nonlinear
internal long waves. It has been derived by Benjamin \cite{benjamin}
and Ono \cite{ono} as an approximate model for long-crested
unidirectional waves at the interface of a two-layer system of
incompressible inviscid fluids, one being infinitely deep. In
non-dimensional variables, the initial value problem associated with
the BO equation reads
\begin{equation}
\begin{cases}
\label{eq:main}
u_t= uu_x + H u_{xx}, \quad x\in\mathbb R, \ 0\le t \le T,& \\
u|_{t=0}=u_0,&
\end{cases}
\end{equation}
where $H$ denotes the Hilbert transform defined by
the principal value integral
\begin{equation*}
H u(x) := \mathrm{P.V.} \, \frac{1}{\pi} \int_{\mathbb R} \frac{u(x-y)}{y} \,dy.
\end{equation*}
The BO equation is, at least formally, completely
integrable \cite{ablowitz} and thus possesses an infinite number of
conservation laws. For example, the momentum and the energy,
given by
\begin{align*}
M(u):= \int u^2 \,dx, \, \, \text{and} \, \, E(u):= \frac{1}{2} \int
\abs{D_x^{1/2} u}^2 \,dx + \frac{1}{6} \int u^3 \,dx,
\end{align*}
are conserved quantities for solutions of \eqref{eq:main}.
We also consider the corresponding $2L$-periodic problem
\begin{equation}
\begin{cases}
\label{eq:main_per}
u_t= uu_x + \mathbb{H}_{\rm per} u_{xx}, \quad &x\in\mathbb{T}, \ 0\le t \le T, \\
u|_{t=0}=u_0,& x\in\mathbb{T}
\end{cases}
\end{equation}
where $\mathbb{T}:= \mathbb R/2 L \mathbb{Z}$. The periodic
Hilbert transform is defined by the principal value integral
\begin{align*}
\mathbb{H}_{\rm per} u(x) = \mathrm{P.V.} \frac{1}{2L} \int_{-L}^{L}
\cot \Bigl(\frac{\pi}{2L} y\Bigr) u(x-y)\,dy.
\end{align*}
The initial value problem \eqref{eq:main} has been extensively studied
in recent years. Well-posedness of \eqref{eq:main} in $H^s(\mathbb R)$,
for $s > 3$ was proved by Iorio \cite{iorio} using purely
hyperbolic energy methods. Then, Ponce \cite{ponce} derived a local
smoothing effect associated to the dispersive part of the equation,
which combined with compactness methods, enabled him to prove
well-posedness also for $s = 3$.
By combining a complex version of the Cole--Hopf transform with
Strichartz estimates, Tao \cite{Tao:2004} was able to show
well-posedness of the Cauchy problem \eqref{eq:main} in $H^1(\mathbb R)$.
This well-posedness was extended to $H^s(\mathbb R)$ for $s>1$ by Burq and
Planchon \cite{burq} and for $s\ge 0$ by Ionescu and Kenig
\cite{ionescu}.
In the periodic setting, Molinet \cite{molinet3a}
proved well-posedness in $H^s (\mathbb{T})$ for $s \ge
0$. For operator splitting methods applied to the BO equation, see \cite{DuttaHoldenKoleyRisebro:2015}.
In this paper, we define a numerical scheme for both \eqref{eq:main}
and \eqref{eq:main_per}, with the aim of developing a \textit{convergent}
finite difference scheme. While there are several numerical methods
for the BO equation
which perform well in practice (indeed better than the one presented here; see \cite{BoydXu} for a recent comparison of different
numerical methods), we emphasize that we here \textit{prove} the convergence of our
proposed scheme. Having said this, there are results concerning error estimates
for the BO equation in \cite{vasu:1998, Pelloni:2001, DengMa:2009}.
However, an error estimate analysis assumes a priori the existence of solutions of the underlying equation, whereas
our convergence analysis, as a by-product, can be viewed as a constructive proof of the existence
of solutions of the BO equation \eqref{eq:main}.
It is worth mentioning that the scheme under consideration in this paper is similar to the scheme
analyzed in \cite{vasu:1998}, the only difference being that a different discretization of the Hilbert transform is
introduced here.
We analyze the fully discrete Crank--Nicolson difference scheme
\begin{equation}
\label{eq:hilbertrealline}
u^{n+1}_j = u^n_j + {\Delta t}\, \ave{u}^{n+1/2}_j D u^{n+1/2}_j + {\Delta t}\, \mathbb H\left(
D_{+} D_{-} u^{n+1/2} \right)_j,
\quad n\in\mathbb N_0, \, j\in \mathbb{Z},
\end{equation}
where ${\Delta x}, {\Delta t}$ are discretization parameters, $u^n_j\approx u(j{\Delta x}, n{\Delta t})$ and $u^{n+1/2}=(u^n+u^{n+1})/2$. Furthermore, $D$ and $D_{\pm}$ denote
symmetric and forward/backward (spatial) finite differences,
respectively, $\mathbb H$ denotes a discrete Hilbert transform operator, and $\ave{u}$ denotes a spatial average. We show
(Theorem~\ref{thm:H3convergence}) that for initial data $u_0\in
H^2(\mathbb R)$ there exists a finite time $T$, depending only on the
$H^2(\mathbb R)$ norm of the initial data, such that for $t\le T$, the
difference approximation \eqref{eq:BO_fd} converges uniformly in
$C(\mathbb R\times [0,T])$ to the unique solution of the BO
equation \eqref{eq:main} as ${\Delta x}\to 0$ with ${\Delta t}=\mathcal O({\Delta x})$.
Furthermore, following \cite[Theorem 3.2]{vasu:1998}, a second-order
error estimate in both time and space for smooth solutions can be obtained by our numerical method.
The rest of the paper is organized as follows: In
Section~\ref{sec:scheme}, we present necessary notations to introduce
the Crank--Nicolson
scheme and present the convergence analysis in the full line case, in
Section~\ref{sec:periodic} we present the periodic Hilbert transform
and outline the proofs in the periodic setting, and finally in
Section~\ref{sec:numerics}, we test our numerical scheme and provide
some numerical results.
\section{The finite difference scheme}
\label{sec:scheme}
Throughout this paper, we use the letters $C$, $K$ etc.~to denote
various constants which may change from line to line.
We start by introducing the necessary notation. Derivatives will be
approximated by finite differences, and the basic quantities are as
follows. For any function $p:\mathbb R\to \mathbb R$, we set
$$
D_{\pm} p(x)=\pm \frac1{{\Delta x}}\big(p(x\pm {\Delta x})-p(x)\big), \ \text{and}\
D = \frac{1}{2}\left(D_{+} + D_{-} \right)
$$
for some (small) positive number ${\Delta x}$. If we introduce the averages
$$
\ave{p}(x) := \frac{1}{3}\left(p(x+{\Delta x})+p(x)+ p(x-{\Delta x})\right), \,\, \bar{p}(x) := \frac{1}{2}\left(p(x+{\Delta x})+ p(x-{\Delta x})\right)
$$
and the shift operator
$$
S^\pm p(x)=p(x\pm{\Delta x}),
$$
we find that
\begin{align*}
D(pq)&= \bar{p} Dq+\bar{q} Dp, \\
D_{\pm} (pq)&=S^\pm p D_{\pm} q+qD_{\pm} p=S^\pm q D_{\pm} p+pD_{\pm} q.
\end{align*}
We discretize the real axis using ${\Delta x}$ and set $x_j = j {\Delta x}$ for $j
\in \mathbb{Z}$. For a given function $p$, we define $p_j = p(x_j)$. We will
consider functions in $\ell^2$ with the usual inner product and norm
\begin{equation*}
\langle p, q\rangle= {\Delta x}\sum_{j\in \mathbb{Z}} p_j q_j, \quad \norm{p}= \norm{p}_2=\langle p,p\rangle^{1/2}, \qquad
p,q\in \ell^2.
\end{equation*}
Moreover, we define the $h^2$-norm of a grid function as
\begin{align*}
\norm{p}_{h^2} := \Bigl(\norm{p}^2 + \norm{D_{+} p}^2 + \norm{D_{+} D_{-} p}^2\Bigr)^{1/2}.
\end{align*}
Observe that
\begin{equation*}
\norm{p}_\infty := \sup_{j\in\mathbb{Z}}\abs{p_j}\le \frac1{{\Delta x}^{1/2} } \norm{p}.
\end{equation*}
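This follows since $\abs{p_j}^2\le \sum_{k\in\mathbb{Z}}\abs{p_k}^2=\norm{p}^2/{\Delta x}$ for every $j\in\mathbb{Z}.$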
In the periodic case, let $N$ be a given \emph{odd} natural number.
We divide the periodicity interval $[-L,L]$ into $N$ sub-intervals
$[x_j, x_{j+1}]$ using $ {\Delta x}=\frac{2L}{N}$, where
\begin{equation*}
x_j=-L+j{\Delta x}, \quad \text{for} \quad j=0,1,2,\dots,N.
\end{equation*}
In the periodic case the sum over $\mathbb{Z}$ is replaced by a
finite sum $j=0,\dots,N$.
The various difference operators enjoy the following properties:
\begin{equation*}
\langle p, D_{\pm}q\rangle=-\langle D_{\mp}p,q\rangle, \quad \langle p,Dq\rangle=-\langle Dp,q\rangle, \qquad p,q\in \ell^2.
\end{equation*}
Furthermore, using Leibniz rules, the following identities can be readily verified:
\begin{subequations}
\begin{align}
\label{imp1}
\langle D(pq), q \rangle &= \frac{{\Delta x}}{2} \langle D_{+} p \,D q, q \rangle + \frac12 \langle S^{-} q\, Dp, q \rangle, \\
\label{imp2}
D_{+} D_{-} (pq)&=D_{-} p D_{+} q + D_{+} D_{-} q S^- p + D_{+} p D_{+} q + q D_{+} D_{-} p.
\end{align}
\end{subequations}
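Both identities can be checked mechanically. As an illustration, the following symbolic computation (a sketch using \texttt{sympy}; the variable names are ours, and we set ${\Delta x}=1$, which is harmless since every term in \eqref{imp2} scales as ${\Delta x}^{-2}$) verifies \eqref{imp2} at a single grid point:
\begin{verbatim}
import sympy as sp

# p and q at x - dx, x, x + dx, with dx = 1
pm, p0, pp = sp.symbols('p_m p_0 p_p')
qm, q0, qp = sp.symbols('q_m q_0 q_p')

lhs = pp*qp - 2*p0*q0 + pm*qm          # D+D-(pq)
rhs = ((p0 - pm)*(qp - q0)             # D-p D+q
       + pm*(qp - 2*q0 + qm)           # S^-p D+D-q
       + (pp - p0)*(qp - q0)           # D+p D+q
       + q0*(pp - 2*p0 + pm))          # q D+D-p
assert sp.expand(lhs - rhs) == 0
\end{verbatim}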
We also need to discretize in the time direction. Introduce (a small)
time step ${\Delta t}>0$, and use the notation
\begin{equation*}
D^t_{+} p(t)=\frac1{{\Delta t}}\big(p(t+{\Delta t})-p(t)\big),
\end{equation*}
for any function $p\colon[0,T]\to \mathbb R$. Write $t_n=n{\Delta t}$ for
$n\in\mathbb N_0=\mathbb N\cup\{0\}$. A fully discrete grid function is a function
$u_{\Delta x}\colon {\Delta t}\, \mathbb N_0 \to \mathbb R^\mathbb{Z}$, and we write
$u_{\Delta x}(x_j,t_n)=u^n_j$. (A CFL-condition will enforce a relationship
between ${\Delta x}$ and ${\Delta t}$, and hence we only use ${\Delta x}$ in the notation.)
Next we present a lemma, which essentially gives
a relation between discrete and continuous Sobolev norms. Since we shall use this lemma
frequently, for the sake of completeness, we present its proof in the full line case.
\begin{lemma}
\label{lemmaimp}
There exists a constant $C$ such that for all $u \in H^2(\mathbb R)$
\begin{align*}
\norm{u_{\Delta x}}_{h^2} \le C\, \norm{u}_{H^2},
\end{align*}
where we identify $u_{\Delta x}$ with the discrete evaluation
$\seq{u(x_j)}_{j}$.
\end{lemma}
\begin{proof}
To begin with, observe that the discrete operator
$D_{+} D_{-}$ commutes with the continuous operator $\partial_x$. A simple application of H\"{o}lder's inequality reveals that
\begin{align*}
\norm{D_{+} D_{-} u}^2 &= {\Delta x} \sum_j \left(\frac{1}{{\Delta x}} \left(D_{-} u(x_{j+1}) - D_{-} u(x_j)\right)\right)^2 \\
&= {\Delta x} \sum_j \left(\int_{x_j}^{x_{j+1}} \frac{1}{{\Delta x}} \partial_x D_{-} u(x)\,dx\right)^2 \\
&\le {\Delta x} \sum_j \left(\norm{\frac{1}{{\Delta x}}}_{L^2([x_j,x_{j+1}])} \norm{\partial_x D_{-} u(x)}_{L^2([x_j,x_{j+1}])} \right)^2 \\
&= \norm{D_{-} \partial_x u}^2_{L^2(\mathbb R)}.
\end{align*}
Similarly, we can show that
\begin{align*}
\norm{D_{-} \partial_x u}_{L^2(\mathbb R)} \le \norm{\partial^2_x u}_{L^2(\mathbb R)}.
\end{align*}
Furthermore, similar arguments, combined with the one-dimensional Sobolev embedding, can be used to show
\begin{align*}
\norm{D_{+} u} \le \norm{\partial_x u}_{L^2(\mathbb R)}, \,\, \text{and} \,\,
\norm{u} \le C \norm{u}_{H^1(\mathbb R)}.
\end{align*}
Combining the above estimates, the lemma follows.
\end{proof}
We will now provide details for the discrete Hilbert transform,
which is different in full line and the periodic cases.
Here we concentrate on the full line case, both regarding the Hilbert
transform and the difference scheme. The periodic case is similar, and
we will only provide detailed proofs where the differences are
sufficiently important. Thus for the moment, we consider the
non-periodic case, while the results in the periodic case are outlined
in Section~\ref{sec:periodic}.
\subsection*{The discrete Hilbert transform on $\mathbb R$}
Recall that the continuous Hilbert transform $H$ on $\mathbb R$ is defined by
\begin{equation}\label{eq:HIlbert}
\begin{aligned}
H(u)(x) &= \mathrm{P.V.} \, \frac{1}{\pi} \int_{\mathbb R} \frac{u(y)}{x-y}
\,dy\\
&=\lim_{\varepsilon\downarrow 0} \frac{1}{\pi}\int_\varepsilon^\infty
\frac{1}{y}\left(u(x-y)-u(x+y)\right)\,dy.
\end{aligned}
\end{equation}
As a strategy to discretize the continuous Hilbert transform, we first consider \emph{even} $j$, and write $(H u)(x_j) := H(u)_j$
as
\begin{align*}
H(u)_j =\mathrm{P.V.} \frac{1}{\pi} \int_{\mathbb R} \frac{u(y)}{x_j-y} \,dy.
\end{align*}
This can be rewritten as
\begin{align*}
H(u)_j=\frac{1}{\pi}\sum_{k\,\text{even}} \int_{x_k}^{x_{k+2}}
\frac{u(y)}{x_j-y} \,dy.
\end{align*}
Next, we apply the midpoint rule on each of these integrals in the sum, to
obtain the following quadrature formula
\begin{align*}
H(u)_j\approx \frac{2}{\pi}\sum_{k\,\text{odd}} \frac{u_k}{j-k} .
\end{align*}
Similar arguments can be repeated almost \emph{verbatim} to deal with \emph{odd} $j$, to conclude
\begin{equation*}
H(u)_j\approx \frac{2}{\pi}\sum_{k\,\text{even}} \frac{u_k}{j-k}.
\end{equation*}
Therefore, combining the above results, we can define the discrete
Hilbert transform $\mathbb H$ of a function $u$ as
\begin{align}
\label{eq:hil_full}
\mathbb H(u_{\Delta x})_j&=\frac1{\pi} \sum_{k\ne j}\frac{u_k
\left(1-(-1)^{j-k}\right)}{j-k}\qquad j\in \mathbb{Z}\\
&=\frac{1}{\pi} \sum_{k=1}^\infty \frac{1}{k}\left(u_{j-k} -
u_{j+k}\right)\left(1-(-1)^k\right)\notag\\
&=\frac{1}{\pi} \sum_{k=0}^\infty \int_{x_{2k}}^{x_{2k+2}}
\frac{1}{x_{2k+1}} \left(u(x_j-x_{2k+1})-u(x_j+x_{2k+1})\right)\,dy.\notag
\end{align}
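For concreteness, here is a minimal NumPy sketch of \eqref{eq:hil_full} (our illustration only: the sum is truncated to a finite grid, so it is accurate for grid functions that are negligible near the ends, and the $O(N^2)$ direct summation is not meant to be efficient):
\begin{verbatim}
import numpy as np

def discrete_hilbert(u):
    # H(u)_j = (1/pi) sum_{k != j} u_k (1 - (-1)^{j-k}) / (j - k):
    # only odd offsets j - k contribute, each with weight 2/(j - k).
    n = len(u)
    h = np.zeros(n)
    k = np.arange(n)
    for j in range(n):
        d = j - k
        mask = (d % 2 != 0)
        h[j] = (2.0 / np.pi) * np.sum(u[mask] / d[mask])
    return h
\end{verbatim}
Skew-symmetry and norm preservation (see Lemma~\ref{lemma:imp} below) can be checked numerically on such truncations, up to boundary errors.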
We now list some useful properties of \eqref{eq:hil_full} in the
following lemma.
\begin{lemma}
\label{lemma:imp}
The discrete Hilbert transform $\mathbb H$ on $\mathbb R$ defined by
\eqref{eq:hil_full} is a linear operator with the following
properties: \\
(i) (Skew symmetric) For any two grid functions $u$ and $v$, the
discrete Hilbert transform satisfies
\begin{align*}
\langle \mathbb H u, v \rangle = - \langle u, \mathbb H v \rangle.
\end{align*}
(ii) (Translation invariant) The discrete Hilbert transform
commutes with discrete derivatives, i.e.,
\begin{align*}
\mathbb H\left( D_{\pm} u \right) = D_{\pm} \mathbb H(u).
\end{align*}
(iii) (Norm preservation) Finally, it also preserves the discrete
$L^2$-norm
\begin{align*}
\norm{\mathbb H u}=\norm{u}.
\end{align*}
\end{lemma}
\begin{remark}
The continuous Hilbert transform \eqref{eq:HIlbert} satisfies the same properties with respect to the standard inner product in $L^2$ and ordinary derivatives.
\end{remark}
For a proof of the above lemma, we refer to the monograph by King \cite[pp.~671--674]{king}.
It is worth mentioning that these properties are
essential in order to carry out the analysis given below.
We shall also have use for the following lemma:
\begin{lemma}
\label{lem:l2convergence} Let $\varphi$ be a function in $C^3_0(\mathbb R)$,
and define the piecewise constant function $h_{{\Delta x}}$ by
\begin{equation*}
h_{{\Delta x}}(x)=h_j=\mathbb H(\varphi)(x_j)\quad \text{ for $x\in [x_j,x_{j+1})$.}
\end{equation*}
Then
\begin{equation*}
\lim_{{\Delta x}\to 0} \norm{H(\varphi) - h_{{\Delta x}}}_{L^2(\mathbb R)} = 0.
\end{equation*}
\end{lemma}
\begin{proof}
We define the auxiliary function
\begin{equation*}
\tilde{h}(x)=\tilde{h}_j=H(\varphi)(x_j)\quad \text{ for $x\in [x_j,x_{j+1})$.}
\end{equation*}
Then
\begin{align*}
\norm{H(\varphi) - \tilde{h}}_{L^2(\mathbb R)}^2 &=
\sum_j \int_{x_j}^{x_{j+1}} \left(H(\varphi)(x) -
H(\varphi)(x_j)\right)^2 \,dx \\
&= \sum_j \int_{x_j}^{x_{j+1}} \Bigl( \int_{x_j}^x
H(\varphi)'(z)\,dz\Bigr)^2 \,dx\\
&= \sum_j \int_{x_j}^{x_{j+1}} \Bigl( \int_{x_j}^x
H(\varphi')(z)\,dz\Bigr)^2 \,dx\\
&\le \sum_j \int_{x_j}^{x_{j+1}} \int_{x_j}^{x_{j+1}}
(H(\varphi')(z))^2\,dz\, (x-x_j) \,dx \\
&= \frac{{\Delta x}^2}{2} \norm{H(\varphi')}_{L^2(\mathbb R)}^2 \\
&= \frac{{\Delta x}^2}{2} \norm{\varphi'}_{L^2(\mathbb R)}^2 .
\end{align*}
Next,
\begin{align*}
\norm{h-\tilde{h}}_{L^2(\mathbb R)}^2 &= {\Delta x}\sum_{j} (h_j-\tilde{h}_j)^2
\le {\Delta x}\sum_{\abs{j}\le J} (h_j-\tilde{h}_j)^2 +
2{\Delta x}\sum_{\abs{j}>J} \bigl(h_j^2 + \tilde{h}_j^2\bigr)\\
&=:S_1+S_2.
\end{align*}
Now we have that
\begin{align*}
h_j-\tilde{h}_j &=\sum_{k\ge 0}\Big( \int_{x_{2k}}^{x_{2k+2}}
\psi(x_j,x_{2k+1}) \,dy - \int_{x_{2k}}^{x_{2k+2}} \psi(x_j,y)\,dy\Big),
\end{align*}
where $\psi(x,y)=(\varphi(x-y)-\varphi(x+y))/y$. By the error formula
for the midpoint quadrature rule we have that
\begin{equation*}
\Bigl| \int_{x_{2k}}^{x_{2k+2}}
\psi(x_j,x_{2k+1}) \,dy - \int_{x_{2k}}^{x_{2k+2}} \psi(x_j,y)\,dy \Bigr| \le C{\Delta x}^3 \norm{\varphi^{(3)}}_{L^\infty(\mathbb R)}.
\end{equation*}
Furthermore, since the support of $\varphi$ is bounded, the above sum
over $k$ contains only a finite number of terms, namely $M_\varphi/{\Delta x}$,
independently of $j$.
Therefore,
\begin{equation*}
\abs{h_j-\tilde{h}_j}\le M_\varphi C {\Delta x}^2 \norm{\varphi^{(3)}}_{L^\infty(\mathbb R)},
\end{equation*}
and
\begin{equation*}
S_1 \le 2 J\, {\Delta x} \left(M_\varphi C {\Delta x}^2 \norm{\varphi^{(3)}}_{L^\infty(\mathbb R)}\right)^2.
\end{equation*}
Since $\sum_j h_j^2$ and $\sum_j \tilde{h}_j^2$ are finite, we can
choose $J$ large to make $S_2$ small, and then ${\Delta x}$ small to make
$S_1$ small. Hence $\norm{h-\tilde{h}}_{L^2}$ converges to zero as
${\Delta x} \to 0$. By the triangle inequality $\norm{H(\varphi)-h}_{L^2}\le
\norm{H(\varphi)-\tilde{h}}_{L^2}+\norm{h-\tilde{h}}_{L^2} \to 0$.
\end{proof}
\subsection*{The difference scheme}
We propose the following Crank--Nicolson implicit scheme to generate
approximate solutions of the BO equation \eqref{eq:main}
\begin{equation}
\label{eq:BO_fd}
u^{n+1}_j = u^n_j + {\Delta t}\, \mathbb G(u^{n+1/2})_j + {\Delta t}\, \mathbb H( D_{+} D_{-} u^{n+1/2})_j,
\hspace{.3cm} n\in\mathbb N_0,\ j\in \mathbb{Z},
\end{equation}
where we have used the following notations:
\begin{align*}
u^{n+1/2} := \frac12(u^n + u^{n+1}), \quad \text{and} \,\, \mathbb G(u):= \ave{u}\,Du.
\end{align*}
For the initial data we have
\begin{equation*}
u^0_j=u_0(x_j),\hspace{.3cm} j\in \mathbb{Z}.
\end{equation*}
Note that since the scheme \eqref{eq:BO_fd} is implicit, we must guarantee that
the scheme is well-defined, i.e., that it admits a unique solution. Assuming this for the moment, we show
that the implicit scheme is $L^2$-conservative, by simply taking
the inner product of the scheme \eqref{eq:BO_fd} with $u^{n+1/2}$. This yields
\begin{equation*}
\frac12 \langle u^{n+1}-u^n, u^{n+1} +u^n \rangle = {\Delta t} \langle u^{n+1/2}, \mathbb G u^{n+1/2}\rangle
+ {\Delta t} \langle u^{n+1/2}, \mathbb H(D_{+} D_{-} u^{n+1/2})\rangle.
\end{equation*}
A simple calculation, using Lemma~\ref{lemma:imp}, reveals that
\begin{equation}\label{eq:simple}
\langle \mathbb H \left(D_{+} D_{-} u \right), u \rangle=0, \quad \text{and}\,\, \langle \mathbb G(u), u \rangle=0.
\end{equation}
Thus, we conclude that
\begin{equation}\label{eq:l2}
\norm{u^{n+1}} = \norm{u^{n}} .
\end{equation}
To solve \eqref{eq:BO_fd}, we use a simple fixed point iteration, and define the
sequence $\seq{w_l}_{l\ge 0}$ by letting $w_{l+1}$ be the
solution of the linear equation
\begin{equation}
\label{eq:iteration scheme}
\begin{cases}
w_{l+1} =v + {\Delta t} \,\mathbb G\left(\frac{v+w_l}{2}\right) + \frac{1}{2}{\Delta t} \,\mathbb H \Big( D_{+} D_{-} \left(v+w_{l+1}\right)\Big), \\
w_0 = v := u^n.
\end{cases}
\end{equation}
See also \cite[Lemmas 3.3 and 3.5]{vasu:1998}.
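To make the iteration concrete, the following NumPy sketch performs one time step of \eqref{eq:BO_fd}. It is our illustration only: the line is truncated to a finite grid on which $u$ is assumed negligible near the ends, the discrete Hilbert transform is stored as a dense skew-symmetric matrix, and the linear system for $w_{l+1}$ is solved directly; none of this is meant to be efficient.
\begin{verbatim}
import numpy as np

def cn_step(v, dx, dt, tol=1e-10, max_iter=100):
    n = len(v)
    j = np.arange(n)
    d = j[:, None] - j[None, :]
    # truncated discrete Hilbert transform: weight 2/(pi (j-k)), odd j-k
    Hm = np.where(d % 2 != 0, 2.0 / (np.pi * np.where(d == 0, 1, d)), 0.0)
    # D+D- as a tridiagonal matrix (zero Dirichlet truncation)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    M = np.eye(n) - 0.5 * dt * Hm @ A     # operator acting on w_{l+1}

    def G(u):                             # G(u) = <u> D u
        ue = np.pad(u, 1)                 # zero-pad the truncated ends
        ave = (ue[2:] + ue[1:-1] + ue[:-2]) / 3.0
        Du = (ue[2:] - ue[:-2]) / (2.0 * dx)
        return ave * Du

    w = v.copy()
    for _ in range(max_iter):
        rhs = v + dt * G(0.5 * (v + w)) + 0.5 * dt * (Hm @ (A @ v))
        w_new = np.linalg.solve(M, rhs)
        if np.linalg.norm(w_new - w) <= tol * np.linalg.norm(v):
            break
        w = w_new
    return w_new
\end{verbatim}
In line with \eqref{eq:l2}, one observes that $\norm{u^{n+1}}$ agrees with $\norm{u^n}$ up to the iteration tolerance and truncation errors at the ends of the grid.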
The following stability lemma serves as a building block for the subsequent convergence analysis.
\begin{lemma}
\label{lemma1}
Choose a constant $L$ such that $0<L<1$ and set
\begin{equation*}
K=\frac{6-L}{1-L}>6.
\end{equation*}
We consider the iteration \eqref{eq:iteration
scheme} with $w^0=u^n$, and assume that the following CFL
condition holds
\begin{equation}\label{eq:cfl}
\lambda \le L/\left(K \norm{u^n}_{h^2}\right), \,\, \text{with}\,\, \lambda = {\Delta t} /{\Delta x}.
\end{equation}
Then there exists a function $u^{n+1}$ which solves
\eqref{eq:BO_fd}, and $\lim_{l\to\infty}w_l = u^{n+1}$. Furthermore,
the following estimate holds:
\begin{equation}
\label{eq:unp1bnd}
\norm{u^{n+1}}_{h^2}\le K \norm{u^n}_{h^2},
\end{equation}
where $K$ depends only on given $L$.
\end{lemma}
\begin{proof}
Define $\Delta w_l := w_{l+1}-w_l$; a straightforward calculation using \eqref{eq:iteration scheme} then yields
\begin{equation}
\label{eq:test1}
\Big(1-\frac{1}{2}{\Delta t}\,\mathbb H D_{+} D_{-} \Big) \Delta w_l = {\Delta t}
\left[\mathbb G\left(\frac{v+w_l}2\right) -
\mathbb G\left(\frac{v+w_{l-1}}2 \right)\right]=:{\Delta t} \Delta \mathbb G.
\end{equation}
Next, applying the discrete operator $D_{+} D_{-}$ to \eqref{eq:test1}, then multiplying the resulting equation by ${\Delta x} D_{+} D_{-} \Delta w_l$,
and subsequently summing over $j \in \mathbb{Z}$, we conclude
\begin{equation*}
\norm{D_{+} D_{-} \Delta w_l}^2 = {\Delta t}\langle D_{+} D_{-} \Delta \mathbb G,D_{+} D_{-} \Delta w_{l}\rangle \le {\Delta t}
\norm{D_{+} D_{-} \Delta \mathbb G}\,\norm{D_{+} D_{-} \Delta w_l}.
\end{equation*}
After some calculations, we find that
\begin{equation*}
\Delta \mathbb G = \frac14\left[
\ave{\Delta w_{l-1}} D\left(v+w_{l-1}\right) +
\ave{v+w_l} D\left(\Delta w_{l-1}\right)\right].
\end{equation*}
Next, in order to calculate $D_{+} D_{-} \Delta \mathbb G$, we use the identity \eqref{imp2} and discrete
Sobolev inequalities (cf. \cite[Lemma A.1]{HoldenKoleyRisebro:2013}). This results in
\begin{align*}
\norm{D_{+} D_{-} \left(\ave{\Delta w_{l-1}} D\left(v+w_{l-1}\right) \right)}
\le \frac{1}{{\Delta x}} \max{\Big\{\norm{v}_{h^2}, \norm{w_{l-1}}_{h^2}\Big\}}
\norm{\Delta w_{l-1}}_{h^2},
\end{align*}
and similarly
\begin{align*}
\norm{D_{+} D_{-} \left( \ave{v+w_l} D\left(\Delta w_{l-1}\right)\right)}
\le \frac{1}{{\Delta x}}\max{\Big\{\norm{ v}_{h^2},\norm{w_l}_{h^2}\Big\}} \norm{\Delta w_{l-1}}_{h^2}.
\end{align*}
Combining the above results, we obtain
\begin{equation}
\label{eq:ineq}
\norm{D_{+} D_{-} \Delta w_l}\le \lambda
\max{\Big\{\norm{ v}_{h^2},\norm{w_l}_{h^2}, \norm{w_{l-1}}_{h^2} \Big\}} \norm{\Delta w_{l-1}}_{h^2}.
\end{equation}
Observe that an appropriate inequality like \eqref{eq:ineq} can be obtained for $\norm{D_{+} \Delta w_l}$ and $\norm{\Delta w_l}$,
which in turn can be used, along with \eqref{eq:ineq}, to conclude
$$
\norm{\Delta w_l}_{h^2}\le \lambda
\max{\Big\{\norm{ v}_{h^2},\norm{w_l}_{h^2}, \norm{w_{l-1}}_{h^2} \Big\}} \norm{\Delta w_{l-1}}_{h^2}.
$$
To proceed further, we need to estimate $\norm{D_{+} D_{-} w_l}$. In that context, we first observe that $w_1$
satisfies the following equation
\begin{equation*}
w_1=v + {\Delta t} \,\mathbb G(v) + \frac{1}{2}{\Delta t}\,\mathbb H \Big( D_{+} D_{-} (v+w_1)\Big).
\end{equation*}
Applying the discrete operator $D_{+} D_{-}$ to the equation satisfied by $w_1$,
and subsequently taking the inner product with $ D_{+} D_{-}(v+w_1)$, we get
\begin{align*}
\norm{D_{+} D_{-} w_1}^2 &= \norm{D_{+} D_{-} v}^2 + {\Delta t}
\langle D_{+} D_{-} \mathbb G(v),D_{+} D_{-}(v+w_1)\rangle \\
&= \norm{D_{+} D_{-} v}^2 + {\Delta t} \langle D_{+} D_{-} \mathbb G(v),D_{+} D_{-} w_1\rangle\\
&\le \norm{D_{+} D_{-} v}^2 + {\Delta t}^2 \norm{D_{+} D_{-} \mathbb G(v)}^2 + \frac14 \norm{D_{+} D_{-} w_1}^2.
\end{align*}
Next, a simple calculation along with
discrete Sobolev inequalities (cf. \cite[Lemma A.1]{HoldenKoleyRisebro:2013}) confirms that
\begin{equation*}
\norm{D_{+} D_{-} \mathbb G(v)} = \norm{D_{+} D_{-} (\ave{v}\,Dv)} \le \frac{2}{{\Delta x}} \norm{v}_{h^2}^2.
\end{equation*}
Hence
\begin{equation}
\norm{D_{+} D_{-} w_1}\le \sqrt{\frac43} \left(1 + 4\lambda^2 \norm{v}_{h^2}^2\right)^{1/2}\norm{v}_{h^2}. \label{eq:ineq1}
\end{equation}
Now choose a constant $L\in (0,1)$, and define $K$ by
\begin{equation*}
K=\frac{6-L}{1-L}>6.
\end{equation*}
Therefore, it is clear that if $\lambda$ satisfies the CFL condition \eqref{eq:cfl}, then
\begin{equation*}\label{eq:lambdaassume1}
\sqrt{\frac43}\sqrt{1 + 4\lambda^2 \norm{v}_{h^2}^2}\le 4.
\end{equation*}
Hence from \eqref{eq:ineq1}, making use of the interpolation inequality, we conclude that
\begin{align*}
\norm{w_1}_{h^2} &\le 4 \norm{v}_{h^2} \le K \norm{v}_{h^2}.
\end{align*}
At this point, we assume inductively that
\begin{subequations}
\begin{align}
\label{eq:inducta}
\norm{w_l}_{h^2} &\le K \norm{ v}_{h^2}, \quad \text{for}\,\, l=1,\ldots,m,\\
\label{eq:inductb}
\norm{ \Delta w_l}_{h^2}&\le L \norm{\Delta w_{l-1}}_{h^2}, \quad \text{for}\,\, l=2,\ldots,m.
\end{align}
\end{subequations}
We have already shown \eqref{eq:inducta} for $m=1$. To show
\eqref{eq:inductb} for $m=2$, note that
\begin{align*}
\norm{\Delta w_2}_{h^2} \le \lambda \max\Big\{\norm{v}_{h^2},\norm{ w_1}_{h^2}\Big\}
\norm{ \Delta w_1}_{h^2}
\le 4 \lambda \norm{ v}_{h^2} \norm{ \Delta w_1}_{h^2}
\le L \norm{ \Delta w_1}_{h^2},
\end{align*}
by the CFL condition \eqref{eq:cfl}.
To show \eqref{eq:inducta} for $m>1$,
\begin{align*}
\norm{ w_{m+1}}_{h^2} & \le \sum_{l=0}^m \norm{ \Delta w_l}_{h^2} + \norm{ v}_{h^2}
\le \norm{(w_1-v)}_{h^2} \sum_{l=0}^m L^l + \norm{ v}_{h^2} \\
&\le \left(\norm{w_1}_{h^2} +\norm{ v}_{h^2} \right) \frac{1}{1-L} + \norm{ v}_{h^2}
\le \frac{4+2-L}{1-L} \norm{v}_{h^2} =K\norm{ v}_{h^2}.
\end{align*}
Then
\begin{equation*}
\norm{\Delta w_{m+1}}_{h^2} \le \lambda K \norm{ v}_{h^2} \,\norm{ \Delta w_m}_{h^2} \le L
\norm{\Delta w_m}_{h^2},
\end{equation*}
if the CFL condition \eqref{eq:cfl} holds.
To sum up, if $L\in (0,1)$, and $K$ is defined by $K=(6-L)/(1-L)$, and
$\lambda$ satisfies the CFL-condition
\begin{equation*}
\lambda \le \frac{L}{K\norm{ v}_{h^2} },
\end{equation*}
then we have the desired estimate \eqref{eq:unp1bnd}. Finally,
using \eqref{eq:inductb}, one can show that
$\{w_l\}$ is Cauchy, hence $\{w_l\}$ converges. This
completes the proof.
\end{proof}
\begin{remark}
Observe that the above result guarantees that the iteration
scheme converges for one time step under the CFL condition \eqref{eq:cfl},
where the ratio between the temporal and spatial mesh sizes
must be smaller than an upper bound that depends on the computed
solution at that time, i.e., $u^n$.
Since we want
the CFL-condition only to depend on the initial data
$u_0$, we have to derive local a priori bounds for the computed
solution $u^n$. This will be achieved in Theorem~\ref{thm2} to conclude that the
iteration scheme \eqref{eq:iteration scheme} converges for
sufficiently small ${\Delta t}$.
\end{remark}
The following lemma is the most important step towards stability, and the very heart of this paper:
\begin{lemma}
\label{lemma3.1}
Let the approximate solution $u^n$ be generated by the
Crank--Nicolson scheme \eqref{eq:BO_fd}, where ${\Delta t}$ and ${\Delta x}$ are
such that \eqref{eq:cfl} holds.
Then we have that
\begin{align*}
D_+^t\left( \norm{ u^n}_{h^2}\right) \le \sqrt{\frac32} \,
\norm{ u^{n+1/2}}_{h^2}^2.
\end{align*}
\end{lemma}
\begin{proof}
If $D_{+}D_{-} u^n=0$, then $u^n=0$ and $u^{n+1}=0$ since $u^n, u^{n+1}\in \ell^2$, so that the lemma
trivially holds. Therefore we can assume that $D_{+}D_{-} u^n\ne 0$.
Applying the discrete operator $D_{+} D_{-}$ to \eqref{eq:BO_fd}, and subsequently taking inner product with $D_{+} D_{-} u^{n+1/2}$ yields
\begin{align*}
\frac12 \norm{D_{+} D_{-} u^{n+1}}^2 &= \frac12 \norm{D_{+} D_{-} u^{n}}^2 + \Delta t \langle D_{+} D_{-} \mathbb G(u^{n+1/2}), D_{+} D_{-} u^{n+1/2} \rangle,
\end{align*}
using \eqref{eq:simple}, which implies
\begin{equation}\label{eq:dtbound}
D_+^t\left(\norm{D_{+} D_{-} u^n}\right) = 2 \frac{ \langle D_{+} D_{-} \mathbb G(u^{n+1/2}), D_{+} D_{-} u^{n+1/2} \rangle}{\norm{D_{+} D_{-} u^{n+1}} + \norm{D_{+} D_{-} u^n}}.
\end{equation}
For the moment we drop the superscript $n+1/2$ and simply write $u$ for $u^{n+1/2}$,
where $n$ is fixed. We use the product rule \eqref{imp2} to write
\begin{align*}
\langle D_{+} D_{-} \mathbb G(u), D_{+} D_{-} u \rangle &= \langle D_{+} D_{-}
\left(\ave{u}\,Du \right), D_{+} D_{-} u \rangle \\
& = \langle D_{-} \ave{u}\, D_{+}(Du), D_{+} D_{-} u \rangle + \langle S^-
\ave{u}\, D_{+} D_{-} (Du), D_{+} D_{-} u \rangle \\
& \quad+ \langle D_{+} \ave{u}\, D_{+}(Du), D_{+} D_{-} u \rangle +
\langle D_{+} D_{-} \ave{u}\, Du, D_{+} D_{-} u \rangle \\
&=: \mathcal{E}^1(u) + \mathcal{E}^2(u) + \mathcal{E}^3(u) +
\mathcal{E}^4(u),
\end{align*}
in the obvious notation.
By the discrete
Sobolev inequality (cf.~\cite[Lemma A.1]{HoldenKoleyRisebro:2013})
\begin{equation*}
\norm{D_{-} u}_{\infty}\le \sqrt{\frac32} \left( \norm{D_{+}D_{-} u}+\norm{u}\right),
\end{equation*}
and the relation $\norm{D_{+}D_{-} u}= \norm{D_{+}^2 u}$,
we apply the Cauchy--Schwarz inequality to obtain
\begin{align*}
\abs{\mathcal{E}^1(u)} &\le \norm{D_{-} \ave{u}}_{\infty} \norm{D_{+}
Du} \norm{D_{+} D_{-} u} \\
&\le \norm{D_{-} \ave{u}}_{\infty}\frac12\big(
\norm{D_{+}^2u}+\norm{D_{+} D_{-} u} \big)\norm{D_{+} D_{-} u} \\
&= \norm{D_{-} {u}}_{\infty} \norm{D_{+} D_{-} u}^2 \\
& \le\sqrt{\frac32} (\norm{D_{+} D_{-} u} + \norm{u}) \norm{D_{+} D_{-} u}^2 \\
& \le \sqrt{\frac32} \, \norm{D_{+} D_{-} u}\, \norm{u}_{h^2}^2.
\end{align*}
Similar arguments show that
\begin{align*}
\abs{\mathcal{E}^3(u)} \le \sqrt{\frac32}\, \norm{D_{+} D_{-} u}\, \norm{u}_{h^2}^2,
\,\text{and}\,\abs{\mathcal{E}^4(u)} \le \sqrt{\frac32} \, \norm{D_{+} D_{-} u}\, \norm{u}_{h^2}^2.
\end{align*}
To estimate the remaining term $\mathcal{E}^2(u)$, we proceed as follows:
\begin{align*}
\mathcal{E}^2(u) &:= \langle S^- \ave{u}\, D_{+} D_{-} (Du), D_{+} D_{-} u
\rangle\\
&=\langle S^- \ave{u}\, D(D_{+} D_{-} u), D_{+} D_{-} u \rangle \\
&=\langle S^- \ave{u}\, D_{+}D_{-} u,D(D_{+}D_{-} u)\rangle\\
&= - \langle D\left(S^- \ave{u}\,D_{+} D_{-} u \right), D_{+} D_{-} u
\rangle \\
&= -\frac{{\Delta x}}{2} \langle D_{+}(S^- \ave{u}) \, D(D_{+} D_{-} u), D_{+} D_{-}
u \rangle \\
&\qquad- \frac12 \langle S^- D_{+} D_{-} u\, D(S^- \ave{u}), D_{+} D_{-} u
\rangle\quad\text{by \eqref{imp1}}\\
&=: \mathcal{E}^{21}(u)+ \mathcal{E}^{22}(u).
\end{align*}
Again using the discrete Sobolev inequality (cf. \cite[Lemma
A.1]{HoldenKoleyRisebro:2013}) we see that
\begin{align*}
\abs{\mathcal{E}^{21}(u)} &\le \frac{{\Delta x}}{2} \norm{ D_{+}(S^-
\ave{u})}_{\infty} \, \norm{DD_{+} D_{-} u} \, \norm{D_{+} D_{-} u}\\
&=\norm{D_{-} u}_{\infty} \left({\Delta x} \norm{DD_{+} D_{-} u}
\right)\norm{D_{+} D_{-} u}\\
&\le\norm{ D_{-} u}_{\infty} \, \norm{D_{+} D_{-} u} \, \norm{D_{+}
D_{-} u}\\
& \le \sqrt{\frac32} \, \norm{D_{+} D_{-} u}\, \norm{u}_{h^2}^2.
\end{align*}
Similarly,
\begin{align*}
\abs{\mathcal{E}^{22}(u)} \le \sqrt{\frac32}\, \norm{D_{+} D_{-} u}\, \norm{u}_{h^2}^2.
\end{align*}
Therefore, we conclude
\begin{align*}
\abs{\mathcal{E}^2(u)} \le \sqrt{\frac32} \, \norm{D_{+} D_{-} u}\, \norm{u}_{h^2}^2.
\end{align*}
Hence
\begin{align*}
2 \frac{ \langle D_{+} D_{-} \mathbb G(u^{n+1/2}), D_{+} D_{-} u^{n+1/2}
\rangle}{\norm{D_{+} D_{-} u^{n+1}} + \norm{D_{+} D_{-} u^n}}
&\le 2\sqrt{\frac32} \frac{\norm{D_{+} D_{-} u^{n+1/2}}\,
\norm{u^{n+1/2}}_{h^2}^2}{\norm{D_{+} D_{-} u^{n+1}} + \norm{D_{+} D_{-}
u^n}} \\
& \le\sqrt{\frac32}\norm{u^{n+1/2}}_{h^2}^2,
\end{align*}
which by \eqref{eq:dtbound} implies that
\begin{equation}\label{eq:a1}
\abs{D_+^t\left(\norm{D_{+} D_{-} u^n}\right)} \le \sqrt{\frac32}\,\norm{u^{n+1/2}}_{h^2}^2.
\end{equation}
In the same manner, applying the operator $D_{+}$ to \eqref{eq:BO_fd},
and subsequently taking the inner product with $D_{+} u^{n+1/2}$, yields
\begin{equation*}
D_+^t\left(\norm{D_{+} u^n}\right) = 2 \frac{ \langle D_{+}
\mathbb G(u^{n+1/2}), D_{+} u^{n+1/2} \rangle}{\norm{D_{+} u^{n+1}} +
\norm{D_{+} u^n}}.
\end{equation*}
Using the discrete
Sobolev inequality $\norm{u}_\infty \le \norm{u}_{h^1}$, we obtain
\begin{align*}
\abs{\langle D_{+} \mathbb G(u), D_{+} u \rangle}
&= \abs{\langle \ave{u}Du, \DmD_{+} u \rangle}\\
&\le \norm{u}_\infty \,\norm{Du} \,\norm{D_{+}D_{-} u}\\
&\le \norm{D_{+} u} \norm{u}_{h^2}^2.
\end{align*}
Thus, we obtain
\begin{equation} \label{eq:a2}
\abs{D_+^t\left(\norm{D_{+} u^n}\right)}\le \sqrt{\frac32}\norm{u^{n+1/2}}_{h^2}^2.
\end{equation}
Furthermore, the conservative property \eqref{eq:l2} implies that
\begin{equation} \label{eq:a3}
D_+^t\left(\norm{u^n}\right) =0.
\end{equation}
Combining \eqref{eq:a1}, \eqref{eq:a2}, and \eqref{eq:a3} concludes the proof.
\end{proof}
We can now state the following stability result:
\begin{theorem}
\label{thm2}
If the initial function $u_0$ is in $H^2$,
then there exist
a time $T>0$ and a constant $C$, both depending only on $\norm{u_0}_{H^2}$, such that
\begin{align*}
\norm{u^n}_{h^2} \le C, \quad \text{for $t_n \le T$}
\end{align*}
for all sufficiently small $\lambda={\Delta t}/{\Delta x}$.
\end{theorem}
\begin{proof}
Set $y_n = \norm{u^{n}}_{h^2}$. By Lemma~\ref{lemma1}, we have
$\norm{u^{n+1/2}}_{h^2}\le K\norm{u^n}_{h^2}$, so that Lemma~\ref{lemma3.1}
gives
\begin{equation*}
y_{n+1}\le y_n + {\Delta t}_n\sqrt{\frac32}\left(K y_n\right)^2
\end{equation*}
for all ${\Delta t}_n/{\Delta x}\le\lambda_n=L/(K \norm{u^n}_{h^2})$, where ${\Delta t}_n$ denotes a possibly nonuniform time step.
Let $w(t)$ solve the differential equation $w'(t)=\sqrt{3/2}K^2 w(t)^2$,
$w(0)=\norm{u_0}_{H^2}$, that is,
$w(t)=\norm{u_0}_{H^2}/\bigl(1-\sqrt{3/2}\,K^2 \norm{u_0}_{H^2}\,t\bigr)$.
This solution blows up at time $\hat{T}=1/(\sqrt{3/2}K^2 \norm{u_0}_{H^2})$,
and for $t<\hat{T}$, $w$ is strictly increasing. Choosing $T<\hat{T}$, we
have that $w(t)\le w(T)$ for $t\le T$, and we claim that also $y_n\le w(t_n)\le
w(T)$ for $t_n\le T$. This claim is true for $n=0$, and we
inductively assume that it is true for $n=0,\ldots,N$. Then
\begin{align*}
y_{N+1}\le y_N + {\Delta t}_N \sqrt{\frac32}K^2 y_N^2 &\le w(t_N) + \int_{t_N}^{t_{N+1}}
\sqrt{\frac32}K^2 w(t_N)^2\,dt\\
&\le w(t_N)+\int_{t_N}^{t_{N+1}} w'(s)\,ds = w(t_{N+1}).
\end{align*}
This proves that $y_n\le w(T)$ for all $n$ such that $t_n\le T$, thus $ \norm{u^n}_{h^2} \le C=w(T)$. We can now use a uniform spacing, and let
${\Delta t}/{\Delta x}\le\lambda\le L/(KC)$.
\end{proof}
Now we turn to an estimate of the temporal derivative of the approximate solution $u^n$.
This bound will enable us to apply the Arzel\`a--Ascoli theorem in order to prove the convergence of the approximate solutions. From the scheme \eqref{eq:BO_fd}, using the property $\norm{D_{+}D_{-} u}=\norm{ \mathbb H (D_{+}D_{-} u)}$,
we see that
\begin{align*}
\norm{D^t_{+} u^n}\le \norm{\mathbb G(u^{n+1/2})} \, + \, \norm{D_{+}D_{-} u^{n+1/2}}.
\end{align*}
By the discrete Sobolev inequality
$$
\norm{\mathbb G(u^{n+1/2})} \le \norm{u^{n+1/2}}_{\infty} \norm{Du^{n+1/2}} \le C \norm{u^{n+1/2}}_{h^2}^2.
$$
Therefore Theorem~\ref{thm2} implies that $\norm{D^t_{+} u^n}\le C$.
Thus, we can follow Sj\"oberg \cite{Sjoberg:1970} to prove
convergence of the scheme \eqref{eq:BO_fd} for $t<T$. We reason as follows: we
construct a continuous, piecewise quadratic interpolation
$u_{{\Delta x}}(x,t)$ in two steps. First we make a spatial interpolation for
each $t_n$:
\begin{equation} \label{eq:bilinearinterp}
\begin{aligned}
u^n(x) &=u_j^n +(x-x_j)Du_j^n \\
&\quad +\frac12(x-x_j)^2D_{+}D_{-} u_j^n, \quad x\in
[x_j,x_{j+1}), \, j\in\mathbb{Z}.
\end{aligned}
\end{equation}
Next we interpolate in time:
\begin{equation} \label{eq:bilinearinterp1} u_{{\Delta x}}(x,t) =u^n(x)
+(t-t_n)D^t_{+} u^n(x), \quad x\in \mathbb R, \, t\in [t_n, t_{n+1}], \,
t_{n+1}\le T.
\end{equation}
Observe that
\begin{equation*}
u_{{\Delta x}}(x_j,t_n) =u_j^n, \qquad j\in\mathbb{Z}, \quad n\in\mathbb N_0.
\end{equation*}
Note that $u_{\Delta x}$ is continuous everywhere and continuously
differentiable in space.
The function $u_{\Delta x}$ satisfies for $x\in [x_j,x_{j+1})$ and $t\in
[t_n, t_{n+1}]$
\begin{align}
\partial_x u_{\Delta x}(x,t)&=Du^n_j+(x-x_j)D_{+}D_{-} u^n_j \label{eq:udxH1C} \\
&\quad +(t-t_n)D^t_{+}\Big( Du^n_j+(x-x_j)D_{+}D_{-} u^n_j \Big), \notag\\
\partial_x^2 u_{\Delta x}(x,t)&= D_{+}D_{-} u^n_j+(t-t_n)D^t_{+} D_{+}D_{-} u^n_j,\label{eq:udxtL2C} \\
\partial_t u_{\Delta x}(x,t)&= D^t_{+} u^n(x),\label{eq:udxtL2C1}
\end{align}
which implies
\begin{align}
\norm{u_{{\Delta x}}(\, \cdot\,,t)}_{L^2(\mathbb R)}&\le \norm{u_0}_{L^2(\mathbb R)},\label{eq:udxL2}\\
\norm{\partial_x u_{\Delta x}(\, \cdot\,,t)}_{L^2(\mathbb R)} &\le C,\label{eq:udxH1}\\
\norm{\partial_t u_{\Delta x}(\, \cdot\,,t)}_{L^2(\mathbb R)} &\le C,\label{eq:udxtL2} \\
\norm{\partial_{x}^2 u_{\Delta x}(\, \cdot\,,t)}_{L^2(\mathbb R)}&\le
C,\label{eq:udxH3}
\end{align}
for $t\le T$ and for a constant $C$ which is independent of
${\Delta x}$.
The bound on $\partial_t u_{\Delta x}$ also implies that $u_{\Delta x}\in
\Lip([0,T];L^2(\mathbb R))$. Then an application of the
Arzel\`a--Ascoli theorem using \eqref{eq:udxL2} shows that the set
$\seq{u_{\Delta x}}_{{\Delta x}>0}$ is sequentially compact in
$C([0,T];L^2(\mathbb R))$. Thus there exists a sequence
$\seq{u_{{\Delta x}_j}}_{j\in\mathbb N}$ which converges uniformly in
$C([0,T];L^2(\mathbb R))$ to some function $u$.
Next we show that the limit $u$ is a weak solution of the Cauchy
problem \eqref{eq:main}, i.e., $u$ satisfies
\begin{equation}
\label{weak}
\int_0^T \int_{-\infty}^{\infty} \big(u \psi_t - \frac{u^2}{2}\psi_x -
u H(\psi_{xx})\big) \,dxdt
+\int_{-\infty}^{\infty} \psi(x,0)u_0(x)\,dx=0,
\end{equation}
for all test functions $\psi\in C^\infty_0(\mathbb R\times[0,T))$.
To do this, we start by noting that the piecewise constant function
\begin{equation*}
\bar{u}_{{\Delta x}}(x,t)=u^n_j\quad \text{ for $(x,t)\in
[x_j,x_{j+1})\times [t_n,t_{n+1})$,}
\end{equation*}
also converges to $u$ in $L^\infty([0,T];L^2_{\mathrm{loc}}(\mathbb R))$. It is more
convenient to apply a Lax--Wendroff type argument to $\bar{u}_{\Delta x}$
than to $u_{\Delta x}$.
Let $\psi \in C_0^{\infty}(\mathbb R \times [0,T))$ be any test function and denote $\psi_j^n = \psi(x_j, t_n)$.
Multiplying the scheme \eqref{eq:BO_fd} by ${\Delta x} {\Delta t} \psi_j^n$, and subsequently summing over all $j$ and $n$ yields
\begin{align*}
{\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \,D^t_{+} u^n_j & = {\Delta x} {\Delta t}
\sum_{j} \sum_{n} \psi_j^n \,\mathbb G(u^{n+1/2})_j \\
& \quad - {\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n\, \mathbb H(D_{+} D_{-}
u^{n+1/2})_j.
\end{align*}
It is straightforward to show that
\begin{align*}
{\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \,D^t_{+} u^n_j
&= - {\Delta x} {\Delta t} \sum_{j} \sum_{n} u^n_j\, D^t_{-} \psi_j^n - {\Delta x} \sum_{j} \psi_j^0 \, u^0_j \\
& \to -\int_{\mathbb R} \int_0^T u \psi_t \,dx\,dt - \int_{\mathbb R} \psi(x,0) u_0(x) \,dx \, \text{as} \,\, {\Delta x} \downarrow 0.
\end{align*}
Next, for the nonlinear term, we proceed as follows:
\begin{align*}
{\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \,\mathbb G(u^{n+1/2})_j
&= {\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \, \widetilde{u_j^{n+1/2}}\, Du_j^{n+1/2} \\
&= {\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \Bigg[ \frac13
D\Big(u_j^{n+1/2}\Big)^2 + \frac13 u_j^{n+1/2} \,D u_j^{n+1/2}
\Bigg].
\end{align*}
A simple summation-by-parts formula yields
\begin{align*}
\frac13 {\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \, D\Big(u_j^{n+1/2}\Big)^2
&= - \frac13 {\Delta x} {\Delta t} \sum_{j} \sum_{n} \Big(u_j^{n+1/2}\Big)^2 D\psi_j^n \\
&\to -\frac13 \int_{\mathbb R} \int_0^T u^2\, \psi_x \,dx\,dt, \,\, \text{as} \, \,{\Delta x} \downarrow 0.
\end{align*}
Again, using summation-by-parts
\begin{align*}
\frac13 {\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n u_j^{n+1/2} \,D u_j^{n+1/2}
&= - \frac{1}{12} {\Delta x} {\Delta t} \sum_{j} \sum_{n} u_j^{n+1/2}\,u_{j-1}^{n+1/2}\, D_{-} \psi_j^n \\
&\quad - \frac{1}{12} {\Delta x} {\Delta t} \sum_{j} \sum_{n} u_j^{n+1/2}\, u_{j+1}^{n+1/2}\, D_{+} \psi_j^n \\
& \to -\frac16 \int_{\mathbb R} \int_0^T u^2\, \psi_x \,dx\,dt \,\, \text{as} \, \,{\Delta x} \downarrow 0.
\end{align*}
Here we have used the general formula
\begin{equation*}
\langle p , q D q\rangle = - \frac14\langle q S^{-} q , D_{-} p\rangle - \frac14\langle q S^{+} q, D_{+} p\rangle.
\end{equation*}
Hence, we conclude
\begin{equation*}
{\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n \,\mathbb G(u^{n+1/2})_j
\to -\frac12 \int_{\mathbb R} \int_0^T u^2\, \psi_x \,dx\,dt \,\, \text{as} \, \,{\Delta x} \downarrow 0.
\end{equation*}
We are left with the term involving the Hilbert transform. With a
slight abuse of notation we identify a sequence $\seq{v_j}$ with a
piecewise constant function, and use the notation $\langle
\, \cdot\,,\, \cdot\,\rangle$ for the $\ell^2$ inner product as well as for the
inner product in $L^2(\mathbb R)$. Then
\begin{equation*}
- {\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n\, \mathbb H(D_{+} D_{-}
u^{n+1/2})_j = {\Delta t} \sum_n \langle u^{n+1/2}, \mathbb H(D_{+}D_{-} \psi^n)\rangle.
\end{equation*}
Next,
\begin{align*}
\abs{ \langle u^{n+1/2}, \mathbb H(D_{+}D_{-} \psi^n) \rangle-\langle
u,H(\psi_{xx}(\, \cdot\,,t_n)\rangle}&\le \abs{ \langle u^{n+1/2}-u, \mathbb H(D_{+}D_{-}
\psi^n)\rangle} \\
&\qquad +
\abs{\langle u,\mathbb H(D_{+}D_{-}\psi^n)-H(\psi_{xx}(\, \cdot\,,t_n))\rangle}\\
&\le \norm{u^{n+1/2}-u}\,\norm{D_{+}D_{-} \psi^n} \\
&\qquad +
\norm{u}\,\norm{\mathbb H(D_{+}D_{-}\psi^n)-H(\psi_{xx}(\, \cdot\,,t_n))}.
\end{align*}
The first term on the right will tend to zero, since $u^{n+1/2}$
converges to $u$ in $L^2$. Regarding the second term we have that
the piecewise constant function $D_{+}D_{-} \psi^n$ will converge to
$\psi_{xx}(\, \cdot\,,t_n)$ since $\psi$ is smooth, as will the piecewise constant
function $v^n_j:=\psi_{xx}(x_j,t_n)$. Using these observations
\begin{align*}
\norm{\mathbb H(D_{+}D_{-}\psi^n)-H(\psi_{xx}(\, \cdot\,,t_n))} &\le
\norm{\mathbb H(D_{+}D_{-}\psi^n-v^n)}+\norm{\mathbb H(v^n)-H(\psi_{xx}(\, \cdot\,,t_n))}\\
&\le \norm{D_{+}D_{-}\psi^n-v^n} + \norm{\mathbb H(v^n)-H(\psi_{xx}(\, \cdot\,,t_n))}.
\end{align*}
We have already observed that the first term on the right tends to
zero as ${\Delta x}$ tends to zero, and the second term will vanish by
Lemma~\ref{lem:l2convergence} since $\psi_{xx}$ is smooth. Thus we
have established that
\begin{equation*}
{\Delta x} {\Delta t} \sum_{j} \sum_{n} \psi_j^n\, \mathbb H(D_{+} D_{-}
u^{n+1/2})_j \to
- \int_0^T \int_{\mathbb R} u H(\psi_{xx}) \,dxdt \,\, \text{as} \, \,{\Delta x} \downarrow 0,
\end{equation*}
which shows that $u$ is a weak solution.
The bounds \eqref{eq:udxH1}, \eqref{eq:udxtL2}, and \eqref{eq:udxH3}
mean that $u$ is actually a strong solution such that \eqref{eq:main}
holds as an $L^2$ identity. Thus the limit $u$ is the unique solution
to the BO equation \eqref{eq:main} taking the initial data $u_0$.
Summing up, we have proved the following theorem:
\begin{theorem}\label{thm:H3convergence}
Assume that $u_0\in H^2(\mathbb R)$. Then there exists a finite time
$T$, depending only on $\norm{u_0}_{H^2(\mathbb R)}$, such that for
$t\le T$, the difference approximations defined by
\eqref{eq:BO_fd} converge uniformly in $C(\mathbb R\times [0,T])$ to
the unique solution of the Benjamin--Ono equation \eqref{eq:main} as
${\Delta x}\to 0$ with ${\Delta t}=\order{{\Delta x}}$.
\end{theorem}
\section{The periodic case}\label{sec:periodic}
To keep the presentation fairly short we have only provided details in
the full line case. However, the same proofs apply also in the
periodic case, but the discrete Hilbert transform must be defined
differently. In this case it should be an approximation of
the singular integral
\begin{equation}
\label{PHT}
\mathbb{H}_{\rm per} u(x) = \mathrm{P.V.} \frac{1}{2L} \int_{-L}^{L}\!\!
\cot\left(\frac{\pi}{2L} (x-y)\right) u(y)\,dy,
\end{equation}
such that Lemma~\ref{lemma:imp} holds.
A simple use of the trigonometric identity
\begin{align*}
2\cot(\theta)=\cot\left(\frac\theta2\right) -
\tan\left(\frac\theta2\right),
\end{align*}
helps us to rewrite \eqref{PHT} as
\begin{equation*}
\mathbb{H}_{\rm per} u:= T_1 u - T_2 u,
\end{equation*}
where
\begin{equation}
T_1u(x) = \mathrm{P.V.} \frac{1}{4L} \int_{-L}^{L}
\cot\left(\frac{\pi}{4L} (x-y)\right) u(y)\,dy,
\end{equation}
and
\begin{equation}
T_2u(x) = \mathrm{P.V.} \frac{1}{4L} \int_{-L}^{L}
\tan\left(\frac{\pi}{4L} (x-y)\right) u(y)\,dy.
\end{equation}
Let $n$ be an \emph{even} integer such that $0\le n\le N-1$. For
this $n$, we have
\begin{align*}
T_1u(x_n) &= \mathrm{P.V.} \frac{1}{4L} \int_{x_0}^{x_N} \cot\left(\frac{\pi}{4L} (x_n-y)\right) u(y)\,dy\\
&= \frac{1}{4L}\sum_{j=0}^{\frac{N-3}{2}} \int_{x_{2j}}^{x_{2j+2}} \cot\left(\frac{\pi}{4L} (x_n-y)\right) u(y)\,dy \\
&\quad + \frac{1}{4L} \int_{x_{N-1}}^{x_N} \cot\left(\frac{\pi}{4L}
(x_n-y)\right) u(y)\,dy.
\end{align*}
We apply the midpoint rule to each of the integrals in the sum and
the endpoint rule to the last integral, and we obtain the following
quadrature formula:
\begin{equation}
\label{T_1quad}
\begin{aligned}
T_1u(x_n)&=\frac{1}{4L}\sum_{j=\,\text{odd}}2{\Delta x} \, \,u(x_j)\cot\left(\frac{\pi}{4L} (x_n-x_j)\right)\\
&\quad +\frac{1}{4L}{\Delta x} \, \,u(x_N)\cot\left(\frac{\pi}{4L}
(x_n-x_N)\right).
\end{aligned}
\end{equation}
Using the identity ${\Delta x}=2L/N$, we define
\begin{equation}
\label{T_1quada}
T_1u_n=\frac{1}{N}\sum_{j=\,\text{odd}}u_j\cot\left(\frac{\pi(n-j)}{2N} \right)\\
+\frac{1}{2N}u(x_N)\cot\left(\frac{\pi}{4L} (x_n-x_N)\right).
\end{equation}
Next we write $T_2u(x_n)$ as
\begin{align*}
T_2u(x_n) &= \mathrm{P.V.} \frac{1}{4L} \int_{x_0}^{x_N}
\tan\left(\frac{\pi}{4L} (x_n-y)\right) u(y)\,dy\\
& = \frac{1}{4L}\sum_{j=\,\text{odd}}\int_{x_{j}}^{x_{j+2}}
\tan\left(\frac{\pi}{4L} (x_n-y)\right) u(y)\,dy \\
&\quad + \frac{1}{4L} \int_{x_{0}}^{x_1} \tan\left(\frac{\pi}{4L}
(x_n-y)\right) u(y)\,dy.
\end{align*}
To obtain the quadrature formula, we use the midpoint rule on each of
the integrals in the sum and the endpoint rule on the last integral,
\begin{equation}
\label{T_2quad}
\begin{aligned}
T_2u(x_n)&=\frac{1}{4L}\sum_{j=\,\text{even},j\neq 0}2{\Delta x} \,
\,u(x_j)\tan\left(\frac{\pi}{4L} (x_n-x_j)\right)\\
&\quad +\frac{1}{4L}{\Delta x} \, \,u(x_0)\tan\left(\frac{\pi}{4L}
(x_n-x_0)\right).
\end{aligned}
\end{equation}
Using the identity ${\Delta x}=2L/N$, we have
\begin{equation}
\label{T_2quada}
T_2u_n=\frac{1}{N}\sum_{ j=\,\text{even},\; j\neq 0 } u_j
\tan \left(\frac{\pi(n-j)}{2N} \right)\\
+\frac{1}{2N} u(x_0) \tan \left( \frac{\pi}{4L}
(x_n-x_0) \right).
\end{equation}
Since $u$ is an $N$-periodic grid function, we have
\begin{align*}
u(x_N)\cot\left(\frac{\pi}{4L} (x_n-x_N)\right)= -
\,u(x_0)\tan\left(\frac{\pi}{4L} (x_n-x_0)\right).
\end{align*}
Therefore, subtracting \eqref{T_2quada} from \eqref{T_1quada} we have, for
even $n$,
\begin{align*}
(\mathbb{H}_{\rm per}
u)_n=\frac{1}{N}\sum_{j=\,\text{odd}}u_j\cot\left(\frac{\pi(n-j)}{2N}
\right) - \frac{1}{N}\sum_{ j=\,\text{even} } u_j \tan
\left(\frac{\pi(n-j)}{2N} \right).
\end{align*}
Similarly, we have for odd $n$
\begin{align*}
(\mathbb{H}_{\rm per}
u)_n=\frac{1}{N}\sum_{j=\,\text{even}}u_j\cot\left(\frac{\pi(n-j)}{2N}
\right) - \frac{1}{N}\sum_{ j=\,\text{odd} } u_j \tan
\left(\frac{\pi(n-j)}{2N} \right).
\end{align*}
Combining the above two relations, we conclude
\begin{equation}
\label{b formula}
\mathbb{H}_{\rm per} u = c*u,
\end{equation}
where the vector $c$ is given by
\begin{equation}
\label{c formula}
c_n=\frac{1-(-1)^n}{2N} \cot\left( \frac{\pi n}{2N}\right) -
\frac{1+(-1)^n}{2N} \tan\left( \frac{\pi n}{2N}\right).
\end{equation}
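Since \eqref{b formula} is a circular convolution, $\mathbb{H}_{\rm per}u$ can be evaluated in $\order{N\log N}$ operations with the fast Fourier transform. The following Python sketch (an illustration added here for concreteness, not the implementation used for the experiments; $N$ is assumed odd, as in the derivation) builds the kernel \eqref{c formula} and checks numerically that its DFT takes the constant values $\mp i$ computed in Lemma~\ref{lemma123} below.
\begin{verbatim}
import numpy as np

def bo_kernel(N):
    # kernel c_n of the discrete periodic Hilbert transform; N odd
    n = np.arange(N)
    c = np.zeros(N)
    odd = n % 2 == 1
    even = (n % 2 == 0) & (n > 0)            # c_0 = 0
    c[odd] = 1.0/(N*np.tan(np.pi*n[odd]/(2*N)))
    c[even] = -np.tan(np.pi*n[even]/(2*N))/N
    return c

def hilbert_per(u):
    # H_per u = c * u as a circular convolution, O(N log N) via the FFT
    c = bo_kernel(len(u))
    return np.real(np.fft.ifft(np.fft.fft(c)*np.fft.fft(u)))

N = 129
chat = np.fft.fft(bo_kernel(N))              # should be 0, -i,...,-i, i,...,i
assert np.allclose(chat[1:(N+1)//2], -1j) and np.allclose(chat[(N+1)//2:], 1j)
\end{verbatim}
With this routine, each application of $\mathbb{H}_{\rm per}$ in the time stepping below costs $\order{N\log N}$ rather than $\order{N^2}$.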
Next we prove the following properties of the discrete Hilbert transform
$\mathbb{H}_{\rm per}$ defined by \eqref{b formula}--\eqref{c formula}:
\begin{lemma}
\label{lemma123}
The discrete Hilbert transform is skew-symmetric. Moreover, it
satisfies $\norm{\mathbb{H}_{\rm per} u}\le\norm{u}$, with $\norm{u}=\norm{\mathbb{H}_{\rm per}
u}$ provided $\sum_{j=0}^{N-1}u_{j}=0$. Furthermore, we have
\begin{align*}
\norm{\mathbb{H}_{\rm per}D_{+}D_{-} u}=\norm{D_{+}D_{-} u}.
\end{align*}
\end{lemma}
\begin{proof}
The skew-symmetry of $\mathbb{H}_{\rm per}$ follows from the fact
that $c_{-n}=-c_n$ for any $n$. Furthermore, we use the
\emph{discrete Fourier transform} (DFT) to prove the stated
norm bounds.
First we recall the definition of discrete Fourier transform. For a
given $N$-periodic grid function $u$, we define the DFT by
\begin{align*}
\hat{u}_k= \sum_{n=0}^{N-1}u_n\, e^{-i\frac{2\pi k n}{N}}, \quad
k=0,1,\ldots, N-1,
\end{align*}
and the inversion formula is then
\begin{align*}
u_k=\frac1N \sum_{n=0}^{N-1}\hat{u}_n\, e^{i\frac{2\pi k n}{N}},
\quad k=0,1,\ldots, N-1.
\end{align*}
Then the Parseval formula reads
\begin{align*}
\norm{\hat{u}}= \sqrt{N}\norm{u}.
\end{align*}
Next we compute the DFT of $c$. We claim that the Fourier transform
of $c$ is given by
\begin{equation}
\label{chat}
\hat{c}_n=
\begin{cases}
-i & \text{for $n=1,2,\ldots,\frac{N-1}{2}$},\\
\,0 &\text{for $n=0$},\\
\,i &\text{for $n=\frac{N+1}{2},\ldots,N-1$}.
\end{cases}
\end{equation}
To prove this we use the inverse discrete Fourier transform. From
\eqref{chat}, we see that
\begin{align*}
\sum_{k=0}^{N-1} \hat{c}_k e^{i\frac{2\pi k n}{N}}&=
-i\sum_{k=1}^{(N-1)/2} e^{i\frac{2\pi k n}{N}}
+i\sum_{k=(N+1)/2}^{N-1} e^{i\frac{2\pi k n}{N}}\\
&=2\sum_{k=1}^{(N-1)/2}\sin\left(\frac{2\pi
kn}{N}\right)\\
&= 2 \, \Im\left( \sum_{k=1}^{(N-1)/2}\exp\left(\frac{2\pi ikn}{N}\right) \right)\\
&= 2\, \Im\left( \frac{e^{i\frac{2\pi
n}{N}\frac{N-1}{2}}-1}{e^{i\frac{2\pi n}{N}}-1}
e^{i\frac{2\pi n}{N}}\right)\\
&= - \Im\left(i \frac{e^{i\frac{2\pi
n}{N}\frac{N-1}{2}}-1}{\sin(\frac{\pi n}{N})}
e^{i\frac{\pi n}{N}}\right)\\
&= - \Im\left(i \frac{(-1)^n e^{-i\frac{\pi
n}{N}}-1}{\sin(\frac{\pi n}{N})} e^{i\frac{\pi n}{N}}\right)\\
&= - \Im\left(i \frac{(-1)^n -e^{i\frac{\pi n}{N}}}{\sin(\frac{\pi n}{N})} \right)\\
&= \cot \left(\frac{\pi n}{N}\right) - \frac{(-1)^n}{\sin(\pi n/N)} \\
&= \frac{\cos^2(\frac{\pi n}{2N})- \sin^2(\frac{\pi
n}{2N})}{2\sin(\frac{\pi n}{2N})\cos(\frac{\pi n}{2N})}
- \frac{(-1)^n\big(\cos^2(\frac{\pi n}{2N})+ \sin^2(\frac{\pi
n}{2N}) \big)}{2\sin(\frac{\pi n}{2N})\cos(\frac{\pi n}{2N})}
\\
&=N\Big(\frac{1-(-1)^n}{2N} \cot\left( \frac{\pi n}{2N}\right) -
\frac{1+(-1)^n}{2N} \tan\left( \frac{\pi n}{2N}\right)\Big)\\
&=N c_n.
\end{align*}
This proves the claim. Therefore, we have
\begin{align*}
\widehat{\mathbb{H}_{\rm per} u}_n=\hat{c}_n \, \hat{u}_n.
\end{align*}
Now using Parseval's formula we have
\begin{align*}
\norm{\mathbb{H}_{\rm per}u}&= \frac{1}{\sqrt{N}}\norm{\widehat{\mathbb{H}_{\rm per} u}}\\
&=\frac{1}{\sqrt{N}}\norm{\hat{c}\, \hat{u}}\\
&= \frac{1}{\sqrt{N}}\Big(\sum_{n=1}^{N-1} \abs{\hat{u}_n}^2\Big)^{1/2} \\
&\le \frac{1}{\sqrt{N}}\norm{\hat{u}}\\
&=\norm{u}.
\end{align*}
Thus we have $ \norm{\mathbb{H}_{\rm per}u}\le \norm{u}$, and $
\norm{\mathbb{H}_{\rm per}u}= \norm{u}$ provided $\hat{u}_0=0$, that is,
\begin{align*}
\sum_{j=0}^{N-1}u_j=0.
\end{align*}
The last claim of the lemma follows since $\sum_{j=0}^{N-1}(D_{+}D_{-} u)_{j}=0$ for any $N$-periodic grid function $u$, the sum being telescoping.
\end{proof}
Keeping in mind the above discretization for the Hilbert transform, we propose the following implicit scheme to generate
approximate solutions to the BO equation
\eqref{eq:main_per}
\begin{equation}
\label{eq:BO_fd_per}
u^{n+1}_j = u^n_j + {\Delta t}\, \mathbb G(u^{n+1/2})_j + {\Delta t}\, \mathbb{H}_{\rm per}(D_{+} D_{-} u^{n+1/2})_j,
\end{equation}
for $n\ge 0$ and $j=0,\ldots,N-1$.
Regarding $u^0$ we set
\begin{equation*}
u^0_j=u_0(x_j),\qquad j= 0, \dots, N-1.
\end{equation*}
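In practice, each step of the implicit scheme \eqref{eq:BO_fd_per} can be computed with a fixed-point iteration of the kind analyzed earlier, which converges under the CFL condition \eqref{eq:cfl}. A minimal Python sketch (ours, for illustration, not the code used for the experiments) is given below. It reuses \texttt{hilbert\_per} from the previous listing and takes $\mathbb G(u)_j=\frac13(u_{j-1}+u_j+u_{j+1})\,Du_j$, which agrees with the identity $\mathbb G(u)=\frac13 D(u^2)+\frac13 u\,Du$ used in the convergence proof.
\begin{verbatim}
import numpy as np

def D(u, dx):       # centered difference on a periodic grid
    return (np.roll(u, -1) - np.roll(u, 1))/(2*dx)

def DpDm(u, dx):    # second difference D_+ D_- on a periodic grid
    return (np.roll(u, -1) - 2*u + np.roll(u, 1))/dx**2

def G(u, dx):       # G(u)_j = (u_{j-1}+u_j+u_{j+1})/3 * (Du)_j
    return (np.roll(u, 1) + u + np.roll(u, -1))/3.0 * D(u, dx)

def cn_step(un, dt, dx, tol=1e-12, maxit=200):
    # one Crank-Nicolson step, solved by the fixed-point iteration
    w = un.copy()
    for _ in range(maxit):
        um = 0.5*(un + w)                 # u^{n+1/2}
        w_new = un + dt*(G(um, dx) + hilbert_per(DpDm(um, dx)))
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w_new
\end{verbatim}
A CFL-respecting choice ${\Delta t}\le\lambda\,{\Delta x}$, with $\lambda$ as in Theorem~\ref{thm2}, guarantees convergence of the inner iteration.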
Using the properties of the discrete Hilbert transform \eqref{b
formula}--\eqref{c formula}, and arguments identical to those
used
in the proof of Theorem~\ref{thm:H3convergence}, we can prove the following
theorem:
\begin{theorem}
\label{thm:H3convergence_per}
Assume that $u_0\in H^2(\mathbb{T})$. Then there exists a finite
time $T$, depending only on $\norm{u_0}_{H^2(\mathbb{T})}$,
such that for $t\le T$, the difference approximations defined
by \eqref{eq:BO_fd_per} converge uniformly in $C(\mathbb{T}\times
[0,T])$ to the unique solution of the Benjamin--Ono equation
\eqref{eq:main_per} as ${\Delta x}\to 0$ with ${\Delta t}=\order{{\Delta x}}$.
\end{theorem}
\section{Numerical experiments}
\label{sec:numerics}
The fully-discrete schemes given by \eqref{eq:BO_fd} and \eqref{eq:BO_fd_per} have been tested on
suitable test cases, namely soliton interactions, in order to
demonstrate its effectiveness. It is well-known that a soliton is a
self-reinforcing solitary wave that maintains its shape while
traveling at constant speed. Solitons are the result of a delicate
cancellation of nonlinear and dispersive effects in the
medium. Several authors, see, e.g.,
\cite{BoydXu,vasu:1998,Pelloni:2000} have studied the soliton
interactions for the BO equation.
\subsection{A one-soliton solution}
The Benjamin--Ono equation
\eqref{eq:main_per} has a periodic one-wave solution that tends towards
the one-soliton in the long wave limit, i.e., when the wave number
goes to zero. It is given by
\begin{equation}
\begin{aligned}
u(x,t) = -\frac{2c\delta^2}{1 -\sqrt{1-\delta^2} \cos(c\delta(x
-ct))}, \quad \text{with} \quad \delta=\frac{\pi}{cL},
\end{aligned}
\label{eq:BO_onesol}
\end{equation}
where $L$ denotes the period and $c$ is the wave speed.
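For later reference, the exact solution \eqref{eq:BO_onesol} is straightforward to evaluate; a short sketch (ours, with the parameter values used below) reads:
\begin{verbatim}
import numpy as np

def one_soliton(x, t, c=0.25, L=15.0):
    # periodic one-wave solution (eq:BO_onesol)
    delta = np.pi/(c*L)
    return -2*c*delta**2/(1 - np.sqrt(1 - delta**2)*np.cos(c*delta*(x - c*t)))
\end{verbatim}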
We have applied the scheme \eqref{eq:BO_fd_per} to simulate the periodic
one-wave solution \eqref{eq:BO_onesol} with $L=15$, $c=0.25$, and
initial data $u_0(x) =u(x,0)$. The exact solution is periodic in time
with the period $p=120$. In Figure~\ref{fig:1} we show the approximate
and exact solution at $t=4p=480$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{One_soliton_t=480_N=65}
\caption{Comparison of exact and numerical solutions with initial
data \eqref{eq:BO_onesol}. }
\label{fig:1}
\end{figure}
We have also computed numerically the error for a range of ${\Delta x}$,
where the relative $L^2$ error at time $T$ is defined by
\begin{equation*}
E1(T)=100 \frac{\norm{u-u_{{\Delta x}}}_2}{\norm{u}_{2}},
\end{equation*}
where the norms were computed using the trapezoid rule on the points $x_j$,
and the relative $L^{\infty}$ error is defined by
\begin{equation*}
E2(T)=100 \frac{\norm{u-u_{{\Delta x}}}_{\infty}}{\norm{u}_{\infty}}.
\end{equation*}
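On the uniform grid the trapezoid rule reduces to ${\Delta x}\sum_j(\,\cdot\,)$, so both error measures can be computed as in the following sketch (ours, for illustration):
\begin{verbatim}
import numpy as np

def errors(u_ref, u_num, dx):
    # relative L2 (trapezoid rule) and L-infinity errors, in percent
    e = u_ref - u_num
    E1 = 100*np.sqrt(dx*np.sum(e**2))/np.sqrt(dx*np.sum(u_ref**2))
    E2 = 100*np.max(np.abs(e))/np.max(np.abs(u_ref))
    return E1, E2
\end{verbatim}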
In Table~\ref{tab:1}, we show $L^2$ relative errors as
well as $L^{\infty}$ relative errors for this example at time $T=480$.
\begin{table}[h]
\centering
\begin{tabular}[h]{c|r r r r }
$N$ &\multicolumn{1}{c}{$E1$} &\multicolumn{1}{c}{rate}
&\multicolumn{1}{c}{$E2$} &\multicolumn{1}{c}{rate} \\
\hline\\[-2ex]
33 & 21.24 & &
23.35 & \\[-1ex]
65 & 5.76 &\raisebox{1.5ex}{1.9} & 6.75
&\raisebox{1.5ex}{1.8} \\[-1ex]
129 & 1.46 & \raisebox{1.5ex}{2.0}& 1.71
&\raisebox{1.5ex}{2.0} \\[-1ex]
257 & 0.39 &\raisebox{1.5ex}{1.9} & 0.49
&\raisebox{1.5ex}{1.8} \\[-1ex]
513 & 9.75e{-2} & \raisebox{1.5ex}{2.0}& 1.21e{-1}
&\raisebox{1.5ex}{2.0} \\[-1ex]
1025 & 3.34e{-2} & \raisebox{1.5ex}{1.5}& 4.70e{-2}
&\raisebox{1.5ex}{1.4} \\[-1ex]
2049 & 7.50e{-3} &\raisebox{1.5ex}{2.1}& 1.07e{-2} &\raisebox{1.5ex}{2.1}
\end{tabular}
\vspace{1.5ex}
\caption{$E1$ and $E2$ for the one-soliton solution at time $T=480$.}
\label{tab:1}
\end{table}
The computed solution in Figure~\ref{fig:1} agrees well with the exact solution, the errors are quite low, and the convergence rate appears to approach 2.
\subsection{A two-soliton solution}
\label{sec:twosol}
The velocity of a soliton depends on its amplitude; the higher the
amplitude, the faster it moves. Thus a fast soliton will overtake a
slower soliton moving in the same direction. After the interaction,
the solitons will reappear with the same shape, but possibly with a
change in phase. As explicit formulas are available, they provide
excellent test cases for numerical methods.
Inspired by \cite{vasu:1998} we use the exact solution
\begin{equation}
w(x,t) =- \frac{4 c_1 c_2 \left( c_1 \lambda^2_1 + c_2 \lambda^2_2
+ (c_1 + c_2)^3 c_1^{-1} c_2^{-1} (c_1 -c_2)^{-2}
\right)}{\left( c_1 c_2 \lambda_1 \lambda_2 -(c_1 + c_2)^2 (c_1
-c_2)^{-2} \right)^2 + (c_1 \lambda_1 + c_2 \lambda_2)^2},
\label{eq:BO_twosol}
\end{equation}
where $\lambda_j = \lambda_j(x,t) = x -c_j t $, $j=1,2$, and
$c_1, c_2$ are arbitrary constants. Explicit periodic
two-soliton solutions exist, but the exact formula is
complicated. See, e.g., \cite{satsuma} for a more detailed
discussion. In what follows, we have computed the
two-soliton solution \eqref{eq:BO_twosol} of the unrestricted Cauchy
problem \eqref{eq:main}. We have used the initial data
$u_0(x) = w(x,-10)$ on the interval $(-30,30)$, with $c_1=2$ and
$c_2=1$. Since we compute on a finite interval we
have used the periodic continuation, and used the scheme for the
periodic case. Since $w(\pm 30,t)$ remains very small in the time
interval $[-10,10]$, we believe that the computed solution is very close
to $w(x,t-10)$ for $t\le 20$, and we use $w(x,10)$ as a reference solution.
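The reference solution \eqref{eq:BO_twosol} is likewise straightforward to evaluate; a sketch (ours) with the parameters above reads:
\begin{verbatim}
import numpy as np

def two_soliton(x, t, c1=2.0, c2=1.0):
    # exact two-soliton solution (eq:BO_twosol)
    l1, l2 = x - c1*t, x - c2*t
    A = (c1 + c2)**2/(c1 - c2)**2                 # (c1+c2)^2 (c1-c2)^{-2}
    num = -4*c1*c2*(c1*l1**2 + c2*l2**2 + (c1 + c2)*A/(c1*c2))
    den = (c1*c2*l1*l2 - A)**2 + (c1*l1 + c2*l2)**2
    return num/den
\end{verbatim}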
Computationally, this is a much harder problem than the one-soliton
solution due to the fact that in this case the errors stem from both
the approximation of the unrestricted initial-value problem by a
periodic one, and by the numerical approximation of the latter. In
Figure~\ref{fig:2} we show the exact solution and the approximate
solutions at $t=20$ computed using $257$ and $513$ grid points in the
interval $[-30,30]$.
\begin{figure}[h]
\centering
\includegraphics[width=.7\linewidth]{BO_two_soliton1}
\caption{The numerical solution $u_{\Delta x}(x,20)$ with initial data
$w(x,-10)$.}
\label{fig:2}
\end{figure}
As Figure~\ref{fig:2} shows, the scheme performs well in the
sense that after the interaction, the two solitons have the same shapes
and velocities as before the interaction. In Table~\ref{tab:2}, we
show the relative errors $E1$ and $E2$ as well as the numerical rate of
convergence for the computed solutions. The large errors and the slow
convergence rate both indicate that we are not yet in the asymptotic
regime.
\begin{table}[h]
\centering
\begin{tabular}[h]{c|r r r r }
$N$ &\multicolumn{1}{c}{$E1$} &\multicolumn{1}{c}{rate}
&\multicolumn{1}{c}{$E2$} &\multicolumn{1}{c}{rate} \\
\hline\\[-2ex]
65 & 125.12 & &
113.07 & \\[-1ex]
129 & 124.76 &\raisebox{1.5ex}{0.0} & 97.26
&\raisebox{1.5ex}{0.2} \\[-1ex]
257 & 108.74 & \raisebox{1.5ex}{0.2}& 93.99
&\raisebox{1.5ex}{0.0} \\[-1ex]
513 & 71.34 &\raisebox{1.5ex}{0.6} & 71.20
&\raisebox{1.5ex}{0.4} \\[-1ex]
1025 & 25.28 & \raisebox{1.5ex}{1.5}& 29.20
&\raisebox{1.5ex}{1.3} \\[-1ex]
2049 & 6.87 & \raisebox{1.5ex}{1.9}& 7.98
&\raisebox{1.5ex}{1.9} \\[-1ex]
4097 & 2.16 &\raisebox{1.5ex}{1.7}& 2.52 &\raisebox{1.5ex}{1.7}
\end{tabular}
\vspace{1.5ex}
\caption{$E1$ and $E2$ for the two-soliton solution at time $T=20$
with initial data $w(x,-10)$.}
\label{tab:2}
\end{table}
To sum up, our conservative scheme performs very well
in practice and is \emph{proven} to converge, whereas,
to the best of our knowledge, there is no constructive proof of convergence
for the other schemes associated with \eqref{eq:main} or \eqref{eq:main_per},
except for the partial result in \cite{vasu:1998} (where existence of a solution is assumed)
for the periodic case \eqref{eq:main_per}.
\iffalse
\section{Conclusion}
\label{sec:con}
We have considered the initial value problem for the BO
equation \eqref{eq:main}. Both the decaying case on the full line and
the periodic case are considered. We have used a Crank--Nicolson approximation in time
and finite difference approximations in the spatial variable for \eqref{eq:main}. Moreover, the treatment of
the nonlinear term preserves the conservative property and we approximated
the Hilbert transform by a quadrature formula, which can be computed
by the Fast Fourier Transform (FFT) method. The classical Arzel\`a--Ascoli theorem
has been used to prove the convergence of the scheme \eqref{eq:BO_fd}.
Finally, numerical experiments
have been presented to illustrate the convergence.
Summing up, we proved following results
\begin{itemize}
\item For initial data in $H^2(\mathbb R)$, the fully discrete finite
difference scheme \eqref{eq:hilbertrealline} converges uniformly in
$C(\mathbb R\times [0,T])$ to the unique solution of the
BO equation \eqref{eq:main} as ${\Delta x}\to 0$ with
${\Delta t}=\mathcal O({\Delta x})$.
\item For initial data in $u_0\in H^2(\mathbb{T})$, the fully discrete finite
difference scheme \eqref{eq:BO_fd_per} converges uniformly in
$C(\mathbb R\times [0,T])$ to the unique solution of the
BO equation \eqref{eq:main_per} as ${\Delta x}\to 0$ with
${\Delta t}=\mathcal O({\Delta x})$.
\end{itemize}
\fi
\section{Introduction}
Recently, human action recognition has gained much interest for its great potential in many application areas such as video surveillance and human-computer interaction.
The challenge of human action recognition usually results from the problem that different action classes often share some common motion patterns.
Moreover, action videos usually include many redundant frames depicting the background, large noise, clutter or small movements that are of limited help for recognition \cite{cai2013human}.
In action video classification, if the training and test videos contain frames representing motion components common to different action classes, or redundant frames with large noise or useless clutter, then the discriminability of a classifier learnt from the training frames is corrupted, and the recognition result of a test video is likewise corrupted when the labels of undiscriminating test frames are used to determine its class label.
Sparse coding based recognition approaches have attracted much attention in the field of computer vision \cite{Huang2014Feature}.
To learn a well-adapted dictionary for obtaining good reconstruction and recognition performance,
many algorithms \cite{Jiang2013,Yang2014,Shrivastava2014} for training dictionary with label information and discriminative criterion have been proposed.
These algorithms only work well for clean training samples or training samples with small corruption \cite{Li2014}.
So the efforts for learning low-rank and discriminative dictionary are made in \cite{Li2014,MaCVPR2012} to solve this problem.
However, the previous methods cannot handle the problem caused by undiscriminating test frame samples in video recognition.
Imagine that a test video contains many frames representing useless clutter or components common to different video classes; the label of the test video will then be corrupted by these undiscriminating frames.
So it is necessary to develop a discriminative dictionary for the situation in which both the training and test videos contain many corrupted or uninteresting frame samples.
In this paper, we attempt to learn a discriminative dictionary for action recognition to handle the situation that both the training and test videos contain undiscriminating frames with common components, redundant components, background, clutter or large noise.
We propose a recognition framework for human action recognition in video by learning a discriminative dictionary called zeroth class dictionary.
The "zeroth class" trick is proposed for detecting and filtering out the undiscriminating frames of the test video to eliminate the negative effect resulted from these frames during voting the action category of test video.
The zeroth class dictionary method is a two-phase dictionary learning system \cite{Bai2014Multiple,Li2011MULTI} including three steps:
(1) Firstly, the discriminative frames of the training videos are selected by the Gentle AdaBoost algorithm to learn the first-phase dictionary.
The remaining undiscriminating frames are relabeled and assigned to the zeroth class.
The zeroth class is a virtual class indicating undiscriminating frames which are of limited help for recognition,
such as frames with common poses shared by different actions, frames with clutter or noise, and other redundant frames.
Then the first-phase dictionary is learnt on the selected discriminative frames;
the reconstruction errors of all frames corresponding to each dictionary atom are collected to build the new frame representations.
(2) Using the new frame representations, we learn the class-specific dictionary, in which the sub-dictionary of each action class is learnt on the corresponding selected discriminative frames and the zeroth class sub-dictionary is learnt on the undiscriminating frames.
Then we obtain the preliminary labels of the test frames based on the learnt class-specific dictionary.
The zeroth class is used for recognizing the redundant frames of the test video, which are filtered out afterwards.
(3) After the undiscriminating frames of the test video are excluded, the final action label is voted by all remaining discriminative frames of the test video.
Experimental results on benchmark data show that our method outperforms most state-of-the-art approaches.
The rest of the paper is organized as follows:
Sect.2 reviews related works.
Sect.3 presents the zeroth class dictionary learning framework
for human action recognition.
Experimental setup and results on
benchmark datasets are presented in Sect.4, and conclusions
are given in Sect.5.
\section{Related works}
Many efforts \cite{Guha,ZhangICIP2015,Wang20123902} have been devoted to studying action recognition by sparse representation and dictionary learning.
Guha and Ward \cite{Guha} provide a sparse representation for human action recognition by learning the over-complete bases on the local motion patterns.
Zhang et al. \cite{ZhangICIP2015} learn dictionary from spatiotemporal salient patches and use the sparse reconstruction coefficients of patches to represent image sequences of action videos.
Wang et al. \cite{Wang20123902} propose a sparse model incorporating the similarity constrained term and the dictionary incoherence term for human action recognition.
Our work is also similar to the silhouette based action recognition approaches \cite{Chaaraoui2013,Cheema,frftacpr2015,ChengTIP2015}.
Chaaraoui et al. \cite{Chaaraoui2013} develop a human action recognition method through extracting multi-view key poses sequences and handling variations in shape by dynamic time warping.
Cheema et al. \cite{Cheema} propose a human action recognition method by extracting a scale invariant contour-based pose feature and clustering the features to construct distinctive key poses.
Cai and Feng \cite{frftacpr2015} present a human action recognition method by describing contour-based shape feature using fractional Fourier transform.
Cheng et al. \cite{ChengTIP2015} propose a human action recognition approach based on human silhouettes by supervised temporal t-stochastic neighbor embedding and incremental learning via low-dimensional embedding.
\section{Zeroth class dictionary learning based action recognition framework}
\subsection{Feature extraction}
We use the fractional Fourier shape descriptor \cite{frftacpr2015} to represent each frame of action videos.
The fractional Fourier shape descriptor is built on the human pose represented by contour points of the binary silhouette.
Given an image extracted from the action video, its binary silhouette is obtained from the segmented foreground region.
Then the boundary of silhouette is extracted
and the position of all points $\{(x(i),y(i))\}_{i=1}^{N}$ along the boundary is represented as a complex sequence $\{s(i)|s(i)=x(i)+jy(i)\}_{i=1}^{N}$, where $x(i)$ and $y(i)$ denote the horizontal and vertical coordinate of the $i$th point respectively. Here $N$ is the total number of contour points, and $j$ denotes the imaginary unit.
Then we shift the base point of coordinate system to the center of mass $(x_{c},y_{c})$ of contour points along the boundary.
\begin{equation} \label{eq5}
\begin{aligned}
\tilde{x}(i)=x(i)-x_{c} \ \ \ \ \ \ \tilde{y}(i)=y(i)-y_{c}
\end{aligned}
\end{equation}
After that, the length of sequence is normalized to a predetermined value $L$ through down-sampling the contour.
In our experiments, the normalized length $L$ is set as 100.
\begin{equation} \label{eq5}
\begin{aligned}
\hat{x}(i)=\tilde{x}(\lceil i*\frac{N}{L} \rceil) \ \ \ \ \ \ \hat{y}(i)=\tilde{y}(\lceil i*\frac{N}{L} \rceil)
\end{aligned}
\end{equation}
Afterwards, we compute the discrete fractional Fourier transform of the transformed contours $\{\hat{s}(i):\hat{x}(i)+j\hat{y}(i)\}_{i=1}^{L}$,
and get the response $\{S(i)\}_{i=1}^{L}$ in the fractional Fourier domain.
For a continuous signal $\hat{s}(t)$, its $p$ order continuous fractional Fourier transform is defined as:
\begin{equation} \label{eq2}
\begin{aligned}
S_{p}(u)=
\begin{cases}
B_{\alpha}\int_{-\infty}^{\infty}\exp\left(j\frac{t^{2}+u^{2}}{2}\cot\alpha-\frac{jtu}{\sin\alpha}\right)\hat{s}(t)dt \\
\quad \quad \quad \quad \quad \quad \quad \quad \quad \alpha \neq n\pi \\
\hat{s}(t) \quad \quad \quad \quad \quad \quad \quad \alpha =2n\pi \\
\hat{s}(-t) \quad \quad \quad \quad \quad \alpha =(2n\pm1)\pi
\end{cases}
\end{aligned}
\end{equation}
where $\alpha=p\pi/2$ is the rotation angle and $B_{\alpha}=\sqrt{\frac{1-j\cot\alpha}{2\pi}}$.
Here $n$ denotes an integer.
The order $p$ is set as $0.9$ in our experiments.
For digital computation, we use a sampling-type discrete fractional
Fourier transform proposed in \cite{Ozaktas} to calculate the response $\{S(i)\}_{i=1}^{L}$.
Then the amplitude of fractional response, $|S(i)|$, is calculated and normalized to obtain a scale invariant descriptor.
\begin{equation} \label{eq5}
\begin{aligned}
d(i)=\frac{|S(i)|^{2}}{\sum_{l=1}^{L}|S(l)|^{2}}
\end{aligned}
\end{equation}
$\{d(i)\}_{i=1}^{L}$ constitutes the fractional Fourier shape descriptor of the human pose contour.
For each training video, we assign its action class label to all of its frames.
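A compact sketch of the descriptor computation is given below; it is illustrative only and assumes a routine \texttt{dfrft} implementing the sampling-type discrete fractional Fourier transform of \cite{Ozaktas} with order $p=0.9$ (not reproduced here).
\begin{verbatim}
import numpy as np

def shape_descriptor(contour, dfrft, L=100):
    # contour: (N,2) array of boundary points (x_i, y_i) of the silhouette
    # dfrft:   order-p discrete fractional Fourier transform (assumed given)
    s = contour[:, 0] + 1j*contour[:, 1]       # complex boundary sequence
    s = s - s.mean()                           # move origin to the centroid
    N = len(s)
    idx = np.ceil(np.arange(1, L + 1)*N/L).astype(int) - 1
    s = s[idx]                                 # down-sample to length L
    S = dfrft(s)                               # fractional response
    d = np.abs(S)**2
    return d/d.sum()                           # scale-invariant descriptor
\end{verbatim}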
\subsection{First-phase dictionary learning}
Firstly, the Gentle AdaBoost algorithm is employed to select discriminative
training frames.
Gentle AdaBoost provides an approach for reweighting data points by updating weights of base classifiers and puts higher weights on undiscriminating data points than discriminative points \cite{Friedman98additivelogistic}.
Regression stump is used as the base classifier. The regression stump is a simple additive logistic regression based classifier, which classifies data points according to only one input dimension. For an input sample $x$ whose $k$th dimensional feature is denoted as $x(k)$, the output $f(x)$ of the regression stump is defined by only four parameters $(w,v,k,th)$, and represented as follows.
\begin{equation} \label{eq5}
\begin{aligned}
f(x) = w \cdot \mathrm{sign}(x(k)-th)+v
\end{aligned}
\end{equation}
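For concreteness, the regression stump and its weighted least-squares fit (one Gentle AdaBoost round for a binary $\pm1$ problem) can be sketched as follows; this is an illustrative implementation, not the exact one used in the experiments.
\begin{verbatim}
import numpy as np

def stump_predict(X, w, v, k, th):
    # f(x) = w*sign(x(k) - th) + v
    return w*np.sign(X[:, k] - th) + v

def stump_fit(X, y, sw):
    # weighted least-squares fit over all features k and thresholds th;
    # y in {-1,+1}, sw are the (positive) Gentle AdaBoost sample weights
    best, best_err = None, np.inf
    for k in range(X.shape[1]):
        for th in np.unique(X[:, k])[:-1]:
            m = X[:, k] > th
            mu_r = np.average(y[m], weights=sw[m])     # fitted value, right
            mu_l = np.average(y[~m], weights=sw[~m])   # fitted value, left
            err = np.sum(sw*(y - np.where(m, mu_r, mu_l))**2)
            if err < best_err:
                best, best_err = ((mu_r - mu_l)/2, (mu_r + mu_l)/2, k, th), err
    return best                                        # (w, v, k, th)
\end{verbatim}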
The "one-against-the-rest" technique is employed to extend the primary binary
classification problem to multi-class case.
A training frame with high weight imply that it contains common and undiscriminating patterns between different action categories.
We select the frames with the lowest weights from the training frame set of each action class at a rate of $R$ to build the discriminative subset for generating the first-phase dictionary, and the remained frames are pushed into a pool where they are relabeled as the zeroth class and will be used for detecting
undiscriminating frames of the test video later.
After the discriminative subset of the training frames is selected out, we generate a dictionary $D$ on this set.
The aim is to learn a dictionary $D$ so that the selected discriminative frames have a sparse representation $B$ over the dictionary.
It can be written as the following optimization problem \cite{AharonKSVD}:
\begin{equation} \label{eq1}
\begin{aligned}
min_{D,B}\|Y-DB\|_{F}^{2} \ \ \ \ \ \ \ \ \ s.t.\ \ \|b_{i}\|_{0}\leq C ,\ \forall i
\end{aligned}
\end{equation}
where $Y$ is the selected discriminative subset of training frames represented by the fractional Fourier descriptor;
$D$ is the dictionary learned on the discriminative subset; $b_{i}$ is the $i$th column of the sparse coefficient matrix $B$, denoting the representation coefficients of the $i$th frame.
$C$ is the parameter controlling the sparsity of coefficients.
$\|\cdot\|_{F}$ denotes the Frobenius norm, and $\|\cdot\|_{0}$ is the $l_{0}$ norm enforcing the coefficients to be sparse.
Then for a frame $y$, the reconstruction error corresponding to the $i$th atom of the dictionary $D$ is computed as \cite{Pati93orthogonalmatching}:
\begin{equation} \label{eq1}
\begin{aligned}
e_{i}(y)=\|y-D\delta_{i}(\hat{\beta})\|^{2} \ \ \ \ \ \ \ \ \ \ \ \ \ \\
\hat{\beta}=argmin_{\beta}\|y-D\beta\|^{2} \ \ \ \ \ s.t. \ \ \ \|\beta\|_{0}\leq C
\end{aligned}
\end{equation}
where the function $\delta_{i}(\beta)$ sets the $j$th dimension of $\beta$ to 0 for all $j\neq i$.
Suppose $m$ is the number of atoms of the dictionary $D$;
then the vector $[e_{1}(y),...,e_{m}(y)]^{T}$ constitutes a new feature of the frame $y$, which will be used as the frame representation in the next dictionary learning phase.
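The construction of this error-based representation can be sketched as follows, using orthogonal matching pursuit \cite{Pati93orthogonalmatching} for the sparse code (illustrative only; the first-phase dictionary $D$ itself can be trained with K-SVD \cite{AharonKSVD} on the selected discriminative frames):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import orthogonal_mp

def error_feature(D, y, C=15):
    # e_i(y) = ||y - D*delta_i(beta)||^2 for each atom i (eq. (7))
    beta = orthogonal_mp(D, y, n_nonzero_coefs=C)   # sparse code of y over D
    return np.array([np.sum((y - beta[i]*D[:, i])**2)
                     for i in range(D.shape[1])])
\end{verbatim}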
\subsection{Second-phase dictionary learning}
After the new features of all frames in both training and test videos are computed, the class-specific dictionary learning \cite{Wang20123902} is performed.
Suppose $K$ is the number of action categories.
Using the training frames belonging to the $k$th ($k=0,1,...,K$) class (including the zeroth class), we learn the class-specific dictionary $D_{k}$ using the new features represented by reconstruction errors on the first-phase dictionary.
The sub-dictionary $D_{k}$ associated with the $k$th ($k=1,...,K$) nonzero action class is learnt on the corresponding selected discriminative frames of the $k$th class, and the zeroth class dictionary $D_{0}$ is learnt on the undiscriminating frames.
Then the whole dictionary $\bar{D}$ is constructed by concatenating all the class-specific dictionaries, that is to say, $\bar{D}=[D_{0}|D_{1}| D_{2} |...| D_{K}]$.
After the whole dictionary $\bar{D}$ is learned, the sparse representation $a_{i}$ of a frame $\check{x}_{i}$ of the test video can be estimated as follows.
\begin{equation} \label{eq5}
\begin{aligned}
a_{i}=argmin_{a} \|\check{x}_{i}-\bar{D}a\|^{2} \ \ \ \ \ \ \ \ \ s.t. \ \ \|a\|_{0} \leq C
\end{aligned}
\end{equation}
The reconstruction error $r_{k}(\check{x}_{i})$ associated with the $k$th class can be defined as:
\begin{equation} \label{eq5}
\begin{aligned}
r_{k}(\check{x}_{i})=\|\check{x}_{i}-\bar{D}\Theta_{k}(a_{i})\|^{2} \ \ \ \ \ \ k=0,1,...,K
\end{aligned}
\end{equation}
where $\Theta_{k}(a_{i})$ produces a vector whose nonzero entries are the coefficients of $a_{i}$ associated with the $k$th class.
Then each frame of the test video is assigned to the class that corresponds to
the minimum reconstruction error among all classes (including the zeroth class).
The estimated preliminary class $\check{k}_i$ of the test frame $\check{x}_{i}$ is given as:
\begin{equation} \label{eq5}
\begin{aligned}
\check{k_i}=argmin_{k\in \{0,1,...,K\}} r_{k}(\check{x}_{i})
\end{aligned}
\end{equation}
Afterwards, we filter out the undiscriminating frames of the test video, i.e., those labeled as the zeroth class.
Then a max pooling or sum pooling criterion is used to vote for the action label using the remaining discriminative frames of the test video, which correspond to nonzero classes.
For the max pooling policy, the test video is assigned to the nonzero class that attains
the minimum reconstruction error over all remaining frames.
Then the estimated action class $\hat{k}$ of the test video is given as:
\begin{equation} \label{eq5}
\begin{aligned}
\hat{k}=argmin_{k\in \{1,...,K\}}min_{\hat{i} \in \{i| \check{k_i} \neq 0 \}} r_{k}(\check{x}_{\hat{i}}) \\
\end{aligned}
\end{equation}
For the sum pooling policy, an overall residual is constructed for each nonzero class by summing up the corresponding reconstruction errors of all remaining frames of the test video;
then the test video is assigned to the nonzero class with the minimum overall residual.
The estimated action class $\hat{k}$ of the test video is given as:
\begin{equation} \label{eq5}
\begin{aligned}
\hat{k}=argmin_{k\in \{1,...,K\}}\sum_{\hat{i} \in \{i| \check{k_i} \neq 0 \}} r_{k}(\check{x}_{\hat{i}}) \\
\end{aligned}
\end{equation}
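The whole test-time procedure then amounts to the following sketch, where the matrix \texttt{R} holds the per-frame residuals $r_{k}(\check{x}_{i})$ with column 0 the zeroth class; it is illustrative only and assumes that at least one frame of the test video survives the zeroth class filtering.
\begin{verbatim}
import numpy as np

def classify_video(R, pooling="sum"):
    # R: (n_frames, K+1) residuals r_k(x_i); column 0 is the zeroth class
    frame_labels = np.argmin(R, axis=1)        # preliminary labels, eq. (10)
    Rd = R[frame_labels != 0][:, 1:]           # drop zeroth-class frames
    if pooling == "max":
        k = np.argmin(Rd.min(axis=0))          # eq. (11)
    else:
        k = np.argmin(Rd.sum(axis=0))          # eq. (12)
    return k + 1                               # action classes are 1..K
\end{verbatim}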
\section{Experimental results}
In order to evaluate the performance and practicability of the proposed approach,
two human action recognition datasets, the Weizmann dataset \cite{ActionsAsSpaceTimeShapes_pami07} and the MuHAVi-MAS14 dataset \cite{Singh}, are used
as benchmarks.
For each class, we select the frames with the lowest Gentle AdaBoost weights as the discriminative subset at a rate of $R$.
The leave-one-out cross validation strategy is employed to separate the training video set and test video set.
All parameters are tuned by grid searching.
The best recognition rates on the Weizmann and MuHAVi-MAS14 datasets are 97.85\% and 95.59\%, respectively, achieved when $R$ is set to $0.2$ and $C$ is set to $15$.
We also compare the accuracy of our method to the reported accuracy of other state-of-the-art methods.
The comparison results are presented in Table~\ref{tab:recresult}.
Although the action features employed in most listed methods are different from ours, our method still shows considerable performance and outperforms most listed methods on the benchmarks.
\begin{table}[htbp]
\centering
\caption{Comparison of methods on benchmarks}
\begin{tabular}{p{4.5cm} p{1.5cm} p{1.5cm}}
\toprule
Method & Weizmann & MuHAVi-MAS14\\
\midrule
Our method(sum pooling) & 97.85\% & 95.59\%\\
Our method(max pooling) & 95.70\% & 95.59\%\\
Chaaraoui et al. \cite{Chaaraoui2013} & 92.77\% & 91.18\% \\
Cheema et al. \cite{Cheema} & 91.6\% & 86.03\%\\
Singh et al. \cite{Singh} & & 82.35\%\\
Wang et al. \cite{Wang20123902} & 96.7\% & \\
Cheng et al. \cite{ChengTIP2015} & 94.44\% & \\
Cai and Feng \cite{frftacpr2015} & 93.55\% & \\
\bottomrule
\end{tabular}
\label{tab:recresult}
\end{table}
The relation between recognition accuracy and the size of the zeroth class set has also been analyzed.
Figure 1 and Figure 2 show the relation between accuracy and the rate $R$ on the Weizmann and MuHAVi-MAS14 datasets, respectively.
The experimental results demonstrate that our framework outperforms the ordinary two-phase dictionary learning framework without the discriminative frame detection and filtering stage most of the time.
However, if too many undiscriminating frames are selected into the zeroth class training set for the first-phase dictionary learning, the performance of the framework declines.
The experimental results demonstrate that introducing the zeroth class is effective if a proper proportion of the zeroth class in the training set is chosen.
We have also analyzed the relation between the accuracy of our method and the parameter $C$.
The experimental results on the Weizmann and MuHAVi-MAS14 datasets are illustrated in Figure 3 and Figure 4, respectively.
The results demonstrate that it is easy to find a proper parameter $C$ for achieving good classification performance through the zeroth class dictionary learning framework.
\section{Conclusion}
This paper presents an action recognition method using a zeroth class dictionary.
The zeroth class dictionary provides a way to detect and discard undiscriminating
frames of the test video so as to improve the classification accuracy.
The recognition framework is validated on benchmarks, showing considerable performance.
\begin{figure}[htbp]
\centering
\includegraphics[height=3.8cm]{paraRweiz.eps}
\caption{Relation between accuracy and the rate of selected discriminative frames on the Weizmann dataset}
\label{Fig1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3.8cm]{paraRmas.eps}
\caption{Relation between accuracy and the rate of selected discriminative frames on the MuHAVi-MAS14 dataset}
\label{Fig2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3.8cm]{paraCweiz.eps}
\caption{Relation between accuracy and parameter $C$ on the Weizmann dataset}
\label{Fig3}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=3.8cm]{paraCmas14.eps}
\caption{Relation between accuracy and parameter $C$ on the MuHAVi-MAS14 dataset}
\label{Fig4}
\end{figure}
\bibliographystyle{IEEEbib}
\section{Introduction}
It is well known that the BCS theory of superconductivity \cite{bcs} has proved to be an important theoretical discovery which describes the properties of low temperature superconductors. However, there has been no successful theory yet to understand the superconductivity in a class of materials that exhibit the phenomenon at high temperatures. Examples of such materials are the high $T_c$ cuprates. It is reasonably evident that these are strongly coupled materials and hence lies the difficulty in developing a theory for them.
Recently, there has been an upsurge in the study of holographic superconductors. The importance of these models was realized as they reproduced some properties of high $T_c$ superconductors. The theoretical input that goes into the construction of these holographic superconductor models is the gauge/gravity correspondence, first discovered in the context of string theory \cite{adscft1}-\cite{polyakov}. The physical mechanism behind this understanding involves the demonstration of spontaneous symmetry breaking in an Abelian Higgs model in a black hole background that is asymptotically AdS \cite{gub}-\cite{prl}. Thereafter, several important properties of these gravity models have been studied analytically
\cite{rob}-\cite{dghorai}.
An important aspect of superconductors is their response to an external magnetic field known as the Meissner effect \cite{tink}. The response shows perfect diamagnetism as the temperature is lowered below a critical temperature $T_c$.
A number of investigations have been carried out, both numerically \cite{maeda}-\cite{john} and analytically \cite{royc}-\cite{sg}. However, these studies were restricted mainly to the framework of Maxwell electrodynamics. In \cite{dib}, the conventional action for Maxwell electrodynamics was replaced by the power Maxwell action \cite{her}. The motivation for such a study comes from the question of investigating the behaviour of the condensate in the presence of the higher order corrections in the gauge field coming from the power Maxwell theory.
Noncommutativity of spacetime is another important area of theoretical physics where considerable research has been carried out. The idea, which first appeared in 1947 \cite{snyder}, was brought to the forefront by studies in string theory \cite{seiberg}. Recently, black hole backgrounds have been constructed incorporating the ideas of noncommutativity \cite{nic1}-\cite{rbsg}. The noncommutative effect is introduced here through a smeared source of matter, which is then used to solve Einstein's equation of general relativity.
In this paper, we want to investigate the role of noncommutative spacetime on the properties of holographic superconductors. Such an investigation has been carried out earlier in \cite{ghorai1} in the framework of Born-Infeld electrodynamics. Here, we shall carry out such a study in power Maxwell electrodynamics using the matching method approach \cite{soda} in the probe limit approximation, which essentially means that the backreaction of the matter fields on the spacetime is neglected. Moreover, we shall also investigate the Meissner effect in this framework and study the role played by spacetime noncommutativity.
The paper is organized as follows. In section 2, the basic formalism for the $d$-dimensional holographic superconductor in noncommutative spacetime coupled to power Maxwell electrodynamics is presented. In section 3, we obtain the relationship between the critical temperature and the charge density and the value of the condensation operator using the matching method approach. In section 4, we investigate the Meissner like effect using the same approach.
We finally conclude in section 5.
\section{Basic analytical set up}
The analysis proceeds by setting up a gravitational dual for higher dimensional superconductors.
We consider the metric of a $d$-dimensional planar noncommutative Schwarzschild-AdS spacetime as the gravity dual of the holographic superconductor. The metric of such a black hole spacetime reads
\begin{equation}
ds^{2}= -f\left ( r \right )dt^{2}+\frac{1}{f\left ( r \right )}dr^{2}+r^{2}dx_{i}dx^{i}
\label{s1}
\end{equation}
where $f(r)$ is given by \cite{nic1}
\begin{equation}
f(r)=\frac{r^{2}}{L^{2}}-\frac{2MG_{d}}{r^{d-3}\Gamma(\frac{d-1}{2})}\gamma\left(\frac{d-1}{2},\frac{r^{2}}{4\theta}\right)+ k
\label{eq02}
\end{equation}
\noindent and $\gamma(s,x)$ is the lower incomplete gamma function given by
\begin{equation}
\gamma(s,x)=\int_{0}^{x}t^{s-1}e^{-t} dt
\label{eq03}
\end{equation}
and $k$ denotes the curvature. $dx^{i}dx_{i}$ represents the line element of a $(d-2)$-dimensional hypersurface with vanishing curvature. In this analysis, we shall set $k=0$ since we require a planar holographic superconductor. The metric therefore takes the form
\begin{equation}
f(r)=\frac{r^{2}}{L^{2}}- \frac{2MG_{d}}{r^{d-3}\Gamma \left ( \frac{d-1}{2} \right )}\gamma\left(\frac{d-1}{2},\frac{r^{2}}{4\theta}\right)
\label{eq04}~.
\end{equation}
The horizon radius can be obtained by setting $[f(r)]_{r=r_{+}}=0$:
\begin{equation}
r_{+}^{d-1}=
\frac{2MG_{d}}{\Gamma(\frac{d-1}{2})}\gamma\left(\frac{d-1}{2},\frac{r_{+}^{2}}{4\theta}\right)
\label{eq05}
\end{equation}
\noindent where we have set $L=1$ for convenience. Using eq.(\ref{eq05}),
eq.(\ref{eq04}) can be recast as
\begin{equation}
f(r) = r^{2}\left(1- \frac{r_{+}^{d-1}}{r^{d-1}}\frac{\gamma(\frac{d-1}{2},\frac{r^{2}}{4\theta})}{\gamma(\frac{d-1}{2},\frac{r_{+}^{2}}{4\theta})}\right).
\label{eq06}
\end{equation}
\noindent The Hawking temperature $T$ of the black hole is given by
\begin{equation}
T= \frac{f^{'}(r_{+})}{4\pi}~.
\label{eq07}
\end{equation}
By the gauge/gravity duality, this temperature is interpreted as the temperature of the boundary field theory. Using eq.(s) (\ref{eq04}) and (\ref{eq05}), we get
\begin{eqnarray}
T & = & \frac{1}{4\pi}\left[(d-1)r_{+}-r_{+}^{2}\frac{\gamma^{'}(\frac{d-1}{2},\frac{r_{+}^{2}}{4\theta})}{\gamma(\frac{d-1}{2},\frac{r_{+}^{2}}{4\theta})}\right]\nonumber\\
&=&\frac{r_{+}}{4\pi}\left[d-1-\frac{4MG_{d}}{\Gamma(\frac{d-1}{2})}\frac{e^{-\frac{r_{+}^{2}}{4\theta}}}{(4\theta)^{\frac{d-1}{2}}}\right]~.
\label{eq08}
\end{eqnarray}
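\noindent As a simple numerical illustration (a minimal sketch of our own, not part of the analysis that follows, and assuming the availability of \texttt{scipy}), the horizon radius can be obtained from eq.(\ref{eq05}) by fixed-point iteration, after which the Hawking temperature follows from eq.(\ref{eq08}); note that \texttt{scipy}'s \texttt{gammainc} is the regularized lower incomplete gamma function and must be multiplied by $\Gamma(s)$ to recover $\gamma(s,x)$ of eq.(\ref{eq03}):
\begin{verbatim}
# Sketch: horizon radius from eq.(5) and Hawking temperature from
# eq.(8); the parameter values below are illustrative only.
import numpy as np
from scipy.special import gammainc, gamma

def lower_gamma(s, x):
    # scipy's gammainc is regularized: gammainc(s, x) = gamma(s, x)/Gamma(s)
    return gammainc(s, x) * gamma(s)

def horizon_radius(MGd, theta, d, r0=1.0, n_iter=200):
    a = (d - 1) / 2.0
    r = r0
    for _ in range(n_iter):  # fixed-point iteration of eq.(5)
        r = (2.0 * MGd * lower_gamma(a, r**2 / (4.0 * theta))
             / gamma(a)) ** (1.0 / (d - 1))
    return r

def hawking_T(MGd, theta, d):
    a = (d - 1) / 2.0
    rp = horizon_radius(MGd, theta, d)
    c = rp**2 / (4.0 * theta)
    return (rp / (4.0 * np.pi)) * (d - 1.0
            - 4.0 * MGd * np.exp(-c) / (gamma(a) * (4.0 * theta)**a))

print(hawking_T(MGd=100.0, theta=0.3, d=5))
\end{verbatim}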
\noindent In the limit $\theta\rightarrow0$, the metric function $f(r)$ of the noncommutative black hole reduces to
\begin{equation}
f(r)= r^{2}\left(1-\frac{r_{+}^{d-1}}{r^{d-1}}\right)
\label{eq09}
\end{equation}
where $r_{+}^{d-1}= 2MG_{d}$. This is the planar Schwarzschild-AdS black hole in $d$ spacetime dimensions.
\noindent The Hawking temperature for this black hole reads
\begin{equation}
T=\frac{(d-1)r_{+}}{4\pi}~.
\label{eq10}
\end{equation}
\noindent We now write down an appropriate action for the bulk which can explain the phase transition at the boundary. The action involves a gravity theory with a negative cosmological constant together with a complex scalar field $\psi$ minimally coupled to the power Maxwell field
\begin{eqnarray}
S=\int d^{d}x \sqrt{-g} \left[R-2\Lambda -\beta(F_{\mu\nu}F^{\mu\nu})^{q}-|\nabla_{\mu}\psi-iA_{\mu}\psi|^{2}-m^{2}|\psi|^{2}\right]
\label{eq11}
\end{eqnarray}
where $\beta$ is the coupling constant of power Maxwell electrodynamics, $q$ is the power parameter of the power Maxwell field, and $\Lambda=-\frac{(d-1)(d-2)}{2}$ is the cosmological constant.
\noindent In the subsequent discussion, we shall work in the probe limit which means that the effect of back reaction of the matter fields on the metric is neglected.
\noindent The equations of motion for the Maxwell and the scalar field can be obtained by varying the action. These read
\begin{eqnarray}
\frac{4\beta q}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}(F_{\lambda\sigma}F^{\lambda\sigma})^{q-1} F^{\mu\nu})-i(\psi^{*}\partial^{\nu}\psi-\psi(\partial^{\nu}\psi)^{*})-2A^{\nu}|\psi|^{2}=0
\label{eq12}
\end{eqnarray}
\begin{eqnarray}
\partial_{\mu}(\sqrt{-g}\partial^{\mu}\psi)-i\sqrt{-g}A^{\mu}\partial_{\mu}\psi-i\partial_{\mu}(\sqrt{-g}A^{\mu}\psi)-\sqrt{-g}A_{\mu}A^{\mu}\psi-\sqrt{-g}m^{2}\psi=0~.
\label{eq13}
\end{eqnarray}
To make further progress, we make the following ansatz
\begin{equation}
A=\phi(r)dt \quad ,\quad \psi=\psi(r)~.
\label{eq14}
\end{equation}
\noindent With this ansatz, eq.(s)(\ref{eq12}) and (\ref{eq13}) now take the form
\begin{equation}
\partial_{r}^{2}\phi+\frac{1}{r}\left(\frac{d-2}{2q-1}\right)\partial_{r}\phi-\frac{2\phi \psi^{2}(\partial_{r}\phi)^{2(1-q)}}{(-1)^{q-1}2^{q+1}\beta q(2q-1)f(r)}=0
\label{eq15}
\end{equation}
\begin{equation}
\partial_{r}^{2}\psi +\left(\frac{f^{'}}{f}+ \frac{d-2}{r}\right)\partial_{r}\psi + \frac{\phi^{2}\psi}{f^{2}}-\frac{m^{2}\psi}{f}=0~.
\label{eq16}
\end{equation}
\noindent Making a change of variable $z=\frac{r_{+}}{r}$, the above equations take the form
\begin{eqnarray}
\partial_{z}^{2}\phi+\frac{1}{z}\left(2-\frac{d-2}{2q-1}\right)\partial_{z}\phi + \frac{2\phi(z)\psi^{2}(z)r_{+}^{2q}(\partial_{z}\phi)^{2(1-q)}}{z^{4q}2^{q+1}(-1)^{3q}\beta q (2q-1)f(z)}=0
\label{eq17}
\end{eqnarray}
\begin{equation}
\partial_{z}^{2}\psi +\left(\frac{f^{'}(z)}{f(z)}- \frac{d-4}{z}\right)\partial_{z}\psi + \frac{\phi^{2}\psi r_{+}^{2}}{z^{4}f^{2}(z)}-\frac{m^{2}\psi r_{+}^{2}}{z^{4}f(z)}=0~.
\label{eq18}
\end{equation}
\noindent In the limit $ z\rightarrow0 $, the asymptotic behaviour of the fields $\phi$ and $\psi$ can be expressed as
\begin{equation}
\phi(z)=\mu-\frac{\rho^{\frac{1}{2q-1}}}{r_{+}^{\frac{d-2}{2q-1}-1}}z^{\frac{d-2}{2q-1}-1}
\label{eq19}
\end{equation}
\begin{equation}
\psi(z)=\frac{\psi_{-}}{r_{+}^{\lambda_{-}}}z^{\lambda_{-}} +\frac{\psi_{+}}{r_{+}^{\lambda_{+}}}z^{\lambda_{+}}
\label{eq20}
\end{equation}
where
\begin{equation}
\lambda_{\pm}=\frac{1}{2}\left[d-1\pm \sqrt{(d-1)^{2}+4m^{2}}\right]
\label{eq21}~.
\end{equation}
\noindent The gauge/gravity duality interprets $\mu$ and $\rho$ as the chemical potential and the charge density. $\psi_{+}$ and $\psi_{-}$ are the vacuum expectation values of the dual operator $O$. For the rest of our analysis, we shall choose $\psi_{+}=\left \langle O_{+} \right \rangle$ and $\psi_{-}=0$.
\section{Critical temperature, charge density relation and condensation operator}
We begin by substituting $z=\frac{r_{+}}{r}$ in eq.(\ref{eq06}). This yields
\begin{equation}
f(z)= \frac{r_{+}^{2}}{z^{2}}g_{0}(z)
\label{eq22}
\end{equation}
\noindent where
\begin{equation}
g_{0}(z)=\left(1-z^{d-1}\frac{\gamma(\frac{d-1}{2},\frac{r_{+}^{2}}{4\theta z^{2}})}{\gamma(\frac{d-1}{2},\frac{r_{+}^{2}}{4\theta})}\right).
\label{eq23}
\end{equation}
We now apply the matching method to proceed further. Near the horizon, that is $z=1$, we make Taylor series expansions of the fields $\phi(z)$ and $\psi(z)$. These read
\begin{eqnarray}
\phi(z) & = & \phi(1)-\phi^{'}(1)(1-z)+\frac{1}{2}\phi^{''}(1)(1-z)^{2}+ O((1-z)^{3})\nonumber
\\& = & -\phi^{'}(1)(1-z)+\frac{1}{2}\phi^{''}(1)(1-z)^{2}+ O((1-z)^{3})
\label{eq24}
\end{eqnarray}
\begin{eqnarray}
\psi(z)=\psi(1)-\psi^{'}(1)(1-z)+\frac{1}{2}\psi^{''}(1)(1-z)^{2}+O((1-z)^{3})
\label{eq25}
\end{eqnarray}
since $\phi(1)=0$, which is required for $A_{\mu}A^{\mu}$ to remain regular at the horizon.
\noindent To evaluate the expressions for $\phi^{''}(1)$ and
$\psi^{'}(1)$, $\psi^{''}(1)$, we look at eq.(s)
(\ref{eq17}) and (\ref{eq18}) for $z=1$. This yields
\begin{eqnarray}
\phi^{''}(1) & = & \left(\frac{d-2}{2q-1}-2\right)\phi^{'}(1)-\frac{2r_{+}^{2(q-1)}\psi^{2}(1)(\phi^{'}(1))^{3-2q}}{2^{q+1}(-1)^{3q}(\beta q)(2q-1)g_{0}^{'}(1)} \label{eq26}
\\\psi^{'}(1) & = &\frac{m^{2}}{g_{0}^{'}(1)}\psi(1)
\label{eq27}
\\\psi^{''}(1) & = & \frac{1}{2}\left[d-4-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}+\frac{m^{2}}{g_{0}^{'}(1)}\right]\frac{m^{2}}{g_{0}^{'}(1)}\psi(1)-\frac{\phi^{'2}(1)\psi(1)}{2r_{+}^{2}g_{0}^{'2}(1)}~.
\label{eq28}
\end{eqnarray}
Substituting these expressions in eq.(s) (\ref{eq24}) and (\ref{eq25}), we get
\begin{multline}
\phi(z)
=-\phi^{'}(1)(1-z)\\+\frac{1}{2}(1-z)^{2}\left[\left(\frac{d-2}{2q-1}-2\right)\phi^{'}(1)-\frac{2r_{+}^{2(q-1)}\psi^{2}(1)(\phi^{'}(1))^{3-2q}}{(-1)^{3q}2^{q+1}(\beta q)(2q-1)g_{0}^{'}(1)}\right]
\label{eq29}
\end{multline}
\begin{multline}
\psi(z)
=\psi(1)-\frac{m^{2}}{g_{0}^{'}(1)}\psi(1)(1-z)\\
+\frac{1}{2}(1-z)^{2}\left[\frac{1}{2}\left(d-4-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}+\frac{m^{2}}{g_{0}^{'}(1)}\right)\frac{m^{2}}{g_{0}^{'}(1)}\psi(1)-\frac{\phi^{'2}(1)\psi(1)}{2r_{+}^{2}g_{0}^{'2}(1)} \right]~.
\label{eq30}
\end{multline}
\noindent We now proceed to implement the matching method. We match the above solutions with the asymptotic solutions (\ref{eq19}) and (\ref{eq20}) at $z=z_{m}$. This gives the following relations
\begin{multline}
\mu -\frac{\rho^{\frac{1}{2q-1}}z_{m}^{\frac{d-2}{2q-1}-1}}{(r_{+})^{\frac{d-2}{2q-1}-1}}
=\\v(1-z_{m})+\frac{1}{2}(1-z_{m})^{2}\left[\left(2-\frac{d-2}{2q-1}\right)v-\frac{2r_{+}^{2(q-1)}\alpha^{2}(-v)^{3-2q}}{(-1)^{3q}2^{q+1}(\beta q)(2q-1)(g_{0}^{'})}\right]
\label{eq31}
\end{multline}
\begin{multline}
\frac{\left \langle O_{+} \right \rangle z_{m}^{\lambda_{+}}}{r_{+}^{\lambda_{+}}}=\\\alpha - (1-z_{m})\alpha\left(\frac{m^{2}}{g_{0}^{'}(1)} \right )+ \frac{1}{2}\alpha(1-z_{m})^{2}
\left[\frac{1}{2}\left(\frac{m^{2}}{g_{0}^{'}(1)}\right)
\left(d-4-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}+\frac{m^{2}}{g_{0}^{'}(1)}\right)-\frac{\tilde{v}^{2}}{2g_{0}^{'}(1)^{2}}\right]~.
\label{eq32}
\end{multline}
where $v=-\phi^{'}(1)$, $\alpha=\psi(1)$ and $\tilde{v}=\frac{v}{r_{+}}$.
\noindent Matching the derivatives of the two sets of solutions at $z=z_{m}$ in the same way yields
\begin{multline}
-\frac{\rho^{\frac{1}{2q-1}}z_{m}^{\frac{d-2}{2q-1}-2}}{(r_{+})^{\frac{d-2}{2q-1}-1}}\left(\frac{d-2}{2q-1}-1\right)
=\\-v-(1-z_{m})\left[\left(2-\frac{d-2}{2q-1}\right)v-\frac{2r_{+}^{2(q-1)}\alpha^{2}(-v)^{3-2q}}{(-1)^{3q}2^{q+1}(\beta q)(2q-1)g_{0}^{'}(1)}\right]
\label{eq33}
\end{multline}
\begin{multline}
\lambda_{+}\frac{\left \langle O_{+} \right \rangle z_{m}^{\lambda_{+}-1}}{r_{+}^{\lambda_{+}}}=\\\alpha\left(\frac{m^{2}}{g_{0}^{'}(1)}\right)
-\alpha(1-z_{m})\left[\frac{1}{2}\left(\frac{m^{2}}{g_{0}^{'}(1)}\right)
\left(d-4-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}+\frac{m^{2}}{g_{0}^{'}(1)}\right)-\frac{\tilde{v}^{2}}{2g_{0}^{'}(1)^{2}}\right].
\label{eq34}
\end{multline}
From the above set of equations together with eq.(\ref{eq08}), it is straightforward to obtain
\begin{multline}
\alpha^{2}\equiv\alpha^{2}_{NC}=- \frac{(-1)^{5q-3}2^{q}(\beta q)(2q-1)g_{0}^{'}(1)}{\tilde{v}_{NC}^{2(1-q)}(1-z_{m})}\\
\times \left[1+\left(2-\frac{d-2}{2q-1}\right)(1-z_{m}) \right]\times\left(\frac{(T_{c})_{NC}}{T}\right)^{\frac{d-2}{2q-1}}\left[1-\left(\frac{T}{(T_{c})_{NC}}\right)^{\frac{d-2}{2q-1}}\right]
\label{eq35}
\end{multline}
where
\begin{eqnarray}
(T_{c})_{NC} &=&\xi_{NC}\rho^{\frac{1}{d-2}}\\
\xi_{NC} &=&-\frac{z_{m}^{(\frac{d-2}{2q-1}-2)(\frac{2q-1}{d-2})}}{\tilde{v}_{NC}^{\frac{2q-1}{d-2}}}\left(\frac{g_{0}^{'}(1)}{4\pi}\right)\frac{(\frac{d-2}{2q-1}-1)^{\frac{2q-1}{d-2}}}{[1+(2-\frac{d-2}{2q-1})(1-z_{m})]^{\frac{2q-1}{d-2}}}~.
\end{eqnarray}
Note that the subscript $NC$ in the above equations stands for the noncommutative case. The above results give the relation between the critical temperature and the charge density. The analytical results show that the critical temperature decreases with increase in the noncommutative parameter $\theta$, which clearly indicates that the condensate gets harder to form as the spacetime noncommutativity increases. However, as the mass of the black hole increases, the critical temperature for a particular value of $\theta$ increases, which tells us that the effects of spacetime noncommutativity become prominent for lower mass black holes.
Further, we can infer from Tables 3 and 4 (comparing the results with Table 2) that the onset of power Maxwell electrodynamics (for a value of $q\neq 1$) makes the condensate harder to form. However, in this case also, the effect of the power Maxwell theory on the formation of the condensate decreases with increase in the mass of the black hole.\\
\noindent In Tables 1, 2, 3 and 4, we present the analytical results for $\xi_{NC}$ for different values of $M$ and $\theta$.
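\noindent The following minimal sketch (our own illustration, restricted to $m^{2}=0$ so that $g_{0}^{''}(1)$ drops out of $\tilde{v}_{NC}$, and reusing the helpers \texttt{horizon\_radius} and \texttt{lower\_gamma} of the earlier sketch) evaluates $g_{0}^{'}(1)$ from eq.(\ref{eq23}) and then $\xi_{NC}$; in the large-mass (effectively commutative) regime it reproduces the value $\xi_{C}\approx 0.1703$ of Table 5:
\begin{verbatim}
# Sketch: xi_NC for q = 1, m^2 = 0, z_m = 0.5, d = 5 (illustrative).
# horizon_radius and lower_gamma are defined in the earlier sketch.
import numpy as np

def xi_NC(MGd, theta, d=5, q=1.0, zm=0.5):
    a = (d - 1) / 2.0
    rp = horizon_radius(MGd, theta, d)
    c = rp**2 / (4.0 * theta)
    # g0'(1); reduces to -(d - 1) when theta -> 0
    g0p = -(d - 1) + 2.0 * c**a * np.exp(-c) / lower_gamma(a, c)
    lam = 0.5 * ((d - 1) + np.sqrt((d - 1)**2))   # lambda_+ at m^2 = 0
    vt = np.sqrt(4.0 * g0p**2 * lam
                 / ((1.0 - zm) * (2.0 * zm + lam * (1.0 - zm))))
    D = (d - 2.0) / (2.0 * q - 1.0)
    inner = zm**(D - 2.0) * (D - 1.0) / (vt * (1.0 + (2.0 - D) * (1.0 - zm)))
    return (-g0p / (4.0 * np.pi)) * inner**(1.0 / D)

print(xi_NC(MGd=1.0e8, theta=0.3))   # large-mass limit: ~0.1703
print(xi_NC(MGd=100.0, theta=0.3))
\end{verbatim}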
\begin{table}[ht]
\caption{Analytical values of $\xi_{NC}$ for different values of $M$ and $\theta$ [$q=1$, $m^{2}=0$, $z_{m}=0.5$ and $d=5$] }
\centering
\begin{tabular}{|c| c| c| c| }
\hline
$\theta$ & \multicolumn{3}{c|}{$\xi_{NC}$} \\
\hline
& $MG_{d}=10$ & $MG_{d}=50$ & $MG_{d}=100$ \\
\hline
0.3 & 0.1507 & 0.16933 & 0.1702 \\
\hline
0.5 & 0.1384 & 0.16058 & 0.1678 \\
\hline
0.7 & 0.1395 & 0.1492 & 0.1608 \\
\hline
0.9 & 0.1439 & 0.1418 & 0.1525 \\
\hline
\end{tabular}
\label{t3}
\end{table}
\begin{table}[ht]
\caption{Analytical values of $\xi_{NC}$ for different values of $M$ and $\theta$ [$q=1$, $m^{2}=-3$, $z_{m}=0.5$ and $d=5$] }
\centering
\begin{tabular}{|c| c| c| c| }
\hline
$\theta$ & \multicolumn{3}{c|}{$\xi_{NC}$} \\
\hline
& $MG_{d}=10$ & $MG_{d}=50$ & $MG_{d}=100$ \\
\hline
0.3 & 0.1761 & 0.2003 & 0.2015 \\
\hline
0.5 & 0.1649 & 0.18798 & 0.1977 \\
\hline
0.7 & 0.1701 & 0.1744 & 0.1883 \\
\hline
0.9 & 0.1767 & 0.1669 & 0.1782 \\
\hline
\end{tabular}
\label{t3a}
\end{table}
\begin{table}[ht]
\caption{Analytical values of $\xi_{NC}$ for different values of $M$ and $\theta$ [ $q=5/4$, $m^{2}=-3$, $z_{m}=0.5$ and $d=5$] }
\centering
\begin{tabular}{|c| c| c| c| }
\hline
$\theta$ & \multicolumn{3}{c|}{$\xi_{NC}$} \\
\hline
& $MG_{d}=10$ & $MG_{d}=50$ & $MG_{d}=100$ \\
\hline
0.3 & 0.1015 & 0.1126 & 0.1134 \\
\hline
0.5 & 0.0981 & 0.1067 & 0.1114 \\
\hline
0.7 & 0.1020 & 0.1008 & 0.1069 \\
\hline
0.9 & 0.1056 & 0.0980 & 0.1024 \\
\hline
\end{tabular}
\label{t3b}
\end{table}
\begin{table}[ht]
\caption{Analytical values of $\xi_{NC}$ for different values of $M$ and $\theta$ [ $q=7/4$, $m^{2}=-3$, $z_{m}=0.5$ and $d=5$] }
\centering
\begin{tabular}{|c| c| c| c| }
\hline
$\theta$ & \multicolumn{3}{c|}{$\xi_{NC}$} \\
\hline
& $MG_{d}=10$ & $MG_{d}=50$ & $MG_{d}=100$ \\
\hline
0.3 & 0.0167 & 0.0177 & 0.0178 \\
\hline
0.5 & 0.0172 & 0.0171 & 0.0176 \\
\hline
0.7 & 0.0183 & 0.0167 & 0.0171 \\
\hline
0.9 & 0.0187 & 0.0168 & 0.0168 \\
\hline
\end{tabular}
\label{t3c}
\end{table}
\noindent From eq.(s) (\ref{eq32}) and (\ref{eq34}), we obtain the expression for $\tilde{v}_{NC}$
\begin{multline}
\tilde{v}^{2}\equiv\tilde{v}_{NC}^{2}=
m^{4}+m^{2}g_{0}^{'}(1)
\left(d-4-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}\right)-
\left(\frac{4m^{2}g_{0}^{'}(1)(z_{m}+\lambda_{+}(1-z_{m}))}{(1-z_{m})(2z_{m}+\lambda_{+}(1-z_{m}))}\right)\\ + \left(\frac{4g_{0}^{'2}(1)\lambda_{+}}{(1-z_{m})(2z_{m}+\lambda_{+}(1-z_{m}))}\right)~.
\label{eq38}
\end{multline}
\noindent We now have all the expressions in hand required to compute the condensation operator for this problem. The expression for $\left \langle O_{+} \right \rangle$ is obtained by substituting $\tilde{v}_{NC}$ from eq.(\ref{eq38}) and $\alpha_{NC}$ from eq.(\ref{eq35}) in eq.(\ref{eq32}). This gives
\begin{equation}
\left \langle O_{+} \right \rangle_{NC}=\frac{r_{+}^{\lambda_{+}}(1-\frac{m^{2}(1-z_{m})}{2g_{0}^{'}(1)})}{z_{m}^{\lambda_{+}}(1+\frac{\lambda_{+}(1-z_{m})}{2z_{m}})}\sqrt{\mathit{A}_{NC}}\times
\left(\frac{(T_{c})_{NC}}{T}\right)^{\frac{d-2}{2(2q-1)}}
\sqrt{1-\left(\frac{T}{(T_{c})_{NC}}\right)^{\frac{d-2}{2q-1}}}
\label{eq39}
\end{equation}
\noindent where
\begin{equation}
\mathit{A}_{NC}=\frac{(-1)^{5q-3}2^{q}(\beta q)(2q-1)(-g_{0}^{'}(1))}{\tilde{v}_{NC}^{2(1-q)}(1-z_{m})}\times\left[1+\left(2-\frac{d-2}{2q-1}\right)(1-z_{m})\right]~.
\label{eq40}
\end{equation}
Now we proceed to take the $\theta \rightarrow 0$ limit of the above findings.
The expression for $\alpha^{2}\equiv\alpha^{2}_{C}$ (with $\alpha=\psi(1)$) now reduces to\footnote{It is to be noted that our expression differs from that in \cite{dib} since an algebraic error was made in that paper.}
\begin{multline}
\alpha^{2}_{C}= \frac{(-1)^{5q-1}2^{q}(\beta q)(2q-1)(d-1)}{\tilde{v}^{2(1-q)}(1-z_{m})}\\
\times \left[1+\left(2-\frac{d-2}{2q-1}\right)(1-z_{m}) \right]\times\left(\frac{(T_{c})_{C}}{T}\right)^{\frac{d-2}{2q-1}}\left[1-\left(\frac{T}{(T_{c})_{C}}\right)^{\frac{d-2}{2q-1}}\right]
\label{eq41}
\end{multline}
where
\begin{equation}
(T_{c})_{C}=\xi_{C}\rho^{\frac{1}{d-2}}
\label{eq42}
\end{equation}
with $\xi_{C}$ given by
\begin{equation}
\xi_{C}=\frac{z_{m}^{(\frac{d-2}{2q-1}-2)(\frac{2q-1}{d-2})}}{\tilde{v}_{C}^{\frac{2q-1}{d-2}}}\left(\frac{d-1}{4\pi}\right)\frac{(\frac{d-2}{2q-1}-1)^{\frac{2q-1}{d-2}}}{[1+(2-\frac{d-2}{2q-1})(1-z_{m})]^{\frac{2q-1}{d-2}}}~.
\label{eq43}
\end{equation}
The expression for $\tilde{v}\equiv\tilde{v}_{C}$ and the condensation operator in the commutative case take the form
\begin{multline}
\tilde{v}_{C}=
\left
(m^{4}+2m^{2}(d-1)\left[\frac{2(z_{m}+\lambda_{+}(1-z_{m}))}{(1-z_{m})(2z_{m}+\lambda_{+}(1-z_{m}))}+1\right]+\frac{4\lambda_{+}(d-1)^{2}}{(1-z_{m})(2z_{m}+\lambda_{+}(1-z_{m}))}
\right)^{\frac{1}{2}}
\label{eq44}
\end{multline}
\begin{equation}
\left \langle O_{+} \right \rangle_{C}=\frac{r_{+}^{\lambda_{+}}(1+\frac{m^{2}(1-z_{m})}{2(d-1)})}{z_{m}^{\lambda_{+}}(1+\frac{\lambda_{+}(1-z_{m})}{2z_{m}})}\sqrt{\mathit{A}_{C}}\times
\left(\frac{(T_{c})_{C}}{T}\right)^{\frac{d-2}{2(2q-1)}}
\sqrt{1-\left(\frac{T}{(T_{c})_{C}}\right)^{\frac{d-2}{2q-1}}}
\label{eq45}
\end{equation}
where
\begin{equation}
\mathit{A}_{C}=\frac{(-1)^{5q-3}2^{q}(\beta q)(2q-1)(d-1)}{\tilde{v}_{C}^{2(1-q)}(1-z_{m})}\times\left[1+\left(2-\frac{d-2}{2q-1}\right)(1-z_{m})\right]~.
\label{eq46}
\end{equation}
\noindent In Table 5, we display the analytical results for $\xi_{C}$
for the commutative case.
\begin{table}[ht]
\caption{Analytical values of $\xi_{C}$ for $z_{m}=0.5$ and $d=5$}
\centering
\begin{tabular}{|c| c| c| c| }
\hline
$m^2$ & \multicolumn{3}{c|}{$\xi_{C}$} \\
\hline
&$q=1$ & $q=5/4$ & $q=7/4$ \\
\hline
0 & 0.17028 &0.0880 &0.0117 \\
\hline
-3 & 0.2017 & 0.1135 &0.0179 \\
\hline
\end{tabular}
\label{t5}
\end{table}
In Fig. 1, we use the analytical results in Tables 2, 3 and 4 to plot $\xi$ vs $\theta$ for different values of the power Maxwell parameter $q$.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{xitheta.jpg}
\caption{$\xi$ vs $\theta$ plot: $z_{m}=0.5$, $MG_{d}=100$, $m^2=-3$, $d=5$}
\label{fig1}
\end{figure}
\section{Meissner-like effect}
In this section, we introduce an external magnetic field $B$ in the bulk theory and observe how the condensate behaves at low temperature in the noncommutative black hole background. The aim is to find a critical magnetic field $B_{c}$ above which the condensation vanishes. We therefore make the following ansatz
\begin{equation}
A_{t}=\phi(z) \quad, \quad A_{y}=Bx \quad, \quad \psi=\psi(x,z)~.
\label{eq47}
\end{equation}
The equation of motion for the complex scalar field $\psi$ that follows from the above ansatz reads
\begin{multline}
\partial_{z}^{2}\psi(x,z)+\left(\frac{f^{'}(z)}{f(z)}-\frac{d-4}{z}\right)\partial_{z}\psi(x,z)\\+\frac{\phi^{2}(z)\psi(x,z) r_{+}^{2}}{z^{4}f^{2}(z)}-\frac{m^{2}r_{+}^{2}\psi(x,z)}{z^{4}f(z)}+\frac{1}{z^{2}f(z)}(\partial_{x}^{2}\psi-B^{2}x^{2}\psi)=0~.
\label{eq00}
\end{multline}
For solving the above equation, we write $\psi(x,z)$ as
\begin{equation}
\psi(x,z)=X(x)R(z)~.
\label{eq49}
\end{equation}
Substituting eq.(\ref{eq49}) in eq.(\ref{eq00}), we arrive at the following expression
\begin{multline}
\frac{z^{2}f(z)}{R(z)}\left[\partial_{z}^{2}R(z)+\left(\frac{f^{'}(z)}{f(z)}-\frac{d-4}{z}\right)\partial_{z}R(z)\right]
+\frac{\phi^{2}(z)r_{+}^{2}}{z^{2}f(z)}-\frac{m^{2}r_{+}^{2}}{z^{2}}
\\-\frac{1}{X(x)}\left[-\partial_{x}^{2}X(x)+B^{2}x^{2}X(x)\right]=0~.
\label{eq50}
\end{multline}
This equation implies that $X(x)$ satisfies a $1$-dimensional simple harmonic oscillator equation with frequency $B$
\begin{equation}
-X^{''}(x)+B^{2}x^{2}X(x)=\lambda_{n}BX(x)
\label{eq51}
\end{equation}
where the separation constant is given by $\lambda_{n}=2n+1$. For the rest of our analysis we shall set $n=0$, since this corresponds to the most stable mode.
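\noindent For $n=0$, the (unnormalized) solution of eq.(\ref{eq51}) is the familiar harmonic oscillator ground state
\begin{equation*}
X_{0}(x)=e^{-Bx^{2}/2}~,\qquad \lambda_{0}=1~,
\end{equation*}
so the condensate is localized in the $x$-direction over a length scale of order $1/\sqrt{B}$.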
\noindent The equation for $R(z)$ has the following form
\begin{equation}
R^{''}(z)+\left(\frac{f^{'}(z)}{f(z)}-\frac{d-4}{z}\right)R^{'}(z)+
\frac{\phi^{2}(z)r_{+}^{2}R(z)}{z^{4}f^{2}(z)}-\frac{m^{2}r_{+}^{2}R(z)}{z^{4}f(z)}\\=\frac{BR(z)}{z^{2}f(z)}~.
\label{eq52}
\end{equation}
Now we shall expand $R(z)$ in a Taylor series around $z=1$ and equate it with the asymptotic solution of $R(z)$ at some point $z=z_{m}$.
\noindent The Taylor series expansion of $R(z)$ around $z=1$ reads
\begin{equation}
R(z)=R(1)-R^{'}(1)(1-z)+\frac{1}{2}R^{''}(1)(1-z)^{2}+O\left((1-z)^{3}\right)~.
\label{eq53}
\end{equation}
\noindent Further the asymptotic form for $R(z)$ reads
\begin{equation}
R(z)=\frac{\left \langle O \right \rangle_{+}}{r_{+}^{\lambda_{+}}}z^{\lambda_{+}}~.
\label{eq54}
\end{equation}
\noindent Equating these at $z=z_{m}$ yields
\begin{equation}
\left[\frac{\left \langle O \right \rangle_{+}}{r_{+}^{\lambda_{+}}}z^{\lambda_{+}}\right]_{z=z_{m}}=\left[R(1)-R^{'}(1)(1-z)+\frac{1}{2}R^{''}(1)(1-z)^{2}+O\left((1-z)^{3}\right)\right]_{z=z_{m}}~.
\label{eq55}
\end{equation}
\noindent Differentiating eq.(s)(\ref{eq53}) and (\ref{eq54}) with respect to $z$ and evaluating at $z=z_{m}$ yields
\begin{equation}
\left[\lambda_{+}\frac{\left \langle O \right \rangle_{+}}{r_{+}^{\lambda_{+}}}z^{\lambda_{+}-1}\right]_{z=z_{m}}
=\left[R^{'}(1)-R^{''}(1)(1-z)+O\left((1-z)^{2}\right)\right]_{z=z_{m}}~.
\label{eq56}
\end{equation}
\noindent Now for the noncommutative black hole spacetime (\ref{eq06}), we have from eq.(\ref{eq52})
\begin{equation}
R^{'}(1)=\left(\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}\right)R(1)
\label{eq57}
\end{equation}
\begin{multline}
R^{''}(1)=
\frac{1}{2}\left[d-4+\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}\right]\left[\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}\right]R(1)
\\+\frac{BR(1)}{r_{+}^{2}g_{0}^{'}(1)}
-\frac{\phi^{'2}(1)R(1)}{2r_{+}^{2}g_{0}^{'2}(1)}~.
\label{eq58}
\end{multline}
\noindent Substituting $R^{'}(1)$ and $R^{''}(1)$ in eq.(s)(\ref{eq55}) and (\ref{eq56}), we have
\begin{multline}
\left[\frac{\left \langle O \right \rangle_{+}}{r_{+}^{\lambda_{+}}}z_{m}^{\lambda_{+}}\right]=
R(1)
-\left(\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}\right)(1-z_{m})R(1)\\
+\frac{1}{2}(1-z_{m})^{2}
[\frac{1}{2}\left[d-4+\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}\right]\left[\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}\right]R(1)
\\+\frac{BR(1)}{r_{+}^{2}g_{0}^{'}(1)}
-\frac{\phi^{'2}(1)R(1)}{2r_{+}^{2}g_{0}^{'2}(1)}]
\label{eq59}
\end{multline}
\begin{multline}
\left[\lambda_{+}\frac{\left \langle O \right \rangle_{+}}{r_{+}^{\lambda_{+}}}z_{m}^{\lambda_{+}-1}\right]=
\left(\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}\right)R(1)\\
-(1-z_{m})
[\frac{1}{2}\left[d-4+\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}-\frac{g_{0}^{''}(1)}{g_{0}^{'}(1)}\right]\left[\frac{m^{2}}{g_{0}^{'}(1)}+\frac{B}{r_{+}^{2}g_{0}^{'}(1)}\right]R(1)
\\+\frac{BR(1)}{r_{+}^{2}g_{0}^{'}(1)}
-\frac{\phi^{'2}(1)R(1)}{2r_{+}^{2}g_{0}^{'2}(1)}]~.
\label{eq60}
\end{multline}
\noindent Eq.(s) (\ref{eq59}) and (\ref{eq60}) yield a quadratic equation for $B$. This reads
\begin{equation}
B^{2}+pr_{+}^{2}B+nr_{+}^{4}-\phi^{'2}(1)r_{+}^{2}=0
\label{eq61}
\end{equation}
where
\begin{equation}
p=2m^{2}+\left(d-4-\frac{g^{''}_{0}(1)}{g^{'}_{0}(1)}\right)g_{0}^{'}(1)+2g_{0}^{'}(1)-\frac{4g_{0}^{'}(1)(\lambda_{+}(1-z_{m})+z_{m})}{(1-z_{m})(\lambda_{+}(1-z_{m})+2z_{m})}
\label{eq62}
\end{equation}
and
\begin{multline}
n=m^{4}
+m^{2}g_{0}^{'}(1)
\left[\left(d-4-\frac{g^{''}_{0}(1)}{g^{'}_{0}(1)}\right)
-\frac{4(z_{m}+\lambda_{+}(1-z_{m}))}{(1-z_{m})(2z_{m}+\lambda_{+}(1-z_{m}))}\right]\\
+\frac{4\lambda_{+}g_{0}^{'2}(1)}{(1-z_{m})(2z_{m}+\lambda_{+}(1-z_{m}))}~.
\label{eq63}
\end{multline}
\noindent Now when $B=B_{c}$, the condensate is vanishingly small, so we can take $\psi\approx0$; eq.(\ref{eq17}) then takes the form
\begin{equation}
\partial_{z}^{2}\phi+\frac{1}{z}\left(2-\frac{d-2}{2q-1}\right)\partial_{z}\phi= 0~.
\label{eq64}
\end{equation}
Solving this, we get
\begin{equation}
\phi(z)=\left(\frac{\rho}{r_{+}^{d-2}}\right)^{\frac{1}{2q-1}}r_{+}(1-z^{\frac{d-2}{2q-1}-1})
\label{eq65}
\end{equation}
\begin{equation}
\Rightarrow \phi^{'2}(1)r_{+}^{2}=
\left(\frac{\rho}{r_{+}^{d-2}}\right)
^{\frac{2}{2q-1}}r_{+}^{4}
\left(\frac{d-2}{2q-1}-1\right)^{2}~.
\label{eq66}
\end{equation}
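\noindent Solving the quadratic eq.(\ref{eq61}) for its positive root gives the intermediate expression
\begin{equation*}
B_{c}=\frac{r_{+}^{2}}{2}\left[\sqrt{p^{2}-4n+\frac{4\phi^{'2}(1)}{r_{+}^{2}}}-p\right]~.
\end{equation*}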
Using eq.(\ref{eq66}) in this root, we get the expression for the critical magnetic field:
\begin{equation}
(B_{c})_{NC}=\frac{(-g_{0}^{'}(1))^{\frac{d-2}{2q-1}-2}}{2(4\pi)^{\frac{d-2}{2q-1}-2}\xi_{NC}^{\frac{d-2}{2q-1}}}(T_{c})_{NC}^{2}\times
\left[
\Omega_{NC}(d,q,m)-p\left(-\frac{4\pi\xi_{NC}}{g_{0}^{'}(1)}\right)^{\frac{d-2}{2q-1}}
\left(\frac{T}{(T_{c})_{NC}}\right)^{\frac{d-2}{2q-1}}\right]
\label{eq67}
\end{equation}
where
\begin{equation}
\Omega_{NC}(d,q,m)=
\left[4(\frac{d-2}{2q-1}-1)^{2}-(4n-p^{2})\left(-\frac{4\pi\xi_{NC}}{g_{0}^{'}(1)}\right)^{\frac{2(d-1)}{2q-1}}\left(\frac{T}{(T_{c})_{NC}}\right)^{\frac{2(d-1)}{2q-1}}\right]^{\frac{1}{2}}~.
\label{eq68}
\end{equation}
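\noindent Numerically, it is often more convenient to solve the quadratic eq.(\ref{eq61}) directly rather than use the closed form. A minimal sketch (our own illustration; $p$, $n$ and $\phi^{'2}(1)r_{+}^{2}$ are assumed to have been evaluated beforehand from eq.(s) (\ref{eq62}), (\ref{eq63}) and (\ref{eq66})):
\begin{verbatim}
# Sketch: B_c as the positive root of eq.(61),
# B^2 + p r+^2 B + (n r+^4 - phi'(1)^2 r+^2) = 0.
import numpy as np

def critical_field(p, n, rp, phip2_rp2):
    # phip2_rp2 = phi'(1)^2 r+^2, computed from eq.(66)
    b = p * rp**2
    c = n * rp**4 - phip2_rp2
    disc = b**2 - 4.0 * c
    if disc < 0.0:
        return None   # no real root: no physical critical field
    root = 0.5 * (-b + np.sqrt(disc))
    return root if root > 0.0 else None
\end{verbatim}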
Once again we take the $\theta\rightarrow 0$ limit of the above results. This gives the critical magnetic field in the commutative case:
\begin{equation}
(B_{c})_{C}=\frac{(d-1)^{\frac{d-2}{2q-1}-2}}{2(4\pi)^{\frac{d-2}{2q-1}-2}\xi_{C}^{\frac{d-2}{2q-1}}}(T_{c})_{C}^{2}\times
\left[
\Omega_{C}(d,q,m)-p\left(\frac{4\pi\xi_{C}}{d-1}\right)^{\frac{d-2}{2q-1}}
\left(\frac{T}{(T_{c})_{C}}\right)^{\frac{d-2}{2q-1}}\right]
\label{eq69}
\end{equation}
where
\begin{equation}
\Omega_{C}(d,q,m)=\left[4(\frac{d-2}{2q-1}-1)^{2}-(4n-p^{2})\left(\frac{4\pi\xi_{C}}{d-1}\right)^{\frac{2(d-1)}{2q-1}}\left(\frac{T}{(T_{c})_{C}}\right)^{\frac{2(d-1)}{2q-1}}\right]^{\frac{1}{2}}~.
\label{eq70}
\end{equation}
The above findings are displayed in Figures 2 and 3. It is evident from these figures that there exists a critical magnetic field, as well as a critical temperature, above which the superconducting phase vanishes. In Fig. 2, we present our results for $B_{c}$ vs $T$ for two sets of values, namely $m^{2}=-3$, $q=1$, $d=5$ and $m^{2}=0$, $q=1$, $d=5$, for different values of the noncommutative parameter $\theta$. In Fig. 3, the plots are made for $q=5/4$ with $m^{2}=-3$ and $d=5$ for different values of the noncommutative parameter $\theta$.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{BGraph1.jpg}
\includegraphics[width=8cm]{BGraph3.jpg}
\caption{$B_{c}/T_{c}^{2}$ vs $T/T_{c}$ plot : $z_{m}=0.5, MG_{d}=100, d=5$}
\label{fig2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{BGraph2.jpg}
\caption{$B_{c}/T_{c}^{2}$ vs $T/T_{c}$ plot : $z_{m}=0.5, MG_{d}=100, d=5$}
\label{fig3}
\end{figure}
\noindent It is interesting to note that the critical magnetic field above which the condensate vanishes increases with increase in the noncommutative parameter $\theta$.
\section{Conclusions}
In this paper, we have explored the role of noncommutativity of spacetime in holographic superconductors in the framework of power Maxwell electrodynamics using the matching method technique. In our study, the relation between the critical temperature and the charge density has been obtained in $d$-dimensions. It is observed that the critical temperature depends not only on the charge density but also on the noncommutative parameter $\theta$, the mass of the black hole and the parameter $q$ appearing in the power Maxwell theory. We have presented the analytical results for the ratio of the critical temperature and the charge density for $d=5$. We have also analytically obtained the expression for the condensation operator in $d$-dimensions. Our analytical results indicate that the condensate gets harder to form in the presence of the power Maxwell parameter $q$ and the noncommutative parameter $\theta$. However, with increase in the mass of the black hole, the critical temperature for a particular value of $\theta$ increases, which reveals that the effects of spacetime noncommutativity become weaker for black holes with mass much larger in comparison to the noncommutative parameter $\theta$.
We also conclude from our results that the onset of power Maxwell electrodynamics (for a value of $q\neq 1$) makes the condensate harder to form. However, once again, the effect of the power Maxwell parameter on the formation of the condensate decreases with increase in the mass of the black hole.
We have then studied the Meissner-like effect by introducing an external magnetic field in our model. The critical magnetic field above which the condensate vanishes is obtained by the matching method and is observed to increase with increase in the noncommutative parameter $\theta$.
\section*{Acknowledgments}
SP thanks the Council of Scientific and Industrial Research (CSIR), Govt. of India for financial support.
SG acknowledges the support by DST SERB under Start Up Research Grant (Young Scientist), File No.YSS/2014/000180.
\section{Introduction.}\label{sectionI}
In many respects, the properties of colloidal fluids resemble almost perfectly those of the corresponding atomic liquid \cite{pusey0,nagele0,deschepperpusey1, deschepperpusey2}.
It is well known that the equilibrium phase diagram, and in general all the equilibrium thermodynamic properties, of a specific model system (say a Lennard-Jones liquid) will be independent of the microscopic (either molecular or Brownian) dynamics that govern the motion of the $N$ interacting particles that constitute the system. This implies that these equilibrium properties can be generated using either molecular or Brownian dynamics simulations \cite{tildesley}. Furthermore, although time-dependent properties
are expected in general to depend on the specific microscopic
dynamics, some features associated with the
long-time dynamic behavior of the system also seem to be rather insensitive
to the microscopic short-time dynamics. This appears to be particularly true regarding the rather complex dynamic behavior of these systems as they approach the glass transition \cite{lowenhansenroux,szamelflenner,puertasaging}. Determining the range of validity of this analogy continues to be a relevant topic in the study of the dynamics of liquids.
From the theoretical side one would like to unify colloidal
and atomic liquids in a common theoretical description of the relaxation dynamics of the local density fluctuations, which explicitly exhibits the origin of the
similarities and differences in their macroscopic dynamics. One possible general framework for such theoretical analysis is the concept of the generalized Langevin equation (GLE) \cite{delrio,faraday}. This equation describes the dynamics of the thermal fluctuations $\delta a_{i}(t)\ (\equiv a_{i}(t)-a^{eq}_{i})$ of the instantaneous value of the macroscopic variables $ a_{i}(t)$ ($i=1,2,...,\nu$), around its equilibrium value $a^{eq}_{i}$, and has the structure of the most general linear stochastic equation with additive noise for the vector $\delta \mathbf{a}(t)=\left[\delta a_{1}(t),\delta a_{2}(t),...,\delta a_{\nu }(t)\right]^{\dagger} $ (with the dagger meaning transpose). The GLE equation has been widely used in the description of thermal fluctuation phenomena in simple liquid systems, and Boon and Yip's textbook \cite{boonyip} contains a detailed account of its early use to describe the dynamics of simple liquids. Although this stochastic equation is conventionally associated with the Mori-Zwanzig projection operator formalism \cite{zwanzig,mori}, in reality its structure is not a consequence of the hamiltonian basis of Mori-Zwanzig's derivation; instead, it is essentially equivalent to the mathematical condition of stationarity \cite{delrio}.
Thus, in Ref. \cite{scgle0} the GLE formalism, understood in the latter manner, was employed to derive the most general diffusion equation of a model Brownian liquid (i.e., an idealized monodisperse colloidal suspension in the absence of hydrodynamic interactions) formed by $N$ spherical Brownian particles interacting between them through direct (i.e., conservative) forces, but in the absence of hydrodynamic interactions. The resulting general memory function expression for the intermediate scattering function (ISF) $F(k,t)$ and for its self component $F_S(k,t)$, were later employed in the construction of the self-consistent generalized Langevin equation (SCGLE) theory of colloid dynamics \cite{scgle1,scgle2}, eventually applied to the description of dynamic arrest phenomena \cite{rmf,todos1,todos2} and more recently \cite{noneqscgle0,noneqscgle1} to the construction of a first-principles theory of equilibration and aging of colloidal glass-forming liquids.
With the aim of investigating the relationship between the dynamics of atomic and Brownian liquids, here we start the extension of these theoretical developments to describe the macroscopic dynamics of both kinds of systems within the same theoretical formalism. With this general intention in mind, in the present paper we discuss the application of the generalized Langevin equation formalism above, to the derivation of general memory-function expressions for the (collective and self) intermediate scattering functions of an atomic liquid. These expressions should in principle be capable of describing the crossover behavior of these properties between their ballistic short time limit and their diffusive long-time behavior. Although in practice we do not use these expressions here to numerically evaluate these functions in the short- or intermediate time-regime $t\approx \tau_0$ (where $ \tau_0$ is the mean free time), we find that in their long-time limit, $t \gg \tau_0$, these expressions for $F(k,t)$ and $F_S(k,t)$ become essentially identical to the corresponding expressions for a colloidal fluid, strongly suggesting a well defined long-time dynamic correspondence between atomic and colloidal liquids.
The strategy that we shall employ to derive the memory function equations for the intermediate scattering functions of our model atomic liquid will actually rely very heavily on the referred previous derivation \cite{scgle0} of the time-evolution equations for $F(k,t)$ and $F_S(k,t)$ of the corresponding idealized Brownian fluid. The rationale for this is the rather simple observation that the essential difference between an atomic liquid and its idealized Brownian counterpart (a colloidal liquid in the absence of hydrodynamic interactions) is the presence, in the microscopic equations of motion of the latter, of the friction force $-\zeta ^{(s)}{\bf v}_{i}(t)$ due to the supporting solvent and the corresponding fluctuating force ${\bf f}^{(s)}(t)$. Thus, we first review the derivation of Ref. \cite{scgle0}, with the aim of keeping track of the effects of these friction terms. This aspect of the present work is developed in section \ref{sectionII}. At the end of the section, we simply take the $\zeta ^{(s)}\to 0$ limit of the end result of the referred derivation, to obtain the corresponding time-evolution equations for $F(k,t)$ and $F_S(k,t)$ of our atomic liquid (namely, Eqs. (\ref{fdkz0}) and (\ref{fsdkz0})).
The next task of this work is to analyze the long-time limit of these results for $F(k,t)$ and $F_S(k,t)$. In Ref. \cite{scgle0}, dealing with Brownian systems, this limit was referred to as the ``overdamped" limit, corresponding to times $t$ much longer than the relaxation time $\tau ^{(s)} \equiv M/\zeta ^{(s)}$ of the velocity autocorrelation function. This relaxation results from the damping of the particle's momentum due to the friction force $-\zeta ^{(s)}{\bf v}_{i}(t)$. Thus, in that case $\tau ^{(s)}$ sets the crossover timescale from the early initial regime $t << \tau^{(s)}$, where the inertial effects are still important, to the long-time regime $t>> \tau ^{(s)}$, where the motion of the suspended particles is purely diffusive, and described by the short-time self-diffusion coefficient $D^{(s)}=k_BT/\zeta ^{(s)}$. In contrast, in atomic liquids an analogous timescale is apparently absent, since there is not any material solvent exerting damping friction forces. In spite of that, in Section \ref{sectionIII}, we analyze the long-time limit of the time-evolution equations for $F(k,t)$ and $F_S(k,t)$ of the atomic liquid derived in Section \ref{sectionII}. We find that in this limit, these equations happen to adopt the same structure as the corresponding equations for Brownian systems in their overdamped limit. As a result of this analysis, we conclude that the parameter playing the role of the short-time self-diffusion coefficient $D^{(s)}$ is now the self-diffusion coefficient $D^{0}$ determined by kinetic theory.
This formal dynamic correspondence has important physical consequences, expressed in terms of well defined scaling properties of the dynamics of two fluid systems which only differ in the microscopic laws that govern the motion of the constituent particles (either molecular or Brownian dynamics). The most relevant of such consequences are briefly discussed in the final section (Section \ref{sectionV}) of this paper.
\section{Atomic fluid as a frictionless Brownian liquid.} \label{sectionII}
Let us start by reviewing the derivation in Ref. \cite{scgle0} of the time-evolution equations of $F(k,t)$ and $F_S(k,t)$ of an idealized monodisperse colloidal suspension in the absence of hydrodynamic interactions, formed by $N$ spherical particles in a
volume $V$, whose microscopic dynamics is described by the $N$-particle Langevin equations \cite
{3,4,5}
\begin{equation}
M{\frac{d{\bf v}_{i}(t)}{dt}}= -\zeta ^{(s)}{\bf v}_{i}(t)+{\bf f}^{(s)}_{i}(t)+\sum_{j\neq i}{\bf F}_{ij}(t),\quad (i=1,2,\ldots ,N). \label{eq1}
\end{equation}
In these equations, $M$ is the mass and ${\bf v}_{i}(t)$ the
velocity of the $i$th particle, and $\zeta^{(s)}$ is its friction coefficient
in the absence of interactions. Also, ${\bf f}^{(s)}_{i}(t)$ is a random force,
modeled as a Gaussian white noise of zero mean and variance given by $\langle {\bf f}^{(s)}_{i}(t){\bf f}^{(s)}_{j}(0)\rangle =2k_{B}T\zeta^{(s)}\delta(t)\delta _{ij}\stackrel{\leftrightarrow }{{\bf I}}$ ($i,j=1,2,\ldots ,N$; $\stackrel{\leftrightarrow }{{\bf I}}$ being the $3\times 3$ unit tensor). The direct interactions between the particles are represented by the sum of the pairwise forces ${\bf F}_{ij}$ that the $j$th particle exerts on particle $i$, i.e., ${\bf F}_{ij}$ is obtained from the pair potential $u(|{\bf r}_{i}-{\bf r}_{j}|)$.
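To make the limit discussed below concrete, the following minimal sketch (our own illustration; the interparticle force routine \texttt{force} is a user-supplied placeholder) advances Eq. (\ref{eq1}) by one Euler-Maruyama step; setting $\zeta^{(s)}=0$ switches off both the friction and, through the fluctuation-dissipation relation, the random force, leaving purely Newtonian (molecular) dynamics.
\begin{verbatim}
# Sketch: one Euler-Maruyama step of eq.(1).  With zeta = 0 the noise
# amplitude sqrt(2 kT zeta dt) vanishes and the step reduces to plain
# Newtonian (molecular-dynamics) integration.
import numpy as np

def langevin_step(r, v, force, M, zeta, kT, dt, rng):
    F = force(r)                      # direct forces F_i = sum_j F_ij
    noise = rng.normal(size=v.shape) * np.sqrt(2.0 * kT * zeta * dt)
    v = v + (dt * (F - zeta * v) + noise) / M
    r = r + dt * v
    return r, v
\end{verbatim}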
Our goal is to derive the macroscopic time-evolution equations for the ISFs $F(k,t)$ and $F_{S}(k,t)$, starting from this microscopic level of description. Some of the most important features of such general time evolution equations for $F(k,t)$ and $F_{S}(k,t)$ can be written, however, right at the outset, since they derive from
the general selection rules \cite{delrio} originating from the stationarity
condition and from other symmetry properties of the macroscopic variables
whose dynamics couple to the dynamics of the local particle concentration.
This was the approach adopted in Ref. \cite{scgle0}, which derived the most general time-evolution equation for the fluctuations of the local concentration $n({\bf r},t)$ of colloidal particles consistent with the selection rules referred to above. The specific information of the microscopic dynamics was then employed in the approximate or partial determination of those elements of the time-evolution equation that such selection rules left undetermined. This section briefly summarizes the main steps of such derivation.
At each step of the following derivation, however, we urge the reader to keep track of the particular case in which the friction term $-\zeta ^{(s)}{\bf v}_{i}(t)$ and its corresponding fluctuating force ${\bf f}^{(s)}
_{i}(t)$ are absent, and to recognize that an
\emph{atomic} liquid can be viewed as the present Brownian liquid in the
limit of an infinitely tenuous solvent, such that the Stokes
friction coefficient $\zeta ^{(s)}$ vanishes. Thus, we shall take the limit $\zeta ^{(s)} \to 0$ in the general memory-function expressions for $F(k, t)$ and $F_S(k,t)$ derived in this section. In such a limit one is left only with the particles in vacuum, and these equations for $F(k, t)$ and $F_S(k,t)$ will then become the exact memory function expressions for the ISFs of an \emph{atomic} liquid.
Thus, let us first recall that the basis of the GLE formalism are the general mathematical conditions stated by the theorem of stationarity \cite{delrio}. This theorem states that the equation describing the dynamics of the thermal fluctuations $\delta a_{i}(t)\ (\equiv a_{i}(t)-a^{eq}_{i})$ of the instantaneous value of the macroscopic variables $ a_{i}(t)$ ($i=1,2,...,\nu$) around its equilibrium value $a^{eq}_{i}$ must have the structure of the most general linear stochastic equation with additive noise for the vector $\delta \mathbf{a}(t)=\left[\delta a_{1}(t),\delta a_{2}(t),...,\delta a_{\nu }(t)\right]^\dagger $, namely,
\begin{equation}
\frac{d\delta \mathbf{a}(t)}{dt}=-\omega \chi ^{-1}\delta \mathbf{a}(t)-\int%
\limits_{0}^{t}L(t-t^{\prime })\chi ^{-1}\delta \mathbf{a}(t^{\prime })dt^{\prime }+%
\mathbf{f}(t).
\label{gle0}
\end{equation}
In this equation $\chi $ is the matrix of static correlations, $\chi
_{ij}\equiv \left\langle \delta a_{i}(0) \delta a_{j}^{\ast }(0)\right\rangle $, $\omega $
is an anti-Hermitian matrix ($\omega _{ij}=-\omega _{ji}^{\ast }$), and the matrix $L(t)$
is determined by the fluctuation-dissipation relation $L_{ij}(t)=\left\langle
f_{i}(t)f_{j}^*(0)\right\rangle $, where $f_{i}(t)$ is the $i$th component of the vector of random forces $\mathbf{f}(t)$.
For the present purpose, we choose the components of the state vector $\delta {\bf a}(t)$ as
\begin{equation}
\delta {\bf a}(t)\equiv \left[ \delta n({\bf k},t),\delta j({\bf k},t),\delta
\sigma _{K}({\bf k},t),\delta \sigma _{U}({\bf k},t)\right]^{\dagger},
\end{equation}
with the following definitions. First, $a_{1}(t)$ is the Fourier transform $\delta n({\bf k},t)$ of the fluctuations $\delta n({\bf r},t)\equiv n({\bf r},t)-n$ of the local concentration $n({\bf r},t)$ around
its bulk value $n$. The microscopic definition of $\delta n({\bf k},t)$ (for ${\bf k}\neq 0$) is
\begin{equation}
\delta n({\bf k},t)=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}e^{i{\bf k\cdot r}_{i}(t)}, \label{defnumber}
\end{equation}
where ${\bf r}_{i}(t)$ is the position of the $i$th colloidal particle at
time $t$. Normalized in this manner $\delta n({\bf k},t)$ is such that its
static correlation is $\chi _{nn}(k)\equiv \left\langle \delta n(%
{\bf k},0)\delta n(-{\bf k},0)\right\rangle =S(k)$, where $S(k)$ is the
static structure factor of the bulk suspension.
Taking the time-derivative of $\delta n({\bf k},t)$ we have the continuity equation,
\begin{equation}
\frac{\partial \delta n({\bf k},t)}{\partial t}=ik\delta j_{l}({\bf k},t), \label{continuity}
\end{equation}
where $\delta j_{l}({\bf k},t)\equiv j_{l}({\bf k},t)=\widehat{{\bf k}}{\bf %
\cdot j}({\bf k},t)$ is the component of the current ${\bf j}({\bf k},t)$ in
the direction $\widehat{{\bf k}}$ of the vector ${\bf k}$, i.e.,
\begin{equation}
j_{l}({\bf k},t)=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\widehat{{\bf k}}{\bf \cdot
v}_{i}(t)e^{i{\bf k\cdot r}_{i}(t)} \label{defcurrent}
\end{equation}
with ${\bf v}_{i}(t)=d{\bf r}_{i}(t)/dt$. Thus, $a_2(t)\equiv \delta j_{l}({\bf k},t)$, whose static correlation is
\begin{equation}
\chi _{jj}=k_{B}T/M. \label{chijj}
\end{equation}
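This value follows from equipartition: in equilibrium the Maxwell distribution of velocities factorizes, so that the velocities of different particles are uncorrelated with each other and with the positions, and hence
\begin{equation*}
\chi _{jj}=\left\langle |j_{l}({\bf k},0)|^{2}\right\rangle =\frac{1}{N}\sum_{i,j=1}^{N}\left\langle (\widehat{{\bf k}}{\bf \cdot v}_{i})(\widehat{{\bf k}}{\bf \cdot v}_{j})\,e^{i{\bf k\cdot }({\bf r}_{i}-{\bf r}_{j})}\right\rangle =\left\langle (\widehat{{\bf k}}{\bf \cdot v}_{1})^{2}\right\rangle =\frac{k_{B}T}{M}~.
\end{equation*}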
If we take the time-derivative of the current in Eq. (\ref{defcurrent}), and employ the $N$-particle
Langevin equation, Eq. (\ref{eq1}), we are led to the following result
\begin{equation}
\frac{\partial \delta j_{l}({\bf k},t)}{\partial t}=-\frac{\zeta^{(s)}}{M}%
\delta j_{l}({\bf k},t)+\frac{f^{(s)}({\bf k},t)}{M}+\frac{1}{\sqrt{N}}%
\sum_{i=1}^{N}\widehat{{\bf k}}{\bf \cdot }\frac{{\bf F}_{i}(t)}{M}e^{i{\bf %
k\cdot r}_{i}(t)} +\frac{ik}{\sqrt{N}}\sum_{i=1}^{N}\left[ \widehat{{\bf k}}%
{\bf \cdot v}_{i}(t)\right] ^{2}e^{i{\bf k\cdot r}_{i}(t)}
\end{equation}
where
\begin{equation}
f^{(s)}({\bf k},t)\equiv \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\widehat{{\bf k}}{\bf
\cdot f}^{(s)}_{i}(t)e^{i{\bf k\cdot r}_{i}(t)},
\end{equation}
and ${\bf F}_{i}(t)\equiv \sum_{j\neq i}{\bf F}_{ij}(t)$. This equation can also be written as
\begin{equation}
\frac{\partial \delta j_{l}({\bf k},t)}{\partial t}=-\frac{\zeta^{(s)}}{M}%
\delta j_{l}({\bf k},t)+\frac{f^{(s)}({\bf k},t)}{M}+ik\delta \sigma ^{zz}({\bf k},t),
\label{djdt0}
\end{equation}
with $\delta \sigma ^{zz}(k,t)$ being the
instantaneous fluctuation of the isotropic diagonal component of the stress
tensor
\begin{equation}
\sigma ^{\alpha \beta }({\bf k},t)\equiv \frac{1}{\sqrt{N}}
\sum_{i=1}^{N}\left\{ v_{i}^{\alpha }v_{i}^{\beta }-\frac{1}{2M}\sum_{j\neq
i}\frac{r_{ij}^{\alpha }r_{ij}^{\beta }}{r_{ij}^{2}}P_{k}(r_{ij})\right\}
e^{i{\bf k\cdot r}_{i}(t)},
\end{equation}
where
\begin{equation}
P_{k}(r_{ij})\equiv r_{ij}\frac{du(r_{ij})}{dr_{ij}}\frac{e^{i{\bf k\cdot r}%
_{ij}(t)}-1}{{\bf k\cdot r}_{ij}(t)}.
\end{equation}
In these equations, ${\bf r}_{ij}\equiv {\bf r}_{i}-{\bf r}_{j}$, and $%
u(r_{ij})$ is the pair potential.
Let us now write $\delta \sigma ^{zz}({\bf k},t)$ as
\begin{equation}
\delta \sigma ^{zz}({\bf k},t)=\delta p({\bf k},t)+\delta \sigma _{K}({\bf k},t)+\delta \sigma _{U}({\bf k},t),
\end{equation}
with $\delta p(k,t)=[\chi _{jj}/S(k)]\delta n(k,t)$ being the Fourier transform of the local pressure fluctuations, and with $\delta \sigma _{K}({\bf k},t)$ and $\delta \sigma _{U}({\bf k},t)$ being the statically orthogonal kinetic and configurational components of [$\delta \sigma ^{zz}({\bf k},t)-\delta p({\bf k},t)]$,
defined as
\begin{equation}
\delta \sigma _{K}({\bf k},t)\equiv \frac{1}{\sqrt{N}} \sum_{i=1}^{N}(v_{i}^{z})^{2}e^{i{\bf k\cdot r}_{i}(t)}-\chi _{jj}\delta n({\bf k},t), \label{defsigmak}
\end{equation}
and
\begin{equation}
\delta \sigma _{U}({\bf k},t)\equiv -\frac{1}{2M\sqrt{N}}
\sum_{i=1}^{N}\sum_{j\neq i}\frac{(r_{ij}^{z})^{2}}{r_{ij}^{2}}P_{k}(r_{ij})e^{i{\bf k\cdot r}_{i}(t)}-\delta p({\bf k},t)+\chi _{jj}\delta n({\bf k},t). \label{defsigmau}
\end{equation}
This completes the microscopic definition of the components $\delta n({\bf k},t)$, $\delta j({\bf k},t)$, $\delta \sigma _{K}({\bf k},t)$, and $\delta \sigma _{U}({\bf k},t)$ of the state vector ${\bf a}(t)$, which are given in Eqs. (\ref{defnumber}), (\ref{defcurrent}), (\ref{defsigmak}), and (\ref{defsigmau}), respectively.
As a result, we finally rewrite the momentum conservation equation, Eq. (\ref{djdt0}), as
\begin{equation}
\frac{\partial \delta j_{l}({\bf k},t)}{\partial t}=-\frac{\zeta^{(s)}}{M}
\delta j_{l}({\bf k},t)+\frac{1}{M}f^{(s)}({\bf k},t)+ik\delta p({\bf k}
,t)+ik\delta \sigma _{K}({\bf k},t)+ik\delta \sigma _{U}({\bf k},t).
\label{djdt1}
\end{equation}
This equation, together with the continuity equation in Eq. (\ref{continuity}), couples the variables $\delta n({\bf k},t)$ and $\delta j({\bf k},t)$ to the variables $\delta
\sigma _{K}({\bf k},t)$ and $\delta \sigma _{U}({\bf k},t)$, whose time-evolution equations must now be determined; the GLE formalism provides a natural manner to do
that. For this, one first performs a straightforward statistical thermodynamical
calculation of the matrix $\chi $ of static correlations $\chi _{ij}\equiv
\left\langle a_{i}(0)a_{j}^{\ast }(0)\right\rangle $, with the following result \cite{scgle0}
\begin{equation}
{\bf \chi }=\left[
\begin{array}{cccc}
\chi _{nn} & 0 & 0 & 0 \\
0 & \chi _{jj} & 0 & 0 \\
0 & 0 & \chi _{KK} & 0 \\
0 & 0 & 0 & \chi _{UU}
\end{array}
\right],
\end{equation}
with $\chi _{nn}=S(k)$ and $\chi _{jj}=k_{B}T/M$, and with $\chi
_{KK}$ and $\chi _{UU}$ given by
\begin{equation}
\chi _{KK}=2\chi _{jj}^{2}
\end{equation}
and
\begin{equation}
\chi _{UU}=\chi _{jj}^{2}\left[ 1+n\int d{\bf r}g(r)\frac{\partial ^{2}\beta
u(r)}{\partial z^{2}}\left( \frac{1-\cos (kz)}{k^{2}}\right) -\frac{1}{S(k)}%
\right].
\end{equation}
We then write down the generalized Langevin equation for our vector $\delta {\bf a}(t)$ in the format
of Eq. (\ref{gle0}). For this, we first
notice that all the variables, except $\delta a_{2}(t)=\delta j_{l}({\bf k},t)$, are
even functions under time reversal. According to the Onsager reciprocity
relations, and the general anti-hermiticity of $\omega $ and hermiticity of
$L(z)$ \cite{delrio}, the only possibly non-zero elements of the
matrices $\omega $ and $L(t)$ are
\begin{equation}
\omega =\left[
\begin{array}{cccc}
0 & \omega _{nj} & 0 & 0 \\
-\omega _{nj}^{\ast } & 0 & \omega _{jK} & \omega _{jU} \\
0 & -\omega _{jK}^{\ast } & 0 & 0 \\
0 & -\omega _{jU}^{\ast } & 0 & 0
\end{array}
\right]
\end{equation}
\begin{equation}
L(t)=\left[
\begin{array}{cccc}
L_{nn} & 0 & L_{nK} & L_{nU} \\
0 & L_{jj} & 0 & 0 \\
L_{nK}^{\ast } & 0 & L_{KK} & L_{KU} \\
L_{nU}^{\ast } & 0 & L_{KU}^{\ast } & L_{UU}
\end{array}
\right]
\end{equation}
The determination of the non-zero elements of $\omega $ and of some of the
non-zero elements of $L(t)$ is rather straightforward, since, from the exact
continuity equation,
\begin{equation}
\frac{\partial \delta n({\bf k},t)}{\partial t}=ik\delta j_{l}({\bf k},t),
\end{equation}
we immediately see that $\omega _{nj}=-ik\chi _{jj}$, and that
$L_{nn}=L_{nK}=L_{nU}=0$. Similarly, from Eq. (\ref{djdt1}) we can see that $\omega _{jK}\chi _{KK}^{-1}=\omega _{jU}\chi _{UU}^{-1}=-ik$
and $L_{jj}\chi _{jj}^{-1}=\zeta^{(s)}/M$. As a result, all the elements of
the ``frequency'' matrix $\omega $ have been determined; in fact, only
the kinetic coefficients $L_{KK}(k,z)$, $L_{KU}(k,z)=L_{UK}(k,z)$, and
$L_{UU}(k,z)$ remain undetermined by general symmetry principles or physical
equations that complete the non-contracted description for the components of
the vector $\delta {\bf a}(t)$ are the mass and momentum conservation
equations, Eqs. (\ref{continuity}) and (\ref{djdt1}), along with the time-evolution equations for
$\delta \sigma _{K}({\bf k},t)$ and $\delta \sigma _{U}({\bf k},t)$, namely,
\begin{eqnarray}
\frac{\partial \delta \sigma _{K}({\bf k},t)}{\partial t} &=&ik\chi
_{KK}\chi _{jj}^{-1}\delta j_{l}({\bf k},t)-\int_{0}^{t}L_{KK}({\bf k}
,t-t^{\prime })\chi _{KK}^{-1}\delta \sigma _{K}({\bf k},t^{\prime })dt^{\prime }
\nonumber \\
&&-\int_{0}^{t}L_{UK}({\bf k},t-t^{\prime })\chi _{UU}^{-1}\delta \sigma
_{U}({\bf k},t^{\prime })dt^{\prime }+f_{K}({\bf k},t) \label{dsigmakdt}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial \delta \sigma _{U}({\bf k},t)}{\partial t} &=&ik\chi
_{UU}\chi _{jj}^{-1}\delta j_{l}({\bf k},t)-\int_{0}^{t}L_{UU}({\bf k}
,t-t^{\prime })\chi _{UU}^{-1}\delta \sigma _{U}({\bf k},t^{\prime })dt^{\prime }
\nonumber \\
&&-\int_{0}^{t}L_{UK}({\bf k},t-t^{\prime })\chi _{KK}^{-1}\delta \sigma
_{K}({\bf k},t^{\prime })dt^{\prime }+f_{U}({\bf k},t). \label{dsigmaudt}
\end{eqnarray}
In these equations, only $L_{KK}({\bf k},t)$, $L_{UU}({\bf k},t)$, and $%
L_{UK}({\bf k},t)$ remain unknown.
The extended dynamic description provided by Eqs. (\ref{continuity}), (\ref{djdt1}), (\ref{dsigmakdt}),
and (\ref{dsigmaudt}) can now be contracted down to a single time-evolution equation involving only
$\delta n({\bf k},t)$ \cite{delrio}. This essentially amounts to formally eliminating the variables $\delta j({\bf k},t),\ \delta
\sigma _{K}({\bf k},t)$, and $\delta \sigma _{U}({\bf k},t)$, from this system of equations. The result of such a contraction procedure reads \cite{scgle0}
\begin{equation}
\frac{\partial \delta n({\bf k},t)}{\partial t}=-\int_{0}^{t}L(k,t-t^{\prime
})\chi _{nn}^{-1}\delta n({\bf k},t^{\prime })dt^{\prime }+f({\bf k},t), \label{cont2}
\end{equation}
where $f({\bf k},t)$ is a random term with zero mean and time-dependent
correlation function $\left\langle f({\bf k},t)f(-{\bf k},0)\right\rangle
=L(k,t)$, with $L(k,t)$ given, in Laplace space, by
\begin{equation}
L(k,z)=\frac{k^{2}\chi _{jj}}{z+z^{(s)}+\chi _{jj}^{-1}\Delta L_{jj}(k,z)},
\end{equation}
with $z^{(s)}\equiv \zeta^{(s)}/M$ and
\begin{equation}
\Delta L_{jj}(k,z)=\frac{k^{2}\chi _{KK}}{z+L_{KK}\chi _{KK}^{-1}}+\frac{%
k^{2}\chi _{UU}\left[ 1-\frac{L_{KU}\chi _{UU}^{-1}}{z+L_{KK}\chi _{KK}^{-1}}%
\right] ^{2}}{z+L_{UU}\chi _{UU}^{-1}-\frac{\chi _{KK}^{-1}L_{KU}L_{UK}\chi
_{UU}^{-1}}{z+L_{KK}\chi _{KK}^{-1}}}.
\end{equation}
Multiplying Eq. (\ref{cont2}) by $\delta n(-{\bf k},0)$ and taking the equilibrium average, one obtains the time-evolution equation for the intermediate scattering function $F(k,t) \equiv \langle\delta n({\bf k},t)\delta n(-{\bf k},0)\rangle$, an equation that can be written as an expression for the Laplace transform $F(k,z)$ in terms of the memory functions $L_{KK}(k,z)$, $L_{KU}(k,z)=L_{UK}(k,z)$, and $L_{UU}(k,z)$, namely,
\begin{equation}
F(k,z)=\frac{S(k)}{z+\frac{k^{2}S^{-1}(k)\chi _{jj}}{z+z^{(s)}+\frac{k^{2}\chi
_{jj}^{-1}\chi _{KK}}{z+L_{KK}\chi _{KK}^{-1}}+\frac{k^{2}\chi
_{jj}^{-1}\chi _{UU}\left[ 1-\frac{L_{KU}\chi _{UU}^{-1}}{z+L_{KK}\chi
_{KK}^{-1}}\right] ^{2}}{z+L_{UU}\chi _{UU}^{-1}-\frac{\chi
_{KK}^{-1}L_{KU}L_{UK}\chi _{UU}^{-1}}{z+L_{KK}\chi _{KK}^{-1}}}}}.
\end{equation}
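This expression follows from Eq. (\ref{cont2}) in a single step, which we spell out here for the reader's convenience: since the random term averages out, $\left\langle f({\bf k},t)\delta n(-{\bf k},0)\right\rangle =0$, the Laplace transform of the resulting equation for $F(k,t)$ gives
\begin{equation}
zF(k,z)-S(k)=-L(k,z)\chi _{nn}^{-1}F(k,z),\qquad \mathrm{i.e.,}\qquad
F(k,z)=\frac{S(k)}{z+L(k,z)S^{-1}(k)},
\end{equation}
which, upon substitution of $L(k,z)$ and $\Delta L_{jj}(k,z)$ from the two previous expressions, yields the continued-fraction form above.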
At this point we can discuss the limit of vanishing solvent friction, $\zeta^{(s)} \to 0$. As discussed above, in this limit our Brownian fluid becomes a Newtonian system, in the sense that its microscopic dynamics is described by Eq. (\ref{eq1}) without the friction and fluctuating terms. Thus, the expression for $F(k,z)$ describing the collective dynamics of an atomic liquid can be obtained from the previous expression by simply setting $z^{(s)}=0$, i.e.,
\begin{equation}
F(k,z)=\frac{S(k)}{z+\frac{k^{2}S^{-1}(k)\chi _{jj}}{z+\frac{k^{2}\chi
_{jj}^{-1}\chi _{KK}}{z+L_{KK}\chi _{KK}^{-1}}+\frac{k^{2}\chi
_{jj}^{-1}\chi _{UU}\left[ 1-\frac{L_{KU}\chi _{UU}^{-1}}{z+L_{KK}\chi
_{KK}^{-1}}\right] ^{2}}{z+L_{UU}\chi _{UU}^{-1}-\frac{\chi
_{KK}^{-1}L_{KU}L_{UK}\chi _{UU}^{-1}}{z+L_{KK}\chi _{KK}^{-1}}}}}. \label{fdkz0}
\end{equation}
In a completely analogous manner we can derive the corresponding expression for
the \emph{self}-ISF $F_S(k,t)$, with the following result
\begin{equation}
F_S(k,z)=\frac{1}{z+\frac{k^{2}\chi _{jj}}{z+\frac{k^{2}\chi
_{jj}^{-1}\chi _{KK}}{z+L_{KK}^{(S)}\chi _{KK}^{-1}}+\frac{k^{2}\chi
_{jj}^{-1}\chi _{UU}^{(S)}\left[ 1-\frac{L_{KU}^{(S)}\chi _{UU}^{(S)-1}}{z+L_{KK}^{(S)}\chi
_{KK}^{-1}}\right] ^{2}}{z+L_{UU}^{(S)}\chi _{UU}^{(S)-1}-\frac{\chi
_{KK}^{-1}L_{KU}^{(S)}L_{UK}^{(S)}\chi _{UU}^{(S)-1}}{z+L_{KK}^{(S)}\chi _{KK}^{-1}}}}}, \label{fsdkz0}
\end{equation}
with
\begin{equation}
\chi _{UU}^{(S)}\equiv \frac{n\chi _{jj}^{2}}{k^{2}}\left[ \int
d{\bf r}g(r)\left(\frac{\partial ^{2}\beta u(r)}{\partial z^{2}}
\right) \right]. \label{chiuself}
\end{equation}
These general results will now serve as the basis for the analysis of the long-time dynamics of an atomic liquid, carried out in the following section.
\section{Long-time dynamic equivalence of atomic and colloidal liquids.}\label{sectionIII}
In this section we analyze the long-time (or small-frequency) limit of the general expressions for $F(k,z)$ and $F_S(k,z)$ in Eqs. (\ref{fdkz0}) and (\ref{fsdkz0}). For this purpose,
as an additional approximation (following Ref. \cite{scgle0}, but introduced here only for simplicity), let us first neglect the possible crossed kinetic couplings represented by the memory
functions $L_{KU}(k,z)=L_{UK}(k,z)$ in these equations. This leads
to the simpler expressions for the ISFs of an atomic
liquid, namely,
\begin{equation}
F(k,z)=\frac{S(k)}{z+\frac{k^{2}S^{-1}(k)\chi
_{jj}}{z+\frac{k^{2}\chi _{jj}^{-1}\chi _{KK}}{z+L_{KK}(k,z)\chi
_{KK}^{-1}}+\frac{k^{2}\chi _{jj}^{-1}\chi _{UU}}{z+L_{UU}(k,z)\chi
_{UU}^{-1}}}} \label{fkz}
\end{equation}
and
\begin{equation}
F_S(k,z)=\frac{1}{z+\frac{k^{2}\chi _{jj}}{z+\frac{k^{2}\chi
_{jj}^{-1}\chi _{KK}}{z+L_{KK}^{(S)}(k,z)\chi
_{KK}^{-1}}+\frac{k^{2}\chi _{jj}^{-1}\chi
_{UU}^{(S)}}{z+L_{UU}^{(S)}(k,z)\chi _{UU}^{(S)-1}}}}. \label{fskz}
\end{equation}
Eqs. (\ref{fkz}) and (\ref{fskz}) express $F(k,t)$ and $F_S(k,t)$ in terms of the unknown memory
functions $L_{KK}(k,z)$, $L_{UU}(k,z)$, $L_{KK}^{(S)}(k,z)$ and
$L_{UU}^{(S)}(k,z)$. To understand the properties of these memory functions, with the aim of introducing additional approximations or simplifications, it helps to analyze their physical meaning. For this, let us recall that the memory functions
$L_{KK}(k,z)$ and $L_{KK}^{(S)}(k,z)$ are associated with the
relaxation of the kinetic part $\sigma_K ^{{\alpha \beta }}({\bf
k},t)\equiv N^{-1/2}\sum_{i=1}^{N} v_{i}^{\alpha }v_{i}^{\beta }
e^{i{\bf k\cdot r}_{i}(t)}$ of the stress tensor, whose trace
$\sigma_K ({\bf k},t)\equiv N^{-1/2}\sum_{i=1}^{N}
\textbf{v}_{i}^{2} e^{i{\bf k\cdot r}_{i}(t)}$ is directly related
with the FT of the local kinetic energy density. Thus, $L_{KK}(k,z)$
and $L_{KK}^{(S)}(k,z)$ essentially describe the transport of
molecular kinetic energy, i.e., the transport of heat. These
transport processes occur primarily by means of molecular collisions
and quickly lead to a uniform distribution of the mean kinetic
energy of the particles, i.e., to thermal (but not thermodynamic!)
equilibrium. As a result, these memory functions may be expected to
be related to the heat conductivity, and to decay within molecular
collision times. The memory functions $L_{UU}(k,z)$ and
$L_{UU}^{(S)}(k,z)$, on the other hand, describe the relaxation of
the configurational component of the stress tensor, which involves
structural relaxation processes that may decay after much longer
relaxation times.
Because of this, if one is interested in the long-time behavior of the ISFs,
one may neglect the frequency-dependence of
$L_{KK}(k,z)$, and replace it by its zero-frequency limit,
\begin{equation}
L_{KK}(k,z)\approx L_{K}(k) \equiv \lim _{z\to 0} L_{KK}(k,z)
\end{equation}
in Eq. (\ref{fkz}), and similarly for $L_{KK}^{(S)}(k,z)$,
\begin{equation}
L_{KK}^{(S)}(k,z)\approx L_{K}^{(S)}(k) \equiv \lim _{z\to 0} L_{KK}^{(S)}(k,z),
\end{equation}
in Eq. (\ref{fskz}). In addition, we also assume that the kinetic
coefficients $L_{K}(k)$ and $L_{K}^{(S)}(k)$ are not fundamentally
different from each other, so that we neglect their possible
differences,
\begin{equation}
L_{K}(k) \approx L_{K}^{(S)}(k).
\end{equation}
At this point we take the desired long-time limit $t\gg\tau_0$ in
the resulting approximate expressions for $F(k,z)$ and $F_S(k,z)$.
This amounts to neglecting the frequency $z$ compared with the
frequencies $z_D \equiv L_{KK}^{(S)}(k,z)\chi_{KK}^{-1}$ and $z_B
\equiv k^{2}\chi _{jj}^{-1}\chi _{KK}/z_D$ in Eqs. (\ref{fkz})
and (\ref{fskz}), which leads to the ``overdamped'' form of these
expressions, namely,
\begin{equation}
F(k,z) = \frac{S(k)}{z+\frac{k^{2}S^{-1}(k)D^0}{1+C(k,z)}}
\label{fkz2}
\end{equation}
and
\begin{equation}
F_S(k,z) = \frac{1}{z+\frac{k^{2}D^0}{1+C_S(k,z)}}, \label{fskz2}
\end{equation}
where we have defined the memory functions $C(k,z)$ and $C_S(k,z)$
as
\begin{equation}
C(k,z) \equiv \left[\frac{k^{2}D^0\chi _{jj}^{-2}\chi
_{UU}}{z+L_{UU}(k,z)\chi _{UU}^{-1}}\right] \label{ckz}
\end{equation}
and
\begin{equation}
C_S(k,z) \equiv \left[\frac{k^{2}D^0\chi _{jj}^{-2}\chi
_{UU}^{(S)}}{z+L_{UU}^{(S)}(k,z)\chi _{UU}^{(S)-1}}\right],
\label{cskz}
\end{equation}
respectively.
In these equations we have parametrized the unknown frequency $z_D=L_{K}^{(S)}(k)\chi _{KK}^{-1}$ as
\begin{equation}
L_{K}^{(S)}(k)\chi _{KK}^{-1}= 2k^2D^0. \label{dkinetictheory3}
\end{equation}
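As a consistency check, which we make explicit here, this identification reproduces the overdamped prefactor of Eqs. (\ref{fkz2}) and (\ref{fskz2}): using $\chi _{KK}=2\chi _{jj}^{2}$, the frequencies introduced above become
\begin{equation}
\frac{k^{2}\chi _{jj}}{z_{B}}=\frac{z_{D}\,\chi _{jj}^{2}}{\chi _{KK}}=\frac{z_{D}}{2}=k^{2}D^{0},
\qquad z_{B}=\frac{\chi _{jj}}{D^{0}}.
\end{equation}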
The use of the symbol $D^0$ is, of course, not accidental, since this parameter can be identified with the self-diffusion coefficient that describes the sequence of ballistic random flights of a tracer particle as it collides with its neighbor particles. To see this, notice that in the conditions in which the effects of the configurational memory function $C_S(k,z)$ are negligible (such as in the low-density regime, in which $\chi
_{UU}^{(S)}=\chi_{UU}=0$), Eq. (\ref{fskz2}) becomes
\begin{equation}
F_S(k,z) \approx \frac{1}{z+k^2D^0},
\label{fkz0}
\end{equation}
or
\begin{equation}
F_S(k,t) \approx e^{-k^{2}D^0 t}. \label{fkt00}
\end{equation}
This result implies that the MSD is given by $W(t)\approx D^0t$, i.e., that the motion of a tracer particle after many collision times will be diffusive. The corresponding diffusion coefficient $D^0$ must then be identical to that determined by kinetic-theoretical arguments, i.e., must be given by $D^0 = (l_0)^2/\tau_0$, where $l_0$ and $\tau_0$ are, respectively, the mean free path and the mean free time. Since $l_0/\tau_0 =v_0$, $D^0$ can also be written as $D^0 = v_0l_0$. If we then estimate the mean free path $l_{0}$ to be given by $l_{0} \sim
1/n\sigma^2$, with $n\equiv N/V$ and with $\sigma$ being the
collision diameter of the particles, we then have that $D^0
\sim \sqrt{k_BT/M}/(n\sigma^2)$. In fact, the rigorous value of
$D^0$ is \cite{chapmancowling}
\begin{equation}
D^0\equiv \frac{3}{8\sqrt
\pi}\left(\frac{k_BT}{M}\right)^{1/2}\frac{1}{n\sigma^2}.
\label{dkinetictheory}
\end{equation}
The comparison of the overdamped expressions for $F(k,z)$ and $F_S(k,z)$ in Eqs. (\ref{fkz2})-(\ref{cskz}) above, with the corresponding overdamped results of a colloidal liquid (i.e., with Eqs. (4.24) and (4.33) of Ref. \cite{scgle0}), reveals the
remarkable formal identity between the long-time expressions for $F(k,t)$ and $F_S(k,t)$ of an atomic liquid, and the corresponding results for the analogous colloidal system. The fundamental difference between these two cases is to be found in the definition of the diffusion coefficient $D^0$, which in the present (atomic) case depends on temperature and density, and is given by the kinetic-theoretical result in Eq. (\ref{dkinetictheory}), whereas in colloidal liquids it is a constant, identical to the short-time self-diffusion coefficient given, for example, by the Einstein-Stokes expression in the absence of hydrodynamic interactions. Thus, this formal identity implies that
the long-time dynamic properties of an atomic liquid will then
coincide with the corresponding properties of a colloidal system
with the same $S(k)$, provided that the time is scaled as $D^0t$, with the respective meaning and definition of $D^0$. This observation has important implications, which can be tested, for example, by comparing the simulation results for $F_S(k,t)$ obtained by both molecular dynamics and Brownian dynamics for the same system and conditions.
\section{Test of the predicted long-time dynamic equivalence.}\label{sectionIV}
In this section we test the predicted long-time dynamic equivalence between a model atomic liquid and its corresponding Brownian fluid. This dynamic equivalence is tested here by comparing the macroscopic dynamics of the hard sphere liquid when the motion of its constituent particles is described, respectively, by Eq. (\ref{eq1}) without and with the solvent friction terms present, i.e., by performing and comparing molecular and Brownian dynamics simulations of these properties.
As a reference, let us first recall the exact short-time limit of the self-ISF of an atomic liquid. Since for correlation times $t$ shorter than the mean free time $\tau_0$ all the particles move ballistically, $[{\bf r}_i(t)-{\bf r}_i(0)]={\bf v}_i(0)t$, we have that $F_S(k,t)\approx (1/N)\langle\sum_{i=1}^N \exp {[i{\bf k}\cdot{\bf v}_i(0)t]}\rangle$. Using the equilibrium distribution of the initial velocities ${\bf v}_i(0)$, one can see that the exact short-time limit of the self-ISF is given by $F_S(k,t)=\exp(-\frac{1}{2}k^{2}v_{0}^{2}t^{2})$. This expression provides an excellent approximation at small volume fractions, where $F_S(k,t)$ has decayed to negligible values for $t\approx \tau_0$, as illustrated by its comparison in the main panel of Fig. \ref{fig1} with the MD-simulated $F_S(k,t)$ for the hard sphere fluid at $\phi =0.1$ and $k\sigma=7.1$.
The MD simulations were conducted on a soft-sphere system, and the results were then mapped onto those of the equivalent hard-sphere liquid as discussed in Ref. \cite{soft2}. The soft-sphere simulations were carried out using the velocity-Verlet algorithm with $N=1000$ particles of the same mass $M$ in a volume $V$, and a
time step $\Delta t = 1\times 10^{-3}\,t^{*}$, with $t^{*}\equiv\sqrt{M\sigma^{2}/\epsilon}$. During the
equilibration and production cycles, temperature was kept
constant by a simple rescaling of the velocities of the particles every 100
time steps. For high volume fractions we used polydisperse systems, where the diameters of the $N$ particles were
evenly distributed between $\overline{ \sigma} (1-w/2)$ and
$\overline{ \sigma} (1+w/2)$, with $\overline \sigma$ being the mean
diameter. We consider the case $w=0.3$, corresponding to a
polydispersity $s_\sigma = w/\sqrt{12}=0.0866$. The length, mass, and time
units employed are, respectively, $\overline{\sigma}$, $M$, and
$\overline{\sigma}\sqrt{M/k_BT}$. The simulations are carried out for an array of volume fractions $\phi=(\pi/6) n \overline{\sigma^3}$ where
$\overline{\sigma^3}$ is the third moment of the size distribution and $n$ is the total number density $n\equiv N/V$.
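As an illustration of this setup, the following minimal Python sketch (written only to make the conventions explicit; it is not the production simulation code) generates the evenly distributed polydisperse diameters and the number density corresponding to a target volume fraction:
\begin{verbatim}
import numpy as np

N, w, sigma_bar = 1000, 0.3, 1.0             # particles, width, mean diameter
# diameters evenly distributed in [sigma_bar(1-w/2), sigma_bar(1+w/2)]
d = sigma_bar * (1.0 + w * (np.arange(N) / (N - 1) - 0.5))
s_sigma = d.std() / d.mean()                 # polydispersity, ~ w/sqrt(12) = 0.0866
phi = 0.5                                    # target volume fraction
n = 6.0 * phi / (np.pi * np.mean(d**3))      # from phi = (pi/6) n <sigma^3>
L_box = (N / n) ** (1.0 / 3.0)               # side of the cubic simulation box
\end{verbatim}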
Defining the relaxation time $\tau_{\alpha}$ by the condition $F_{S}(k,\tau_{\alpha})=1/e$, we have that in the ballistic regime $\tau_{\alpha}$ can be approximated by $\tau_{\alpha}=\frac{\sqrt{2}}{kv_{0}}$, which is the low-density limiting value represented in the inset of Fig. \ref{fig1} by the horizontal dashed line. The inset also plots the simulation results for $\tau_{\alpha}$ in a wide range of volume fractions, to show the deviations from this limiting behavior as the density is increased. Beyond this low-density regime, these deviations become increasingly more important, as also illustrated in the main panel of Fig. \ref{fig1} by the MD simulation results for $F_S(k,t)$ at the near-freezing volume fractions $\phi=0.4$ and 0.5. Here, of course, the $\phi$-independent limit $F_S(k,t)=\exp(-\frac{1}{2}k^{2}v_{0}^{2}t^{2})$ is clearly inadequate, although the Gaussian approximation, $F_S(k,t) \approx \exp[-k^2 W(t)]$ still provides an accurate representation of the short-time decay of this function. This is illustrated by the solid lines of the main panel of Fig. \ref{fig1}, which result from employing the MD-simulated data for the mean squared displacement $W(t)$ in $F_S(k,t)= \exp[-k^2 W(t)]$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.35]{figure1.eps}
\caption{Molecular dynamics results for the self-ISF $F_S(k,t)$ of a
hard-sphere fluid as a function of time $t$ (expressed in ``molecular'' units $[\sigma/v_0]$) at fixed wave-vector $k\sigma=7.1$, and volume fractions $\phi$ = 0.1 (solid circles), 0.4 (empty circles), and 0.5 (striped circles). The dashed line is the exact limit $F_S(k,t)=\exp(-\frac{1}{2}k^{2}v_{0}^{2}t^{2})$ and the solid lines are the results of the Gaussian approximation $F_S(k,t) \approx \exp[-k^2 W(t)]$, with $W(t)$ given by the same molecular dynamics simulation data. In the inset we plot the relaxation time $\tau_{\alpha}$ (also in units of $[\sigma/v_0]$), defined by the condition $F_{S}(k,\tau_{\alpha})=1/e$, for these and other volume fractions; the horizontal dashed line indicates the limiting value $v_{0}\tau_{\alpha}/\sigma=\frac{\sqrt{2}}{k\sigma}=0.199$. } \label{fig1}
\end{center}
\end{figure}
With this low-density short-time ballistic limiting behavior as a reference, let us now compare the simulation results for $F_S(k,t)$ obtained by both molecular dynamics and Brownian dynamics simulations for the same system and conditions. For this comparison, in addition to the molecular dynamics simulations, we performed Brownian dynamics simulations of the hard sphere liquid using the conventional Ermak--McCammon Brownian dynamics algorithm \cite{tildesley,ermakmccammon} on a soft-sphere fluid, and then mapping the results onto those of the hard-sphere liquid according to the methodology proposed and explained in Ref. \cite{dynamicequivalence}. The resulting comparison provides a test of the theoretical prediction of the previous sections, that the dynamics of an atomic liquid coincides with the dynamics of the corresponding Brownian fluid in the \emph{opposite} regime, i.e., for high densities and long times. Thus, Fig. \ref{fig2}(a) presents both simulation results for the hard sphere system at three volume fractions, $\phi=0.50$, 0.548, and 0.571, representing the metastable regime of the hard sphere liquid. As this figure illustrates, plotting $F_S(k,t)$ as a function of the scaled time $t^*\equiv D^0t/\sigma^2$ clearly exhibits the expected long-time dynamic equivalence between atomic and Brownian liquids. We notice, however, that this long-time dynamic equivalence is not observed in $F_S(k,t)$ at lower volume fractions, corresponding to the stable fluid regime ($\phi \lesssim 0.45$). The reason for this is that in such a regime, illustrated in Fig. \ref{fig1}, the decay of $F_S(k,t)$ to a value $\approx e^{-1}$ occurs within times comparable to the mean free time $\tau_0$ and is, hence, intrinsically ballistic. It is only at higher volume fractions that this long-time dynamic equivalence is fully exhibited by the \emph{diffusive} decay of $F_S(k,t)$, as illustrated by Fig. \ref{fig2}(a).
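For reference, the position-Langevin update at the core of the Ermak--McCammon algorithm, in its free-draining form without hydrodynamic interactions (the form relevant here), can be sketched as follows; this schematic Python function is an illustration, not the actual simulation code:
\begin{verbatim}
import numpy as np

def bd_step(r, forces, D0, kBT, dt, rng):
    """One Ermak-McCammon step without hydrodynamic interactions:
    deterministic drift (D0/kBT)*F*dt plus a Gaussian random
    displacement of variance 2*D0*dt per Cartesian component."""
    drift = (D0 / kBT) * forces * dt
    noise = rng.normal(0.0, np.sqrt(2.0 * D0 * dt), size=r.shape)
    return r + drift + noise
\end{verbatim}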
\begin{figure}
\begin{center}
\includegraphics[scale=.27]{figure2a.eps}
\includegraphics[scale=.27]{figure2b.eps}
\caption{(a) Molecular dynamics (solid symbols) and Brownian dynamics (empty symbols) simulation results for the self-intermediate scattering function $F_S(k,t)$ of the hard sphere liquid at volume fractions $\phi=0.50$, 0.548, and 0.571, evaluated at the main peak of the static structure factor and plotted as a function of the dimensionless time $t^*\equiv D^0t/\sigma^2$. (b) Volume fraction dependence of the dimensionless $\alpha$-relaxation time $\tau^*\equiv k^2D^0\tau_\alpha$ of the hard sphere liquid determined from the corresponding molecular dynamics (solid symbols) and Brownian dynamics (empty symbols) simulations. The dashed curve represents the low-density limit $\tau^* = (k\sigma)\sqrt{2\pi}/(16\phi)$, whereas the solid curve corresponds to the results of the SCGLE theory (Eqs. (1),(2),(5)-(8) of Ref. \cite{todos2}, with $k_c =
1.305(2\pi/\sigma))$.}
\label{fig2}
\end{center}
\end{figure}
Another manner to summarize this observation is to compare the volume fraction dependence of the relaxation time $\tau_\alpha$ of both molecular and Brownian dynamics. In Fig. \ref{fig2}(b) these simulation results are presented in terms of the dimensionless $\alpha$-relaxation time $\tau^*\equiv k^2D^0\tau_\alpha$. For a Brownian liquid $\tau_\alpha\to 1/k^2D^0$ as $\phi \to 0$, with a $\phi$-independent short-time diffusion coefficient $D^0$, so that $\tau^*\to 1$ as $\phi \to 0$. As discussed in the previous section, however, for atomic liquids $\tau_{\alpha} \to \sqrt{2}/kv_{0}$ as $\phi \to 0$, so that in the same limit $\tau^* \to (k\sigma)\sqrt{2\pi}/(16\phi)$, where we have taken into account the fact that in this case, the short-time diffusion coefficient $D^0$ is given by the kinetic-theoretical result in Eq. (\ref{dkinetictheory}). This limiting behavior was represented by the horizontal dashed line of Fig. \ref{fig1}, and is now represented by the dashed curve of Fig. \ref{fig2}(b). From the comparison in this figure one can see that the long-time dynamic equivalence manifests itself in the collapse of the molecular and Brownian dynamics data for $\tau^*$ at high volume fractions. For smaller volume fractions, the differences in the short-time behavior of $F_S(k,t)$ lead to the observed differences between the molecular and Brownian dynamics results for $\tau^*$ below a crossover volume fraction located near the freezing transition of the HS liquid.
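For the reader's convenience, this low-density limit follows directly from Eq. (\ref{dkinetictheory}): with $v_{0}=\sqrt{k_{B}T/M}$ and $n\sigma ^{3}=6\phi /\pi $,
\begin{equation}
\tau ^{*}=k^{2}D^{0}\tau _{\alpha }=\frac{\sqrt{2}\,kD^{0}}{v_{0}}
=\frac{3\sqrt{2}\,(k\sigma )}{8\sqrt{\pi }\,n\sigma ^{3}}
=\frac{(k\sigma )\sqrt{2\pi }}{16\phi }.
\end{equation}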
The solid curve in Fig. \ref{fig2}(b) is the prediction for $\tau^*\equiv k^2D^0\tau_\alpha$ of the self-consistent generalized Langevin equation (SCGLE) theory of colloid dynamics, i.e., of Eqs. (1),(2),(5)-(8) of Ref. \cite{todos2}. These are actually Eqs. (\ref{fkz2}) and (\ref{fskz2}) above, complemented by the closure relation $C(k,t)=C_S(k,t)=\lambda(k)\Delta \zeta(t)$, where $\Delta \zeta(t)$ is the time-dependent friction function describing the configurational contribution to the friction force on a tracer particle (given by Eq. (6) of Ref. \cite{todos2}). The static structure factor of the hard sphere system, needed as an input in these equations, is provided by the Percus-Yevick approximation with its Verlet-Weis correction \cite{percusyevick,verletweis}.
The function $\lambda (k) = 1/[1+(k/k_c)^2]$ is a phenomenological ``interpolating'' function, with the cutoff wave-vector $k_c$ used here to calibrate the SCGLE theory by optimizing the overall agreement of its predictions with the data for $\tau^*$ constituted by the totality of the Brownian dynamics results (squares) and by the molecular dynamics data corresponding to the metastable liquid ($0.5\lesssim \phi$) in this figure. This calibration procedure results in the value $k_c = 1.305(2\pi/\sigma)$.
As stated above, the short-time differences between the molecular and the Brownian dynamics data for $\tau^*$ in Fig. \ref{fig2}(b) appear at densities below a crossover volume fraction located, for the data in this figure, near the freezing transition of the HS liquid. The location of this crossover depends, however, on the wave-vector $k$ at which the decay of $F_S(k,t)$ is being observed, moving to a vanishing value in the long-wavelength limit, $k\to 0$. This means that in this limit the molecular and Brownian dynamics results for $\tau^*$ will be identical at all volume fractions. In fact, this is also what happens to the most representative long-time dynamic property, namely, the long-time self-diffusion coefficient $D_L$, defined as $D_L \equiv \lim_{t \to \infty}
\langle(\Delta \textbf{r}(t))^2\rangle / 6t$, but also given by $D_L= \lim_{k\to 0} \lim_{z\to 0} [k^2F_S(k,z)]^{-1}= D^0/[1+C_S(k=0,z=0)]$. According to Eq. (\ref{fskz2}) above, and within the SCGLE closure $C_S(k,t)=\lambda(k)\Delta \zeta(t)$, for an atomic system this parameter, scaled as $D^*\equiv D_L/D^0$, can be written as
\begin{equation}
D^*= 1/[1+\int _0^{\infty}\Delta \zeta^*(t)dt],
\label{dstar}
\end{equation}
with $\Delta \zeta^*(t)$ given, according to Eq. (6) of Ref. \cite{todos2}, by
\begin{equation}
\Delta \zeta ^*(t) =\frac{D^0}{3\left( 2\pi \right) ^{3}n}\int d
{\bf k}\left[\frac{ k[S(k)-1]}{S(k)}\right] ^{2}F(k,t)F_S(k,t).
\label{dzdt0p}
\end{equation}
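Since the integrand depends on ${\bf k}$ only through $k=|{\bf k}|$, the angular integration can be performed explicitly, giving the equivalent one-dimensional form
\begin{equation}
\Delta \zeta ^{*}(t)=\frac{D^{0}}{6\pi ^{2}n}\int_{0}^{\infty }dk\,k^{4}
\left[ \frac{S(k)-1}{S(k)}\right] ^{2}F(k,t)F_{S}(k,t).
\end{equation}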
These equations, however, are identical to their colloidal counterparts. Thus, they imply that the parameter $D^*$ of an atomic liquid must be indistinguishable from the corresponding parameter of the equivalent colloidal system with the same interactions and the same static structure factor.
\begin{figure}
\begin{center}
\includegraphics[scale=.32]{figure3.eps}
\caption{ Long-time self-diffusion coefficient $D_L (\phi)$ of the
hard-sphere fluid determined by molecular
dynamics simulations \cite{gabriel0,gabriel}, expressed in ``atomic units'' $\sigma(k_BT/M)^{1/2}$ (empty
diamonds), and normalized as $D^* (\phi) \equiv D_L(\phi)/ D^0(\phi)$, with
$D^0(\phi)$ given by Eq. (\ref{dkinetictheory}) (full diamonds). The other full symbols
are the Brownian dynamics simulation results for $D^*$ from Refs.
\cite{cichocki} (triangles) and \cite{tokuyamabd} (circles). }
\label{fig3}
\end{center}
\end{figure}
The accuracy of this important and distinct prediction can also be checked by comparing the corresponding molecular and Brownian dynamics results.
Thus, in Fig. \ref{fig3} we plot molecular dynamics data for
$D_L(\phi)$ of a hard-sphere fluid both in the ``usual'' atomic
units $\sigma(k_BT/M)^{1/2}$, and scaled as $D^* (\phi) \equiv
D_L(\phi)/ D^0(\phi)$, with $D^0(\phi)$ given by Eq.
(\ref{dkinetictheory}). The same figure also presents available
Brownian dynamics simulation results for $D_L(\phi)$ of the hard
sphere system without hydrodynamic interactions, also scaled as $D^*
(\phi)\equiv D_L(\phi)/ D^0$, but with $D^0$ being the
$\phi$-independent short-time self-diffusion coefficient of the
Brownian particles. Clearly, the ``colloidal'' and the ``atomic''
results for $D^*$ collapse onto the same curve, which we denote by
$D^*_{HS}(\phi)$. One immediate and important consequence of this comparison is, for example, that L\"owen's dynamic criterion for freezing \cite{lowen} now applies to both the atomic and the colloidal hard sphere liquid, i.e., the condition $D^*_{HS}(\phi)\approx 0.1$ occurs at $\phi=\phi_{HS}^{(f)}=0.494$ in both cases. The comparison in this figure, however, is only one particular manifestation of the more general long-time dynamic scaling suggested by the present work, whose applications were also illustrated by the other results presented in this section.
\section{Summary and discussion.}\label{sectionV}
In this paper we have discussed the relationship between the dynamics of atomic and Brownian liquids, by describing the macroscopic dynamics of both kinds of systems within the same theoretical formalism. We have based this discussion on the application of the generalized Langevin equation formalism to the derivation of general memory-function expressions for the (collective and self) intermediate scattering functions of an atomic liquid. The actual derivation, however, consisted in reviewing the previous derivation \cite{scgle0} of the time-evolution equations for $F(k,t)$ and $F_S(k,t)$ of the corresponding Brownian fluid, keeping track of the effects of the solvent friction. At the end of such a derivation the zero-friction limit was taken, to obtain the corresponding time-evolution equations for $F(k,t)$ and $F_S(k,t)$ of our atomic liquid (Eqs. (\ref{fdkz0}) and (\ref{fsdkz0})).
We then analyzed the long-time limit of these results for $F(k,t)$ and $F_S(k,t)$. The comparison of such overdamped expressions with the corresponding results in the case of a colloidal liquid revealed the
remarkable formal identity between the long-time expressions for $F(k,t)$ and $F_S(k,t)$ of atomic and colloidal liquids. As discussed in Sect. III, the fundamental difference between these two cases lies in the definition of the diffusion coefficient $D^0$; in atomic liquids it depends on temperature and density, and is given by the kinetic-theoretical result in Eq. (\ref{dkinetictheory}), whereas in colloidal liquids it is a constant, given by the density-independent Einstein-Stokes value in the absence of hydrodynamic interactions. Let us mention that this dynamic equivalence can also be inferred by the derivation of the (generalized) Langevin equation that describes the motion of representative tagged particles in an atomic liquid \cite{atomictracerdiff}. The atomic-to-Brownian long-time dynamic equivalence thus seems to be a very robust prediction, with important physical consequences. It implies, for example, that in an atomic system, the self-diffusion coefficient $D^{0}$ determined by kinetic theory plays the same role as the short-time self-diffusion coefficient $D^{(s)}$ in colloidal liquids. It also implies that the long-time dynamic properties of an atomic liquid will
coincide with the corresponding properties of a colloidal system
with the same $S(k)$, provided that the time is scaled as $D^0t$, with the respective meaning and definition of $D^0$.
In section IV we tested this observation by comparing the simulation results for $F_S(k,t)$ obtained by both molecular dynamics and Brownian dynamics for the hard sphere system. As mentioned at the end of the previous section, one important consequence is that L\"owen's dynamic criterion for freezing \cite{lowen} now applies to both the atomic and the colloidal hard sphere liquid. This result, taken together with the dynamic equivalence between soft- and hard-sphere liquids recently discussed in Ref. \cite{soft2}, further extends the application of this criterion to soft-sphere molecular liquids. The most relevant implications of this dynamic equivalence have been corroborated by the systematic comparisons between molecular and Brownian dynamics simulations of the sort illustrated in this paper. A summary of this analysis has been advanced in a recent brief communication \cite{atombrownequivletter}.
We should mention, in addition, that in reality the validity of the present dynamic correspondence between atomic and colloidal liquids should extend to colloidal systems involving hydrodynamic interactions, provided that the corresponding effects enter only through the value of the short-time self-diffusion coefficient $D^{(s)}(\phi)$, which should then play the role of a density-dependent $D^0$, as suggested in \cite{prlhi}.
Besides analyzing further these important predictions, we are in the process of applying the general expressions for $F(k,t)$ and $F_S(k,t)$ for an atomic liquid derived in this paper, to the development of a self-consistent scheme to calculate these properties. The intention is to extend to atomic liquids the self-consistent generalized Langevin equation (SCGLE) theory of colloid dynamics \cite{scgle1,scgle2}, including the description of dynamic arrest phenomena \cite{rmf,todos1,todos2} and the recently developed first-principles theory of equilibration and aging \cite{noneqscgle0,noneqscgle1}. This, however, will be reported separately.
\bigskip
ACKNOWLEDGMENTS: The authors are grateful to G. P\'erez-\'Angel for providing the molecular dynamics data in Fig. 3, and to L. E. S\'anchez-D\'iaz, P. Mendoza-M\'endez, and A. Vizcarra-Rend\'on, for valuable discussions. L. L.-F. and M. M.-N. acknowledge the kind hospitality of the Joint Institute for Neutron Sciences (Oak Ridge, TN), where part of this manuscript was written. We are grateful to W.-R. Chen and T. Egami for stimulating discussions.
This work was supported by the Consejo Nacional de
Ciencia y Tecnolog\'{\i}a (CONACYT, M\'{e}xico) through grants 84076 and
132540 and through the Red Tem\'atica de la Materia Condensada Blanda.
\vskip.5cm
\section*{References}
\section{Introduction}
Binary and multiple stars are essential objects in many fields of astrophysics and the statistics of stellar multiplicity, down to planetary mass companions, is an observable of fundamental importance.
For stellar physics, binaries allow for the precise determination of stellar masses, down to sub-percent accuracy \citepads{2021A&ARv..29....4S}.
Pairs of stars sharing the same age and initial chemical composition but with, for instance, slightly different masses, are valuable and highly constraining test cases for stellar models.
Eclipsing binaries, detected in large numbers by space photometry missions (see, e.g., \citeads{2016AJ....151...68K}) are key targets, both for modeling \citepads{2017A&A...608A..62H} and distance determinations \citepads{Pietrzynski:2019aa}.
The influence of binary stars on the formation of our Galaxy, its evolution, and composition has many facets.
Multiplicity deeply influences the physical mechanisms through which stars form, affecting the stellar initial mass function.
The evolution of binary stars may also diverge considerably from that of single stars, for instance, through mass exchange.
This is particularly common during the final stages of their evolution, resulting in spectacular events such as novae or type Ia supernovae.
The coalescence of the compact products of the evolution of massive binary stars is also a major source of gravitational wave emission \citepads{2017PhRvL.119p1101A}.
Giant and telluric planets with extremely diverse properties are now known in large numbers, mainly from the radial velocity and transit techniques \citepads{2016PASP..128f6001F, 2015ARA&A..53..409W, 2014PASP..126..827H} but also from direct imaging \citepads{2019A&A...631A.155B, 2019AJ....158...13N, 2015ApJ...815..108M}.
Stellar binarity has a major impact on the stability of planetary systems (see, e.g., \citeads{2016AJ....152....8K, 2014ApJ...791..111W}).
High-precision astrometry offers a complementary way to detect and characterize exoplanets through the detection of their influence on the space trajectory of their host stars.
Thanks to the unprecedented accuracy of its astrometric measurements and its sensitivity to faint objects, Gaia provides us with a direct way to constrain the presence of companions, exploring the planetary mass regime for a large number of stars in the solar neighborhood.
The long time baseline of 24.75\,years between the Hipparcos and Gaia Early Data Release 3 (EDR3) position measurement epochs opens the possibility to determine the long-term proper motion (PM) vectors of the Hipparcos catalog stars with a high level of accuracy.
For a single star, the long-term and short-term PM vectors are identical (apart from the geometrical perspective acceleration), but they diverge in the presence of a secondary orbiting body.
The presence of a faint secondary object results in a shift of the barycenter of the system away from its photocenter (usually located close to the primary star's position). The orbital motion of the pair induces a time-dependent displacement of the photocenter around the center of mass.
The comparison of the long-term PM vector with the Gaia or Hipparcos short-term PM vectors therefore opens the possibility to search for an orbiting companion through its effect on the PM of the primary target.
Historically, this principle was first employed by \citetads{1844MNRAS...6R.136B} to discover the invisible companion of \object{Sirius}, the white dwarf \object{Sirius B}, and it was also applied to various types of stars, for instance, by \citetads{1999A&A...346..675W}, \citetads{2004ASPC..318..141J}, \citetads{2005AJ....129.2420M}, \citetads{2007A&A...464..377F}, and \citetads{2008ApJ...687..566M}.
In the present work, we measured the PM offset as a ``proper motion anomaly'' (PMa), namely, the difference between the ``instantaneous'' PM vector from the Hip2 or EDR3 catalogs and the long-term PM vector.
In Sect.~\ref{HG-PMa}, we present a revision of the PMa of Hipparcos stars using new astrometry from the EDR3 \citepads{GaiaEDR3content}. After briefly defining the PMa and describing how it can be interpreted in terms of companion properties, we evaluate the sensitivity, completeness, and accuracy of our PMa catalog. We then introduce, in Sect.~\ref{cpm-section}, the procedure we adopted to identify common proper-motion (CPM) gravitationally bound candidate companions. In Sect.~\ref{discussion}, we discuss the global results of our survey, the possible use of the renormalised unit weight error (RUWE) parameter as an additional indicator for binarity, and the combination of the PMa and CPM techniques. Finally, we present sample analyses of specific targets in Sect.~\ref{examples}, followed by our conclusions in Sect.~\ref{conclusion}.
\section{Hipparcos-Gaia proper motion anomaly \label{HG-PMa}}
\subsection{General principle}
The principle underlying the detection of companions from their influence on the PM of a star relies on the comparison of the long-term and short-term PMs of this star. For a single star, the long-term PM determined from the positions measured at the Hipparcos and EDR3 epochs (24.75 years apart) is identical to the short-term PM measured by each mission over a few years.
For a binary star, the short-term PM additionally includes the tangential component of its orbital velocity. As the latter changes with time over the orbital period of the system, a deviation appears between the short-term and long-term PMs of the star, owing to the curvature of its sky trajectory.
The PMa, namely, the difference between the short-term and long-term PM, is therefore an efficient and sensitive indicator to detect non-single stars, as it is a proxy for the orbital velocity of the photocenter of the system around its center of mass.
Thanks to the long time baseline between the Hipparcos and Gaia epochs, the PMa can now be measured with a very high accuracy, which translates to substellar mass sensitivity for the companion of nearby stars.
Further details on the PMa are available in \citetads{2019A&A...623A..72K}.
Examples of analyses of binary and multiple stars based on Hipparcos and Gaia astrometry can be found for instance in the following studies:
\citetads{2018ApJS..239...31B}, \citetads{2019AJ....158..140B}, \citetads{2019ApJ...871L...4D}, \citetads{2019A&A...623A.116K}, \citetads{2020ApJ...904L..25C}, \citetads{2020MNRAS.496.1922B}, \citetads{2021A&A...645A...7K}, \citetads{2021ApJS..254...42B}, \citetads{2021arXiv210907525B}, and \citetads{2021arXiv210608249K}.
\subsection{Input data, basic corrections, and PMa computation\label{inputdata}}
We adopted the Hipparcos catalog at epoch J1991.25 (\citeads{2007ASSL..350.....V}, hereafter `Hip2', 117\,955 sources) and the Gaia EDR3 catalog \citepads{2016A&A...595A...1G, 2021A&A...650C...3G, GaiaEDR3content, 2020yCat.1350....0G} at epoch J2016.0.
For the collection of most of the data used in the present work, we made extensive use of the \texttt{astroquery} library \citepads{2017ascl.soft08004G} distributed as part of the \texttt{Astropy} library \citepads{2013A&A...558A..33A,2018AJ....156..123A} to access the ViZieR online database \citepads{2000A&AS..143...23O} at the CDS.
For the cross-identification of the Hip2 stars in the EDR3 catalog, we started from the \texttt{gaiaedr3.hipparcos2\_best\_neighbour} list provided as part of the EDR3 \citepads{2017A&A...607A.105M,2019A&A...621A.144M}, which has 99\,525 records (84.4\% of Hip2).
For the missing Hip2 sources, we searched the EDR3 catalog by shifting the Hip2 source position to epoch J2016.0 using the Hip2 PM vector.
We then classically employed magnitude, parallax, and angular proximity criteria to select the most probable candidate source in the EDR3 catalog.
A total of 116\,343 sources are present in our PMa catalog (98.6\% of the Hip2 catalog), out of which 568 stars (0.5\%) have neither DR2 nor EDR3 PMa vectors (due, e.g., to the Gaia PM vector being unavailable) and 1535 stars have no EDR3 PMa vector (1.3\%).
We applied the corrections to the EDR3 parallaxes as prescribed by \citetads{2021A&A...649A...4L}\footnote{\url{https://www.cosmos.esa.int/web/gaia/edr3-code}}, and corrected the PM of bright sources for the spin of the Gaia frame with respect to the ICRS determined by \citetads{2021A&A...649A.124C}. We also inflated the parallax error bars according to \citetads{2021MNRAS.506.2269E}.
For simplicity, we use $\mu_{\mathrm{\alpha}}$ to denote the PM along the right ascension axis, $\mu_{\mathrm{\alpha}} \cos(\delta)$.
We collected ancillary data (photometry, radial velocity, etc.) and estimated the mass and radius of each star following the methodology described by \citetads{2019A&A...623A..72K}.
The long-term Hipparcos-Gaia PM vector is computed from the difference in coordinates between the Hipparcos and Gaia catalogs, scaled by the time difference between the two catalogs (24.75 years for Gaia EDR3). The PMa vector coordinates are computed by subtracting the Hipparcos-Gaia PM vector from the individual Hipparcos and Gaia PM vectors, and the associated uncertainties are computed using a simple Monte Carlo approach. This computation is conducted in three dimensions for the stars located within 20\,pc of the Sun, to properly take into account the light-time propagation and the perspective acceleration. These effects are particularly important for the nearest stars with a fast PM (e.g., \object{Proxima Centauri} and \object{Barnard's star}). For stars beyond this distance, a two-dimensional computation (in tangential coordinates) was implemented to reduce the computation time, as the perspective acceleration is negligible, but still taking into account the light-time propagation.
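To make the procedure concrete, a minimal two-dimensional Python sketch of this computation (in tangent-plane coordinates, with a hypothetical function name and inputs; the actual pipeline additionally handles the 3D case and the light-time propagation) could read:
\begin{verbatim}
import numpy as np

DT_HG = 24.75                  # Hipparcos -> EDR3 baseline, years
rng = np.random.default_rng(0)

def pma_edr3(dpos_mas, dpos_err, pm_g3, pm_g3_err, n_mc=100000):
    """Monte Carlo PMa: dpos_mas is the Hipparcos -> EDR3 position
    difference (mas, tangent plane); pm_g3 the EDR3 PM (mas/yr)."""
    dpos = rng.normal(dpos_mas, dpos_err, (n_mc, 2))
    mu_hg = dpos / DT_HG                        # long-term PM, mas/yr
    mu_g3 = rng.normal(pm_g3, pm_g3_err, (n_mc, 2))
    dmu = mu_g3 - mu_hg                         # PMa samples
    return dmu.mean(axis=0), dmu.std(axis=0)
\end{verbatim}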
As a remark, the Hipparcos astrometry of visual pairs with separations of 10 to $20\arcsec$ is often distorted owing to the satellite's measuring system design (Tokovinin, private comm.). As a result, the PMa computed for the components of such physically unrelated pairs can be spurious.
\subsection{Companion properties and sensitivity function\label{sensitivity_pma}}
As discussed by \citetads{2019A&A...623A..72K}, the mass of the companion of a primary star exhibiting a PMa signal can be constrained using the measured tangential velocity anomaly.
It is, however, degenerate with its orbital radius $r$ following the relation:
\begin{equation}\label{m2mass}
\frac{m_2}{\sqrt{r}} = \sqrt{\frac{m_1}{G}}\,v_\mathrm{1} = \sqrt{\frac{m_1}{G}}\,\left( \frac{\Delta \mu [\mathrm{mas\,a}^{-1}] }{\varpi [\mathrm{mas}]} \times 4740.470 \right)
,\end{equation}
where $m_1$ is the mass of the primary star, $m_2$ the mass of the companion, $G$ the universal gravitational constant, $\Delta \vec{\mu}$ the PMa, $v_1$ the tangential orbital velocity of the primary star, and $\varpi$ its parallax.
The sensitivity of the PMa technique in terms of secondary mass therefore decreases linearly with the distance of the target.
In this expression, we assume that the orbit is circular and observed ``face-on,'' and that the photocenter of the system is located close to the primary star (the secondary source is faint compared to the primary).
Also, the practical sensitivity of the PMa technique is limited by the time window smearing of the short-term PM measurements (Hipparcos or Gaia), as well as the limited time baseline between the Hipparcos and Gaia epochs for the estimation of the long-term PM vector (see below).
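As an illustration, Eq. (\ref{m2mass}) can be evaluated with a few lines of Python (a sketch using SI constants; the function name and example values are ours and arbitrary):
\begin{verbatim}
import numpy as np

G, MSUN, MJUP, AU = 6.674e-11, 1.989e30, 1.898e27, 1.496e11   # SI units

def m2_mjup(m1_msun, dmu_mas_yr, plx_mas, r_au):
    """Companion mass (Jupiter masses) from Eq. (m2mass), assuming a
    circular, face-on orbit of radius r_au."""
    v1 = 4740.470 * dmu_mas_yr / plx_mas        # tangential velocity, m/s
    m2 = np.sqrt(m1_msun * MSUN / G) * v1 * np.sqrt(r_au * AU)
    return m2 / MJUP

# A 1 Msun star at 10 pc (plx = 100 mas) with a 0.1 mas/yr PMa:
# m2_mjup(1.0, 0.1, 100.0, 3.0) gives ~0.3, i.e., ~0.3 M_Jup at r = 3 au.
\end{verbatim}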
For a more realistic definition of the expected companion properties, we include the uncertainty on the orbit inclination in a statistical way, following Sect. 3.6 of \citetads{2019A&A...623A..72K}. The influence of the orbital eccentricity is limited (in a statistical sense), as it does not introduce a global bias, but it will affect individual measurements obtained, for instance, near the periastron or apastron for which the orbital velocity takes extreme values.
The sensitivity function $m_2 = f(r)$ is affected by the fact that the Hipparcos and Gaia catalog measurements are smeared over the observing time window of the two missions. The astrometric transits were obtained over a period of $\delta t_\mathrm{H} = 1227$\,d \citepads{1997A&A...323L..49P}, $\delta t_\mathrm{G2} = 668$\,d \citepads{2018A&A...616A...1G}, and $\delta t_\mathrm{G3} = 1038$\,d \citepads{GaiaEDR3content}, respectively, for Hipparcos, Gaia DR2 and Gaia EDR3. This drastically reduces the sensitivity of the survey to companions with orbital periods shorter than these time windows.
The sensitivity also decreases for long-period orbits due to the fact that we subtract the long-term $\vec{\mu_\mathrm{HG}}$ PM vector from the short-term Gaia PM vector. For long orbital periods (typically longer than about five times the Hipparcos-Gaia time span), the subtraction of $\vec{\mu_\mathrm{HG}}$ removes a significant part of the signature of the orbital motion of the photocenter of the system around the barycenter. This reduces the PMa signal and, therefore, the sensitivity to low-mass objects.
Figure~\ref{Sensitivity-function-m2r} shows the sensitivity function for a solar mass star located at a distance of 1\,pc, with a tangential velocity anomaly of 0.26\,m\,s$^{-1}$ corresponding to the median accuracy of EDR3 PMa measurements. The domain shaded in green shows the geometrical uncertainty due to the unconstrained orbital inclination.
The "spikes" visible in Fig.~\ref{Sensitivity-function-m2r} for orbital radii smaller than that corresponding to the Gaia time window are due to the fact that when the orbital period corresponds to the EDR3 time window is divided by an integer, the PMa signal becomes null. This results in a non-detection of the companion independently of its mass.
\begin{figure}
\includegraphics[width=\hsize]{Figures/Sensitivity-function-m2r.pdf}
\caption{Sensitivity function $m_2 = f(r)$ for a solar mass star at a distance of 1\,pc. The pink markings show a selection of orbital periods in years, and the orbital radius corresponding to an orbital period equal to the Gaia EDR3 duration (34 months) is displayed as an orange vertical line.\label{Sensitivity-function-m2r}}
\end{figure}
\subsection{Properties of the PMa catalog}
An extract of the PMa catalog is presented in Table~\ref{PMa-sample}.
\subsubsection{Completeness of the sample\label{completeness}}
We estimate the completeness of the Hipparcos-EDR3 sample within 100 pc for stellar-mass objects using as a basis the full EDR3 catalog within this distance limit. As shown by \citetads{2021A&A...649A...6G}, the degree of completeness of the Gaia EDR3 Catalogue of Nearby Stars (GCNS) within 100\,pc is at an excellent level. The deep $G \approx 21$ limiting magnitude of Gaia corresponds to the apparent brightness of the lowest-mass stars at 100\,pc (see also Sect.~\ref{sensitivity-combination}). The EDR3 catalog is thus highly complete for stellar-mass objects down to the hydrogen-burning limit up to this distance and gives a good fiducial to estimate the Hipparcos completeness. The distribution of the number of stars as a function of mass is shown in Fig.~\ref{PMa-completeness} for the Hipparcos+EDR3 and full EDR3 samples. The completeness of the Hipparcos-EDR3 catalog compared to the EDR3 for low-mass stars below $0.5\,M_\odot$ located within 100\,pc ($\varpi_\mathrm{G3}>10$\,mas) is only $\approx 0.07\%$, whereas it is higher than 80\% for stars more massive than the Sun.
\begin{figure}
\includegraphics[width=\hsize]{Figures/GaiaEDR3CPM-completeness.pdf}
\caption{Number of stars as a function of mass (left panel) and completeness of the Hipparcos-EDR3 proper motion anomaly sample within 100 pc (right panel).\label{PMa-completeness}}
\end{figure}
\subsubsection{Accuracy}
\begin{figure}
\includegraphics[width=\hsize]{Figures/PMa-G2-G3.pdf}
\caption{2D histogram of the PMa signal-to-noise ratio between Gaia DR2 and Gaia EDR3 analyses.
The increase in S/N of the PMa signal in the EDR3 is typically a factor of 2.5 (pink dashed line).\label{PMa-G2-G3}}
\end{figure}
The median accuracy of the determined $\Delta \mu_\mathrm{G3}$ PMa vectors is $\sigma(\Delta \mu_\mathrm{G3}) = 56\,\mu$as\,a$^{-1}$, corresponding to an accuracy on the tangential velocity anomaly of $\sigma(\Delta v_\mathrm{tan, G3}) = 26$\,cm\,s$^{-1}$\,pc$^{-1}$ (i.e., normalized to a distance of 1\,pc). This corresponds to an improvement by a factor of 2.5 compared to the accuracy of the Gaia DR2 PMa values presented by \citetads{2019A&A...623A..72K}. This improvement is also visible in Fig.~\ref{PMa-G2-G3}, which shows a 2D histogram of the measured PMa signal-to-noise (S/N) values from Gaia DR2 and EDR3.
The median uncertainties of the Hip2 catalog positions in RA and Dec are $\sigma(\alpha[\mathrm{Hip2]}) = 0.7$\,mas and $\sigma(\delta[\mathrm{Hip2]}) = 0.6$\,mas, resulting in a median contribution to the uncertainty on the Hip2-EDR3 long-term proper motion of $\sigma(\mu_\mathrm{HG}) = 37\,\mu$as\,a$^{-1}$. On the other hand, the median uncertainty of the Gaia EDR3 PM vector norm for stars brighter than $G=12$ is $\sigma(\mu_\mathrm{G3}) = 27\,\mu$as\,a$^{-1}$ and is expected to decrease in the Gaia DR4 and DR5 to $\sigma(\mu_\mathrm{G4}) \approx 6\,\mu$as\,a$^{-1}$ and $\sigma(\mu_\mathrm{G5}) \approx 2\,\mu$as\,a$^{-1}$ for bright stars\footnote{\texttt{\url{https://www.cosmos.esa.int/web/gaia/science-performance}} retrieved in September 2021}. While the Hipparcos astrometry is already dominant in the error budget of the PMa vector determination, its use in combination with the future DR4 and DR5 epoch astrometry will still be a powerful asset in characterizing companions with long orbital periods (of several centuries). This will help to bridge the gap between the astrometric companion detections (from the Gaia epoch astrometry) and the spatially resolved CPM companions (see Sect.~\ref{sensitivity-combination}).
\subsubsection{Internal and external validation}
Among the sample of stars that show a significant PMa detection (S/N>3) in the DR2 \citepads{2019A&A...623A..72K}, 88.5\% are confirmed with an EDR3 PMa S/N larger than 3 (Table~\ref{G2G3-confirmations}; Fig.~\ref{Hip2-confirmed23-histo}). In addition, thanks to the improved accuracy of the EDR3 measurements, 10\,423 stars exhibit a PMa S/N greater than 3, while they were below this limit in the DR2. Overall, the EDR3 increases the accuracy and reliability of the PMa detections, removing a significant number of spurious detections and confirming most of the DR2 signals.
For 3\% of the Hipparcos sources, a significant PMa signal (S/N > 3) was found using the DR2 that is not confirmed using the EDR3 (S/N < 3). In some cases, this could be caused by companions whose orbital period is close to the EDR3 time window, resulting in a strong smearing and the disappearance of the PMa signal. For bright Hipparcos stars, the EDR3 astrometric reduction appears significantly more robust than that of the DR2, reducing the biases on their derived EDR3 PM vectors. For single stars, this results in a better agreement of their EDR3 PM vectors with the long-term Hipparcos-Gaia PM and, therefore, in the disappearance of the PMa signal.
\begin{table}
\caption{Proper motion anomaly detections and divergences from Gaia DR2 and EDR3.
\label{G2G3-confirmations}}
\centering
\begin{tabular}{lrr}
\hline
\hline
& Number & Fraction \\
\hline \noalign{\smallskip}
Objects with DR2 PMa values & 116\,343 & 100.0 \% \\
DR2 S/N>3 and EDR3 S/N>3 & 27\,071 & 23.3 \% \\
DR2 S/N>3 and EDR3 S/N<3 & 3\,490 & 3.0 \% \\
DR2 S/N<3 and EDR3 S/N>3 & 10\,461 & 9.0 \% \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=\hsize]{Figures/Hip2-confirmed23-histo.pdf}
\caption{Histogram of the S/N of the PMa signal from Gaia DR2 (light grey), the stars presenting a PMa S/N > 3 both in DR2 and EDR3 (medium blue), the stars with a PMa S/N > 3 only in DR2 (hatched light blue), and the stars with a PMa S/N > 3 only in the EDR3 (green).
The right panel shows the corresponding fraction of the stars in the total sample per S/N bin.\label{Hip2-confirmed23-histo}}
\end{figure}
A mild color dependence of the PM vectors in Gaia EDR3 was found by \citetads{2021A&A...649A.124C} and we applied the recommended correction to the EDR3 catalog values (see also Sect.~\ref{inputdata}). In order to verify that this does not have an effect on the PMa vectors, we computed the mean PMa vectors over bins of 10\,000 stars, as a function of their magnitude and visible-infrared color. The result is presented in Fig.~\ref{PMa_mean_vs_mag}. We do not detect any significant bias at a level of $\pm 25\,\mu$as\,a$^{-1}$ ($\pm 12$\,cm\,s$^{-1}$\,pc$^{-1}$). As a significant number of bright Hipparcos stars are close to the Gaia saturation limit, and very diverse in color, this first-order analysis shows that there is no large systematic differential effect due to magnitude or color. However, this test is not intended to demonstrate the absence of a position-dependent effect over the sky (e.g., a sinusoidal bias as a function of right ascension), as the whole sky sample in each bin is averaged to produce the plots presented in Fig.~\ref{PMa_mean_vs_mag}.
\begin{figure}
\includegraphics[width=\hsize]{Figures/PMa_mean_vs_mag.pdf}
\includegraphics[width=\hsize]{Figures/PMa_mean_vs_color.pdf}
\caption{Mean value of the proper motion anomaly as a function of the $G$ band magnitude (top panels) and $(G-K)$ color, within bins of 10\,000 stars. The magnitude limits of each bin are shown with dashed lines.
The overall mean value and associated uncertainty is given in each plot.\label{PMa_mean_vs_mag}}
\end{figure}
\citetads{2018ApJS..239...31B, 2021ApJS..254...42B} recently reported analyses of the Hipparcos and Gaia PMs similar to the present work, based on Gaia DR2 and EDR3, respectively.
No significant systematic difference is present between the present work and \citetads{2021ApJS..254...42B}, with a mean difference in the long-term PM vector of: $\Delta \mu_\alpha = -1.2 \pm 3.7\,\mu\mathrm{as\,a}^{-1}$, $\Delta \mu_\delta = +5.6 \pm 3.9\,\mu\mathrm{as\,a}^{-1}$.
This corresponds to a mean tangential velocity difference of only $\Delta \mu_\alpha = -0.6$\,cm\,s$^{-1}$\,pc$^{-1}$ and $\Delta \mu_\delta = +2.7$\,cm\,s$^{-1}$\,pc$^{-1}$.
\section{Common proper motion companions\label{cpm-section}}
The general principle of our analysis is classically to search for companions of a selection of targets in the EDR3 catalog, based on the proximity of their parallax and PM. As discussed in Sect.~\ref{sample}, we complete our input list of EDR3 catalog targets with the Hipparcos stars that are absent from the Gaia catalog (mainly due to saturation). Comparable works based on the Hipparcos, Gaia DR2, and Gaia EDR3 catalogs can be found in, for instance, \citetads{2011ApJS..192....2S}, \citetads{Jim_nez_Esteban_2019}, \citetads{2019A&A...623A.117K}, \citetads{gonzalez-payo}, \citetads{2019MNRAS.488.4740P}, \citetads{Hartman_2020}, \citetads{2020ARep...64..756S}, \citetads{Zavada_2020}, \citetads{Pearce_2020}, \citetads{2021A&A...649A...6G}, and \citetads{2021MNRAS.506.2269E}.
\subsection{Star sample\label{sample}}
We selected two samples of stars for our survey of CPM companion candidates, which are partly overlapping:
\textbf{100\,pc sample}: The EDR3 targets located within 100\,pc ($\varpi_\mathrm{G3}>10$\,mas), supplemented with the missing Hip2 stars located within this distance range. This sample comprises 542,232 individual objects, out of which 21,217 are present both in the Hip2 and EDR3 catalogs, and 262 are present only in the Hip2 catalog (essentially the brightest stars). We chose here the simplified approach of a strict parallax limit for the selection of our sample compared to that of the GCNS \citepads{2021A&A...649A...6G}, as we are not aiming for an exhaustive census of the stars within this distance.
\textbf{Hipparcos stars}: The Hip2 catalog sources that we took into account in the present work comprise 117,628 stars. For these targets, we adopted the Hipparcos-EDR3 long term PM vector (Sect.~\ref{HG-PMa}) and EDR3 parallax when available or, alternatively, the Hip2 PM and parallax.
A cross-identification of the Hipparcos and DR2 catalogs is presented in \citetads{2019A&A...623A..72K}. The very brightest stars of the Hip2 catalog do not have a counterpart in the EDR3 catalog, as they are heavily saturated (e.g., \object{Sirius}, \object{Betelgeuse}, $\alpha$\,Centauri AB, etc.). However, this is not a limitation for the present CPM companion survey as we adopted the Hip2 parameters (position, parallax, and PM vectors) for these particular targets.
\subsection{Initial search volume}
For each target of our survey, we defined the search range $\delta\varpi$ for the parallax for the candidate companions taking into account: (1) the acceptable difference in distance between the target and its companions and (2) the uncertainties on their respective parallaxes.
To define the search depth in terms of differential distance between the target and the candidate companions, we considered the parallax range $\delta \varpi_A$ defined as:
\begin{equation}
\delta \varpi_A[\arcsec] = \varpi_0[\arcsec] - \frac{1}{\left( 1/ \varpi_0[\arcsec] + dz_\mathrm{max}[\mathrm{pc}] \right)}
\label{nearfar}
,\end{equation}
with $dz_\mathrm{max} = 0.5$\,pc as the maximum acceptable difference in distance and $\varpi_0$ as the parallax of the target. We neglect the difference between the range in parallax corresponding to the far side (with respect to the target) and the near side (larger parallax). We consider the expression of Eq.\,\ref{nearfar} for $\delta \varpi_A[\arcsec]$ symmetrically for the near and far sides. This first term is important for the nearest stars, whose candidate companions may have a significantly different parallax even though they are physically bound (e.g., Proxima and $\alpha$\,Cen~AB).
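For illustration, Eq.\,\ref{nearfar} can be evaluated with the following minimal Python sketch (the function name is ours and purely illustrative; this is not the actual survey pipeline):
\begin{verbatim}
def delta_plx_A(plx0, dz_max=0.5):
    # Parallax half-range (arcsec) for a search depth of
    # dz_max (pc); plx0 is the target parallax (arcsec).
    return plx0 - 1.0 / (1.0 / plx0 + dz_max)

# Proxima-like target (plx0 = 0.768 arcsec, d ~ 1.3 pc):
print(delta_plx_A(0.768))   # -> ~0.213 arcsec
\end{verbatim}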
Secondly, we take into account the uncertainty on the parallax of the main target $\sigma_{\varpi 0}$ via:
\begin{equation}
\delta \varpi_B[\arcsec] = N\,\sigma_{\varpi0}[\arcsec]
,\end{equation}
where $N=3$ is the maximum acceptable parallax difference in number of standard deviations.
We therefore queried the Gaia EDR3 catalog with an acceptable parallax range of $[\varpi_0 - \delta \varpi, \varpi_0 + \delta \varpi]$ where:
\begin{align}
\delta \varpi[\arcsec] & = \sqrt{\delta \varpi_A[\arcsec]^2 + \delta \varpi_B[\arcsec]^2},\\
&= \varpi_0[\arcsec] \sqrt{ \left(\frac{dz_\mathrm{max}[\mathrm{pc}]\, \varpi_0[\arcsec]}{1+dz_\mathrm{max}[\mathrm{pc}]\, \varpi_0[\arcsec]} \right)^2 +\left(N\,\frac{\sigma_{\varpi 0}[\arcsec]}{\varpi_0[\arcsec]}\right)^2}.
\end{align}
The parallax of the primary target ($\varpi_0$) is taken from the EDR3 catalog or, alternatively, from the Hipparcos catalog for the bright stars absent from the EDR3 or those whose parallax is less accurate in the EDR3 than in Hipparcos. In summary, we retrieved from the EDR3 catalog those stars with a parallax within $\pm \delta \varpi$ of the primary target and within 1\,pc in terms of the projected linear separation. We set a minimum search radius of 1\,arcmin and a maximum of $2.5\,\deg$ to avoid overly small (for stars farther than 3.4\,kpc) or large (for stars closer than 23\,pc) search angles. The shape of the resulting search volume in space is a truncated cone with spherical near and far surfaces.
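The resulting query parameters can be summarized by the following Python sketch (function names are ours and purely illustrative; the clipping bounds are those quoted above):
\begin{verbatim}
import numpy as np

def delta_plx(plx0, sig_plx0, dz_max=0.5, N=3.0):
    # Total parallax search half-range (arcsec).
    dA = plx0 * (dz_max * plx0) / (1.0 + dz_max * plx0)
    dB = N * sig_plx0
    return np.hypot(dA, dB)

def search_radius_arcsec(plx0):
    # Angular radius subtended by 1 pc at the target
    # distance, clipped to [1 arcmin, 2.5 deg].
    theta = 206264.8 * plx0     # 1 pc = 206264.8 au
    return np.clip(theta, 60.0, 2.5 * 3600.0)
\end{verbatim}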
We did not search for candidate companions in the Hipparcos catalog. This means that Hipparcos stars that are absent from the Gaia catalog are not listed as candidate companions of Gaia-only targets (e.g., \object{Sirius A} is not listed as a companion of \object{Sirius B}). However, as we did search the Gaia EDR3 catalog around Hipparcos-only targets, the identified companions are properly listed in the catalog (e.g., Sirius B is listed as a companion of Sirius A). Hipparcos-only companions to Hipparcos-only targets (concerning only a small number of sources) can be found, for instance, in the Hipparcos Catalogue Double and Multiple Systems Annex \citepads{1997A&A...323L..53L}.
\subsection{Photometry, reddening, and physical properties}
We completed the EDR3 record of each star within the search volume with its $K$ band magnitude from the 2MASS catalog \citepads{2006AJ....131.1163S}, the visible $B$, $V,$ and $R$ magnitudes from the NOMAD catalog \citepads{2004AAS...205.4815Z}, and the Hipparcos $H_p$ magnitude (when available).
We added flags for the known binary and multiple systems from the Washington Double Star catalog \citepads{2001AJ....122.3466M} and the double and multiple star annex (DMSA) of the original Hipparcos catalog \citepads{1997ESASP1200.....E}.
The interstellar reddening was neglected for the target stars located within 50\,pc (that is, within the Local Bubble, \citeads{2011ARA&A..49..237F}).
For the more distant objects in our sample, we adopted the color excess $E(B-V)$ predicted by the \texttt{Stilism}\footnote{\url{https://stilism.obspm.fr}} 3D model of the local interstellar medium \citepads{2014A&A...561A..91L, 2017A&A...606A..65C}.
The radial velocities were retrieved from different catalogs as described in \citetads{2019A&A...623A..72K} (\citeads{2002ApJS..141..503N}, \citeads{2018A&A...616A...7S}, \citeads{2007A&A...475..519H}, \citeads{2018A&A...616A...1G}, \citeads{2018A&A...616A...5C}, and \citeads{2012AstL...38..331A}). The stellar masses and radii were estimated from the dereddened photometry following the same procedure as \citetads{2019A&A...623A..72K} (based on \citeads{2000A&AS..141..371G}, \citeads{0004-637X-804-1-64}, \citeads{2016MNRAS.462.2295H}, and \citeads{2004A&A...426..297K}).
\subsection{Selection of common proper motion companions}
Within the field star sample, our selection of the candidate CPM companions is based on a score built from the parallax and PM of the candidate companions located in the search volume, relative to the parameters of the target star.
\subsubsection{Selection on parallax}
The probability that the candidate companion (parallax $\varpi \pm \sigma_\varpi$) and the target (parallax $\varpi_0 \pm \sigma_{\varpi 0}$) are located within $dz_\mathrm{max}=0.5$\,pc of each other along the radial direction to the Sun is given by the probability density function:
\begin{align}
P_\varpi &= PDF(\varpi - \varpi_0; \sigma_\mathrm{tot}) ,\\
&= \exp \left( - \frac{(\varpi - \varpi_0)^2}{2\,\sigma_\mathrm{tot}^2} \right),
\end{align}
where $\sigma_\mathrm{tot} = \sqrt{\sigma_\varpi^2 + \sigma_{\varpi 0}^2 + (dz_\mathrm{max}\, \varpi_0^2)^2}$.
This quantity gives us the probability that the target and candidate companion are at a compatible distance.
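A minimal sketch of this score (illustrative only, with all quantities in arcseconds):
\begin{verbatim}
import numpy as np

def p_plx(plx, sig_plx, plx0, sig_plx0, dz_max=0.5):
    # Unnormalized Gaussian compatibility score of the
    # candidate (plx) and target (plx0) parallaxes.
    sig_tot = np.sqrt(sig_plx**2 + sig_plx0**2
                      + (dz_max * plx0**2)**2)
    return np.exp(-(plx - plx0)**2 / (2.0 * sig_tot**2))
\end{verbatim}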
\subsubsection{Selection on relative tangential velocity}
The candidate companions whose differential tangential velocity $\Delta \varv_\mathrm{tan}$ with the target is slower than 5\,km\,s$^{-1}$ are flagged as \texttt{LowV} (low velocity) in the catalog.
To test the possibility that the candidate companion and the target are gravitationally bound, we compare $\Delta \varv_\mathrm{tan}$ with the escape velocity $\varv_\mathrm{esc}$ of the system at the projected linear separation $r = \varpi \Delta \theta$ (with $\Delta \theta$ their angular separation):
$\varv_\mathrm{esc} = \sqrt{2\,G\,(m_1 + m_2)/r}$, where $m_1$ and $m_2$ are the estimates of the masses of the target and candidate companion (when available).
We note that $\varv_\mathrm{esc}$ is an upper limit of the true escape velocity as the actual linear distance between the two stars is larger than $r$.
The probability that the differential velocity $\Delta \varv_\mathrm{tan}$ is lower than $\varv_\mathrm{esc}$ is given by the survival function:
\begin{align}
P_\varv &= 1 - CDF(\Delta \varv_\mathrm{tan}; \varv_\mathrm{esc}; \sigma_{\Delta \varv}) ,\\
&= 1 - \frac{1}{\sigma_{\Delta \varv} \sqrt{2\pi}}
\int_{0}^{\Delta \varv_\mathrm{tan}} \exp \left( -\frac{(\varv- M\,\varv_\mathrm{esc})^2}{2\,\sigma_{\Delta \varv}^2} \right) d\varv,
\end{align}
where $M=2$ is a margin factor, intended to accommodate the unknowns in the determination of the differential tangential velocity, the escape velocity, and the possible presence of perturbing bodies in the considered stellar system.
The tangential velocity $\Delta \varv_\mathrm{tan}$ is the norm of a two-dimensional differential vector, whose coordinates are affected by uncertainties. This induces a systematic positive bias on the estimate of the vector norm (that follows a Rayleigh distribution).
The value of the escape velocity relies on the total mass of the system estimated from photometry, which may be underestimated if additional faint companions are present (e.g., in hierarchical multiple systems). Moreover, in this last configuration, the PM vector of a candidate companion may be affected by the additional orbiting body, resulting in a higher tangential velocity.
We reject the candidates whose PM vector has a position angle diverging by more than $\pm 30^\circ$ from the PM vector of the target if it is located within 10,000\,au, and $\pm 10^\circ$ if it is farther from the target. This selection step relies on the hypothesis that the orbital velocity of physical systems is significantly slower than the systemic PM for wide binaries. This criterion rejects only a small fraction of the detected candidates.
In addition to the above velocity criteria, we set a maximum separation of $r=0.5$\,pc for gravitationally bound candidates, that is, $P_\varv$ is set to zero when $r > 0.5$\,pc ($\approx 100$\,kau).
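The velocity score can be sketched as follows (illustrative only; the Gaussian integral is approximated by the normal survival function, neglecting the truncation at zero velocity):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

G_SI, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11  # SI units

def p_vel(dv_tan, sig_dv, m1, m2, r_au, M=2.0):
    # dv_tan, sig_dv in km/s; m1, m2 in Msun; r_au in au.
    if r_au > 0.5 * 206264.8:    # beyond 0.5 pc (~100 kau)
        return 0.0
    v_esc = np.sqrt(2.0 * G_SI * (m1 + m2) * M_SUN
                    / (r_au * AU)) / 1e3     # km/s
    # Probability that dv_tan < M * v_esc:
    return norm.sf(dv_tan, loc=M * v_esc, scale=sig_dv)

# e.g., 6.23 Msun total at r = 11,700 au -> v_esc ~ 0.97 km/s
\end{verbatim}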
\subsubsection{Score threshold for bound candidates\label{totalscore}}
We define the overall score of each candidate companion as the product of the parallax and velocity compatibility probabilities with the target:
\begin{equation}
P_\mathrm{tot} = P_\varpi\ P_\varv
.\end{equation}
The threshold in total score $P_\mathrm{tot}[\texttt{Bnd}]$ to identify gravitationally bound candidates (flagged as \texttt{Bnd} in the catalog) is an essential parameter to ensure a low degree of contamination of the sample with false positives, while simultaneously preserving valid candidates. To estimate the optimum threshold, we considered two approaches: (1) the overall distribution of the candidate companion scores and (2) the distribution of the linear separations of the companions. For this analysis, we consider the 100\,pc sample including the Hipparcos stars located within this distance.
The overall distribution of the total scores $P_\mathrm{tot}$ of the candidate companions classified as \texttt{LowV} or \texttt{Bnd} is shown in the left panel of Fig.~\ref{GaiaEDR3CPM-Ptot-histo}. Three domains are apparent: the nearby field stars ($P_\mathrm{tot} < 0.2$), the co-moving stars (e.g., within an open cluster, $0.2 < P_\mathrm{tot} < 0.6$), and the gravitationally bound candidates ($P_\mathrm{tot} > 0.6$). The boundaries between these three samples are visible as the points of inflexion of the histogram, as well as on the fraction of candidate companions above a given threshold (right panel of Fig.~\ref{GaiaEDR3CPM-Ptot-histo}).
The intermediate regime ($0.2 < P_\mathrm{tot} < 0.6$) potentially includes a significant number of bound companions, if the primary target is itself a close binary and its PM vector is affected by the orbital motion.
\begin{figure}
\includegraphics[width=\hsize]{Figures/GaiaEDR3CPM-Ptot-histo.pdf}
\caption{Distribution of the total score $P_\mathrm{tot}$ of the candidate companions in the 100\,pc sample. Left panel: Histogram of the $P_\mathrm{tot}$ total score of candidate companions. Right panel: Fraction of candidate companions with $P_\mathrm{tot}$ higher than a given threshold. \label{GaiaEDR3CPM-Ptot-histo}}
\end{figure}
Another method for determining the $P_\mathrm{tot}[\texttt{Bnd}]$ threshold is to consider the distributions of the number of candidate companions as a function of the linear separation from the primary, for different threshold values. These histograms are shown in Fig.~\ref{GaiaEDR3CPM-LinSep-histo}. The histograms for $P_\mathrm{tot}[\texttt{Bnd}] = 0.1$ and 0.5 exhibit a clear divergence in the number of bound candidates for separations above 1000\,au, while it is not present for a threshold of 0.6 and above. This is an indication that a threshold of $P_\mathrm{tot}[\texttt{Bnd}] = 0.5$ is too low to prevent the contamination of the candidate bound sample by unbound neighbors. A threshold of $P_\mathrm{tot}[\texttt{Bnd}] = 0.6$ preserves the overall shape of the histogram of the measured separations, compared to the higher 0.7 and 0.99 thresholds, and does not diverge at large separations.
From these two approaches, a probability threshold of $P_\mathrm{tot}[\texttt{Bnd}]=0.6$ for bound candidates appears to be optimal, and we adopt this value to define the gravitationally bound flag (\texttt{Bnd}) in the catalog.
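The overall selection logic can thus be summarized by the following sketch (illustrative only, using the thresholds adopted above):
\begin{verbatim}
def classify(p_plx_val, p_vel_val, dv_tan):
    # Combine the parallax and velocity scores into P_tot
    # and set the catalog flags (dv_tan in km/s).
    p_tot = p_plx_val * p_vel_val
    flags = []
    if dv_tan < 5.0:    # LowV: low relative tangential velocity
        flags.append("LowV")
    if p_tot > 0.6:     # Bnd: adopted bound-candidate threshold
        flags.append("Bnd")
    return p_tot, flags
\end{verbatim}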
It should be stressed that gravitationally bound companions are present in the catalog below this threshold. For instance, the PMs of the components of hierarchical multiple systems are affected by the orbital motion of each star, which results in a potential overestimation of the differential velocities (e.g., between a close binary primary and a third component). In addition, for multiple systems, the presence of an undetected but relatively massive close companion to a primary target (e.g., a main sequence or compact companion to a giant star) potentially results in an underestimation of the total mass. This induces an underestimation of the escape velocity, and potentially bound companions may therefore appear with total scores below the overall bound threshold. For this reason, when searching for bound CPM companions to a given target, the potential companions with $P_\mathrm{tot}$ scores below 0.6 should also be considered as potential candidates.
\begin{figure}
\includegraphics[width=\hsize]{Figures/GaiaEDR3CPM-LinSep-histo.pdf}
\caption{Histogram of the linear separation of candidate gravitationally bound companions for different $P_\mathrm{tot}$ total score thresholds.
The histogram for a threshold of 0.6 (corresponding to the optimum) is marked with a red line. \label{GaiaEDR3CPM-LinSep-histo}}
\end{figure}
\subsection{Statistics of the detected CPM companions}
Extracts of the CPM catalog for the Hipparcos catalog stars and Gaia EDR3 sources within 100\,pc are presented in Tables~\ref{CPMHIP-sample} and \ref{CPMGaia-sample}, respectively.
The histogram of the candidate CPM companions detected in the 100\,pc Gaia sample is presented in Fig.~\ref{binarity_parallax_100pc} as a function of the parallax. For parallaxes $\varpi > 40$\,mas (distance < 25\,pc), the samples of companions flagged as \texttt{LowV} and \texttt{Bnd} are in good agreement, with an overall multiplicity frequency of 20.5\% (Fig.~\ref{binarity_parallax_100pc}, right panel).
Within a distance of 10\,pc, we obtain a multiplicity frequency of 25\%, in good agreement with the 27\% frequency found by \citetads{2021A&A...650A.201R}.
For smaller parallaxes ($\varpi < 40$\,mas), the fraction of \texttt{Bnd} candidates decreases linearly, reaching 10\% at $\varpi = 10$\,mas. Simultaneously, the number of \texttt{LowV} candidates increases rapidly, indicating that the majority of the stars classified in this category are unbound field stars.
\begin{figure}[ht]
\centering
\includegraphics[width=\hsize]{Figures/GaiaEDR3CPM-CPM-histo.pdf}
\caption{Histogram of the detected bound and low velocity candidate companions in the 100\,pc Gaia sample as a function of the parallax of the target (left panel) and binary fraction as a function of the target parallax (right panel).}
\label{binarity_parallax_100pc}
\end{figure}
Figure~\ref{Nbound-histo_100pc} shows the histogram of the number of detected gravitationally bound candidates per target star in the 100\,pc sample, along with the corresponding fractions.
Among the stars with detected bound candidate companions, the large majority have a single companion (96.0\%) or two companions (3.7\%). The sample of 103 stars with more than two bound candidates (0.3\% of all stars with bound candidates) likely contains a significant fraction of stellar groups in clusters that are close to the unbound limit. It should be noted that, due to the identical processing for all stars, each member of a system of $N$ stars is counted individually in this total number (i.e., a system of ten candidate bound stars counts for ten stars, each with nine bound candidates). As a consequence, the actual number of high-order multiple systems is very low in our sample.
\begin{figure}[ht]
\centering
\includegraphics[width=\hsize]{Figures/GaiaEDR3CPM-Nbound-histo.pdf}
\caption{Histogram of the number of detected bound candidate companions for stars in the 100\,pc sample (left panel) and fraction of the stars with $N$ candidate bound companions (right panel).}
\label{Nbound-histo_100pc}
\end{figure}
\section{Discussion\label{discussion}}
\subsection{Binary fraction as a function of primary mass}
Figure~\ref{binarity_mass_100pc} shows the histograms of the stellar mass of the targets with a significant PMa signal (S/N>3; top panels) and gravitationally bound candidate companions (bottom panels). For those stars more massive than the Sun, the fraction of stars with a PMa signal is $\approx 35$\%, and simultaneously $\approx 20$\% of this sample has bound CPM candidates.
As already reported by \citetads{2019A&A...623A..72K}, the fraction of very low-mass stars of the Hipparcos catalog ($m_1 < 0.3\,M_\odot$) exhibiting a significant PMa signal reaches more than 50\%. This is induced by the very high sensitivity of the PMa companion detection technique (in terms of companion mass) for the nearest very low-mass stars (e.g., \object{Proxima Centauri}, \object{Barnard's star}, \object{Kapteyn's star}...). As a result, we are able to detect the signature of low-mass planetary companions orbiting these objects down to a few tens of Earth masses, which is significantly lower in mass than for the other Hipparcos stars located within 100\,pc. In other words, the PMa signals of the very low-mass Hipparcos stars are likely caused by much lower mass planetary companions than for the rest of the catalog, and the binary fraction consequently appears higher.
\begin{figure}[ht]
\centering
\includegraphics[width=\hsize]{Figures/Hip2-G3-binarity.pdf}
\includegraphics[width=\hsize]{Figures/GaiaEDR3CPM-mass-histo.pdf}
\caption{Histogram of the Hipparcos stars within 100\,pc exhibiting a PMa S/N larger than 3 (top left panel) and fraction of the overall sample (top right panel), as a function of the primary mass.
The histogram and fraction of the stars with CPM bound candidate companions for the full 100\,pc sample (Gaia+Hipparcos) are shown in the bottom panels.}
\label{binarity_mass_100pc}
\end{figure}
\subsection{Gaia RUWE as indicator of binarity}
The Gaia RUWE parameter \citepads{GaiaEDR3astrometric} is generally employed as a statistical quality flag for Gaia data: a RUWE value above 1.4 indicates that the astrometric parameters of a given source may be degraded.
The majority of high RUWE objects are partially resolved binary stars or tight astrometric binaries with a significant orbit-induced displacement of the photocenter (i.e., those having a low mass ratio between the components).
This is particularly the case when the orbital period is close to 1 year, as it then interferes with the period of the parallactic ellipse measured by Gaia.
The resolving power of Gaia depends on the difference $\Delta G$ in magnitude of the two objects, and is approximately $0.5\arcsec$ for equal magnitude stars (up to $1.2\arcsec$ for $\Delta G = 5$) in the EDR3 as determined by \citetads{2021A&A...649A...6G}.
At a given observation epoch, the pointing of a binary (or double star) by Gaia is more complex than expected \citepads{GaiaEDR3astrometric}. A similar situation already occurred with Hipparcos, and \citetads{1997A&AS..122..571M} coined the word ``\textit{Hippacenter}'' to define the pointing of epoch Hipparcos observations of double stars.
Concerning Gaia, if a double star has a separation well below the angular resolution of the telescope ($\approx 0.1\arcsec$), the ``\textit{Gaiacenter},'' as we may perhaps designate the epoch pointing for Gaia, is simply the photocenter.
Beyond a $1.2\arcsec$ separation, each component of the pair may be observed individually.
For these two extreme cases, the standard astrometric solution will not be perturbed, beyond the possible effect of the orbital motion of the photocenter (that is, the quantity measured by the PMa observable).
On the contrary, the binaries whose separation lies in the $0.1\arcsec$ to $1.2\arcsec$ range will have a ``Gaiacenter'' that is closer to the primary and varies with the projected separation along the Gaia transit direction and with the magnitude difference.
With a reference point that is not consistent from epoch to epoch, the standard astrometric solution will be perturbed.
In such cases, the derived PMa value should be considered with caution.
While higher values of the RUWE up to 2 or 3 may still provide usable measurements within their stated uncertainties (see \citeads[Sect.~5.3 of]{2021A&A...649A..13M}), there is a higher probability of bias on their astrometry and hence on their PMa.
Further quality parameters provided in the Gaia catalog may be used to test the quality of the astrometry of high RUWE stars.
For instance, applying \texttt{ipd\_frac\_multi\_peak}>3 to the relatively wide binaries ($\approx 1\arcsec$) or \texttt{ipd\_gof\_harmonic\_amplitude}>0.1 to the smaller separations are indications of a photocenter measurement problem (see \citeads[Sect.~3.3 of]{GaiaEDR3validation}).
\begin{figure}
\includegraphics[width=\hsize]{Figures/RUWE-PMaSNR-histo.pdf}
\caption{Histogram of the PMa S/N values of the Hipparcos catalog stars as a function of their Gaia EDR3 renormalized unit weight error (RUWE).\label{RUWE-PMaSNR-histo}}
\end{figure}
\citetads{2020MNRAS.496.1922B} and \citetads{2021ApJ...907L..33S} demonstrated that the RUWE is actually a reliable indicator of the presence of a close companion.
As shown in Fig.~\ref{RUWE-PMaSNR-histo}, 75\% of the 25,067 Hip2 stars with RUWE>1.4 (representing 21\% of the full Hip2 catalog) exhibit a significant PMa S/N>3. Conversely, 49\% of the 37,347 Hip2 stars that exhibit a PMa S/N>3 have a RUWE>1.4. We therefore confirm the high correlation between the PMa and RUWE quantities.
As also noted by \citetads{2021ApJ...907L..33S} for eclipsing binaries, there is a smooth transition in the fraction of stars with PMa S/N>3 for RUWE values between 1.0 (20\%) and 1.6 (70\%), which remains stable for higher RUWE values.
The RUWE parameter appears as a valuable indicator of binarity for tight systems (partially resolved or with a large photocenter motion) with angular separations on the order of $1\arcsec$ or below. It has the important advantage of being available for the full Gaia catalog, whereas the PMa is limited to Hipparcos stars. The RUWE is therefore complementary to the PMa and CPM indicators, with the limitation that the conversion of the RUWE value into constraints on the physical properties of the companion is made difficult given its statistical nature.
\subsection{Combined sensitivity of the PMa and CPM techniques\label{sensitivity-combination}}
Considering the median accuracy of the PMa vectors from Gaia EDR3 ($\sigma(\Delta \mu_\mathrm{G3}) = 56\,\mu$as\,a$^{-1}$) and Gaia's EDR3 limiting magnitude of $G=20.41$ (from \citeads{2021A&A...649A...6G}, encompassing 85\% of the sources), the domains of companion masses sampled by the combination of the PMa+CPM approaches are shown in Fig.~\ref{sensitivity_PMa_CPM}.
To convert the Gaia limiting magnitude to companion masses (for the CPM technique mass limits), we adopted the $M_G$ magnitude-spectral type relation calibrated by \citetads{2018A&A...619L...8R}. We then approximated the brown dwarf masses from Fig.~8 of \citetads{2018A&A...619L...8R}, for an age of 5\,Ga. The resulting masses should be considered rough approximations, particularly as the brightness of brown dwarfs critically depends on their age. The masses of stars of spectral types earlier than M6V were taken from the tables by \citetads{2012ApJ...746..154P} and \citetads{2013ApJS..208....9P}. We took into account the contrast sensitivity of Gaia as a function of the separation from the target star by inverting Eq.~(2) of \citetads{2021A&A...649A...6G}.
The diagrams in Fig.~\ref{sensitivity_PMa_CPM} show the complementarity of the PMa and CPM detection techniques. While the PMa technique is sensitive to substellar mass companions ($m_2<80\,M_\mathrm{Jup}$) from $\approx 2$ to a few hundred astronomical units (with a decreasing sensitivity), the CPM technique enables the detection of substellar companions at separations up to tens of thousands of astronomical units. For targets located at 100\,pc, the CPM mass sensitivity limit corresponds to the substellar mass limit ($\approx 80\,M_\mathrm{Jup}$).
\begin{figure*}[t]
\centering
\includegraphics[width=8.5cm]{Figures/Sensitivity-PMa-CPM-01Msun-10pc.pdf}
\includegraphics[width=8.5cm]{Figures/Sensitivity-PMa-CPM-01Msun-100pc.pdf}
\includegraphics[width=8.5cm]{Figures/Sensitivity-PMa-CPM-1Msun-10pc.pdf}
\includegraphics[width=8.5cm]{Figures/Sensitivity-PMa-CPM-1Msun-100pc.pdf}
\includegraphics[width=8.5cm]{Figures/Sensitivity-PMa-CPM-2Msun-10pc.pdf}
\includegraphics[width=8.5cm]{Figures/Sensitivity-PMa-CPM-2Msun-100pc.pdf}
\caption{Combined sensitivity limits of the PMa and CPM detection techniques for different combinations of target mass and distance.
When the target star is too bright for the application of the PMa technique ($G<3.0$) the sensitivity curve is shown in grey. The substellar ($m_2 = 80\,M_\mathrm{Jup}$) and planetary ($m_2=13\,M_\mathrm{Jup}$) mass limits are shown with dashed lines.}
\label{sensitivity_PMa_CPM}
\end{figure*}
Out of the 4063 Hipparcos stars in the 100\,pc sample that have bound candidate companions, 1585 (39\%) also exhibit a PMa S/N>3.
Conversely, out of the 7175 Hipparcos stars within 100\,pc showing a PMa S/N>3, 1585 (22\%) also have bound candidate companions.
The overlap between the PMa and CPM binary samples may have two origins: (1) the observed PMa signal is induced by the wide candidate detected through the CPM technique and (2) triple systems where the close pair is revealed by the PMa method and the wide companion by CPM. As shown in Fig.~\ref{GaiaEDR3CPM-LinSep-histo}, the majority of resolved companions have a linear separation beyond 300\,au. In principle, this corresponds to orbital periods poorly suited for an efficient detection using the PMa technique (Sect.~\ref{sensitivity_pma}), apart from the very nearby stars within $\approx 10$\,pc for which the sensitivity overlap is significant between the two approaches (Fig.~\ref{sensitivity_PMa_CPM}).
For this reason, the targets beyond this distance for which both a PMa signal and a CPM candidate are detected are in most cases (at least) triple systems composed of a close binary (producing the PMa signal) and a wide companion (from CPM).
\subsection{Overall binary fraction of the Hipparcos catalog}
\begin{table}[ht]
\caption{Number of stars with PMa, CPM and RUWE>1.4 binarity signals in the Hipparcos catalog. \label{Hip-stat}}
\centering
\begin{tabular}{lcc}
\hline
\hline
Method & Number of stars & Fraction \\
\hline \noalign{\smallskip}
Full catalog & 117,955 & 100\% \\
\hline \noalign{\smallskip}
PMa S/N>3 & 37,347 & 32\% \\
CPM bound candidates & 12,914 & 11\% \\
RUWE>1.4 & 25,067 & 21\% \\
\hline \noalign{\smallskip}
PMa or CPM & 37,347 & 32\%\\
PMa or CPM or RUWE & 50,720 & 43\%\\
\hline
\end{tabular}
\end{table}
We list in Table~\ref{Hip-stat} the number and fraction of stars of the full Hipparcos catalog presenting a signal of binarity from the PMa, CPM and RUWE>1.4 indicators. Combining these three indicators, we detect a total of 50,720 stars of the Hipparcos catalog that present a signal of binarity in at least one of the three criteria, corresponding to a fraction of 43\% of the full sample. For comparison, the Hipparcos catalog's Double and Multiple Systems Annex (DMSA) \citepads{1997A&A...323L..49P,1997ESASP.402...13L} comprises 17,917 entries, corresponding to a fraction of 15\% of the catalog.
From the analysis of Gaia EDR3, \citetads{2021ApJS..254...42B} found that 30\% of the Hipparcos stars exhibit a significant difference between their short-term EDR3 and long-term Hipparcos-Gaia PM vectors, which is consistent with our PMa binary fraction.
\section{Example analyses of specific targets\label{examples}}
In this section, we present a selection of brief analyses of a sample of representative targets of different types as examples of possible interpretations of the contents of the PMa and CPM catalogs.
In the CPM finding charts, the markers showing the positions of the stars in the field are represented at the EDR3 epoch (2016.0). The positions were translated to the Gaia EDR3 reference epoch when needed (e.g., for Hipparcos-only targets). The background images were retrieved from the Second Generation Digitized Sky Survey Red (DSS2-Red). As these images were taken at various epochs, this leads to an apparent difference in position with the markers for the fast PM stars. The PM vectors $\vec{\mu_\mathrm{HG}}$, $\vec{\mu_\mathrm{Hip}}$, and $\vec{\mu_\mathrm{G3}}$ are shown separately when available, respectively, in light red, magenta, and blue colors. The bound candidate companions (\texttt{Bnd} flag in the catalog) are marked with a yellow star and a red PM vector, while the low velocity stars (\texttt{LowV} flag) are marked with an orange PM vector. When present, the field stars that have compatible parallaxes are marked with blue symbols.
In the figures showing the PMa sensitivity function, the possible combinations of mass and orbital radius for the companion are shown as green, blue, and cyan curves, respectively for the EDR3, DR2, and Hipparcos epochs. The associated uncertainty domains are shaded in the corresponding color. The pink markings indicate the orbital period corresponding to selected orbital radii.
\subsection{Bright stars}
Among the stars brighter than the Gaia saturation limit, we identified 1080 stars with magnitudes $m < 6$ in the $V$, $H_P$, or $G$ bands with bound candidate companions. A subset of this sample for the stars brighter than $m = 3$ is listed in Table~\ref{superbright}.
\begin{figure*}
\centering
\includegraphics[width=8.3cm,page=2]{Figures/CPMfigures/HIP018543cpm.pdf}
\includegraphics[width=8.3cm,page=1]{Figures/CPMfigures/HIP024608cpm.pdf}\\
\includegraphics[width=8.3cm,page=2]{Figures/CPMfigures/HIP049669cpm.pdf}
\includegraphics[width=8.3cm,page=2]{Figures/CPMfigures/HIP054061cpm.pdf}\\
\includegraphics[width=8.3cm,page=2]{Figures/CPMfigures/HIP072105cpm.pdf}
\includegraphics[width=8.3cm,page=2]{Figures/CPMfigures/HIP111954cpm.pdf}
\caption{Field charts of the bright stars with bound candidate companions $\gamma$\,Eri, $\alpha$\,Aur, $\alpha$\,Leo, $\alpha$\,UMa, $\epsilon$\,Boo, and $\epsilon$\,PsA. \label{SBright}}
\end{figure*}
\subsubsection{$\gamma$ Eri}
We identified a CPM companion (Fig.~\ref{SBright}), with a very low mass of $\approx 0.1\,M_\odot$, to the nearby red giant star $\gamma$\,Eri\,A (\object{HIP 18543}; spectral type M0III). The projected separation between component B and the primary is 1\,kau, and its Gaia $G$ band magnitude is $G=16.1$.
$\gamma$\,Eri\,A exhibits a moderate PMa in Gaia EDR3 ($S/N=3.2$). This indicates the presence of an additional close-in companion, possibly a low-mass red dwarf ($M<0.4\,M_\odot$) orbiting within 50\,au of the primary. This PMa signal cannot be explained by the resolved CPM companion, whose mass is insufficient.
\subsubsection{$\alpha$\,Aur (Capella)}
We confirmed the two bound CPM companions \object{GJ 195 AB} of the nearby giant star $\alpha$\,Aur (\object{Capella}, \object{HIP 24608}, \object{HD 34029}; $d=13$\,pc), with estimated masses of 0.53 and $0.57\,M_\odot$. These companions, located at a projected separation of 9.5\,kau from Capella A, were discovered by \citetads{1914AN....197..181F}. As the primary Capella A is itself an equal mass binary \citepads{2011A&A...531A..89W, 2013A&A...560A.113H}, the system is therefore at least a quadruple.
The very wide unbound CPM companion \object{50 Per} proposed by \citetads{2011ApJS..192....2S} located at a projected separation of 5.4\,pc is outside of the 1\,pc search limit of our survey.
\subsubsection{$\alpha$ Leo (Regulus)}
Next, $\alpha$\,Leo A (\object{HIP 49669}) is known to be a close spectroscopic binary \citepads{2008ApJ...682L.117G} whose companion $\alpha$\,Leo Ab was recently characterized by \citetads{2020ApJ...902...25G} as a 0.3\,M$_\odot$ pre-white dwarf. The main component A is a very-fast-rotating star that is seen almost equator-on \citepads{2005ApJ...628..439M}.
We confirmed that it has two additional bound candidate companions: \object{Gaia EDR3 3880785530720066176} (hereafter $\alpha$\,Leo B) and \object{Gaia EDR3 3880785530720066304} ($\alpha$\,Leo C), which have been known to be co-moving with component A since the 19th century \citepads{10.1093/mnras/51.8.460}. They are a pair of relatively low-mass stars that are most likely gravitationally bound together, located at a projected separation of 4,300\,au from $\alpha$\,Leo A (Fig.~\ref{SBright}).
The position angle of $\alpha$\,Leo B with respect to A has slightly evolved from $305.1^\circ$ at epoch 1781.84 (as measured by Herschel) to $307.47^\circ$ at epoch 2016.0. The photometric estimate of the mass of B is around 0.63\,M$_\odot$, corresponding to a K7V spectral type \citepads{2012ApJ...746..154P,2013ApJS..208....9P}. It is only this component that has been identified as bound to Regulus AB, with a very high total score of 0.99. The estimation of the mass of C is complicated as the photometry is scarce, but being 3.5\,magnitudes fainter than component B in the $G$ band, it is likely an M4V red dwarf with a mass around 0.2\,M$_\odot$. This component was not identified by our search algorithm as bound to Regulus AB as its relative velocity of 2.8 km\,s$^{-1}$, caused by the orbital motion of the BC pair, is higher than the escape velocity.
It is possible to take advantage of the Gaia EDR3 parallaxes of components B ($\varpi[B] = 41.310 \pm 0.031$\,mas) and C ($\varpi[C] = 41.242 \pm 0.067$\,mas) to refine the Hipparcos parallax of $\alpha$\,Leo A ($\varpi_\mathrm{Hip}[A] = 41.130 \pm 0.350$\,mas).
\subsubsection{$\alpha$ UMa (Dubhe)}
Then, $\alpha$\,UMa (\object{HIP 54061}) is a very bright ($m_V = 1.8$) spectroscopic binary system. We detect the presence of a very low-mass dwarf companion (\object{Gaia EDR3 862234033499968640}; $m \approx 0.1$\,M$_\odot$) at a projected separation of 550\,au (Fig.~\ref{SBright}). The total score $P_\mathrm{tot}=0.602$ of this star is however close to the limit we adopted for bound candidates (Sect.~\ref{totalscore}). Due to the additional uncertainty on the systemic PM of the primary induced by its binarity, the gravitational link should be considered uncertain.
\subsubsection{$\epsilon$\,Boo}
We identified a candidate brown dwarf CPM companion (\object{Gaia EDR3 1279752168030730496}) to the A0V+K0II-III binary $\epsilon$\,Boo (\object{HIP 72105}) at a projected separation of 4.9\,kau (Fig.~\ref{SBright}). An additional CPM companion (\object{Gaia EDR3 1267607615425592448}, \object{2MASS J14454000+2615167}) with a very low relative tangential velocity of $\Delta \varv_\mathrm{tan} = 0.1 \pm 0.2$\,km\,s$^{-1}$ is also identified at a much wider separation of 186\,kau. Thus, $\epsilon$\,Boo may, in fact, be a quadruple system.
\subsubsection{$\epsilon$\,PsA}
The emission-line dwarf $\epsilon$\,PsA (\object{HIP 111954}, \object{HD 214748}) of spectral type B8Ve is a fast-rotating star \citepads{2019A&A...621A.123C} that exhibits both a significant PMa signal ($S/N = 12.7$) and a bound CPM candidate companion.
The PMa is visible in Fig.~\ref{SBright} as a difference between the long-term Hipparcos-Gaia PM vector (light green) and the short term Hipparcos and Gaia EDR3 PM vectors. The resolved companion $\epsilon$\,PsA\,B is likely a low-mass red dwarf ($m_B \approx 0.23$\,M$_\odot$), whose tangential velocity difference is only $\Delta \varv_\mathrm{tan} = 0.37 \pm 0.60$\,km\,s$^{-1}$ with respect to $\epsilon$\,PsA\,A. This projected velocity is well below the escape velocity at the projected separation of 11.7\,kau ($v_\mathrm{esc} \approx 0.95$\,km\,s$^{-1}$), considering a mass of $6\,M_\odot$ for the primary.
The observed PMa signal of the main component A cannot be caused by the resolved companion B; rather, the signal indicates the presence of a third component in the system orbiting close to the primary. As shown in Fig.~\ref{epsPsA-m2r}, the companion is possibly a solar mass star orbiting between $\approx 6$ to 30\,au from the primary. Alternatively, it could also be a more massive star orbiting at a larger separation.
The position angle of the Gaia EDR3 tangential velocity anomaly is $PA = 263.8 \pm 2.7 \deg$ for a norm of $\Delta \varv_\mathrm{tan,G3} = 3.6 \pm 0.3$\,km\,s$^{-1}$ ($S/N=12.7$). The PA coincides modulo $180^\circ$ with the position angle of the gaseous equatorial disk of the Be star, which was found by \citetads{2019A&A...621A.123C} to be $\mathrm{PA}=67^\circ$ (with a high inclination of $i=73^\circ$ on the line of sight). This indicates that the stellar mass close-in companion is possibly orbiting in the same plane as the disk. The PMa is also significant from the Hipparcos catalog ($S/N = 3.9$), with a position angle of $285.9 \pm 9\deg$ and a tangential velocity residual of $\Delta \varv_\mathrm{tan,H} = 2.6 \pm 0.7$\,km\,s$^{-1}$.
\begin{figure}
\includegraphics[width=\hsize]{Figures/epsPsA-m2r.pdf}
\caption{PMa sensitivity diagram of the fast rotating Be star $\epsilon$\,PsA. \label{epsPsA-m2r}}
\end{figure}
\subsubsection{L$_2$ Puppis}
The semi-regular pulsating red giant L$_2$\,Puppis (\object{HIP 34922}, \object{HD 56096}) exhibits a significant PMa signal in Gaia EDR3 ($S/N = 4.0$) as well as in DR2 ($S/N=3.6$). However, the interpretation of this signal in terms of the presence of a massive companion is not pertinent.
The first reason is that the inhomogeneities present on the surface of giant and supergiant evolved stars (caused by their very large convective cells) affect the position of the photocenter, therefore adding noise to the astrometric measurements \citepads{2011A&A...528A.120C}.
In the case of L$_2$\,Pup, the situation is further complicated by the presence of an inhomogeneous circumstellar dust disk \citepads{2002MNRAS.337...79B, 2014A&A...564A..88K, 2015A&A...576A..46L, 2015A&A...581C...2L, 2020ApJ...901..144N} that is observed almost edge-on ($i=82^\circ$; \citeads{2015A&A...578A..77K, 2016A&A...596A..92K, 2017A&A...601A...5H}).
This disk partially hides the stellar disk and shifts the position of its photocenter in a time-variable way as the star pulsates with a period of $P\approx141$\,days.
The position angle of the EDR3 PMa vector is $PA = 180^\circ$, namely, it is perpendicular to the disk plane. This is consistent with the expected shift of the photocenter as the partially occulted photosphere emerges more or less in a north-south direction above the disk edge.
We identified a bound candidate CPM companion to L$_2$\,Pup (\object{Gaia EDR3 5559704601965623680}) located at a projected separation of 2100\,au (Fig.~\ref{HIP034922cpm}). L$_2$\,Pup\,B is a faint red dwarf with an estimated mass of $m_B = 0.15\,M_\odot$. Its parallax ($\varpi_\mathrm{G3}[B] = 16.465 \pm 0.028$\,mas) is much more accurate than the parallax of L$_2$\,Pup\,A, both from Hipparcos ($\varpi_\mathrm{H}[A] = 15.61 \pm 0.99$\,mas; \citeads{2007ASSL..350.....V}) and the EDR3 ($\varpi_\mathrm{G3}[A] = 17.79 \pm 0.94$\,mas, RUWE = 8.8). This makes L$_2$\,Pup\,B a valuable proxy for evaluating the distance of the primary. As a remark, the Gaia DR2 parallax of L$_2$\,Pup\,A was off by a factor of two ($\varpi_\mathrm{G2}[A] = 7.36 \pm 0.61$\,mas), likely biased by the variability of the photocenter of the star.
\begin{figure}
\includegraphics[width=\hsize,page=2]{Figures/CPMfigures/HIP034922cpm.pdf}
\caption{Field chart of L$_2$\,Puppis showing its red dwarf companion.\label{HIP034922cpm}}
\end{figure}
\subsection{Resolved binary stars}
\subsubsection{GJ 65 AB \label{GJ65}}
Gliese 65 is a pair of very low-mass red dwarfs with late M5.5Ve and M6Ve spectral types (\object{GJ65 AB}, \object{Luyten 726-8}, \object{BL Cet}+\object{UV Cet}), which are relatively fast rotators \citepads{2017MNRAS.471..811B}. The two components are both present in the EDR3 catalog (Table~\ref{GJ65-data}). The close proximity of this system ($d=2.7$\,pc) allowed \citetads{2016A&A...593A.127K} to measure their radii using optical interferometry ($R(A)=0.165 \pm 0.006\,R_\odot$, $R(B)=0.159 \pm 0.006\,R_\odot$) and determine their masses ($m(A)=0.1225 \pm 0.0043\,M_\odot$; $m(B) = 0.1195 \pm 0.0043 M_\odot$) from their orbital motion.
These accurate physical parameters make them particularly attractive benchmarks for models of very low-mass stars. The barycentric parallax ($\varpi = 373.7 \pm 2.7$\,mas) obtained by \citetads{2016A&A...593A.127K} is in good agreement with the mean EDR3 parallax of the two stars ($\varpi_\mathrm{G3} = 371.92 \pm 0.42$\,mas), although the RUWE is high for the two stars (Table~\ref{GJ65-data}). The mean parallax from the Gaia DR2 catalog ($\varpi_\mathrm{G2} = 371.03 \pm 0.21$\,mas) is within $2.1 \sigma$ of the EDR3 value and is also consistent with the orbital parallax determined by \citetads{2016A&A...593A.127K}.
\begin{table*}
\caption{Astrometry of the components of the red dwarf binary GJ65 AB from Gaia DR2 and EDR3 and their barycenter, adopting the fractional mass $m_B/(m_A+m_B) = 0.4938 \pm 0.0031$ from \citetads{2016A&A...593A.127K}.
\label{GJ65-data}}
\centering
\renewcommand{\arraystretch}{1.2}
\tiny
\begin{tabular}{lccccccc}
\hline
\hline
Star & Number & RUWE & RA & Dec & $\mu_\alpha$ & $\mu_\delta$ & $\varpi$ \\
& & & & & (mas\,a$^{-1}$) & (mas\,a$^{-1}$) & (mas) \\
\hline \noalign{\smallskip}
& Gaia DR2 \\
GJ65 A & 5140693571158739840 & 6.5 & 01h39m05.05425s & $-$17d56m54.1548s & $+3385.90 \pm 0.53$ & $+531.97 \pm 0.41$ & $369.96 \pm 0.29$ \\
GJ65 B & 5140693571158739712 & 6.9 & 01h39m05.09051s & $-$17d56m51.9462s & $+3182.81 \pm 0.60$ & $+592.04 \pm 0.46$ & $372.19 \pm 0.30$ \\
GJ65 AB & & & 01h39m05.0722s & $-$17d56m53.0642s & &\\
\hline \noalign{\smallskip}
& Gaia EDR3 \\
GJ65 A & 5140693571158739840 & 12.4 & 01h39m05.17303s & $-$17d56m53.8796s & $+3385.30 \pm 0.67$ & $+544.42 \pm 0.38$ & $367.76 \pm 0.83$ \\
GJ65 B & 5140693571158946048 & 10.5 & 01h39m05.20181s & $-$17d56m51.6583s & $+3178.68 \pm 0.43$ & $+584.10 \pm 0.30$ & $373.84 \pm 0.56$ \\
GJ65 AB & & & 01h39m05.1872s & $-$17d56m52.7827s & & \\
\hline
\end{tabular}
\end{table*}
\begin{table}
\caption{Proper motion of the GJ65 AB barycenter from the weighted mean of the Gaia DR2 and EDR3 proper motion vectors of components A and B (first two lines) and from the difference in position between DR2 and EDR3 (last line).
\label{GJ65-bary}}
\centering
\begin{tabular}{lcc}
\hline
\hline
Method & $\mu_\alpha$ & $\mu_\delta$ \\
& (mas\,a$^{-1}$) & (mas\,a$^{-1}$) \\
\hline \noalign{\smallskip}
Gaia DR2 $\vec{\mu}$ avg. & $+3285.61 \pm 0.40$ & $+561.63 \pm 0.31$ \\
Gaia EDR3 $\vec{\mu}$ avg. & $+3283.29 \pm 0.40$ & $+563.98 \pm 0.24$ \\
\hline \noalign{\smallskip}
DR2-EDR3 pos. & $+3284.66 \pm 0.28$ & $+562.96 \pm 0.24$ \\
\hline
\end{tabular}
\end{table}
From the binary orbit, \citetads{2016A&A...593A.127K} estimated the fractional mass $m(B)/m_\mathrm{tot} = m(B)/\left[m(A)+m(B)\right] = 0.4938 \pm 0.0031$ ($\pm 0.6\%$), making it possible to determine the position of their barycenter from the positions of the two stars.
We can estimate the PM vector $\vec{\mu_\mathrm{AB}}$ of the barycenter using two different approaches: from the mean of the PM vectors of the two components (weighted by their respective masses), and from the difference in position of the barycenter between the Gaia DR2 and EDR3 epochs (Table~\ref{GJ65-data}). Table~\ref{GJ65-bary} gives the resulting measurements of the barycentric PM vector using these two techniques. A difference at a level of $5\sigma$ is present between the DR2 and EDR3 values, which bracket the vector derived from the DR2 and EDR3 positions. This difference may indicate that the motion of one of the two stars is perturbed by the presence of a third body (see, however, the caveats discussed below).
We computed the PM vector of the barycenter from the difference between its positions at the DR2 and EDR3 epochs. It was then possible to derive the orbital velocity vector of each star A and B by subtracting from the DR2 and EDR3 PM vectors $\vec{\mu}(A)$ and $\vec{\mu}(B)$ the PM of the barycenter $\vec{\mu_\mathrm{AB}}$ through $\vec{\mu_\mathrm{orb}}(A/B) = \vec{\mu}(A/B) - \vec{\mu_\mathrm{AB}}$. A diagram of the resulting PM vectors is presented in Fig.~\ref{GJ65-orbPM}.
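For illustration, the first approach reduces to a mass-weighted mean, as in the following Python sketch (values from Table~\ref{GJ65-data}; the function name is ours and purely illustrative):
\begin{verbatim}
import numpy as np

F_B = 0.4938   # m_B / (m_A + m_B), Kervella et al. (2016)

def pm_barycenter(pm_A, pm_B, f_B=F_B):
    # Barycentric PM as the mass-weighted mean of the
    # component PM vectors (mas/a).
    return (1.0 - f_B) * np.asarray(pm_A) \
        + f_B * np.asarray(pm_B)

# Gaia EDR3 PM vectors of GJ65 A and B (mas/a):
print(pm_barycenter([3385.30, 544.42], [3178.68, 584.10]))
# -> approx. [+3283.3, +564.0], in agreement with the
#    EDR3 mean of Table 'GJ65-bary'
\end{verbatim}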
\begin{figure}
\includegraphics[width=\hsize]{Figures/Binary-GJ65AB.pdf}
\caption{Tangential orbital velocity vectors of GJ65 A and B from Gaia DR2 and EDR3.\label{GJ65-orbPM}}
\end{figure}
For a simple two-star system, the orbital velocity vectors of the two stars are collinear, with opposite directions, and their norms are inversely proportional to each star's mass. As discussed by \citetads{2019A&A...623A..72K}, it is therefore possible to search for the signature of an additional massive body orbiting one of the two stars from the orbital velocity anomaly, $\Delta \mu_\mathrm{orb}$, defined as the quantity:
\begin{equation}
\vec{\Delta \mu_\mathrm{orb}}(B) = \vec{\mu_\mathrm{orb}}(B) + \frac{m(A)}{m(B)}\, \vec{\mu_\mathrm{orb}}(A)
.\end{equation}
We obtain the following orbital velocity anomaly vectors from the DR2 and EDR3 data, expressed angularly:
\begin{align}
\vec{\Delta \mu_\mathrm{orb}}(B)[\mathrm{DR2}] & = (+1.93 \pm 0.91 , -2.69 \pm 0.72)\,\mathrm{mas\,a}^{-1} ,\\
\vec{\Delta \mu_\mathrm{orb}}(B)[\mathrm{EDR3}] & = (-2.79 \pm 0.91 , +2.06 \pm 0.61)\,\mathrm{mas\,a}^{-1},
\end{align}
which we can also express in tangential velocity, knowing the parallax of the system:
\begin{align}
\vec{\Delta \varv_\mathrm{orb}}(B)[\mathrm{DR2}] & = (+24.6 \pm 11.5 , -34.3 \pm 9.2)\,\mathrm{m\,s}^{-1} ,\\
\vec{\Delta \varv_\mathrm{orb}}(B)[\mathrm{EDR3}] & = (-35.5 \pm 11.5 , +26.2 \pm 7.8)\,\mathrm{m\,s}^{-1}.
\end{align}
These residuals are significant at a level of $\approx 3\sigma$ both for the DR2 and EDR3 epochs, which may, in principle, indicate the presence of a third body in orbit around one of the components.
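These tangential velocities follow from the standard relation $\varv\,[\mathrm{km\,s^{-1}}] = 4.74047\,\mu\,[\mathrm{mas\,a^{-1}}]\,/\,\varpi\,[\mathrm{mas}]$, as in this minimal sketch (illustrative only):
\begin{verbatim}
import numpy as np

def mu_to_v(mu_mas_a, plx_mas):
    # Tangential velocity (m/s) from a PM vector (mas/a)
    # and a parallax (mas).
    return 4740.47 * np.asarray(mu_mas_a) / plx_mas

# EDR3 orbital velocity anomaly of GJ65 B, at the mean
# EDR3 parallax of the pair (371.92 mas):
print(mu_to_v([-2.79, +2.06], 371.92))  # -> ~[-35.6, +26.3] m/s
\end{verbatim}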
However, these results should be considered as a demonstration of principle and not a detection, due to the large RUWE of the Gaia measurements of the two stars. This high RUWE is possibly caused by the orbital curvature of the trajectories of the two stars, which is not taken into account in the EDR3 astrometric reduction and may result in a bias on the determined PM vectors. Alternatively, the angular proximity of the two stars (EDR3 separation $\approx 2.26\arcsec$) and their apparent brightness ($G(A) = 10.5$ and $G(B) = 10.8$) may induce a mutual contamination of the two stars on the Gaia detectors, depending on the position angle of each observed transit. Due to the relatively short orbital period ($P_\mathrm{orb} = 26.3$\,years), this effect also evolves significantly over time.
When available, the analysis of the epoch astrometry of Gaia will enable a thorough search for low-mass companions from a combined fit of the barycentric PM and parallactic wobble, together with the orbital motion of the components.
As a remark, the differential astrometry of GJ65 AB is monitored using the GRAVITY instrument \citepads[see Sect. 3.6 of]{2017A&A...602A..94G}, with an accuracy on the order of $50\,\mu$as. The objective of this project is to search for the signature of low-mass planets orbiting one of the two stars as a deviation of the differential astrometry of A and B from a two-body orbit.
\subsubsection{61 Cyg AB \label{61Cyg}}
The binary star \object{61 Cyg} AB comprises a K5V primary (\object{ADS 14636A}, \object{GJ 820A}, \object{HD 201091}, \object{HIP 104214}) and a K7V secondary (\object{ADS 14636B}, \object{GJ 820B}, \object{HD 201092}, \object{HIP 104217}). It is one of the nearest stellar systems ($d=3.5$\,pc), and it is thanks to this proximity that \citetads{2008A&A...488..667K} and \citetads{2009ApJ...694.1085V} were able to measure the angular diameters of the two components using optical interferometry. The eccentric orbit of the system ($e \approx 0.4$) and its very long orbital period (around 7 centuries, \citeads{2012A&A...546A..69M}) make the dynamical determination of the masses relatively difficult. Existing estimates range from 0.67 to $0.79\ M_\odot$ for A and 0.52 to $0.63\ M_\odot$ for B \citepads{1995Icar..116..359W, 2008A&A...488..667K, 2009ApJ...694.1085V, 2012ApJ...757..112B, 2018RAA....18...94S}.
Following \citetads{2019A&A...623A..72K}, we adopted the masses determined from the photometric mass-luminosity relation by \citetads{2015ApJ...804...64M}: $m(A)= 0.708 \pm 0.053\ M_\odot$ and $m(B)= 0.657 \pm 0.057\ M_\odot$, close to the best-fit values of $m(A) = 0.69\ M_\odot$ and $m(B) = 0.61\ M_\odot$ obtained by \citetads{2008A&A...488..667K} from evolutionary modeling with the CESAM2k code \citepads{1997A&AS..124..597M, 2008Ap&SS.316...61M, 2010ascl.soft10059M}.
The photometric masses correspond to a mass ratio of $m(B)/m(A) = 0.93 \pm 0.11$. From an astrometric determination of the radial velocity of 61 Cyg A and B using Hipparcos and Gaia EDR3, \citetads{2021A&A...652A..45L} obtained a mass ratio $m(B)/m(A) = 0.76 \pm 0.05$, which is $1.6\sigma$ smaller than our adopted value.
\citetads{2019A&A...623A..72K} presented an analysis of the PM of 61 Cyg AB using Hipparcos and Gaia DR2. We hereby extend this analysis using Gaia EDR3 astrometry.
\begin{table}
\caption{Proper motion of the 61 Cyg AB barycenter from the weighted mean of the Gaia DR2 and EDR3 proper motion vectors of components A and B (first two lines), and from the difference in position between the Hipparcos and the DR2/EDR3 epochs (last two lines).
\label{61Cyg-bary}}
\centering
\begin{tabular}{lcc}
\hline
\hline
Method & $\mu_\alpha$ & $\mu_\delta$ \\
& (mas\,a$^{-1}$) & (mas\,a$^{-1}$) \\
\hline \noalign{\smallskip}
Gaia DR2 $\vec{\mu}$ avg. & $+4136.10 \pm 0.12$ & $+3204.47 \pm 0.15$ \\
Gaia EDR3 $\vec{\mu}$ avg. & $+4136.17 \pm 0.03$ & $+3204.55 \pm 0.03$ \\
\hline \noalign{\smallskip}
Hip-DR2 pos. & $+4133.66 \pm 0.81$ & $+3203.81 \pm 0.17$ \\
Hip-EDR3 pos. & $+4133.71 \pm 0.79$ & $+3203.81 \pm 0.17$ \\
\hline
\end{tabular}
\end{table}
Following the approach of Sect.~\ref{GJ65}, we first estimated the PM of the barycenter of the system both from the weighted mean of the components' PM vectors and from the difference between the DR2/EDR3 position and the Hipparcos position.
The results are presented in Table~\ref{61Cyg-bary}. Although there is an excellent agreement between the determinations obtained using each of the two methods considered individually, there is a difference $\Delta \mu_\mathrm{AB}$ at a $3\sigma$ level between the two methods. Contrary to GJ65, for which the high RUWE casts doubt on the reliability of the Gaia astrometry, the Gaia EDR3 measurements of both components exhibit a satisfactory RUWE level below 1.4 (1.0 and 1.2 for A and B, respectively). The quantity $\Delta \mu_\mathrm{AB}$ is a difference between the long-term (Hipparcos-Gaia) and short-term (Gaia average) estimates of the barycentric PM vector, and is therefore equivalent to the PMa defined for individual stars (Sect.~\ref{HG-PMa}).
This observed PMa is robust against a change in the mass ratio of the AB pair. Adopting a lower mass ratio $m(B)/m(A)=0.76$ \citepads{2021A&A...652A..45L} or a higher value of 1.0 (equal mass) for the computation modifies the barycentric PM vectors, but the observed PMa remains significant at a $\approx 3\sigma$ level.
This significant barycentric PMa indicates the probable presence of a third body orbiting either (1) one of the two components A or B (S-type companion) or (2) the AB pair (circumbinary, P-type companion). In hypothesis (1), the gravitational pull of the putative companion ``drags'' the PM of one of the two components, therefore biasing the short-term barycenter PM computed from the mean of the two component PM vectors. In situation (2), the presence of a very wide companion in circumbinary orbit would shift the PM vectors of both components A and B in the same way. This second hypothesis is, however, unlikely to be correct, as the period of a circumbinary companion would be extremely long (millennial scale). This would induce an undetectable shift on the short-term PM of the pair. The presence of a companion orbiting one of the two stars is therefore the most likely explanation for the observed anomaly on the barycenter PM of 61 Cyg AB.
To further test this hypothesis, we now examine the PM of each component. We first derived the orbital velocity vectors (as in Sect.~\ref{GJ65}) by subtracting the Hipparcos-Gaia barycentric PM from the Gaia PM vector of each star. The resulting vectors are presented in Table~\ref{61Cyg-orbitPM}, together with differential quantities. We observed a divergence in the position angle $\theta$ of the tangential velocity vectors of the two stars, which is also visible in Fig.~\ref{61Cyg-orbPM}. This difference reaches $\Delta \theta_{AB} = 3.2 \pm 1.0\,\deg$ and is consistent between the DR2 and EDR3 epochs. The orbital velocity offset of component B relative to A is significant at a $4.4\sigma$ level at $\Delta \varv_\mathrm{orb} =88 \pm 20$\,m\,s$^{-1}$ at a position angle of $\theta = 74 \pm 6\,\deg$, with consistent values from the DR2 and EDR3 data.
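The position angles listed in Table~\ref{61Cyg-orbitPM} follow the usual convention (counted from north through east); a minimal sketch of the computation (illustrative only):
\begin{verbatim}
import numpy as np

def position_angle_deg(mu):
    # Position angle (deg, north through east) of a PM
    # vector given as (mu_alpha, mu_delta).
    return np.degrees(np.arctan2(mu[0], mu[1])) % 360.0

theta_A = position_angle_deg([+31.49, +47.74])  # ~33.4 deg
theta_B = position_angle_deg([-26.74, -45.93])  # ~210.2 deg
print(theta_A - theta_B + 180.0)                # ~3.2 deg
\end{verbatim}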
\begin{table*}
\caption{Orbital velocity vectors of the 61 Cyg components.
\label{61Cyg-orbitPM}}
\centering
\begin{tabular}{lcc}
\hline
\hline \noalign{\smallskip}
& 61 Cyg A & 61 Cyg B \\
\hline \noalign{\smallskip}
\textbf{Gaia DR2} \\
$\vec{\mu_\mathrm{orb}}$ (mas\,a$^{-1}$) & $(+31.54 \pm 0.81, +47.95 \pm 0.32)$ & $(-26.85 \pm 0.79, -46.28 \pm 0.20)$ \\
$\vec{\mu_\mathrm{orb}}$ position angle $\theta$ & $33.33 \pm 0.69\ \deg$ & $210.12 \pm 0.75\ \deg$ \\
Diff. position angle $\Delta \theta_{AB} = \theta(A) - \theta(B) + 180^\circ$ & \multicolumn{2}{c}{$3.21 \pm 1.01\ \deg$} \\
$\vec{\Delta \mu_\mathrm{orb}} = \vec{\mu_\mathrm{orb}}(B) + (m_A/m_B)\, \vec{\mu_\mathrm{orb}}(A)$ & \multicolumn{2}{c}{$(+4.69 \pm1.13, +1.67 \pm 0.38)$ mas\,a$^{-1}$} \\
$\vec{\Delta \varv_\mathrm{orb}} = \vec{\varv_\mathrm{orb}}(B) + (m_A/m_B)\, \vec{\varv_\mathrm{orb}}(A)$ & \multicolumn{2}{c}{$(+77.7 \pm 18.7, +27.7 \pm 6.3)$ m\,s$^{-1}$} \\
$\vec{\Delta \varv_\mathrm{orb}}$ norm, PA & \multicolumn{2}{c}{$87.1 \pm 21.2$ m\,s$^{-1}$, $+74.6 \pm 6.3\,\deg$} \\
\hline \noalign{\smallskip}
\textbf{Gaia EDR3} \\
$\vec{\mu_\mathrm{orb}}$ (mas\,a$^{-1}$) & $(+31.49 \pm 0.77, +47.74 \pm 0.17)$ & $(-26.74 \pm 0.76, -45.93 \pm 0.17)$ \\
$\vec{\mu_\mathrm{orb}}$ position angle $\theta$ & $33.41 \pm 0.66\ \deg$ & $210.21 \pm 0.70\ \deg$ \\
Diff. position angle $\Delta \theta_{AB} = \theta(A) - \theta(B) + 180^\circ$ & \multicolumn{2}{c}{$3.20 \pm 0.96\ \deg$} \\
$\vec{\Delta \mu_\mathrm{orb}} = \vec{\mu_\mathrm{orb}}(B) + (m_A/m_B)\, \vec{\mu_\mathrm{orb}}(A)$ & \multicolumn{2}{c}{$(+4.75 \pm 1.08, +1.81 \pm 0.24)$ mas\,a$^{-1}$} \\
$\vec{\Delta \varv_\mathrm{orb}} = \vec{\varv_\mathrm{orb}}(B) +(m_A/m_B)\, \vec{\varv_\mathrm{orb}}(A)$ & \multicolumn{2}{c}{$(+78.7 \pm 17.9, +30.0 \pm 4.0)$ m\,s$^{-1}$} \\
$\vec{\Delta \varv_\mathrm{orb}}$ norm, PA & \multicolumn{2}{c}{$88.5 \pm 19.8$ m\,s$^{-1}$, $+73.5 \pm 5.4\,\deg$} \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=\hsize]{Figures/Binary-61CygAB.pdf}
\includegraphics[width=\hsize]{Figures/Binary-61CygAB-zoom.pdf}
\caption{Proper motion and tangential orbital velocity of 61 Cyg A and B from Hipparcos, Gaia DR2 and EDR3 (top panel) and enlargement of Gaia DR2 and EDR3 data showing the divergence between the orbital velocity vectors of components A and B (bottom panel). The position of the barycenter is marked with a `+' symbol.\label{61Cyg-orbPM}}
\end{figure}
The measured orbital velocity anomaly is differential in nature between components A and B, so it is, in principle, not possible to determine which of the two stars the companion orbits. Qualitatively, the orbital reflex motion due to the companion could result either in an increase or a decrease of the tangential orbital velocity of its host star, depending on the orbital phase.
In principle, the interpretation of the orbital velocity anomaly in terms of companion mass is similar to that of the PMa presented in Sect.~\ref{sensitivity_pma}. As Gaia PMs are average values over the measurement periods, we have a smearing of the velocity signature as in the case of the classical Hipparcos-Gaia PMa. However, as the orbital velocity anomaly is a differential quantity between two ``instantaneous'' velocities (of stars A and B), there is no decrease in sensitivity for very long orbital periods.
The green domain in Fig.~\ref{61Cyg-orbMass} shows the range of possible combinations of companion mass and orbital radius that would explain the observed orbital velocity anomaly. The plot is drawn for the adopted mass of 61\,Cyg B ($0.657 \pm 0.057\ M_\odot$), but the figure is almost the same for component A.
According to \citetads{2005A&A...434..355M}, stable orbits of S-type planets are expected for equal-mass binaries up to a star-planet separation of 0.22 times the stellar separation. With a semi-major axis of $a = 24.5\arcsec$ corresponding to $\approx 85$\,au (\citeads{2012A&A...546A..69M}; \citeads{2001AJ....122.3472H}), stable orbits are therefore expected within $\approx 20$\,au of each star. The shaded region in Fig.~\ref{61Cyg-orbMass} shows the domain of unstable orbits at larger separations. The constancy of the velocity anomaly between the DR2 and EDR3 epochs makes a short-period planet unlikely.
\citetads{1943PASP...55...29S,1957AJ.....62Q..35S} announced the detection of a massive planet (or brown dwarf) orbiting one of the components of 61\,Cyg with a period of around 5\,years. The presence of a massive companion on such a short-period orbit was later disproved by \citetads{1995Icar..116..359W} and \citetads{2008PASP..120..531C}. \citetads{2021AJ....161..134H} identified a low-amplitude radial velocity signal with $K=2.8$\,m\,s$^{-1}$ on 61\,Cyg A with a period of 2\,600\,days ($\approx 7$\,years), which they attributed to stellar activity (see also \citeads{2017ApJ...845...79B}) and classified as a false positive. \citetads{2017AJ....153..208B} found no significant RV signal for either the A or B component. Based on a 10\,000\,day time series of radial velocity measurements, Figs.\,83 and 84 of \citetads{2016PASP..128k4401H} show a non-excluded domain for a high-mass planetary companion of 61\,Cyg~A or B at a separation of 10\,au and above. A radial velocity signal at a level of several tens of m\,s$^{-1}$ would likely have been detected by recent radial velocity surveys, possibly indicating a high inclination of the planetary orbit and a low radial velocity amplitude. From adaptive optics imaging in the infrared, \citetads{2010ApJ...714.1551H} obtained detection limits of 8 to 10\,$M_\mathrm{Jup}$ between 10 and 30\,au from 61\,Cyg~B (their Fig.~8). However, their assumed age of 2\,Ga for the system appears underestimated (\citeads{2008A&A...488..667K} obtain 6\,Ga), and this older age would result in increased mass detection limits.
Combining the observed velocity anomaly with these observational constraints, the most probable properties of the exoplanet (or low-mass brown dwarf) present in the 61\,Cyg system are, therefore, a mass of $m_2 \approx 10\,M_\mathrm{Jup}$ and an orbital radius between $\approx 10$ and 20\,au (Fig.~\ref{61Cyg-orbMass}). Shorter orbital periods are in principle also possible in the case of high inclination orbits (see, e.g., \citeads{2021A&A...645A...7K}).
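As an order-of-magnitude consistency check (ours, assuming a circular orbit seen with full tangential projection and a host mass of $m_1 = 0.657\,M_\odot$), a $10\,M_\mathrm{Jup}$ companion at $r = 15$\,au would induce a stellar reflex velocity of
$$
\varv_1 = \frac{m_2}{m_1+m_2}\sqrt{\frac{G\,(m_1+m_2)}{r}}
\approx \frac{0.0096}{0.666} \times 29.8\,\mathrm{km\,s^{-1}} \times \sqrt{\frac{0.666}{15}}
\approx 90\,\mathrm{m\,s^{-1}},
$$
comparable to the measured $88 \pm 20$\,m\,s$^{-1}$ anomaly.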
Assuming the same direction on sky as 61\,Cyg AB's orbit for the planetary companion's orbit, the companion would currently be located to the southeast of star B at an angular separation of 3 to $6\arcsec$ or, alternatively, to the northwest of star A within a similar separation range.
\begin{figure}
\includegraphics[width=\hsize]{Figures/Sensitivity-61Cyg-orb-m2r.pdf}
\caption{Companion properties explaining the observed orbital velocity anomaly of 61\,Cyg AB. The regions excluded from radial velocity data by \citetads{2016PASP..128k4401H} (stars A and B) and from imaging by \citetads{2010ApJ...714.1551H} (star B only) are shown in shaded blue and orange, respectively. The unstable domain from interactions with the other component of 61\,Cyg \citepads{2005A&A...434..355M} is shown in light magenta. \label{61Cyg-orbMass}}
\end{figure}
\subsection{Exoplanet host stars}
\subsubsection{Proxima Centauri}
The nearest star to the Sun, \object{Proxima Centauri} (\object{GJ 551}, \object{HIP 70890}) is a very low-mass M5.5Ve red dwarf that is a member of the $\alpha$\,Centauri triple system \citepads{2017A&A...598L...7K}. It orbits the main pair $\alpha$\,Cen AB \citepads{2016kervella, 2021A&A...646A...7S} with a very long period of more than 500\,000 years \citepads{2021AJ....162...14A}.
Although the $\alpha$\,Cen AB pair only has one unconfirmed candidate planet \citepads{2021NatCo..12.2651W, 2021NatCo..12..922W}, Proxima Cen hosts one confirmed terrestrial mass planet orbiting in its habitable zone, \object{Proxima b} \citepads{2016Natur.536..437A, 2017A&A...599A.126D, 2020A&A...639A..77S}. With an orbital period of only $P=11.2$\,d and a semi-major axis of $a=0.05$\,au, Proxima b is undetectable astrometrically from the Gaia DR2 or EDR3 catalog data, as these are, respectively, averaged over periods of approximately 2 and 3\,years. As it induces an expected astrometric wobble of less than $3\,\mu$as on its host star, the planet Proxima b will likely remain undetectable even from the individual epoch astrometry collected over the full Gaia mission.
Another candidate planet, Proxima c, has been detected by \citetads{2020SciA....6.7467D} using the radial velocity technique. With an estimated semi-major axis of $a_c = 1.5$\,au, corresponding to an orbital period of $P = 5.2 \pm 0.3$\,a, and a radial velocity semi-amplitude of $K_c = 1.2 \pm 0.4$\,m\,s$^{-1}$, its minimum mass is estimated to be $m_c \sin i = 5.7 \pm 1.9\,M_\oplus$. Thanks to its longer orbital period and larger expected astrometric signature, Proxima c is in principle detectable using Gaia astrometry. Taking advantage of the marginal Gaia DR2 PMa signal present at a $1.8\sigma$ level in Proxima Cen:
\begin{align}
\vec{\Delta \mu_\mathrm{G2}} & = (+0.218 \pm 0.112, +0.384 \pm 0.215)\ \mathrm{mas\,a}^{-1} ,\\
\vec{\Delta \varv_\mathrm{tan,G2}} & = (+1.34 \pm 0.69, +2.37 \pm 1.33)\ \mathrm{m\,s}^{-1},
\end{align}
\citetads{2020A&A...635L..14K} determined the orbital inclination and a deprojected mass of $m_c = 12^{+12}_{-5}\,M_\oplus$. From HST-FGS astrometry of Proxima Cen, \citetads{2020RNAAS...4...46B} obtained a comparable mass of $m_c = 18 \pm 5\,M_\oplus$.
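For intuition, the astrometric semi-amplitude induced by a planet scales as $\alpha \simeq (m_p/M_\star)(a/d)$. A back-of-envelope Python sketch (ours, adopting $M_\star \approx 0.122\,M_\odot$, $d \approx 1.30$\,pc, and $m_b \approx 1.2\,M_\oplus$; these adopted values are assumptions, not quoted above) gives:
\begin{verbatim}
M_STAR  = 0.122     # adopted mass of Proxima Cen (solar masses)
D_PC    = 1.301     # adopted distance (pc)
M_EARTH = 3.003e-6  # Earth mass in solar masses

def wobble_uas(m_planet_mearth, a_au):
    # Astrometric semi-amplitude alpha = (m_p/M_*) * (a/d), micro-arcsec
    return (m_planet_mearth * M_EARTH / M_STAR) * (a_au / D_PC) * 1e6

print(wobble_uas(1.2, 0.05))   # Proxima b: ~1 uas, below Gaia's reach
print(wobble_uas(5.7, 1.5))    # Proxima c (minimum mass): ~160 uas
\end{verbatim}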
The Gaia EDR3 PMa signal is significantly lower than in the DR2 ($S/N = 0.9$):
\begin{align}
\vec{\Delta \mu_\mathrm{G3}} & = (-0.022 \pm 0.046, -0.069 \pm 0.069)\ \mathrm{mas\,a}^{-1} ,\\
\vec{\Delta \varv_\mathrm{tan,G3}} & = (-0.14 \pm 0.28, -0.42 \pm 0.42)\ \mathrm{m\,s}^{-1}.
\end{align}
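The tangential velocity anomalies quoted above follow from the PMa vectors through a simple scaling with distance: $1$\,mas\,a$^{-1}$ at $1$\,pc corresponds to $4.74$\,m\,s$^{-1}$. A minimal sketch (ours, with $d = 1.301$\,pc adopted for Proxima):
\begin{verbatim}
def pma_to_vtan(dmu_mas_yr, d_pc=1.301):
    # 1 mas/yr at 1 pc corresponds to 4.74047 m/s of tangential velocity
    return 4.74047 * dmu_mas_yr * d_pc

print(pma_to_vtan(+0.218), pma_to_vtan(+0.384))  # DR2:  ~ +1.34, +2.37 m/s
print(pma_to_vtan(-0.022), pma_to_vtan(-0.069))  # EDR3: ~ -0.14, -0.42 m/s
\end{verbatim}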
As shown in Fig.~\ref{ProximaCentauri-m2r}, this PMa level is compatible with a deprojected mass for Proxima c closer to the minimum mass determined by \citetads{2020SciA....6.7467D} than to the values estimated by \citetads{2020A&A...635L..14K} and \citetads{2020RNAAS...4...46B}. However, the orbital period of Proxima c ($P = 5.2$\,a) is only 1.8 times the length of the Gaia EDR3 integration window ($\delta t_\mathrm{G3} = 2.8$\,a). Due to the associated smearing effect, this results in a lower sensitivity of the PMa signal, visible as a peak at 1\,au in the EDR3 curve of Fig.~\ref{ProximaCentauri-m2r}. As this decreased-sensitivity peak will be shifted further toward larger orbital radii for a longer Gaia integration window, the astrometric signature of Proxima c will likely be detectable only in the epoch astrometry of Gaia (expected with the final Gaia data release).
\begin{figure}
\includegraphics[width=\hsize]{Figures/ProximaCentauri-m2r.pdf}
\caption{PMa sensitivity diagram of Proxima for Gaia DR2 (blue) and EDR3 (green) measurements. The minimum masses of the planets Proxima b \citepads{2020A&A...639A..77S} and c \citepads{2020SciA....6.7467D} are represented with orange and red symbols, respectively. \label{ProximaCentauri-m2r}}
\end{figure}
\subsubsection{$\epsilon$ Eridani}
The young K2V dwarf $\epsilon$\,Eri (\object{GJ 144}, \object{HIP 16357}, \object{HD 22049}) is located at a distance of only $d=3.2$\,pc. The presence of a massive planet orbiting this star was first proposed by \citetads{2000ApJ...544L.145H} from radial velocity data. This planet was confirmed by \citetads{1538-3881-157-1-33}, who also established its physical properties using the radial velocity technique ($m_b = 0.78^{+0.38}_{-0.12} M_\mathrm{Jup}$, $P_\mathrm{orb} = 7.37 \pm 0.07$\,a, $a = 3.48 \pm 0.02$\,au). However, direct imaging searches for exoplanets around $\epsilon$\,Eri (e.g., \citeads{pathak2021, 1538-3881-157-1-33, 2015A&A...574A.120J}) did not produce any detections.
\citetads{2021arXiv210701090M} analyzed the PM of $\epsilon$\,Eri based on astrometry with the URAT telescope \citepads{2015AJ....150..101Z}, as well as Hipparcos and Gaia DR2 and EDR3, and obtained a tangential velocity anomaly of $\vec{\Delta \varv_\mathrm{tan}} = (+6, +13)$\,m\,s$^{-1}$ from the long-term Hipparcos+URAT and the Gaia EDR3 short-term PM, in good agreement with the value we obtain from Hipparcos and EDR3, $\vec{\Delta \varv_\mathrm{tan}}[\mathrm{EDR3}] = (+4.7 \pm 2.4, +12.6 \pm 1.8)$\,m\,s$^{-1}$.
The PMa sensitivity diagram (Fig.~\ref{epsEri-star-m2r}) shows the good agreement of the Hipparcos and EDR3 PMa with the properties of $\epsilon$\,Eri b. The Gaia DR2 measurement is not represented, as the accuracy of its PM vector is low (three times lower than that of Hipparcos) and it therefore does not set adequate constraints. The planetary properties excluded by the direct imaging searches of \citetads{pathak2021}, \citetads{1538-3881-157-1-33} and \citetads{2015A&A...574A.120J} are represented as shaded areas in Fig.~\ref{epsEri-star-m2r}. This diagram shows the very good complementarity of the astrometric, radial velocity, and direct imaging approaches to characterize planetary systems. We do not identify any CPM companion of $\epsilon$\,Eri in the Gaia EDR3 catalog.
\begin{figure}
\includegraphics[width=\hsize]{Figures/epsEri-m2r.pdf}
\caption{PMa sensitivity diagram of $\epsilon$\,Eri for the Hipparcos (cyan) and EDR3 (green) measurements. The shaded regions represent the planet properties excluded by direct imaging searches. \label{epsEri-star-m2r}}
\end{figure}
\subsubsection{Kapteyn's star}
We do not detect any significant PMa signal on the very low-mass red dwarf \object{Kapteyn's star} (\object{HIP 24186}, \object{GJ 191}, \object{HD 33793}), either from the DR2 or the EDR3 measurements. The EDR3 residual tangential velocity anomaly is only $\Delta \varv_\mathrm{tan} = 1.46 \pm 0.84$\,m\,s$^{-1}$, that is, $S/N = 1.7$. This level of agreement between the Hipparcos-Gaia long-term PM vector and the short-term Gaia PM vector is remarkable when compared to the total space velocity of the star of more than 290\,km\,s$^{-1}$. As shown in Fig.~\ref{Kapteyn-star-m2r}, this corresponds to an upper limit of $0.1\,M_\mathrm{Jup}$ on the mass of a companion orbiting between 2 and 10\,au. This negative result is consistent with the non-detection of planetary companions of Kapteyn's star by \citetads{2021AJ....161..230B} from radial velocities. We do not identify any CPM companion of Kapteyn's star in the Gaia EDR3 catalog.
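To first order (a sketch of ours, neglecting projection and smearing efficiency factors and adopting $m_1 \approx 0.27\,M_\odot$ for Kapteyn's star, an assumed value), the companion mass probed by a tangential velocity anomaly $\Delta \varv_\mathrm{tan}$ at orbital radius $r$ scales as
$$
m_2 \approx \frac{\Delta \varv_\mathrm{tan}}{29.8\,\mathrm{km\,s^{-1}}}\sqrt{m_1\,r}
\approx 0.04\ \mathrm{to}\ 0.08\,M_\mathrm{Jup}
$$
for $r$ between 2 and 10\,au (with $m_1$, $m_2$ in solar masses and $r$ in au), of the order of the $0.1\,M_\mathrm{Jup}$ upper limit shown in Fig.~\ref{Kapteyn-star-m2r}.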
\begin{figure}
\includegraphics[width=\hsize]{Figures/Kapteyn-star-m2r.pdf}
\caption{PMa sensitivity diagram of Kapteyn's star for Gaia DR2 (blue) and EDR3 (green) proper motion measurements. \label{Kapteyn-star-m2r}}
\end{figure}
\subsubsection{$\epsilon$ Indi}
The K5V primary star $\epsilon$\,Ind A (\object{GJ 845} A, \object{HIP 108870}) of the triple system $\epsilon$\,Ind hosts a massive exoplanet, $\epsilon$\,Ind Ab. It was recently characterized by \citetads{2019MNRAS.490.5002F} as a cold and massive Jupiter analog ($m = 3\,M_\mathrm{Jup}$, $P_\mathrm{orb} = 45$\,a), based on a combination of radial velocity and astrometry from Hipparcos and Gaia. Recent attempts to directly image the planet $\epsilon$\,Ind~Ab in the thermal infrared domain by \citetads{pathak2021} and \citetads{2021A&A...651A..89V} were unsuccessful. We clearly detect the astrometric signature of this planet in the DR2 and EDR3 data, as shown in Fig.~\ref{epsInd-m2r}, with properties compatible with the determination by \citetads{2019MNRAS.490.5002F}.
The secondary $\epsilon$\,Ind B \citepads{2003A&A...398L..29S} is a binary brown dwarf system whose main component $\epsilon$\,Ind Ba (\object{Gaia EDR3 6412596012146801152}) is identified as a bound companion at a linear projected separation of 1.5\,kau and a relative tangential velocity $\Delta \varv_\mathrm{tan}= 1.25 \pm 0.01$\,km\,s$^{-1}$ (Fig.~\ref{HIP108870cpm}).
\begin{figure}
\includegraphics[width=\hsize]{Figures/epsInd-m2r.pdf}
\caption{PMa sensitivity diagram of $\epsilon$\,Ind A for Gaia DR2 (blue) and EDR3 (green) proper motion measurements. The properties of its massive planet $\epsilon$\,Ind Ab determined by \citetads{2019MNRAS.490.5002F} are represented with a red point.\label{epsInd-m2r}}
\end{figure}
\begin{figure}
\includegraphics[width=\hsize,page=1]{Figures/CPMfigures/HIP108870cpm.pdf}
\caption{Field chart of $\epsilon$\,Ind A with the binary brown dwarf companion $\epsilon$\,Ind~B.\label{HIP108870cpm}}
\end{figure}
\subsubsection{$\pi$ Mensae}
Based on radial velocity measurements, \citetads{2002MNRAS.333..871J} identified a massive planet ($\pi$\,Men\,b) orbiting the nearby (18.3\,pc) high-velocity G0V dwarf $\pi$\,Men (\object{HIP 26394}, \object{HD 39091}), with a period of 5.6\,years. Based on Hipparcos astrometry, \citetads{2011A&A...527A.140R} reported that this companion has a likely mass below $30\,M_\mathrm{Jup}$, and \citetads{2017ApJ...836..139F} classified this star as binary. The discovery of a transiting super-Earth ($\pi$\,Men\,c) with a mass around $5\,M_\oplus$ and an orbital period of 6\,days by \citetads{2018ApJ...868L..39H} and \citetads{2018A&A...619L..10G} considerably renewed the interest in the $\pi$\,Men system.
Using a combination of data sets, including Hipparcos and Gaia astrometry, the mutual inclination of the two planets was found to be remarkably high \citepads{2020MNRAS.497.2096X,2020A&A...640A..73D,2020A&A...642A..31D}.
Additionally, \citetads{2021MNRAS.502.2893K} found from transit spectroscopy that the rotation axis of the star is misaligned by $\approx 24\,\deg$ with respect to the orbit of the inner super-Earth $\pi$\,Men\,c.
While the latter is beyond the reach of the PMa technique with Gaia, planet b is well within its sensitivity range. We present the mass-orbital radius sensitivity diagram of $\pi$\,Men in Fig.~\ref{pi Men-m2r}. While the predicted mass-orbital radius domains are qualitatively in good agreement between the three catalogs, the PMa signal detected with the DR2 and EDR3 corresponds to a lower mass for planet b than the measured value (by 1 to $2\,\sigma$), while the Hipparcos PMa is slightly higher (by $1\,\sigma$).
These differences are due to the high eccentricity of the orbit of $\pi$\,Men\,b, $e_b = 0.642$ \citepads{2020A&A...642A..31D}. A periastron passage of planet b occurred in J1990.1, within the measurement window of Hipparcos. Recent periastron passages occurred in J2013.0 and J2018.7, bracketing the measurement windows of Gaia DR2 ($J2014.6-J2016.4$) and EDR3 ($J2014.6-J2017.4$). This means that the Gaia observations essentially cover the apastron of planet $\pi$\,Men\,b, and therefore yield a smaller tangential velocity anomaly for the star $\pi$\,Men.
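A quick phase computation (ours, taking $P \approx 5.7$\,a together with the J2013.0 periastron epoch quoted above) makes this explicit:
\begin{verbatim}
P, T_PERI = 5.7, 2013.0    # approximate period (a) and periastron epoch

def phase(t):
    # Orbital phase since periastron (0 = periastron, 0.5 = apastron)
    return ((t - T_PERI) / P) % 1.0

# Midpoints of the Gaia DR2 (J2014.6-J2016.4) and EDR3 (J2014.6-J2017.4)
# measurement windows
print(phase(2015.5), phase(2016.0))    # ~0.44 and ~0.53: near apastron
\end{verbatim}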
This effect illustrates the limitation of the PMa analysis technique, which assumes a circular orbit for the companion, and accounts for the uncertainty in the inclination in a statistical manner. We did not find any resolved CPM companion of $\pi$\,Men in the Gaia EDR3 catalog.
\begin{figure}
\includegraphics[width=\hsize]{Figures/piMen-m2r.pdf}
\caption{PMa sensitivity diagram of $\pi$\,Men for the Hipparcos (cyan), DR2 (blue), and EDR3 (green) proper motion measurements. The properties of its massive planet $\pi$\,Men b determined by \citetads{2020MNRAS.497.2096X} and \citetads{2020A&A...642A..31D} are represented with a red and an orange point, respectively.\label{pi Men-m2r}}
\end{figure}
\subsection{White dwarfs}
We confirm the significant PMa signal detected by \citetads{2019A&A...623A..72K} in two of the 17 white dwarfs of the Hipparcos catalog (Table~\ref{wd-PMa}; Fig.~\ref{Hip2-HR-WD}): \object{GJ 140} and \object{LAWD 37}. As with the DR2 analysis, \object{Wolf 28} shows an indication of binarity at a $2\,\sigma$ level. The other white dwarfs do not show significant PMa signals, excluding the presence of Jupiter mass companions orbiting within a few astronomical units.
Our PMa sample is limited to the Hipparcos stars, but the Gaia white dwarf sample is naturally much larger (e.g., \citeads{2021arXiv210607669G,2021A&A...649A...6G}). Within 100\,pc, \citetads{2021MNRAS.506.5201R} identified a sample of 112 nearby white dwarf-main sequence binaries based on multi-band photometry.
The determination of the radial velocity of white dwarfs is complicated by the strong gravitational broadening of their spectral lines. As a result, their space velocity vector is affected by a larger uncertainty than that of normal stars, and the PMa is more difficult to measure. Taking advantage of the perspective acceleration of nearby stars, the PMa may also be used to determine astrometrically the radial velocity of white dwarfs and other nearby stars \citepads{2021A&A...652A..45L, 1999A&A...348.1040D}.
\begin{figure}
\includegraphics[width=\hsize]{Figures/Hip2-HR-WD.pdf}
\caption{Hertzsprung-Russell diagram of the Hipparcos catalog white dwarfs showing the detected EDR3 PMa signals.\label{Hip2-HR-WD}}
\end{figure}
\section{Conclusion\label{conclusion}}
Up to a distance of 100\,pc, the combined use of the PMa and CPM techniques enabled us to detect companions down to substellar or even planetary mass using the Gaia EDR3 catalog.
The brightest stars in the sky heavily saturate the Gaia detectors, and the PMa technique is therefore not directly applicable to these targets. We identified, however, CPM companions based on their Hip2 PM and the EDR3 catalog of surrounding sources. We presented an updated version of the \citetads{2019A&A...623A..72K} catalog of PMa vectors for most of the Hip2 catalog stars, using the EDR3 positions and PM vectors.
We confirm the binary fraction obtained by \citetads{2021A&A...649A...6G}.
From a comparison with the results of our PMa survey, the Gaia RUWE appears to be a valuable additional indicator of the presence of companions located within $\approx 1\arcsec$. Combining the PMa, CPM, and $\mathrm{RUWE} > 1.4$ indicators of binarity for the Hipparcos catalog stars results in a fraction of 43\% of the targets presenting a significant signal of binarity.
%
We presented, as example applications of the PMa and CPM catalogs, analyses of bright star resolved companions, resolved binary stars with individual Gaia PMs, exoplanet host stars, and white dwarfs. We confirm the presence of a significant orbital motion anomaly in the nearby K dwarf binary 61\,Cyg~AB, which we attribute to a low-mass brown dwarf (or high-mass planet) orbiting one of the components. We also recover the perturbation induced by the massive planets orbiting $\epsilon$\,Eri, $\epsilon$\,Ind, and $\pi$\,Men on the PM of their parent stars.
The Gaia DR3 catalog will include solutions for unresolved binaries \citepads{2019MmSAI..90..318P} that will enable more refined determinations of the PMa vectors for the Hipparcos stars.
The remarkable complementarity of the PMa and CPM approaches opens up the possibility of testing the binarity of a large sample of objects in the solar neighborhood, down to orbital periods of $\approx 3$\,years with the PMa approach and up to separations of tens of thousands of astronomical units with the CPM approach.
The future availability (in Gaia DR4) of epoch astrometry will eventually lift the present time-smearing limitation and open up the possibility of directly searching for anomalies in the sky trajectory of all Gaia stars.
In synergy with the astrometry, the time series of Gaia photometric and spectroscopic measurements will expand the detection space toward companions with shorter orbital periods, through the transit and radial velocity techniques.
The expected extension of the duration of the Gaia mission up to 2025 will permit the detection of companions with longer orbital periods. As demonstrated in recent works (e.g., \citeads{2018NatAs...2..883S, 2019AJ....158..140B, 2019A&A...632L...9K, 2020A&A...635L..14K, 2021A&A...645A...7K, 2021arXiv210907525B}), the combination of Gaia astrometry with radial velocity and photometric transit measurements will result in highly accurate calibrations of the masses of a large number of planets and brown dwarfs.
Follow-up observations by narrow-angle astrometry using, for instance, adaptive optics \citepads{2010Natur.468.1080M, 2019A&A...621L...8L, 2020A&A...642A..18L}, GRAVITY interferometry \citepads{2019A&A...623L..11G, 2020A&A...633A.110G, 2020A&A...642L...2N, 2021A&A...652A..57K}, or ALMA imaging astrometry \citepads{2021AJ....162...14A, 2021ApJ...916L...2B} will further build on the Hipparcos and Gaia astrometric measurements, potentially detecting second-order astrometric perturbations.
The potential of an infrared astrometric space mission successor to Gaia for the detection and characterization of telluric mass planets is also outstanding \citepads{2019arXiv190712535H}, particularly for planets orbiting low-mass stars and brown dwarfs.
\begin{acknowledgements}
The authors warmly thank the referee, Dr Andrei A. Tokovinin, for valuable comments and suggestions that led to significant improvements of this paper.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-15-CE31-0012-01 (project UnlockCepheids).
The research leading to these results has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (project CepBin, grant agreement No 695099).
This research made use of Astropy\footnote{Available at \url{http://www.astropy.org/}}, a community-developed core Python package for Astronomy \citepads{2013A&A...558A..33A,2018AJ....156..123A}, of the Numpy library \citepads{Harris20} and of the Matplotlib graphics environment \citepads{Hunter:2007}.
This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory.
We used the SIMBAD and VizieR databases and catalogue access tool at the CDS, Strasbourg (France), and NASA's Astrophysics Data System Bibliographic Services.
The original description of the VizieR service was published in \citetads{2000A&AS..143...23O}.
The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope.
The UK Schmidt Telescope was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council, until 1988 June, and thereafter by the Anglo-Australian Observatory. Original plate material is copyright (c) of the Royal Observatory Edinburgh and the Anglo-Australian Observatory. The plates were processed into the present compressed digital form with the permission of these institutions.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
In recent years, a large number of characterizations of
complexity classes based on logics and lambda calculi
have appeared. At least three different principles have been
exploited, namely linear types~\cite{Bellantoni00apal,hofmann00safe},
restricted modalities in the
context of linear logic~\cite{Girard98ic,Asperti02tocl,Lafont04tcs}
and non-size-increasing computation~\cite{hofmann99lics}.
Although related one to the other, these systems have been
studied with different, often unrelated methodologies and
few results are known about their relative intensional expressive
power. We believe that this area of implicit computational
complexity needs unifying frameworks for the analysis of
quantitative properties of computation. This would help
to improve the understanding of existing systems. More
importantly, unifying frameworks can be used \emph{themselves}
as a foundation for controlling the use of resources inside
programming languages.\par
In this paper, we introduce a new semantical framework
which consists of an innovative modification of realizability.
The main idea underlying our proposal lies in considering
bounded-time algorithms as realizers instead of taking plain
Turing Machines as is usually the case
in realizability constructions. Bounds are expressed
abstractly as elements of a monoid. We can define a model
for a given (logical or type) system by choosing a
monoid flexible enough to justify all the constructs in the system.
The model can then be used to study the class of representable functions.\par
This allows us to give new proofs of soundness
(all representable functions on base types lies in certain
complexity classes) for Light Affine Logic ({\sf LAL}, \cite{Asperti02tocl}),
Elementary Affine Logic ({\sf EAL}, \cite{coppola01tlca}), {\sf LFPL}~\cite{hofmann99lics}
and Soft Affine Logic ({\sf SAL}, \cite{Baillot04fossacs}).
While being the first entirely semantical proof of polytime
soundness for light logics, our proof also provides a notable
simplification of the original (already semantical) proof of polytime
soundness for {\sf LFPL}~\cite{hofmann99lics}.
A new result made possible by the semantic framework
is the addition of polymorphism and a modality to {\sf LFPL}.\par
\condinc{
The rest of the paper is organized as follows. In
Section~\ref{sect:acm} we describe an abstract computational model
that will be used in the rest of the paper. In Section~\ref{sect:ls}
we introduce length spaces and show they can be used to interpret
multiplicative linear logic with free weakening.
Sections~\ref{sect:els}, \ref{sect:sls} and~\ref{sect:lls} are devoted
to present instances of the framework together with soundness results for
elementary, soft and light affine logics. Section~\ref{sect:lfpl}
presents a further specialization of length spaces and a new
soundness theorem for {\sf LFPL}\ based on it.\par}
{The rest of the paper is organized as follows. This
Section is devoted to a brief description of related work and
to preliminaries. In Section~\ref{sect:ls}
we introduce length spaces and show they can be used to interpret
multiplicative linear logic with free weakening.
Sections~\ref{sect:els} and~\ref{sect:oll} are devoted
to present instances of the framework together with soundness results for
elementary, soft and light affine logics. Section~\ref{sect:lfpl}
presents a further specialization of length spaces and a new
soundness theorem for {\sf LFPL}\ based on it.\par}
\condinc{}{An extended version of this paper is available~\cite{dallago05}.}
\paragraph{Related Work}
Realizability has been used in connection with resource-bounded
computation in several places. The most prominent is
Cook and Urquhart's work~\cite{Cook93apal}, where terms of a language called $\textit{PV}^\omega$ are
used to realize formulas of bounded arithmetic. The contribution of
that paper is related to ours in that realizability is used to show
``polytime soundness'' of a logic. There are important differences
though. First, realizers in Cook and Urquhart~\cite{Cook93apal}
are typed and very closely related to the logic that is being realized. Second, the
language of realizers $\textit{PV}^\omega$ only contains first order recursion
and is therefore useless for systems like {\sf LFPL}\ or {\sf LAL}. In contrast,
we use untyped realizers and interpret types as certain partial
equivalence relations on those. This links our work to the untyped
realizability model HEO (due to Kreisel~\cite{Kreisel59}). This, in turn,
has also been done by Crossley et al.~\cite{Crossley94jmlcs}. There, however, one proves externally
that untyped realizers (in this case of bounded arithmetic formulas)
are polytime. In our work, and for the first time, the
untyped realizers are used to give meaning to the logic, and
polytime soundness is obtained as a corollary. Thus, certain resource bounds
are built into the untyped realizers by their very construction.
Such a thing is not at all obvious, because untyped universes of
realizers tend to be Turing complete from the beginning due to the
definability of fixed-point combinators. We get around this problem
through our notion of a resource monoid and addition of a certain
time bound to Kleene applications of realizers. Indeed, we consider
this the main innovation of our paper and hope it will prove useful
elsewhere.
\condinc{
\section{A Computational Model}\label{sect:acm}
In this paper, we rely on an abstract computational framework rather
than a concrete one like Turing Machines. This, in particular, will simplify
proofs.\par
Let $L\subseteq\Sigma^*$ be the set of finite sequences over the
alphabet $\Sigma$. We assume a
pairing function $\langle\cdot,\cdot\rangle :
L\times L\rightarrow L$ and a length function
$|\cdot| : L\rightarrow \mathbb{N}$ such that $|\langle x,y\rangle| =
|x|+|y|+\mathit{cp}$ and $|x|\leq \textit{length}(x)$, where
$\textit{length}(x)$ is the number of symbols in $x$ and $\mathit{cp}$ is
a fixed constant. We assume a reasonable encoding of
algorithms as elements of $L$. We write
$\turmac{e}{x}$ for the (possibly undefined) application
of algorithm $e\in L$ to input $x\in L$. We furthermore assume
an abstract time measure $\timetm{e}{x}\in\mathbb{N}$ such that $\timetm{e}{x}$
is defined whenever $\turmac{e}{x}$ is and, moreover
\begin{varitemize}
\item $\turmac{e}{x}$ can be evaluated on a Turing machine in time
bounded by $p(\timetm{e}{x}+|e|+|x|)$, where $p:\mathbb{N}\rightarrow\mathbb{N}$ is
a fixed polynomial.
\item $B=\{0,1\}^*$ can be embedded into $L$ by a map $\Phi:B\rightarrow L$
such that both $\Phi$ and $\Phi^{-1}$ can be computed in
polynomial time.
\item For each Turing machine $M$ running in time $f:\mathbb{N}\rightarrow\mathbb{N}$,
there is $e\in L$ so that $\turmac{e}{\Phi(x)}=\Phi(y)$
(where $y$ is the result of running $M$ on input
$x$). Furthermore, $\timetm{e}{\Phi(x)}=O(f(|x|))$.
\item There are $e_0,e_1\in L$ such that for every $x\in B$,
$\turmac{e_0}{\Phi(x)}=\Phi(0x)$, $\turmac{e_1}{\Phi(x)}=\Phi(1x)$.
Moreover, $\timetm{e_0}{x}=\timetm{e_1}{x}=O(1)$.
\item There is $e_\mathit{comp}$ (composition) such that for every $x,y$
it holds that $\turmac{e_\mathit{comp}}{\langle x,y\rangle}=z$ where
$|z| = |x|+|y|+O(1)$ and $\turmac{z}{w}=\turmac{y}{\turmac{x}{w}}$;
moreover, $\timetm{e_\mathit{comp}}{\langle x,y\rangle}=O(1)$
and $\timetm{z}{w} = \timetm{x}{w}+\timetm{y}{\turmac{x}{w}}+O(1)$.
\item There is $e_\mathit{id}$ (identity) such that
$\turmac{e_\mathit{id}}{x}=x$ for every $x$ and
$\timetm{e_\mathit{id}}{x}= O(1)$.
\item For every $x\in L$ there is $e_\mathit{const}^x$ such
that $\turmac{e_\mathit{const}^x}{y}=x$ and $\timetm{e_\mathit{const}^x}{y}=O(1)$.
\item For every $x\in L$ there is $e_\mathit{tensconst}^x$ such
that $\turmac{e_\mathit{tensconst}^x}{y}=\langle y,x\rangle$ and
$\timetm{e_\mathit{tensconst}^x}{y}=O(1)$.
\item There is $e_\mathit{throwfirst}$ such that for every $x\in L$
$\turmac{e_\mathit{throwfirst}}{\langle x,y\rangle}=y$ and
$\timetm{e_\mathit{throwfirst}}{\langle x,y\rangle}=O(1)$.
\item There is $e_\mathit{swap}$ (swapping) such that
$\turmac{e_\mathit{swap}}{\langle x,y\rangle}=\langle
y,x\rangle$ and $\timetm{e_\mathit{swap}}{z} \leq O(1)$.
\item There is $e_\mathit{tens}$ (tensor) such that
for every $x$ $\turmac{e_\mathit{tens}}{x}=y$ where
$|y|=|x|+O(1)$ and $\turmac{y}{\langle
z,w\rangle}=\langle\turmac{x}{z},w\rangle$; moreover,
$\timetm{e_\mathit{tens}}{x}=O(1)$ and
$\timetm{y}{\langle z,w\rangle}=\timetm{x}{z}+O(1)$.
\item There is $e_{\mathit{assl}}$ (rebracketing) such that
$\turmac{e_{\mathit{assl}}}{\langle x,\langle
y,z\rangle\rangle}=\langle\langle x,y\rangle,z\rangle$ and
$\timetm{e_{\mathit{assl}}}{x}=O(1)$.
\item There is $e_\mathit{contr}$ (duplication, copying) such that
$\turmac{e_\mathit{contr}}{x}=\langle x,x\rangle$ and
$\timetm{e_{\mathit{contr}}}{x} = O(|x|)$.
\item There is $e_\mathit{eval}$ (application) such that
$\turmac{e_\mathit{eval}}{\langle x,y\rangle}=\turmac{x}{y}$ and
$\timetm{e_\mathit{eval}}{\langle
x,y\rangle}=\timetm{x}{y}+O(1)$.
\item There is $e_\mathit{curry}$ (currying, ``smn-theorem'')
such that, for each $x$, $y=\turmac{e_\mathit{curry}}{x}$
exists and satisfies $|y|=|x|+O(1)$ and $\timetm{e_\mathit{curry}}{x}=O(1)$; moreover,
for every $z$, $c_z=\turmac{y}{z}$ exists and satisfies $|c_z|=|y|+|z|+O(1)$ and
$\timetm{y}{z}=O(1)$; finally, for every $w$, $\turmac{c_z}{w}=\turmac{x}{\langle z,w\rangle}$
and $\timetm{c_z}{w}=\timetm{x}{\langle z,w\rangle}+O(1)$.
\end{varitemize}
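As a toy illustration (ours, not the lambda-calculus instantiation described below), these axioms can be pictured as an interface in which every algorithm carries an explicit size and every application returns an explicit abstract cost; the composition axiom then reads:
\begin{verbatim}
class Alg:
    def __init__(self, size, fn):
        self.size = size    # the abstract length |e|
        self.fn = fn        # maps x to a pair (result, abstract time)

def apply(e, x):
    return e.fn(x)          # application together with its cost

def compose(d, e):
    # e_comp: |d o e| = |d| + |e| + O(1), and costs add up plus O(1)
    def fn(x):
        y, t1 = apply(e, x)
        z, t2 = apply(d, y)
        return z, t1 + t2 + 1
    return Alg(d.size + e.size + 1, fn)
\end{verbatim}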
There are a number of ways to instantiate this framework. One noticeable
and simple way consists in using call-by-value lambda calculus and is
described in the following. $\Sigma$ will be $\{\lambda,@,0,1,\!\blacktriangleright\!\}$.
To any lambda term $M\in\Lambda$, we can associate a string $M^\#\in\Sigma^*$
in the obvious way. For example, if $M\equiv (\lambda x.xy)(\lambda x.\lambda y.\lambda z.x)$,
then $M^\#$ is
$$
@\lambda @\!\blacktriangleright\! 0\!\blacktriangleright\!\lambda\lambda\lambda\!\blacktriangleright\! 1 0
$$
In other words, free occurrences of variables are translated into $\!\blacktriangleright\!$, while
bound occurrences of variables are translated into $\!\blacktriangleright\! s$, where $s$ is the
binary representation of the de Bruijn index of the occurrence. $L$ will
just be the set of strings in $\Sigma^*$ corresponding to
lambda terms via the mapping we just described. In the following,
we will often write a lambda-term in the usual notation, but this is
just syntactic sugar for the corresponding element of $L$. The abstract
length $|s|$ of $s\in L$ is just $\mathit{length}(s)$.
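For concreteness, the following Python sketch (ours) implements this encoding, writing \texttt{L} for $\lambda$ and \texttt{>} for $\!\blacktriangleright\!$; on the example above it prints \texttt{@L@>0>LLL>10}:
\begin{verbatim}
def encode(term, env=()):
    # Terms as nested tuples:
    #   ('var', x) | ('lam', x, body) | ('app', fun, arg)
    tag = term[0]
    if tag == 'var':
        if term[1] in env:                     # bound occurrence
            return '>' + format(env.index(term[1]), 'b')
        return '>'                             # free occurrence
    if tag == 'lam':
        return 'L' + encode(term[2], (term[1],) + env)
    return '@' + encode(term[1], env) + encode(term[2], env)

M = ('app',
     ('lam', 'x', ('app', ('var', 'x'), ('var', 'y'))),
     ('lam', 'x', ('lam', 'y', ('lam', 'z', ('var', 'x')))))
print(encode(M))                               # @L@>0>LLL>10
\end{verbatim}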
The map $\Phi:B\rightarrow L$ is defined by induction as follows:
\begin{eqnarray*}
\Phi(\varepsilon)&=&\lambda x.\lambda y.\lambda z.z\\
\Phi(0s)&=&\lambda x.\lambda y.\lambda z.x\Phi(s)\\
\Phi(1s)&=&\lambda x.\lambda y.\lambda z.y\Phi(s)
\end{eqnarray*}
Given $M,N\in\Lambda$, consider the following definitions:
\begin{eqnarray*}
\langle M,N\rangle &\equiv& \lambda x.xMN\\
M_0&\equiv&\lambda x.\lambda y.\lambda z.\lambda w.yx\\
M_1&\equiv&\lambda x.\lambda y.\lambda z.\lambda w.zx\\
M_\mathit{comp}&\equiv&\lambda x.\lambda y.\lambda z.x(yz)\\
M_\mathit{id}&\equiv&\lambda x.x\\
M_\mathit{const}^N&\equiv&\lambda x.N\\
M_\mathit{tensconst}^N&\equiv&\lambda x.\lambda y.yxN\\
M_\mathit{throwfirst}&\equiv&\lambda x.x(\lambda y.\lambda z.z)\\
M_\mathit{swap}&\equiv&\lambda x.x(\lambda y.\lambda w.\lambda z.zwy)\\
M_\mathit{tens}&\equiv& \lambda x.\lambda y.y(\lambda z.\lambda q.(\lambda y.\lambda w.wyq)(xz))\\
M_\mathit{assl}&\equiv&\lambda x.x(\lambda y.\lambda w.w(\lambda z.\lambda q.\lambda r.r(\lambda s.syz)q))\\
M_\mathit{contr}&\equiv&\lambda x.\lambda y.yxx\\
M_\mathit{eval}&\equiv&\lambda x.x(\lambda y.\lambda w.yw)\\
M_\mathit{curry}&\equiv&\lambda x.\lambda y.\lambda w.x(\lambda z.zyw)
\end{eqnarray*}
\emph{Values} are abstractions and variables.
We consider call-by-value reduction on lambda terms, i.e. we take $\rightarrow$
as the closure of
$$
(\lambda x.M)V\rightarrow M\{x/V\}
$$
under all applicative contexts.
The application $\turmac{M}{N}$ of two lambda terms is the normal
form of $MN$ relative to the call-by-value reduction (if one exists).
We now define a (ternary) relation
$\twoheadrightarrow\;\subseteq\Lambda\times\mathbb{N}\times\Lambda$.
In the following, we will write $M\timearrow{n}N$ standing
for $(M,n,N)\in\twoheadrightarrow$. The precise definition
of $\twoheadrightarrow$ (in SOS-style) follows:
$$
\begin{array}{ccccc}
\infer{M\timearrow{0}M}{} &\hspace{1cm}&
\infer{M\timearrow{n}N}{M\rightarrow N & n=\max\{1,|N|-|M|\}} &\hspace{1cm}&
\infer{M\timearrow{n+m}L}{M\timearrow{n}N & N\timearrow{m}L}
\end{array}
$$
It turns out that for every $M,N$ such that $L$ is the normal form of $MN$,
there is exactly one integer $n$ such that $MN\timearrow{n}L$. So, defining
$\timetm{M}{N}$ to be just $n$ is unambiguous. All the axioms listed at the
beginning of this section can be proved to be satisfied by this calculus.
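As a small illustration (ours), the abstract time of a reduction sequence can be tallied directly from the sizes of the successive terms:
\begin{verbatim}
def step_cost(size_before, size_after):
    # Cost of a single beta-step: max(1, |N| - |M|)
    return max(1, size_after - size_before)

sizes = [12, 9, 15, 7]    # hypothetical sizes along a reduction
print(sum(step_cost(a, b) for a, b in zip(sizes, sizes[1:])))  # 1+6+1 = 8
\end{verbatim}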
}{
\paragraph{Preliminaries}
In this paper, we rely on an abstract computational framework rather
than a concrete one like Turing Machines. This, in particular, will simplify
proofs.\par
Let $L\subseteq\Sigma^*$ be the set of finite sequences over the
alphabet $\Sigma$. We assume a
pairing function $\langle\cdot,\cdot\rangle :
L\times L\rightarrow L$ and a length function
$|\cdot| : L\rightarrow \mathbb{N}$ such that $|\langle x,y\rangle| =
|x|+|y|+\mathit{cp}$ and $|x|\leq \textit{length}(x)$, where
$\textit{length}(x)$ is the number of symbols in $x$ and $\mathit{cp}$ is
a fixed constant. We assume a reasonable encoding of
algorithms as elements of $L$. We write
$\turmac{e}{x}$ for the (possibly undefined) application
of algorithm $e\in L$ to input $x\in L$. We furthermore assume
an abstract time measure $\timetm{e}{x}\in\mathbb{N}$ such that $\timetm{e}{x}$
is defined whenever $\turmac{e}{x}$ is and, moreover, there exists a fixed
polynomial $p$ such that $\turmac{e}{x}$ can be evaluated on a Turing machine in time
bounded by $p(\timetm{e}{x}+|e|+|x|)$ (this is related to the so-called
invariance thesis~\cite{Boas90}). By ``reasonable'', we mean for
example that for any $e,d\in L$ there exists $d\circ e\in L$ such that
$|d\circ e| = |d|+|e|+O(1)$ and $\turmac{d\circ e}{x}=\turmac{d}{y}$ where
$y=\turmac{e}{x}$ and moreover $\timetm{d\circ e}{x} = \timetm{e}{x}+
\timetm{d}{y}+O(1)$. We furthermore assume that the abstract time needed to compute $d\circ e$
from $\langle d,e\rangle$ is constant. Likewise, we assume that
``currying'' and rewiring operations such as $\langle x,\langle
y,z\rangle\rangle\mapsto \langle\langle x,y\rangle,z\rangle$
can be done in constant time. However, we do allow linear
(in $|x|$) abstract time for copying operations such as
$x\mapsto \langle x,x\rangle$.\par
There are a number of ways to instantiate this framework. In the
appendix, the precise form of the assumptions we make as well as
one instance based on call-by-value lambda-calculus are briefly
described.}
\section{Length Spaces}\label{sect:ls}
In this section, we introduce the category of length spaces and study
its properties. Lengths will not necessarily be numbers but rather
elements of a commutative monoid.\par
A \emph{resource monoid} is a quadruple $M=(|M|,+,\leq_M,\mathcal{D}_M)$ where
\begin{numlist}
\item
$(|M|,+)$ is a commutative monoid;
\item
$\leq_M$ is a pre-order on $|M|$ which is compatible with $+$;
\item
$\mathcal{D}_M:\{(\alpha,\beta)\;|\;\alpha\leq_M\beta\}\rightarrow\mathbb{N}$ is a function such
that for every $\alpha,\beta,\gamma$
\begin{eqnarray*}
\distpar{M}{\alpha}{\beta}+\distpar{M}{\beta}{\gamma}&\leq&\distpar{M}{\alpha}{\gamma}\\
\distpar{M}{\alpha}{\beta}&\leq&\distpar{M}{\alpha+\gamma}{\beta+\gamma}
\end{eqnarray*}
and, moreover, for every $n\in\mathbb{N}$ there is $\alpha$ such
that $\distpar{M}{0}{\alpha}\geq n$.
\end{numlist}
Given a resource monoid $M=(|M|,+,\leq_M,\mathcal{D}_M)$, the function
$\norm{M}:|M|\rightarrow\mathbb{N}$ is defined by putting
$\normpar{M}{\alpha}=\distpar{M}{0}{\alpha}$. We abbreviate
$\sigma+\dots+\sigma$ ($n$ times) as $n.\sigma$.\par
Let us try to give some intuition about these axioms. We shall use
elements of a resource monoid to bound data, algorithms, and runtimes
in the following way: an element $\varphi$ bounds an algorithm $e$ if
$\normpar{M}{\varphi}\geq |e|$ and, more importantly, whenever $\alpha$
bounds an input $x$ to $e$ then there must be a bound
$\beta\leq_M\varphi+\alpha$ for the result $y=\turmac{e}{x}$ and, most
importantly, the runtime of that computation must be bounded by
$\distpar{M}{\beta}{\varphi+\alpha}$. So, in a sense, we have the option
of either producing a large output fast or taking a long time to produce a
small output. The ``inverse triangular'' law above ensures that the
composition of two algorithms bounded by $\varphi_1$ and $\varphi_2$,
respectively, can be bounded by $\varphi_1+\varphi_2$ or a simple
modification thereof. In particular, the contribution of the
unknown intermediate result in a composition cancels out using
that law. Another useful intuition is that
$\distpar{M}{\alpha}{\beta}$ behaves like the difference
$\beta-\alpha$, indeed, $(\beta-\alpha)+(\gamma-\beta)\leq
\gamma-\alpha$.\par
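A minimal example (ours) may help fix intuitions: take $|M|=\mathbb{N}$ with addition, the usual order, and $\distpar{M}{m}{n}=n-m$. The inverse triangular law then holds with equality, since $(n-m)+(p-n)=p-m$; compatibility holds since $(n+p)-(m+p)=n-m$; and $\normpar{M}{n}=n$ is unbounded, as required. The resource monoids introduced in the following sections refine this basic picture so that $\mathcal{D}_M$ can account for elementary or polynomial time bounds.\par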
\condinc{
\begin{lemma}
If $M$ is a resource monoid, then $\mathcal{D}_M$ is antitone on its first argument and
monotone on its second argument.
\end{lemma}
\begin{proof}
If $\alpha\leq_M\beta$, then
\begin{eqnarray*}
\distpar{M}{\alpha}{\gamma}&\geq&\distpar{M}{\alpha}{\beta}+\distpar{M}{\beta}{\gamma}
\geq\distpar{M}{\beta}{\gamma};\\
\distpar{M}{\gamma}{\alpha}&\leq&\distpar{M}{\gamma}{\alpha}+\distpar{M}{\alpha}{\beta}
\leq\distpar{M}{\gamma}{\beta}.
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}}{}
A \emph{length space} on a resource monoid $M=(|M|,+,\leq_M,\mathcal{D}_M)$
is a pair $A=(|A|,\vdashp{A})$, where $|A|$ is a set
and $\vdashp{A}\;\subseteq|M|\times L\times |A|$
is a (infix) relation satisfying the following conditions:
\begin{numlist}
\item
If $\maj{A}{\alpha}{e}{a}$, then $\normpar{M}{\alpha}\geq |e|$;
\item
For every $a\in|A|$, there are $\alpha,e$ such that $\maj{A}{\alpha}{e}{a}$;
\item
If $\maj{A}{\alpha}{e}{a}$ and $\alpha\leq_M\beta$, then
$\maj{A}{\beta}{e}{a}$;
\item
If $\maj{A}{\alpha}{e}{a}$ and $\maj{A}{\alpha}{e}{b}$, then $a=b$.
\end{numlist}
The last requirement implies that each element of $|A|$ is uniquely
determined by the (nonempty) set of its realizers and in particular
limits the cardinality of any length space to the number of partial
equivalence relations on $L$.\par
A \emph{morphism} from length space $A=(|A|,\vdashp{A})$ to length space
$B=(|B|,\vdashp{B})$ (on the same resource monoid $M=(|M|,+,\leq_M,\mathcal{D}_M)$)
is a function $f:|A|\rightarrow |B|$ such that there exist
$e\in L$, $\varphi\in |M|$ with $\normpar{M}{\varphi}\geq |e|$ and
whenever $\maj{A}{\alpha}{d}{a}$, there must be $\beta,c$ such that
\begin{numlist}
\item
$\maj{B}{\beta}{c}{f(a)}$;
\item
$\beta\leq_M\varphi+\alpha$;
\item
$\turmac{e}{d}=c$;
\item
$\timetm{e}{d}\leq\distpar{M}{\beta}{\varphi+\alpha}$.
\end{numlist}
We call $e$ a realizer of $f$ and $\varphi$ a majorizer of $f$.
The set of all morphisms from $A$ to $B$ is denoted as $\Hom{A}{B}$.
If $f$ is a morphism from $A$ to $B$ realized by $e$ and majorized by
$\varphi$, then we will write
$f:\morphism{A}{e}{\varphi}{B}$ or $\maj{A\multimap B}{\varphi}{e}{f}$.
\begin{remark}\label{remark:oldstuff}
It is possible to alter the time bound in the definition
of a morphism to
$\timetm{e}{d}\leq\distpar{M}{\beta}{\varphi+\alpha}\normpar{M}{\alpha+\varphi}$.
This allows one to accommodate linear time operations by padding
the majorizer for the morphism. All the subsequent proofs
go through with this alternative definition, at the expense of
simplicity and ease of presentation.
\end{remark}\par
Given two length spaces $A=(|A|,\vdashp{A})$ and $B=(|B|,\vdashp{B})$ on the
same resource monoid $M$, we can build
$A\otimes B=(|A|\times |B|,\vdashp{A\otimes B})$ (on $M$)
where $e,\alpha\vdashp{A\otimes B}(a,b)$ iff $\normpar{M}{\alpha}\geq |e|$ and
there are $f,g,\beta,\gamma$ with
$$
\begin{array}{c}
f,\beta\vdashp{A} a\\
g,\gamma\vdashp{B} b\\
e=\langle f,g\rangle\\
\alpha\geq_M\beta+\gamma
\end{array}
$$
$A\otimes B$ is a well-defined length space due to the axioms on $M$.\par
Given $A$ and $B$ as above, we can build $A\multimap B=(\Hom{A}{B},\vdashp{A\multimap B})$
where $e,\alpha\vdashp{A\multimap B}f$ iff $f$ is a morphism from $A$
to $B$ realized by $e$ and majorized by $\alpha$.\par
\condinc{
Morphisms can be composed:
\begin{lemma}[Composition]
Given length spaces $A,B,C$, there is a morphism
$$
\mathit{comp}:(B\multimap C)\otimes(A\multimap B)\rightarrow(A\multimap C)
$$
such that $\mathit{comp}(f,g)=\lambda x.f(g(x))$.
\end{lemma}
\begin{proof}
Let $g:\morphism{A}{x}{\varphi}{B}$ and $f:\morphism{B}{y}{\psi}{C}$.
We know there are constants $p,q,r$ such
that $\turmac{e_\mathit{comp}}{\langle x,y\rangle}=z$ where
$|z|\leq |x|+|y|+p$ and $\turmac{z}{w}=\turmac{y}{\turmac{x}{w}}$;
moreover, $\timetm{e_\mathit{comp}}{\langle x,y\rangle}\leq r$
and $\timetm{z}{w}=\timetm{x}{w}+\timetm{y}{\turmac{x}{w}}+q$. Now,
let us now choose $\mu$ such that $\normpar{M}{\mu}\geq p+q$,
We will prove that $comp(f,g):\morphism{A}{z}{\varphi+\psi+\mu}{C}$.
Obviously, $\normpar{M}{\varphi+\psi+\mu}\geq |z|$. If $\maj{A}{\alpha}{w}{a}$,
then there must be $\beta,t$ such that $\maj{B}{\beta}{t}{g(a)}$
and the other conditions prescribed by the definition of a morphism
hold. Moreover, there must be $\gamma,s$ such
that $\maj{C}{\gamma}{s}{f(g(a))}$ and, again, the other conditions
are satisfied. Putting them together, we get:
$$
\gamma\leq_M\beta+\psi\leq_M\alpha+\varphi+\psi\leq_M\alpha+\varphi+\psi+\mu
$$
and
\begin{eqnarray*}
\timetm{z}{w}&\leq&\timetm{x}{w}+\timetm{y}{t}+q\\
&\leq&\distpar{M}{\beta}{\alpha+\varphi}+
\distpar{M}{\gamma}{\beta+\psi}+\normpar{M}{\mu}\\
&\leq&\distpar{M}{\beta+\psi}{\alpha+\varphi+\psi}+
\distpar{M}{\gamma}{\beta+\psi}+\distpar{M}{0}{\mu}\\
&\leq&\distpar{M}{\gamma}{\alpha+\varphi+\psi+\mu}
\end{eqnarray*}
This concludes the proof, since
$\mathit{comp}:\morphism{(B\multimap C)\otimes(A\multimap B)}{e_\mathit{comp}\circ e_\mathit{swap}}{\xi}{A\multimap C}$,
where $e_\mathit{comp}\circ e_\mathit{swap}$ first swaps the input pair and $\xi$ is such that $\normpar{M}{\xi}\geq r+|e_\mathit{comp}\circ e_\mathit{swap}|$.\hfill $\Box$
\end{proof}
Basic morphisms can be built independently on the underlying resource monoid. Noticeably,
they correspond to axiom of multiplicative linear logic:
\begin{lemma}[Basic Maps]
Given length spaces $A,B,C$, there are morphisms:
\begin{eqnarray*}
\mathit{id}&:&A\rightarrow A\\
\mathit{swap}&:&A\otimes B\rightarrow B\otimes A\\
\mathit{assl}&:&A\otimes(B\otimes C)\rightarrow (A\otimes B)\otimes C\\
\mathit{eval}&:&A\otimes(A\multimap B)\rightarrow B\\
\mathit{curry}&:&((A\otimes B)\multimap C)\rightarrow A\multimap (B\multimap C)
\end{eqnarray*}
where
\begin{eqnarray*}
\mathit{id}(a)&=&a\\
\mathit{swap}(a,b)&=&(b,a)\\
\mathit{assl}(a,(b,c))&=&((a,b),c)\\
\mathit{eval}(a,f)&=&f(a)\\
\mathit{curry}(f)&=&\lambda a.\lambda b.f(a,b)
\end{eqnarray*}
\end{lemma}
\begin{proof}
We know that $\{e_\mathit{id}\}(d)$ takes constant time,
say at most $p$. Then, let $\varphi_\mathit{id}\in M$ be
such that $\normpar{M}{\varphi_\mathit{id}}\geq p+|e_\mathit{id}|$
(this can always be done). Now, let $\maj{A}{\alpha}{d}{a}$. We have
that $\maj{A}{\alpha}{d}{\mathit{id}(a)}$, $\alpha\leq_M\alpha+\varphi_\mathit{id}$,
$\{e_\mathit{id}\}(d)=d$. Moreover
\begin{eqnarray*}
\timetm{e_\mathit{id}}{d}&\leq&p\leq\normpar{M}{\varphi_\mathit{id}}=\distpar{M}{0}{\varphi_\mathit{id}}\\
&\leq& \distpar{M}{\alpha}{\alpha+\varphi_\mathit{id}}
\end{eqnarray*}
This proves $\mathit{id}$ to be a morphism.\par
We know that $\{e_\mathit{swap}\}(\langle d,c\rangle)$
takes constant time, say at most $p$. Then,
let $\varphi_\mathit{swap}\in |M|$ be
such that $\normpar{M}{\varphi_\mathit{swap}}\geq p+|e_\mathit{swap}|$.
Now, let $\maj{A\otimes B}{\alpha}{e}{(a,b)}$. This implies that
$e=\langle d,c\rangle$ and $\maj{B\otimes A}{\alpha}{\langle c,d\rangle}{(b,a)}$.
We can then apply the same argument as for $\mathit{id}$. In particular:
\begin{eqnarray*}
\timetm{e_\mathit{swap}}{e}&\leq&p\leq\normpar{M}{\varphi_\mathit{swap}}=
\distpar{M}{0}{\varphi_\mathit{swap}}\\
&\leq& \distpar{M}{\alpha}{\alpha+\varphi_\mathit{swap}}
\end{eqnarray*}
This proves $\mathit{swap}$ to be a morphism. We can verify $\mathit{assl}$ to be a morphism exactly in the same way.\par
We know that
$\{e_\mathit{eval}\}(\langle d,c\rangle)=\{d\}(c)$ and
$\{e_\mathit{eval}\}(\langle d,c\rangle)$ takes constant overhead time, say at most
$p$. $\varphi_\mathit{eval}$ is chosen so as to satisfy
$\normpar{M}{\varphi_{\mathit{eval}}}\geq p$.
Let now $\maj{A\otimes (A\multimap B)}{\alpha}{e}{(a,f)}$. This means that
$e=\langle d,c\rangle$ and there are $\beta$ and $\gamma$ such that
$$
\begin{array}{c}
\maj{A}{\beta}{d}{a}\\
\maj{A\multimap B}{\gamma}{c}{f}\\
\alpha\geq_M\beta+\gamma\\
\normpar{M}{\alpha}\geq\normpar{M}{\beta}+\normpar{M}{\gamma}+\mathit{cp}
\end{array}
$$
From $\maj{A\multimap B}{\gamma}{c}{f}$ it follows
that, by the definition of a morphism,
there must be $\delta,h$ such that
\begin{numlist}
\item
$\maj{B}{\delta}{h}{f(a)}$
\item
$\delta\leq_M\beta+\gamma$
\item
$\{c\}(d)=h$
\item
$\timetm{c}{d}\leq\distpar{M}{\delta}{\beta+\gamma}$
\end{numlist}
From $\delta\leq_M\beta+\gamma$ and $\beta+\gamma\leq_M\alpha$, it
follows that $\delta\leq_M\alpha\leq_M\alpha+\varphi_\mathit{eval}$.
Moreover:
\begin{eqnarray*}
\timetm{e_\mathit{eval}}{\langle d,c\rangle}&\leq&p+\timetm{c}{d}
\leq\normpar{M}{\varphi_\mathit{eval}}+\distpar{M}{\delta}{\beta+\gamma} \\
&\leq&\normpar{M}{\varphi_\mathit{eval}}+\distpar{M}{\delta}{\beta+\gamma}
+\distpar{M}{\beta+\gamma}{\alpha} \\
&\leq&\distpar{M}{0}{\varphi_\mathit{eval}}+\distpar{M}{\delta}{\alpha} \\
&\leq&\distpar{M}{\delta}{\alpha+\varphi_\mathit{eval}}
\end{eqnarray*}
Now, let us prove that $\mathit{curry}$ is a morphism.
First of all, we know there must be constants $p,q,r,s,t$ such
that, for each $e,x,y$, there are $d$ and $c_x$ with
\begin{eqnarray*}
\timetm{e_\mathit{curry}}{e}&\leq&p\\
d&=&\turmac{e_\mathit{curry}}{e}\\
|d|&\leq&|e|+q\\
\timetm{d}{x}&\leq&r\\
c_x&=&\turmac{d}{x}\\
|c_x|&\leq&|e|+|x|+s\\
\timetm{c_x}{y}&\leq&\timetm{e}{\langle x,y\rangle}+t\\
\turmac{e}{\langle x,y\rangle}&=&\turmac{c_x}{y}
\end{eqnarray*}
Let $\xi,\mu,\sigma,\theta,\eta,\chi\in |M|$ be such that
\begin{eqnarray*}
\normpar{M}{\xi}&\geq& p\\
\normpar{M}{\mu}&\geq& q\\
\normpar{M}{\sigma}&\geq& r\\
\normpar{M}{\theta}&\geq& s\\
\normpar{M}{\eta}&\geq& t\\
\normpar{M}{\chi}&\geq& \mathit{cp}
\end{eqnarray*}
Let now $\maj{A\otimes B\multimap C}{\gamma}{e}{f}$.
We know that $|d|\leq|e|+q$ and
$\timetm{e_\mathit{curry}}{e}\leq p$. In order
to prove that $\mathit{curry}$ is indeed a morphism
realized by $e_\mathit{curry}$ and majorized by
$\mu+\xi+\sigma+\theta+\chi+\eta$, it then suffices
to prove that
$$
\maj{A\multimap B\multimap C}{\gamma+\mu+\sigma+\theta+\chi+\eta}{d}{\lambda a.\lambda b.f(a,b)}.
$$
Let then $\maj{A}{\alpha}{x}{a}$. There is
$c_x$ such that $c_x=\turmac{d}{x}$,
$|c_x|\leq|e|+|x|+s$ and $\timetm{d}{x}\leq r$. In
order to prove that $\lambda a.\lambda b.f(a,b)$ is
indeed a morphism realized by $d$ and majorized
by $\gamma+\mu+\sigma+\theta+\chi+\eta$, it then suffices to prove that
$\maj{B\multimap C}{\gamma+\alpha+\mu+\theta+\chi+\eta}{c_x}{\lambda b.f(a,b)}$.
Let then $\maj{B}{\beta}{y}{b}$. There are $\delta,c$ such that
$\maj{C}{\delta}{c}{f(a,b)}$, where
$\delta\leq_M\alpha+\beta+\chi+\gamma$. Moreover,
we know that
\begin{eqnarray*}
\timetm{c_x}{y}&\leq&\timetm{e}{\langle x,y\rangle}+t\leq\distpar{M}{\delta}{\alpha+\beta+\chi+\gamma}+t\\
&\leq&\distpar{M}{\delta}{\alpha+\beta+\gamma+\chi}+\distpar{M}{0}{\eta+\mu+\theta}\\
&\leq&\distpar{M}{\delta}{\alpha+\beta+\gamma+\chi+\eta+\mu+\theta}
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}
Length spaces can justify the usual rule for tensor as a map-former:
\begin{lemma}[Tensor]
Given length spaces $A,B,C$, there is a morphism
$$
\mathit{tens}:(A\multimap B)\rightarrow((A\otimes C)\multimap(B\otimes C))
$$
where $\mathit{tens}(f)=\lambda x.(f(\pi_1(x)),\pi_2(x))$.
\end{lemma}
\begin{proof}
Let $f:\morphism{A}{x}{\varphi}{B}$. We know there are constants
$p,q,r$ such that
$\turmac{e_\mathit{tens}}{x}=y$ where
$|y|\leq|x|+p$ and $\turmac{y}{\langle
z,w\rangle}=\langle\turmac{x}{z},w\rangle$; moreover,
$\timetm{e_\mathit{tens}}{x}\leq q$ and
$\timetm{y}{\langle z,w\rangle}\leq\timetm{x}{z}+r$.
Then, take $\psi\in |M|$ such that
$\normpar{M}{\psi}\geq p+r$, put
$\sigma=\psi+\varphi+\mu$, where $\normpar{M}{\mu}\geq\mathit{cp}$. Suppose
$\maj{A\otimes C}{\alpha}{\langle z,w\rangle}{(a,c)}$.
By definition, there are $\beta,\gamma$ such that
$$
\begin{array}{c}
\maj{A}{\beta}{z}{a}\\
\maj{C}{\gamma}{w}{c}\\
\alpha\geq_M\beta+\gamma
\end{array}
$$
By hypothesis, there are $\delta,t$ such that
$$
\begin{array}{c}
\maj{B}{\delta}{t}{f(a)}\\
\delta\leq_M\varphi+\beta\\
\turmac{e}{z}=t\\
\timetm{e}{z}\leq \distpar{M}{\delta}{\varphi+\beta}
\end{array}
$$
Then,
$\maj{B\otimes C}{\gamma+\delta+\mu}{\langle t,w\rangle}{(f(a),c)}$.
Moreover,
$$
\gamma+\delta+\mu\leq_M \gamma+\varphi+\beta+\mu\leq_M \alpha+\varphi+\mu\leq_M \alpha+\sigma
$$
Finally:
\begin{eqnarray*}
\timetm{y}{\langle z,w\rangle}&\leq&\timetm{x}{z}+r\\
&\leq& \distpar{M}{\delta}{\varphi+\beta}+\normpar{M}{\psi}\\
&\leq& \distpar{M}{\delta}{\varphi+\beta+\psi}\\
&\leq& \distpar{M}{\gamma+\delta+\mu}{\gamma+\varphi+\beta+\mu+\psi}\\
&=& \distpar{M}{\gamma+\delta+\mu}{\gamma+\beta+\sigma}\\
&\leq& \distpar{M}{\gamma+\delta+\mu}{\alpha+\sigma}
\end{eqnarray*}
This concludes the proof, since $\mathit{tens}:\morphism{(A\multimap B)}{e_\mathit{tens}}{\xi}{(A\otimes C)\multimap(B\otimes C)}$
where $\xi$ is such that $\normpar{M}{\xi}\geq q+|e_\mathit{tens}|$.\hfill $\Box$
\end{proof}
Thus:}{}
\begin{lemma}
Length spaces and their morphisms form a symmetric monoidal closed
category with tensor and linear implication given as above.
\end{lemma}
A length space $I$ is defined by $|I|=\{0\}$ and
$\maj{I}{\alpha}{e}{0}$ when $\normpar{M}{\alpha}\geq |e|$. For each
length space $A$ there are isomorphisms $A\otimes I\simeq A$ and a
unique morphism $A\rightarrow I$. The latter serves to justify full
weakening.\par
\condinc{
For every resource monoid $M$, there is a length space
$B_M=(\{0,1\}^*,\vdashp{B_M})$ where
$\maj{B_M}{\alpha}{\Phi(t)}{t}$ whenever $\normpar{M}{\alpha}\geq |t|$.
The function $s_0$ (respectively, $s_1$) from $\{0,1\}^*$ to itself
which appends $0$ (respectively, $1$) to the left
of its argument can be computed in constant time on the abstract
computational model and, as a consequence, is a morphism from $B_M$
to itself.}
{For every resource monoid $M$, there is a length space
$B_M=(\{0,1\}^*,\vdashp{B_M})$ where
$\maj{B_M}{\alpha}{\overline{t}}{t}$ whenever $\overline{t}$ is
a realizer for $t$ and $\normpar{M}{\alpha}\geq |\overline{t}|$.
The function $s_0$ (respectively, $s_1$) from $\{0,1\}^*$ to itself
which appends $0$ (respectively, $1$) to the left
of its argument can be computed in constant time in our
computational model and, as a consequence, is a morphism from $B_M$
to itself.}
\subsection{Interpreting Multiplicative Affine Logic}\label{bloed}
We can now formally show that second order multiplicative
affine logic (i.e. multiplicative linear logic plus
full weakening) can be interpreted inside the category of
length spaces on any monoid $M$. Doing this will simplify
the analysis of richer systems presented in following sections.
Formulae of (intuitionistic) multiplicative affine logic
are generated by the following productions:
$$
A::=\alpha\;|\;A\multimap A\;|\;A\otimes A\;|\;\forall\alpha.A
$$
where $\alpha$ ranges over a countable set of atoms.
Rules are reported in figure~\ref{figure:MAL}.
\begin{figure*}
\begin{center}
\fbox{
\begin{minipage}{.98\textwidth}
{\bf Identity, Cut and Weakening}.
$$
\begin{array}{lcr}
\infer[I]{A\vdash A}{} & \;\;
\infer[U]{\Gamma,\Delta\vdash B}{\Gamma\vdash A & \Delta,A\vdash B} &\;\;
\infer[W]{\Gamma,B\vdash A}{\Gamma\vdash A}
\end{array}
$$
{\bf Multiplicative Logical Rules}.
$$
\begin{array}{ccccccc}
\infer[L_\otimes]{\Gamma,A\otimes
B\vdash C}{\Gamma,A,B\vdash C} &\,&
\infer[R_\otimes]{\Gamma,\Delta\vdash A\otimes B}{\Gamma\vdash A & \Delta\vdash B} &\,&
\infer[L_\multimap]{\Gamma,\Delta,A\multimap
B\vdash C}{\Gamma\vdash A & \Delta,B\vdash C} &\,&
\infer[R_\multimap]{\Gamma\vdash A\multimap B}{\Gamma,A\vdash B}
\end{array}
$$
{\bf Second Order Logical Rules}.
$$
\begin{array}{lcr}
\infer[L^\forall]{\Gamma,\forall\alpha.A\vdash B}{\Gamma,A[C/\alpha]\vdash B} & \;\; &
\infer[R^\forall]{\Gamma\vdash \forall\alpha.A}{\Gamma\vdash A & \alpha\notin\mathit{FV}(\Gamma)}
\end{array}
$$
\end{minipage}}
\caption{Intuitionistic Multiplicative Affine Logic}\label{figure:MAL}
\end{center}
\end{figure*}
A \emph{realizability environment} is a partial function assigning length spaces (on the
same resource monoid) to atoms.
Realizability semantics $\reasem{A}{\eta}$
of a formula $A$ on the realizability environment $\eta$ is defined by induction
on $A$:
\begin{eqnarray*}
\reasem{\alpha}{\eta} &=&\eta(\alpha)\\
\reasem{A\otimes B}{\eta} &=&
\reasem{A}{\eta}\otimes\reasem{B}{\eta}\\
\reasem{A\multimap B}{\eta} &=&
\reasem{A}{\eta}\multimap\reasem{B}{\eta}\\
\reasem{\forall\alpha.A}{\eta} &=& (|\reasem{\forall\alpha.A}{\eta}|,
\vdashp{\reasem{\forall\alpha.A}{\eta}})
\end{eqnarray*}
where
\begin{eqnarray*}
|\reasem{\forall\alpha.A}{\eta}|&=&\prod_{C\in\mathscr{U}}|\reasem{A}{\eta[\alpha\rightarrow C]}|\\
\maj{\reasem{\forall\alpha.A}{\eta}}{\varphi}{e}{a}&\Longleftrightarrow&\forall C.
\maj{\reasem{A}{\eta[\alpha\rightarrow C]}}{\varphi}{e}{a}
\end{eqnarray*}
Here $\mathscr{U}$ stands for the class of all length spaces. A little
care is needed when defining the product since strictly speaking it
does not exist for size reasons. The standard way out is to let the
product range over those length spaces whose underlying set equals the
set of equivalence classes of a partial equivalence relation on $L$. As
already mentioned, every length space is isomorphic to one such. When
working with the product one has to insert these isomorphisms in
appropriate places which, however, we elide to increase readability.\par
If $n\geq 0$ and $A_1,\ldots, A_n$ are formulas,
the expression $\reasem{A_1\otimes\ldots\otimes A_n}{\eta}$ stands
for $I$ if $n=0$ and
$\reasem{A_1\otimes\ldots\otimes A_{n-1}}{\eta}\otimes\reasem{A_n}{\eta}$
if $n\geq 1$.
\section{Elementary Length Spaces}\label{sect:els}
In this section, we define a resource monoid $\mathcal{L}$
such that elementary affine logic can be interpreted in the
category of length spaces on $\mathcal{L}$. We then (re)prove
that functions representable in {\sf EAL}\ are elementary time
computable.\par
A \emph{list} is either $\mathit{empty}$ or
$\cons{n}{l}$ where $n\in\mathbb{N}$ and
$l$ is itself a list.
The sum $l+h$ of two lists $l$ and $h$ is defined as
follows, by induction on $l$:
\begin{eqnarray*}
\mathit{empty} + h = h + \mathit{empty} &=& h\\
\cons{n}{l}+\cons{m}{h}&=&\cons{n+m}{l+h}
\end{eqnarray*}
For every $e\in\mathbb{N}$, binary relations $\leq_e$ on lists can be defined as follows
\begin{varitemize}
\item
$\mathit{empty}\leq_e l$;
\item
$\cons{n}{l}\leq_e\cons{m}{h}$ iff there is $d\in\mathbb{N}$ such that
\begin{numlist}
\item
$n\leq 3^e(m+e)-d$;
\item
$l\leq_d h$.
\end{numlist}
\end{varitemize}
For every $e$ and all lists $l$ and $h$ with $l\leq_e h$, we define
the natural number $\distpar{e}{l}{h}$ as follows:
\condinc{
\begin{eqnarray*}
\distpar{e}{\mathit{empty}}{\mathit{empty}}&=&0;\\
\distpar{e}{\mathit{empty}}{\cons{n}{l}}&=&3^e(n+e)+\distpar{3^e(n+e)}{\mathit{empty}}{l};\\
\distpar{e}{\cons{n}{l}}{\cons{m}{h}}&=&3^e(m+e)-n+\distpar{3^e(m+e)-n}{l}{h};
\end{eqnarray*}}{
$$
\begin{array}{l}
\distpar{e}{\mathit{empty}}{\mathit{empty}}=0;\\
\distpar{e}{\mathit{empty}}{\cons{n}{l}}=
3^e(n+e)+\distpar{3^e(n+e)}{\mathit{empty}}{l};\\
\distpar{e}{\cons{n}{l}}{\cons{m}{h}}=
3^e(m+e)-n+\distpar{3^e(m+e)-n}{l}{h};
\end{array}
$$}
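As a small worked instance (assuming, as in the general definitions of length spaces, that $\normpar{\mathcal{L}}{l}$ stands for $\distpar{\mathcal{L}}{\mathit{empty}}{l}$), unfolding the clauses on a singleton list gives
$$
\distpar{0}{\mathit{empty}}{\cons{n}{\mathit{empty}}}=3^0(n+0)+\distpar{n}{\mathit{empty}}{\mathit{empty}}=n,
$$
i.e. $\normpar{\mathcal{L}}{\cons{n}{\mathit{empty}}}=n$, a fact used repeatedly below.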
Given a list $l$, $!l$ stands for the list $\cons{0}{l}$. The depth
$\depth{l}$ of a list $l$ is defined by induction on $l$:
$\depth{\mathit{empty}}=0$ while
$\depth{\cons{n}{l}}=\depth{l}+1$.
$|l|$ stands for the maximum integer appearing inside $l$, i.e.
$|\mathit{empty}|=0$ and
$|\cons{n}{l}|=\max\{n,|l|\}$.
For every natural number $n$, $\basiclist{n}$ stands for
$\cons{n}{\mathit{empty}}$.\par
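For example, $!\basiclist{2}=\cons{0}{\cons{2}{\mathit{empty}}}$ is a list with
$\depth{!\basiclist{2}}=2$ and $|!\basiclist{2}|=2$.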
\condinc{We can now verify that all the necessary conditions required by the
definition of a resource monoid are satisfied. To do this, we need a
number of preliminary results, which can all be proved by simple
inductions and case-analysis:
\begin{lemma}[Compatibility]\label{lemma:elscompat}
$\mathit{empty}\leq_e l$ for every $l$. Moreover,
if $l,h,j$ are lists and $l\leq_e h$, then
$l+j\leq_e h+j$.
\end{lemma}
\begin{proof}
The first claim is trivial. To prove the second,
we proceed by an induction on $j$. If $j=\mathit{empty}$,
then $l+j=l\leq_e h=h+j$. Now, suppose $j=\cons{n}{g}$.
If $h=\mathit{empty}$, then
$l=\mathit{empty}$ and, clearly $l+j=j\leq_e j=h+j$.
If $l=\mathit{empty}$, we have to prove that $j\leq_e h+j$.
Let $h=\cons{m}{f}$; then
\begin{eqnarray*}
n&\leq& n+m\leq 3^e(n+m+e)-0\\
g&\leq_0& g+f
\end{eqnarray*}
which means $j\leq_e h+j$.
Finally, suppose $l=\cons{m}{f}$, $h=\cons{p}{r}$. Then we know that
\begin{eqnarray*}
m&\leq& 3^e(p+e)-d\\
f&\leq_d& r
\end{eqnarray*}
But then, by inductive hypothesis,
\begin{eqnarray*}
m+n&\leq& 3^e(p+e)+n-d\leq 3^e(p+n+e)-d\\
f+g&\leq_d& r+g
\end{eqnarray*}
which yields $l+j\leq_e h+j$.\hfill$\Box$
\end{proof}
\begin{lemma}[Transitivity]\label{lemma:elstrans}
If $l,h,j$ are lists and $l\leq_e h$, $h\leq_d j$, then
$l\leq_{d+e} j$.
\end{lemma}
\begin{proof}
We can suppose all the involved lists to be different
from $\mathit{empty}$, since all the other cases are trivial. So let
$l=\cons{n}{g}$, $h=\cons{m}{f}$ and $j=\cons{p}{r}$.
From the hypothesis, we have
\begin{eqnarray*}
n&\leq& 3^e(m+e)-c\\
m&\leq& 3^d(p+d)-b\\
g&\leq_c& f\\
f&\leq_b& r
\end{eqnarray*}
But then, by inductive hypothesis, we get
\begin{eqnarray*}
n&\leq& 3^e(m+e)-c\leq 3^e(3^d(p+d)-b+e)-c\leq 3^e3^d(p+d+e)-b-c=3^{e+d}(p+d+e)-(b+c)\\
g&\leq_{c+b}& r
\end{eqnarray*}
This means $l\leq_{d+e} j$.\hfill$\Box$
\end{proof}
\begin{lemma}\label{lemma:elsdistcompat}
If $l,h,j$ are lists and $l\leq_e h$, then
$\distpar{e}{l}{h}\leq\distpar{e}{l+j}{h+j}$.
\end{lemma}
\begin{proof}
We proceed by an induction on $j$. If $j=\mathit{empty}$,
then $l+j=l$ and $h+j=h$. Now, suppose $j=\cons{n}{g}$.
If $h=\mathit{empty}$, then
$l=\mathit{empty}$ and, clearly $l+j=j=h+j$.
If $l=\mathit{empty}$, let $h=\cons{m}{f}$; then
\begin{eqnarray*}
\distpar{e}{l}{h}&=&\distpar{e}{\mathit{empty}}{h}=3^e(m+e)+\distpar{3^e(m+e)}{\mathit{empty}}{f}\\
&\leq&3^e(m+e)+3^en-3^en+\distpar{3^e(m+e)+3^en-3^en}{g}{g+f}\\
&\leq&3^e(m+n+e)-n+\distpar{3^e(m+n+e)-n}{g}{g+f}\\
&=&\distpar{e}{j}{h+j}=\distpar{e}{l+j}{h+j}
\end{eqnarray*}
Finally, suppose $l=\cons{m}{f}$, $h=\cons{p}{r}$. Then we know that
\begin{eqnarray*}
\distpar{e}{l}{h}&=&3^e(m+e)-p+\distpar{3^e(m+e)-p}{f}{r}\\
&\leq&3^e(m+e)-p+\distpar{3^e(m+e)-p}{f+g}{r+g}\\
&\leq&3^e(m+e)+3^en-(n+p)+\distpar{3^e(m+e)+3^en-n-p}{f+g}{r+g}\\
&=&3^e(m+n+e)-(n+p)+\distpar{3^e(m+n+e)-(n+p)}{f+g}{r+g}\\
&=&\distpar{e}{l+j}{h+j}
\end{eqnarray*}
\end{proof}
\begin{lemma}\label{lemma:elsdisttrans}
If $l,h,j$ are lists and $l\leq_e h$, $h\leq_d j$, then
$\distpar{e}{l}{h}+\distpar{d}{h}{j}\leq\distpar{e+d}{l}{j}$.
\end{lemma}
\begin{proof}
If either $h=\mathit{empty}$ or $j=\mathit{empty}$, then the thesis
is trivial. So suppose $h=\cons{n}{g}$ and $j=\cons{m}{f}$.
If $l=\mathit{empty}$, then
\begin{eqnarray*}
\distpar{e}{l}{h}+\distpar{d}{h}{j}&=&3^e(n+e)+\distpar{3^e(n+e)}{\mathit{empty}}{g}
+3^d(m+d)-n+\distpar{3^d(m+d)-n}{g}{f}\\
&\leq&3^e(n+e)+3^d(m+d)-n+\distpar{3^e(n+e)+3^d(m+d)-n}{\mathit{empty}}{f}\\
&\leq&(3^e-1)n+3^ee+3^d(m+d)+\distpar{(3^e-1)n++3^ee+3^d(m+d)}{\mathit{empty}}{f}\\
&\leq&(3^e-1)3^d(m+d)+3^ee+3^d(m+d)+\distpar{(3^e-1)3^d(m+d)+3^ee+3^d(m+d)}{\mathit{empty}}{f}\\
&=&3^{d+e}(m+d+e)+\distpar{3^{d+e}(m+d+e)}{\mathit{empty}}{f}\\
&=&\distpar{e+d}{l}{j}
\end{eqnarray*}
If $l=\cons{p}{r}$, then
\begin{eqnarray*}
\distpar{e}{l}{h}+\distpar{d}{h}{j}&=&3^e(n+e)-p+\distpar{3^e(n+e)-p}{r}{g}
+3^d(m+d)-n+\distpar{3^d(m+d)-n}{g}{f}\\
&\leq&3^e(n+e)-p+3^d(m+d)-n+\distpar{3^e(n+e)-p+3^d(m+d)-n}{r}{f}\\
&\leq&(3^e-1)n+3^ee+3^d(m+d)-p+\distpar{(3^e-1)n+3^ee+3^d(m+d)-p}{r}{f}\\
&\leq&(3^e-1)3^d(m+d)+3^ee+3^d(m+d)-p+\distpar{(3^e-1)3^d(m+d)+3^ee+3^d(m+d)-p}{r}{f}\\
&=&3^{d+e}(m+d+e)-p+\distpar{3^{d+e}(m+d+e)-p}{r}{f}\\
&=&\distpar{e+d}{l}{j}
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}}{Relation $\leq_0$ and function $\mathcal{D}_0$ can
be used to build a resource monoid on lists.}
$|\mathcal{L}|$ will denote the set of all lists, while $\leq_\mathcal{L},\mathcal{D}_\mathcal{L}$ will
denote $\leq_0$ and $\mathcal{D}_0$, respectively.
\begin{lemma}\label{lemma:elsresource}
$\mathcal{L}=(|\mathcal{L}|,+,\leq_\mathcal{L},\mathcal{D}_\mathcal{L})$ is a resource monoid.
\end{lemma}
\condinc{\begin{proof}
$(\mathcal{L},+)$ is certainly a commutative monoid. Compatibility and transitivity of $\leq_\mathcal{L}$ follow from
lemmas~\ref{lemma:elscompat} and~\ref{lemma:elstrans}. The two required
properties of $\mathcal{D}_\mathcal{L}$ come directly from lemmas~\ref{lemma:elsdistcompat}
and~\ref{lemma:elsdisttrans}. If $n\in\mathbb{N}$, observe that $\normpar{\mathcal{L}}{\cons{n}{\mathit{empty}}}=n$.
This concludes the proof.\hfill$\Box$
\end{proof}}{}
An \emph{elementary length space} is a length space on the resource
monoid $(|\mathcal{L}|,+,\leq_\mathcal{L},\mathcal{D}_\mathcal{L})$.
Given an elementary length space $A=(|A|,\vdashp{A})$, we can build the
length space $!A=(|A|,\vdashp{!A})$, where $\maj{!A}{l}{e}{a}$
iff $\maj{A}{h}{e}{a}$ and $l\geq_\mathcal{L} !h$ for some list $h$. The construction
$!$ on elementary length spaces serves to capture the exponential modality
of elementary affine logic. Indeed, the following two results prove
the existence of morphisms and morphisms-forming rules
precisely corresponding to axioms and rules from {\sf EAL}.
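Before stating these, a short computation (writing $\normpar{\mathcal{L}}{l}$ for $\distpar{\mathcal{L}}{\mathit{empty}}{l}$, as above) explains the definition of $!$: by the clauses for $\mathcal{D}_e$,
$$
\distpar{0}{\mathit{empty}}{!l}=3^0(0+0)+\distpar{3^0(0+0)}{\mathit{empty}}{l}=\distpar{0}{\mathit{empty}}{l},
$$
so $\normpar{\mathcal{L}}{!l}=\normpar{\mathcal{L}}{l}$ while $\depth{!l}=\depth{l}+1$: the modality is free as far as the norm is concerned, but raises the depth, which is precisely the parameter controlling the bound of Proposition~\ref{prop:EALbound} below.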
\condinc{
\begin{lemma}\label{lemma:ealcontr}
For every $e\in\mathbb{N}$ and for every $l\in\mathcal{L}$,
$l+l\leq_1 l$ and $\distpar{e+1}{l+l}{l}\geq
\distpar{e}{0}{l}$.
\end{lemma}
\begin{proof}
The inequality $l+l\leq_1 l$ can be proved
by induction on $l$. The base case is trivial.
If $l=\cons{n}{h}$, then
\begin{eqnarray*}
n+n&\leq&3n+3-1=3^1(n+1)-1\\
h+h&\leq_1&h
\end{eqnarray*}
The second inequality can be proved
by induction on $l$, too. The base case is trivial.
If $l=\cons{n}{h}$, observe that
\begin{eqnarray*}
\distpar{e+1}{l+l}{l}&=&3^{e+1}(n+e+1)-2n+\distpar{3^{e+1}(n+e+1)-2n}{h+h}{h}\\
\distpar{e}{0}{l}&=&3^{e}(n+e)+\distpar{3^{e}(n+e)}{0}{h}
\end{eqnarray*}
But
\begin{eqnarray*}
3^{e+1}(n+e+1)-2n&=& 3^e(n+e+1)+2(3^e)(n+e+1)-2n\\
&\geq& 3^e(n+e+1)+2n-2n\geq 3^e(n+e)+1
\end{eqnarray*}
This concludes the proof.
\hfill$\Box$
\end{proof}}{}
\condinc{
\begin{lemma}[Basic Maps]
Given elementary length spaces $A,B$, there are morphisms:
\begin{eqnarray*}
\mathit{contr}&:&!A\rightarrow!A\otimes!A\\
\mathit{distr}&:&!A\otimes!B\rightarrow !(A\otimes B)
\end{eqnarray*}
where
$\mathit{contr}(a)=(a,a)$ and
$\mathit{distr}(a,b)=(a,b)$
\end{lemma}}{
\begin{lemma}[Basic Maps]
Given elementary length spaces $A,B$, there are morphisms
$\mathit{contr}:!A\rightarrow!A\otimes!A$ and
$\mathit{distr}:!A\otimes!B\rightarrow !(A\otimes B)$
where
$\mathit{contr}(a)=(a,a)$ and
$\mathit{distr}(a,b)=(a,b)$.
\end{lemma}}
\condinc{
\begin{proof}
We know $\{e_\mathit{contr}\}(d)$ takes time $|d|+p$,
where $p$ is a constant. Then, let $l,h\in \mathcal{L}$ be
such that $\normpar{\mathcal{L}}{l}\geq p+|e_\mathit{contr}|$,
$\normpar{\mathcal{L}}{h}\geq\mathit{cp}$. Define $l_\mathit{contr}$
to be $l+h+\basiclist{1}$. Clearly,
$\normpar{\mathcal{L}}{l_\mathit{contr}}\geq |e_\mathit{contr}|$.
Now, let $\maj{!A}{j}{d}{a}$. This implies that
$j\geq_{\mathcal{L}}!k$ where $\maj{A}{k}{d}{a}$.
Then:
\begin{eqnarray*}
h+!k+!k&\geq_{\mathcal{L}}& !k+!k\\
\normpar{\mathcal{L}}{h+!k+!k}&\geq&\normpar{\mathcal{L}}{h}+\normpar{\mathcal{L}}{!k}+\normpar{\mathcal{L}}{!k}\\
&\geq&\mathit{cp}+\normpar{\mathcal{L}}{!k}+\normpar{\mathcal{L}}{!k}
\end{eqnarray*}
This yields $\maj{!A\otimes !A}{h+!k+!k}{\langle d,d\rangle}{(a,a)}$.
By lemma~\ref{lemma:ealcontr}, $h+!k+!k\leq_{\mathcal{L}}h+!k+\basiclist{1}
\leq_{\mathcal{L}}h+j+\basiclist{1}\leq_{\mathcal{L}}j+l_\mathit{contr}$.
Finally,
\begin{eqnarray*}
\timetm{e_\mathit{contr}}{d}&\leq&|d|+p\leq \normpar{\mathcal{L}}{k}+p
\leq\distpar{\mathcal{L}}{!k+!k}{!k+\basiclist{1}}+\normpar{\mathcal{L}}{l}\\
&\leq& \distpar{\mathcal{L}}{!k+!k}{!k+\basiclist{1}+l}\\
&\leq& \distpar{\mathcal{L}}{!k+!k+h}{!k+\basiclist{1}+l+h}\\
&=& \distpar{\mathcal{L}}{!k+!k+h}{!k+l_\mathit{contr}}
\end{eqnarray*}
This proves $\mathit{contr}$ to be a morphism.\par
Let $e_\mathit{distr}=e_\mathit{id}$. We know
$\{e_\mathit{id}\}(d)$ takes constant time,
say $p$. Then, let $l,h\in \mathcal{L}$ be
such that $\normpar{\mathcal{L}}{l}\geq p+|e_\mathit{distr}|$,
$\normpar{\mathcal{L}}{h}\geq\mathit{cp}$. $l_\mathit{distr}$ is then
defined as $l+!h$.
Now, let $\maj{!A\otimes !B}{j}{\langle d,c\rangle}{(a,b)}$.
This means that $j\geq !k+!i$, where
$\maj{A}{k}{d}{a}$ and $\maj{B}{i}{c}{b}$. This in turn
means that $\maj{A\otimes B}{k+i+h}{\langle d,c\rangle}{(a,b)}$
and $\maj{!(A\otimes B)}{!(k+i+h)}{\langle d,c\rangle}{(a,b)}$.
Moreover
$$
!(k+i+h)=!k+!i+!h\leq_\mathcal{L} j+!h\leq_\mathcal{L} j+l_\mathit{distr}
$$
Finally:
\begin{eqnarray*}
\timetm{e_\mathit{distr}}{\langle d,c\rangle}&\leq&p\leq\normpar{\mathcal{L}}{l}\\
&\leq& \distpar{\mathcal{L}}{!(k+i+h)}{j+!h}+\normpar{\mathcal{L}}{l}\\
&\leq& \distpar{\mathcal{L}}{!(k+i+h)}{j+!h+l}\\
&\leq& \distpar{\mathcal{L}}{!(k+i+h)}{j+l_\mathit{distr}}
\end{eqnarray*}
This proves $\mathit{distr}$ to be a morphism.\hfill$\Box$
\end{proof}}{}
\begin{lemma}[Functoriality]
If $f:\morphism{A}{e}{\varphi}{B}$, then there is $\psi$ such
that $f:\morphism{!A}{e}{\psi}{!B}$.
\end{lemma}
\condinc{\begin{proof}
Let $\psi$ be $!\varphi$ and suppose $\maj{!A}{l}{d}{a}$. Then $l\geq_\mathcal{L} !h$,
where $\maj{A}{h}{d}{a}$. Observe that there must be $j,c$ such that
$\maj{B}{j}{c}{f(a)}$, $j\leq_{\mathcal{L}} h+\varphi$ and
$\timetm{e}{d}\leq\distpar{\mathcal{L}}{j}{h+\varphi}$.
But then $\maj{!B}{!j}{c}{f(a)}$ and, moreover
\begin{eqnarray*}
!j&\leq_{\mathcal{L}}&!(h+\varphi)=!h+!\varphi\leq_{\mathcal{L}}!h+\psi\\
\timetm{e}{d}&\leq&\distpar{\mathcal{L}}{j}{h+\varphi}
\leq\distpar{\mathcal{L}}{!j}{!(h+\varphi)}\\
&\leq&\distpar{\mathcal{L}}{!j}{!h+!\varphi}
\leq\distpar{\mathcal{L}}{!j}{l+\psi}
\end{eqnarray*}
This means that $f:\morphism{!A}{e}{\psi}{!B}$.\hfill$\Box$
\end{proof}}{}
Elementary bounds can be given on $\normpar{\mathcal{L}}{l}$ depending
on $|l|$ and $\depth{l}$:
\begin{proposition}\label{prop:EALbound}
For every $n\in\mathbb{N}$ there is an elementary function $p_n:\mathbb{N}\rightarrow\mathbb{N}$ such
that $\normpar{\mathcal{L}}{l}\leq p_{\depth{l}}(|l|)$.
\end{proposition}
\condinc{\begin{proof}
We prove a stronger statement by induction on $n$: for every $n\in\mathbb{N}$
there is an elementary function $q_n:\mathbb{N}^2\rightarrow\mathbb{N}$ such that for every $l,e$,
$\distpar{e}{\mathit{empty}}{l}\leq q_{\depth{l}}(|l|,e)$. First of all, we
know that $\distpar{e}{\mathit{empty}}{\mathit{empty}}=0$, so $q_0$ is just the
function which always returns $0$. $q_{n+1}$ is defined from $q_n$ as
follows: $q_{n+1}(x,y)=3^y(x+y)+q_n(x,3^y(x+y))$.
Indeed:
\begin{eqnarray*}
\distpar{e}{\mathit{empty}}{\cons{n}{l}}&=&
3^e(n+e)+\distpar{3^e(n+e)}{\mathit{empty}}{l}\\
&\leq&3^e(|\cons{n}{l}|+e)+q_{\depth{l}}(|l|,3^e(n+e))\\
&\leq&3^e(|\cons{n}{l}|+e)+q_{\depth{l}}(|\cons{n}{l}|,3^e(|\cons{n}{l}|+e))\\
&=&q_{\depth{\cons{n}{l}}}(|\cons{n}{l}|,e)
\end{eqnarray*}
At this point we just put $p_n(x)=q_n(x,0)$.\hfill$\Box$
\end{proof}}{}
We emphasize that Proposition~\ref{prop:EALbound} does not assert that the mapping
$(n,m)\mapsto p_n(m)$ is elementary. This, indeed, cannot be true
because we know {\sf EAL}\ to be complete for the class of elementary
functions. If, however, $A\subseteq\mathcal{L}$ is such that $l\in A$
implies $\depth{l}\leq c$ for a fixed $c$, then $(l\in A)\mapsto
p_{\depth{l}}(|l|)$ is elementary and it is in this way that we will
use the above proposition.
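For instance, unfolding $\mathcal{D}_0$ directly on lists of depth one and two gives
$$
\normpar{\mathcal{L}}{\cons{n}{\mathit{empty}}}=n,
\qquad
\normpar{\mathcal{L}}{\cons{n}{\cons{m}{\mathit{empty}}}}=n+3^n(m+n).
$$
Each additional level of depth thus contributes one further exponential; for lists of fixed depth, however, the bound is an elementary function of $|l|$.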
\subsection{Interpreting Elementary Affine Logic}
{\sf EAL}\ can be obtained by endowing multiplicative affine logic
with a restricted modality. The grammar of formulae is enriched with
a new production $A::= !A$
while modal rules are reported in figure~\ref{figure:EAL}.
\begin{figure*}
\begin{center}
\fbox{
\begin{minipage}{.8\textwidth}
{\bf Exponential Rules and Contraction}.
$$
\begin{array}{lcr}
\infer[P]{!\Gamma\vdash !A}{\Gamma\vdash A} & \;\; &
\infer[C]{\Gamma,!A\vdash B}{\Gamma,!A,!A\vdash B}
\end{array}
$$
\end{minipage}}
\caption{Intuitionistic Elementary Affine Logic}\label{figure:EAL}
\end{center}
\end{figure*}
Realizability semantics is extended by
$\reasem{!A}{\eta}=!\reasem{A}{\eta}$.
\begin{theorem}\label{theo:EAL}
Elementary length spaces form a model of {\sf EAL}.
\end{theorem}
Now, consider the formula
$$
\mathit{List}_\EAL\equiv\forall\alpha.!(\alpha\multimap\alpha)\multimap!(\alpha\multimap\alpha)
\multimap!(\alpha\multimap\alpha)
$$
Binary lists can be represented as cut-free proofs
with conclusion $\mathit{List}_\EAL$. Suppose you have a proof
$\pi:!^j\mathit{List}_\EAL\multimap !^k\mathit{List}_\EAL$.
From the denotation $\reasem{\pi}{}$ we
can build a morphism $g$ from $\reasem{\mathit{List}_\EAL}{}$ to $B_\mathcal{L}$ by internal
application to $\varepsilon,s_0,s_1$. This map then induces a function
$f: B\rightarrow B$ as
follows: given $w\in B$, first compute a realizer for
the closed proof corresponding to it, then apply $g$ to the result.
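As an illustration (using the standard Church-style reading of $\mathit{List}_\EAL$; the precise representation of binary lists is the usual one for light logics and is not spelled out here), the word $01$ corresponds to a cut-free proof whose underlying $\lambda$-term is
$$
\lambda s_0.\lambda s_1.\lambda x.\,s_0(s_1(x)),
$$
and applying $g$ to (a realizer of) this proof yields a realizer of the string $01$ in $B_\mathcal{L}$.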
\begin{remark}\label{remark:bounddepth}
Notice that elements of $B_\mathcal{L}$ can all be
majorized by lists with unit depth. Similarly, elements
of $\reasem{\mathit{List}_\EAL}{}$ corresponding to binary lists
can be majorized by lists with bounded depth. This observation
is essential to prove the following result.
\end{remark}
\begin{corollary}[Soundness]
Let $\pi$ be an {\sf EAL}\ proof with conclusion $\vdash!^j\mathit{List}_\EAL\multimap!^k\mathit{List}_\EAL$
and let $f:B\rightarrow B$ be the function induced by $\reasem{\pi}{}$.
Then $f$ is computable in elementary time.
\end{corollary}
The function $f$ in the previous result
equals the function denoted by the proof $\pi$ in the sense of
\cite{hofmann04tcs}. This intuitively obvious fact can be proved
straightforwardly but somewhat tediously using a logical relation or
a similar technique; see also \cite{hofmann04tcs}.
\condinc{
\section{Soft Length Spaces}\label{sect:sls}
The grammar of formulae for {\sf SAL}\ is the same as the one of Elementary Affine Logic.
Rules are reported in figure~\ref{figure:SAL}.
\begin{figure*}
\begin{center}
\fbox{
\begin{minipage}{.8\textwidth}
{\bf Exponential Rules and Contraction}.
$$
\begin{array}{lcr}
\infer[P]{!\Gamma\vdash !A}{\Gamma\vdash A} & \;\; &
\infer[C]{\Gamma,!A\vdash B}{\Gamma,A,\ldots,A\vdash B}
\end{array}
$$
\end{minipage}}
\caption{Intuitionistic Soft Affine Logic}\label{figure:SAL}
\end{center}
\end{figure*}
We here use a resource monoid whose
underlying carrier set is $|\mathcal{I}|=|\mathcal{L}|\times\mathbb{N}$.
The sum $(l,n)+(h,m)$ of two elements in $|\mathcal{I}|$ is defined as
$(l+h,\max\{n,m\})$. For every $e\in\mathbb{N}$, binary relations
$\leq_e$ on $|\mathcal{I}|$ can be defined as follows
\begin{varitemize}
\item
$(\mathit{empty},n)\leq_0 (\mathit{empty},m)$ iff $n\leq m$;
\item
$(\mathit{empty},n)\leq_e (\cons{m}{l},p)$ iff there is $d\in\mathbb{N}$ such that
\begin{numlist}
\item
$e\leq m+pd$
\item
$(\mathit{empty},n)\leq_d (l,p)$
\end{numlist}
\item
$(\cons{n}{l},m)\leq_e(\cons{p}{h},q)$ iff there is $d\in\mathbb{N}$ such that
\begin{numlist}
\item
$e+n\leq p+qd$;
\item
$(l,m)\leq_d (h,q)$.
\end{numlist}
\end{varitemize}
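For instance, on pairs whose first components are singleton lists the definition collapses to arithmetic: $(\cons{n}{\mathit{empty}},m)\leq_e(\cons{p}{\mathit{empty}},q)$ iff $e+n\leq p$ and $m\leq q$, since the first clause forces $d=0$ in the third one.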
If $\alpha=(l,n)\in|\mathcal{I}|$, then $!\alpha$ will
be the couple $(\cons{0}{l},n)\in|\mathcal{I}|$.
If there is $e$ such that $\alpha\leq_e \beta$, then we
will simply write $\alpha\leq_\mathcal{I} \beta$.
For every $\alpha$ and $\beta$ with $\alpha\leq_\mathcal{I} \beta$, we define
the natural number $\distpar{\mathcal{I}}{\alpha}{\beta}$ as follows:
\condinc{
\begin{eqnarray*}
\distpar{\mathcal{I}}{(\mathit{empty},n)}{(\mathit{empty},m)}&=&0\\
\distpar{\mathcal{I}}{(\mathit{empty},n)}{(\cons{m}{l},p)}&=&m+p\distpar{\mathcal{I}}{(\mathit{empty},n)}{(l,p)}\\
\distpar{\mathcal{I}}{(\cons{n}{l},m)}{(\cons{p}{h},q)}&=&p-n+q\distpar{\mathcal{I}}{(l,m)}{(h,q)}
\end{eqnarray*}
}{
$$
\begin{array}{l}
\distpar{\mathcal{I}}{(\mathit{empty},n)}{(\mathit{empty},m)}=0;\\
\distpar{\mathcal{I}}{(\mathit{empty},n)}{(\cons{m}{l},p)}=
m+p\distpar{\mathcal{I}}{(\mathit{empty},n)}{(l,p)};\\
\distpar{\mathcal{I}}{(\cons{n}{l},m)}{(\cons{p}{h},q)}=
p-n+q\distpar{\mathcal{I}}{(l,m)}{(h,q)};
\end{array}
$$
}
Equivalently, $\distpar{\mathcal{I}}{\alpha}{\beta}$ is simply the maximum integer $e$
such that $\alpha\leq_e \beta$. $|\alpha|$ is the maximum integer appearing inside $\alpha$, i.e.
$|(l,n)|=\max\{|l|,n\}$. The depth $\depth{\alpha}$ of $\alpha=(l,n)$ is $\depth{l}$.
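For example, writing $\normpar{\mathcal{I}}{\alpha}$ for $\distpar{\mathcal{I}}{(\mathit{empty},0)}{\alpha}$, we get
$\normpar{\mathcal{I}}{(\cons{n}{\mathit{empty}},m)}=n+m\cdot 0=n$, while
$$
\normpar{\mathcal{I}}{!(\cons{n}{\mathit{empty}},m)}=
\normpar{\mathcal{I}}{(\cons{0}{\cons{n}{\mathit{empty}}},m)}=0+m\cdot n=mn.
$$
Under $!$, the second component thus acts as a multiplier on the norm; this is what will pay for the $n$-fold duplication performed by soft contraction.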
\condinc{
\begin{lemma}[Compatibility]\label{lemma:slscompat}
$(\mathit{empty},0)\leq_0 \alpha$ for every $\alpha$. Moreover,
if $\alpha,\beta,\gamma\in|\mathcal{I}|$ and $\alpha\leq_e \beta$, then
$\alpha+\gamma\leq_e \beta+\gamma$.
\end{lemma}
\begin{proof}
The first claim is trivial. To prove the second,
we proceed by an induction on the structure of the
first component of $\gamma$. We just consider
the case where the first components of $\alpha,\beta,\gamma$ are
all different from $\mathit{empty}$. So, suppose
$\alpha=(\cons{n}{l},m)$, $\beta=(\cons{p}{h},q)$, $\gamma=(\cons{r}{j},s)$.
By hypothesis, we get $d\in\mathbb{N}$ such that
\begin{eqnarray*}
e+n&\leq&p+dq\\
(l,m)&\leq_d&(h,q)
\end{eqnarray*}
Then, $e+n+r\leq p+r+dq\leq p+r+d\max\{q,s\}$
and, by induction hypothesis, $(l+j,\max\{m,s\})\leq_d(h+j,\max\{q,s\})$.
This implies that $\alpha+\gamma\leq_e \beta+\gamma$.\hfill$\Box$
\end{proof}
\begin{lemma}[Transitivity]\label{lemma:slstrans}
If $\alpha,\beta,\gamma\in|\mathcal{I}|$ are lists and $\alpha\leq_e \beta$, $\beta\leq_d \gamma$, then
$\alpha\leq_{d+e} \gamma$.
\end{lemma}
\begin{proof}
We go by induction on the structure of the first component
of $\gamma$ and we suppose the first components of $\alpha,\beta,\gamma$ to be different
from $\mathit{empty}$. So, let
$\alpha=(\cons{n}{l},m)$, $\beta=(\cons{p}{h},q)$ and $\gamma=(\cons{r}{j},s)$.
From the hypothesis, there are $c,b\in\mathbb{N}$ such that
\begin{eqnarray*}
e+n&\leq&p+cq\\
d+p&\leq&r+bs\\
(l,m)&\leq_c&(h,q)\\
(h,q)&\leq_b&(j,s)
\end{eqnarray*}
But then, by inductive hypothesis, we get
\begin{eqnarray*}
(e+d)+n&\leq& d+p+cq\leq r+bs+cq\leq r+(b+c)s\\
(l,m)&\leq_{c+b}& (j,s)
\end{eqnarray*}
which yields $\alpha\leq_{d+e} \gamma$.\hfill$\Box$
\end{proof}
\begin{lemma}\label{lemma:slsdistcompat}
If $\alpha,\beta,\gamma\in|\mathcal{I}|$ and $\alpha\leq_e \beta$, then
$\distpar{\mathcal{I}}{\alpha}{\beta}\leq\distpar{\mathcal{I}}{\alpha+\gamma}{\beta+\gamma}$.
\end{lemma}
\begin{proof}
This is trivial in view of lemma~\ref{lemma:slscompat} and
the fact that $\distpar{\mathcal{I}}{\alpha}{\beta}$ is just
$\max\{e\in\mathbb{N}\;|\;\alpha\leq_e \beta\}$.\hfill$\Box$
\end{proof}
\begin{lemma}\label{lemma:slsdisttrans}
If $\alpha,\beta,\gamma\in\mathcal{I}$ and $\alpha\leq_e \beta$, $\beta\leq_d \gamma$, then
$\distpar{e}{\alpha}{\beta}+\distpar{d}{\beta}{\gamma}\leq\distpar{e+d}{\alpha}{\gamma}$.
\end{lemma}
\begin{proof}
This is trivial in view of lemma~\ref{lemma:slstrans} and
the fact that $\distpar{\mathcal{I}}{\alpha}{\beta}$ is just
$\max\{e\in\mathbb{N}\;|\;\alpha\leq_e \beta\}$.\hfill$\Box$
\end{proof}}{}
\begin{lemma}\label{lemma:slsresource}
$\mathcal{I}=(|\mathcal{I}|,+,\leq_\mathcal{I},\mathcal{D}_\mathcal{I})$ is a resource monoid.
\end{lemma}
\condinc{\begin{proof}
$(|\mathcal{I}|,+)$ is certainly a commutative monoid. Compatibility and transitivity of $\leq_\mathcal{I}$ follow from
lemmas~\ref{lemma:slscompat} and~\ref{lemma:slstrans}. The two required
properties of $\mathcal{D}_\mathcal{I}$ come directly from lemmas~\ref{lemma:slsdistcompat}
and~\ref{lemma:slsdisttrans}. If $n\in\mathbb{N}$, observe that $\normpar{\mathcal{I}}{(\cons{n}{\mathit{empty}},0)}=n$.
This concludes the proof.\hfill$\Box$
\end{proof}}{}
A \emph{soft length space} is a length space on the resource
monoid $(|\mathcal{I}|,+,\leq_\mathcal{I},\mathcal{D}_\mathcal{I})$.\par
Given a soft length space $A=(|A|,\vdashp{A})$, we can build the
length space $!A=(|A|,\vdashp{!A})$, where $\maj{!A}{\alpha}{e}{a}$
iff $\maj{A}{\beta}{e}{a}$ and $\alpha\geq_\mathcal{I} !\beta$ for some $\beta$.
We write $\basiclistint{n}{m}$ for $(\cons{n}{\mathit{empty}},m)$.\par
\condinc{
\begin{lemma}\label{lemma:salcontr}
For every $\alpha\in\mathcal{I}$ and for every $n,m\in\mathbb{N}$ the following
inequality holds:
$$
n.\alpha\leq_{n\normpar{\mathcal{I}}{\alpha}+m}!\alpha+\basiclistint{m}{2n}
$$
\end{lemma}
\begin{proof}
Let $\alpha=(l,p)$. We go by induction on $l$. If $l$ is $\mathit{empty}$, then
\begin{eqnarray*}
n.\alpha&=&(\mathit{empty},p)\\
!\alpha+\basiclistint{m}{2n}&=&(\cons{m}{\mathit{empty}},\max\{p,2n\})\\
n\normpar{\mathcal{I}}{\alpha}+m&=&m\\
\mathit{empty}&\leq_0&\mathit{empty}
\end{eqnarray*}
This implies the thesis. Moreover, if $l=\cons{q}{h}$, then
\begin{eqnarray*}
n.\alpha&=&(n.l,p)=(\cons{nq}{n.h},p)\\
!\alpha+\basiclistint{m}{2n}&=&(\cons{m}{l},\max\{p,2n\})\\
n\normpar{\mathcal{I}}{\alpha}+m&=&n(q+p\normpar{\mathcal{I}}{h,p})+m
\end{eqnarray*}
By induction hypothesis, we get
\begin{eqnarray*}
(n.h,p)&\leq_{n\normpar{\mathcal{I}}{h,p}+q}&!(h,p)+\basiclistint{q}{2n}=(l,\max\{p,2n\})\\
(n(q+p\normpar{\mathcal{I}}{h,p})+m)+nq&=&m+2nq+np\normpar{\mathcal{I}}{h,p}\\
&\leq&m+\max\{p,2n\}(n\normpar{\mathcal{I}}{h,p}+q)
\end{eqnarray*}
from which the desired inequality easily follows.\hfill$\Box$
\end{proof}
\begin{lemma}[Basic Maps]
Given soft length spaces $A,B$ and a natural number
$n\geq 1$, there are morphisms:
\begin{eqnarray*}
\mathit{contr}_n&:&!A\rightarrow \overbrace{A\otimes\ldots\otimes A}^{\mbox{$n$ times}}\\
\mathit{distr}&:&!A\otimes!B\rightarrow !(A\otimes B)
\end{eqnarray*}
where
$\mathit{contr}(a)=(\overbrace{a,\ldots,a}^{\mbox{$n$ times}})$ and
$\mathit{distr}(a,b)=(a,b)$
\end{lemma}
\begin{proof}
We define realizers $e_\mathit{contr}^n$ for every $n\geq 1$ by
induction on $n$:
\begin{eqnarray*}
e_\mathit{contr}^1&=&e_\mathit{id}\\
e_\mathit{contr}^{n+1}&=&(e_\mathit{contr}^n)^*\circ e_\mathit{contr}
\end{eqnarray*}
Clearly, $e_\mathit{contr}^n$ is a realizer
for $\mathit{contr}_n$. Moreover,
$\timetm{e_\mathit{contr}^n}{x}\leq n|x|+q_n$, where $q_n$ does not
depend on $x$. Now, let $\psi_n$ be such that
$\normpar{\mathcal{I}}{\psi_n}\geq \mathit{cp}\cdot n$ and $\varphi_\mathit{contr}^n$ be
$\basiclistint{q_n}{2n}+\psi_n$ for every $n\geq 1$. Now,
let $\maj{!A}{\alpha}{j}{a}$. This implies $\alpha\geq_\mathcal{I} !(l,m)$,
where $\maj{A}{(l,m)}{j}{a}$. Notice that
$$
\maj{\underbrace{A\otimes\ldots\otimes A}_{\mbox{$n$ times}}}{n.(l,m)+\psi_n}{\langle \overbrace{j,\ldots,j}^{\mbox{$n$ times}}\rangle}{(\overbrace{a,\ldots,a}^{\mbox{$n$ times}})}
$$
By lemma~\ref{lemma:salcontr},
we finally get
\begin{eqnarray*}
n.(l,m)+\psi_n&\leq_\mathcal{I}& !(l,m)+\basiclistint{q_n}{2n}+\psi_n\\
&=&!(l,m)+\varphi_\mathit{contr}^n\leq \varphi_\mathit{contr}^n+\alpha\\
\timetm{e_\mathit{contr}^n}{j}&\leq&n|j|+q_n\\
&\leq&n\normpar{\mathcal{I}}{l,m}+q_n\\
&\leq&\distpar{\mathcal{I}}{n.(l,m)}{!(l,m)+\basiclistint{q_n}{2n}}\\
&\leq&\distpar{\mathcal{I}}{n.(l,m)}{(\cons{q_n}{l},\max\{m,2n\})}\\
&\leq&\distpar{\mathcal{I}}{n.(l,m)+\psi_n}{(\cons{q_n}{l},\max\{m,2n\})+\psi_n}\\
&\leq&\distpar{\mathcal{I}}{n.(l,m)+\psi_n}{\basiclistint{q_n}{2n}+\alpha+\psi_n}\\
&\leq&\distpar{\mathcal{I}}{n.(l,m)+\psi_n}{\alpha+\varphi^n_\mathit{contr}}
\end{eqnarray*}
This proves each $e_\mathit{contr}^n$ to be a morphism.\par
Let $e_\mathit{distr}=e_\mathit{id}$. We know
$\{e_\mathit{id}\}(d)$ takes constant time,
say $p$. Then, let $\psi,\mu\in \mathcal{I}$ be
such that $\normpar{\mathcal{I}}{\psi}\geq p+|e_\mathit{distr}|$,
$\normpar{\mathcal{I}}{\mu}\geq\mathit{cp}$. $\varphi_\mathit{distr}$ is then
defined as $\psi+!\mu$.
Now, let $\maj{!A\otimes !B}{\alpha}{\langle d,c\rangle}{(a,b)}$.
This implies $\alpha\geq !\beta+!\gamma$, where
$\maj{A}{\beta}{d}{a}$ and $\maj{B}{\gamma}{c}{b}$. This in turn
implies $\maj{A\otimes B}{\beta+\gamma+\mu}{\langle d,c\rangle}{(a,b)}$
and $\maj{!(A\otimes B)}{!(\beta+\gamma+\mu)}{\langle d,c\rangle}{(a,b)}$.
Moreover
$$
!(\beta+\gamma+\mu)=!\beta+!\gamma+!\mu\leq_\mathcal{I} \alpha+!\mu\leq_\mathcal{I} \alpha+\varphi_\mathit{distr}
$$
Finally:
\begin{eqnarray*}
\timetm{e_\mathit{distr}}{\langle d,c\rangle}&\leq&p\leq\normpar{\mathcal{I}}{\psi}\\
&\leq& \distpar{\mathcal{I}}{!(\beta+\gamma+\mu)}{\alpha+!\mu}+\normpar{\mathcal{I}}{\psi}\\
&\leq& \distpar{\mathcal{I}}{!(\beta+\gamma+\mu)}{\alpha+!\mu+\psi}\\
&\leq& \distpar{\mathcal{I}}{!(\beta+\gamma+\mu)}{\alpha+\varphi_\mathit{distr}}
\end{eqnarray*}
This proves $\mathit{distr}$ to be a morphism.\hfill$\Box$
\end{proof}
\begin{lemma}[Functoriality]
If $f:\morphism{A}{e}{\varphi}{B}$, then there is $\psi$ such
that $f:\morphism{!A}{e}{\psi}{!B}$.
\end{lemma}
\begin{proof}
Let $\psi$ be $!\varphi$ and suppose $\maj{!A}{\alpha}{d}{a}$. Then $\alpha\geq_\mathcal{I} !\beta$,
where $\maj{A}{\beta}{d}{a}$. Observe that there must be $\gamma,c$ such that
$\maj{B}{\gamma}{c}{f(a)}$, $\gamma\leq_{\mathcal{I}} \beta+\varphi$ and
$\timetm{e}{d}\leq\distpar{\mathcal{I}}{\gamma}{\beta+\varphi}$.
But then $\maj{!B}{!\gamma}{c}{f(a)}$ and, moreover
\begin{eqnarray*}
!\gamma&\leq_{\mathcal{I}}&!(\beta+\varphi)=!\beta+!\varphi\leq_{\mathcal{I}}!\beta+\psi\\
\timetm{e}{d}&\leq&\distpar{\mathcal{I}}{\gamma}{\beta+\varphi}
\leq\distpar{\mathcal{I}}{!\gamma}{!(\beta+\varphi)}\\
&\leq&\distpar{\mathcal{I}}{!\gamma}{!\beta+!\varphi}
\leq\distpar{\mathcal{I}}{!\gamma}{\alpha+\psi}
\end{eqnarray*}
This implies $f:\morphism{!A}{e}{\psi}{!B}$.\hfill$\Box$
\end{proof}
}{The following two results can be proved with techniques similar
to those from Proposition~\ref{prop:EALbound} and Theorem~\ref{theo:EAL}.}
\begin{proposition}
For every $n\in\mathbb{N}$ there is a polynomial $p_n:\mathbb{N}\rightarrow\mathbb{N}$ such
that $\normpar{\mathcal{I}}{\alpha}\leq p_{\depth{\alpha}}(|\alpha|)$
for every $\alpha\in |\mathcal{I}|$.
\end{proposition}
\condinc{\begin{proof}
We go by induction on $n$. First of all, we
know that $\distpar{\mathcal{I}}{(\mathit{empty},0)}{(\mathit{empty},m)}=0$, so
$p_0$ is just the function which always returns $0$.
$p_{n+1}$ is defined from $p_n$ as follows: $p_{n+1}(x)=x+xp_n(x)$.
Indeed:
\begin{eqnarray*}
\distpar{\mathcal{I}}{(\mathit{empty},0)}{(\cons{n}{l},m)}&=&
n+m\distpar{\mathcal{I}}{(\mathit{empty},0)}{(l,m)}\\
&\leq&|(\cons{n}{l},m)|+|(\cons{n}{l},m)|p_{\depth{(l,m)}}(|(\cons{n}{l},m)|)\\
&=&p_{\depth{(\cons{n}{l},m)}}(|(\cons{n}{l},m)|).
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}}{}
Again, we do not claim that $(n,m)\mapsto p_n(m)$ is a polynomial
(cf.\ Remark~\ref{remark:bounddepth}).
\begin{theorem}
Soft length spaces form a model of {\sf SAL}.
\end{theorem}
Binary lists can be represented in {\sf SAL}\ as cut-free proofs
with conclusion
$$
\mathit{List}_\SAL\equiv\forall\alpha.!(\alpha\multimap\alpha)\multimap !(\alpha\multimap\alpha)
\multimap(\alpha\multimap\alpha)
$$
\begin{corollary}[Soundness]
Let $\pi$ be an {\sf SAL}\ proof with conclusion $\vdash !^j\mathit{List}_\SAL\multimap !^k\mathit{List}_\SAL$
and let $f:B\rightarrow B$ be the function induced by $\reasem{\pi}{}$.
Then $f$ is computable in polynomial time.
\end{corollary}
\section{Light Length Spaces}\label{sect:lls}
The grammar of formulae for Light Affine Logic is the one from Elementary Affine Logic, enriched
with a new production $A::=\S A$. Rules are reported in figure~\ref{figure:LAL}.
\begin{figure*}
\begin{center}
\fbox{
\begin{minipage}{.8\textwidth}
{\bf Exponential Rules and Contraction}.
$$
\begin{array}{lccr}
\infer[P_\S]{\S\Gamma,!\Delta\vdash \S A}{\Gamma,\Delta\vdash A} \;\; &
\infer[P_!^1]{!A\vdash !B}{A\vdash B} \; & \;
\infer[P_!^2]{\vdash !A}{\vdash A} & \;\;
\infer[C]{\Gamma,!A\vdash B}{\Gamma,!A,!A\vdash B}
\end{array}
$$
\end{minipage}}
\caption{Intuitionistic Light Affine Logic}\label{figure:LAL}
\end{center}
\end{figure*}
Light length spaces are a model of Light Affine
Logic. The underlying resource monoid
is more complex than the ones we encountered
so far. This complexity is a consequence
of the strange behaviour of
the modality $!$, which is functorial but does
not distribute over tensor (i.e. $!(A\otimes B)
\not\cong !A\otimes !B$).\par
A \emph{tree} is either $\mathit{empty}$ or
a triple $\node{n}{t}{T}$ where $n\in\mathbb{N}$,
$t$ is itself a tree and $T$ is a finite
nonempty set of trees. $|\mathcal{T}|$ is the
set of all trees. We write $\basictree{n}$ for
the tree $\node{n}{\mathit{empty}}{\{\mathit{empty}\}}$.
The sum $t+s$ of two trees $t$ and $s$ is defined as
follows, by induction on $t$:
\condinc{
\begin{eqnarray*}
\mathit{empty} + t &=& t + \mathit{empty}=t;\\
\node{n}{t}{T}+\node{m}{u}{U}&=&\node{n+m}{t+u}{T\cup U};
\end{eqnarray*}}{
$$
\begin{array}{l}
\mathit{empty} + t = t + \mathit{empty}=t;\\
\node{n}{t}{T}+\node{m}{u}{U}=\node{n+m}{t+u}{T\cup U};
\end{array}
$$}
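For example,
$$
\basictree{n}+\basictree{m}=\basictree{n+m},
\qquad
\node{1}{\basictree{2}}{\{\mathit{empty}\}}+\basictree{3}=\node{4}{\basictree{2}}{\{\mathit{empty}\}}.
$$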
Here, more sophisticated techniques are needed: the relations on trees carry an additional superscript, bounding the depth up to which the two trees are compared.
For every $n,e\in\mathbb{N}$, binary relations $\leq_e^n$ on trees can be defined as follows
\begin{varitemize}
\item
$t\leq_e^0 u$ for every $t,u\in |\mathcal{T}|$;
\item
$\mathit{empty}\leq_e^{n+1} t$ for every $t\in |\mathcal{T}|$;
\item
$\node{m}{t}{T}\leq_e^{n+1}\mathit{empty}$ iff there is $d\in\mathbb{N}$ such that
\begin{numlist}
\item
$m\leq e-d$;
\item
$t\leq_{d^2}^n\mathit{empty}$;
\item
For every $s\in T$, $s\leq_{d}^n\mathit{empty}$.
\end{numlist}
\item
$\node{m}{t}{T}\leq_e^{n+1}\node{l}{u}{U}$ iff there is $d\in\mathbb{N}$ such that
\begin{numlist}
\item
$m\leq l+e-d$;
\item
There is a function $f:\{1,\ldots,d\}\rightarrow U$
such that $t\leq_{d^2}^n u+\sum_1^d f(i)$;
\item
For every $s\in T$ there is $z\in U$ with $s\leq_d^n z$.
\end{numlist}
\end{varitemize}
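On one-node trees the definition collapses to arithmetic on the labels: $\basictree{m}\leq_e^1\basictree{l}$ iff $m\leq l+e$, since at level $0$ conditions 2 and 3 hold vacuously and one may always take $d=0$.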
For every $e,n\in\mathbb{N}$ and for every trees $t$ and $u$ with $t\leq_e^n u$, we define
the natural number $\distpartwo{e}{n}{t}{u}$ as follows:
\condinc{
\begin{eqnarray*}
\distpartwo{e}{0}{t}{u}&=&0\\
\distpartwo{e}{n+1}{\mathit{empty}}{\mathit{empty}}&=&e+\distpartwo{e^2}{n}{\mathit{empty}}{\mathit{empty}}\\
\distpartwo{e}{n+1}{\mathit{empty}}{\node{m}{t}{T}}&=&
m+e+\max_f\{\distpartwo{(m+e)^2}{n}{\mathit{empty}}{t+\sum_{i=1}^{m+e}f(i)}\}\\
\distpartwo{e}{n+1}{\node{m}{t}{T}}{\mathit{empty}}&=&
e-m+\distpartwo{(e-m)^2}{n}{t}{\mathit{empty}}\\
\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}&=&
l+e-m+\max_f\{\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\}
\end{eqnarray*}}{
$$
\begin{array}{l}
\distpartwo{e}{0}{t}{u}=0;\\
\distpartwo{e}{n+1}{\mathit{empty}}{\mathit{empty}}=
e+\distpartwo{e^2}{n}{\mathit{empty}}{\mathit{empty}};\\
\distpartwo{e}{n+1}{\mathit{empty}}{\node{m}{t}{T}}=m+e+\max_f\{\distpartwo{(m+e)^2}{n}{\mathit{empty}}{t+\sum_{i=1}^{m+e}f(i)}\};\\
\distpartwo{e}{n+1}{\node{m}{t}{T}}{\mathit{empty}}=e-m+\distpartwo{(e-m)^2}{n}{t}{\mathit{empty}};\\
\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}=l+e-m+\max_f\{\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\};
\end{array}
$$}
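Continuing the one-node example above, $\distpartwo{e}{1}{\basictree{m}}{\basictree{l}}=l+e-m+\distpartwo{(l+e-m)^2}{0}{\mathit{empty}}{\mathit{empty}}=l+e-m$ (for $m\leq l+e$), since $\mathcal{D}^0$ is constantly $0$.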
If $t$ is a tree, then $|t|$ is the greatest integer appearing in $t$, i.e.
$|\mathit{empty}|=0$ and
$|\node{n}{t}{T}|=\max\{n,|t|,\max_{u\in T}|u|\}$.
The depth $\depth{t}$ of a tree $t$ is defined as follows:
$\depth{\mathit{empty}}=0$ and
\condinc{
$$\depth{\node{n}{t}{T}}=1+
\max\{\depth{t},\max_{u\in T}\depth{u}\}.$$}
{$\depth{\node{n}{t}{T}}=1+
\max\{\depth{t},\max_{u\in T}\depth{u}\}$.}
Given a tree $t\in |\mathcal{T}|$, we define $!t$ as the tree $\node{1}{\mathit{empty}}{\{t\}}$
and $\S t$ as the tree $\node{0}{t}{\{\mathit{empty}\}}$.
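Concretely, $!\basictree{n}=\node{1}{\mathit{empty}}{\{\basictree{n}\}}$ and $\S\basictree{n}=\node{0}{\basictree{n}}{\{\mathit{empty}\}}$, both of depth $2$: $!$ stores its argument in the set component, while $\S$ stores it in the tree component.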
\condinc{
In this context, a notion of isomorphism between trees
is needed: we say that trees $t$ and $u$ are \emph{isomorphic} and we
write $t\cong u$ iff for every $e,n\in\mathbb{N}$ and for every tree $v$
the following hold:
\begin{eqnarray*}
v\leq_e^n t&\Leftrightarrow&v\leq_e^n u\\
t\leq_e^n v&\Leftrightarrow&u\leq_e^n v\\
\distpartwo{e}{n}{v}{t}&=&\distpartwo{e}{n}{v}{u}\\
\distpartwo{e}{n}{t}{v}&=&\distpartwo{e}{n}{u}{v}
\end{eqnarray*}
\begin{lemma}
$\mathit{empty}\cong\basictree{0}$. Moreover,
for every tree $t$, $t+\mathit{empty}\cong t+\basictree{0}$.
\end{lemma}
\begin{proof}
We have to prove that for every $e,n\in\mathbb{N}$ and for every tree $v$:
\begin{eqnarray*}
v\leq_e^n \mathit{empty}&\Leftrightarrow&v\leq_e^n \basictree{0}\\
\mathit{empty}\leq_e^n v&\Leftrightarrow&\basictree{0}\leq_e^n v\\
\distpartwo{e}{n}{v}{\mathit{empty}}&=&\distpartwo{e}{n}{v}{\basictree{0}}\\
\distpartwo{e}{n}{\mathit{empty}}{v}&=&\distpartwo{e}{n}{\basictree{0}}{v}
\end{eqnarray*}
We go by induction on $n$,
considering the case where $n\geq 1$, since the base case
is trivial. First of all, observe that both
$\mathit{empty}\leq_e^{n+1} t$ and $\basictree{0}\leq_e^{n+1} t$
for every $t$. Moreover, $\mathit{empty}\leq_e^{n+1} \mathit{empty}$
and $\basictree{0}\leq_e^{n+1} \mathit{empty}$. Suppose now
that $\node{m}{t}{T}\leq_e^{n+1}\mathit{empty}$. This means
that there is $d$ such that
\begin{numlist}
\item
$m\leq e-d$;
\item
$t\leq_{d^2}^n\mathit{empty}$;
\item
for every $s\in T$, $s\leq_{d}^n\mathit{empty}$.
\end{numlist}
If we put $f(i)=\mathit{empty}$ for every $i$, we get
$t\leq_{d^2}^n\mathit{empty}+\sum_{i=1}^d f(i)$, which
yields $\node{m}{t}{T}\leq_e^{n+1}\basictree{0}$.
In the same way, we can prove that if $\node{m}{t}{T}\leq_e^{n+1}\basictree{0}$,
then $\node{m}{t}{T}\leq_e^{n+1}\mathit{empty}$.\par
We have:
\begin{eqnarray*}
\distpartwo{e}{n+1}{\mathit{empty}}{\mathit{empty}}&=&e+\distpartwo{e^2}{n}{\mathit{empty}}{\mathit{empty}}\\
\distpartwo{e}{n+1}{\mathit{empty}}{\basictree{0}}&=&e+\distpartwo{e^2}{n}{\mathit{empty}}{\mathit{empty}}\\
\distpartwo{e}{n+1}{\basictree{0}}{\mathit{empty}}&=&e+\distpartwo{e^2}{n}{\mathit{empty}}{\mathit{empty}}\\
\distpartwo{e}{n+1}{\mathit{empty}}{\node{m}{t}{T}}&=&m+e+\max_f\{\distpartwo{(m+e)^2}{n}{\mathit{empty}}
{t+\sum_{i=1}^{m+e} f(i)}\}\\
&=&\distpartwo{e}{n+1}{\basictree{0}}{\node{m}{t}{T}}\\
\distpartwo{e}{n+1}{\node{m}{t}{T}}{\mathit{empty}}&=&e-m+\distpartwo{(e-m)^2}{n}{t}{\mathit{empty}}\\
&=&\distpartwo{e}{n+1}{\node{m}{t}{T}}{\basictree{0}}\\
\end{eqnarray*}
Moreover, observe that
\begin{eqnarray*}
\mathit{empty}+\mathit{empty}=\mathit{empty}&\cong&\basictree{0}=\basictree{0}+\mathit{empty}\\
\node{m}{t}{T}+\mathit{empty}&\cong&\node{m}{t}{T}+\basictree{0}
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}
\begin{proposition}[Compatibility]\label{prop:compatibilitylight}
For every $n,e\in\mathbb{N}$, $\mathit{empty}\leq_e^n t$ for every $t$ and, moreover,
if $t\leq_e^n u$ then $t+v\leq_e^n u+v$ for every $t,u,v$.
\end{proposition}
\begin{proof}
$\mathit{empty}\leq_e^n t$ is trivial. The second statement
can be proved by induction on $n$. The base case is trivial.
In the inductive case, we can suppose all the involved trees
to be different from $\mathit{empty}$.
Suppose that $\node{m}{t}{T}\leq^{n+1}_e\node{l}{u}{U}$ and that the third
tree is $\node{k}{w}{V}$. We should prove that
$\node{m+k}{t+w}{T\cup V}\leq^{n+1}_e\node{l+k}{u+w}{U\cup V}$.
By hypothesis, there are $d\in\mathbb{N}$ and $f:\{1,\ldots,d\}\rightarrow U$ such that
\begin{eqnarray*}
m+k&\leq&(l+e)-d+k=(l+k+e)-d\\
t+w&\leq_{d^2}^n&u+\sum_{i=1}^d f(i)+w=u+w+\sum_{i=1}^{d}f(i)
\end{eqnarray*}
Moreover, for every $z\in T\cup V$ there certainly
exists $x\in U\cup V$ such that $z\leq^n_d x$.\hfill$\Box$
\end{proof}
\begin{proposition}[Transitivity]\label{prop:transitivitylight}
If $t \leq_e^n u\leq_d^n v$, then
$t\leq_{d+e}^n v$.
\end{proposition}
\begin{proof}
We go by induction on $n$. We can directly go to the
inductive case, since if $n=0$, then the thesis is trivial.
We can assume all the involved trees to be different from $\mathit{empty}$.
Let us suppose $\node{m}{t}{T}\leq_e^{n+1}\node{l}{u}{U}$
and $\node{l}{u}{U}\leq_d^{n+1}\node{k}{v}{V}$.
First of all, we have $m\leq l+e-c$ and $l\leq k+d-b$, which
yields $m\leq k+d-b+e-c=k+(d+e)-(b+c)$. Moreover, by hypothesis,
there are functions $f:\{1,\ldots,c\}\rightarrow U$ and
$g:\{1,\ldots,b\}\rightarrow V$ such that
\begin{eqnarray*}
t&\leq_{c^2}^n& u+\sum_{i=1}^c f(i)\\
u&\leq_{b^2}^n& v+\sum_{i=1}^b g(i)
\end{eqnarray*}
Therefore, by inductive hypothesis and by proposition~\ref{prop:compatibilitylight}:
\begin{eqnarray*}
t&\leq_{c^2+b^2}^n& v+\sum_{i=1}^c f(i)+\sum_{i=1}^b g(i)\\
&\leq_{bc}^n& v+\sum_{i=1}^c h(i)+\sum_{i=1}^b g(i)
\end{eqnarray*}
where $h:\{1,\ldots,c\}\rightarrow V$. We can then
find a function $k:\{1,\ldots,c+b\}\rightarrow V$ such
that
$$
t\leq_{(c+b)^2}^n v+\sum_{i=1}^{c+b}k(i).
$$
Finally, if $z\in T$ then we find $w\in U$ such that $z\leq_c^n w$. We
then find $x\in V$ such that $w\leq_b^n x$ and so $z\leq_{c+b}^n x$.\hfill$\Box$
\end{proof}
\begin{proposition}\label{prop:compatdifflight}
For every $n,e$ and for every $t,u,v$,
$\distpartwo{e}{n}{t}{u}\leq\distpartwo{e}{n}{t+v}{u+v}$
\end{proposition}
\begin{proof}
We can proceed by induction on $n$ and, again, the case $n=0$ is trivial.
In the inductive case, as usual, we can suppose all the involved trees to be
different from $\mathit{empty}$. We have
\begin{eqnarray*}
&&\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}\\
&=& l+e-m+\max_f\{\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\}\\
&=& l+e-m+\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\\
\end{eqnarray*}
where $f$ realizes the max. By induction hypothesis,
\begin{eqnarray*}
&&\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}\\
&\leq& (l+k)+e-(m+k)+\distpartwo{((l+k)+e-(m+k))^2}{n}{t+v}
{u+v+\sum_{i=1}^{(l+k)+e-(m+k)}f(i)}\\
&\leq& \distpartwo{e}{n+1}{\node{m}{t}{T}+\node{k}{v}{V}}{\node{l}{u}{U}+\node{k}{v}{V}}
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}
\begin{proposition}\label{prop:transdifflight}
$\distpartwo{e}{n}{t}{u}+\distpartwo{d}{n}{u}{v}\leq\distpartwo{e+d}{n}{t}{v}$
\end{proposition}
\begin{proof}
We can proceed by induction on $n$ and, again, the case $n=0$ is trivial.
In the inductive case, as usual, we can suppose all the involved trees to be
different from $\mathit{empty}$. Now
\begin{eqnarray*}
&&\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}
+\distpartwo{d}{n+1}{\node{l}{u}{U}}{\node{k}{v}{V}}\\
&=&l+e-m+\max_f\{\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\}\\
&&+k+d-l+\max_g\{\distpartwo{(k+d-l)^2}{n}{u}{v+\sum_{i=1}^{k+d-l}g(i)}\}\\
&=&k+(e+d)-m+\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\\
&& +\distpartwo{(k+d-l)^2}{n}{u}{v+\sum_{i=1}^{k+d-l}g(i)}\\
&=&k+(e+d)-m+\distpartwo{(l+e-m)^2}{n}{t}{u+\sum_{i=1}^{l+e-m}f(i)}\\
&& +\distpartwo{(k+d-l)^2}{n}{u+\sum_{i=1}^{l+e-m}f(i)}{v+\sum_{i=1}^{k+d-l}g(i)+\sum_{i=1}^{l+e-m}f(i)}\\
&\leq&k+(e+d)-m+\distpartwo{(l+e-m)^2+(k+d-l)^2}{n}{t}{v+\sum_{i=1}^{k+d-l}g(i)+\sum_{i=1}^{l+e-m}f(i)}
\end{eqnarray*}
A function $h:\{1,\ldots,l+e-m\}\rightarrow V$ such that
$\sum_{i=1}^{l+e-m}f(i)\leq^n_{(l+e-m)(k+d-l)}\sum_{i=1}^{l+e-m}h(i)$
can be easily defined, once we remember that
$\node{l}{u}{U}\leq_d^n\node{k}{v}{V}$. This yields
\begin{eqnarray*}
&&\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}
+\distpartwo{d}{n+1}{\node{l}{u}{U}}{\node{k}{v}{V}}\\
&\leq&k+(e+d)-m+\distpartwo{(l+e-m)^2+(k+d-l)^2}{n}{t}{v+\sum_{i=1}^{k+d-l}g(i)+\sum_{i=1}^{l+e-m}f(i)}\\
&&+\distpartwo{(l+e-m)(k+d-l)}{n}{v+\sum_{i=1}^{k+d-l}g(i)+\sum_{i=1}^{l+e-m}f(i)}
{v+\sum_{i=1}^{k+d-l}g(i)+\sum_{i=1}^{l+e-m}h(i)}\\
&\leq&k+(e+d)-m+\distpartwo{(k+(e+d)-m)^2}{n}{t}{v+\sum_{i=1}^{k+d-l}g(i)+\sum_{i=1}^{l+e-m}h(i)}\\
&\leq&k+(e+d)-m+\distpartwo{(k+(e+d)-m)^2}{n}{t}{v+\sum_{i=1}^{k+(d+e)-m}p(i)}
\end{eqnarray*}
where $p:\{1,\ldots,k+(d+e)-m\}\rightarrow V$,
$p(i)=h(i)$ if $i\leq l+e-m$ and $p(i)=g(i-(l+e-m))$
otherwise. But, then
\begin{eqnarray*}
&&\distpartwo{e}{n+1}{\node{m}{t}{T}}{\node{l}{u}{U}}
+\distpartwo{d}{n+1}{\node{l}{u}{U}}{\node{k}{v}{V}}\\
&\leq&\distpartwo{e+d}{n+1}{\node{m}{t}{T}}{\node{k}{v}{V}}
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}
\begin{lemma}\label{lemma:updepth}
For every $t,u,e$, if $t\leq_e^{\max\{\depth{t},\depth{u}\}}u$,
then for every $n>\max\{\depth{t},\depth{u}\}$, $t\leq_e^n u$
and $\distpartwo{e}{n}{t}{u}=\distpartwo{e}{\max\{\depth{t},\depth{u}\}}{t}{u}$.
\end{lemma}
\begin{proof}
A straightforward induction on $\max\{\depth{t},\depth{u}\}$.\hfill$\Box$
\end{proof}}{}
The binary relation
$\leq_{\mathcal{T}}$ on $|\mathcal{T}|$ is defined by putting $t\leq_{\mathcal{T}} u$
whenever $\depth{t}\leq\depth{u}$ and $t\leq_0^{\depth{u}} u$.
$\mathcal{D}_{\mathcal{T}}$ is defined by
letting $\distpar{\mathcal{T}}{t}{u}=\distpartwo{0}{\depth{u}}{t}{u}$.
\begin{lemma}
$\mathcal{T}=(|\mathcal{T}|,+,\leq_{\mathcal{T}},\mathcal{D}_{\mathcal{T}})$ is a resource monoid.
\end{lemma}
\condinc{\begin{proof}
$(|\mathcal{T}|,+)$ is certainly a commutative monoid. For every $t$,
$t\leq_{\mathcal{T}}t$, as can be proved by induction on $t$:
$\mathit{empty}\leq_0^0\mathit{empty}$ by definition and, moreover,
$t=\node{m}{u}{U}\leq_0^{\depth{t}} t$ because, by inductive
hypothesis, $u\leq_0^{\depth{u}}u$ which yields, by lemma~\ref{lemma:updepth},
$u\leq_0^{\depth{t}-1}u$. In the same way, we can prove
that, for every $v\in U$, $v\leq_0^{\depth{t}-1}v$. Now, suppose
$t\leq_{\mathcal{T}}u$ and $u\leq_{\mathcal{T}}v$. This means that
$t\leq_0^{\depth{u}}u$, $u\leq_0^{\depth{v}}v$,
$\depth{t}\leq\depth{u}$ and $\depth{u}\leq\depth{v}$.
We can then conclude that $\depth{t}\leq\depth{v}$,
that $t\leq_0^{\depth{v}}u$ (by lemma~\ref{lemma:updepth})
and $t\leq_0^{\depth{v}}v$ (by proposition~\ref{prop:transitivitylight}).
This in turn yields $t\leq_{\mathcal{T}}v$. Let us now prove compatibility:
suppose $t\leq_{\mathcal{T}}u$ and let $v$ be a tree. Then
$\depth{t}\leq\depth{u}$ and $t\leq_0^{\depth{u}}u$. If
$\depth{v}\leq\depth{u}$, then $\depth{u+v}=\depth{u}$ and
we can proceed by getting $t+v\leq_0^{\depth{u+v}}u+v$
(by proposition~\ref{prop:compatibilitylight}), which means
$t+v\leq_{\mathcal{T}}u+v$. If, on the other hand, $\depth{v}>\depth{u}$,
then we can first apply lemma~\ref{lemma:updepth} obtaining
$t\leq_0^{\depth{u+v}}u$ and then $t+v\leq_0^{\depth{u+v}}u+v$
(by proposition~\ref{prop:compatibilitylight}). By way
of lemma~\ref{lemma:updepth} and
propositions~\ref{prop:transdifflight} and~\ref{prop:compatdifflight}
we get
\begin{eqnarray*}
\distpar{\mathcal{T}}{t}{u}+\distpar{\mathcal{T}}{u}{v}&=&
\distpartwo{0}{\depth{u}}{t}{u}+\distpartwo{0}{\depth{v}}{u}{v}\\
&=&\distpartwo{0}{\depth{v}}{t}{u}+\distpartwo{0}{\depth{v}}{u}{v}\\
&\leq&\distpartwo{0}{\depth{v}}{t}{v}=\distpar{\mathcal{T}}{t}{v}\\
\distpar{\mathcal{T}}{t}{u}&=&
\distpartwo{0}{\depth{u}}{t}{u}\leq\distpartwo{0}{\depth{u+v}}{t}{u}\\
&\leq&\distpartwo{0}{\depth{u+v}}{t+v}{u+v}=\distpar{\mathcal{T}}{t+v}{u+v}\\
\end{eqnarray*}
This concludes the proof.\hfill$\Box$
\end{proof}}{}
A \emph{light length space} is a length space on the resource monoid
$\mathcal{T}=(|\mathcal{T}|,+,\leq_{\mathcal{T}},\mathcal{D}_{\mathcal{T}})$.
Given a light length space $A=(|A|,\vdashp{A})$, we can define:
\begin{varitemize}
\item
The light length space $!A=(|A|,\vdashp{!A})$ where
$\maj{!A}{t}{e}{a}$
iff $\maj{A}{u}{e}{a}$ and
$t\geq_{\mathcal{T}}!u$.
\item
The light length space $\S A=(|A|,\vdashp{\S A})$ where
$\maj{\S A}{t}{e}{a}$
iff $\maj{A}{u}{e}{a}$ and
$t\geq_{\mathcal{T}}\S u$.
\end{varitemize}
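For example, writing $\normpar{\mathcal{T}}{t}$ for $\distpar{\mathcal{T}}{\mathit{empty}}{t}$ (as for the other resource monoids), a direct unfolding of $\mathcal{D}$ gives
$$
\normpar{\mathcal{T}}{\S\basictree{n}}=n,
\qquad
\normpar{\mathcal{T}}{!\basictree{n}}=1+\distpartwo{1}{1}{\mathit{empty}}{\basictree{n}}=n+2,
$$
so $\S$ is free with respect to the norm, while $!$ adds a small constant overhead.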
The following result states the existence of certain morphisms
and will be useful when interpreting light affine logic.
\begin{lemma}[Basic Maps]
Given light length spaces $A,B$, there are morphisms:
$\mathit{contr}:!A\rightarrow!A\otimes!A$,
$\mathit{distr}:\S A\otimes\S B\rightarrow \S(A\otimes B)$
and $\mathit{derelict}:!A\rightarrow \S A$ where
$\mathit{contr}(a)=(a,a)$ and $\mathit{distr}(a,b)=(a,b)$
and $\mathit{derelict}(a)=a$.
\end{lemma}
\begin{proof}
We know that $\{e_\mathit{contr}\}(d)$ takes time at most
$|d|+p$, where $p$ is a constant. Then, let $t,u\in |\mathcal{T}|$ be
such that $\normpar{\mathcal{T}}{t}\geq p+|e_\mathit{contr}|$,
$\normpar{\mathcal{T}}{u}\geq\mathit{cp}$. Define $t_\mathit{contr}$
to be $t+u+\basictree{2}$. Clearly, $\normpar{\mathcal{T}}{t_\mathit{contr}}\geq |e_\mathit{contr}|$.
Now, let $\maj{!A}{v}{d}{a}$. This means that
$v\geq_{\mathcal{T}}!w$ where $\maj{A}{w}{d}{a}$.
Then:
\begin{eqnarray*}
u+!w+!w&\geq_{\mathcal{T}}& !w+!w\\
\normpar{\mathcal{T}}{u+!w+!w}&\geq&\normpar{\mathcal{T}}{u}+\normpar{\mathcal{T}}{!w}+\normpar{\mathcal{T}}{!w}\\
&\geq&\mathit{cp}+\normpar{\mathcal{T}}{!w}+\normpar{\mathcal{T}}{!w}\geq |\langle d,d\rangle|
\end{eqnarray*}
This implies $\maj{!A\otimes !A}{u+!w+!w}{\langle d,d\rangle}{(a,a)}$.
Moreover, $u+!w+!w\leq_{\mathcal{T}}u+!w+\basictree{1}\leq_{\mathcal{T}}v+t_\mathit{contr}$.
Finally,
\begin{eqnarray*}
\timetm{e_\mathit{contr}}{d}&\leq&|d|+p\leq \normpar{\mathcal{T}}{w}+\normpar{\mathcal{T}}{t}\\
&\leq& \distpar{\mathcal{T}}{u+!w+!w}{!w+t_\mathit{contr}}\leq \distpar{\mathcal{T}}{u+!w+!w}{v+t_\mathit{contr}}
\end{eqnarray*}
This proves $\mathit{contr}$ to be a morphism.\par
Let $e_\mathit{distr}=e_\mathit{id}$. We know that
$\{e_\mathit{id}\}(d)$ takes constant time, say at
most $p$. Then, let $t,u\in |\mathcal{T}|$ be
such that $\normpar{\mathcal{T}}{t}\geq p+|e_\mathit{distr}|$,
$\normpar{\mathcal{T}}{u}\geq\mathit{cp}$. $t_\mathit{distr}$ is then
defined as $t+\S u$.
Now, let $\maj{\S A\otimes \S B}{v}{\langle d,c\rangle}{(a,b)}$.
This implies that $v\geq \S w+\S x$, where
$\maj{A}{w}{d}{a}$ and $\maj{B}{x}{c}{b}$. This in turn
means that $\maj{A\otimes B}{w+x+u}{\langle d,c\rangle}{(a,b)}$
and $\maj{\S(A\otimes B)}{\S(w+x+u)}{\langle d,c\rangle}{(a,b)}$.
Moreover
$$
\S(w+x+u)=\S w+\S x+\S u\leq_{\mathcal{T}} v+t_\mathit{distr}
$$
Finally:
\begin{eqnarray*}
\timetm{e_\mathit{distr}}{\langle d,c\rangle}&\leq&p\leq\normpar{\mathcal{T}}{t}\\
&\leq& \distpar{\mathcal{T}}{0}{t}+\distpar{\mathcal{T}}{\S(w+x+u)}{v+\S u}\leq\distpar{\mathcal{T}}{\S(w+x+u)}{v+t_\mathit{distr}}
\end{eqnarray*}
This proves $\mathit{distr}$ to be a morphism.\par
Let $e_\mathit{derelict}=e_\mathit{id}$. We know that
$\{e_\mathit{derelict}\}(d)$ takes constant time,
say at most $p$. Then, let $t_\mathit{derelict}\in |\mathcal{T}|$ be
such that $\normpar{\mathcal{T}}{t_\mathit{derelict}}\geq p+|e_\mathit{derelict}|$.
Now, let $\maj{!A}{v}{d}{a}$.
This means that $v\geq !w$, where
$\maj{A}{w}{d}{a}$. This in turn
means that $\maj{\S A}{\S w}{d}{a}$.
Moreover
$$
\S w\leq_{\mathcal{T}} !w\leq_{\mathcal{T}} !w+t_\mathit{derelict}.
$$
Finally:
\begin{eqnarray*}
\timetm{e_\mathit{derelict}}{d}&\leq&p\leq\normpar{\mathcal{T}}{t_\mathit{derelict}}\\
&\leq& \distpar{\mathcal{T}}{0}{t_\mathit{derelict}}+\distpar{\mathcal{T}}{\S w}{!w}\\
&\leq& \distpar{\mathcal{T}}{\S w}{!w+t_\mathit{derelict}}
\end{eqnarray*}
This proves $\mathit{derelict}$ to be a
morphism.\hfill$\Box$
\end{proof}
\begin{lemma}\label{lemma:lifting}
For every $t\in |\mathcal{T}|$, there is $u$ such that,
for every $v$, $!(v+t)\leq_{\mathcal{T}}!v+u$.
\end{lemma}
\condinc{\begin{proof}
First of all we will prove the following statement by induction
on $t$: for every $t$, there is an integer $\overline{t}$ such
that for every $u$, $u+t\leq_{\overline{t}}^{\max\{\depth{u},\depth{t}\}}u$.
If $t=\mathit{empty}$, we can choose $\overline{t}$ to be just $0$,
since $u\leq_0^n u$ for every $u$. If $t=\node{m}{v}{V}$,
then we put $\overline{t}=m+\overline{v}+\sum_{w\in V}\overline{w}$.
Let $u$ be an arbitrary tree and let us assume, without loss of
generality, that $u=\node{l}{w}{W}$. Let $d=\overline{v}+\sum_{w\in V}\overline{w}$.
We get
\begin{eqnarray*}
l+m&\leq&l+m+(\overline{v}+\sum_{w\in V}\overline{w})
- (\overline{v}+\sum_{w\in V}\overline{w})\\
&=& l+\overline{t}-d\\
v+w&\leq_{\overline{v}}^{\max\{\depth{v},\depth{w}\}}&w\\
&\leq_0^{\max\{\depth{v},\depth{w}\}}&w+\sum_{i=1}^d\mathit{empty}\\
\forall x\in V. x&\leq_{\overline{x}}^{\depth{x}}&\mathit{empty}\\
\forall x\in W. x&\leq_0^{\depth{x}}&x
\end{eqnarray*}
Using known results, we can rewrite these inequalities as
follows
\begin{eqnarray*}
l+m&\leq&l+\overline{t}-d\\
v+w&\leq_{d^2}^{\max\{\depth{t},\depth{u}\}-1}&w+\sum_{i=1}^d\mathit{empty}\\
\forall x\in V. x&\leq_{d}^{\max\{\depth{t},\depth{u}\}-1}&\mathit{empty}\\
\forall x\in W. x&\leq_{d}^{\max\{\depth{t},\depth{u}\}-1}&x
\end{eqnarray*}
This yields $u+t\leq_{\overline{t}}^{\max\{\depth{u},\depth{t}\}}u$.\par
Let us now go back to the lemma we are proving. We will now prove that
for every $t$, any tree $u=\node{\overline{t}}{w}{U}$ such that
$\depth{u}=\depth{t}+1$ satisfies the thesis. Indeed, if we
put $d=\overline{t}$ and $n=\depth{v+t}$, we get:
\begin{eqnarray*}
1&\leq&\overline{t}-d+1\\
\mathit{empty}&\leq_{d^2}^n& u\\
v+t&\leq_d^n&v
\end{eqnarray*}
This, in turn, implies $!(v+t)\leq_0^{n+1}!v+u$, which
yields $!(v+t)\leq_{\mathcal{T}}!v+u$.\hfill$\Box$
\end{proof}}{}
\begin{lemma}[Functoriality]
If $f:\morphism{A}{e}{\varphi}{B}$, then there are $\psi,\theta$ such
that $f:\morphism{!A}{e}{\psi}{!B}$ and $f:\morphism{\S A}{e}{\theta}{\S B}$.
\end{lemma}
\condinc{\begin{proof}
Let $\xi$ be the tree obtained from $\varphi$ by lemma~\ref{lemma:lifting} and
put $\psi=\xi+\varphi+\basictree{1}$. Suppose that $\maj{!A}{t}{d}{a}$. Then $t\geq !u$,
where $\maj{A}{u}{d}{a}$. Observe that there must be $v,c$ such that
$\maj{B}{v}{c}{f(a)}$, $v\leq_{\mathcal{T}} u+\varphi$ and
$\timetm{e}{d}\leq\distpar{\mathcal{T}}{v}{u+\varphi}$.
But then $\maj{!B}{!v}{c}{f(a)}$ and moreover
\begin{eqnarray*}
!v&\leq_{\mathcal{T}}&!(u+\varphi)\leq_{\mathcal{T}}!u+\xi\leq_{\mathcal{T}}t+\psi\\
\timetm{e}{d}&\leq&\distpar{\mathcal{T}}{v}{u+\varphi}\leq\distpar{\mathcal{T}}{!v}{!(u+\varphi)+\basictree{1}}\\
&\leq&\distpar{\mathcal{T}}{!v}{!u+\xi+\basictree{1}}\leq\distpar{\mathcal{T}}{!v}{t+\psi}
\end{eqnarray*}
This means that $f:\morphism{!A}{e}{\psi}{!B}$. Now, let $\theta$
be $\S\varphi$ and suppose $\maj{\S A}{t}{d}{a}$. Then $t\geq \S u$,
where $\maj{A}{u}{d}{a}$. Observe that there must be $v,c$ such that
$\maj{B}{v}{c}{f(a)}$, $v\leq_{\mathcal{T}} u+\varphi$ and
$\timetm{e}{d}\leq\distpar{\mathcal{T}}{v}{u+\varphi}$.
But then $\maj{\S B}{\S v}{c}{f(a)}$ and, moreover
\begin{eqnarray*}
\S v&\leq_{\mathcal{T}}&\S(u+\varphi)=\S u+\S\varphi\leq_{\mathcal{T}}t+\theta\\
\timetm{e}{d}&\leq&\distpar{\mathcal{T}}{v}{u+\varphi}\leq\distpar{\mathcal{T}}{\S v}{\S(u+\varphi)}\\
&\leq&\distpar{\mathcal{T}}{\S v}{\S u+\S\varphi}\leq\distpar{\mathcal{T}}{\S v}{t+\theta}
\end{eqnarray*}
This means that $f:\morphism{\S A}{e}{\theta}{\S B}$.\hfill$\Box$
\end{proof}}{}
Now, we can prove a polynomial bound on $\normpar{\mathcal{T}}{t}$:
\begin{proposition}
For every $n\in\mathbb{N}$ there is a polynomial $p_n:\mathbb{N}\rightarrow\mathbb{N}$ such
that $\normpar{\mathcal{T}}{t}\leq p_{\depth{t}}(|t|)$.
\end{proposition}
\condinc{\begin{proof}
We prove a stronger statement by induction on $n$: for every $n\in\mathbb{N}$
there is a polynomial $q_n:\mathbb{N}^2\rightarrow\mathbb{N}$ such that for every $t,e$,
$\distpartwo{e}{n}{\mathit{empty}}{t}\leq q_n(|t|,e)$. First of all, we
know that $\distpartwo{e}{0}{\mathit{empty}}{t}=0$, so $q_0$ is just the
function which always returns $0$. $q_{n+1}$ is defined from $q_n$ as
follows: $q_{n+1}(x,y)=x+y+q_n(x(x+y+1),(x+y)^2)$.
Indeed:
\begin{eqnarray*}
\distpartwo{e}{n+1}{\mathit{empty}}{\mathit{empty}}&=&e+\distpartwo{e^2}{n}{\mathit{empty}}{\mathit{empty}}\\
&\leq&e+q_n(0,e^2)\leq e+|\mathit{empty}|\\
&&+q_n(|\mathit{empty}|(|\mathit{empty}|+e+1),(|\mathit{empty}|+e)^2)\\
&=&q_{n+1}(|\mathit{empty}|,e)\\
\distpartwo{e}{n+1}{\mathit{empty}}{\node{m}{t}{T}}&=&
m+e+\max_f\{\distpartwo{(m+e)^2}{n}{\mathit{empty}}{t+\sum_{i=1}^{m+e}f(i)}\}\\
&\leq&m+e+q_n((m+e+1)(|\node{m}{t}{T}|),(m+e)^2)\\
&\leq&|\node{m}{t}{T}|+e\\
&&+q_n((|\node{m}{t}{T}|+e+1)(|\node{m}{t}{T}|),(|\node{m}{t}{T}|+e)^2)\\
&\leq&q_{n+1}(|\node{m}{t}{T}|,e)
\end{eqnarray*}
At this point it suffices to put $p_n(x)=q_n(x,0)$.\hfill$\Box$
\end{proof}}{}
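To make the recurrence concrete, it can be evaluated directly. The following minimal sketch (ours, in Python; not part of the formal development) computes $q_n$ and $p_n(x)=q_n(x,0)$:
{\small
\begin{verbatim}
# Sketch: the bounding polynomials from the proof above.
# q_0(x,y) = 0 and q_{n+1}(x,y) = x + y + q_n(x*(x+y+1), (x+y)^2).
def q(n, x, y):
    if n == 0:
        return 0
    return x + y + q(n - 1, x * (x + y + 1), (x + y) ** 2)

def p(n, x):          # p_n(x) = q_n(x, 0)
    return q(n, x, 0)

print([p(2, x) for x in (1, 2, 4, 8)])   # polynomial growth in x
print([p(n, 2) for n in range(5)])       # degree grows with n
\end{verbatim}}
For fixed $n$ the values grow polynomially in $x$, while the degree grows with $n$, in accordance with the remark below.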
As for {\sf EAL} and {\sf SAL}, we cannot claim $(n,m)\mapsto p_n(m)$ to be
a polynomial. However, this is not a problem, since we will be
able to majorize binary strings by trees with bounded depth (cf.\ Remark~\ref{remark:bounddepth}).
\subsection{Interpreting Light Affine Logic}
As for the $!$ modality, we put $\reasem{\S A}{\eta}=\S\reasem{A}{\eta}$.
\begin{theorem}
Light length spaces form a model of {\sf LAL}.
\end{theorem}
Binary lists can be represented in {\sf LAL}\ as cut-free proofs
with conclusion
$$
\mathit{List}_\LAL\equiv\forall\alpha.!(\alpha\multimap\alpha)\multimap !(\alpha\multimap\alpha)
\multimap\S(\alpha\multimap\alpha)
$$
\begin{corollary}[Soundness]
Let $\pi$ be an {\sf LAL}\ proof with conclusion $\vdash\{!,\S\}^j\mathit{List}_\LAL\multimap\{!,\S\}^k\mathit{List}_\LAL$
and let $f:B\rightarrow B$ be the function induced by $\reasem{\pi}{}$.
Then $f$ is computable in polynomial time.
\end{corollary}}
{
\section{Other Light Logics}\label{sect:oll}
Girard and Lafont have proposed refinements of {\sf EAL}, namely Light
Linear Logic ({\sf LLL}) and Soft Linear Logic ({\sf SLL}), which capture
polynomial time. We have succeeded in defining appropriate resource
monoids for affine variants of these logics, too. In this way
we can obtain proofs of ``polytime soundness'' by performing the
same realizability interpretation as in the previous section. These
instantiations of our framework are considerably more technical and
difficult to find, but they share the ideas of the {\sf EAL}\ interpretation, which
is why we have decided not to include them in this Extended
Abstract. The interested reader may consult the full paper (or the
appendix).
In the following section, we will elaborate in some more detail a
rather different instantiation of our method.
}
\section{Interpreting {\sf LFPL}}\label{sect:lfpl}
In~\cite{hofmann99lics} one of us introduced another
language, {\sf LFPL}, with the property that all definable functions on
natural numbers are polynomial time computable. The key difference
between {\sf LFPL}\ and other systems is that a function defined by iteration
or recursion is not marked as such using modalities or similar and can
therefore be used as a step function of subsequent recursive
definitions.
In this section we will describe a resource monoid $\mathcal{M}$ for {\sf LFPL},
which will provide a proof of polytime soundness for
that system. This is essentially the same as the proof from~\cite{hofmann99lics},
but more structured and, hopefully, easier to understand.
The new approach also yields some new results, namely the
justification of second-order quantification, a !-modality, and a new
type of binary trees based on cartesian product which allows
alternative but not simultaneous access to subtrees.
\subsection{Overview of {\sf LFPL}}
{\sf LFPL}\ is intuitionistic, affine linear logic, i.e., a linear functional
language with $\otimes, \multimap, +, \times$. Unlike in the original
presentation we also add polymorphic quantification here. In addition,
{\sf LFPL}\ has basic types for inductive datatypes, for example unary and
binary natural numbers, lists, and trees. There is one more basic
type, namely $\Diamond$, the resource type.
The recursive constructors for the inductive datatypes each take an additional
argument of type $\Diamond$, which prevents one from invoking more
constructor functions than one possesses diamonds.
Dually to the constructors one has iteration principles
which make the $\Diamond$-resource available in the branches of a
recursive definition. For example, the type $T(X)$ of $X$-labelled
binary trees has constructors $\mathbf{leaf}:T(X)$ and
$\mathbf{node}:\Diamond\multimap X\multimap T(X)\multimap T(X)\multimap
T(X)$. The iteration principle allows one to define a function
$T(X)\multimap A$ from closed terms of types $A$ and
$\Diamond\multimap X\multimap A\multimap A\multimap A$.
In this paper we ``internalise'' the assumption of closedness using a
$!$-modality.
Using this iteration principle one can encode recursive definitions by
ML-style pattern matching provided recursive calls are made on
structurally smaller arguments only.
Here is a fragment of an {\sf LFPL}\ program for ``treesort'' written in
functional notation: the additional arguments of type $\Diamond$ are
supplied using @. Note that the insert function takes an extra
argument of type $\Diamond$.
{\small
\begin{verbatim}
let insert x t d = match t with
Leaf -> Node(x,Leaf,Leaf)@d
| Node(y,l,r)@d' ->
if x<=y then Node(y,insert x l d,r)@d'
else Node(y,l,insert x r d)@d'
let extract t = match t with
Leaf -> nil
| Node(x,l,r)@d ->
append (extract l) (cons(x,extract r)@d)
\end{verbatim}}
\subsection{A Resource Monoid for {\sf LFPL}}
The underlying set of $\mathcal{M}$ is the set of pairs $(l,p)$ where
$l\in\mathbb{N}$ is a natural number and $p$ is a monotone polynomial
in a single variable $x$. The addition is defined by
$(l_1,p_1)+(l_2,p_2)=(l_1+l_2,p_1+p_2)$, accordingly, the neutral
element is $0=(0,0)$. We have a submonoid $\mathcal{M}_0=\{(l,p)\in
\mathcal{M}\mid l=0\}$.
To define the ordering we set $(l_1,p_1)\leq(l_2,p_2)$ iff $l_1\leq
l_2$ and $(p_2-p_1)(x)$ is monotone and nonnegative for all $x\geq
l_2$. For example, we have $(1,42x)\leq (42,x^2)$, but
$(1,42x)\not\leq (41,x^2)$. The distance function is defined by
\[
\distpar{\mathcal{M}}{(l_1,p_1)}{(l_2,p_2)}=(p_2-p_1)(l_2)
\]
We can pad elements of $\mathcal{M}$ by adding a constant to the
polynomial. The following is now obvious.
\begin{lemma}
Both $\mathcal{M}$ and $\mathcal{M}_0$ are resource monoids.
\end{lemma}
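To illustrate these definitions, here is a small executable sketch (ours, in Python; polynomials are represented by coefficient lists, and monotonicity of $p_2-p_1$ is only probed at finitely many points, so this is a falsifiable check rather than a decision procedure). It reproduces the two example (in)equalities given above:
{\small
\begin{verbatim}
# Sketch of the resource monoid M: elements (l, p), with p a
# monotone polynomial given by its coefficient list [a0, a1, ...].
def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0]*(n-len(p)), q + [0]*(n-len(q))
    return [a + b for a, b in zip(p, q)]

def peval(p, x):
    return sum(a * x**i for i, a in enumerate(p))

def add(m1, m2):   # (l1,p1)+(l2,p2) = (l1+l2, p1+p2)
    return (m1[0] + m2[0], padd(m1[1], m2[1]))

def leq(m1, m2, probe=1000):
    (l1, p1), (l2, p2) = m1, m2
    if l1 > l2:
        return False
    d = [peval(p2, x) - peval(p1, x) for x in range(l2, l2 + probe)]
    return d[0] >= 0 and all(b >= a for a, b in zip(d, d[1:]))

def dist(m1, m2):  # D(m1, m2) = (p2 - p1)(l2)
    return peval(m2[1], m2[0]) - peval(m1[1], m2[0])

print(leq((1, [0, 42]), (42, [0, 0, 1])))  # True:  (1,42x) <= (42,x^2)
print(leq((1, [0, 42]), (41, [0, 0, 1])))  # False for (41,x^2)
\end{verbatim}}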
A simple inspection of the proofs in Section~\ref{bloed} shows that
the realisers for all maps can be chosen from $\mathcal{M}_0$. This is
actually the case for an arbitrary submonoid of a resource monoid. We
note that realisers of elements may nevertheless be drawn from all of
$\mathcal{M}$. We are thus led to the following definition.
\begin{definition}
An {\sf LFPL}-space is a length space over the resource monoid $\mathcal{M}$. A
morphism from an {\sf LFPL}\ length space $A$ to $B$ is a morphism between
length spaces which admits a majorizer from $\mathcal{M}_0$.
\end{definition}
\begin{proposition}
{\sf LFPL}\ length spaces with their maps form a symmetric monoidal
closed category.
\end{proposition}
\begin{definition}
Let $A$ be an {\sf LFPL}\ space and $n\in\mathbb{N}$. The {\sf LFPL}\ space $A^n$ is
defined by $|A^n|=|A|$ and $\maj{A^n}{\alpha}{e}{a}$ iff $\alpha\geq
(2n-1)\cdot\beta$ for some $\beta$ such that $\maj{A}{\beta}{e}{a}$.
\end{definition}
So, $A^n$ corresponds to the subset of $A\otimes\dots\otimes A$
consisting of those tuples with all $n$ components equal to each
other. The factor $2n-1$ (``modified difference'') instead of just $n$ is needed in order to justify the linear time needed to compute the copying involved in the obvious morphism from $A^{m+n}$ to $A^m\otimes A^n$.
Let $I$ be an index set and $A_i, B_i$ be $I$-indexed families of {\sf LFPL}\
spaces. A uniform map from $(A_i)_i$ to $(B_i)_i$ consists of a family
of maps $f_i :A_i\rightarrow B_i$ such that there exist $e,\alpha$
with the property that $\maj{}{\alpha}{e}{f_i}$ for all $i$. Recall
that, in particular, the denotations of proofs with free type
variables are uniform maps.
\begin{proposition} For each $A$ there is a uniform (in $m,n$) map $A^{m+n}\rightarrow
A^m\otimes A^n$. Moreover, $A^1$ is isomorphic to $A$.
\end{proposition}
The {\sf LFPL}-space $\Diamond$ is defined by $|\Diamond| = \{\Diamond\}$ and
by putting $\maj{\Diamond}{\alpha}{d}{\Diamond}$ iff $\alpha\geq (1,0)$.
For each {\sf LFPL}-space $A$ we define the {\sf LFPL}-space $!A$ by $|!A|=|A|$ and $\maj{!A}{\alpha}{t}{a}$ if there exists
$\alpha'=(0,p)\in\mathcal{M}_0$ with $\maj{A}{\alpha'}{t}{a}$ and
$\alpha\geq (0,(x+1)p)$.
\begin{proposition}\label{bangprop}
There is an {\sf LFPL}\ space $\Diamond$ and for each {\sf LFPL}\ space $A$ there
is an {\sf LFPL}\ space $!A$ with the following properties:
\begin{varitemize}
\item $|!A| = |A|$.
\item If $f:A\rightarrow B$ then $f:!A\rightarrow!B$.
\item $!(A\otimes B) \simeq !A\otimes !B$
\item The obvious functions
$!A\otimes \Diamond^{n}\rightarrow A^{n}\otimes
\Diamond^{n}$ are a uniform map.
\end{varitemize}
The last property means intuitively that with $n$ ``diamonds'' we can
extract $n$ copies from an element of type $!A$ and get the $n$
``diamonds'' back for later use.
\end{proposition}
\condinc{\begin{proof}
We have $(0+1)p(0)=p(0)\geq |t|$. Compatibility with $\otimes$ is obvious.
For functoriality assume that $\maj{}{\phi}{e}{f}$ where
$\phi=(0,q)\in\mathcal{M}_0$. We claim that $\maj{}{(0,(x+1)q)}{e}{f}$
\emph{qua} morphism from $!A$ to $!B$. Suppose that
$\maj{!A}{\alpha}{t}{a}$ where $\alpha\geq (0,(x+1)p)$ and
$\maj{A}{(0,p)}{t}{a}$. Since $f$ is a morphism, we obtain $v,\beta$ such
that $\maj{B}{\beta}{v}{f(a)}$ and $\beta\leq \phi+(0,p)$. This
implies that $\beta\in\mathcal{M}_0$ as well, say, $\beta=(0,r)$ where
$r\leq p+q$. We also know that $r(0)\geq |v|$ by the definition of
length spaces. Now $\maj{!B}{(0,(x+1)r)}{v}{f(a)}$. On the other hand
$(x+1)r\leq (x+1)(p+q)$. The resource bounds are obvious.
Finally, consider the required morphism $!A\otimes
\Diamond^{n}\rightarrow A^{n}\otimes \Diamond^{n}$.
Clearly, it may be realised by the identity; we claim that $0$
can serve as a majoriser. Indeed, a
majoriser of $(a,d)\in |!A \otimes \Diamond^{n}|$ is of
the form $(2n-1,(x+1)p)$ where $(0,p)$ majorises $a$ in $A$. Now,
$(2n-1,(2n-1)p)$ is a majoriser of $(a,d)$ in $A^n\otimes \Diamond^n$. But
$((x+1)-(2n-1))p$ is monotone and nonnegative above $2n-1$. \hfill$\Box$
\end{proof}}{The proof of the last assertion relies on the fact that $(2n-1,(2n-1)p)\leq (2n-1,(x+1)p)$ for arbitrary $n$.}
\paragraph{Remark}
We remark at this point that we obtain an alternative resource
monoid $\mathcal{M}_S$ for {\sf SAL}\ whose underlying set and ordering are as in
$\mathcal{M}$, but whose addition is given by
$(l_1,p_1)+(l_2,p_2)=(\max(l_1,l_2),p_1+p_2)$. Length spaces over
$\mathcal{M}_S$ with maps majorised by $\mathcal{M}_S$
(not $\mathcal{M}_0$) then also form a
sound model of {\sf SAL}. This points to a close relationship between
{\sf LFPL}\ and {\sf SAL}\ and also shows a certain tradeoff between the two
systems. The slightly more complex model is needed for {\sf LFPL}\
since in {\sf LFPL}\ the C-rule of {\sf SAL}\ is, so to say, internalised in the form
of the uniform map $!A\otimes \Diamond^n\rightarrow
A^n\otimes\Diamond^n$. Notice that {\sf SAL}'s map $!A\rightarrow A^n$
cannot be uniform. This uniformity of {\sf LFPL}\ allows for an internal
implementation of datatypes and recursion as we now show.
\begin{definition}
Let $T_i$ be a family of {\sf LFPL}\ spaces such that $|T_i| = T$ independent
of $i$. The {\sf LFPL}\ space $\exists i.T_i$ is defined by $|\exists
i.T_i|=|T|$ and $\maj{\exists i.T_i}{\alpha}{e}{t}$ if
$\maj{T_i}{\alpha}{e}{t}$ for some $i$.
\end{definition}
Note that if we have a uniform family of maps $T_i\rightarrow U$ where
$U$ does not depend on $i$ then we obtain a map $\exists i.T_i
\rightarrow U$ (existential elimination).
Conversely, if we have a uniform family of maps $U_i\rightarrow
V_{f(i)}$ then we get a uniform family of maps $U_i\rightarrow \exists
j.V_j$ (existential introduction). We will use an informal ``internal
language'' to denote uniform maps which when formalised would amount
to an extension of {\sf LFPL}\ with indexed type dependency in the style of
Dependent ML \cite{xi99popl}.
\subsection{Inductive Datatypes}
In order to interpret unary natural numbers, we define $N = \exists
n.N_n$ where
\[
N_n = \Diamond^{n}\otimes \forall A.(A\multimap A)^{n}\multimap A\multimap A
\]
We can internally define a successor map $\Diamond\otimes N_n\rightarrow
N_{n+1}$ as follows: starting from $d:\Diamond, \vec
d:\Diamond^{n}$ and $f:\forall A.(A\multimap A)^{n}\multimap A\multimap A$
we obtain a member of $\Diamond^{n+1}$ (from $d$ and $\vec d$) and we
define $f':\forall A.(A\multimap A)^{n+1}\multimap A\multimap A$ as $\lambda
(u^{A{\multimap}A},\vec u^{(A{\multimap}A)^{n}}).\lambda z^A.u(f\ \vec u\
z)$. From this, we obtain a map $\Diamond\otimes N\rightarrow N$ by
existential introduction and elimination.
Of course, we also have a constant zero $I\rightarrow N_0$ yielding a
map $I\rightarrow N$ by existential introduction.
Finally, we can define an iteration map
\[
!(\Diamond\otimes A \multimap A) \multimap N_n\multimap A\multimap A
\]
as follows:
Given $t: !(\Diamond\otimes A\multimap A)$ and $(\vec
d,f)\in N_n$ we unpack $t$ using Proposition~\ref{bangprop} to
yield $t'\in ((\Diamond\otimes A)\multimap A)^n$ as well as
$\vec d\in\Diamond^{n}$. Feeding these ``diamonds'' one by one
to the components of $t'$ we obtain $t''\in (A\multimap A)^{n}$.
But then $f\ t''$ yields the required element of $A\multimap A$.
Existential elimination now yields a single map
\[
!(\Diamond\otimes A \multimap A) \multimap N\multimap A\multimap A
\]
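The bookkeeping behind this construction can be mimicked concretely. The following sketch (ours, in Python) models only the counting of diamonds, not the majorizers: an element of $N_n$ is represented as $n$ diamonds together with a Church-style iterator that consumes one step function per diamond.
{\small
\begin{verbatim}
def zero():
    return ([], lambda steps, z: z)

def succ(d, n):                 # Diamond (x) N_n -> N_{n+1}
    ds, f = n
    return (ds + [d],
            lambda steps, z: steps[0](f(steps[1:], z)))

def iterate(step, n, z):        # step plays !(Diamond (x) A -o A)
    ds, f = n
    # feeding the diamonds one by one yields t'' in (A -o A)^n
    unary = [lambda a, d=d: step(d, a) for d in ds]
    return f(unary, z)

three = succ("d2", succ("d1", succ("d0", zero())))
print(iterate(lambda d, a: a + 1, three, 0))   # prints 3
\end{verbatim}}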
\condinc{Similarly, we can interpret binary $X$-labelled trees using a type
family
\[
T_n = \Diamond^{n}\otimes \forall A.(X\multimap A \multimap
A\multimap A)^{n} \multimap A^{n+1}\multimap A
\]
and defining trees proper as $\exists n.T_{n}$. We get maps
$\mathbf{leaf}:T_{0}$ and $\mathbf{node}:\Diamond\otimes X\otimes
T_{n_1}\otimes T_{n_2}\rightarrow T_{n_1+n_2+1}$ and an analogous
iteration construct.
Finally, and this goes beyond what was already known, we can define
``lazy trees'' using cartesian product (also known as additive
conjunction).
First, we recall from ordinary affine linear logic that an additive
conjunction can be defined as
\[
A \times B = \forall C.(C\multimap A)\otimes (C\multimap B) \otimes C
\]
The first projection map $A\times B\rightarrow A$ is given internally
by $\lambda (f^{C{\multimap}A},g^{C{\multimap}B},c^C).f\ c$. Analogously,
we have a second projection. Given maps $f:C\rightarrow A$ and $g:
C\rightarrow B$ we obtain a map $\langle f,g\rangle : C\rightarrow
A\times B$ internally as $\lambda c^C.(f,g,c)$.
Now, following the pattern of the binary trees $T_{n}$ above, we
define another family
\[
T_{d}^{\times} = \Diamond^{d}\otimes \forall A.(X\multimap
(A\times A)\multimap A)^{d} \multimap A\multimap A
\]
and $T^{\times}=\exists d.T^{\times}_{d}$. We get
maps $\mathbf{leaf}:\Diamond\rightarrow T^{\times}_{0}$
and $\mathbf{node}:\Diamond\otimes X\otimes (T^{\times}_{d_1}\times
T^{\times}_{d_2})\rightarrow T^{\times}_{1+\max(d_1,d_2)}$ as well as an analogous
iteration construct.
We describe in detail the construction of the ``node'' map which is
not entirely straightforward. First, we note that for any length
spaces $A, B$ and $m,n$ the obvious map $(\Diamond^m\otimes A)\times
(\Diamond^n\otimes B)\rightarrow \Diamond^{\max(m,n)}\otimes (A\times
B)$ is a morphism. This is because a majoriser of an element of
$(\Diamond^m\otimes A)\times (\Diamond^n\otimes B)$ must be of the
form $(k,p)$ where $k\geq\max(m,n)$ in view of the existence of the
projection maps.
Now suppose we are given (internally) $d:\Diamond, x:X,
\mathit{lr}:T^\times _{d_1}\times T^\times_{d_2}$. Using the just
described morphism we decompose $\mathit{lr}$ into $\vec
d:\Diamond^{\max(d_1,d_2)}$ and $\mathit{lr}': W_{d_1}\times W_{d_2}$
where $W_{i}=(X\multimap (A\times A)\multimap A)^i \multimap A\multimap A$. We
have stripped off the universal quantifier.
Now $d$ and $\vec d$ together yield an element of
$\Diamond^{1+\max(d_1,d_2)}$. It remains to construct a member of
$W_{1+\max(d_1,d_2)}$. To this end, we assume $u:X\multimap (A\times
A)\multimap A$, $f:(X\multimap (A\times A)\multimap A)^{\max(d_1,d_2)}$, and $a:A$,
and define the required element of $A$ as $u\ x\ \langle
\mathit{lr}'.1\ f\ a, \mathit{lr}'.2\ f\ a\rangle$. Here $.1$ and $.2$
denote the projections from the cartesian product. The sharing of the
variables $f$, $a$, $\mathit{lr}'$ is legal in the two components of a
cartesian pairing, but would of course not be acceptable in a
$\otimes$ pairing. We have elided the obvious coercions from
$(\_)^{\max(d_1,d_2)}$ to $(\_)^{d_i}$.
We remark that these cartesian trees are governed by their depth
rather than their number of nodes. We also note that if $X=I$ we can
form the function $\lambda d^{\Diamond}.\lambda
t^{T^\times}.\mathbf{node}\ d\ ()\ \langle t,t\rangle :
\Diamond\multimap T^\times\multimap T^\times$. Iterating this map yields a
function $N\multimap T^\times$ computing full binary trees of a given
depth. Of course, on the level of the realisers, such a tree is not
laid out in full as this would require exponential space, but computed
lazily as subtrees are being accessed. Exploring the implications of
this for programming is left to future work.}
{In the appendix we also show how to interpret two different kinds of
binary trees.}
\section{Conclusion}
We have given a unified semantic framework with which to establish
soundness of various systems for capturing complexity classes by logic
and programming. Most notably, our framework has all of second-order
multiplicative linear logic built in, so that only the connectives and
modalities going beyond this need to be verified explicitly.
While resulting in a considerable simplification of previous soundness
proofs, in particular for {\sf LFPL}\ and {\sf LAL}, our method has also led to
new results, in particular polymorphism and a modality for {\sf LFPL}.
The method proceeds by assigning both abstract resource bounds in the
form of elements from a resource monoid and resource-bounded
computations to proofs (respectively, programs). In this way, our method can
be seen as a combination of traditional Kleene-style realisability
(which only assigns computations) and polynomial and quasi-interpretations
known from term rewriting (which only assign resource
bounds). An altogether new aspect is the introduction of more general
notions of resource bounds than just numbers or polynomials as
formalised in the concept of resource monoid. We thus believe that
our methods can also be used to generalise polynomial interpretations
to (linear) higher-order.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:Intro}
Inflation is a generic intermediate attractor in the direction of expansion of the universe, and in the case of pure $R^2$ gravity it is an exact attractor~\cite{Starobinsky:1987zz}. However, it is not an attractor in the opposite direction in time. Thus, if we are interested in the most generic behavior before
inflation, more general anisotropic and inhomogeneous solutions should be considered. We know from General Relativity (GR) that anisotropic homogeneous solutions
already help us much in understanding the structure of a generic space-like curvature singularity. Thus, a natural question is to investigate anisotropic
solutions in $R+R^2$ gravity, too.\\
In light of the latest Cosmic Microwave Background constraints by PLANCK~\cite{Ade:2015lrj}, the pioneering inflationary model based on the modified $R+R^2$ gravity
(with small one-loop corrections from quantum fields)~\cite{Starobinsky:1980te} represents one of the most favourable models. It is among the simplest of all viable inflationary models, since it contains only one free adjustable parameter taken from observations. Also, it provides a graceful exit from inflation and a natural mechanism for the creation of known matter after its end, which is actually the same as that used to generate scalar and tensor
perturbations during inflation.
This theory can be read as a particular form of $f(R)$-theories of gravity which, in turn, is a limiting
case of scalar-tensor gravity when the Brans-Dicke parameter
$\omega_{BD}\to 0$, and it contains an additional scalar degree of freedom
(scalar particles, or quasi-particles, in quantum language) compared to GR
which is purely geometrical. The existence of a scalar degree of freedom
(an effective scalar field) is needed if we want to generate scalar (matter)
inhomogeneities in the universe from ``vacuum'' fluctuations of some quantum
field~\cite{Mukhanov:1990me, Lyth:2009zz}. Such generalizations of the familiar Einstein-Hilbert action have been also studied as an explanation for dark energy and late-time acceleration of the universe's expansion~\cite{Capozziello:2002rd, Amendola:2006kh,Nojiri:2006su,Starobinsky:2007hu} and to include quantum behaviour in the gravitational theory~\cite{Stelle:1976gc}.\\
As is already very well known, through the Gauss-Bonnet term, which in four dimensions is a surface term, the most general theory up to terms quadratic in the curvature is of the type $R+R^2+C_{abcd}C^{abcd}$, where $C_{abcd}$ is the Weyl tensor. The investigation of this type of models began with \cite{weyl1918gravitation,buchdahl1962gravitational,ruzmaikina1970quadratic,Gurovich:1979xg}. After them, many authors have analyzed the cosmological evolution of such models \cite{tomita1978anisotropic,Muller:1987hp,Berkin:1991nb,Barrow:2005qv,vitenti2006numerical,muller2006starobinsky,cotsakis2008slice,Barrow:2009gx,muller2011homogeneous,de2012bianchi,Muller:2012wx,MULLER:2014jaa,0264-9381-27-22-225013,PhysRevD.77.103523,PhysRevD.75.123515}. Particular attention is given to the asymptotic behavior in \cite{Cotsakis:2007un, Cotsakis:1997ck, Miritzis:2003eu, Miritzis:2007yn}. The addition of a Ricci square term creates a richer set of solutions; in particular, in~\cite{Starobinsky:1987zz,Muller:1989rp, Barrow:2006xb} it has been shown that the cosmic no-hair theorems no longer hold.\\
A quadratic theory like $R+R^2$ gravity is a particular case of the more general quadratic type and has higher-order time
derivatives in the equations of motion; this leads to the appearance of solutions which have no analogs in GR. One such solution corresponds to the scale factor $a \sim \sqrt{t}$ behavior,
which coincides with the radiation-dominated solution in GR. In quadratic gravity, this solution represents a vacuum solution which is also a stable past attractor for the Bianchi I model and probably for all Bianchi models~\cite{Barrow:2006xb}.
The other solution, being an analog of the GR solution with matter with equation
of state $p=(\gamma-1)\rho$, is $a \sim t^{4/3 \gamma}$ (instead of the usual $a \sim t^{2/3 \gamma}$ in GR),
and it can be naively considered as a solution which would describe the last stages of a collapsing universe
when quadratic terms dominate. However, this solution appears to be a saddle, so
a collapsing universe with generic initial conditions ends up in a vacuum ``false radiation'' regime,
in principle not possible in GR.\\
The $a(t)\propto\sqrt{t}$ behavior near the singularity in the $R+R^2$ model
does not mean that the $R^2$ term generically behaves as radiation. This
behavior is specific to the purely isotropic case, as will be
shown in the present paper (and even in the isotropic case the late-time behavior
is different: $a(t)\propto t^{2/3}$ modulated by small high-frequency
oscillations). Neither does it behave as an ideal fluid in the anisotropic
case.
When shear is taken into account the situation becomes more complicated.
Vacuum solutions exist in GR also, and in the simplest case of a flat anisotropic
metric (the case analyzed in the present paper) this is the famous
Kasner solution~\cite{Kasner:1921zz}. On the other hand, studies of cosmological evolution in
general quadratic gravity (which includes, apart from $R^2$, the Weyl tensor square term in the quadratic part of the action) indicate that the isotropic vacuum ``false radiation'' solution
still exists and, moreover, is an attractor~\cite{Barrow:2006xb}. The Kasner solution is also a solution of
quadratic gravity. Due to the complicated nature of the dynamics near the Kasner solution,
a generic trajectory can end up in either the Kasner or the isotropic solution, depending
on initial conditions~\cite{Toporensky:2016kss}.
So, for these reasons, the stability and the full behavior of quadratic theories of gravity are still subjects of
investigation, and one powerful tool to address these problems is the dynamical system approach~\cite{wainwright2005dynamical}, which allows one to find exact solutions
of the theory through the determination of fixed points and gives a description of the evolution of the system, at least at the qualitative level.
Despite the obvious fact that general quadratic gravity at the level of the action includes $R+R^2$ gravity
(which can be obtained by setting the coefficient of the Ricci square term to zero), this is not so at the level
of the corresponding equations of motion for the universe model in question. The reason is that the number
of degrees of freedom in general quadratic gravity is bigger than in the $R+R^2$ theory (which, in its turn,
is bigger than in GR). That is why we cannot simply put the corresponding constant to zero in the
cosmological equations of motion. This means that the cosmological evolution of a flat anisotropic universe
in $R+R^2$ gravity needs a special investigation, which is the matter of the present paper.
The paper is organized as follows: In section 2, we present the basic equations of the model. In section 3 we describe schematically the strategy adopted to obtain the correct degrees of freedom and then we analyze the dynamics of $R^2$-gravity both in the vacuum case and in the case with matter; we find exact solutions and determine their stability. In section 4, we derive the analytic behavior of the shear using a general line element. Finally, section 5 contains a summary of the results and conclusions.
\section{System under consideration}
\label{sec-sys}
The gravitational action considered in our analysis is the following
\begin{equation}
S=\frac{1}{16\pi G}\int d^{4}x \sqrt{-g}\left[\left(R-2\Lambda\right)+\beta R^{2}\right]\,,
\label{acao}
\end{equation}
where $g$ is the determinant of the metric, $G$ the Newton constant and $\beta$ a parameter.\\ This theory can be interpreted as a particular form of $f(R)$ gravity.
Observations tell us that the dimensionless coefficient $\frac{\beta}{16\pi G}$ is very large, $\approx 5 \times 10^8$. This follows from the fact that its expression in terms of observable
quantities, in the leading order of the slow-roll approximation, is
$\frac{\beta}{16\pi G}= \frac{N^2}{288\,\pi^2 P_{\zeta}(k)}$ where $P_{\zeta}(k)$ is the power
spectrum of primordial scalar (adiabatic) perturbations, {\it N} is both
$ \ln{k_f/k}$ and the number of e-folds from the end of inflation, $k_f$ being
the wave vector corresponding to the comoving scale equal to the Hubble
radius at the end of inflation ($k_f/a(t_0)$ is slightly less than the CMB
temperature now): see e.g.~\cite{Netto:2015cba}. For the $R+R^2$ inflationary model, $P_{\zeta}\propto N^2$,
so $\beta$ is indeed a constant. Note also that $\beta=\frac{1}{6M^2}$, where $M$ is the scalaron mass after the end of inflation (and in flat space-time, too). On the other hand, the coefficient of the
Weyl square term (that is present in general quadratic model of gravity) in the Lagrangian density generated by one-loop quantum
gravitational effects is not expected to be so large. Typically it is of
the order of unity (or even significantly less due to small numerical
factors) multiplied by the number of elementary quantum fields. Thus,
there exists a large range of the Riemann and Ricci curvature where the
$R^2$ term dominates while the contribution from the
Weyl square term is still small. For this reason, anisotropic solutions
preceding the inflationary stage may be studied using the same $R+R^2$ model
up to curvatures of the order of the Planck one.\\
Metric variation of the theory in (\ref{acao}) gives the following field equations
\begin{equation}
E_{ab}\equiv\left(G_{ab}+g_{ab}\Lambda\right)+\beta H_{\: ab}^{(1)}=0\,,
\label{eq.campo}
\end{equation}
where
\begin{eqnarray}
& & G_{ab}=R_{ab}-\frac{1}{2}g_{ab}R\,,\nonumber\\
& & H_{ab}^{(1)}=-\frac{1}{2}g_{ab}R^{2}+2RR_{ab}+2g_{ab}\nabla^2 R-2R_{;ab}\,. \label{H1}
\end{eqnarray}
$H_{ab}^{(1)}$ is the contribution coming from the variation of the $R^2$ term. Let us emphasize that every Einstein metric satisfying $R_{ab}=g_{ab}\Lambda$
is an exact solution of (\ref{eq.campo}). This implies that all vacuum solutions of GR are also exact solutions of the quadratic theory \eqref{acao}. Any source that satisfies $\nabla^{c}T_{ca}=0$ can be consistently added to the right hand side of \eqref{eq.campo}.\\
As anticipated, a powerful tool to provide exact solutions of quadratic theories of gravity is the dynamical system approach, which allows for the determination of fixed points and for a qualitative description of the global dynamics of the system. It is particularly suited for the study of the dynamics of anisotropic spacetimes~\cite{wainwright2005dynamical}, like spatially homogeneous Bianchi metrics. In this case we can write the line-element as
\begin{eqnarray}
& & ds^{2}=-\frac{dt^{2}}{H(t)^{2}}+\delta_{ij}{\bf {\omega}}^{i}\otimes\omega^{j}\,,
\end{eqnarray}
where the $i,\, j$ indices refer to the spatial part and $\omega^{j}$ is a triad of one-forms
\begin{equation}
d \omega^{i}=-\frac{1}{2}C^{a}_{\,bc} \omega^{b}\otimes \omega^{c}\,,
\end{equation}
where $C^{a}_{\,bc}$ are the spatial structure constants of the Bianchi group, and depend only on time. They are usually defined as
\begin{equation}
C^{a}_{\,bc}=\varepsilon_{bcd} n^{da}-\delta^{a}_{\,b} a_{c} + \delta^{a}_{\,c}a_{b}\,,
\end{equation}
where the values of the symmetric matrix $n^{ab}$ and the vector $a_{b}$ define the various Bianchi models. In our case, where we focus on Bianchi I metric, we have $n_{ab}=0$ and $a_{b}=0$. \\
Defining the time-like vector $u^{a}=(H,0,0,0)$, which satisfies the normalization condition $u^{a}u_{a}=-1$, and the projection tensor $h_{ab}=g_{ab}+u_{a}u_{b}$, we can define the relevant kinematical quantities \begin{eqnarray}
& & \nabla_{a}u_{b}=\sigma_{ab}+\omega_{ab}+\frac{1}{3}\theta h_{ab}-\dot{u}_{a}u_{b}\,,\nonumber \\
& & \sigma_{ab}=u_{(a;b)}-\frac{1}{3}\theta h_{ab}+\dot{u}_{(a}u_{b)}\,,\nonumber \\
& & \omega_{ab}=u_{[a;b]}+\dot{u}_{[a}u_{b]}\,,\nonumber \\
& & \dot{u}_{a}=u^{b}\nabla_{b}u_{a}\,,\nonumber \\
& & \theta=\nabla_{c}u^{c}\,,\label{teta}
\end{eqnarray}
where $\sigma_{ab}$ is the symmetric shear tensor $(\sigma_{ab}=\sigma_{(ab)}, \sigma_{ab}u^{b}=0, \sigma^{a}_{\;a}=0)$, $\omega_{ab}$ is the vorticity tensor $(\omega_{ab}=\omega_{(ab)}, \omega_{ab}u^{b}=0)$ and $\dot{u}_{a}$ is the acceleration vector. $\theta$ is the volume expansion, and it is related to the Hubble parameter by
\begin{equation}
\theta=3H\,.
\label{H}
\end{equation}
In our analysis we consider a cosmological model where the shear is diagonal and is defined as
\begin{eqnarray}
& & \sigma_{ij}=\mbox{diag}\left[-\frac{2\sigma_{+}}{H},\frac{\sigma_{+}+\sqrt{3}\sigma_{-}}{H},\frac{\sigma_{+}-\sqrt{3}\sigma_{-}}{H}\right],\label{sigma}
\end{eqnarray}
and since we will consider spatially homogeneous spacetimes,
the time-like vector is geodesic $\dot{u}^{a}=0$ with zero vorticity
$\omega_{ab}=0$, being normal to the time slices.\\
We consider a perfect fluid source with no anisotropic pressures so the energy-momentum tensor is
\begin{equation}
T_{ab}=(\rho+p)u_au_b +pg_{ab}\,,
\end{equation}
and it can be decomposed schematically as
\begin{equation}
8\pi GT_{ab}=\mbox{diag} [3\Omega_m,3wH^2\Omega_m,3wH^2\Omega_m,3wH^2\Omega_m]\,.
\label{emtensor}
\end{equation}
where $w$ is the equation of state (EoS) parameter.\\
In order to have a system of autonomous first order differential equations we divide the shear $\sigma_{\pm}$, given in \eqref{sigma}, and the density parameters by appropriate powers of $H$, defining in this way the new dimensionless expansion-normalized variables (ENV)
\begin{eqnarray}
& & \Sigma_{\pm}=\frac{\sigma_{\pm}}{H}\,,\nonumber \\
& & \Omega_{m}=\frac{8 \pi G \rho }{3H^{2}}\,,\nonumber \\
& & \Omega_{\Lambda}=\frac{\Lambda}{3H^{2}}.\label{Omegas}
\end{eqnarray}
where $\rho$ is the energy density and $\rho/H^2= T_{00}$.\\
The rest of the ENV are zero since we are restricting to the Bianchi I case. The time evolution of the sources follow directly from the conservation of the energy momentum tensor ($\nabla^{b} T_{ab}=0$) and from the definition itself of \eqref{Omegas},
\begin{eqnarray}
& & \dot{\Omega}_{m}=-3(w+1)\Omega_{m}-2Q_1\Omega_{m}\,,\nonumber\\
& & \dot{\Omega}_{\Lambda}=-2Q_1\Omega_{\Lambda}\,.
\label{e.t.Omegas}
\end{eqnarray}
The fact that we have a higher-order theory of gravity requires the introduction of
additional ENVs, which reflect the higher-order time derivatives in the equations of motion, as first done in~\cite{Barrow:2006xb}
\begin{eqnarray}
& & Q_{1}=\frac{\dot{H}}{H}\,,\nonumber\\
& & Q_{2}=\frac{\ddot{H}}{H^2}\,,\nonumber\\
& & B=\frac{1}{3\beta H^{2}}\,.
\label{Sigma}
\end{eqnarray}
According to their own definitions, these ENV must satisfy the following differential equations (where $\Sigma_{\pm1}=\dot{\sigma}_{\pm}/H$)
\begin{eqnarray}
& & \dot{\Sigma}_{\pm}=\Sigma_{\pm1}-\Sigma_{\pm}Q_{1}\,,\nonumber\\
& & \dot{B}=-2Q_{1}B\,,\nonumber\\
& & \dot{Q}_{1}=Q_{2}-Q_{1}^{2}\,.\label{e.t.Sigma}
\end{eqnarray}
So now we have all the ingredients to compute the evolution of our theory. The complete dynamical system is given by the equations (\ref{e.t.Omegas}), (\ref{e.t.Sigma}) and by the differential equations shown in the Appendix~\ref{app-A}.
\section{Generalized anisotropic solutions}
\label{sec-gensol}
Now we start from the following line element for the spacetime
\begin{equation}
ds^{2}=-d\tau^{2}+\tau^{2p_{1}}dx^{2}+\tau^{2p_{2}}dy^{2}+\tau^{2p_{3}}dz^{2}.\label{le}
\end{equation}
For vanishing cosmological constant, and near the singularity when $\tau\rightarrow 0$, the Einstein tensor for the above line element goes like $G_{ab}\sim 1/\tau^2$, so it becomes negligible in comparison to $H^{(1)}_{ab}\sim 1/\tau^4$ given in (\ref{H1}). By directly substituting the line element (\ref{le})
into the field equations \eqref{eq.campo} for a vacuum source, a purely algebraic
equation is obtained when $B=\frac{1}{3\beta H^2}\rightarrow 0$
\begin{equation}
p_{2}^{2}+p_{2}(-1+p_{1}+p_{3})-(p_{1}+p_{3}-p_{1}^{2}-p_{3}^{2}-p_{1}p_{3})=0\,.
\label{peq}
\end{equation}
The set of solutions \eqref{peq} can be parametrized using two angles $\psi$ and $\phi$
\begin{eqnarray}
& & p_{1}=\sqrt{\frac{3}{8}}\sin\phi\left(\frac{\cos\psi+\sqrt{2}\sin\psi}{\sqrt{2}}\right)+\frac{1}{4}\,,\nonumber \\
& & p_{3}=\sqrt{\frac{3}{8}}\sin\phi\left(\frac{\cos\psi-\sqrt{2}\sin\psi}{\sqrt{2}}\right)+\frac{1}{4}\,,\nonumber \\
& & p_{2}=\frac{1}{4}-\sqrt{\frac{3}{8}}\sin\phi\frac{\cos\psi}{\sqrt{2}}+\sqrt{\frac{3}{8}}\cos\phi,\label{ps}
\end{eqnarray}
where $\psi\in[0,2\pi]$ and $0<\phi<\pi$; the solution set lies on the surface of the ellipsoid shown in figure \ref{ellipsoid}. This surface contains both the generalized Kasner solution and the isotropic solution. \\
\begin{figure}[t!]
\centering
\includegraphics[scale=0.40, angle=0]{Ellipsoidg.pdf}
\caption{Ellipsoid in the parameter space $p_1$, $p_2$ and $p_3$ given by equation (\ref{ps}).}
\label{ellipsoid}
\end{figure}
\\
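The claim that the parametrization \eqref{ps} satisfies \eqref{peq} can be verified symbolically; a quick check (ours, using Python with sympy) is:
\begin{verbatim}
import sympy as sp

psi, phi = sp.symbols('psi phi', real=True)
c = sp.sqrt(sp.Rational(3, 8))
p1 = c*sp.sin(phi)*(sp.cos(psi) + sp.sqrt(2)*sp.sin(psi))/sp.sqrt(2) \
     + sp.Rational(1, 4)
p3 = c*sp.sin(phi)*(sp.cos(psi) - sp.sqrt(2)*sp.sin(psi))/sp.sqrt(2) \
     + sp.Rational(1, 4)
p2 = sp.Rational(1, 4) - c*sp.sin(phi)*sp.cos(psi)/sp.sqrt(2) \
     + c*sp.cos(phi)

lhs = p2**2 + p2*(-1 + p1 + p3) - (p1 + p3 - p1**2 - p3**2 - p1*p3)
print(sp.simplify(lhs))   # expected output: 0
\end{verbatim}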
The expansion normalized variables for the line element \eqref{le} read
\begin{eqnarray}
& & Q_{1}=-\frac{3}{p_{1}+p_{2}+p_{3}}\,,\nonumber\\
& & Q_{2}=\frac{9}{(p_{1}+p_{2}+p_{3})^{2}}\,,\nonumber\\
& & \Sigma_{+}=\frac{-3p_{1}+(p_{1}+p_{2}+p_{3})}{2(p_{1}+p_{2}+p_{3})}\,,\nonumber\\
& & \Sigma_{-}=\frac{\sqrt{3}(p_{2}-p_{3})}{2(p_{1}+p_{2}+p_{3})}\,.
\label{sol_env}
\end{eqnarray}
The solution space given by eq.~\eqref{peq} can be written in a more compact form using the variables $u=p_1^2+p_2^2+p_3^2$ and $s=p_1+p_2+p_3$, with $\tau\rightarrow 0$
\begin{equation}
\frac{u}{2} +\frac{s^2}{2}-s=0\,,
\label{eqcampovacuo}
\end{equation}
or with respect to ENV with $B\rightarrow 0$, as
\begin{equation}
2+Q_1+\Sigma_-^2+\Sigma_+^2=0.\label{sol.env}
\end{equation}
The solution of (\ref{eqcampovacuo}) is easily obtained as
\begin{equation}
u=2s-s^2\,.
\label{solucaovacuo}
\end{equation}
This compact way of writing the equation is particularly suitable for checking solutions: in fact, it can easily be seen that Kasner ($s=1$ and $u=1$) and the isotropic vacuum ($s=3/2$ and $u=3/4$) are both particular solutions of this equation\footnote{The same generic behaviour near an anisotropic curvature singularity occurs for a non-minimally coupled scalar field in many cases,
in particular, for a massive conformally-coupled field; see the recent
paper~\cite{Kamenshchik:2017fk} in this connection.}. We also recall that, in terms of the ENV, the generalized Kasner solution is given by $Q_1=-3$, $\Sigma_+^2+\Sigma_-^2=1$, and the isotropic vacuum solution by $Q_1=-2$ and $\Sigma_+=\Sigma_-=0$; both of these solutions belong to the solution set given by (\ref{eqcampovacuo}).
The Ricci scalar can be written in terms of the ENV, and in terms of the variables $s$ and $u$, as
\begin{eqnarray}
&&R=\frac{2}{\beta B}\left(2+Q_{1}+\Sigma_{+}^{2}+\Sigma_{-}^{2}\right)\,,\nonumber\\
&&R=\frac{6}{\beta B}\left( \frac{s^2/2 +u/2-s}{s^2}\right)\,,
\label{Riemann_scalar}
\end{eqnarray}
such that the solution given in \eqref{solucaovacuo}, \eqref{sol.env}, as long as $B\ne0$, results in
\[
R=0\,.
\]
By looking at~\eqref{H1}, it is not difficult to convince ourselves that zero Ricci scalar ($R=0$) is in fact the asymptotic solution of eq.~\eqref{eq.campo}. If, near the singularity when $\tau\rightarrow 0$, the most important contribution to (\ref{eq.campo}) comes from $H^{(1)}_{ab}$ given in (\ref{H1}), the field equation can be approximated by
\begin{equation}
H^{(1)}_{ab}\approx 0,
\end{equation}
then $R=const.=0$ implies $H^{(1)}_{ab} = 0$. Let us stress that this solution is only valid if the other terms in the field equation (\ref{eq.campo}) are negligible in comparison to $H^{(1)}_{ab}$.
On the other hand, for vanishing cosmological constant and in the absence of classical sources, in a non-perturbative picture (in the sense that the Einstein tensor $G_{ab}$ is not disregarded), $G_{ab}$ behaves as an effective source for the field equations. Even though it diverges at the singularity (\ref{eqcampovacuo}), the following ratios of the effective pressures to the energy density, obtained directly from (\ref{le}),
\begin{eqnarray*}
&&\epsilon_1=\frac{G_1^1}{G_{00}}=-\frac{p_3^2-p_3+p_3p_2+p_2^2-p_2}{p_2p_1+p_3p_1+p_3p_2}\,,\\
&&\epsilon_2=\frac{G_2^2}{G_{00}}=-\frac{p_1^2-p_1+p_3^2-p_3+p_1p_3}{p_2p_1+p_3p_1+p_3p_2}\,,\\
&&\epsilon_3=\frac{G_3^3}{G_{00}}=-\frac{p_1^2-p_1+p_2^2-p_2+p_2p_1}{p_2p_1+p_3p_1+p_3p_2}\,,
\end{eqnarray*}
do have a well defined limit. The trace indicates that at the singularity, given by (\ref{eqcampovacuo}), the effective EoS parameter behaves as in radiation
\begin{equation}
\epsilon_1+\epsilon_2+\epsilon_3=1\,.
\end{equation}
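This value can be checked directly (a short computation we add for completeness). Writing $q=p_1p_2+p_1p_3+p_2p_3=(s^2-u)/2$, the three numerators above sum to
\[
2u-2s+q\,,
\]
and on the solution set $u=2s-s^2$ one has $q=s^2-s$, so that $2u-2s+q=s-s^2=-q$. Since each $\epsilon_i$ is minus its numerator divided by $q$, the sum is indeed $\epsilon_1+\epsilon_2+\epsilon_3=1$.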
\subsection{Stability analysis \label{stability}}
In the dynamical system approach, the field equations are rewritten with respect to the ENV, such that the solutions are fixed points. In particular, the solution space described in the previous section constitutes an invariant set of fixed points. The linearization around the fixed points reveals the local stability of the theory. In fact, since all eigenvalues satisfy $\lambda_i \geq 0$, this solution set is an attractor to the past, as all trajectories to the future deviate exponentially from it. Stability with and without a matter source is going to be addressed; the presence of matter is irrelevant for sufficiently big shear.
\subsubsection{Obtaining the dynamical system\label{dof}}
In order to describe the evolution of the correct degrees of freedom, in this section we will describe the strategy that we have adopted in order to simplify
the system of equations of motion. From the $E_{11}, E_{22}$ and $E_{33}$ equations in \eqref{eq.campo} we can isolate the variable related to the higher order time derivative $Q_{2}$; then we find a system of $3$ differential equations
\begin{eqnarray}
& & \dot{Q}_{2}=f_{1}(Q_{1},Q_{2},\Sigma_{\pm2},\Sigma_{\pm1},\Sigma_{\pm},\Omega,B)\,,\nonumber\\
& & \dot{Q}_{2}=f_{2}(Q_{1},Q_{2},\Sigma_{\pm2},\Sigma_{\pm1},\Sigma_{\pm},B)\,,\nonumber\\
& & \dot{Q}_{2}=f_{3}(Q_{1},Q_{2},\Sigma_{\pm2},\Sigma_{\pm1},\Sigma_{\pm},B)\,,
\label{fieldeq}
\end{eqnarray}
where $f$ is a generic function of all the remaining ENV. From the $E_{00}$ component of \eqref{eq.campo}, we obtain a constraint equation
\begin{equation}
0=C_{1}(\Sigma_{\pm1},\Sigma_{\pm},Q_{1},Q_{2},\Omega,B)\,,
\label{constr}
\end{equation}
that, as we will see, will be important to check the stability of the numerical evolution of the dynamical system.\\
By taking linear combinations of the field equations~\eqref{fieldeq}, we can obtain two additional
constraints that read as
\begin{eqnarray}
& & 0=C_{2}(\Sigma_{\pm1},\Sigma_{\pm},Q_{1},Q_{2},\Omega,B)\,,\nonumber\\
& & 0=C_{3}(\Sigma_{\pm1},\Sigma_{\pm}Q_{1},Q_{2},\Omega,B)\,.
\label{constr2}
\end{eqnarray}
So now, from the constraints~\eqref{constr} and~\eqref{constr2}, it is possible to write three algebraic equations for $Q_{2}$, $\Sigma_{+1}$ and $\Sigma_{-1}$, which will be functions of the remaining variables and which we write schematically as
\begin{eqnarray}
& & Q_{2}(\Sigma_{\pm},Q_{1},\Omega,B)\,,\nonumber \\
& & \Sigma_{+1}(\Sigma_{\pm},Q_{1},\Omega,B)\,,\nonumber \\
& & \Sigma_{-1}(\Sigma_{\pm},Q_{1},\Omega,B)\,.
\label{Q2S1}
\end{eqnarray}
If we now consider the ENV related to $\ddot{\sigma}_{\pm}$, defined as $\Sigma_{\pm 2}=\frac{\ddot{\sigma}_{\pm}}{H}$, we can use its definition to derive the equation
$\dot{\Sigma}_{\pm 1}=\Sigma_{\pm2}-Q_{1}\Sigma_{\pm1}$. Using~\eqref{Q2S1}, it is now possible
to derive the equation for $\Sigma_{\pm2}$
\begin{equation}
\Sigma_{\pm2}(\Sigma_{\pm},Q_{1},\Omega,B)\,.
\end{equation}
Substituting the last equation and eq.~\eqref{Q2S1} into
the original dynamical system equations we finally obtain also the equation for
$\dot{Q}_{2}=f_{1}(\Sigma_{\pm},Q_{1},\Omega,B)$.\\
Then the complete dynamical system is described by the following equations
\begin{eqnarray}
& & \dot{Q}_{1}=Q_{2}(\Sigma_{\pm},Q_{1},\Omega,B)-Q_{1}^{2}\,,\nonumber \\
& & \dot{\Sigma}_{\pm}=\Sigma_{\pm1}(\Sigma_{\pm},Q_{1},\Omega,B)-Q_{1}\Sigma_{\pm}\,,\nonumber \\
& & \dot{B}=-2Q_{1}B\,,\nonumber \\
& & \dot{\Omega}_{m}=\left(-2Q_{1}-3(w+1)\right)\Omega_{m}\,,\nonumber \\
& & \dot{Q}_{2}=f_{1}(\Sigma_{\pm},Q_{1},\Omega,B)\,.
\label{dynsyst}
\end{eqnarray}
where, in our analysis, the last equation will be integrated numerically and compared with the algebraic
relation $Q_{2}(\Sigma_{\pm},Q_{1},\Omega,B)$ contained in~\eqref{Q2S1}. Moreover,
we will use one of the constraints to obtain a conserved quantity
with which to numerically check the stability of our results. The last equation is not a dynamical degree
of freedom, but just a device to check the solutions numerically. \\
Looking at the above set of equations, it can be noted that there is only
one additional dynamical degree of freedom compared to General Relativity, which
is the first equation for $\dot{Q}_{1}$. This can be easily understood by remembering that, through
a conformal transformation, this gravitational theory is equivalent
to GR plus a scalar field~\cite{Sotiriou:2008rp}.
As we have described above, the linearization of the dynamical system (\ref{dynsyst}) around the solution (\ref{sol.env}) gives rise to the following eigenvalues, that will be discussed in the next subsections.
\subsubsection{Pure geometric modes}
As a first case we consider the vacuum case (without the matter modes). In this case the stability of the system is characterized by the following eigenvalues
\begin{eqnarray}
& &\lambda_{1}=2(2+\Sigma_{-}^2+\Sigma_{+}^2)\,,\nonumber\\
& &\lambda_{2}=3(\Sigma_{+})^{2}+3(\Sigma_{-})^{2}+3\,,\nonumber\\
& & \lambda_{3}=0_{2}\,.
\label{vacuumeigen}
\end{eqnarray}
Except for the two zero eigenvalues, this solution is an attractor to the past. These two zero
eigenvalues appear naturally because we in fact have a two-dimensional set of fixed points. So, according to this theory, the universe began as a generalized non-isotropic solution, as shown in Figure \ref{backevolution}. There it can be seen in panel $b)$ that an arbitrary initial condition approaches eq. \eqref{sol_env}, which defines this solution set. Even if not reported, the constraint was numerically verified.
\begin{figure}[t!]
\centering
\captionsetup{width=.8\linewidth}
\begin{tabular}{c c}
\resizebox{0.44\columnwidth}{!}{\includegraphics{figure.pdf}} &
\resizebox{0.45\columnwidth}{!}{\includegraphics{fixed_point.pdf}}\\
$a)$ & $b)$
\end{tabular}
\caption{Backwards time evolution, showing the approach to the singularity. The expansion normalized shear variables $\Sigma_+$ and $\Sigma_-$ and $Q_1$ approach constant asymptotic values. We explicitly checked that asymptotically $Q_1<-4.37$. In $a)$ it can be seen that the Ricci scalar decreases which is consistent with eq. \eqref{ev_ricci_scalar} when $Q_1<-3$. In $b)$ it is explicitly shown that the numerical solution approaches eq. \eqref{sol_env}, as it should for all points of the past attractor. Even if not reported in the plot, the constraint has been checked to be satisfied numerically.}
\label{backevolution}
\end{figure}
\subsubsection{Matter modes}
Allowing perturbations in the matter sector $(\Omega_{\Lambda},\,\Omega_{m})$
we have the following eigenvalues,
\begin{eqnarray}
& & \lambda_{1}=3(\Sigma_{+})^{2}+3(\Sigma_{-})^{2}+3\,,\nonumber\\
& & \lambda_{2}=1-3w+2(\Sigma_{+})^{2}+2(\Sigma_{-})^{2}\,,\nonumber\\
& & \lambda_{3}=0_{2}\,,\nonumber\\
& & \lambda_{4}=(4+2(\Sigma_{+})^{2}+2(\Sigma_{-})^{2})_2\,.
\end{eqnarray}
Excluding the zero eigenvalues, it is interesting that this solution
is an attractor to the past in the presence of a matter source for
any initial values of $\Sigma_{+}$ and $\Sigma_{-}$ as long as
the EoS parameter satisfies $-1<w<1/3$. For $1/3<w<1$, the solution will still
be an attractor to the past for sufficiently large initial values of
$\Sigma_{+}$ and $\Sigma_{-}$, such that $\lambda_{2}$ is positive.
We focused on solutions which are attractors to the past; however, there can be other solutions~\cite{Barrow:2006xb}.
\section{Analytic behavior}
\label{sec-analbe}
Now, starting from a remarkable property of any $f(R)$ gravity, obtained in~\cite{Gurovich:1979xg}, it is possible to determine the dynamical evolution of the shear as an exact, analytical result. Considering a Lagrangian like
\begin{equation}
\mathcal{L}=\sqrt{-g}f(R)\,,
\label{effeR}
\end{equation}
the variation with respect to the metric gives the following field equation
\begin{equation}
f^\prime R_{ab} -\frac{1}{2} f g_{ab} -\nabla_a\nabla_b f^\prime + \nabla ^2 f^\prime g_{ab}=\kappa T_{ab}\,,
\label{fr}
\end{equation}
where $f'$ denotes the derivative with respect to $R$, and an overdot the derivative with respect to proper time. $T_{ab}$ is the energy momentum tensor of some matter source. Since we are considering a spatially homogeneous spacetime, with corresponding sources, the following combinations are zero: $a)\,T_{22}-T_{33}=0$, $b)\,T_{11}-T_{22}/2-T_{33}/2=0$. In the same way, taking the same combinations of the left hand side of the field equations (\ref{fr}), we have
\begin{eqnarray}
&&a)\;2H\sqrt{3}\left(\frac{d}{dt}\sigma_- +3\sigma_-\right)f^\prime+2H\sqrt{3}\sigma_-\frac{d}{dt}f^\prime=0\,,\\
&&b)\; 3H\left(\frac{d}{dt}\sigma_+ +3\sigma_+\right)f^\prime+3 H\sigma_+\frac{d}{dt}f^\prime=0\,.
\end{eqnarray}
We can see that these equations admit as solutions
\begin{eqnarray}
&&a)\;\sigma_-=\frac{C_- e^{-3t}}{f^\prime}\,,\\
&&b)\;\sigma_+=\frac{C_+ e^{-3t}}{f^\prime}\,.
\label{sigmafr}
\end{eqnarray}
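That these expressions indeed solve $a)$ and $b)$ can be seen (a check we add for clarity) by noting that each left hand side is proportional to a total derivative,
\[
\left(\frac{d\sigma_\pm}{dt} +3\sigma_\pm\right)f^\prime+\sigma_\pm\frac{df^\prime}{dt}
= e^{-3t}\,\frac{d}{dt}\left(e^{3t}\sigma_\pm f^\prime\right)\,,
\]
so that $e^{3t}\sigma_\pm f^\prime=C_\pm$ is constant along the evolution.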
Now we can connect the constants ($C_{\pm}$) with the parameters $p_1, p_2$ and $p_3$. As in~\cite{Gurovich:1979xg} we can write the scale factors given in (\ref{le}) as
\begin{equation}
a=rg_1,\;\; b=rg_2,\;\;c=rg_3,\;\;\;\;\; \mbox{with} \;\;\;\;\; abc=r^3,
\end{equation}
such that assuming $\prod_ig_i=1$, we have
\begin{equation}
\frac{\dot{g}_1}{g_1}+\frac{\dot{g}_2}{g_2}+\frac{\dot{g}_3}{g_3}=0.
\end{equation}
Taking into account~\eqref{teta}, \eqref{H} and \eqref{sigma} we find that
\begin{eqnarray}
&&\frac{\dot{r}}{r}=H\,,\\
&&\frac{\dot{g}_1}{g_1}=-2\sigma_+,\;\;\;\frac{\dot{g}_2}{g_2}=\sigma_+ +\sqrt{3}\sigma_-,\;\;\; \frac{\dot{g}_3}{g_3}=\sigma_+- \sqrt{3}\sigma_-\,.
\label{gs}
\end{eqnarray}
Remembering that derivatives with respect to proper time are related to derivatives with respect to dynamical time by $\frac{d}{d\tau}=\frac{dt}{d\tau} \frac{d}{dt}=H\frac{d}{dt}$, we can solve the first of~\eqref{gs}
\begin{equation}
r=e^{t}\,,
\end{equation}
which, when substituted into \eqref{sigmafr} and \eqref{gs}, gives
\begin{equation}
\frac{\dot{g}_i}{g_i}=\frac{C_i}{r^3f^\prime}\,.
\end{equation}
The relation found here is the same as eq.~$(10)$ in~\cite{Gurovich:1979xg}, with the constants $C_{\pm}$ related to $C_1$, $C_2$ and $C_3$ by
\begin{equation}
C_1=-2C_+,\;\; C_2=C_+ +\sqrt{3}C_-,\; \;C_3=C_+-\sqrt{3}C_-\,.
\end{equation}
\\
When we consider the asymptotic solution discussed above we finally find that the relation with the coefficient $p_1$, $p_2$ and $p_3$ is the following
\begin{equation}
C_1=p_1-\frac{s}{3}, \;\;C_2=p_2-\frac{s}{3}, \;\; \mbox{and } C_3=p_3-\frac{s}{3}\,,
\end{equation}
where again $s=p_1+p_2+p_3$ and (\ref{solucaovacuo}) must be satisfied. $C_\pm$ can also be expressed as
\begin{eqnarray}
&&C_+=\frac{-2p_1+p_2+p_3}{6}\,,\nonumber\\
&&C_-=\frac{p_2-p_3}{2\sqrt{3}}\,.\label{c-c+}
\end{eqnarray}
It is now possible to understand the space of solutions close to the singularity, when $B\rightarrow 0$, as long as the solution stays near the asymptotic solution given in (\ref{eqcampovacuo}), which must be the case since, as explained in section \ref{stability}, this solution set is an attractor to the past.
First of all, taking the trace of (\ref{eq.campo}), bearing in mind (\ref{H1}), gives the following equation for $R$ in the absence of sources
\begin{eqnarray}
&&-R+6\beta\Box R=0\,,\nonumber\\
&&-R-\frac{1}{3\beta B}(\ddot{R}+(Q_1+3)\dot{R})=0 \label{ev_R}\,.
\end{eqnarray}
Near the singularity, when $t \rightarrow -\infty$, $Q_1=-3/s=const.$ and $B=\frac{e^{6t/s}}{3 \beta H_0^2 }$, so that eq. (\ref{ev_R}) becomes
\begin{equation}
\ddot{R}+(-3/s+3)\dot{R}+R\frac{e^{6t/s}}{H_0^2 }=0\,,
\label{Ricciev}
\end{equation}
which has an analytical solution given by the Bessel function of the first kind $J_a$ and the Bessel function of the second kind $Y_a$; when written with respect to proper time $\tau$, with $\exp(3t/s)\propto\tau$, it is given by
\begin{equation}
R=\{\hat{C}_1J_{(s-1)/2}(s\tau/(3H_0))+\hat{C}_2Y_{(s-1)/2}(s\tau/(3H_0))\}\tau^{(1-s)/2}.
\end{equation}
Asymptotically, as $\tau\rightarrow 0$ and $B\rightarrow 0$, \eqref{Ricciev} simplifies to
\begin{equation}
\ddot{R}+(-3/s+3)\dot{R}=0\,,
\end{equation}
and when $s\ne 1$ the solution is
\begin{equation}
R=C+C_0\exp [-(Q_1+3)t]=C+\tilde{C}\, \tau^{(1-s)}\,,
\end{equation}
while for $s=1$, which means $Q_1=-3$, it is
\begin{equation}
R=C_1+\tilde{C_1}t=C_2+\tilde{C_2}\frac{s}{3}\ln(\tau),
\end{equation}
where all the $C$s are constants.
This asymptotic behavior of $R$ can be substituted into \eqref{sigmafr} for the particular theory analyzed hitherto, for which we have $f^\prime=1\,+\,2\beta R$,
\begin{eqnarray}
&&\Sigma_+=\frac{\sigma_+}{H}=\left( \frac{1}{\exp(Q_1 t)}\right)\frac{C_+ e^{-3t}}{1+2\beta(C+C_0\exp [-(Q_1+3)t] )} \,,\nonumber\\
&&\Sigma_-=\frac{\sigma_-}{H}=\left( \frac{1}{\exp(Q_1 t)}\right)\frac{C_- e^{-3t}}{1+2\beta(C+C_0\exp [-(Q_1+3)t] )}\,. \label{sigmaCs}
\end{eqnarray}
When $Q_1>-3$, which corresponds to $s>1$, this expression gives at the singularity ($t\rightarrow -\infty $)
\begin{eqnarray}
&&\Sigma_+=\frac{C_+}{2\beta C_0}\,,\\
&&\Sigma_-=\frac{C_-}{2\beta C_0}\,,
\end{eqnarray}
and since $2+Q_1+\Sigma_-^2+\Sigma_+^2=0$ with $Q_1=-3/s=const.$, this results in the following asymptotic form for the Ricci scalar
\begin{equation}
R=C+\frac{\sqrt{C_-^2+C_+^2}}{2\beta\sqrt{-Q_1-2}}\exp [-(Q_1+3)t]\,,
\end{equation}
and, since the Ricci scalar must be real, then $Q_1<-2$, which gives $s<3/2$.
We can also obtain the constant $C$, since we know that, equivalently when $s<1$ or $Q_1<-3$, the asymptotic solution set \eqref{solucaovacuo} must continue to be a past attractor, see section \ref{stability}. This attractor has constant, well defined values of $Q_1$, $\Sigma_\pm$ satisfying \eqref{sol_env}, and when $Q_1<-3$ this will only occur if there is a particular cancellation in the denominator of \eqref{sigmaCs}, $f^\prime\rightarrow 0$, giving a well defined limit for $\Sigma_\pm$ at the singularity
\begin{equation}
1+2\beta C =0 \rightarrow C=-\frac{1}{2\beta}\,.
\end{equation}
We have the final asymptotic form for the Ricci scalar
\begin{equation}
R=-\frac{1}{2\beta}+\frac{\sqrt{C_-^2+C_+^2}}{2\beta\sqrt{-Q_1-2}}\exp [-(Q_1+3)t]\,,
\end{equation}
which is valid for $0<s<3/2$. Through (\ref{c-c+}), this last expression can be written as
\begin{equation}
R=-\frac{1}{2\beta}+\frac{\sqrt{9p_1^2+9p_2^2+9p_3^2-9p_1p_2-9p_1p_3-9p_2p_3}}{18\beta\sqrt{3/s-2}}\exp [-(-3/s+3)t]\,.\label{ev_ricci_scalar}
\end{equation}
The numerical behavior of the Ricci scalar is shown in Figure \ref{backevolution}, panel $a)$. There it can be seen that the asymptotic constant value $R \simeq-1/(2\beta)$ is not reproduced exactly in the plot. The reason is that, in the dynamical system described in the appendix, the denominator $4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8$ occurs in all equations and it vanishes on the attractor set. Although it is not possible to check numerically the asymptotic time evolution of the Ricci scalar, it is possible to see in Figure \ref{backevolution}, panel $a)$, that the Ricci scalar decreases asymptotically, as expected for $Q_1<-3$.
\section{Conclusions}
\label{sec-concl}
In the present paper we have considered the past attractor solution for the evolution
of a flat anisotropic universe in $R+R^2$ gravity. Our results, in combination with
already known results, indicate that the properties of the universe's evolution near a
cosmological singularity change significantly when anisotropy
and/or modifications of gravity are taken into account. Indeed, the evolution of an isotropic universe is determined
solely by the matter equation of state. When anisotropy is taken into account,
this isotropic solution becomes a future asymptotic solution, while the generalized vacuum Kasner solution
becomes a past attractor (except for a stiff fluid with the Jacobs solution).
In general quadratic gravity without anisotropy a new vacuum isotropic solution (the ``false radiation'' solution)
is stable to the past. The anisotropic case instead has two sub-cases, because general quadratic
corrections to the gravitational action have two independent terms, which can be chosen as proportional to the squares of the scalar curvature and the Weyl tensor. In a general situation, when
these two terms are of the same order, the dynamical system describing the universe's past
evolution has both the ``false radiation'' and the generalized Kasner solution as attractors (the latter is,
more precisely, a saddle-node fixed point). So the nature of the cosmological singularity (isotropic or anisotropic) depends on the initial conditions imposed.
However, since the $R+R^2$ inflationary model is observationally well
motivated, and we have argued in Sec. II above that there exists a large
range of the Riemann and Ricci curvatures where the anomalously large
$R^2$ term dominates the Einstein term $R$, while
a ``normal-size'' Weyl squared term is still small, one can expect that a new
solution appears. It has two parameters (so it is a two-dimensional set of solutions), and
includes both the isotropic ``false radiation'' solution and the generalized anisotropic Kasner solution (which is a one-dimensional set) as subsets. Moreover, in some sense, it interpolates between them, because it is possible to construct a line of solutions with one end being the isotropic solution and the other end being a point in the generalized Kasner set.
All these intermediate points disappear when the correction proportional to the Weyl square is added to the action,
leaving only the isotropic and generalized Kasner solutions; this represents one of the main results of our paper. However, since the $R^2$ inflation model is observationally well motivated, we can neglect the coefficient in front of the Weyl term, and we can expect that the two-dimensional set of solutions discussed in this paper could be a good approximation for realistic models in quadratic gravity.
In the present paper we have restricted the analysis to flat metrics. However, positive spatial curvature could,
in principle, destroy this regime and generate more complicated behavior, similar to the Belinsky-Khalatnikov-Lifshitz (BKL)~\cite{Belinsky:1970ew} singularity in General Relativity. We leave this problem for future analysis.
\subsection*{Acknowledgements}
We are delighted to thank Sigbj\o rn Hervik for illuminating discussions and comments. D. M. and A. T. thank the University of Stavanger for the warm hospitality when this paper was started. D. M\"uller would like to thank the Brazilian agency FAPDF process no. 193.000.181/2016 for partial support.
The work of A. S. and A. T. was supported by the RSF grant 16-12-10401.
The computations performed in this paper have been partially done with Maple 16 and with the \textit{Ricci.m} package for Mathematica.
For the numerical integrations we used the GNU/GSL ODE package (explicit embedded Runge-Kutta Prince-Dormand) on Linux.
\begin{appendix}
\section{Appendix A}
\label{app-A}
Dynamical system equations in the case without matter ($\Omega_{m}=\Omega_{\lambda}=0$):
\begin{eqnarray}
& & \dot{Q}_{1}=-Q_{1}^{2}-\left\{ -288\Sigma_{-}^{2}-288\Sigma_{-}^{4}-72\Sigma_{-}^{6}+8B+B^{2}-
12Q_{1}\Sigma_{+}^{2}B-12Q_{1}\Sigma_{-}^{2}B\right.\nonumber\\
& &-240\Sigma_{-}^{2}Q_{1}\Sigma_{+}^{2}-36\Sigma_{-}^{2}\Sigma_{+}^{2}B
-216\Sigma_{-}^{2}\Sigma_{+}^{4}-576\Sigma_{-}^{2}\Sigma_{+}^{2}-32\Sigma_{-}^{2}B-192Q_{1}\Sigma_{-}^{2}\nonumber\\
& & -40Q_{1}^{2}\Sigma_{-}^{2}-18\Sigma_{-}^{4}B-120\Sigma_{-}^{4}Q_{1}-216\Sigma_{-}^{4}\Sigma_{+}^{2}
-40Q_{1}^{2}\Sigma_{+}^{2}-32B\Sigma_{+}^{2}\nonumber\\
& &-18B\Sigma_{+}^{4}-120Q_{1}\Sigma_{+}^{4}
-192Q_{1}\Sigma_{+}^{2}+2Q_{1}^{2}B+16Q_{1}B+64Q_{1}^{2}+96Q_{1}+8Q_{1}^{3}\nonumber\\
& & -\Sigma_{+}^{2}B^{2}-288\Sigma_{+}^{4}-288\Sigma_{+}^{2}-72\Sigma_{+}^{6}
\left.-\Sigma_{-}^{2}B^{2}\right\} /\left\{ 4\left(4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8\right)\right\}\,,\\
\nonumber\\
& & \dot{\Sigma}_{+}=-\Sigma_{+}Q_{1}-\Sigma_{+}\left\{ 24\Sigma_{+}^{2}+24+16Q_{1}+2Q_{1}^{2}+24\Sigma_{-}^{2}
+B\Sigma_{+}^{2}+6\Sigma_{-}^{4}+\Sigma_{-}^{2}B\right. \nonumber\\
& &+12\Sigma_{-}^{2}\Sigma_{+}^{2}+8Q_{1}\Sigma_{+}^{2} \left.+2B+6\Sigma_{+}^{4}+8Q_{1}\Sigma_{-}^{2}\right\} /\left\{ 4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8\right\}\,, \\
\nonumber\\
& & \dot{\Sigma}_{-}=-\Sigma_{-}Q_{1}-\Sigma_{-}\left\{ 24\Sigma_{+}^{2}+24+16Q_{1}+2Q_{1}^{2}+24\Sigma_{-}^{2}
+2B+6\Sigma_{+}^{4}+B\Sigma_{+}^{2}\right.\nonumber\\
& &+6\Sigma_{-}^{4}+\Sigma_{-}^{2}B+12\Sigma_{-}^{2}\Sigma_{+}^{2} \left.+8Q_{1}\Sigma_{+}^{2}+8Q_{1}\Sigma_{-}^{2}\right\} /\left\{ 4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8\right\} \,.
\end{eqnarray}
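A minimal sketch of how the vacuum system above can be integrated numerically is the following; it is illustrative only and is not the code used for the paper. SciPy's \texttt{DOP853} integrator is an explicit embedded Runge-Kutta method of Prince-Dormand type, standing in for the GNU/GSL routine mentioned in the acknowledgements; $B$ is frozen to a constant and the initial data and integration interval are placeholders only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

B = 0.1  # treated as a constant parameter purely for this sketch

def rhs(t, y):
    Q1, Sp, Sm = y
    # This denominator vanishes on the attractor set, so the
    # integration necessarily stalls before reaching it.
    D = 4*Sp**2 + 4*Sm**2 + 4*Q1 + B + 8
    num = (-288*Sm**2 - 288*Sm**4 - 72*Sm**6 + 8*B + B**2
           - 12*Q1*Sp**2*B - 12*Q1*Sm**2*B - 240*Sm**2*Q1*Sp**2
           - 36*Sm**2*Sp**2*B - 216*Sm**2*Sp**4 - 576*Sm**2*Sp**2
           - 32*Sm**2*B - 192*Q1*Sm**2 - 40*Q1**2*Sm**2 - 18*Sm**4*B
           - 120*Sm**4*Q1 - 216*Sm**4*Sp**2 - 40*Q1**2*Sp**2
           - 32*B*Sp**2 - 18*B*Sp**4 - 120*Q1*Sp**4 - 192*Q1*Sp**2
           + 2*Q1**2*B + 16*Q1*B + 64*Q1**2 + 96*Q1 + 8*Q1**3
           - Sp**2*B**2 - 288*Sp**4 - 288*Sp**2 - 72*Sp**6 - Sm**2*B**2)
    # The curly-bracketed factor is identical in both shear equations.
    F = (24*Sp**2 + 24 + 16*Q1 + 2*Q1**2 + 24*Sm**2 + 2*B
         + 6*Sp**4 + B*Sp**2 + 6*Sm**4 + B*Sm**2
         + 12*Sm**2*Sp**2 + 8*Q1*Sp**2 + 8*Q1*Sm**2)
    return [-Q1**2 - num/(4*D), -Sp*Q1 - Sp*F/D, -Sm*Q1 - Sm*F/D]

# Illustrative initial data; the direction of the time variable
# relevant for the past evolution is fixed in the main text.
sol = solve_ivp(rhs, (0.0, 10.0), [-3.5, 0.1, 0.1],
                method="DOP853", rtol=1e-10, atol=1e-12)
\end{verbatim}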
Dynamical system equations in the case with matter ($\Omega_{m}\neq0$ and $\Omega_{\lambda}\ne0$):
\begin{eqnarray}
& & \dot{Q}_{1}=-Q_{1}^{2}-\left\{ -\Omega_{\lambda}B^{2}-4Q_{1}B\Omega_{m}-12\Sigma_{+}^{2}B\Omega_{m}-12\Sigma_{-}^{2}B\Omega_{m}
-12Q_{1}\Sigma_{+}^{2}B-12Q_{1}\Sigma_{-}^{2}B\right.\nonumber\\
& &-240\Sigma_{-}^{2}Q_{1}\Sigma_{+}^{2}-
36\Sigma_{-}^{2}\Sigma_{+}^{2}B-4Q_{1}\Omega_{\lambda}B-12\Sigma_{+}^{2}\Omega_{\lambda}B-12\Sigma_{-}^{2}\Omega_{\lambda}B
-B^{2}\Omega_{m}-216\Sigma_{-}^{2}\Sigma_{+}^{4}\nonumber\\
& &-576\Sigma_{-}^{2}\Sigma_{+}^{2}-32\Sigma_{-}^{2}B-192Q_{1}\Sigma_{-}^{2}-40Q_{1}^{2}\Sigma_{-}^{2}-18\Sigma_{-}^{4}B
-120\Sigma_{-}^{4}Q_{1}-216\Sigma_{-}^{4}\Sigma_{+}^{2}\nonumber\\
& &-40Q_{1}^{2}\Sigma_{+}^{2}
-32B\Sigma_{+}^{2}-18B\Sigma_{+}^{4}-120Q_{1}\Sigma_{+}^{4}-192Q_{1}\Sigma_{+}^{2}
+2Q_{1}^{2}B+16Q_{1}B-8\Omega_{\lambda}B\nonumber\\
& &-8B\Omega_{m}+64Q_{1}^{2}+96Q_{1} +8Q_{1}^{3}-\Sigma_{+}^{2}B^{2}-288\Sigma_{+}^{4}-288\Sigma_{+}^{2}-72\Sigma_{+}^{6}
-\Sigma_{-}^{2}B^{2}\nonumber\\
& &-288\Sigma_{-}^{2}-288\Sigma_{-}^{4}-72\Sigma_{-}^{6} \left.+8B+B^{2}\right\} /\left\{ 4\left(4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8\right)\right\}\,, \\
\nonumber\\
& & \dot{\Sigma}_{+}=-\Sigma_{+}Q_{1}-\Sigma_{+}\left\{ 24\Sigma_{+}^{2}+24+24\Sigma_{-}^{2}+2B+16Q_{1}+2Q_{1}^{2}
+6\Sigma_{+}^{4}+B\Sigma_{+}^{2}+6\Sigma_{-}^{4}\right.\nonumber\\
& &+\Omega_{\lambda}B+\Sigma_{-}^{2}B+12\Sigma_{-}^{2}\Sigma_{+}^{2}
\left.+8Q_{1}\Sigma_{+}^{2}+B\Omega_{m}+8Q_{1}\Sigma_{-}^{2}\right\} /\left\{ 4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8\right\}\,, \\
\nonumber \\
& & \dot{\Sigma}_{-}=-\Sigma_{-}Q_{1}-\Sigma_{-}\left\{ 24\Sigma_{+}^{2}+24+24\Sigma_{-}^{2}+2B+16Q_{1}+2Q_{1}^{2}+6\Sigma_{+}^{4}
+B\Sigma_{+}^{2}+6\Sigma_{-}^{4}\right.\nonumber\\
& &+\Omega_{\lambda}B+\Sigma_{-}^{2}B+12\Sigma_{-}^{2}\Sigma_{+}^{2}+8Q_{1}\Sigma_{+}^{2}
\left.+B\Omega_{m}+8Q_{1}\Sigma_{-}^{2}\right\} /\left\{ 4\Sigma_{+}^{2}+4\Sigma_{-}^{2}+4Q_{1}+B+8\right\} \,.
\end{eqnarray}
\end{appendix}
\newpage
\bibliographystyle{utcaps}
\section{Introduction and preliminaries}
Let $\mathfrak{A}$ be a unital $C^*$-algebra with unit denoted by $e$.
We denote by $\mathcal{U}(\mathfrak{A})$ and $\mathcal{Z}(\mathfrak{A})$
the group of all unitary elements in $\mathfrak{A}$ and the centre of $\mathfrak{A}$, respectively.
For an element $x$ of $\mathfrak{A}$, we denote by
$\mbox{Re}(x) = \frac{1}{2}(x + x^*)$ and $\mbox{Im}(x) = \frac{1}{2i}(x - x^*)$
the real and the imaginary part of $x$.
Let $\mathfrak{A}'$ denote the dual space of $\mathfrak{A}$, and define the set of normalized states of
$\mathfrak{A}$ by
\begin{align*}
\mathcal{S}(\mathfrak{A}) = \{\varphi \in \mathfrak{A}':\, \varphi (e) = \|\varphi\| = 1\}.
\end{align*}
A linear functional $\varphi \in \mathfrak{A}'$ is said to be positive, and we write $\varphi \geq 0$,
if $\varphi(x^*x) \geq 0$ for all $x\in \mathfrak{A}$. Note that the set of
normalized states $\mathcal{S}(\mathfrak{A})$ is nothing but
\begin{align*}
\mathcal{S}(\mathfrak{A}) = \{\varphi \in \mathfrak{A}':\, \varphi \geq 0 \quad \mbox{and} \quad \varphi (e) = 1\}.
\end{align*}
Recall that a positive linear functional $\varphi$ on $\mathfrak{A}$ is said to be pure if for every positive
functional $\psi$ on $\mathfrak{A}$ satisfying $\psi(x^*x) \leq \varphi(x^*x)$
for all $x\in \mathfrak{A}$, there is a scalar $0 \leq \mu \leq 1$
such that $\psi = \mu \varphi$. The set of pure states on $\mathfrak{A}$
is denoted by $\mathcal{P}(\mathfrak{A})$.
The numerical range of an element $x\in \mathfrak{A}$ is
$V(x) = \{\varphi(x):\, \varphi \in \mathcal{S}(\mathfrak{A})\}.$
It is a nonempty compact and convex set of the complex plane $\mathbb{C}$,
and its maximum modulus is the numerical radius $v(x)$ of $x$; i.e.
$v(x) = \sup\{|z|: \, z\in V(x)\}$.
It is well known that $v(\cdot)$ defines a norm on $\mathfrak{A}$, which is equivalent
to the $C^*$-norm $\|\cdot\|$. In fact, the following inequalities are well known:
\begin{align}\label{I.1.1}
\frac{1}{2}\|x\| \leq v(x)\leq \|x\| \qquad (x\in \mathfrak{A}).
\end{align}
It is a basic fact that the norm $v(\cdot)$ is self-adjoint (i.e., $v(x^*) = v(x)$ for
every $x\in \mathfrak{A}$) and also, if $x$ is normal, then $v(x) = \|x\|$.
Also, since $\mathcal{P}(\mathfrak{A})$ coincides with the set of
all extremal points of $\mathcal{S}(\mathfrak{A})$,
for every $x\in \mathfrak{A}$ we have
\begin{align*}
v(x) = \displaystyle{\sup_{\varphi \in \mathcal{S}(\mathfrak{A})}}|\varphi(x)|
= \displaystyle{\sup_{\varphi \in \mathcal{P}(\mathfrak{A})}}|\varphi (x)|.
\end{align*}
When $\mathfrak{A} = \mathbb{B}(\mathscr{H})$ is the $C^*$-algebra
of all bounded linear operators on a complex Hilbert space
$\big(\mathscr{H}, \langle \cdot, \cdot\rangle\big)$ and $T\in \mathbb{B}(\mathscr{H})$,
it is well known that $V(T)$ is the closure of $W(T)$, the spatial numerical
range of $T$ defined by
$W(T) = \big\{\langle Tx, x\rangle:x\in \mathscr{H},\|x\| = 1\big\}.$
It is known as well that $W(T)$ is a nonempty bounded convex subset of $\mathbb{C}$
(not necessarily closed), and its supremum modulus, denoted by
$\omega(T) = \sup\{|z|: \, z\in W(T)\}$,
is called the spatial numerical radius of $T$ and coincides with $v(T)$.
For more material about the numerical radius and
other information on the basic theory of algebraic numerical range,
we refer the reader to \cite{B.D} and \cite{G.R}.
Some other related topics can be found in \cite{B.S, Dr, E.K, H.K.S, M.S, Sa, Sh, Z.2}.
Now, let $(\mathscr{X}, \|\cdot\|)$ be a normed space.
An element $x\in \mathscr{X}$ is said to be norm--parallel to another element
$y\in \mathscr{X}$ (see \cite{S, Z.M.1}), in short
$x\parallel y$, if
$\|x+\lambda y\|=\|x\|+\|y\|$ for some $\lambda\in\mathbb{T}$.
Here, as usual, $\mathbb{T}$ denotes the unit circle of the complex plane $\mathbb{C}$.
In the context of continuous functions,
the well-known Daugavet equation $\|T + Id\| = \|T\| + 1$
is a particular case of parallelism. Here $Id$ denotes the identity function.
This property of a function, apart from being interesting in its own
right, arises naturally in problems dealing with best approximations in
function spaces; see \cite{We} and the references therein.
In the framework of inner product spaces, the norm--parallel relation
is exactly the usual vectorial parallel relation, that is,
$x\parallel y$ if and only if $x$ and $y$ are linearly dependent.
In the setting of normed linear spaces, two linearly
dependent vectors are norm--parallel, but the converse is false in general.
Some characterizations of the norm--parallelism for
operators on various Banach spaces and elements of an arbitrary Hilbert $C^*$-module
were given in \cite{B.C.M.W.Z, G, M.S.P, W, Z.1, Z.M.1, Z.M.2}.
Now, let us introduce a new type of parallelism in
$C^*$-algebras based on numerical radius.
\begin{definition}\label{D.1.2}
An element $x\in\mathfrak{A}$ is called the numerical radius parallel
to another element $y \in\mathfrak{A}$, denoted by $x\,{\parallel}_v \,y$, if
$v(x + \lambda y) = v(x) + v(y)$ for some $\lambda\in\mathbb{T}$.
\end{definition}
It is easy to see that the numerical radius parallelism is reflexive ($x\,{\parallel}_v \,x$),
symmetric ($x\,{\parallel}_v \,y$ if and only if
$y\,{\parallel}_v \,x$) and $\mathbb{R}$-homogenous
($x\,{\parallel}_v \,y \Rightarrow \alpha x\,{\parallel}_v \,\beta y$
for all $\alpha, \beta \in \mathbb{R}$)).
Notice that two linearly dependent elements are numerical
radius parallel. The converse, however, is not true in general.
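For instance, in $\mathfrak{A} = M_{2}(\mathbb{C})$ take
\begin{align*}
x = \begin{bmatrix}
1 & 0\\
0 & 0
\end{bmatrix}
\qquad \mbox{and} \qquad
y = \begin{bmatrix}
1 & 0\\
0 & -1
\end{bmatrix}.
\end{align*}
Both elements are normal, so $v(x) = \|x\| = 1$ and $v(y) = \|y\| = 1$, while $v(x + y) = 2 = v(x) + v(y)$ (take $\lambda = 1$). Hence $x\,{\parallel}_v \,y$ although $x$ and $y$ are linearly independent.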
The organization of this paper will be as follows.
Inspired by the numerical radius inequalities of bounded linear operators
in \cite{A.K.1}, \cite{A.K.2}, \cite{K.1}, \cite{K.2}, \cite{K.3}, \cite{K.M.Y}, \cite{Y}
and by using some of their ideas,
we first state a useful characterization
of the numerical radius for elements of a $C^*$-algebra, as follows:
\begin{align*}
v(x) = \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\|.
\end{align*}
We then apply it to prove several numerical
radius inequalities in $C^*$-algebras.
Moreover, we give new improvements of the inequalities (\ref{I.1.1}).
We also give an expression of $v(x)$ in terms of the real and imaginary
parts of $x\in \mathfrak{A}$, as follows:
\begin{align*}
v(x) = \displaystyle{\sup_{\alpha^2 + \beta^2 = 1}}\big\|\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big\|.
\end{align*}
In particular, we then show that if $x\in\mathfrak{A}$, then $v(x) = \frac{1}{2}\|x\|$
if and only if $\|x\| = \|\mbox{Re}(e^{i\theta}x)\| + \|\mbox{Im}(e^{i\theta}x)\|$
for all $\theta \in \mathbb{R}$.
Our results generalize recent numerical
radius inequalities of bounded linear operators due to Kittaneh et al.
\cite{K.2, K.M.Y, Y}.
In addition, we present a refinement of the triangle inequality
for the numerical radius in $C^*$-algebras.
We then apply it to give a necessary condition for the numerical radius parallelism.
Furthermore, for two elements $x$ and $y$ in $\mathfrak{A}$
we show that $x\,{\parallel}_v \,y$ if and only if there exists a pure state
$\varphi$ on $\mathfrak{A}$ such that $|\varphi(x)\varphi(y)| = v(x)v(y)$.
Finally, we prove that if $c\in \mathcal{Z}(\mathfrak{A})\cap \mathcal{U}(\mathfrak{A})$,
then $x\,{\parallel}_v \,y$ holds exactly when $cx\,{\parallel}_v \,cy$.
\section{Main results}
We start our work with the following lemma.
\begin{lemma}\label{L.2.1}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $\varphi$ be a state over $\mathfrak{A}$.
For $x\in \mathfrak{A}$ the following statements hold.
\begin{itemize}
\item[(i)] $\displaystyle{\sup_{\theta \in \mathbb{R}}}\big|\mbox{Re}\big(e^{i\theta}\varphi(x)\big)\big| = |\varphi(x)|$.
\item[(ii)] $\displaystyle{\sup_{\theta \in \mathbb{R}}}\big|\mbox{Im}\big(e^{i\theta}\varphi(x)\big)\big| = |\varphi(x)|$.
\end{itemize}
\end{lemma}
\begin{proof}
We may assume that $\varphi(x) \neq 0$; otherwise (i) and (ii) trivially hold.
(i) Put $e^{i{\theta}_0} = \frac{\overline{\varphi(x)}}{|\varphi(x)|}$.
Then we have
\begin{align*}
|\varphi(x)| = \big|\mbox{Re}\big(e^{i{\theta}_0}\varphi(x)\big)\big|
\leq \sup_{\theta \in \mathbb{R}}\big|\mbox{Re}\big(e^{i\theta}\varphi(x)\big)\big|
\leq \sup_{\theta \in \mathbb{R}}\big|e^{i\theta}\varphi(x)\big| = |\varphi(x)|,
\end{align*}
and hence $|\varphi(x)| = \sup_{\theta \in \mathbb{R}}\big|\mbox{Re}\big(e^{i\theta}\varphi(x)\big)\big|$.
(ii) By replacing $x$ in (i) by $ix$, we obtain
\begin{align*}
\sup_{\theta \in \mathbb{R}}\big|\mbox{Im}\big(e^{i\theta}\varphi(x)\big)\big|
= \sup_{\theta \in \mathbb{R}}\big|\mbox{Re}\big(e^{i\theta}\varphi(ix)\big)\big|
= |\varphi(ix)| = |\varphi(x)|.
\end{align*}
\end{proof}
Now, we are in a position to state two useful characterizations
of the numerical radius for elements of a $C^*$-algebra.
\begin{theorem}\label{T.2.2}
Let $\mathfrak{A}$ be a $C^*$-algebra.
For $x\in \mathfrak{A}$ the following statements hold.
\begin{itemize}
\item[(i)] $\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\| = v(x)$.
\item[(ii)] $\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Im}(e^{i\theta}x)\| = v(x)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Since $\mbox{Re}(e^{i\theta}x)$ is self-adjoint for any $\theta \in \mathbb{R}$,
we have
\begin{align*}
\|\mbox{Re}(e^{i\theta}x)\| = v(\mbox{Re}(e^{i\theta}x)).
\end{align*}
Therefore, we get
\begin{align*}
\sup_{\theta \in \mathbb{R}}\|\mbox{Re}(e^{i\theta}x)\| &= \sup_{\theta \in \mathbb{R}}v(\mbox{Re}(e^{i\theta}x))
\\& = \sup_{\theta \in \mathbb{R}}\sup_{\varphi\in \mathcal{S}(\mathfrak{A})}|\varphi\big(\mbox{Re}(e^{i\theta}x)\big)|
\\& = \sup_{\theta \in \mathbb{R}}\sup_{\varphi\in \mathcal{S}(\mathfrak{A})}|\mbox{Re}(e^{i\theta}\varphi(x))|
\\& = \sup_{\varphi\in \mathcal{S}(\mathfrak{A})}\sup_{\theta \in \mathbb{R}}|\mbox{Re}(e^{i\theta}\varphi(x))|
\\& = \sup_{\varphi\in \mathcal{S}(\mathfrak{A})}|\varphi(x)|
\qquad\big(\mbox{by Lemma \ref{L.2.1} (i)}\big)
\\& = v(x).
\end{align*}
Thus $\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\| = v(x)$.
(ii) By replacing $x$ in (i) by $ix$, we reach that
\begin{align*}
\sup_{\theta \in \mathbb{R}}\|\mbox{Im}(e^{i\theta}x)\|
= \sup_{\theta \in \mathbb{R}}\|\mbox{Re}(e^{i\theta}(ix))\|
= v(ix) = v(x).
\end{align*}
\end{proof}
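As a quick numerical illustration of Theorem \ref{T.2.2} in the matrix algebra $M_{n}(\mathbb{C})$, the supremum over $\theta$ can be approximated on a finite grid; the following sketch is purely illustrative (the function name and grid size are our own choices).
\begin{verbatim}
# Approximate v(x) = sup_theta ||Re(e^{i theta} x)|| on a grid of
# angles; for a self-adjoint matrix the operator norm equals the
# largest absolute eigenvalue.
import numpy as np

def numerical_radius(x, n_theta=4096):
    thetas = np.linspace(0.0, 2.0*np.pi, n_theta, endpoint=False)
    v = 0.0
    for t in thetas:
        re = (np.exp(1j*t)*x + np.exp(-1j*t)*x.conj().T) / 2.0
        v = max(v, np.abs(np.linalg.eigvalsh(re)).max())
    return v

x = np.array([[0.0, 1.0], [0.0, 0.0]])   # x^2 = 0 and ||x|| = 1
print(numerical_radius(x))               # ~0.5, i.e. ||x||/2
\end{verbatim}
The printed value $\approx 0.5$ agrees with Corollary \ref{C.2.4} below.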
Recall that the Crawford number of $T\in\mathbb{B}(\mathscr{H})$ is defined by
\begin{align*}
c(T) = \inf \{|\langle Tx, x\rangle|:x\in \mathscr{H},\|x\| =1\}.
\end{align*}
This concept is useful in studying linear operators (e.g., see \cite{A.K.1, Z.2}, and their references).
The Crawford number of $z\in \mathfrak{A}$ can be defined by
\begin{align*}
c(z) = \inf\{|\varphi(z)|: \, \varphi \in \mathcal{S}(\mathfrak{A})\}.
\end{align*}
In the following theorem, we give a new improvement of the inequalities (\ref{I.1.1}).
\begin{theorem}\label{T.2.3}
Let $\mathfrak{A}$ be a $C^*$-algebra.
For $x\in \mathfrak{A}$ the following statements hold.
\begin{itemize}
\item[(i)] $\frac{1}{2}\|x\| \leq \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2c(x^2)} \leq v(x)$.
\item[(ii)] $v(x) \leq \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2v(x^2)}
\leq \frac{1}{2}\big(\|x\| + {\|x^2\|}^{\frac{1}{2}}\big) \leq \|x\|$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Let $x\in \mathfrak{A}$. By \cite[Theorem 3.3.6]{Mur} there is a state $\varphi$ over $\mathfrak{A}$ such that
\begin{align}\label{I.2.3.1}
\varphi\big(|x|^2 + |x^*|^2\big) = \big\|\,|x|^2 + |x^*|^2\big\|.
\end{align}
Let ${\theta}_0$ be a real number such that $|\varphi(x^2)| = e^{2i{\theta}_0}\varphi(x^2)$. Then, by Theorem \ref{T.2.2} (i), we have
\begin{align*}
v(x) \geq \|\mbox{Re}(e^{i{\theta}_0}x)\| &= \frac{1}{2}\|e^{i{\theta}_0}x + e^{-i{\theta}_0}x^*\|
\\& = \frac{1}{2}\sqrt{\big\|\big(e^{i{\theta}_0}x + e^{-i{\theta}_0}x^*\big)\big(e^{i{\theta}_0}x + e^{-i{\theta}_0}x^*\big)^*\big\|}
\\& = \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2 + 2\mbox{Re}(e^{2i{\theta}_0}x^2)\big\|}
\\& \geq \frac{1}{2}\sqrt{\big|\varphi\big(|x|^2 + |x^*|^2 + 2\mbox{Re}(e^{2i{\theta}_0}x^2)\big)\big|}
\\& = \frac{1}{2}\sqrt{\big|\varphi\big(|x|^2 + |x^*|^2\big) + 2\mbox{Re}\big(e^{2i{\theta}_0}\varphi(x^2)\big)\big|}
\\& = \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2|\varphi(x^2)|} \qquad\big(\mbox{by (\ref{I.2.3.1})}\big)
\\& \geq \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2c(x^2)} \geq \frac{1}{2}\|x\|,
\end{align*}
which proves the inequalities in (i).
(ii) By Theorem \ref{T.2.2} (i), as in the proof of (i) we get
\begin{align*}
v(x) &= \sup_{\theta \in \mathbb{R}}\|\mbox{Re}(e^{i\theta}x)\|
\\& = \frac{1}{2}\sup_{\theta \in \mathbb{R}}\sqrt{\big\|\,|x|^2 + |x^*|^2 + 2\mbox{Re}(e^{2i\theta}x^2)\big\|}
\\& \leq \frac{1}{2}\sup_{\theta \in \mathbb{R}}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2\big\|\mbox{Re}(e^{2i\theta}x^2)\big\|}
\\& \leq \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2\sup_{\theta \in \mathbb{R}}\big\|\mbox{Re}(e^{2i\theta}x^2)\big\|}
\\& = \frac{1}{2}\sqrt{\big\|\,|x|^2 + |x^*|^2\big\| + 2v(x^2)}
\\& \leq \frac{1}{2}\sqrt{\|x\|^2 + \|x^2\| + 2v(x^2)}
\\& \leq \frac{1}{2}\sqrt{\|x\|^2 + 3\|x^2\|} \qquad\big(\mbox{by (\ref{I.1.1})}\big)
\\& \leq \frac{1}{2}\sqrt{\|x\|^2 + 2\|x\|{\|x^2\|}^{\frac{1}{2}} + \|x^2\|}
\qquad\big(\mbox{since $\|x^2\| = {\|x^2\|}^{\frac{1}{2}}{\|x^2\|}^{\frac{1}{2}} \leq \|x\|{\|x^2\|}^{\frac{1}{2}}$}\big)
\\& = \frac{1}{2}\big(\|x\| + {\|x^2\|}^{\frac{1}{2}}\big) \leq \|x\|,
\end{align*}
which proves the inequalities in (ii).
\end{proof}
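The chain of inequalities in Theorem \ref{T.2.3} (ii) can be sanity-checked numerically in $M_{4}(\mathbb{C})$ with the illustrative sketch given after Theorem \ref{T.2.2}:
\begin{verbatim}
# Check v(x) <= (1/2)sqrt(|| |x|^2+|x*|^2 || + 2 v(x^2))
#            <= (1/2)(||x|| + ||x^2||^(1/2)) <= ||x||
# for a random complex matrix, reusing numerical_radius from above.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))

A = x.conj().T @ x + x @ x.conj().T          # |x|^2 + |x*|^2
vx    = numerical_radius(x)
mid   = 0.5*np.sqrt(np.linalg.norm(A, 2) + 2*numerical_radius(x @ x))
right = 0.5*(np.linalg.norm(x, 2) + np.sqrt(np.linalg.norm(x @ x, 2)))
print(vx <= mid <= right <= np.linalg.norm(x, 2))   # expect: True
\end{verbatim}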
As a consequence of Theorem \ref{T.2.3}, we have the following result.
\begin{corollary}\label{C.2.4}
Let $\mathfrak{A}$ be a $C^*$-algebra. If $x\in \mathfrak{A}$ is such that $x^2 = 0$,
then $v(x) = \frac{1}{2}\|x\|$.
\end{corollary}
\begin{proof}
Since $x^2 = 0$, by Theorem \ref{T.2.3} (ii), we obtain
$v(x) \leq \frac{1}{2}\big(\|x\| + {\|x^2\|}^{\frac{1}{2}}\big) = \frac{1}{2}\|x\|$.
We also have that $\frac{1}{2}\|x\| \leq v(x)$ for every $x\in \mathfrak{A}$.
Thus $v(x) = \frac{1}{2}\|x\|$.
\end{proof}
The following result is another consequence of Theorem \ref{T.2.3}.
\begin{corollary}\label{C.2.5}
Let $\mathfrak{A}$ be a $C^*$-algebra. If $x\in \mathfrak{A}$ is such that $v(x) = \|x\|$,
then $\|x^2\| = {\|x\|}^2$.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{T.2.3} (ii) that $v(x) = \|x\|$ implies
$\|x\| \leq \frac{1}{2}\big(\|x\| + {\|x^2\|}^{\frac{1}{2}}\big) \leq \|x\|$.
Thus $\|x\| = {\|x^2\|}^{\frac{1}{2}}$, or equivalently $\|x^2\| = {\|x\|}^2$.
\end{proof}
In the following theorem we give an expression of $v(x)$
in terms of the real and imaginary parts of $x\in \mathfrak{A}$.
\begin{theorem}\label{T.2.6}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $x\in \mathfrak{A}$.
Then for $\alpha , \beta \in \mathbb{R}$, the following statements hold.
\begin{itemize}
\item[(i)] $\displaystyle{\sup_{\alpha^2 + \beta^2 = 1}}\big\|\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big\| = v(x)$.
\item[(ii)] $\max\big\{\|\mbox{Re}(x)\|, \|\mbox{Im}(x)\|\big\}\leq v(x)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Let $\theta \in \mathbb{R}$. Put $\alpha = \cos \theta$ and $\beta = -\sin \theta$.
We have
\begin{align*}
\mbox{Re}(e^{i\theta}x) &= \frac{e^{i\theta}x + e^{-i\theta}x^*}{2}
\\& = \frac{(\cos \theta + i\sin \theta)x + (\cos \theta -i \sin \theta)x^*}{2}
\\& = (\cos \theta)\frac{x + x^*}{2} - (\sin \theta)\frac{x - x^*}{2i}
\\& = \alpha \mbox{Re}(x) + \beta \mbox{Im}(x).
\end{align*}
Therefore
\begin{align*}
\sup_{\theta \in \mathbb{R}}\|\mbox{Re}(e^{i\theta}x)\|
= \sup_{\alpha^2 + \beta^2 = 1}\big\|\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big\|,
\end{align*}
and hence by Theorem \ref{T.2.2} (i) we obtain
$v(x) = \displaystyle{\sup_{\alpha^2 + \beta^2 = 1}}\big\|\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big\|$.
(ii) By setting $(\alpha, \beta) = (1, 0)$ and $(\alpha, \beta) =(0, 1)$ in (i), we get
$\|\mbox{Re}(x)\| \leq v(x)$ and $\|\mbox{Im}(x)\| \leq v(x)$. Thus $\max\big\{\|\mbox{Re}(x)\|, \|\mbox{Im}(x)\|\big\}\leq v(x)$.
\end{proof}
In the next result, we obtain a necessary
and sufficient condition for $v(x) = \frac{1}{2}\|x\|$ to hold.
We will need the following lemma.
\begin{lemma}\cite[Corollary 4.4]{Z.M.1}\label{L.2.7}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $x, y\in \mathfrak{A}$.
Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $x\parallel y$.
\item[(ii)] There exists a state $\varphi$ over $\mathfrak{A}$ such that $|\varphi(x^*y)| = \|x\|\|y\|$.
\end{itemize}
\end{lemma}
\begin{theorem}\label{T.2.8}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $x\in \mathfrak{A}$.
Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $v(x) = \frac{1}{2}\|x\|$.
\item[(ii)] $\|x\| = \|\mbox{Re}(e^{i\theta}x)\| + \|\mbox{Im}(e^{i\theta}x)\|$
for all $\theta \in \mathbb{R}$.
\end{itemize}
\end{theorem}
\begin{proof}
(i)$\Rightarrow$(ii) Suppose that $v(x) = \frac{1}{2}\|x\|$.
Then for any $\theta \in \mathbb{R}$, we have
\begin{align*}
\|x\| = \|e^{i\theta}x\| &= \|\mbox{Re}(e^{i\theta}x) + i\mbox{Im}(e^{i\theta}x)\|
\\&\leq \|\mbox{Re}(e^{i\theta}x)\| + \|\mbox{Im}(e^{i\theta}x)\|
\\&\leq 2\max\big\{\|\mbox{Re}(e^{i\theta}x)\|, \|\mbox{Im}(e^{i\theta}x)\|\big\}
\\& \leq 2v(e^{i\theta}x)
\qquad\big(\mbox{by Theorem \ref{T.2.6} (ii)}\big)
\\& = 2v(x) = \|x\|,
\end{align*}
and hence $\|x\| = \|\mbox{Re}(e^{i\theta}x)\| + \|\mbox{Im}(e^{i\theta}x)\|$.
(ii)$\Rightarrow$(i) Suppose (ii) holds.
Thus for all $\theta \in \mathbb{R}$,
\begin{align*}
\|\mbox{Re}(e^{i\theta}x) + i\mbox{Im}(e^{i\theta}x)\| = \|\mbox{Re}(e^{i\theta}x)\| + \|\mbox{Im}(e^{i\theta}x)\|,
\end{align*}
so $\mbox{Re}(e^{i\theta}x) \parallel \mbox{Im}(e^{i\theta}x)$.
By Lemma \ref{L.2.7}, there exists a state $\varphi$ over $\mathfrak{A}$ such that
\begin{align*}
\Big|\varphi\Big(\big(\mbox{Re}(e^{i\theta}x)\big)^*\mbox{Im}(e^{i\theta}x)\Big)\Big|
= \|\mbox{Re}(e^{i\theta}x)\|\,\|\mbox{Im}(e^{i\theta}x)\|,
\end{align*}
and hence
\begin{align*}
\big|\varphi\big(\mbox{Re}(e^{i\theta}x)\mbox{Im}(e^{i\theta}x)\big)\big|
= \|\mbox{Re}(e^{i\theta}x)\|\,\|\mbox{Im}(e^{i\theta}x)\|.
\end{align*}
From this it follows that $v\big(\mbox{Re}(e^{i\theta}x)\mbox{Im}(e^{i\theta}x)\big)
= \|\mbox{Re}(e^{i\theta}x)\|\,\|\mbox{Im}(e^{i\theta}x)\|$,
so by Theorem \ref{T.2.2} (ii) we reach that
\begin{align}\label{T.2.8.1}
\|\mbox{Re}(e^{i\theta}x)\|\,\|\mbox{Im}(e^{i\theta}x)\|
= \big\|\mbox{Im}\big(\mbox{Re}(e^{i\theta}x)\mbox{Im}(e^{i\theta}x)\big)\big\|.
\end{align}
On the other hand,
\begin{align*}
\mbox{Im}\big(\mbox{Re}(e^{i\theta}x)\mbox{Im}(e^{i\theta}x)\big)
&= \mbox{Im}\Big((\frac{e^{i\theta}x + e^{-i\theta}x^*}{2})(\frac{e^{i\theta}x - e^{-i\theta}x^*}{2i})\Big)
\\& = \mbox{Im}\Big(\frac{e^{2i\theta}x^2 - e^{-2i\theta}{x^*}^2 -xx^* + x^*x}{4i}\Big)
\\& = \frac{1}{2i}\Big\{\frac{e^{2i\theta}x^2 - e^{-2i\theta}{x^*}^2 -xx^* + x^*x}{4i}
\\& \qquad \qquad \qquad - \frac{e^{-2i\theta}{x^*}^2 - e^{2i\theta}x^2 -xx^* + x^*x}{-4i}\Big\}
\\& = \frac{xx^* - x^*x}{4}
= \mbox{Im}\big(\mbox{Re}(x)\mbox{Im}(x)\big),
\end{align*}
and by (\ref{T.2.8.1}) we get
\begin{align}\label{T.2.8.2}
\|\mbox{Re}(e^{i\theta}x)\|\,\|\mbox{Im}(e^{i\theta}x)\|
= \big\|\mbox{Im}\big(\mbox{Re}(x)\mbox{Im}(x)\big)\big\|.
\end{align}
Thus for all $\theta \in \mathbb{R}$, by (ii) and (\ref{T.2.8.2}) we obtain
\begin{align}\label{T.2.8.3}
\|\mbox{Re}(e^{i\theta}x)\| = \frac{\|x\| + \sqrt{\|x\|^2
- 4\big\|\mbox{Im}\big(\mbox{Re}(x)\mbox{Im}(x)\big)\big\|}}{2}
\end{align}
and
\begin{align}\label{T.2.8.4}
\|\mbox{Im}(e^{i\theta}x)\| = \frac{\|x\| - \sqrt{\|x\|^2
- 4\big\|\mbox{Im}\big(\mbox{Re}(x)\mbox{Im}(x)\big)\big\|}}{2}.
\end{align}
Since
\begin{align*}
\mbox{Re}(e^{i\theta}x)
= \frac{(\cos \theta + i\sin \theta)x + (\cos \theta - i\sin \theta)x^*}{2}
= \cos \theta \mbox{Re}(x) - \sin \theta\mbox{Im}(x)
\end{align*}
and
\begin{align*}
\mbox{Im}(e^{i\theta}x)
= \frac{(\cos \theta + i\sin \theta)x - (\cos \theta - i\sin \theta)x^*}{2i}
= \sin \theta\mbox{Re}(x) + \cos \theta\mbox{Im}(x)
\end{align*}
the functions $\theta \mapsto \|\mbox{Re}(e^{i\theta}x)\|$ and $\theta \mapsto \|\mbox{Im}(e^{i\theta}x)\|$
are continuous on $\mathbb{R}$, while relations (\ref{T.2.8.3}) and (\ref{T.2.8.4}) allow them only the two fixed values displayed there; hence they must be constant. Moreover, $\mbox{Re}\big(e^{i(\theta + \pi/2)}x\big) = -\mbox{Im}(e^{i\theta}x)$, so the two constants coincide, i.e.,
\begin{align*}
\|\mbox{Re}(e^{i\theta}x)\| = \|\mbox{Im}(e^{i\theta}x)\| = \frac{1}{2}\|x\| \qquad (\theta \in \mathbb{R}).
\end{align*}
Thus $\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\| = \frac{1}{2}\|x\|$.
Now, by Theorem \ref{T.2.2} (i) we conclude that $v(x) = \frac{1}{2}\|x\|$.
\end{proof}
Next, we present another improvement of the inequalities (\ref{I.1.1}).
\begin{theorem}\label{T.2.9}
Let $\mathfrak{A}$ be a $C^*$-algebra. For $x\in \mathfrak{A}$ the following statements hold.
\begin{itemize}
\item[(i)] $\frac{1}{2}\|x\| \leq \frac{1}{2}\sqrt{\|x^*x + xx^*\|}\leq v(x)$.
\item[(ii)] $v(x) \leq \frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|} \leq \|x\|$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Let $x\in \mathfrak{A}$. Clearly, $\frac{1}{2}\|x\| \leq \frac{1}{2}\sqrt{\|x^*x + xx^*\|}$.
But, by simple computations,
\begin{align*}
x^*x + xx^* = 2\mbox{Re}^2(x) + 2\mbox{Im}^2(x).
\end{align*}
Consequently, by Theorem \ref{T.2.6} (ii) we get
\begin{align*}
\frac{1}{2}\sqrt{\|x^*x + xx^*\|} &= \frac{1}{2}\sqrt{\big\|2\mbox{Re}^2(x) + 2\mbox{Im}^2(x)\big\|}
\\& \leq \frac{1}{2}\sqrt{2\|\mbox{Re}(x)\|^2 + 2\|\mbox{Im}(x)\|^2}
\\& \leq \frac{1}{2}\sqrt{2v^2(x) + 2v^2(x)} = v(x)\,.
\end{align*}
Therefore $\frac{1}{2}\sqrt{\|x^*x + xx^*\|}\leq v(x)$.
(ii) Obviously, $\frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|} \leq \|x\|$.
Now, let $\pi: \mathfrak{A} \rightarrow \mathbb{B}(\mathscr{H})$ be a non-degenerate faithful representation
of $\mathfrak{A}$ on some Hilbert space $\mathscr{H}$ (see \cite[Theorem 2.6.1]{Dix}).
Let $\alpha, \beta \in \mathbb{R}$ satisfy $\alpha^2 + \beta^2 = 1$.
Then for any unit vector $\xi \in \mathscr{H}$, we have
\begin{align*}
\big\|\pi\big(\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big)\xi\big\| &
=\left\|\begin{bmatrix}
\pi(\mbox{Re}(x)) & \pi(\mbox{Im}(x))\\
0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha \xi\\
\beta \xi
\end{bmatrix}
\right\|
\\&\leq \left\|\begin{bmatrix}
\pi(\mbox{Re}(x)) & \pi(\mbox{Im}(x))\\
0 & 0
\end{bmatrix}
\right\|
\\& = \left\|\begin{bmatrix}
\mbox{Re}(\pi(x)) & \mbox{Im}(\pi(x))\\
0 & 0
\end{bmatrix}
\right\|
\,\,\quad(\mbox{since $\pi$ is a representation})
\\& = {\left\|\begin{bmatrix}
\mbox{Re}(\pi(x)) & \mbox{Im}(\pi(x))\\
0 & 0
\end{bmatrix}
\begin{bmatrix}
\mbox{Re}(\pi(x)) & 0\\
\mbox{Im}(\pi(x)) & 0
\end{bmatrix}
\right\|}^{\frac{1}{2}}
\\&= {\big\|\mbox{Re}^2(\pi(x)) + \mbox{Im}^2(\pi(x))\big\|}^{\frac{1}{2}}
\\&= \frac{1}{\sqrt{2}}{\big\|\pi(x)^*\pi(x) + \pi(x)\pi(x)^*\big\|}^{\frac{1}{2}}
\\&= \frac{1}{\sqrt{2}}{\big\|\pi(x^*x + xx^*)\big\|}^{\frac{1}{2}}
\\&= \frac{1}{\sqrt{2}}{\|x^*x + xx^*\|}^{\frac{1}{2}}.
\qquad(\mbox{since $\pi$ is isometric})
\end{align*}
Hence we have $\big\|\pi(\alpha \mbox{Re}(x) + \beta \mbox{Im}(x))\xi\big\| \leq \frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|}$
and so by taking the supremum over all unit vectors $\xi \in \mathscr{H}$ we obtain
$\big\|\pi(\alpha \mbox{Re}(x) + \beta \mbox{Im}(x))\big\| \leq \frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|}$.
From this it follows that $\big\|\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big\| \leq \frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|}$
and hence
\begin{align*}
\displaystyle{\sup_{\alpha^2 + \beta^2 = 1}}\big\|\alpha \mbox{Re}(x) + \beta \mbox{Im}(x)\big\|
\leq \frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|}.
\end{align*}
Now, by Theorem \ref{T.2.6} (i) we conclude that $v(x) \leq \frac{1}{\sqrt{2}}\sqrt{\|x^*x + xx^*\|}$.
\end{proof}
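Both bounds of Theorem \ref{T.2.9} can be verified numerically in the same illustrative fashion, with the same random matrix $x$ and the \texttt{numerical\_radius} sketch above:
\begin{verbatim}
# Check (1/2)sqrt(||x*x+xx*||) <= v(x) <= (1/sqrt 2)sqrt(||x*x+xx*||).
s = np.sqrt(np.linalg.norm(x.conj().T @ x + x @ x.conj().T, 2))
print(0.5*s <= numerical_radius(x) <= s/np.sqrt(2.0))   # expect: True
\end{verbatim}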
In what follows, $r(x)$ stands for the spectral radius of
an arbitrary element $x$ in a $C^*$-algebra $\mathfrak{A}$.
It is well known that for every $x\in \mathfrak{A}$, we
have $r(x) \leq \|x\|$ and that equality holds in this inequality if $x$ is normal.
In the following lemma we obtain a spectral radius inequality for sums of elements
in $C^*$-algebras.
\begin{lemma}\label{L.2.10}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $z, w\in \mathfrak{A}$. Then
\begin{align*}
r(z + w) \leq \frac{1}{2} \Big(\|z\| + \|w\|
+ \sqrt{(\|z\| - \|w\|)^2 + 4 \min\{\|zw\|, \|wz\|\}}\Big).
\end{align*}
\end{lemma}
\begin{proof}
We first recall that \cite[Corollary 1]{K.1} tells us that
\begin{align}\label{L.2.10.1}
r(T + S) \leq \frac{1}{2} \Big(\|T\| + \|S\|
+ \sqrt{(\|T\| - \|S\|)^2 + 4 \min\{\|TS\|, \|ST\|\}}\Big),
\end{align}
for all bounded linear operators $T$ and $S$ acting on a Hilbert space.
Now, let $\pi: \mathfrak{A} \rightarrow \mathbb{B}(\mathscr{H})$ be a non-degenerate faithful representation
of $\mathfrak{A}$ on some Hilbert space $\mathscr{H}$ (see \cite[Theorem 2.6.1]{Dix}).
Since $\pi$ is isometric, by letting $T = \pi(z)$ and $S = \pi(w)$ in (\ref{L.2.10.1}), we obtain
\begin{align*}
r(z + w) &= r\big(\pi(z) + \pi(w)\big)
\\ &\leq \frac{1}{2} \Big(\|\pi(z)\| + \|\pi(w)\|
\\& \quad \qquad + \sqrt{(\|\pi(z)\| - \|\pi(w)\|)^2 + 4 \min\{\|\pi(z)\pi(w)\|, \|\pi(w)\pi(z)\|\}}\Big)
\\& = \frac{1}{2} \Big(\|z\| + \|w\|
+ \sqrt{(\|z\| - \|w\|)^2 + 4 \min\{\|zw\|, \|wz\|\}}\Big),
\end{align*}
and the statement is proved.
\end{proof}
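As with the earlier results, Lemma \ref{L.2.10} is easy to sanity-check numerically in a matrix algebra, where the spectral radius is the largest absolute eigenvalue (again purely illustrative code):
\begin{verbatim}
# Check r(z+w) against the bound of Lemma 2.10 for random matrices.
rng = np.random.default_rng(1)
z = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
w = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
r = lambda a: np.abs(np.linalg.eigvals(a)).max()
nz, nw = np.linalg.norm(z, 2), np.linalg.norm(w, 2)
m = min(np.linalg.norm(z @ w, 2), np.linalg.norm(w @ z, 2))
bound = 0.5*(nz + nw + np.sqrt((nz - nw)**2 + 4*m))
print(r(z + w) <= bound)   # expect: True
\end{verbatim}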
Now, we present a refinement of the triangle inequality
for the numerical radius in $C^*$-algebras.
\begin{theorem}\label{T.2.11}
Let $\mathfrak{A}$ be a $C^*$-algebra.
For $x, y\in \mathfrak{A}$ the following statements hold.
\begin{itemize}
\item[(i)] \begin{align*}
v(x + y) &\leq \frac{1}{2} \big(v(x) + v(y)\big)
\\& \qquad + \frac{1}{2}\sqrt{\big(v(x) - v(y)\big)^2 +
4 \sup_{\theta \in \mathbb{R}}\big\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta} y)\big\|}
\\& \leq v(x) + v(y). \end{align*}
\item[(ii)] \begin{align*}
v(x + y) &\leq \frac{1}{2} \big(v(x) + v(y)\big)
\\& \qquad + \frac{1}{2}\sqrt{\big(v(x) - v(y)\big)^2 +
4 \sup_{\theta \in \mathbb{R}}\big\|\mbox{Im}(e^{i\theta}x)\mbox{Im}(e^{i\theta} y)\big\|}
\\& \leq v(x) + v(y). \end{align*}
\end{itemize}
\end{theorem}
\begin{proof}
(i) Since $\mbox{Re}(e^{i\theta}(x + y))$ is self-adjoint for any $\theta \in \mathbb{R}$,
we have
\begin{align*}
\|\mbox{Re}(e^{i\theta}(x + y))\| = r\big(\mbox{Re}(e^{i\theta}(x + y))\big).
\end{align*}
So, by letting $z = \mbox{Re}(e^{i\theta}x)$
and $w = \mbox{Re}(e^{i\theta}y)$ in Lemma \ref{L.2.10}, we obtain
\begin{align*}
\|\mbox{Re}(e^{i\theta}(x + y))\|& = r\big(\mbox{Re}(e^{i\theta}(x + y))\big)
\\&= r\big(\mbox{Re}(e^{i\theta}x) + \mbox{Re}(e^{i\theta}y)\big)
\\& \leq \frac{1}{2} \Big(\|\mbox{Re}(e^{i\theta}x)\| + \|\mbox{Re}(e^{i\theta}y)\|
\\& \qquad + \sqrt{(\|\mbox{Re}(e^{i\theta}x)\| - \|\mbox{Re}(e^{i\theta}y)\|)^2
+ 4 \|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}\Big)
\\& = \left\|\begin{bmatrix}
\|\mbox{Re}(e^{i\theta}x)\| & \sqrt{\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}
\\ \sqrt{\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|} & \|\mbox{Re}(e^{i\theta}y)\|
\end{bmatrix}\right\|
\\& \leq \left\|\begin{bmatrix}
\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\| &
\displaystyle{\sup_{\theta \in \mathbb{R}}}\sqrt{\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}\\
\displaystyle{\sup_{\theta \in \mathbb{R}}}\sqrt{\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|} &
\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}y)\|
\end{bmatrix}\right\|
\\&(\mbox{by the norm monotonicity of matrices with nonnegative entries})
\\& = \left\|\begin{bmatrix}
v(x) & \displaystyle{\sup_{\theta \in \mathbb{R}}}\sqrt{\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}\\
\displaystyle{\sup_{\theta \in \mathbb{R}}}\sqrt{\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|} & v(y)
\end{bmatrix}\right\|
\\& \qquad \qquad \qquad \qquad \qquad \qquad \qquad(\mbox{by Theorem \ref{T.2.2} (i)})
\\ & = \frac{1}{2} \big(v(x) + v(y)\big)
\\& \qquad \qquad + \frac{1}{2} \sqrt{(v(x) - v(y))^2
+ 4 \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}
\end{align*}
Therefore, for every $\theta \in \mathbb{R}$ we have
\begin{align*}
\|\mbox{Re}(e^{i\theta}(x + y))\| &\leq \frac{1}{2} \big(v(x) + v(y)\big)
\\& \qquad \qquad + \frac{1}{2} \sqrt{(v(x) - v(y))^2
+ 4 \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|},
\end{align*}
and hence
\begin{align*}
\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}(x + y))\| &\leq \frac{1}{2} \big(v(x) + v(y)\big)
\\& \qquad \qquad + \frac{1}{2} \sqrt{(v(x) - v(y))^2
+ 4 \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}.
\end{align*}
Now, by Theorem \ref{T.2.2} (i) and the above inequality we get
\begin{align}\label{I.2.11.1}
v(x + y) \leq \frac{1}{2} \Big(v(x) + v(y)
+ \sqrt{(v(x) - v(y))^2
+ 4 \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|}\Big).
\end{align}
Furthermore, by Theorem \ref{T.2.2} (i) we have
\begin{align*}
\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i\theta}y)\|
\leq \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\|
\displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}y)\| = v(x)v(y).
\end{align*}
Thus the inequalities (i) follow from (\ref{I.2.11.1}) and the above inequality.
(ii) It is enough to replace $x$ and $y$ in (i) by $ix$ and $iy$, respectively.
\end{proof}
\begin{corollary}\label{C.2.12}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $x, y\in \mathfrak{A}$.
If $x\,{\parallel}_v \,y$ then there exists ${\theta}_0 \in \mathbb{R}$ such that
the following statements hold.
\begin{itemize}
\item[(i)] $\displaystyle{\sup_{\theta \in \mathbb{R}}}\big\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i(\theta + {\theta}_0)} y)\big\|
= \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Re}(e^{i\theta}x)\|\|\mbox{Re}(e^{i\theta}y)\|$.
\item[(ii)] $\displaystyle{\sup_{\theta \in \mathbb{R}}}\big\|\mbox{Im}(e^{i\theta}x)\mbox{Im}(e^{i(\theta + {\theta}_0)} y)\big\|
= \displaystyle{\sup_{\theta \in \mathbb{R}}}\|\mbox{Im}(e^{i\theta}x)\|\|\mbox{Im}(e^{i\theta}y)\|$.
\end{itemize}
\end{corollary}
\begin{proof}
Since $x\,{\parallel}_v \,y$, there exists ${\theta}_0 \in \mathbb{R}$
such that
\begin{align*}
v(x + e^{i{\theta}_0}y) = v(x) + v(y) = v(x) + v(e^{i{\theta}_0}y).
\end{align*}
Hence by Theorem \ref{T.2.11} it follows that
\begin{align*}
\sup_{\theta \in \mathbb{R}}\big\|\mbox{Re}(e^{i\theta}x)\mbox{Re}(e^{i(\theta + {\theta}_0)} y)\big\| = v(x)v(y)
\end{align*}
and
\begin{align*}
\sup_{\theta \in \mathbb{R}}\big\|\mbox{Im}(e^{i\theta}x)\mbox{Im}(e^{i(\theta + {\theta}_0)} y)\big\| = v(x)v(y).
\end{align*}
These, together with Theorem \ref{T.2.2}, imply (i) and (ii).
\end{proof}
In the following result we characterize the numerical radius parallelism for elements of a $C^*$-algebra.
\begin{theorem}\label{T.2.13}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $x, y\in \mathfrak{A}$. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $x\,{\parallel}_v \,y$.
\item[(ii)] There exists a pure state $\varphi$ on $\mathfrak{A}$ such that $|\varphi(x)\varphi(y)| = v(x)v(y)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i)$\Rightarrow$(ii) Let $x\,{\parallel}_v \,y$. Thus $v(x + \lambda y) = v(x) + v(y)$
for some $\lambda\in\mathbb{T}$. Therefore, there exists a pure state $\varphi$
on $\mathfrak{A}$ such that $|\varphi(x + \lambda y)| = v(x + \lambda y)$.
From this it follows that
\begin{align*}
v(x) + v(y) = v(x + \lambda y) &= |\varphi(x + \lambda y)|
\\& \leq |\varphi(x)| + |\varphi(y)| \leq v(x) + |\varphi(y)| \leq v(x) + v(y),
\end{align*}
and hence $|\varphi(x)| = v(x)$ and $|\varphi(y)| = v(y)$. Thus $|\varphi(x)\varphi(y)| = v(x)v(y)$.
(ii)$\Rightarrow$(i) Suppose (ii) holds.
We may assume that $|\varphi(x)\varphi(y)| \neq 0$; otherwise (i) trivially holds.
Put $\lambda = \frac{\varphi(x)\overline{\varphi(y)}}{|\varphi(x)\varphi(y)|}$.
Here, $\overline{\varphi(y)}$ denotes the complex conjugate of $\varphi(y)$.
Since
\begin{align*}
v(x)v(y) = |\varphi(x)\varphi(y)| \leq |\varphi(x)|v(y)\leq v(x)v(y),
\end{align*}
we have $v(x) = |\varphi(x)|$ and, similarly, $v(y) = |\varphi(y)|$.
Therefore,
\begin{align*}
v(x) + v(y) &= |\varphi(x)| + |\varphi(y)|
\\& = \left||\varphi(x)| + \frac{\overline{\varphi(y)}}{|\varphi(y)|}\varphi(y)\right|
\\& = \left|\varphi(x) + \frac{\varphi(x)\overline{\varphi(y)}}{|\varphi(x)\varphi(y)|}\varphi(y)\right|
\\& = |\varphi(x + \lambda y)|
\\& \leq v(x + \lambda y) \leq v(x) + v(y).
\end{align*}
This implies that $v(x + \lambda y) = v(x) + v(y)$ and hence $x\,{\parallel}_v \,y$.
\end{proof}
As a consequence of the preceding theorem, we have the following result.
\begin{corollary}\label{C.2.14}
Let $\mathfrak{A}$ be a $C^*$-algebra with identity $e$.
Then for every $x\in \mathfrak{A}$, $x\,{\parallel}_v \,e$.
\end{corollary}
\begin{proof}
Let $x\in \mathfrak{A}$. Thus there exists a pure state $\varphi$ on $\mathfrak{A}$ such that $|\varphi(x)| = v(x)$
and so $|\varphi(x)\varphi(e)| = |\varphi(x)| = v(x) = v(x)v(e)$.
Therefore, Theorem \ref{T.2.13} tells us that $x\,{\parallel}_v \,e$.
\end{proof}
As an immediate consequence of Theorem \ref{T.2.13}, Lemma \ref{L.2.1} and Theorem \ref{T.2.2}, we have the following result.
\begin{corollary}\label{C.2.15}
Let $\mathfrak{A}$ be a $C^*$-algebra and let $x, y\in \mathfrak{A}$.
If $x\,{\parallel}_v \,y$ then the following statements hold.
\begin{itemize}
\item[(i)] There exists a pure state $\varphi$ on $\mathfrak{A}$ such that
\begin{align*}
\displaystyle{\sup_{\theta \in \mathbb{R}}}\big|\mbox{Re}\big(e^{i\theta}\varphi(x)\big)\big|
\big|\mbox{Re}\big(e^{i\theta}\varphi(y)\big)\big|
= \displaystyle{\sup_{\theta \in \mathbb{R}}}\big\|\mbox{Re}(e^{i\theta}x)\big\|\big\|\mbox{Re}(e^{i\theta}y)\big\|.
\end{align*}
\item[(ii)] There exists a pure state $\varphi$ on $\mathfrak{A}$ such that
\begin{align*}
\displaystyle{\sup_{\theta \in \mathbb{R}}}\big|\mbox{Im}\big(e^{i\theta}\varphi(x)\big)\big|
\big|\mbox{Im}\big(e^{i\theta}\varphi(y)\big)\big|
= \displaystyle{\sup_{\theta \in \mathbb{R}}}\big\|\mbox{Im}(e^{i\theta}x)\big\|\big\|\mbox{Im}(e^{i\theta}y)\big\|.
\end{align*}
\end{itemize}
\end{corollary}
We close this paper with the following equivalence theorem.
In fact, our next result is a characterization of the left and right homogeneity of
the numerical radius parallelism in unital $C^*$-algebras.
\begin{theorem}
Let $\mathfrak{A}$ be a $C^*$-algebra with identity $e$
and let $c\in \mathcal{Z}(\mathfrak{A})\cap \mathcal{U}(\mathfrak{A})$.
Then for every $x, y \in \mathfrak{A}$ the following statements are equivalent:
\begin{itemize}
\item[(i)] $x\,{\parallel}_v \,y$.
\item[(ii)] $cx\,{\parallel}_v \,cy$.
\item[(iii)] $xc\,{\parallel}_v \,yc$.
\end{itemize}
\end{theorem}
\begin{proof}
Firstly, we show that $v(cz) = v(z) = v(zc)$ for all $z \in \mathfrak{A}$.
Let $\varphi$ be a pure state on $\mathfrak{A}$.
By \cite[Proposition 2.4.4]{Dix} there exist a Hilbert space $\mathscr{H}$,
an irreducible representation $\pi: \mathfrak{A}\rightarrow \mathbb{B}(\mathscr{H})$
and a unit vector $\xi\in \mathscr{H}$ such that for any $z\in\mathfrak{A}$
we have $\varphi(z)=\langle\pi(z)\xi, \xi\rangle.$
Since $c\in \mathcal{Z}(\mathfrak{A})$, by \cite[Proposition II.6.4.13]{Bl},
there exists $\alpha\in\mathbb{C}\setminus\{0\}$ such that $\pi(c) = \alpha I$, where $I$ denotes the identity operator on $\mathscr{H}$.
Now from $c\in \mathcal{U}(\mathfrak{A})$ it follows that
\begin{align*}
|\alpha| = \|\pi(c)\| = \sup_{\|\xi\| = 1}\big\|\pi(c)\xi\big\|
= \sup_{\|\xi\| = 1}\sqrt{\langle\pi(c)\xi, \pi(c)\xi\rangle}
= \sup_{\|\xi\| = 1}\sqrt{\langle\pi(c^*c)\xi, \xi\rangle} = 1.
\end{align*}
Therefore for any $z\in\mathfrak{A}$ we obtain
\begin{align*}
|\varphi(cz)| = \big|\langle\pi(cz)\xi, \xi\rangle\big| = \big|\langle\pi(c)\pi(z)\xi, \xi\rangle\big|
= \big|\langle\alpha\pi(z)\xi, \xi\rangle\big| = \big|\langle\pi(z)\xi, \xi\rangle\big| = \big|\varphi(z)\big|.
\end{align*}
From this it follows that
\begin{align*}
v(cz) = \sup_{\varphi \in \mathcal{P}(\mathfrak{A})}\big|\varphi(cz)\big|
= \sup_{\varphi \in \mathcal{P}(\mathfrak{A})}\big|\varphi(z)\big| = v(z),
\end{align*}
and hence $v(cz) = v(z)$.
By using a similar argument we conclude that $v(z) = v(zc)$.
Now, let $x, y \in \mathfrak{A}$. Then $x\,{\parallel}_v \,y$ if and only if $v(x + \lambda y) = v(x) + v(y)$
for some $\lambda\in\mathbb{T}$,
or equivalently, since $v(cz) = v(z)$ for every $z\in\mathfrak{A}$, if and only if
$v\big(c(x + \lambda y)\big) = v(cx) + v(cy)$
for some $\lambda\in\mathbb{T}$. This holds if and only if
$v(cx + \lambda cy) = v(cx) + v(cy)$
for some $\lambda\in\mathbb{T}$,
that is, if and only if
$cx\,{\parallel}_v \,cy$. Therefore, (i)$\Leftrightarrow$(ii).
The proof of the equivalence (i)$\Leftrightarrow$(iii) is similar, so we omit it.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
Suppose our goal is to calculate $E_{\pi} g := \int_{\mathsf{X}} g (x)
\pi(dx)$ with $\pi$ a probability distribution having support ${\mathsf X}$
and $g$ a real-valued, $\pi$-integrable function. Also, suppose $\pi$
is such that Markov chain Monte Carlo (MCMC) is the only viable method
for estimating $E_{\pi} g$.
Let $X = \{X_0, X_1, X_2, \dots\}$ be a time-homogeneous, aperiodic,
$\pi$-irreducible, positive Harris recurrent Markov chain with state
space $({\mathsf X}, \cal{B} ({\mathsf X}))$ and invariant distribution $\pi$. (See
\citet{meyn:twee:1993} for definitions.) In this case, we say that
$X$ is Harris ergodic and the Ergodic Theorem implies that, with
probability 1,
\begin{equation}
\label{eq:erg_avg}
\bar{g}_{n} := \frac{1}{n} \sum_{i=0}^{n-1} g
(X_i) \rightarrow E_{\pi} g \quad\text{as $n \rightarrow \infty$.}
\end{equation}
Given an MCMC algorithm that simulates $X$ it is conceptually easy to
generate large amounts of data and use $\bar{g}_{n}$ to obtain an
arbitrarily precise estimate of $E_{\pi}g$.
There are several methods for deciding when $n$ is sufficiently large;
i.e., when to terminate the simulation. The simplest is to terminate
the computation whenever patience runs out. This approach is
unsatisfactory since the user would not have any idea about the
accuracy of $\bar{g}_{n}$. Alternatively, with several preliminary
(and necessarily short) runs the user might be able to make an
informed guess about the variability in $\bar{g}_{n}$ and hence make
an a priori choice of $n$. Another method would be to monitor the
sequence of $\bar{g}_{n}$ until it appears to have stabilized. None
of these methods are automated and hence are inefficient uses of user
time and Monte Carlo resources. Moreover, they provide only a point
estimate of $E_{\pi} g$ without additional work.
Convergence diagnostics are also sometimes used to terminate the
simulation \citep{cowl:carl:1996}. Some convergence diagnostics are
available in software, e.g. the R package {\tt boa}, and hence may be
considered automated. However, none of the diagnostics of which we
are aware explicitly address how well $\bar{g}_{n}$ estimates $E_{\pi}
g$; this is discussed again in subsection~\ref{sec:compare}.
An alternative is to calculate a Monte Carlo standard error and use it
to terminate the simulation when the width of a confidence interval
falls below a specified value. Under regularity conditions (see
Section~\ref{sec:mc_theory}) the Markov chain $X$ and function $g$
will admit a central limit theorem (CLT); that is,
\begin{equation}
\label{eq:clt}
\sqrt{n} (\bar{g}_{n} - \text{E}_{\pi} g) \stackrel{d}{\rightarrow}
\text{N} (0, \sigma^{2}_{g})
\end{equation}
as $n \rightarrow \infty$ where $\sigma^{2}_{g} := \text{var}_{\pi} \{
g(X_{0})\} + 2 \sum_{i=1}^{\infty} \text{cov}_{\pi} \{ g(X_{0}),
g(X_{i})\}$. Given an estimate of $\sigma_{g}^{2}$, say
$\hat{\sigma}_{n}^{2}$, we can form a confidence interval for
$\text{E}_{\pi} g$. If this interval is too large then the value of
$n$ is increased and simulation continues until the interval is
sufficiently small; this is a common way of choosing $n$ \citep[e.g.,
see][]{fish:1996,geye:1992,jone:hobe:2001}. Notice that the final
Monte Carlo sample size is random. We study sequential fixed-width
methods which formalize this approach. In particular, the simulation
terminates the first time
\begin{equation}
\label{eq:hw}
t_{*}\, \frac{\hat{\sigma}_{n}}{\sqrt{n}} + p(n) \le \epsilon
\end{equation}
where $t_{*}$ is an appropriate quantile, $p(n) \ge 0$ on
$\mathbb{Z}_{+}$ and $\epsilon >0$ is the desired half-width. The
role of $p$ is to ensure that the simulation is not terminated
prematurely due to a poor estimate of $\sigma_{g}^{2}$. One
possibility is to fix $n^{*} > 0$ and take $p(n) = \epsilon I(n\le
n^{*})$ where $I$ is the usual indicator function.
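To fix ideas, the following sketch shows one way such a rule might be coded. It is illustrative only: the function names, the checking interval, and the batch means variance estimator with batch size $\lfloor \sqrt{n} \rfloor$ (anticipating CBM in Section \ref{sec:oa}) are our own choices, not prescriptions from the literature.
\begin{verbatim}
# Sequential fixed-width rule: simulate until
#   t_* sigma_hat / sqrt(n) + p(n) <= eps,  p(n) = eps I(n <= n_star).
import numpy as np
from scipy import stats

def cbm(g_vals):
    # Non-overlapping batch means with batch size floor(sqrt(n)).
    n = len(g_vals)
    b = int(np.sqrt(n))
    a = n // b
    means = g_vals[:a*b].reshape(a, b).mean(axis=1)
    return b*np.sum((means - means.mean())**2)/(a - 1), a

def fixed_width(draw_next, g, x0, eps, n_star=1000, delta=0.05,
                check=1000):
    x, g_vals = x0, []
    while True:
        for _ in range(check):           # simulate in blocks
            x = draw_next(x)             # one update of the sampler
            g_vals.append(g(x))
        n = len(g_vals)
        s2, a = cbm(np.asarray(g_vals))
        t = stats.t.ppf(1 - delta/2, df=a - 1)
        half = t*np.sqrt(s2/n) + eps*(n <= n_star)
        if half <= eps:
            return np.mean(g_vals), half
\end{verbatim}
Note that with this choice of $p$ the rule cannot fire before $n^{*}$ iterations have been simulated.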
Sequential statistical procedures have a long history; see
\citet{lai:2001} for an overview and commentary. Moreover, classical
approaches to sequential fixed-width confidence intervals such as
those found in \citet{chow:robb:1965}, \citet{liu:1997} and
\citet{nada:1969} are known to work well. However, the classical
procedures are not relevant to the current work since they assume the
observations are random samples.
In a simulation context, procedures based on \eqref{eq:hw} were
studied most notably by \citet{glyn:whit:1992} who established that
these procedures are \textit{asymptotically valid} in that if our goal
is to have a $100(1-\delta)\%$ confidence interval with width
$2\epsilon$ then
\begin{equation}
\Pr (\text{E}_{\pi} g \, \in \, \text{Int}[T(\epsilon)]) \rightarrow 1
- \delta \hspace*{3mm} \text{ as } \, \epsilon \rightarrow 0
\end{equation}
where $T(\epsilon)$ is the first time that \eqref{eq:hw} is satisfied
and $\text{Int}[T(\epsilon)]$ is the interval at this time. Glynn and
Whitt's conditions for asymptotic validity are substantial: (i) A
functional central limit theorem (FCLT) holds; (ii)
$\hat{\sigma}^{2}_{n} \rightarrow \sigma_{g}^{2}$ with probability 1
as $n \rightarrow \infty$; and (iii) $p(n)=o(n^{-1/2})$. Markov
chains frequently enjoy an FCLT under the same conditions that ensure
a CLT. However, in the context of MCMC, little work has been done on
establishing conditions for (ii) to hold. Thus one of our goals is to
give conditions under which some common methods provide strongly
consistent estimators of $\sigma_{g}^{2}$. Specifically, our
conditions require the sampler to be either uniformly or geometrically
ergodic. The MCMC community has expended considerable effort in
establishing such mixing conditions for a variety of samplers; see
\citet{jone:hobe:2001} and \citet{robe:rose:1998b,robe:rose:2004} for
some references and discussion.
We consider two methods for estimating the variance of the asymptotic
normal distribution, regenerative simulation (RS) and non-overlapping
batch means (BM). Both have strengths and weaknesses; essentially, BM
is easier to implement but RS is on a stronger theoretical footing.
For example, when used with fixed number of batches BM \textit{cannot}
be even weakly consistent for $\sigma_{g}^{2}$. We give conditions
for the consistency of RS and show that BM can provide a consistent
estimation procedure by allowing the batch sizes to increase (in a
specific way) as $n$ increases. In this case it is denoted CBM to
distinguish it from the standard fixed-batch size version which we
denote BM. This was addressed by \citet{dame:1994} but, while the
approach is similar, our regularity conditions on $X$ are weaker.
Also, the regularity conditions required to obtain strong consistency
of the batch means estimator are slightly stronger than those required
by RS. Finally, it is important to note that RS and CBM do not
require that $X$ be stationary; hence burn-in is not required.
The justification of fixed-width methods is entirely asymptotic so it
is not clear how the finite sample properties of BM, CBM, and RS
compare in typical MCMC settings. For this reason, we conduct a
simulation study in the context of two benchmark examples and two
realistic examples, one of which is a complicated frequentist problem
and one which involves a high-dimensional posterior. Roughly
speaking, we find that BM performs poorly while RS and CBM are
comparable.
The rest of this article is organized as
follows. Section~\ref{sec:mc_theory} fixes notation and contains a
brief discussion of some relevant Markov chain theory. In
Section~\ref{sec:oa} we consider RS and CBM. Then
Section~\ref{sec:examples} contains the examples.
\section{Basic Markov Chain Theory}
\label{sec:mc_theory}
For $n \in \mathbb{N} := \{1,2,3,\ldots \}$ let $P^n(x,dy)$ be the
$n$-step Markov transition kernel; that is, for $x \in \mathsf{X}$ and
$A \in \cal{B} ({\mathsf X})$, $P^n(x,A) = \Pr\left(X_n \in A|X_0 = x\right)$.
A Harris ergodic Markov chain $X$ enjoys a strong form of convergence.
Specifically, if $\lambda(\cdot)$ is a probability measure on
$\cal{B}({\mathsf X})$ then
\begin{equation}
\label{eq:bas_con}
\| P^n(\lambda,\cdot) - \pi(\cdot)\| \; \downarrow \; 0
\quad\text{as $n \rightarrow \infty,$}
\end{equation}
where $P^n(\lambda,A) := \int_{{\mathsf X}} P^{n} (x, A) \lambda(dx)$ and
$\|\cdot\|$ is the total variation norm. Suppose there exists an
extended real-valued function $M(x)$ and a nonnegative decreasing
function $\kappa(n)$ on $\mathbb{Z}_{+}$ such that
\begin{equation}
\label{eq:tvbd}
\| P^{n} (x, \cdot) - \pi(\cdot) \| \le M(x) \kappa(n) \; .
\end{equation}
When $\kappa(n) = t^{n}$ for some $t < 1$ we say $X$ is
\textit{geometrically ergodic} if $M$ is unbounded and
\textit{uniformly ergodic} if $M$ is bounded. \textit{Polynomial
ergodicity of order m} where $m \ge 0$ means $M$ may be unbounded
and $\kappa(n)=n^{-m}$.
Also, $P$ satisfies \textit{detailed balance} with respect to $\pi$ if
\begin{equation}
\label{eq:dbc}
\pi(dx) P(x,dy) = \pi(dy) P(y, dx) \hspace{5mm} \text{ for all } \,
x,y \in {\mathsf X} \; .
\end{equation}
Note that Metropolis-Hastings samplers satisfy \eqref{eq:dbc} by
construction but many Gibbs samplers do not. We are now in position
to give conditions for the existence of a CLT.
\begin{theorem}
\label{thm:clt}
Let $X$ be a Harris ergodic Markov chain on $\mathsf{X}$ with
invariant distribution $\pi$ and suppose $g : \mathsf{X} \rightarrow
\mathbb{R}$ is a Borel function. Assume one of the following
conditions: \vspace{-2mm}
\begin{enumerate}
\item $X$ is polynomially ergodic of order $m > 1$, $\text{E}_{\pi} M
< \infty$ and there exists $B< \infty$ such that $|g(x)| < B$ almost
surely; \vspace{-1mm}
\item $X$ is polynomially ergodic of order $m$, $\text{E}_{\pi} M <
\infty$ and $\text{E}_{\pi} |g(x)|^{2+\delta} < \infty$ for some
$\delta > 0$ where $m\delta > 2+\delta$; \vspace{-1mm}
\item $X$ is geometrically ergodic and $\text{E}_{\pi} [g^{2}(x)
(\log^{+} |g(x)|)] < \infty$; \vspace{-1mm}
\item $X$ is geometrically ergodic, satisfies \eqref{eq:dbc} and
$\text{E}_{\pi} g^{2}(x) < \infty$; or \vspace{-1mm}
\item $X$ is uniformly ergodic and $\text{E}_{\pi} g^{2}(x) < \infty$.
\end{enumerate} \vspace{-2mm}
Then, for any initial distribution, as $n \rightarrow \infty$
\[
\sqrt{n} (\bar{g}_{n} - \text{E}_{\pi} g) \stackrel{d}{\rightarrow}
\text{N} (0, \sigma_{g}^{2}) \; .
\]
\end{theorem}
\begin{remark}
The theorem was proved by \citet{ibra:linn:1971} (condition 5),
\citet{robe:rose:1997c} (condition 4) and \citet{douk:mass:rio:1994}
(condition 3). See \citet{jone:2004} for details on conditions 1
and 2.
\end{remark}
\begin{remark}
Conditions 3, 4 and 5 of the theorem are also sufficient to
guarantee the existence of an FCLT; see \citet{douk:mass:rio:1994},
\citet{robe:rose:1997c} and \citet{bill:1968}, respectively.
\end{remark}
\begin{remark}
The mixing conditions on the Markov chain $X$ stated in
Theorem~\ref{thm:clt} are not necessary for the CLT; see, for
example, \citet{chen:1999}, \citet{meyn:twee:1993} and
\citet{numm:2002}. However, the weaker conditions are often
prohibitively difficult to check in situations where MCMC is
appropriate.
\end{remark}
\begin{remark}
\label{rm:geo}
There are constructive techniques for verifying the existence of an
appropriate $M$ and $\kappa$ from \eqref{eq:tvbd} \citep[Ch.
15]{meyn:twee:1993}. For example, one method of establishing
geometric ergodicity requires finding a function $V : {\mathsf X} \rightarrow
[1,\infty)$ and a small set $C \in{\cal B}({\mathsf X})$ such that
\begin{equation}
\label{eq:drift}
PV(x) \le \lambda V(x) + b I(x \in C) \hspace*{4mm} \forall \; x \in
{\mathsf X}
\end{equation}
where $PV(x) := \int V(y) P(x, dy)$, $0 < \lambda < 1$ and $b <
\infty$. Substantial effort has been devoted to establishing
convergence rates for MCMC algorithms via \eqref{eq:drift} or related
techniques. For example, \citet{hobe:geye:1998},
\citet{hobe:jone:pres:rose:2002}, \citet{jone:hobe:2004},
\citet{marc:hobe:2004}, \citet{mira:tier:2002}, \citet{robe:1995},
\citet{robe:pols:1994}, \citet{robe:rose:1999a},
\citet{rose:1995a,rose:1996} and \citet{tier:1994} examined Gibbs
samplers while \citet{chri:moll:waag:2001},
\citet{douc:fort:moul:soul:2004},
\citet{fort:moul:2000,fort:moul:2003}, \citet{geye:1999},
\citet{jarn:hans:2000}, \citet{jarn:robe:2002},
\citet{meyn:twee:1994b}, and \citet{meng:twee:1996} analyzed
Metropolis-Hastings algorithms.
\end{remark}
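For a concrete textbook-style illustration (not taken from the papers just cited), consider the AR(1) chain $X_{n+1} = \rho X_{n} + \varepsilon_{n}$ with $|\rho| < 1$ and iid $\varepsilon_{n} \sim \text{N}(0,1)$. Taking $V(x) = 1 + x^{2}$ gives
\[
PV(x) = 2 + \rho^{2} x^{2} \le \lambda V(x) + b \, I(x \in C)
\]
with $\lambda = (1+\rho^{2})/2 < 1$, $b = 2$ and the compact (hence small) set $C = \{x : x^{2} \le (2-\lambda)/(\lambda - \rho^{2})\}$, so \eqref{eq:drift} holds and the chain is geometrically ergodic.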
\subsection{The Split Chain}
\label{sec:split}
An object that is important to the study of both RS and CBM is the
\textit{split chain} $X':= \left \{ (X_0, \delta_0), (X_1, \delta_1),
(X_2, \delta_2), \dots \right \}$ which has state space ${\mathsf X} \times
\{0,1\}$. The construction of $X'$ requires a \textit{minorization
condition}; i.e., a function $s: {\mathsf X} \mapsto [0,1]$ for which
$E_{\pi} s > 0$ and a probability measure $ Q$ such that
\begin{equation}
\label{eq:mc}
P(x,A) \geq s(x) \, Q(A) \hspace*{5mm} \text{ for all } x \in {\mathsf X}
\text{ and } A \in \cal{B} ({\mathsf X}) \; .
\end{equation}
When ${\mathsf X}$ is countable it is easy to see that \eqref{eq:mc} holds by
fixing $x_{*} \in {\mathsf X}$, setting $s(x) = I(x=x_{*})$ and $Q(\cdot) =
P(x_{*}, \cdot)$. \citet{mykl:tier:yu:1995} and \citet{rose:1995a}
give prescriptions that are often useful for establishing
\eqref{eq:mc} in general spaces. Note that \eqref{eq:mc} allows us to
write $P(x,dy)$ as a mixture of two distributions,
\[
P(x,dy) = s(x) \, Q(dy) + \left[ 1-s(x) \right] R(x,dy),
\]
where $R(x,dy) := \left[ 1-s(x) \right]^{-1} \left[P(x,dy) - s(x) \,
Q(dy) \right]$ is the \textit{residual} distribution (define
$R(x,dy)$ as 0 if $s(x) = 1$). This mixture gives us a recipe for
simulating $X'$: given $X_i=x$, generate $\delta_i \sim
\text{Bernoulli}(s(x))$. If $\delta_i=1$, then draw $X_{i+1} \sim
Q(\cdot)$, else draw $X_{i+1} \sim R(x,\cdot)$.
The two chains, $X$ and $X'$, are closely related since $X'$ will
inherit properties such as aperiodicity and positive Harris recurrence,
and the sequence $\{X_i : i=0,1,\dots\}$ obtained from $X'$ has the
same transition probabilities as $X$. Also, $X$ and $X'$ converge to
their respective stationary distributions at exactly the same rate.
If $\delta_i=1$, then time $i+1$ is a \textit{regeneration time} when
$X'$ probabilistically restarts itself. Specifically, suppose we
start $X'$ with $X_0 \sim Q$. Then each time that $\delta_i=1$,
$X_{i+1} \sim Q$. Let $0=\tau_{0} < \tau_{1} < \cdots$ be the
regeneration times. That is, set $\tau_{r+1} = \min \{ i > \tau_{r}
\, : \, \delta_{i-1}=1\}$. Also assume that $X'$ is run for $R$
tours; that is, the simulation is stopped the $R$th time that a
$\delta_i=1$. Let $\tau_{R}$ denote the total length of the
simulation and $N_r$ be the length of the $r$th tour; that is, $N_r =
\tau_r - \tau_{r-1}$. Define
\begin{equation*}
S_{r} = \sum_{i=\tau_{r-1}}^{\tau_r-1} g (X_i)
\end{equation*}
for $r=1,\ldots,R$. The $(N_r,S_{r})$ pairs are iid since each is
based on a different tour. In the sequel we will make repeated use of
the following lemma which generalizes Theorem 2 of
\citet{hobe:jone:pres:rose:2002}.
\begin{lemma}\label{lemma:rsmoments}
Let $X$ be a Harris ergodic Markov chain with invariant distribution
$\pi$. Assume that \eqref{eq:mc} holds and that $X$ is
geometrically ergodic. Let $p \ge 1$ be an integer.
\vspace{-2mm}
\begin{enumerate}
\item If $E_{\pi} |g|^{2^{(p-1)} + \delta} < \infty$ for some $ \delta
> 0$ then $E_{Q} N_{1}^{p} < \infty$ and $E_{Q} S_{1}^{p} < \infty$.
\vspace{-2mm}
\item If $E_{\pi} |g|^{2^{p} + \delta} < \infty$ for some $\delta > 0$
then $E_{Q} N_{1}^{p} < \infty$ and $E_{Q} S_{1}^{p+ \delta} <
\infty$.
\end{enumerate}
\end{lemma}
\begin{proof} See Appendix~\ref{app:rsmoments}. \end{proof}
\section{Output Analysis}
\label{sec:oa}
\subsection{Regenerative Simulation}
\label{sec:rs}
Regenerative simulation is based on directly simulating the split
chain. However, using the mixture approach described above is
problematic since simulation from $R(x, dy)$ is challenging.
\citet{mykl:tier:yu:1995} suggest a method for avoiding this issue.
Suppose \eqref{eq:mc} holds and that the measures $P(x, \cdot)$ and
$Q(\cdot)$ admit densities $k(\cdot | x)$ and $q(\cdot)$,
respectively. Then the following recipe allows us to simulate $X'$.
Assume $X_{0} \sim q(\cdot)$; this is typically quite easy to do, see
\citet{mykl:tier:yu:1995} for some examples. Also, note that this
means burn-in is irrelevant. Given $X_{i}$, draw $X_{i+1} \sim
k(\cdot \, | \, X_{i})$, that is, draw from the sampler at hand, and get
$\delta_{i}$ by simulating from the distribution of $\delta_{i} |
X_{i}, X_{i+1}$ with
\begin{equation}
\label{eq:reg_pr}
\Pr(\delta_{i} = 1\, | \, X_{i}, X_{i+1}) = \frac{s(X_{i})
q(X_{i+1})}{k(X_{i+1} \, | \, X_{i})} \; .
\end{equation}
\begin{example}
\label{ex:indep}
In a slight abuse of notation let $\pi$ also denote the density of the
target distribution. Consider an independence Metropolis-Hastings
sampler with proposal density $\nu$. This chain works as follows: Let
the current state be $X_{i}=x$. Draw $y \sim \nu$ and independently
draw $u \sim \text{Uniform} (0,1)$. If
\[
u < \frac{\pi(y) \nu(x)}{\pi(x) \nu(y)}
\]
then set $X_{i+1}=y$ otherwise set $X_{i+1}=x$.
\citet{mykl:tier:yu:1995} derive \eqref{eq:reg_pr} for this case. Let
$c > 0$ be a user-specified constant. Then conditional on an
acceptance, i.e. $X_{i}=x$ and $X_{i+1}=y$
\begin{equation}
\label{eq:indep_rp}
\Pr(\delta_{i} = 1\, | \, X_{i}=x, X_{i+1}=y) =
\begin{cases}
c \, \max \left\{ \frac{\nu(x)}{\pi(x)}, \,
\frac{\nu(y)}{\pi(y)}\right\} & \text{if } \min \left\{
\frac{\pi(x)}{\nu(x)}, \, \frac{\pi(y)}{\nu(y)}
\right\} > c \\
\frac{1}{c} \, \max \left\{ \frac{\pi(x)}{\nu(x)}, \,
\frac{\pi(y)}{\nu(y)}\right\} & \text{if } \max \left\{
\frac{\pi(x)}{\nu(x)}, \, \frac{\pi(y)}{\nu(y)} \right\} < c \\
1 & \text{otherwise} \; .
\end{cases}
\end{equation}
Note that we do not need to know the normalizing constants of $\pi$ or
$\nu$ to calculate \eqref{eq:indep_rp}.
\end{example}
In discrete state spaces regenerations can be easy to identify. In
particular, a regeneration occurs whenever the chain returns to any
fixed state; for example, when the Metropolis-Hastings chain accepts a
move to the fixed state. This regeneration scheme is most useful when
the state space is not too large but potentially complicated; see
subsection~\ref{sec:p-value}. It will not be useful when the state
space is extremely large because returns to the fixed state are too
infrequent. Further practical advice on implementing and automating
RS is given in \citet{broc:kada:2005}, \citet{gilk:robe:sahu:1998},
\citet{geye:thom:1995}, \citet{hobe:jone:pres:rose:2002},
\citet{hobe:jone:robe:2005} and \citet{jone:hobe:2001}.
Implementation of RS is simple once we can effectively simulate the
split chain. For example, the Ergodic Theorem implies that
\[
\bar{g}_{\tau_{R}} = \frac{1}{\tau_{R}} \sum_{j=0}^{\tau_{R} - 1}
g(X_{j}) \rightarrow \text{E}_{\pi} g
\]
with probability 1 as $R \rightarrow \infty$ and hence estimating
$\text{E}_{\pi} g$ is routine.
We now turn our attention to calculating a Monte Carlo standard error
for $\bar{g}_{\tau_{R}}$. Let $E_{Q}$ denote the expectation for the
split chain started with $X_{0} \sim Q(\cdot)$. Also, let ${\bar N}$
be the average tour length; that is, ${\bar N}=R^{-1} \sum_{r=1}^R
N_{r}$. Since the $(N_r,S_{r})$ pairs are iid the strong law implies
with probability 1, $\bar{N} \rightarrow E_{Q} N_{1}$ which is finite
by positive recurrence. If $E_Q N_1^2 < \infty$ and $E_Q S_{1}^2 <
\infty$ it follows that a CLT holds; i.e., as $R \rightarrow \infty$
\begin{equation}
\label{eq:rs_clt}
\sqrt{R} (\bar{g}_{\tau_{R}} - \text{E}_{\pi} g)
\stackrel{d}{\rightarrow} \text{N}\,(0,\xi_{g}^{2})
\end{equation}
where, as shown in \citet{hobe:jone:pres:rose:2002}, $\xi_{g}^{2} =
E_{Q} (S_{1} - N_{1} E_{\pi} g)^{2} / (E_{Q} N_{1} )^{2}$. An obvious
estimator of $\xi^{2}_{g}$ is
\begin{equation*}
\hat{\xi}_{RS}^{2} := \frac{1}{\bar{N}^{2}} \frac{1}{R} \sum_{r=1}^R
(S_{r} - \bar g_{\tau_{R}} N_{r})^{2} \; .
\end{equation*}
Now consider
\begin{equation*}
\begin{split}
\hat{\xi}_{RS}^{2} - \xi^{2}_{g} & = \frac{1}{\bar{N}^{2}}
\frac{1}{R} \sum_{r=1}^R (S_{r} - \bar g_{\tau_{R}} N_{r})^{2} -
\frac{E_{Q} (S_{1} - N_{1} E_{\pi} g)^{2}}{(E_{Q} N_{1} )^{2}} \pm
\frac{E_{Q}
(S_{1} - N_{1} E_{\pi} g)^{2}}{\bar{N}^{2}} \\
& = \frac{1}{\bar{N}^{2}} \frac{1}{R} \sum_{r=1}^R \left[(S_{r} -
\bar g_{\tau_{R}} N_{r})^{2} - E_{Q} (S_{1} - N_{1} E_{\pi} g)^{2}
\pm (S_{r} - N_{r} E_{\pi}g)^{2} \right] \\
& \hspace*{25mm} + \left[ E_{Q} (S_{1} - N_{1} E_{\pi} g)^{2} \left(
\frac{1}{\bar{N}^{2}} - \frac{1}{(E_{Q} N_{1})^{2}} \right) \right]
\; .
\end{split}
\end{equation*}
Using this representation and repeated application of the strong law
shows that $\hat{\xi}_{RS}^{2} - \xi^{2}_{g} \rightarrow 0$ with
probability 1 as $R \rightarrow \infty$ \citep[also
see][]{hobe:jone:pres:rose:2002}. It is typically difficult to check
that $E_Q N_1^2 < \infty$ and $E_Q S_{1}^2 < \infty$. However, using
Lemma~\ref{lemma:rsmoments} yields the following result.
\begin{proposition}
Let $X$ be a Harris ergodic Markov chain with invariant distribution
$\pi$. Assume that $E_{\pi} |g|^{2+\delta} < \infty$ for some
$\delta > 0$, \eqref{eq:mc} holds and that $X$ is geometrically
ergodic. Then \eqref{eq:rs_clt} holds and $\hat{\xi}_{RS}^{2}
\rightarrow \xi_{g}^{2}$ w.p. 1 as $R \rightarrow \infty$.
\end{proposition}
Fix $\epsilon > 0$ and let $z$ denote an appropriate standard normal
quantile. An asymptotically valid fixed-width procedure results by
terminating the simulation the first time
\begin{equation}
\label{eq:rs_fw}
z\, \frac{\hat{\xi}_{RS}}{\sqrt{R}} + p(R) \le \epsilon \; .
\end{equation}
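For concreteness, here is a minimal sketch (ours) of the RS computations:
given the tour pairs $(N_r, S_r)$ it returns $\bar{g}_{\tau_{R}}$, the
half-width $z\hat{\xi}_{RS}/\sqrt{R}$, and the termination check
\eqref{eq:rs_fw}, with the penalty $p(R)=\epsilon I(R\le R^{*})$ used in our
examples.
\begin{verbatim}
import numpy as np
from scipy import stats

def rs_fixed_width(N, S, eps, R_star, level=0.95):
    """Fixed-width check for regenerative simulation.

    N, S   -- arrays of tour lengths N_r and tour sums S_r
    eps    -- desired half-width
    R_star -- penalty threshold: p(R) = eps * I(R <= R_star)
    """
    N, S = np.asarray(N, float), np.asarray(S, float)
    R = len(N)
    g_bar = S.sum() / N.sum()                       # ergodic average
    xi2 = np.mean((S - g_bar * N) ** 2) / N.mean() ** 2
    z = stats.norm.ppf(0.5 + level / 2)
    half_width = z * np.sqrt(xi2 / R)
    p = eps if R <= R_star else 0.0
    return g_bar, half_width, half_width + p <= eps
\end{verbatim}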
\subsection{Batch Means}
\label{sec:bm}
In standard batch means the output of the sampler is broken into
batches of equal size that are assumed to be approximately
independent. (This is not strictly necessary; cf.\ the method of
overlapping batch means.) Suppose the algorithm is run for a total of
$n=ab$ iterations (hence $a=a_{n}$ and $b=b_{n}$ are implicit
functions of $n$) and define
\begin{equation*}
\bar{Y}_{j} := \frac{1}{b} \sum_{i=(j-1)b}^{jb-1} g (X_{i})
\hspace*{5mm} \text{ for } j=1,\ldots,a \; .
\end{equation*}
The batch means estimate of $\sigma_{g}^{2}$ is
\begin{equation}
\label{eq:bmvar}
\hat{\sigma}_{BM}^{2} = \frac{b}{a-1} \sum_{j=1}^{a} (\bar{Y}_{j} -
\bar{g}_n)^{2} \; .
\end{equation}
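A direct implementation of \eqref{eq:bmvar} is immediate; the sketch below
(ours) discards any incomplete final batch.
\begin{verbatim}
import numpy as np

def bm_variance(g_vals, b):
    """Batch means estimate of sigma_g^2 from g(X_0), ..., g(X_{n-1})."""
    g_vals = np.asarray(g_vals, float)
    a = len(g_vals) // b                            # number of full batches
    y = g_vals[:a * b].reshape(a, b).mean(axis=1)   # batch means
    g_bar = g_vals[:a * b].mean()
    return b / (a - 1) * np.sum((y - g_bar) ** 2)
\end{verbatim}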
With a fixed number of batches \eqref{eq:bmvar} is not a consistent
estimator of $\sigma_{g}^{2}$ \citep{glyn:igle:1990,glyn:whit:1991}.
On the other hand, if the batch size \textit{and} the number of
batches are allowed to increase as the overall length of the
simulation does it may be possible to obtain consistency. The first
result in this direction is due to \citet{dame:1994} which we now
describe. The major assumption made by \citet{dame:1994} is the
existence of a strong invariance principle. Let $B=\{B(t), t \ge 0\}$
denote a standard Brownian motion. A strong invariance principle
holds if there exists a nonnegative increasing function $\gamma(n)$ on
the positive integers, a constant $0 < \sigma_{g} < \infty$ and a
sufficiently rich probability space such that
\begin{equation}
\label{eq:sip}
\left| \sum_{i=1}^{n} g(X_{i}) - n \text{E}_{\pi} g - \sigma_{g} B(n)
\right| = O(\gamma(n)) \hspace*{5mm} \text{w.p. 1 as } \; n
\rightarrow \infty \;
\end{equation}
where the w.p. 1 in \eqref{eq:sip} means for almost all sample paths.
In particular, \citet{dame:1994} assumed \eqref{eq:sip} held with
$\gamma(n) = n^{1/2 - \alpha}$ where $ 0 < \alpha \le 1/2$. However,
it would seem a daunting task to directly check this condition in any
given application. In an attempt to somewhat alleviate this
difficulty we have the following lemma.
\begin{lemma} \label{lemma:sip}
Let $g : {\mathsf X} \rightarrow \mathbb{R}$ be a Borel function and let
$X$ be a Harris ergodic Markov chain with invariant distribution
$\pi$.\vspace{-2mm}
\begin{enumerate}
\item If $ X$ is uniformly ergodic and $\text{E}_{\pi} |g|^{2 +
\delta} < \infty$ for some $\delta > 0$ then \eqref{eq:sip} holds
with $\gamma(n) = n^{1/2 - \alpha}$ where $\alpha < \delta/(24+12
\delta)$. \vspace{-2mm}
\item If $X$ is geometrically ergodic, \eqref{eq:mc} holds and
$\text{E}_{\pi} |g|^{4 + \delta} < \infty$ for some $\delta > 0$
then \eqref{eq:sip} holds with $\gamma(n) = n^{\alpha} \log n$ where
$\alpha = 1/(2 + \delta)$.\vspace{-2mm}
\end{enumerate}
\end{lemma}
\begin{proof}
The first part of the lemma is an immediate consequence of Theorem
4.1 of \citet{phil:stou:1975} and the fact that uniformly ergodic
Markov chains enjoy exponentially fast uniform mixing. The second
part follows from our Lemma~\ref{lemma:rsmoments} and Theorem 2.1 in
\citet{csak:csor:1995}.
\end{proof}
Using part 1 of Lemma~\ref{lemma:sip} we can state Damerdji's result
as follows.
\begin{proposition} \label{prop:ue_sc_bm} \citep{dame:1994}
Assume $g : {\mathsf X} \rightarrow \mathbb{R}$ such that $\text{E}_{\pi}
|g|^{2+\delta} < \infty$ for some $\delta > 0$ and let $X$ be a
Harris ergodic Markov chain with invariant distribution $\pi$.
Further, suppose $X$ is uniformly ergodic. If\vspace{-2mm}
\begin{enumerate}
\item $a_n \rightarrow \infty$ as $n \rightarrow \infty$,\vspace{-1mm}
\item $b_{n} \rightarrow \infty$ and $b_{n} / n \rightarrow 0$ as
$n \rightarrow \infty$, \vspace{-1mm}
\item $b_{n}^{-1} n^{1-2\alpha} \log n \rightarrow 0$ as $n
\rightarrow \infty$ where $\alpha \in (0, \delta/(24+12\delta))$
and\vspace{-1mm}
\item there exists a constant $c \ge 1$ such that $\sum_{n} (b_{n} /
n)^{c} < \infty$\vspace{-2mm}
\end{enumerate}
then as $n \rightarrow \infty$, $\hat{\sigma}_{BM}^{2} \rightarrow
\sigma_{g}^{2}$ w.p. 1.
\end{proposition}
In Appendix~\ref{app:sc_bm} we use part 2 of Lemma~\ref{lemma:sip} to
extend Proposition~\ref{prop:ue_sc_bm} to geometrically ergodic Markov
chains.
\begin{proposition} \label{prop:sc_bm}
Assume $g : {\mathsf X} \rightarrow \mathbb{R}$ such that $\text{E}_{\pi}
|g|^{4+\delta} < \infty$ for some $\delta > 0$ and let $X$ be a
Harris ergodic Markov chain with invariant distribution $\pi$.
Further, suppose $X$ is geometrically ergodic. If\vspace{-2mm}
\begin{enumerate}
\item $a_n \rightarrow \infty$ as $n \rightarrow \infty$,\vspace{-1mm}
\item $b_{n} \rightarrow \infty$ and $b_{n} / n \rightarrow 0$ as
$n \rightarrow \infty$, \vspace{-1mm}
\item $b_{n}^{-1} n^{2\alpha} [ \log n ]^{3} \rightarrow 0$ as $n
\rightarrow \infty$ where $\alpha = 1/(2 + \delta)$ and\vspace{-1mm}
\item there exists a constant $c \ge 1$ such that $\sum_{n} (b_{n} /
n)^{c} < \infty$\vspace{-2mm}
\end{enumerate}
then as $n \rightarrow \infty$, $\hat{\sigma}_{BM}^{2} \rightarrow
\sigma_{g}^{2}$ w.p. 1.
\end{proposition}
\begin{remark}
There is no assumption of stationarity in
Propositions~\ref{prop:ue_sc_bm} or~\ref{prop:sc_bm}. Hence burn-in
is not required to implement CBM.
\end{remark}
\begin{remark} \label{rem:bs}
Consider using $b_{n} = \lfloor n^{\theta} \rfloor$ and
$a_{n}=\lfloor n/b_{n} \rfloor$. Proposition~\ref{prop:ue_sc_bm}
requires that $1 > \theta > 1 - 2 \alpha > 1- \delta/(12+6\delta) > 5/6$
but Proposition~\ref{prop:sc_bm} requires only $1 > \theta >
(1+\delta/2)^{-1} >0$.
\end{remark}
Under the conditions of Propositions~\ref{prop:ue_sc_bm}
or~\ref{prop:sc_bm} an asymptotically valid fixed-width procedure for
estimating $E_{\pi} g$ results if we terminate the simulation the
first time
\[
t_{a_{n}-1} \frac{\hat{\sigma}_{BM}}{\sqrt{n}} + p(n)\le \epsilon
\]
where $t_{a_{n}-1}$ is the appropriate quantile from a student's $t$
distribution with $a_{n}-1$ degrees of freedom.
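Putting the pieces together, a CBM fixed-width run might look like the
following sketch (ours; it reuses the \texttt{bm\_variance} sketch above, and
\texttt{g\_of\_next\_state} stands for a hypothetical user-supplied function
that advances the chain and returns $g$ of the new state; $n^{*}$ is assumed
large enough that at least two batches form).
\begin{verbatim}
import numpy as np
from scipy import stats

def cbm_half_width(g_vals, theta=0.5, level=0.95):
    """Half-width of the CBM interval with b_n = floor(n^theta)."""
    n = len(g_vals)
    b = max(2, int(n ** theta))
    a = n // b
    t = stats.t.ppf(0.5 + level / 2, df=a - 1)
    return t * np.sqrt(bm_variance(g_vals, b) / n)

def run_until_fixed_width(g_of_next_state, eps, n_star):
    g_vals = [g_of_next_state() for _ in range(n_star)]
    while True:
        p = eps if len(g_vals) <= n_star else 0.0   # p(n) = eps I(n <= n*)
        if cbm_half_width(np.array(g_vals)) + p <= eps:
            return np.mean(g_vals), len(g_vals)
        g_vals.append(g_of_next_state())
\end{verbatim}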
\subsection{Practical Implementation Issues}
\label{sec:practical}
Making practical use of the preceding theory requires (i) a moment
condition; (ii) establishing geometric ergodicity of the sampler at
hand; (iii) choosing $p(n)$; (iv) using RS requires \eqref{eq:mc} or
at least \eqref{eq:reg_pr}; and (v) CBM requires choosing $a_{n}$ and
$b_{n}$.
Since a moment condition is required even in the iid case we do not
view (i) as restrictive. Consider (ii). It is easy to construct
examples where the convergence rate is so slow that a Markov chain CLT
does not hold \citep{robe:1999}, so the importance of establishing the
rate of convergence in \eqref{eq:tvbd} should not be underestimated.
On the other hand, the MCMC community has expended considerable effort
in trying to understand when certain Markov chains are geometrically
ergodic; see the references in Remark~\ref{rm:geo}. In our view, this
is not the obstacle that it once was.
Regarding (iii), we know of no work on choosing an optimal $p(n)$.
Recall that the theory requires $p(n)=o(n^{-1/2})$. In our examples
we use $p(n)=\epsilon I(n \le n^{*})$ where $n^{*} >0$ is fixed.
Since $n^{*}$ is typically chosen based on empirical experience with
the sampler at hand, we might want a penalty for sample sizes greater
than $n^{*}$, so another reasonable choice might be $p(n)=\epsilon I(n
\le n^{*}) + C n^{-k}$ for some $k > 1/2$ and $C >0$.
The issue in (iv), i.e., calculating \eqref{eq:mc} or \eqref{eq:reg_pr}
is commonly viewed as overly burdensome. However, in our experience,
this calculation need not be troublesome. For example,
\citet{mykl:tier:yu:1995} give recipes for constructing \eqref{eq:mc}
and \eqref{eq:reg_pr} for Metropolis-Hastings independence and random
walk samplers; recall \eqref{eq:indep_rp}. There is also some work on
establishing these conditions for very general models; see
\citet{hobe:jone:robe:2005}. Finally, \citet{broc:kada:2005} and
\citet{geye:thom:1995} have shown that regenerations can be made to
occur naturally via simulated tempering.
Consider (v). As we noted in Remark~\ref{rem:bs}, it is common to
choose the batch sizes according to $b_{n} = \lfloor n^{\theta}
\rfloor$ for some $\theta$. \citet{song:schm:1995} and
\citet{chie:1988} have addressed the issue of what value of $\theta$
should be used from different theoretical points of view. In
particular, \citet{chie:1988} showed that (under regularity
conditions) using $\theta = 1/2$ results in the batch means
approaching asymptotic normality at the fastest rate.
\citet{song:schm:1995} showed that (under different regularity
conditions) using $\theta=1/3$ minimizes the asymptotic mean-squared
error of $\hat{\sigma}^{2}_{BM}$. Note that Remark~\ref{rem:bs} shows
that $\theta=1/3$ requires a stronger moment condition than
$\theta=1/2$. We further address this issue in
Section~\ref{sec:examples}.
\subsection{Alternatives to BM and RS}
\label{sec:alternative}
We chose to focus on BM and RS since in MCMC settings they seem to be
the most common methods for estimating the variance of the asymptotic
normal distribution. However, there are other methods which may enjoy
strong consistency; e.g. see \citet{dame:1991}, \citet{geye:1992},
\citet{numm:2002} and \citet{peli:shao:1995}. In particular,
\citet{dame:1991} uses a strong invariance principle to obtain strong
consistency of certain spectral variance estimators under conditions
similar to those required in Proposition~\ref{prop:ue_sc_bm}.
Apparently, this can be extended to geometrically ergodic chains via
Lemma~\ref{lemma:sip} to obtain a result with regularity conditions
similar to Proposition~\ref{prop:sc_bm}. However, we do not pursue
this further here.
\section{Examples}
\label{sec:examples}
In this section we investigate the finite sample performance of RS, BM
with 30 batches, and CBM with $b_{n}=\lfloor n^{1/3} \rfloor$ and
$b_{n}=\lfloor n^{1/2} \rfloor$ in four examples. In particular, we
examine the coverage probabilities and half-widths of the resulting
intervals as well as the required simulation effort. While each
example concerns a different statistical model and MCMC sampler there
are some commonalities. In each case we perform many independent
replications of the given MCMC sampler. The number of replications
ranges from 2000 to 9000 depending on the complexity of the example.
We used all methods on the \textit{same} output from each replication
of the MCMC sampler. When the half-width of a 95\% interval with
$p(n) = \epsilon I(n \le n^{*})$ (or $p(R) = \epsilon I(R \le R^{*})$
for RS) is less than $\epsilon$ for a particular method, that
procedure was stopped and the chain length recorded. Our choice of
$n^{*}$ is different for each example and was chosen based on our
empirical experience with the given Markov chain. Other procedures
would continue until all of them were below the targeted half-width,
at which time a single replication was complete. In order to estimate
the coverage probabilities we need true values of the quantities of
interest. These are not analytically available in three of our
examples. Our solution is to obtain precise estimates of the truth
through independent methods which are different for each example. The
details are described below. The results are reported in
Table~\ref{tab:summary}.
\subsection{Toy Example}
\label{sec:toy}
Consider estimating the mean of a $\text{Pareto}(\alpha, \beta)$
distribution, i.e., $\alpha \beta / (\beta-1)$, $\beta > 1$, using a
Metropolis-Hastings independence sampler with a $\text{Pareto}
(\alpha, \lambda)$ candidate. Let $\pi$ be the target density and
$\nu$ be the proposal density. Assume $\beta \ge \lambda$. Then for
$x \ge \alpha$
\[
\frac{\pi(x)}{\nu(x)} = \frac{\beta}{\lambda} \alpha^{\beta -
\lambda}x^{\lambda-\beta} \le \frac{\beta}{\lambda} \; .
\]
By Theorem 2.1 in \citet{meng:twee:1996} this sampler is uniformly
ergodic and
\[
\|P^{n}(x,\cdot) - \pi(\cdot)\| \le \left(1 -
\frac{\lambda}{\beta}\right)^{n} \; .
\]
In order to ensure the moment conditions required for
Proposition~\ref{prop:sc_bm} we set $\beta=10$ and $\lambda=9$ in
which case the right hand side is $10^{-n}$. Hence this sampler
converges extremely fast. Implementation of RS was accomplished using
\eqref{eq:indep_rp} with $c=1.5$.
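The entire toy example fits in a few lines; the sketch below (ours) draws
from the $\text{Pareto}(\alpha,\lambda)$ candidate by inversion, accepts with
the usual independence ratio, and, upon acceptance, flags a regeneration with
the probability in \eqref{eq:indep_rp} (on a rejection no regeneration is
attempted).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, lam, c = 1.0, 10.0, 9.0, 1.5

def w(x):   # pi(x) / nu(x); normalizing constants are known here
    return (beta / lam) * alpha ** (beta - lam) * x ** (lam - beta)

def regen_prob(x, y):
    """Eq. (eq:indep_rp), conditional on the accepted move x -> y."""
    wx, wy = w(x), w(y)
    if min(wx, wy) > c:
        return c * max(1 / wx, 1 / wy)
    if max(wx, wy) < c:
        return max(wx, wy) / c
    return 1.0

def step(x):
    u = 1.0 - rng.random()                   # uniform on (0, 1]
    y = alpha * u ** (-1 / lam)              # Pareto(alpha, lam) draw
    if rng.random() < w(y) / w(x):           # accept
        return y, rng.random() < regen_prob(x, y)
    return x, False                          # reject: no regeneration
\end{verbatim}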
\subsubsection{Comparing convergence diagnostics with CBM}
\label{sec:compare}
As noted by a referee, one method for terminating the simulation is
via convergence diagnostics. Consider the method of \citet{gewe:1992}
which is a diagnostic that seems close in spirit to the current work.
Geweke's diagnostic (GD) is based on a Markov chain CLT and hence does
not apply much more generally than CBM; the same can be said for many
other diagnostics. GD uses a hypothesis test to ascertain when
$\bar{g}_{n}$ has stabilized.
In the remainder of this subsection we compare GD and CBM in terms of
mean-squared error (MSE) and chain length. To this end we ran 9000
independent replications of the independence sampler with $\alpha=1$,
$\beta=10$ and $\lambda=9$. We used CBM and GD on the output in the
following manner. For each replication we set $n^{*}=45$ but the R
package {\tt boa} required a minimum of 120 iterations in order to
calculate GD. After the minimum was achieved and the cutoff for a
particular method was attained we noted the chain length and the
current estimate of $E_{\pi} g$. The cutoff for CBM was to set the
desired half-width to $\epsilon=.005$. The result of using GD is a
p-value. We chose four values (.05, .10, .2 and .4) for the threshold
in an attempt to tune the computation. The results are reported in
Table~\ref{tab:pareto}. As we previously noted, this sampler mixes
extremely well. Thus it is not surprising that using GD results in a
small estimated MSE. However, using CBM results in much smaller MSE
than GD. The average chain lengths make it clear that GD stops the
simulation much too soon. Moreover, changing the p-value threshold for
GD does not result in substantial improvements in estimation accuracy.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Method & Cutoff & Estimated MSE & Average Chain Length\\\hline
CBM ($b_{n}=\lfloor n^{1/3} \rfloor$) & $\epsilon=.005$ & $6.65 \times
10^{-6} (9.9 \times 10^{-8})$ & 2428 (5) \\
CBM ($b_{n}=\lfloor n^{1/2} \rfloor$)& $\epsilon=.005$ & $7.34 \times
10^{-6} (1.2 \times 10^{-8})$ & 2615 (3) \\ \hline
Geweke & p-value=.4 & $1.17 \times 10^{-4} (2\times 10^{-6})$
&202.6 (3.4)\\
Geweke & p-value=.2 & $1.30 \times 10^{-4} (2\times 10^{-6})$
&148.9 (1.6) \\
Geweke & p-value=.1 & $1.34 \times 10^{-4} (2\times 10^{-6})$
&133.4 (.9) \\
Geweke & p-value=.05& $1.37 \times 10^{-4} (2\times 10^{-6})$
&127.4 (.5) \\ \hline
\end{tabular}
\caption{\label{tab:pareto} Summary statistics for CBM versus GD for
the example of Section~\ref{sec:toy}. Standard errors of estimates are in parentheses.}
\end{table}
\subsection{A Hierarchical Model}
\label{sec:hm}
\citet{efro:morr:1975} present a data set that gives the raw batting
averages (based on 45 official at-bats) and a transformation
($\sqrt{45} \, \text{arcsin}(2x-1)$) for 18 Major League Baseball
players during the 1970 season. \citet{rose:1996} considers the
following conditionally independent hierarchical model for the
transformed data. Suppose for $i=1,\ldots,K$ that
\begin{eqnarray}
\label{eq:rose_model}
Y_{i} | \theta_{i} \sim \mbox{N}(\theta_{i},1) & \hspace*{5mm} &
\theta_{i} | \mu, \lambda \sim \mbox{N}(\mu, \lambda) \\
\lambda \sim \mbox{IG}(2, 2) & & f(\mu) \propto 1 \; .\nonumber
\end{eqnarray}
(Note that we say $W \sim \text{Gamma} (\alpha, \beta)$ if its density
is proportional to $w^{\alpha - 1} e^{-\beta w} I(w > 0)$ and if $X
\sim \text{Gamma}(b,c)$ then $X^{-1} \sim \text{IG}(b,c)$.)
\citet{rose:1996} introduces a Harris ergodic block Gibbs sampler that
has the posterior, $\pi(\theta,\mu,\lambda|y)$, characterized by the
hierarchy in \eqref{eq:rose_model} as its invariant distribution. This
Gibbs sampler completes a one-step transition $(\lambda', \mu',
\theta') \rightarrow (\lambda, \mu, \theta)$ by drawing from the
distributions of $\lambda | \theta'$ then $\mu | \theta', \lambda$ and
subsequently $\theta | \mu, \lambda$. The full conditionals needed to
implement this sampler are given by
$$
\lambda | \theta, y \sim \mathrm{IG} \left( 2 + \frac{K-1}{2},
2 + \frac{\sum{(\theta_i - \bar{\theta})^{2}}}{2} \right), \hspace*{3mm}
\mu | \theta, \lambda, y \sim \mathrm{N} \left( \bar{\theta},
\frac{\lambda}{K} \right),
$$
$$
\theta_i | \lambda, \mu, y \stackrel{\mathrm{ind}}{\sim}~ \mathrm{N}
\left( \frac{\lambda y_i + \mu}{\lambda + 1}, \frac{\lambda }
{\lambda + 1} \right).
$$
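One full sweep of this block Gibbs sampler is easy to code; the following
sketch (ours) uses the convention stated above, drawing an $\mathrm{IG}(b,c)$
variate as the reciprocal of a $\text{Gamma}(b,c)$ variate (note that NumPy
parameterizes the gamma by its scale $1/c$).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def block_gibbs_step(theta, y):
    """One transition (lambda', mu', theta') -> (lambda, mu, theta)."""
    K = len(y)
    theta_bar = theta.mean()
    shape = 2 + (K - 1) / 2
    rate = 2 + np.sum((theta - theta_bar) ** 2) / 2
    lam = 1 / rng.gamma(shape, 1 / rate)          # lambda | theta ~ IG
    mu = rng.normal(theta_bar, np.sqrt(lam / K))  # mu | theta, lambda
    theta = rng.normal((lam * y + mu) / (lam + 1),
                       np.sqrt(lam / (lam + 1)))  # theta_i cond. indep.
    return lam, mu, theta
\end{verbatim}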
Rosenthal proved geometric ergodicity of the associated Markov
chain. However, MCMC is not required to sample from the posterior; in
Appendix~\ref{sec:rose} we develop an accept-reject sampler that
produces an iid sample from the posterior. Also in
Appendix~\ref{sec:rose} we derive an expression for the probability of
regeneration \eqref{eq:reg_pr}.
We focus on estimating the posterior mean of $\theta_{9}$, the
``true'' long-run (transformed) batting average of the Chicago Cubs'
Ron Santo. It is straightforward to check that the moment conditions
for CBM and RS are met. Finally, we employed our accept-reject
sampling algorithm to generate $9 \times 10^{7}$ independent draws
from $\pi(\theta_{9} |y)$ which were then used to estimate the
posterior mean of $\theta_{9}$ which we assumed to be the truth.
\subsection{Calculating Exact Conditional P-Values}
\label{sec:p-value}
\citet[][p. 432]{agre:2002} reports data that correspond to pairs of
scorings of tumor ratings by two pathologists. A linear by linear
association model specifies that the log of the Poisson mean in cell
$i,j$ satisfies
$$
\log \mu_{ij} = \alpha + \beta_i + \gamma_j + \delta \, i j \; .
$$
A parameter free null distribution for testing goodness-of-fit is
obtained by conditioning on the sufficient statistics for the
parameters, i.e., the margins of the table and $\sum_{ij} n_{ij} \,
ij$, where the $n_{ij}$ are the observed cell counts. The resulting
conditional distribution is a generalization of the hypergeometric
distribution. An exact p-value for goodness-of-fit versus a saturated
alternative can be calculated by summing the conditional probabilities
of all tables satisfying the margins and the additional constraint and
having deviance statistics larger than the observed.
For the current data set there are over twelve billion tables that
satisfy the margin constraints but an exhaustive search revealed that
there are only roughly 34,000 tables that also satisfy the constraint
induced by $\sum_{ij} n_{ij} \, ij$. We will denote this set of
permissible tables by $\Gamma$. Now the desired p-value is given by
\begin{equation}
\label{eq:p-value}
\sum_{y \in \Gamma} I[d(y) \ge d(y_{obs})] \, \pi(y)
\end{equation}
where $d(\cdot)$ is the deviance function and $\pi$ denotes the
generalized hypergeometric. Since we have enumerated $\Gamma$ we find
that the true exact p-value is .044 whereas the chi-squared
approximation yields a p-value of .368. However, a different data set
with different values of the sufficient statistics will have a
different reference set which must be enumerated in order to find the
exact p-value. This would be too computationally burdensome to
implement generally and hence it is common to resort to MCMC-based
approximations \citep[see e.g.][]{caff:boot:2001,diac:stur:1998}.
To estimate \eqref{eq:p-value} we will use the Metropolis-Hastings
algorithm developed in \citet{caff:boot:2001}. This algorithm is also
employed by the R package \texttt{exactLoglinTest}. The associated
Markov chain is Harris ergodic and its invariant distribution is the
appropriate generalized hypergeometric distribution. Moreover, the
chain is uniformly ergodic and since we are estimating the expectation
of a bounded function the regularity conditions for both RS and CBM
are easily met.
Implementation of RS is straightforward. As we mentioned earlier, in
finite state spaces regenerations occur whenever the chain returns to
any fixed state. In order to choose the fixed state we ran the
algorithm for 1000 iterations and chose the state which had the
highest probability with respect to the stationary distribution. The
same fixed state was used in each replication.
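In code, splitting the output into tours then amounts to bookkeeping; the
sketch below (ours) collects the $(N_r, S_r)$ pairs delimited by visits to
the chosen fixed state, discarding the incomplete tours at both ends of the
run.
\begin{verbatim}
import numpy as np

def tours_from_fixed_state(states, g_vals, x_star):
    """Tours delimited by returns to a fixed state x_star.

    In a discrete state space, time i+1 is a regeneration time
    whenever states[i] == x_star.
    """
    regen = [i + 1 for i in range(len(states) - 1)
             if np.array_equal(states[i], x_star)]
    N = np.diff(regen)                              # tour lengths N_r
    S = np.array([np.sum(g_vals[a:b])               # tour sums S_r
                  for a, b in zip(regen[:-1], regen[1:])])
    return N, S
\end{verbatim}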
\subsection{A Model-Based Spatial Statistics Application}
\label{sec:spatial}
Consider the Scottish lip cancer data set \citep{clay:kald:1987} which
consists of the number of cases of lip cancer registered in each of
the 56 (pre-reorganization) counties of Scotland, together with the
expected number of cases given the age-sex structure of the
population. We assume a Poisson likelihood for areal (spatially
aggregated) data. Specifically, for $i=1,\ldots,N$ we assume that given
$\mu_i$ the disease counts $Y_{i}$ are conditionally independent and
\begin{equation}
Y_{i} | \mu_{i} \sim \text{Poisson}(E_{i}e^{\mu_i})
\end{equation}
where $E_{i}$ is the known `expected' number of disease events in the
$i$th region assuming constant risk and $\mu_i$ is the log-relative
risk of disease for the $i$th region. Set $\phi = (\phi_{1}, \ldots,
\phi_{N})^{T}$. Each $\mu_i$ is modeled as $\mu_i = \theta_i +
\phi_i$ where
\begin{equation*}
\theta_i|\tau_h \sim \; \text{N} (0,1/\tau_h), \hspace*{6mm}
\phi | \tau_c \sim \; \text{CAR} (\tau_c) \; \propto \;
\tau_c^{N/2}\exp\left(-\frac{\tau_c}{2}\phi^TQ\phi\right) , \text{
and }
\end{equation*}
$$Q_{ij}=\left\{
\begin{array}{l l}
\phantom{-}n_i & \mbox{if }i=j\\
\phantom{-}0 & \mbox{if }i\mbox{ is not adjacent to }j\\
-1 & \mbox{if }i\mbox{ is adjacent to }j\\
\end{array} \right.
$$
with $n_i$ the number of neighbors for the $i$th region. Each
$\theta_i$ captures the $i$th region's extra-Poisson variability due
to area-wide heterogeneity, while each $\phi_i$ captures the $i$th
region's excess variability attributable to regional clustering. The
priors on the precision parameters are $\tau_h \sim \text{Gamma} (1,
.01)$ and $\tau_c \sim \text{Gamma} (1, .02)$. This is a challenging
model to consider since the random effects parameters
($\theta_i,\phi_i$) are not identified in the likelihood, and the
spatial prior used is improper. Also, no closed form expressions are
available for the marginal distributions of the parameters, and the
posterior distribution has $2N+2$ dimensions (114 for the lip cancer
data).
\citet{hara:tier:2004} establish uniform ergodicity of a Harris
ergodic Metropolis-Hastings independence sampler that uses a
heavy-tailed proposal and has invariant distribution $\pi(\theta,
\phi, \tau_h, \tau_c | y)$, where $\theta=(\theta_{1}, \ldots,
\theta_{N})^{T}$. In our implementation of RS we used the formula for
the probability of a regeneration given by \eqref{eq:indep_rp} with
$\log c= -342.72$; the choice of $c$ was guided by the empirical
supremum of the ratio of the invariant density to the proposal
density, based on several draws from the proposal.
We focus on estimating the posterior expectation of $\phi_{7}$, the
log-relative risk of disease for County 7 attributable to spatial
clustering. Finally, we used an independent run of length
$10^{7}$ to obtain an estimate which we treated as the `true value'.
\subsection{Summary}
\label{sec:summary}
Table~\ref{tab:summary} reveals that the estimates of the coverage
probabilities are all less than the desired .95. However, examining
the standard errors shows that only BM is significantly less in all of
the examples and the estimated coverage probability for RS is
\textit{not} significantly different from .95 in 3 out of 4. The
story for CBM is more complicated in that the coverage depends on the
choice of $b_{n}$. Using $b_{n}=\lfloor n^{1/3} \rfloor$ gives the
best coverage for the examples in Sections~\ref{sec:toy}
and~\ref{sec:hm} while $b_{n}=\lfloor n^{1/2} \rfloor$ is superior for
those in Sections~\ref{sec:p-value} and~\ref{sec:spatial}. The reason
for this is that the Markov chains in Sections~\ref{sec:toy}
and~\ref{sec:hm} mix exceptionally well and hence smaller batch sizes
can be tolerated. However, the examples in Sections~\ref{sec:p-value}
and~\ref{sec:spatial} are realistic problems and hence the chains do
not mix as well so that larger batch sizes are required. Thus we
would generally recommend using $b_{n}=\lfloor n^{1/2} \rfloor$.
The example in subsection~\ref{sec:p-value} deserves to be singled out
due to the low estimated coverage probabilities. The goal in this
example was to estimate a fairly small probability, a situation in
which the Wald interval is known to have poor coverage even in iid
settings.
While RS and CBM appear comparable in terms of coverage probability,
RS tends to result in slightly longer runs than CBM, which in turn
results in longer runs than BM. Moreover, RS and CBM are comparable in their
ability to produce intervals that meet the target half-width more
closely than BM. Also, the intervals for RS are apparently more
stable than those of CBM and BM. Finally, BM underestimates the Monte
Carlo standard error and therefore suggests stopping the chain too
early.
While RS has a slight theoretical advantage over CBM their finite
sample properties appear comparable. Also, like RS, CBM avoids the
burn-in issue, which has been a long standing obstacle to MCMC
practitioners. In addition, CBM enjoys the advantage of being slightly
easier to implement. Thus CBM clearly has a place in the tool kit of
MCMC practitioners.
\begin{appendix}
\section{Proof of Lemma~\ref{lemma:rsmoments}}
\label{app:rsmoments}
\subsection{Preliminary Results}
\label{app:rs_prelim}
Recall the split chain $X'$ and that $0 = \tau_0 < \tau_1 < \tau_2 <
\cdots$ denote the regeneration times; i.e., $\tau_{r+1} = \min \{i >
\tau_{r} : \delta_{i-1}=1 \}$.
\begin{lemma} \citep[Lemma 1]{hobe:jone:pres:rose:2002}
\label{lemma:hjpr1}
Let $X$ be a Harris ergodic Markov chain and assume that
\eqref{eq:mc} holds. Then for any function $h : {\mathsf X}^{\infty}
\rightarrow \mathbb{R}$
\[
\text{E}_{\pi} | h(X_{0}, X_{1}, \ldots )| \ge c \text{E}_{Q} |
h(X_{0}, X_{1}, \ldots )|
\]
where $c = \text{E}_{\pi} s$.
\end{lemma}
\begin{lemma} \citep[Lemma 2]{hobe:jone:pres:rose:2002}
\label{lemma:hjpr2}
Let $X$ be a Harris ergodic Markov chain and assume that
\eqref{eq:mc} holds. If $X$ is geometrically ergodic, then there
exists a $\beta > 1$ such that $\text{E}_{\pi} \beta^{\tau_{1}} <
\infty$.
\end{lemma}
\begin{corollary} \label{cor:hjpr3}
Assume the conditions of Lemma~\ref{lemma:hjpr2}. For any $a > 0$
\[
\sum_{i=0}^{\infty} \left[ \text{Pr}_{\pi} (\tau_{1} \ge i + 1)
\right]^{a} \le \left(\text{E}_{\pi}
\beta^{\tau_1}\right)^{a}\sum_{i=0}^{\infty} \beta^{-a(i+1)} <
\infty \; .
\]
\end{corollary}
\subsection{Proof of Lemma~\ref{lemma:rsmoments}}
We prove only part 2 of the lemma as part 1 is similar. Without loss
of generality we assume $0 < \delta < 1$. By Lemma~\ref{lemma:hjpr1},
it is enough to verify that $\text{E}_{\pi}\tau_{1}^{p} < \infty$ and
$\text{E}_{\pi} S_{1}^{p+\delta} < \infty$. Lemma~\ref{lemma:hjpr2}
shows that $\text{E}_{\pi}\tau_{1}^{p} < \infty$ for any $p>0$. Note
that
\begin{equation*}
\begin{split}
& \left( \sum_{i=0}^{\tau_{1} - 1} g(X_{i})\right)^{p+\delta} \le
\left( \sum_{i=0}^{\tau_{1} - 1} |g(X_{i})| \right)^{p+\delta} =
\left( \sum_{i=0}^{\infty} I(0 \le i \le \tau_{1} - 1)
|g(X_{i})| \right)^{p+\delta} \\
& \le \sum_{i_{1}=0}^{\infty} \cdots \sum_{i_{p}=0}^{\infty}
\sum_{i_{p+1}=0}^{\infty}\left[ \prod_{j=1}^{p} I(0 \le i_{j} \le
\tau_{1} - 1) |g(X_{i_{j}})| \right]I(0 \le i_{p+1} \le \tau_{1} -
1)|g(X_{i_{p+1}})|^{\delta}
\end{split}
\end{equation*}
and hence
\begin{equation*}
\begin{split}
\text{E}_{\pi} S_{1}^{p+\delta} & \le \sum_{i_{1}=0}^{\infty} \cdots
\sum_{i_{p}=0}^{\infty}
\sum_{i_{p+1}=0}^{\infty}\text{E}_{\pi}\left( \left[
\prod_{j=1}^{p+1} I(0 \le i_{j} \le \tau_{1} - 1)\right] \left[
\prod_{j=1}^{p}|g(X_{i_{j}})| \right] |g(X_{i_{p+1}})|^{\delta}
\right) \\
& \le \sum_{i_{1}=0}^{\infty} \cdots \sum_{i_{p}=0}^{\infty}
\sum_{i_{p+1}=0}^{\infty} \left[ \text{E}_{\pi} I(0 \le i_{1} \le
\tau_{1} - 1) |g(X_{i_{1}})|^{2}
\right]^{1/2} \times \\
& \cdots \times \left[\text{E}_{\pi} I(0 \le i_{p} \le \tau_{1} - 1)
|g(X_{i_{p}})|^{2^{p}} \right]^{1/2^{p}} \left[ \text{E}_{\pi} I(0
\le i_{p+1} \le \tau_{1} - 1) |g(X_{i_{p+1}})|^{2^{p} \delta}
\right]^{1/2^{p}}
\end{split}
\end{equation*}
where the second inequality follows from repeated application of the
Cauchy-Schwarz inequality. Set $a_{j}=1+2^{j}/\delta$ and
$b_{j}=1+\delta/2^{j}$ for $j=1,2,\ldots,p$ and apply H{\"o}lder's
inequality to obtain
\[
\text{E}_{\pi} I(0 \le i_{j} \le \tau_{1} - 1) |g(X_{i_{j}})|^{2^{j}}
\le \left[ \text{E}_{\pi} I(0 \le i_{j} \le \tau_{1} - 1)
\right]^{1/a_{j}} \left[ \text{E}_{\pi} |g(X_{i_{j}})|^{2^{j}+\delta}
\right]^{1/b_{j}} \; .
\]
Note that
\[
c_{j} := \left[ \left(\text{E}_{\pi}
|g(X_{i_{j}})|^{2^{j}+\delta}\right)^{1/b_{j}} \right]^{1/2^{j}}<
\infty \; .
\]
Also, if $a_{p+1} = 1 + 2^{p}$ and $b_{p+1} = 1 + 1/2^{p}$ then
\[
\text{E}_{\pi} I(0 \le i_{p+1} \le \tau_{1} - 1)
|g(X_{i_{p+1}})|^{2^{p} \delta} \le \left[ \text{E}_{\pi} I(0 \le
i_{p+1} \le \tau_{1} - 1) \right]^{\frac{1}{a_{p+1}}} \left[
\text{E}_{\pi} |g(X_{i_{p+1}})|^{\delta(2^{p}+1)}
\right]^{\frac{1}{b_{p+1}}} \; .
\]
Notice that
\[
c_{p+1} := \left[ \left(\text{E}_{\pi}
|g(X_{i_{p+1}})|^{\delta(2^{p}+1)}\right)^{1/b_{p+1}}
\right]^{1/2^{p}} < \infty
\]
and set $c=\max\{c_{1},\ldots,c_{p+1}\}$. Then an appeal to
Corollary~\ref{cor:hjpr3} yields
\begin{equation*}
\begin{split}
&\text{E}_{\pi} S_{1}^{p+\delta} \le c \left[ \prod_{j=1}^{p}
\sum_{i_{j}=0}^{\infty} \{\Pr_{\pi}(\tau_{1} \ge i_{j} +
1)\}^{1/(a_{j} 2^{j})} \right]\!\left[ \sum_{i_{p+1}=0}^{\infty}
\{\Pr_{\pi}(\tau_{1} \ge i_{p+1} + 1)\}^{1/(a_{p+1} 2^{p})}\right]<
\infty \; .
\end{split}
\end{equation*}
\section{Proof of Proposition~\ref{prop:sc_bm}}
\label{app:sc_bm}
\subsection{Preliminary Results}
\label{app:prelim}
Recall that $B=\{B(t), t \ge 0\}$ denotes a standard Brownian motion.
Define
\begin{equation}
\label{eq:unbm}
\tilde{\sigma}_{*}^{2} = \frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}
\left( \bar{B}_{j}(b_{n}) - \bar{B}(n) \right)^{2}
\end{equation}
where $\bar{B}_{j}(b_n) = b_{n}^{-1} \left( B((j+1)b_{n}) - B(j b_{n})
\right)$ and $\bar{B}(n) = n^{-1} B(n)$.
\begin{lemma}\citep[][p. 508]{dame:1994}
For all $\epsilon > 0$ and for almost all sample paths there exists
$n_0 (\epsilon)$ such that for all $n \ge n_0$\vspace{-1mm}
\begin{equation}
\label{eq:bj_lil}
| \bar{B}_{j} (b_n) | \le \sqrt{2} (1 + \epsilon) b_{n}^{-1/2} [ \log
(n / b_n) + \log \log n]^{1/2} \; .
\end{equation}
\end{lemma}
\vspace{-2mm}
\begin{lemma} \citep{csor:reve:1981}
For all $\epsilon > 0$ and for almost all sample
paths there exists $n_0 (\epsilon)$ such that for all $n \ge
n_0$\vspace{-1mm}
\begin{equation}
\label{eq:b_lil}
|B(n)| < (1 + \epsilon) [2 n \log \log n]^{1/2} \; .
\end{equation}
\end{lemma}
\subsection{Proof of Proposition~\ref{prop:sc_bm}}
Proposition~\ref{prop:sc_bm} follows from Lemma~\ref{lemma:sip} and
the following two lemmas:
\begin{lemma} \citep[][Proposition 3.1]{dame:1994} \label{lemma:dbm}
Assume \vspace*{-2mm}
\begin{enumerate}
\item $b_{n} \rightarrow \infty$ and $n / b_{n} \rightarrow \infty$ as
$n \rightarrow \infty$ and \vspace*{-2mm}
\item there exists a constant $c \ge 1$ such that $\sum_{n} (b_{n} /
n)^{c} < \infty$ \vspace*{-2mm}
\end{enumerate}
then as $n \rightarrow \infty$, $\tilde{\sigma}_{*}^{2} \rightarrow
1$ a.s.
\end{lemma}
\begin{lemma} \label{lemma:ours}
Assume that \eqref{eq:sip} holds with $\gamma(n)=n^{\alpha} \log n$
where $\alpha = 1/(2+\delta)$. If \vspace*{-2mm}
\begin{enumerate}
\item $a_n \rightarrow \infty$ as $ n \rightarrow \infty$, \vspace*{-2mm}
\item $b_{n} \rightarrow \infty$ and $n / b_{n} \rightarrow \infty$ as
$n \rightarrow \infty$ and \vspace*{-2mm}
\item $b_{n}^{-1} n^{2\alpha} [ \log n ]^{3} \rightarrow 0$ as $n
\rightarrow \infty$ where $\alpha = 1/(2 + \delta)$ \vspace*{-2mm}
\end{enumerate}
then as $n \rightarrow \infty$, $ \hat{\sigma}_{BM}^{2} - \sigma_{g}^{2}
\tilde{\sigma}_{*}^{2} \rightarrow 0$ a.s.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:ours}]
Recall that $X=\{X_1 , X_2 , \ldots \}$ is a Harris ergodic Markov
chain. Define the process $Y$ by $Y_{i} = g(X_{i})- \text{E}_{\pi}g$
for $i=1, 2, 3, \ldots$. Then
\[
\hat{\sigma}_{BM}^{2} = \frac{b_n}{a_n -1} \sum_{j=0}^{a_n - 1} \left(
\bar{Y}_{j} (b_n) - \bar{Y} (n) \right)^{2}
\]
where $\bar{Y}_{j} (b_n) = b_{n}^{-1} \sum_{i=1}^{b_n} Y_{jb_n + i}$
for $j=0,\ldots , a_{n} - 1$ and $\bar{Y} (n) = n^{-1} \sum_{i=1}^{n}
Y_{i}$. Since
\[
\bar{Y}_{j} (b_n) - \bar{Y}(n) = \bar{Y}_{j} (b_n) - \bar{Y}(n) \pm
\sigma_{g} \bar{B}_{j} (b_n)
\, \pm \sigma_{g} \bar{B} (n)
\]
we have
\begin{equation*}
\begin{split}
\left|\hat{\sigma}_{BM}^{2} - \sigma_{g}^{2} \tilde{\sigma}_{*}^{2}
\right| & \le \frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1} \left[
(\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n) )^{2} + (\bar{Y} (n)
- \sigma_{g}
\bar{B}(n))^{2} \right.\\
& + |2(\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n)) (\bar{Y} (n) -
\sigma_{g} \bar{B}(n))| + |2 \sigma_{g} (\bar{Y}_{j} (b_n) -
\sigma_{g} \bar{B}_{j} (b_n))\bar{B}_{j} (b_n)| \\
& +| 2\sigma_{g}(\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n))
\bar{B}(n)| + | 2 \sigma_{g} (\bar{Y} (n) -
\sigma_{g}\bar{B}(n))\bar{B}_{j}(b_n)| \\
& \left. + |2 \sigma_{g} (\bar{Y} (n) - \sigma_{g} \bar{B}(n))\bar{B}(n)|
\right] \; .
\end{split}
\end{equation*}
Now we will consider each term in the sum and show that it tends to
0. \vspace*{-2mm}
\begin{enumerate}
\item Our assumptions say that there exists a constant $C$ such that
for all large $n$
\begin{equation}
\label{eq:asa1}
\left| \sum_{i=1}^{n} g(X_{i}) - n \text{E}_{\pi} g - \sigma_{g} B(n)
\right| < C n^{\alpha} \log n \hspace*{5mm} a.s.
\end{equation}
Note that
\[
\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n) = \frac{1}{b_n} \left[
\sum_{i=1}^{(j+1)b_n} Y_{i} - \sigma_{g} B((j+1)b_n)\right] -
\frac{1}{b_n} \left[ \sum_{i=1}^{jb_n} Y_{i} - \sigma_{g} B(jb_n)\right]
\]
and hence by \eqref{eq:asa1}
\begin{equation}
\label{eq:asa2}
\begin{split}
|\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n)| & \le \frac{1}{b_n}
\left[ | \sum_{i=1}^{(j+1)b_n} Y_{i} - \sigma_{g} B((j+1)b_n)| + |
\sum_{i=1}^{jb_n} Y_{i} - \sigma_{g} B(jb_n) | \right] \\
& < \frac{2}{b_n} C n^{\alpha} \log n \; .
\end{split}
\end{equation}
Then
\[
\frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}(\bar{Y}_{j}
(b_n) - \sigma_{g} \bar{B}_{j} (b_n) )^{2} < 4 C^{2} \frac{a_n}{a_n -
1} b_{n}^{-1} n^{2\alpha} (\log n)^{2} \; \rightarrow 0
\]
as $n \rightarrow \infty$ by conditions 1 and 3.
\vspace*{-2mm}
\item Apply \eqref{eq:asa1} to obtain
\begin{equation}
\label{eq:asa3}
| \bar{Y} (n) - \sigma_{g} \bar{B} (n) | = n^{-1} | \sum_{i=1}^{n}
Y_{i} - \sigma_{g} B(n) | < C n^{\alpha - 1} \log n \; .
\end{equation}
Then
\[
\frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}(\bar{Y} (n) - \sigma_{g}
\bar{B}(n))^{2} < C^{2} \frac{a_n}{a_n - 1} \frac{b_n}{n} \frac{(\log
n)^{2}}{n^{1-2\alpha}} \rightarrow 0
\]
as $n \rightarrow \infty$ by conditions 1 and 2 and since
$1 - 2\alpha > 0$.
\vspace*{-2mm}
\item By \eqref{eq:asa2} and \eqref{eq:asa3}
\[
|2(\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n)) (\bar{Y} (n) - \sigma_{g}
\bar{B}(n))| < 4 C^{2} b_{n}^{-1} n^{2 \alpha - 1} (\log n)^{2} \; .
\]
Thus
\[
\frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}|2(\bar{Y}_{j} (b_n) -
\sigma_{g} \bar{B}_{j} (b_n)) (\bar{Y} (n) - \sigma_{g} \bar{B}(n))| < 4 C^{2}
\frac{a_n}{a_n - 1} \frac{(\log n)^{2}}{n^{1-2\alpha}} \rightarrow 0
\]
as $n \rightarrow \infty$ by condition 1 and since $1 - 2\alpha > 0$.
\vspace*{-2mm}
\item Since $b_n \ge 2$, \eqref{eq:bj_lil} and \eqref{eq:asa2}
together imply
\[
\begin{split}
|(\bar{Y}_{j} (b_n) - \sigma_{g} \bar{B}_{j} (b_n))\bar{B}_{j} (b_n)| &
< 2^{3/2} C (1 + \epsilon) b_{n}^{-1} \left[ b_{n}^{-1} n^{2 \alpha}
(\log n)^{2} \log(n/b_{n}) \right. \\
& \left. + b_{n}^{-1} n^{2 \alpha} (\log n)^{2} \log \log n
\right]^{1/2}
\end{split}
\]
Hence
\[
\begin{split}
\frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}|2 \sigma_{g} (\bar{Y}_{j}
(b_n) - \sigma_{g} \bar{B}_{j} (b_n))\bar{B}_{j} (b_n)| & \le 8 \sigma_{g} C (1
+ \epsilon) \frac{a_n}{a_n - 1} \left[ b_{n}^{-1} n^{2 \alpha}
(\log n)^{2} \log(n/b_{n}) \right. \\
& + \left. b_{n}^{-1} n^{2 \alpha} (\log n)^{2} \log \log n
\right]^{1/2} \; \rightarrow 0
\end{split}
\]
as $n \rightarrow \infty$ by conditions 1 and 3.
\vspace*{-2mm}
\item By \eqref{eq:asa2} and \eqref{eq:b_lil} $|(\bar{Y}_{j} (b_n) -
\sigma_{g} \bar{B}_{j} (b_n)) \bar{B}(n)| < 4 C (1 + \epsilon)
b_{n}^{-1} n^{-1/2 + \alpha} (\log n)(\log \log n)^{1/2}$ so that
\[
\frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}|2 \sigma_{g}(\bar{Y}_{j}
(b_n) - \sigma_{g} \bar{B}_{j} (b_n)) \bar{B}(n)| < 8 \sigma_{g} C (1 +
\epsilon) \frac{a_n}{a_n - 1} \frac{(\log n)(\log \log
n)^{1/2}}{n^{1/2 - \alpha}} \; \rightarrow 0
\]
as $n \rightarrow \infty$ by condition 1 and since $1/2 - \alpha > 0$.
\vspace*{-2mm}
\item Use \eqref{eq:bj_lil} and \eqref{eq:asa3} to get
\[
| (\bar{Y} (n) - \sigma_{g}\bar{B}(n))\bar{B}_{j}(b_n)| < \sqrt{2} C (1 +
\epsilon) \frac{n^{\alpha-1} \log n}{\sqrt{b_{n}}} \left[
\log(n/b_{n}) + \log \log n \right]^{1/2}
\]
and hence using conditions 1, 2 and 3 shows that as $n \rightarrow
\infty$
\[
\begin{split}
& \frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}|2 \sigma_{g} (\bar{Y}
(n) - \sigma_{g}\bar{B}(n))\bar{B}_{j}(b_n)| < \\
& 4 \sigma_{g} C (1 + \epsilon) \frac{a_n}{a_n - 1} \frac{b_n}{n}
\left[ b_{n}^{-1} n^{2\alpha} ((\log n)^{2}\log(n/b_{n}) +
(\log n)^{2} \log \log n )\right]^{1/2} \rightarrow 0
\end{split}
\]
\vspace*{-3mm}
\item Now \eqref{eq:b_lil} and \eqref{eq:asa3} imply $| (\bar{Y} (n) -
\sigma_{g} \bar{B}(n))\bar{B}(n)| < 2 C (1 + \epsilon) n^{-3/2 + \alpha}
(\log n)^{3/2}$. Hence
\[
\frac{b_{n}}{a_{n} - 1} \sum_{j=0}^{a_{n} - 1}|2 \sigma_{g}(\bar{Y}
(n) - \sigma_{g} \bar{B}(n))\bar{B}(n)| < 4 \sigma_{g} C (1 +
\epsilon) \frac{a_n}{a_n - 1} \frac{b_n}{n} \frac{(\log
n)^{3/2}}{n^{1/2 - \alpha}} \; \rightarrow 0
\]
as $n \rightarrow \infty$ by conditions 1 and 2 and since $1/2 -
\alpha > 0$.
\end{enumerate}
\end{proof}
\section{Calculations for Example~\ref{sec:hm}}
\label{sec:rose}
We consider a slightly more general formulation of the model given in
\eqref{eq:rose_model}.
Suppose for $i=1,\ldots,K$ that
\begin{eqnarray}
\label{eq:rose_model1}
Y_{i} | \theta_{i} \sim \mbox{N}(\theta_{i},a) & \hspace*{5mm} &
\theta_{i} | \mu, \lambda \sim \mbox{N}(\mu, \lambda) \\
\lambda \sim \mbox{IG}(b,c) & & f(\mu) \propto 1 \; .\nonumber
\end{eqnarray}
where $a, b, c$ are all known positive constants.
\subsection{Sampling from $\pi(\theta,\mu,\lambda|y)$}
\label{app:ss}
Let $\pi(\theta,\mu,\lambda|y)$ be the posterior distribution
corresponding to the hierarchy in \eqref{eq:rose_model1}. Note
that $\theta$ is a vector containing all of the $\theta_{i}$ and that
$y$ is a vector containing all of the data. Consider the
factorization
\begin{equation}
\label{eq:post}
\pi(\theta,\mu,\lambda|y) = \pi(\theta|\mu,\lambda,y)
\pi(\mu|\lambda,y) \pi(\lambda|y) .
\end{equation}
If it is possible to sequentially simulate from each of the densities
on the right-hand side of \eqref{eq:post} we can produce iid draws
from the posterior. Now $\pi(\theta|\mu,\lambda,y)$ is the product of
independent univariate normal densities, i.e. $\theta_{i} | \mu ,
\lambda, y \sim \text{N}((\lambda y_{i} + a \mu)/(\lambda + a), \,
a\lambda / (\lambda + a))$. Also, $\pi(\mu|\lambda,y)$ is a normal
distribution, i.e. $\mu | \lambda ,y \sim \text{N}(\bar{y}, (\lambda +
a)/K)$. Next
\[
\pi(\lambda |y) \propto \frac{1}{\lambda^{b+1} (\lambda +
a)^{(K-1)/2}} e^{-c/\lambda - s^{2}/[2(\lambda +a)]}
\]
where $\bar{y}=K^{-1} \sum_{i=1}^{K} y_{i}$ and $s^{2} =
\sum_{i=1}^{K} (y_{i} - \bar{y})^{2}$. An accept-reject sampler with
an $\text{IG}(b,c)$ candidate can be used to sample from $\pi(\lambda
|y)$ since if we let $g(\lambda)$ be the kernel of an $\text{IG}(b,c)$
density
\[
\sup_{\lambda \ge 0} \frac{1}{g(\lambda) \lambda^{b+1} (\lambda +
a)^{(K-1)/2}} e^{-c/\lambda - s^{2}/[2(\lambda +a)]} = \sup_{\lambda
\ge 0} \, (\lambda + a)^{(1-K)/2} e^{-s^{2}/[2(\lambda + a)]} = M <
\infty \; .
\]
It is easy to show that the only critical point is $\hat{\lambda}
=s^{2}/(K-1) - a$ which is where the maximum occurs if $\hat{\lambda}
> 0$. But if $\hat{\lambda} \le 0$ then the maximum occurs at 0.
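The resulting accept-reject step for $\pi(\lambda|y)$ is summarized in the
sketch below (ours); for numerical stability with large $K$ one would work
with $\log h$, but the direct version suffices to convey the idea.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def draw_lambda(y, a=1.0, b=2.0, c=2.0):
    """Accept-reject draw from pi(lambda | y) with an IG(b, c) candidate."""
    K = len(y)
    s2 = np.sum((y - y.mean()) ** 2)

    def h(lam):   # target kernel divided by candidate kernel
        return (lam + a) ** ((1 - K) / 2) * np.exp(-s2 / (2 * (lam + a)))

    lam_hat = max(s2 / (K - 1) - a, 0.0)   # maximizer of h
    M = h(lam_hat)
    while True:
        lam = 1 / rng.gamma(b, 1 / c)      # IG(b, c) candidate
        if rng.random() * M <= h(lam):
            return lam
\end{verbatim}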
\subsection{Implementing regenerative simulation}
We begin by establishing the minorization condition \eqref{eq:mc} for
\pcite{rose:1996} block Gibbs sampler. For the one-step transition
$(\lambda', \mu',\theta') \rightarrow (\lambda, \mu, \theta)$ the
Markov transition density, $p$, is given by $p(\lambda, \mu, \theta |
\lambda', \mu', \theta') = f(\lambda, \mu | \theta') f(\theta |
\lambda, \mu)$. Note that $\mathsf{X} = \mathbb{R}^+ \times
\mathbb{R}^1 \times \mathbb{R}^K$. Fix a point $(\tilde{\lambda},
\tilde{\mu}, \tilde{\theta}) \in \mathsf{X}$ and let $D \subseteq
\mathsf{X}$. Then
\[
\begin{split}
p(\lambda, \mu, \theta | \lambda', \mu', \theta')
& = f(\lambda, \mu | \theta') f(\theta | \lambda, \mu) \\
& \geq f(\lambda, \mu | \theta') f(\theta | \lambda, \mu)
I_{\left\{(\lambda, \mu, \theta) \in D \right\}} \\
& = \frac{f(\lambda, \mu | \theta')}{f(\lambda, \mu
|\tilde{\theta})} f(\lambda, \mu | \tilde{\theta}) f(\theta |
\lambda, \mu)
I_{\left\{(\lambda, \mu, \theta) \in D \right\}} \\
& \geq \left\{ \inf_{(\lambda, \mu, \theta) \in D} \frac{f(\lambda,
\mu | \theta')}{f(\lambda, \mu |\tilde{\theta})} \right\}
f(\lambda, \mu |\tilde{\theta}) f(\theta | \lambda, \mu)
I_{\left\{(\lambda, \mu, \theta) \in D \right\}}
\end{split}
\]
and hence \eqref{eq:mc} will follow by setting
\[
\varepsilon = \int_D f(\lambda, \mu|\tilde{\theta}) f(\theta |
\lambda, \mu) ~d\lambda ~d\mu ~d\theta ,
\]
\[
s(\lambda', \mu', \theta') = \varepsilon \inf_{(\lambda, \mu, \theta)
\in D} \frac{f(\lambda, \mu | \theta')}{f(\lambda, \mu
|\tilde{\theta})} \hspace*{5mm} \text{ and } \hspace*{5mm}
q(\lambda, \mu, \theta) = \varepsilon^{-1}
f(\lambda, \mu |\tilde{\theta}) f(\theta | \lambda, \mu)
I_{\left\{(\lambda, \mu, \theta) \in D \right\}}.
\]
Now using \eqref{eq:reg_pr} shows that when $(\lambda, \mu, \theta)
\in D$ the probability of regeneration is given by
\begin{equation}
\label{eq:prob}
\Pr(\delta=1 |\lambda', \mu', \theta', \lambda, \mu, \theta) = \left\{
\inf_{(\lambda, \mu, \theta) \in D} \frac{f(\lambda, \mu |
\theta')}{f(\lambda, \mu |\tilde{\theta})} \right\}
\frac{f(\lambda, \mu |\tilde{\theta})}{f(\lambda, \mu |\theta')}
\end{equation}
Thus we need to calculate the infimum and plug into \eqref{eq:prob}.
To this end let $0 < d_1 < d_2 < \infty$, $-\infty < d_3 < d_4 <
\infty$ and set $D = [d_1, d_2] \times [d_3, d_4] \times
\mathbb{R}^K$. Define $V(\theta, \mu) = \sum_{i=1}^K (\theta_i -
\mu)^{2}$ and note that
\[
\inf_{(\lambda, \mu, \theta) \in D} \frac{f(\lambda, \mu |
\theta')}{f(\lambda, \mu |\tilde{\theta})} = \inf_{\lambda \in [d_1,
d_2],~ \mu \in [d_3, d_4]} \exp \left\{\frac{V(\tilde{\theta}, \mu)
- V(\theta', \mu)}{2 \lambda} \right\} = \exp \left\{
\frac{V(\tilde{\theta}, \hat{\mu}) - V(\theta', \hat{\mu})}{2
\hat{\lambda}} \right\}
\]
where $\hat{\mu} = d_{4} I(\bar{\theta'} \leq \bar{\tilde{\theta}}) +
d_{3} I(\bar{\theta'} > \bar{\tilde{\theta}})$ and $\hat{\lambda} =
d_{2}I(V(\theta', \hat{\mu}) \leq V(\tilde{\theta}, \hat{\mu})) +
d_{1} I(V(\theta', \hat{\mu}) > V(\tilde{\theta}, \hat{\mu}))$. We
take the fixed point to be a preliminary estimate of the mean of the
stationary distribution, and $D$ to be centered at that point.
Specifically, let $(\tilde{\lambda}, \tilde{\mu}, \tilde{\theta})$ be
the ergodic mean from a preliminary Gibbs sampler run, and let
$S_{\lambda}$ and
$S_{\mu}$ denote the usual sample standard deviations of $\lambda$ and
$\mu$ respectively. After some trial and error we took $ d_1 = \max
\left\{.01,\tilde{\lambda} - .5 S_{\lambda}\right\}$, $d_2 =
\tilde{\lambda} + .5 S_{\lambda}$, $d_3 = \tilde{\mu} - S_{\mu}$ and
$d_4 = \tilde{\mu} + S_{\mu}$.
\end{appendix}
\bigskip
\bigskip
\noindent {\Large {\bf Acknowledgments}}
\bigskip
\noindent The authors are grateful to Ansu Chatterjee, Jeff Rosenthal
and Bill Sudderth for helpful conversations about this paper.
\section{Introduction}
Interest in traffic models is longstanding. In 1935 Greenshield (see Helbing
\cite{Helbing} and Chowdhury \emph{et al} \cite{CSS} for reviews of traffic
models) introduced the first traffic study. In 1959 Greenberg \cite{Greenberg}
called attention to the importance of the area. In 1992 Nagel and Schreckenberg
\cite{Nagel} studied traffic using probabilistic cellular automata, computer
simulations and mean field models. \emph{Slow-to-start} models intend to capture
the behavior of cars that come out of a traffic jam: a driver needs a moment to
speed the car. Nagel and Schreckenberg \cite{Nagel}, Schadschneider and
Schreckenberg \cite{SS}, Gray and Griffeath \cite{GG}, Chowdhury, Santen and
Schadschneider \cite{CSS} and Yaguchi \cite{Y} studied these type of
models. Other studies of traffic have been done by Krug and Spohn \cite{KS},
Barlovic, Santen, Schadschneider and Schreckenberg \cite{Barlovic1}, Fuks
\cite{Fuks1}, Boccara and Fuks \cite{Boccara}, Belitsky and Ferrari\cite{BF2},
Belitsky, Krug, Neves and Sch\"utz \cite{Bel}, Blank \cite{Blank1,Blank2}, Helbing
\cite{Helbing}, Wolfram \cite{W} among many others.
We introduce a slow-to-start traffic model which is continuous in space and
time. Initially cars are distributed on the real line $\mathbb{R}$ following
a Poisson process of parameter $\lambda$; cars may have speed $1$ or 0. All cars
start at zero speed and each car waits a random (delay) time exponentially
distributed with mean $1$ to change its speed from 0 to 1. Delay times of
different cars are independent. After the delay time, each car moves at speed
$1$ until it collides with a stopped car to its left or forever if it does not
collide. When car $i$ collides with car $i-1$, its speed drops immediately to
zero and remains blocked until car $i-1$ leaves. Then car $i$ waits a further
random time with exponential distribution before moving. And so on. At each
time there are cars at speed $1$ and cars at speed $0$. We prove that if
$\lambda<1$ every car will eventually be unblocked forever and the final
relative positions of cars are distributed as a Poisson process of rate
$\lambda$.
The main tool is a relation between the traffic model and an $M/M/1$ stationary
queuing system. Customers arrive at rate $\lambda$ according to a Poisson
process to a single server whose service times are independent and exponentially
distributed with mean $1$. There exists a stationary version of the queue when
$\lambda<1$. The stationary process is reversible, so its distribution is
invariant under reversing the arrow of time. As a consequence the customer
departure process is a Poisson process, as is the arrival process. This is
Burke's famous theorem.
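Burke's theorem is easy to check numerically; the sketch below (our
illustration) computes departure epochs through the Lindley-type recursion
$D_n=\max(A_n,D_{n-1})+S_n$ and verifies that, after the system settles, the
inter-departure times have mean $1/\lambda$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
lam, n = 0.7, 100_000                 # arrival rate lam < service rate 1

arrivals = np.cumsum(rng.exponential(1 / lam, n))  # Poisson(lam) arrivals
services = rng.exponential(1.0, n)                 # mean-1 service times

departures = np.empty(n)
prev = 0.0
for i in range(n):
    prev = max(arrivals[i], prev) + services[i]    # Lindley-type recursion
    departures[i] = prev

gaps = np.diff(departures[n // 2:])   # drop the transient part
print(gaps.mean(), "should be close to", 1 / lam)
\end{verbatim}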
In Section \ref{S2} we define the process and give the main results. In Section
\ref{S3} we construct a semi-infinite traffic model and define traffic
cycles. In Section \ref{S4} we define the queue process and construct workload
cycles. In Section \ref{S5} we relate traffic final relative car-positions and
customer exit times in the queue. In Section \ref{S6} we use an approach of
Thorisson to construct the stationary traffic process using cycles and in
Section \ref{S7} we do it for the queue and conclude the proofs of the results.
\section{Definition of model and main results}
\label{S2}
We consider a system of cars moving in ${{\mathbb{R}}}$ from right to left. Cars have two
possible velocities: either $0$ or $1$. Cars are represented by points and
cannot overpass. A car needs an exponential random time of mean 1 to pass from
speed 0 to speed $1$. A typical car starts at speed 0, waits an exponentially
distributed random time to change its speed to $1$, travels at this velocity
until it is blocked by another stopped car, waits for the stopped car to leave,
waits another exponential time to regain velocity $1$, and so on.
We construct a sequence $(\Pi,V)= ((\Pi(t),V(t));\,t\ge 0)$ of car trajectories,
$\Pi(t)=(\pi_i(t), i\in {{\mathbb{Z}}})$ and $V(t)=(v_i(t), i\in {{\mathbb{Z}}})$. For each $i$,
$\pi_i:{{\mathbb{R}}}_+ \to {{\mathbb{R}}}$ is a piecewise linear function almost everywhere
differentiable; $\pi_i(t)$ represents the position of car $i$ at time $t$ and
$v_i(t)\in\{0,1\}$ its velocity. The initial car positions are given by
$(y_i,\,i\in{{\mathbb{Z}}})$, with $y_i\in{{\mathbb{R}}}$ and $y_i<y_{i+1}$ for all $i$: set
$\pi_i(0)=y_i$. The initial velocities are all null: $v_i(0)=0$ for all $i$. The
trajectories must satisfy the following properties:
\begin{enumerate}
\item $\pi_i(0)=y_i$
\item $\dot\pi_i(t) = -v_i(t)$ if $v_i$ is continuous at $t$
\item $v_i(t)$ jumps from $0$ to $1$ at rate 1 if $\pi_i(t)>\pi_{i-1}(t)$.
\item $v_i(t) =0$ if $\pi_i(t)= \pi_{i-1}(t)$
\end{enumerate}
If instead of taking $i$ in ${{\mathbb{Z}}}$ we consider a semi-infinite configuration of
cars with initial positions $y_0\le y_1\le\dots$, the trajectories can be
constructed inductively. Car 0 waits an exponential time of rate 1 and then
assumes velocity $1$; since it will not be blocked by any other car, it will
continue at speed $1$ for ever. Car 1 does the same, but if it collides with car
0 (at $y_0$), then it stops and after car 0 leaves, it waits an extra
exponential time and then goes, and so on. Let $\pi_i := (\pi_i(t), \,t\ge
0)$. If we think of $i$ as discrete time, the process of trajectories is Markovian
in the sense that the law of the trajectory $\pi_i$ given
$\pi_{i-1},\dots,\pi_0$ depends only on $\pi_{i-1}$. Our first result says that
it is possible to construct a spatially translation invariant traffic process if
the initial positions of the cars are given by a Poisson process of rate
$\lambda$.
\begin{propo}
\label{p33}
If $(y_i;\,i\in{{\mathbb{Z}}})$ is a stationary Poisson process in ${{\mathbb{R}}}$ with rate $\lambda$
and $\lambda<1$ then there exists a spatially stationary version of the process
$\Pi=((\pi_i(t);\,t\ge 0),\,i\in{{\mathbb{Z}}})$ with initial positions $\pi_i(0)=y_i$ for
all $i\in{{\mathbb{Z}}}$.
\end{propo}
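Before proceeding, here is a minimal simulation sketch (ours) of the
semi-infinite construction for finitely many cars. It exploits the fact that
cars can only be blocked at initial positions $y_m$, and records, for each
car $i$, the departure time \texttt{B[i][m]} from site $y_m$; these
quantities anticipate the arrival and departure times $A_{i,m}$, $B_{i,m}$
defined formally in Section~\ref{S3}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
lam, n_cars = 0.8, 50

# Initial positions: y_0 = 0 plus a rate-lam Poisson process to its right.
y = np.concatenate(([0.0], np.cumsum(rng.exponential(1 / lam, n_cars - 1))))
xi = [rng.exponential(1.0, i + 1) for i in range(n_cars)]  # delay times

# B[i][m] = time at which car i departs from site y_m.
B = [np.zeros(i + 1) for i in range(n_cars)]
for i in range(n_cars):
    B[i][i] = xi[i][i]                        # initial delay at y_i
    for m in range(i - 1, -1, -1):
        A = B[i][m + 1] + (y[m + 1] - y[m])   # arrival at y_m (speed 1)
        if A < B[i - 1][m]:                   # car i-1 still occupies y_m
            B[i][m] = B[i - 1][m] + xi[i][m]  # blocked, then extra delay
        else:
            B[i][m] = A                       # passes without stopping
\end{verbatim}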
In the stationary Poisson process by convention $y_0<0<y_1$, so that $-y_0$,
$y_1$ and $y_{i+1}-y_i$ for $i\neq 0$ are iid random variables exponentially
distributed. Let \[t_i := \sup\{s:\, v_i(s)=0\}\] (possibly equal to infinity)
be the last time car $i$ has velocity 0. From $t_i$ on, car $i$ goes freely at
speed $1$. We say that car $i$ is \emph{free} at times $t>t_i$.
\begin{propo}
\label{t6}
Assume $\lambda<1$ and consider the traffic process $\Pi$ of Proposition \ref{p33}. For
each $i\in{{\mathbb{Z}}}$, $t_i$ is finite almost surely.
\end{propo}
Let $D_i$ be the \emph{total delay} of car $i$ defined by
\begin{equation}
\label{63}
D_i := |-t_i-(\pi_i(t_i)-y_i)|
\end{equation}
Here $-t_i$ is the displacement car $i$ would have at time $t_i$ if it started
at speed $1$ and had never been blocked, and $\pi_i(t_i)-y_i$ is the actual
displacement by that time. The absolute value of the difference is the delay of
car $i$. Call
\begin{equation}
\label{s_i}
s_i := y_i+D_i
\end{equation}
To understand the meaning of $s_i$, observe that if a car starts at position
$s_i$ at speed $1$ and is never blocked, then its position at time $t$
coincides with $\pi_i(t)$ for all times $t\ge t_i$.
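Indeed, for $t\ge t_i$ the car moves at speed $1$, so
$\pi_i(t)=\pi_i(t_i)-(t-t_i)$; since by \eqref{63} $\pi_i(t_i)=y_i-t_i+D_i$,
this gives $\pi_i(t)=y_i+D_i-t=s_i-t$, the trajectory of a never-blocked car
started at $s_i$ at time $0$.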
\begin{propo}
\label{t6a}
If $\lambda<1$ then $\{s_i;\,i\in{{\mathbb{Z}}}\}$ is a Poisson process of parameter
$\lambda$. Furthermore, $\pi_i(t) -\pi_{i-1}(t)= s_i-s_{i-1}$ for $t > \max\{t_i, t_{i-1}\}$.
\end{propo}
The statement is that $\{s_i;\,i\in{{\mathbb{Z}}}\}$ \emph{as a subset of ${{\mathbb{R}}}$} is a Poisson
process; keeping track of the indices may destroy the Poisson property.
These results say that if the initial positions of the cars are distributed according
to a Poisson process and all with zero speed, then every car will eventually
have velocity $1$ and the relative car positions will be distributed according to
a Poisson process. For this reason we call $s_i$ the \emph{final relative
position} of car $i$.
Let $y\in{{\mathbb{R}}}$ and, for each car with initial position $y_i>y$, let $r_i(y)$ be the random time defined by
\[
r_i(y) := \sup ((\pi_i)^{-1}(y))
\]
this is the last time car $i$ is at position $y$.
Let $T(y)$ be the first time after which all cars crossing $y$ are free:
\[
T(y):= \inf \{r_i(y);\, t_i\le r_m(y) \hbox{ for all } m\ge i\}
\]
\begin{propo}
\label{p111}
If $\lambda<1$ then $T(y)<\infty$ almost surely for all $y\in{{\mathbb{R}}}$.
\end{propo}
\section{Construction of trajectories for a semi-infinite initial car
configuration}
\label{S3}
We construct car trajectories for initial car positions $0=y_0 < y_1<\dots$ and
relate them to an $M/M/1$ queue starting with an empty system at the arrival of a
customer.
The trajectories will be defined as a function of a marked Poisson process
\begin{equation}
\label{p34}
(\mathbf{y},\mbox{\boldmath$\xi$})=((y_i,\xi_{i,m});\ i\ge 0,\ 0\le m\le i)\,
\end{equation}
where $\mathbf{y}=(y_i, i\ge 0 )$ is a Poisson process on $[0,\infty)$ with rate
$\lambda$ with a car added at the origin (that is, $y_0=0$ and $(y_{i+1}-y_i ;
i\ge 0)$ are \emph{iid} exponential random variables with mean $1/\lambda$) and
the \emph{delay times} $\mbox{\boldmath$\xi$}=((\xi_{i,m} ; \ 0\le m\leq i), i\ge 0)$ are
\emph{iid} exponential random variables with mean~$1$. The sequences $\mbox{\boldmath$\xi$}$ and
$\mathbf{y}$ are independent. These times are used as follows. Car $i$ may collide with
car $i-1$ at sites $y_0,\dots,y_{i-1}$; if the collision occurs at $y_m$, then
after car $i-1$ leaves $y_m$, car $i$ waits $\xi_{i,m}$ units of time before
taking speed $1$ again. More rigorously, car 0 starts at the origin and waits
$\xi_{0,0}$ units of time to start moving at speed $1$. Since there are no cars
to the left of car 0, it will never be blocked and we define
\begin{equation}
\pi_0(t):=
\left\{
\begin{array}{ll}
0& \mbox{if}\,\ 0\le t\leq\xi_{0,0}\\
-t+\xi_{0,0} & \mbox{if} \,\ t>\xi_{0,0}
\end{array}
\right.
\end{equation}
The trajectory of car $i$ is then defined as a function of the trajectory of car
$i-1$ and the waiting times $(\xi_{i,m};\, 0\le m\le i)$ as follows. We shall
define $A_{i,m}$ as the time car $i$ arrives to $y_m$ and $B_{i,m}$ as the time
car $i$ departs from $y_m$. Clearly $A_{i,m}\le B_{i,m}$ and if car $i$ is not
blocked at $y_m$, then $A_{i,m}=B_{i,m}$. Let
\[
A_{i,i}:=0, \quad B_{i,i} := \xi_{i,i}
\]
and then inductively assume $A_{i-1,m}, B_{i-1,m}$ are defined for all $0\le
m\le i-1$, as well as $A_{i,m}$ and $B_{i,m}$. Then set
\begin{eqnarray}
\label{t33}
A_{i,m-1}:=B_{i,m} + y_m-y_{m-1}\\
B_{i,m-1}:=\left\{
\begin{array}{ll} A_{i,m-1}& \mbox{if}\,\ A_{i,m-1}> B_{i-1,m-1}\\
B_{i-1,m-1}+\xi_{i,m-1} & \mbox{if} \,\ A_{i,m-1}< B_{i-1,m-1}
\end{array}\right.
\end{eqnarray}
In words: car $i$ arrives at site $y_{m-1}$ a time $(y_m-y_{m-1})$ after its
departure from site $y_m$. Car $i$ departs from site $y_{m-1}$ immediately upon
arrival if car $i-1$ has already left; otherwise it departs $\xi_{i,m-1}$ units
of time after $B_{i-1,m-1}$, the departure time of car $i-1$ from
site $y_{m-1}$.
The vector $((A_{i,m};\,B_{i,m}),\, 0\le m\le i)$ determines the trajectory
$(\pi_i(t), t\ge 0)$:
\begin{equation}
\label{t33a}
\pi_i(t) := \sum_{k=0}^i \Big[ y_k \one\{t\in (A_{i,k}, B_{i,k})\} +(y_k+B_{i,k} -t) \one\{t\in
(B_{i,k}, A_{i,k-1}) \}\Big]
\end{equation}
where $\one \{\cdot\}$ is the indicator function of the set $\{\cdot\}$ and, by
convention, $A_{i,-1}:=\infty$. The total delay of car $i$ defined in \eqref{63} satisfies
\begin{eqnarray}
D_i = \sum_{k=0}^i (B_{i,k}-A_{i,k})
\end{eqnarray}
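The recursion \eqref{t33}, the trajectory formula \eqref{t33a} and the total
delay are straightforward to implement. The following sketch (in Python, with
illustrative names and parameters; it is not part of the construction itself)
samples a finite marked Poisson process and computes the arrival/departure
times $A_{i,m}$, $B_{i,m}$, the trajectories and the delays $D_i$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, n = 0.5, 10                     # intensity lambda < 1, number of cars

# initial positions: y_0 = 0 plus Poisson(lambda) increments
y = np.concatenate(([0.0], np.cumsum(rng.exponential(1/lam, n-1))))
xi = rng.exponential(1.0, size=(n, n))  # delay times xi[i, m], used for m <= i

A = np.full((n, n), np.nan)          # A[i, m]: arrival of car i at y_m
B = np.full((n, n), np.nan)          # B[i, m]: departure of car i from y_m
for i in range(n):
    A[i, i] = 0.0
    B[i, i] = xi[i, i]
    for m in range(i, 0, -1):        # move from site y_m to site y_{m-1}
        A[i, m-1] = B[i, m] + (y[m] - y[m-1])
        if A[i, m-1] > B[i-1, m-1]:  # car i-1 already left: no blocking
            B[i, m-1] = A[i, m-1]
        else:                        # blocked: wait xi after car i-1 leaves
            B[i, m-1] = B[i-1, m-1] + xi[i, m-1]

D = np.nansum(B - A, axis=1)         # total delays D_i
s = y + D                            # final relative positions s_i

def pi(i, t):
    """Position of car i at time t, following the trajectory formula."""
    for k in range(i, -1, -1):
        if A[i, k] <= t <= B[i, k]:
            return y[k]              # waiting (or instantaneously) at y_k
        if k > 0 and B[i, k] < t < A[i, k-1]:
            return y[k] + B[i, k] - t   # travelling left at speed 1
    return y[0] + B[i, 0] - t        # free after leaving the origin
\end{verbatim}
For $\lambda<1$ the gaps of $(s_i)$ obtained this way look empirically like iid
exponentials of mean $1/\lambda$, anticipating Proposition \ref{t6a}.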
To stress the dependence on $(\mathbf{y},\mbox{\boldmath$\xi$})$ we write $A_{i,k}(\mathbf{y},\mbox{\boldmath$\xi$})$,
$B_{i,k}(\mathbf{y},\mbox{\boldmath$\xi$})$, etc.
Let $\mathbf{s}(\mathbf{y},\mbox{\boldmath$\xi$})=(s_0,s_1,\dots)$, where the final relative position $s_i$ is
defined as a function of $y_i$ and $D_i$ as in~\eqref{s_i}.
Let $\mbox{\boldmath$\sigma$}(\mathbf{y},\mbox{\boldmath$\xi$})=(\sigma_0,\sigma_1,\dots)$ be the sequence defined by
$\sigma_0:=\xi_{0,0}$ and for $i\ge 1$
\begin{equation}
\label{t2}
\sigma_i := \left\{\begin{array}{ll}
B_{i,0}-B_{i-1,0}& \mbox{if}\,\ s_{i-1}>y_{i}, \\
\xi_{i,i} & \mbox{otherwise}
\end{array}\right.
\end{equation}
$\sigma_i$ is called the \emph{final delay} of car $i$.
\begin{vthm}
\label{t66}
$\mbox{\boldmath$\sigma$}=(\sigma_0,\sigma_1,\dots)$ is a sequence of iid random variables with
exponential law of mean~1. Furthermore $\mbox{\boldmath$\sigma$}$ is independent of $(y_i;\, i\ge
0)$.
\end{vthm}
\paragraph{Proof}
Fix the trajectory of car $i-1$. There are two cases: either car $i$ is blocked
at 0 by car $i-1$ or not. In the first case $B_{i,0}=B_{i-1,0}+\xi_{i,0}$, so
that $\sigma_i=\xi_{i,0}$ which is independent of the trajectories $(\pi_{m};\,
0\le m\le i-1)$ and in particular of $(\sigma_m;\,0\le m\le i-1)$. In the second
case the label of the leftmost blocking position of car $i$ is given by
\[
K := \min\{k\le i\,:\, B_{i-1,k}+\xi_{i,k} > B_{i-1,0}-(y_k-y_0) \}
\]
here by convention $B_{i-1,i}=0$. $K$ is a stopping time for $(\xi_{i,i-m};\,
m\ge 0)$; that is, the event $\{K=k\}$ is a function of $(\xi_{i,i-m};\, 0\le
m\le k)$. But the dependence on $\xi_{i,k}$ is only on the event $\{\xi_{i,k} >
B_{i-1,0}-(y_k-y_0)-B_{i-1,k}\}$. Then, given this event,
\[
\sigma_i = \xi_{i,K} - ( B_{i-1,0}-(y_K-y_0)-B_{i-1,K})
\]
is exponentially distributed with mean one and independent of $(\pi_{m};\, 0\le
m\le i-1)$.
\paragraph{\bf Traffic Cycle}
Call
\begin{eqnarray}
\label{p18}
X:= \min\{y_i>0\,:\, y_i>s_{i-1}\}\\
N := \min\{i>0\,:\, y_i>s_{i-1}\}\\
C := \big(((\pi_i(t);\,t\ge 0),\, i\in\{0,\dots,N-1\}),N,X\big)
\end{eqnarray}
We say that $C$ is a \emph{cycle} with \emph{length} $X$ and $N$ cars involved.
The cycle $C$ consists of a space interval $X$ and $N$ car trajectories in the
time interval $[0,\infty)$ with starting positions in $[0,X)$; however, since
car $i$ is free after time $B_{i,0}$, the trajectories are determined by the set
$((\pi_i(t);\,t\in[0,B_{i,0}));\, i\in\{0,\dots,N-1\})$ or, alternatively by the
arrival/departure times of the $N$ cars to/from sites $y_0,\dots,y_{N-1}$ given
by $((A_{i,m},B_{i,m});\, 0\le i\le N-1, 0\le m \le i)$.
The cycle $C$ induces a stochastic process $Z(C)=(Z_y(C), \, y\in[0,X))$ given
by the interval-valued vector (with dimension depending on $y$)
\begin{equation}
\label{p24}
Z_y(C) := (\pi_i^{-1}(y);\,y\le y_i<X)
\end{equation}
The $k$th coordinate of $Z_y(C)$ contains the time interval spent at $y$ by the
$k$th car to the right of $y$. Assuming this car has label $i$ there are two
cases: (a) if $y=y_m$ for some $m$, the interval is $\pi_i^{-1}(y)=
[A_{i,m},B_{i,m}]$ and (b) if $y$ is not an initial car position the interval is
a point, because car $i$ will not be delayed at $y$. $Z_y(C)$ is a vector of
zero length if $y\in[y_{N-1},X)$.
The cycle $C$, its length $X$ and its number of cars $N$ are functions of $\mathbf{y} =
(y_0,y_1,\dots)$ and $\mbox{\boldmath$\xi$} = (\xi_{i,m};\, i\ge 0,\, 0\le m \le i)$:
\[
C = C(\mathbf{y},\mbox{\boldmath$\xi$})\,;\quad X = X(\mathbf{y},\mbox{\boldmath$\xi$})\,;\quad N = N(\mathbf{y},\mbox{\boldmath$\xi$})
\]
Given a cycle $C$ we can recover the length of the cycle, the number of cars
involved, the initial car positions, the final delay
times and the final relative car positions which are denoted
\begin{eqnarray}
\label{p38}
N(C)\,,\; X(C)\,;\quad
y_i(C),\; \sigma_i(C),\; s_i(C),\quad i=0,\dots,N(C)-1\,.
\end{eqnarray}
The final relative car positions in the cycle coincide with the times of last
passage through the origin:
\begin{equation}
\label{p51}
s_i = B_{i,0},\quad \hbox{for }i=0,\dots,N-1
\end{equation}
\begin{figure}[!htp]
\psfrag{k1}{$t$}
\psfrag{k2}{$y$}
\psfrag{k3}{$y_{0}$}
\psfrag{k4}{$y_{1}$}
\psfrag{k5}{$y_{2}$}
\psfrag{k6}{$s_{0}$}
\psfrag{k7}{$s_{1}$}
\psfrag{k8}{$s_{2}$}
\psfrag{k9}{$y_{4}$}
\psfrag{k10}{$s_{4}$}
\psfrag{k20}{$s_{3}$}
\psfrag{k21}{$y_{3}$}
\psfrag{k11}{$\pi_{0}$}
\psfrag{k12}{$\pi_{1}$}
\psfrag{k13}{$\pi_{2}$}
\psfrag{k14}{$\pi_{4}$}
\psfrag{k19}{$\pi_{3}$}
\psfrag{k15}{$\sigma_{0}$}
\psfrag{k16}{$\sigma_{1}$}
\psfrag{k17}{$\sigma_{2}$}
\psfrag{k18}{$\sigma_{4}$}
\psfrag{k22}{$\sigma_{3}$}
\includegraphics[height=14.0cm,width=15.5cm]{particle1.eps}
\caption{The first cycle has 4 cars. Car 1 collides with car 0 and car 2
collides with car 1 at $y_0$ (the trajectories are drawn slightly separated at
$y_0$; the actual trajectories partially intersect at $y_0$). Car 3 does not
collide with previous cars, but since its starting position $y_3$ is to the right of
$s_2$, it still belongs to the cycle. The initial position $y_4$ of car 4 is to the
right of $s_3$, so that the new cycle starts at $S_1=y_4$.}
\end{figure}
\section{FIFO $M/M/1$ queue}
\label{S4}
The FIFO $M/M/1$ queue is a Markov process in ${{\mathbb{N}}}=\{0,1,\dots\}$. At rate
$\lambda>0$ customers arrive into the system, stay in line and one customer at a
time is served at rate 1. This means that the service times are exponentially
distributed with mean 1. Customers are served in order of arrival: \emph{first
in, first out} in the jargon of queueing theory. We use the following notation:
\begin{itemize}
\item ${\tilde y}_i$ arrival time of customer $i$.
\item $\tilde{\sigma}_i$ service time of customer $i$.
\item $\tilde{s}_i$ exit time of customer $i$.
\end{itemize}
The queue size at time $y$ can be constructed as a function of a marked Poisson
process $(({\tilde y}_i,\tilde{\sigma}_i); \, i \in {{\mathbb{Z}}})$. The arrival process $({\tilde y}_i
\,; i\in {{\mathbb{Z}}})$ is a Poisson process of parameter~$\lambda$. The service times
$({\tilde\sigma}_i; \, i\in{{\mathbb{Z}}})$ are \emph{iid} random variables with exponential law of
mean $1$. The sequences are independent.
The \emph{workload} $W_y$, $y \in {{\mathbb{R}}}$, is the amount of service owed by the server
at time $y$; it is the time a customer arriving at time $y$ has to wait before
starting to be served. This process is piecewise linear and continuous to the
right with limits to the left. $W_y$ jumps at the times ${\tilde y}_i$, the arrival
times of the customers, by the amounts ${\tilde\sigma}_i$, the service times of these
customers, and it decreases continuously with derivative $-1$ until hitting 0,
where it stays until the next arrival. The workload process must satisfy the
following evolution equations:
\begin{eqnarray}
\label{p4}
\frac {dW_y}{dy} = - \one\{W_y>0\}\qquad \hbox{ for } y\neq {\tilde y}_i, \quad i\in{{\mathbb{Z}}}\\
W_{{\tilde y}_i} = W_{{\tilde y}_i-}+{\tilde\sigma}_i
\end{eqnarray}
\paragraph{Construction of the workload process with an arrival to an empty
system}
Let $\mathbf{\tilde y} = ({\tilde y}_0,{\tilde y}_1,\dots)$ be a Poisson process of rate $\lambda$ as
$\mathbf{y}$, with ${\tilde y}_0=0$, and let $\mbox{\boldmath$\tilde\sigma$}=({\tilde\sigma}_0,{\tilde\sigma}_1,\dots)$ be a sequence
of iid exponential random variables with mean $1$ as $\mbox{\boldmath$\sigma$}$. Define
\begin{eqnarray}
\label{p13}
W_0 := {\tilde\sigma}_0
\end{eqnarray}
and then, recursively, for $i\ge 1$:
\begin{eqnarray}
\label{p14}
W_{{\tilde y}_i} := [W_{{\tilde y}_{i-1}} - ({\tilde y}_i-{\tilde y}_{i-1})]^+ + {\tilde\sigma}_i \\
W_y := [W_{{\tilde y}_{i-1}} - (y-{\tilde y}_{i-1})]^+ ,\qquad y\in({\tilde y}_{i-1},{\tilde y}_i)\label{p14a}
\end{eqnarray}
To stress the dependence on $(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$})$ we denote by $W(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$})=
(W_y(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$});\,y\ge 0)$ the process defined by \eqref{p13}, \eqref{p14}
and \eqref{p14a}.
The exit time of customer $i$ is defined by
\begin{equation}
\label{p20a}
\tilde{s}_i:={\tilde y}_i+ W_{{\tilde y}_i}
\end{equation}
for $i\geq 0$. That is, the $i$th arrival time plus the workload of the server at
arrival (including the $i$th service time). Define
\begin{equation}
\label{p20}
\mathbf{\tilde s}(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$}) := ({\tilde s}_0,{\tilde s}_1,\dots)
\end{equation}
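A minimal sketch of the recursion \eqref{p13}--\eqref{p14a} and of the exit
times \eqref{p20a} (Python, illustrative names; the queue starts with an
arrival to an empty system, as above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam, n = 0.5, 10000                  # arrival rate lambda < 1, service rate 1

# arrivals with a customer added at time 0, iid exp(1) service times
ty = np.concatenate(([0.0], np.cumsum(rng.exponential(1/lam, n-1))))
sigma = rng.exponential(1.0, n)

W = np.empty(n)                      # workload just after each arrival
W[0] = sigma[0]
for i in range(1, n):                # Lindley-type recursion
    W[i] = max(W[i-1] - (ty[i] - ty[i-1]), 0.0) + sigma[i]

ts = ty + W                          # exit times
print(np.mean(np.diff(ts)))          # ~ 1/lambda for large n
\end{verbatim}
The empirical mean gap of the exit times approaches $1/\lambda$, as expected
from Burke's theorem (Theorem \ref{burke}) for the stationary version of the
process.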
\paragraph{\bf Workload cycle}
Let ${\tilde X}$ be the first time after time zero the workload jumps from 0 to a
positive value:
\[
{\tilde X} := \inf\{y>0:\, W_{y-} = 0,\, W_y>0\}
\]
Define
\[
{\tilde C} := (W_y;\, y\in[0,{\tilde X}))
\]
and let ${\tilde N}$ be the number of arrivals during the cycle:
\[
{\tilde N} := \max\{i:\, {\tilde y}_i\in[0,{\tilde X})\} +1
\]
(we add 1 to account for the arrival at time 0).
Clearly ${\tilde C}$, ${\tilde X}$ and ${\tilde N}$ are functions of $(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$})$:
\[
{\tilde C} = {\tilde C}(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$}),\quad {\tilde X}={\tilde X}(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$}),\quad{\tilde N} ={\tilde N}
(\mathbf{\tilde y},\mbox{\boldmath$\tilde\sigma$})
\]
\begin{vthm}
\label{v7}
If $\lambda<1$ then ${\tilde X}$ has finite expectation and there exists a stationary
version of the workload process denoted $W=(W_y;\,y\in{{\mathbb{R}}})$.
\end{vthm}
For a proof see Loynes \cite{Loynes}; we prove this later using cycles,
following \cite{Thorisson}. The proof of the following theorem can be found in
Prabhu \cite{Prabhu}, Theorem 8 (page 98), or in Baccelli and
Br\'emaud \cite{Baccelli}.
\begin{teo}[Burke's Theorem]
\label{burke}
The exit times $({\tilde s}_i;\,i\in{{\mathbb{Z}}})$ of the stationary
process $W$ form a Poisson process with parameter $\lambda$.
\end{teo}
\begin{figure}[!htp]
\psfrag{i1}{$W_y$}
\psfrag{i2}{$y$}
\psfrag{i3}{$y_{0}$}
\psfrag{i4}{$y_{1}$}
\psfrag{i5}{$y_{2}$}
\psfrag{i6}{$\tilde{s}_{0}$}
\psfrag{i7}{$\tilde{s}_{1}$}
\psfrag{i8}{$\tilde{s}_{2}$}
\psfrag{i9}{$y_{4}$}
\psfrag{i10}{$\tilde{s}_{4}$}
\psfrag{i15}{$y_{3}$}
\psfrag{i16}{$\tilde{s}_3$}
\psfrag{i11}{$\tilde{\sigma_{0}}$}
\psfrag{i12}{$\tilde{\sigma_{1}}$}
\psfrag{i13}{$\tilde{\sigma_{2}}$}
\psfrag{i14}{$\tilde{\sigma_{4}}$}
\psfrag{i17}{$\tilde{\sigma_{3}}$}
\includegraphics[height=7.75cm,width=15.5cm]{queue.eps}
\caption{Workload evolution related to the traffic example of Figure 1. The
cycle has 4 customers. The exit time $\tilde{s}_3$ of customer 3 occurs before the
arrival time $y_4$ of customer~4, so that a new cycle starts at $S_1=y_4$. The
service times ${\tilde\sigma}_i$ coincide with the final delays $\sigma_i$ and the
customer exit times ${\tilde s}_i$ coincide with the final relative car positions
$s_i$ of Figure 1.}
\end{figure}
\section{Queue associated to traffic model}
\label{S5}
The car trajectories determined by the semi-infinite car positions $\mathbf{y}$ and
car-delays $\mbox{\boldmath$\xi$}$ defined in \eqref{p34} generate final car-delays $\mbox{\boldmath$\sigma$} =
\mbox{\boldmath$\sigma$}(\mathbf{y},\mbox{\boldmath$\xi$})$ and last passages through the origin $B_{i,0}(\mathbf{y},\mbox{\boldmath$\xi$})$.
The queue generated by arrivals $\mathbf{y}$ and waiting times $\mbox{\boldmath$\sigma$}$ produces a
workload process $W_y(\mathbf{y},\mbox{\boldmath$\sigma$})$.
The relation between these processes is given by
\begin{equation}
\label{p60}
W_y = (B_{i,0}-y)^+ \qquad \hbox{for }y\in [y_i,y_{i+1})
\end{equation}
This follows from the definitions \eqref{p13}-\eqref{p14a} and \eqref{t2}.
In the corresponding cycles this relation reads
\begin{equation}
\label{p61}
W_y = \left\{\begin{array}{ll}
B_{i,0}-y& \mbox{if}\,\ y_i\le y <y_{i+1}\le y_{N-1}, \\
0 & \mbox{if} \,\ y_{N-1}\le y\le y_N
\end{array}\right.
\end{equation}
As a consequence of \eqref{p60}, the queue exit times $\mathbf{\tilde s}=\mathbf{\tilde s}(\mathbf{y},\mbox{\boldmath$\sigma$})$
coincide with the final relative car-positions $\mathbf{s}$. The length and number of
elements in the respective cycles agree:
\begin{vthm}
\label{t9}
Let $(\mathbf{y},\mbox{\boldmath$\xi$})$ be semi-infinite car initial positions and delay times as in
\eqref{p34}.
Then
\begin{eqnarray}
\label{p41}
\mathbf{\tilde s}(\mathbf{y},\mbox{\boldmath$\sigma$}(\mathbf{y},\mbox{\boldmath$\xi$}))=\mathbf{s}(\mathbf{y},\mbox{\boldmath$\xi$})\\
{\tilde X}(\mathbf{y},\mbox{\boldmath$\sigma$}(\mathbf{y},\mbox{\boldmath$\xi$}))= X(\mathbf{y},\mbox{\boldmath$\xi$}),\quad {\tilde N}(\mathbf{y},\mbox{\boldmath$\sigma$}(\mathbf{y},\mbox{\boldmath$\xi$}))=
N(\mathbf{y},\mbox{\boldmath$\xi$})\label{p44}
\end{eqnarray}
\end{vthm}
\paragraph{Proof} \eqref{p44} is a consequence of \eqref{p41} and the
definitions. It suffices to prove \eqref{p41} for each cycle.
$\sigma_0=\xi_{0,0}=D_0$, so that by \eqref{s_i}, $s_0=0+\sigma_0={\tilde s}_0$ by
\eqref{p20a} because $W_0=\sigma_0$. For $1\le i<N$,
\[
{\tilde s}_i = y_i + W_{y_i} = B_{i,0} = s_i
\]
by \eqref{p20a}, \eqref{p61} and \eqref{p51}.
\section{Stationary traffic process}
\label{S6}
Let $((\mathbf{y}_n,\mbox{\boldmath$\xi$}_n);\,n\in{{\mathbb{Z}}})$ be an iid sequence of Poisson processes and delay
times with the same law as $(\mathbf{y},\mbox{\boldmath$\xi$})$. Let $C_n = C(\mathbf{y}_n,\mbox{\boldmath$\xi$}_n)$ be the
cycles generated by these variables. Then $(C_n;\,n\in{{\mathbb{Z}}})$ is a sequence of iid
cycles with the same distribution as $C$. Let $X_n$ be the length of cycle
$C_n$ and $N_n$ the number of cars involved in this cycle. By \eqref{p44} and
Lemma~{\ref{v7}} $X_n$ has finite expectation. Let
\begin{equation}
\label{p27}
S^o_n:=\left\{ \begin{array}{ll}
0&\hbox{if }n= 0\\
\sum_{k=0}^{n-1} X_k &\hbox{if }n> 0\\
-\sum_{k=n}^{-1} X_k &\hbox{if }n<0
\end{array}\right.
\qquad
M_n:=\left\{ \begin{array}{ll}
0&\hbox{if }n= 0\\
\sum_{k=0}^{n-1} N_k &\hbox{if }n> 0\\
-\sum_{k=n}^{-1} N_k &\hbox{if }n<0
\end{array}\right.
\end{equation}
Define the traffic process $Z^o=(Z^o_y;\,y\in{{\mathbb{R}}})$ by
\begin{equation}
\label{p23}
Z^o_y := Z_{y-S^o_n}(C_n) \qquad\qquad \hbox{ for } y\in [S^o_n,S^o_{n+1}),\quad n\in{{\mathbb{Z}}}
\end{equation}
as the process obtained by juxtaposing the cycles one after the other and
putting the beginning of cycle 0 at the origin; recall the definition of
$Z_y(C)$ in \eqref{p24}. In the process $Z^o$ the cars of a cycle are
\emph{not} blocked by cars of the previous cycle; in particular, all cars
starting at positions $S^o_n$, $n\in{{\mathbb{Z}}}$, will be free after waiting an exponential
time of mean~1.
Let $U$ be a uniform random variable in $[0,1]$ independent of $Z^o$. Let ${\mathbb P}^o$
be the law of $(((\mathbf{y}_n,\mbox{\boldmath$\xi$}_n);\,n\in{{\mathbb{Z}}}),U)$ and ${{\mathbb{E}}}^o$ the corresponding
expectation. As $Z^o$ is a function of $((\mathbf{y}_n,\mbox{\boldmath$\xi$}_n);\,n\in{{\mathbb{Z}}})$, we can also
think that ${\mathbb P}^o$ is the law of $(Z^o,U)$. Let $\theta_r$ be the translation
operator defined by $(\theta_r Z)_y = Z_{y-r}$. Define $Z$ as a function of $Z^o$
and $U$ as the process $Z^o$ ``as seen from $-(1-U)X_0$'':
\begin{eqnarray}
\label{p26}
Z &:=\theta_{-(1-U)X_0} Z^o
\end{eqnarray}
and define the law ${\mathbb P}$ by size biasing cycle zero:
\begin{equation}
\label{p28}
d{\mathbb P} := \frac {X_0}{{{\mathbb{E}}}^o X_0} d{\mathbb P}^o
\end{equation}
The following result is in
Theorem 4.1 of Chapter 8 of Thorisson \cite{Thorisson}; see also Figure 2.1 in
that chapter for a better understanding of the relation between $Z$ and $Z^o$.
\begin{vthm}
\label{p30a}
The law of $Z$ under ${\mathbb P}$ is stationary.
\end{vthm}
For test functions $f$ (from the space where $Z$ is defined to ${{\mathbb{R}}}$) the law of
$Z$ under ${\mathbb P}$ satisfies
\[
{{\mathbb{E}}} f(Z) = \frac {{{\mathbb{E}}}^o [X_0f(\theta_{-X_0(1-U)}Z^o)] }{{{\mathbb{E}}}^o X_0}
\]
where $X_0$ is the length of cycle 0 of $Z^o$. To obtain a sample of $Z$ under
${\mathbb P}$, first sample a process with a cycle starting at the origin under the
size-biased law ${\mathbb P}$, then translate the origin to a point uniformly
distributed in cycle 0.
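In simulations, this size-biased sampling can be approximated by weighted
resampling from a pool of iid cycles; the following sketch is purely
illustrative, with a placeholder cycle-length sampler standing in for
$X(\mathbf{y},\mbox{\boldmath$\xi$})$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def sample_cycle_length():
    # placeholder for sampling X = X(y, xi); any law with finite mean works
    return 0.1 + rng.exponential(1.0)

pool = np.array([sample_cycle_length() for _ in range(100000)])
x0 = rng.choice(pool, p=pool / pool.sum())  # approx. size-biased law X dP/E X

u = rng.uniform()
shift = -(1.0 - u) * x0                     # translate origin by -(1-U) X_0
\end{verbatim}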
Since the traffic process $Z$ is constructed as a juxtaposition of cycles, the
initial car positions, the final delays and the final relative car positions can
be recovered as follows using the notation of \eqref{p38}:
\begin{eqnarray}
\label{p37}
S_0 = \sup\{y\le 0\,:\,Z_{y-}=0,\, Z_{y}>0\}, \\
S_n = \inf\{y>S_{n-1}\,:\, Z_{y-}=0,\, Z_{y}>0\}, n>0,\\
S_n = \sup\{y<S_{n+1}\,:\, Z_{y-}=0,\, Z_{y}>0\}, n<0\,;\\
X_n = S_{n+1}-S_n,\quad C_n = (Z_{y-S_n};\, y\in[S_n,S_n+X_n)),\quad N_n=N(C_n)\,;\\
y_k = S_n + y_{k-M_n}(C_n) ,\quad \sigma_k = \sigma_{k-M_n}(C_n),
\quad s_k = S_n + s_{k-M_n}(C_n),\\
\qquad\qquad \qquad \qquad\qquad \qquad \qquad \qquad \hbox{ for } M_n\le k < M_{n+1}.\label{p40}
\end{eqnarray}
Denote $\mathbf{y}(Z)$, $\mbox{\boldmath$\sigma$}(Z)$ and $\mathbf{s}(Z)$ the initial car-positions, the final
delays and final relative car-positions of the process $Z$.
\begin{vthm}
\label{p30}
The law of $\mathbf{y}(Z)$ and $\mbox{\boldmath$\sigma$}(Z)$ under ${\mathbb P}$ are stationary and $\mathbf{y}(Z)$
and $\mbox{\boldmath$\sigma$}(Z)$ are independent. In particular $\mathbf{y}(Z)$ is a Poisson process
of parameter $\lambda$ and $\mbox{\boldmath$\sigma$}(Z)$ is a sequence of iid exponential
random variables of mean 1. Furthermore $\mathbf{s}(Z)$ is a Poisson process of
parameter $\lambda$.
\end{vthm}
Stationarity of $\mathbf{y}(Z)$ and $\mbox{\boldmath$\sigma$}(Z)$ is an immediate consequence of
stationarity of $Z$. The other properties follow from the stationary
construction of the queue and relation \eqref{p41} as shown in the next section.
\paragraph{Trajectory extension} The trajectory of the car starting
at $y_i\in[S_n,S_{n+1})$ is defined only for $t\in[0,B_{i-M_n,0}(C_n)]$; see
\eqref{p24}. We extend the trajectory by just continuing at speed $1$ from this
point on:
\begin{equation}
\label{pi}
\pi_i(t) =
\left\{
\begin{array}{ll}
S_n+ \pi^n_{i-M_n}(t)& \mbox{if}\,\ y_i\in[S_n,S_{n+1}) , \; t\in[0,B_{i-M_n,0}(C_n)] \\
S_n-t+ B_{i-M_n,0}(C_n) & \mbox{if} \,\ y_i\in[S_n,S_{n+1}) , \; t>B_{i-M_n,0}(C_n)
\end{array}
\right.
\end{equation}
where $\pi^n_i$ is the trajectory of the $i$th particle of cycle $C_n$. Using
definitions \eqref{pi} and \eqref{p24} the corresponding process $Z$ has a
countable number of coordinates at each position $y$; the $k$th coordinate
indicates the interval of time spent at $y$ by the $k$th car initially to the
right of $y$. We abuse notation and continue to call this process $Z$. Notice
that $\mathbf{y}(Z)$, $\mbox{\boldmath$\sigma$}(Z)$ and $\mathbf{s}(Z)$ remain unchanged and Lemma \ref{p30}
holds for this extension.
\section{Stationary workload process}
\label{S7}
Assume $\lambda<1$ and let $(\mathbf{y}_n,\mbox{\boldmath$\xi$}_n)$ be the sequence introduced at the
beginning of the previous section. Let $\mbox{\boldmath$\sigma$}_n=\mbox{\boldmath$\sigma$}(\mathbf{y}_n,\mbox{\boldmath$\xi$}_n)$ be as defined
in \eqref{t2}. By \eqref{p44} the workload cycles ${\tilde C}_n = {\tilde C}(\mathbf{y}_n,\mbox{\boldmath$\sigma$}_n)$
have the same length as the traffic cycles $C_n$: ${\tilde X}_n=X_n$ and the number of
cars in cycle $C_n$ is the same as the number of customers in cycle ${\tilde C}_n$:
${\tilde N}_n=N_n$. Let $S^o_n$ be as in \eqref{p27} and define $W^o=(W^o_y;\,y\in{{\mathbb{R}}})$ by
\begin{equation}
\label{p31}
W^o_y = W_{y-S^o_n}({\tilde C}_n) \qquad\qquad \hbox{ for } y\in [S^o_n,S^o_{n+1}),\quad n\in{{\mathbb{Z}}}
\end{equation}
and the process $W=(W_y;\,y\in{{\mathbb{R}}})$ by
\begin{equation}
\label{p32}
W_y = W^o_{y+(1-U)X_0}\qquad y\in{{\mathbb{R}}}
\end{equation}
where $U$ is the same variable used in \eqref{p26}. As before, for ${\mathbb P}$ given by
\eqref{p28},
\begin{vthm}
\label{p33a}
Under ${\mathbb P}$ the law of $W$ is stationary.
\end{vthm}
$W$ is a stationary $M/M/1$ queue with arrivals $\mathbf{y}(W)$ and departures
$\mathbf{\tilde s}(W)$. Hence the arrival process $\mathbf{y}(W)$ is a stationary Poisson process of
rate $\lambda$ in ${{\mathbb{R}}}$. By Burke's Theorem {\ref{burke}}, the same is true
for the departure process $\mathbf{\tilde s}(W)$.
The stationary workload process $W$ and the stationary traffic process $Z$ are
constructed in the same space (as a function of $(((\mathbf{y}_n,\mbox{\boldmath$\xi$}_n);\,n\in{{\mathbb{Z}}}),U)$) so
that they have exactly the same cycles and the initial and relative final car
positions of $Z$ coincide respectively with the arrival and departure process of $W$:
\begin{equation}
\label{p47}
\mathbf{y}(Z) = \mathbf{y}(W), \quad \mathbf{s}(Z)=\mathbf{\tilde s}(W)
\end{equation}
This finishes the proof of Lemma {\ref{p30}} and Proposition {\ref{t6a}}.
\section{Final remarks}
When the initial density of cars is smaller than the inverse of the mean delay
time, i.e., when $\lambda<1$, the process is called \emph{subcritical}. We have
described how a stationary configuration of initial car positions organizes the
departure from speed zero to speed $1$ under the slow-to-start rule in the subcritical
case. The method relates the space-stationary one-dimensional slow-to-start
traffic model with the workload process of an $M/M/1$ time-stationary queueing
system. If the initial position of cars is Poisson then the final relative
position of free cars is also Poisson. The same method shows that if the initial
position of cars $\mathbf{y}$ is a stationary ergodic process with density $\lambda<1$
(density is the mean number of cars per unit length) then the final relative car
position process coincides with the departure process of the queue with arrival
process~$\mathbf{y}$.
The supercritical case merits investigation. When $\lambda>1$ cars form
``traffic jams'' on a subset of the initial positions $\mathbf{y}$, depending on
time. As time grows, the density of this set goes to zero and the number of cars
per traffic jam increases. We agree with an anonymous referee who conjectures
that as $t$ goes to infinity the set of traffic jams suitably rescaled converges
to a Poisson process of rate 1. The critical case $\lambda=1$ should also show
traffic jams.
Another model worth investigating might include spontaneous stops at some rate $\nu$
while keeping the slow-to-start rule. Such models would be closer to those
studied by Nagel and Schreckenberg and by Gray and Griffeath.
We are investigating the same phenomena for cellular automata in ${{\mathbb{Z}}}$. The
approach can be applied but there are complications coming from the hard core
interaction. The results are similar.
\section*{Acknowledgements}
We thank Andreas Schadschneider and David Griffeath for discussions about
slow-to-start models and Luis Renato Fontes for his inspiring comments. We thank
the referees for their reading and comments.
This paper is partially supported by Funda\c c\~ao de Amparo \`a Pesquisa do
Estado de S\~ao Paulo FAPESP, Funda\c c\~ao de Amparo \`a Pesquisa do Estado de
Minas Gerais FAPEMIG, Programa N\'ucleos de Excel\^encia PRONEX, Conselho
Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico CNPq and Instituto
do Mil\^enio Avan\c co Global e Integrado da Matem\'atica Brasileira, IM-AGIMB.
E.P. was partially supported by CRDF, Grant RUMI-2693-MO-05. Part of this work
was done while P.A.F. was visiting the Isaac Newton Institute for Mathematical
Sciences during the program Principles of the Dynamics of Non-Equilibrium
Systems in 2006. Hospitality and support are gratefully acknowledged.
\section*{References}
\section{Introduction}
Active nematic liquid crystals are fluids consisting of self- (or mutually) propelled rod-shaped particles, resulting in an anisotropic fluid with broken rotational symmetry that drives itself at the microscopic scale \cite{Surrey:2001,Sanchez:2012}. This interesting combination of broken rotational symmetry and out-of-equilibrium, active behaviour has led to an explosion of interest from both experimental and theoretical physics \cite{Kruse:2004,Giomi:2013,Giomi:2015,Surrey:2001,Sanchez:2012,Ramaswamy:2003}. There have been many successful experiments reproducing active nematics, often utilising biological components, including microtubule kinesin suspensions \cite{Surrey:2001,Sanchez:2012,Guillamat:2017}, acto-myosin gels \cite{Schaller:2010,Schaller:2013} and elongated cells \cite{Dunkel:2013,Wioland:2016,You:2018,BlanchMercader:2018}, but also from inert components such as vibrated monolayers of granular rods \cite{Narayan:2007}.
These systems have been shown to display a rich phenomenology depending on many factors such as the degree to which the system is driven \cite{Giomi:2015}, the confining geometry \cite{Edwards:2009}, the density \cite{You:2018} and the boundary conditions \cite{Giomi:2014}. By varying these factors it is possible to observe diverse spatiotemporal patterns including vortices \cite{Edwards:2009,Wioland:2016}, oscillating textures \cite{Schaller:2010,Keber:2014} and travelling bands \cite{Edwards:2009,Schaller:2010}. When the driving force is sufficiently high, active nematics can spontaneously nucleate many topological defects, generating flows and interacting chaotically in a regime referred to as low Reynolds number active turbulence \cite{Hemingway:2016,BlanchMercader:2018,Sanchez:2012,Giomi:2015}. These defects have been shown to exert an elastic torque on each other \cite{Vromans:2016} and experiments have indicated that long range nematic order of defects is possible in a state of active turbulence \cite{DeCamp:2015}, though this has not been reproduced theoretically. It has been shown that the position and orientation of these defects can be influenced by the substrate on which the active nematic is placed. By changing the geometry of the substrate it is possible to reorient defects and sort them by charge \cite{Ellis:2018}; by changing the topology of the substrate it is possible to control the total number of defects and their trajectories \cite{Keber:2014}. A defect-ordered active nematic has been recreated by placing a 2D active nematic on top of a passive liquid crystal that can be controlled by an external magnetic field~\cite{Guillamat:2016}. When the passive liquid crystal layer is ordered into a smectic state by the magnetic field it creates a global anisotropy, defined by the orientation of the director in the passive liquid crystal. The resulting active nematic layer forms anti-parallel channels of topological defects \cite{Guillamat:2016}. When the two layers are in contact, the passive liquid crystal layer acts as an anisotropic dissipative agent for the flows generated in the driven active layer.
In this paper we explore how the introduction of anisotropic dissipative forces can lead to activity-driven order in a simulated turbulent active nematic. This is done by introducing friction and viscosity coefficients that depend on a fixed substrate orientation. First we define a general viscosity and friction for a standard continuum model for active nematics in two dimensions. We then introduce the anisotropy to these quantities based on the substrate frame of reference. We observe that the active stress drives global nematic alignment of defects in the presence of anisotropic viscosity but not anisotropic friction. This global nematic order depends on the degree of anisotropy in the viscosity and the degree to which the active nematic is driven. We then support the hypothesis that this ordering is an active process by simulating passive liquid crystals with anisotropic viscosity, which display no such ordering. Finally we explore the ordering mechanism by analysing the flow patterns around a single defect, noting that the energy dissipation of the active flows induces a torque on the defects.
First we must introduce the anisotropic friction and viscosity to the active nematic equations. We start from the generic form of the equations governing an incompressible active nematic which are given by:
\begin{align}
\rho\partial_tv_i &= \partial_j\big(\sigma^{(t)}_{ij} - p\delta_{ij}\big) - \mu_{ij}v_j \label{eq:v}\\
{}[\partial_t + v_k\partial_k]Q_{ij} &= \lambda S u_{ij} + Q_{ik}\omega_{kj} - \omega_{ik}Q_{kj} + \gamma^{-1}H_{ij} \label{eq:Q}
\end{align}
where $\rho$ is the density, $\sigma^{(t)}_{ij}$ is the total stress tensor and the tensor $\mu_{ij}$ contains the friction coefficients. Since we are considering the incompressible limit, $\partial_iv_i = 0$ and we set $\rho=1$ everywhere. $Q_{ij} = S(n_in_j - \delta_{ij}/2)$ is the nematic tensor, $S$ is the nematic order parameter and $\lambda$ is the flow alignment parameter. The strain rate tensor is given by $u_{ij} = (\partial_iv_j + \partial_jv_i)/2$, the vorticity tensor by $\omega_{ij} = (\partial_iv_j-\partial_jv_i)/2$ and the molecular tensor by $H_{ij} = -\partial F/\partial Q_{ij}$, where $F$ is the Landau--de Gennes free energy, which in two dimensions is given by:
\begin{equation}
F = \frac{K}{2}\int dA \left[ |\nabla Q|^2 + \frac{1}{\epsilon^2} trQ^2(trQ^2-1)\right] \label{eq:F}
\end{equation}
where the parameter $\epsilon$ is a characteristic length proportional to the defect core radius and $K$ is the elastic constant associated with distortions in the director field.
The total stress tensor ($\sigma^{(t)}$) is the sum of elastic stresses ($\sigma^{(e)}$), viscous stresses ($\sigma^{(v)}$), and the active stress generated by the molecular motors ($\sigma^{(a)}$) controlled by parameter $\alpha$. In the general form, these stress tensors are given by:
\begin{align}
\sigma^{(e)}_{ij} &= -\lambda S H_{ij} + Q_{ik}H_{kj} - H_{ik}Q_{kj} \label{eq:sig_e}\\
\sigma^{(v)}_{ij} &= \nu_{ijkl}\partial_kv_l \label{eq:sig_v}\\
\sigma^{(a)}_{ij} &= \alpha Q_{ij} \label{eq:sig_a}
\end{align}
We introduce the anisotropy through the viscous stress tensor and the friction tensor. Since this anisotropy is defined by the substrate we must introduce an external frame of reference. Without loss of generality we assume that the high and low viscosity directions are aligned parallel to the $x$ and $y$ axes. With this condition we can assume the viscous stress tensor in two dimensions has the form:
\begin{align*}
\sigma^{(v)}_{xx} &= \nu_{0}\partial_xv_x\\
\sigma^{(v)}_{xy} &= (\nu_{1}\partial_xv_y + \nu_{2}\partial_yv_x)/2\\
\sigma^{(v)}_{yx} &= (\nu_{3}\partial_yv_x + \nu_{4}\partial_xv_y)/2\\
\sigma^{(v)}_{yy} &= \nu_{0}\partial_yv_y
\end{align*}
If at this stage we were to set all values of $\nu$ to be identical, we would obtain the normal viscous stress tensor for an isotropic fluid. We make the further simplifying assumption, allowed by symmetry for an incompressible fluid, that $\nu_y = \nu_1 = \nu_2$ and $\nu_x = \nu_3 = \nu_4$. Here we introduce the anisotropy by setting $\nu_x = \nu_0(1-\Delta\nu)$ and $\nu_y = \nu_0(1+\Delta\nu)$. This would mean that the dissipative effects of the perpendicular gradient of a flow are different depending on whether it is aligned with the $x$ or $y$ axis. It should be noted that the anisotropic viscosity that we have introduced here depends on an external frame of reference (the $x$ and $y$ axes) hence it is no longer Galilean invariant \cite{ViscosityFrictionComment}. The viscosity introduced here is that which would exist in a fully ordered incompressible nematic oriented parallel to the $x$ axis \cite{deGennes:1995}, which is similar to the substrate used by Guillamat et al. \cite{Guillamat:2016}.
The anisotropic friction tensor can be defined generally as:
\begin{equation}
\mu_{ij} = \mu^{(0)}\delta_{ij} + \mu^{(1)}_{ij}
\end{equation}
where $\mu^{(0)}$ is the general isotropic substrate friction and $\mu^{(1)}_{ij}$ is the anisotropic part of the friction containing the required symmetries of the substrate. Since the substrate anisotropy is arranged to be parallel to the $x$ and $y$ axes, the off-diagonal components of the anisotropic friction tensor must be zero ($\mu^{(1)}_{xy}=\mu^{(1)}_{yx} = 0$), leaving just two friction coefficients. We introduce the anisotropy in a similar fashion to the viscosity and set $\mu_{xx} = \mu_0(1-\Delta\mu)$ and $\mu_{yy} = \mu_0(1+\Delta\mu)$. This formulation allows us to control the degree of anisotropy fully by the dimensionless parameters $\Delta \nu$ and $\Delta\mu$; without loss of generality we choose $\Delta\nu \ge 0$ and $\Delta\mu \ge 0$.
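As an illustration, the divergence of the anisotropic viscous stress and the friction force entering Eq.~\ref{eq:v} can be evaluated on a periodic grid as in the following sketch (Python with central finite differences; grid axis 0 is taken as $x$ and axis 1 as $y$, and all names are illustrative rather than those of our actual stream function solver):
\begin{verbatim}
import numpy as np

def aniso_forces(vx, vy, dx, nu0, dnu, mu0, dmu):
    """Divergence of the anisotropic viscous stress plus friction force
    on a periodic grid (central differences)."""
    d = lambda f, ax: (np.roll(f, -1, ax) - np.roll(f, 1, ax)) / (2*dx)
    nux, nuy = nu0*(1 - dnu), nu0*(1 + dnu)
    sxx = nu0 * d(vx, 0)                     # sigma_xx
    sxy = 0.5 * nuy * (d(vy, 0) + d(vx, 1))  # sigma_xy, nu_1 = nu_2 = nu_y
    syx = 0.5 * nux * (d(vx, 1) + d(vy, 0))  # sigma_yx, nu_3 = nu_4 = nu_x
    syy = nu0 * d(vy, 1)                     # sigma_yy
    fx = d(sxx, 0) + d(sxy, 1) - mu0*(1 - dmu)*vx
    fy = d(syx, 0) + d(syy, 1) - mu0*(1 + dmu)*vy
    return fx, fy
\end{verbatim}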
Equations \ref{eq:v}-\ref{eq:Q} are simulated using a stream function methodology with a periodic boundary in two dimensions. This allows us to recreate an active nematic with varying degrees of anisotropy in either the viscosity or friction. The model parameters are selected such that the system is in a state of active turbulence containing of the order of 200 defects, see S.I. for details. We choose to look at the influence of anisotropic viscosity and friction independently by either setting $\Delta\mu$ or $\Delta\nu$ to zero in all results presented here. Fig.~\ref{fig:active_snap} shows the resulting director (left column) and vorticity (right column) of active nematics with isotropic hydrodynamics (top row), anisotropic friction (middle row) and anisotropic viscosity (bottom row). From these images it is very difficult to distinguish the nematic textures of each system. The vorticity fields show some signs of anisotropy, with some short wavelength fluctuations in the $y$ direction being visible for the anisotropic viscosity system.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Active_snap.jpg}
\caption{\label{fig:active_snap} Typical director (left column) and vorticity (right column) for an isotropic active nematic (top row), an active nematic with anisotropic friction (middle row) and anisotropic viscosity (bottom row). The difference between the fields is not obvious, however some short wavelength fluctuations in the $y$ direction of the vorticity field are visible in the anisotropic viscosity case.}
\end{figure}
The lowest energy topological defects in a two-dimensional nematic have half-integer charge; this results in the characteristic $\pm1/2$ defects that are regularly observed in active nematics. Since these defects are not rotationally symmetric, they have an easily defined orientation which we will denote by $\psi$. The angle of the director field ($\theta$) around any of these defects can be expressed as $\theta = k(\phi - \psi) + \psi$ where $\phi$ is the polar angle between a reference axis (in this case the $x$ axis) and the position around the defect core and $k$ gives the charge of the defect \cite{Vromans:2016}. We use this definition to measure the orientation, $\psi$, of all defects in the simulated nematic.
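In practice $\psi$ can be extracted from the director angle sampled on a small loop around a located defect core, using $2\theta - 2k\phi = 2(1-k)\psi$; the following sketch (Python, illustrative names, assuming the core position is already known and $\theta$ is given mod $\pi$ on a unit-spaced grid indexed $[x,y]$) implements this estimator:
\begin{verbatim}
import numpy as np

def defect_orientation(theta, x0, y0, k=0.5, r=3.0, npts=64):
    phi = np.linspace(0, 2*np.pi, npts, endpoint=False)
    xs = np.clip(np.rint(x0 + r*np.cos(phi)).astype(int), 0, theta.shape[0]-1)
    ys = np.clip(np.rint(y0 + r*np.sin(phi)).astype(int), 0, theta.shape[1]-1)
    th = theta[xs, ys]
    # theta = k*(phi - psi) + psi  =>  2*theta - 2*k*phi = 2*(1-k)*psi
    z = np.exp(1j * (2*th - 2*k*phi)).mean()
    return np.angle(z) / (2*(1 - k))
\end{verbatim}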
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Active.pdf}
\caption{\label{fig:active_res} (a) The nematic correlation function between positive defects. The location and depth of the minima is not significantly affected by the anisotropy, which implies that the active length scale and the elastic torques between defects are unaffected by the anisotropy. (b) The probability density function for the orientation of defects ($\psi$) within an active nematic with varying degrees of anisotropic friction ($\Delta\mu$); there is no clear order in the defect orientations. (c) The probability density function for the orientation of defects within an active nematic with varying degrees of anisotropic viscosity ($\Delta\nu$); a clear nematic order ($\Theta$) is observed that increases with the degree of anisotropy (inset). (d) The global nematic order is not observed for very small or very large values of activity for fixed anisotropy ($\Delta\nu = 0.3$). We can identify an apparent peak in the active stress (inset).}
\end{figure}
The nematic correlation function between positive defects is defined by $C_2(r) = \langle \cos(2(\psi_i - \psi_j))\rangle_{i-j\sim r}$, shown in Fig.~\ref{fig:active_res}a. Here we see that the orientational correlation length between defects is largely unaffected by the introduction of anisotropic friction or viscosity. This correlation length is set by the active length scale, $l_\alpha^2\sim K/\alpha$, which is the length at which the active and elastic forces balance and is proportional to the inter defect spacing \cite{Hemingway:2016,Giomi:2015}. The location of the minimum of the curve does not change, hence the introduction of anisotropic friction and viscosity does not affect the active length scale or the elastic torques that defects inflict upon each other at short ranges.
The distribution of $+1/2$ defect orientations within a simulated active nematic with isotropic friction and viscosity in the turbulent regime is uniform, i.e. there is no global alignment of defects. The same is true for an active nematic with anisotropic friction, Fig.~\ref{fig:active_res}b. However for an active nematic with anisotropic viscosity a clear nematic order emerges, with positive defects being preferentially aligned parallel with the direction associated with the lowest viscosity, Fig.~\ref{fig:active_res}c. We measure the magnitude of this order by fitting the histogram to the function $f(\psi) = 0.5/\pi + \Theta\cos(2\psi)$, allowing us to observe that the ordering is stronger with a more significant anisotropy, see Fig.~\ref{fig:active_res}c (inset). This emergent global order is mediated by the magnitude of the active stress, with the nematic ordering of the defects becoming reduced when the activity is either too high or too low, Fig.~\ref{fig:active_res}d. These results suggest that the ordering of defects within an active nematic is related to the interaction between the flow generated by the defects and the anisotropic viscosity of the fluid. However when the activity becomes very large, the alignment is lost, Fig.~\ref{fig:active_res}d (inset). This is likely due to the system becoming increasingly turbulent and chaotic, with the spacing and lifetime of defects becoming very small.
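The fit itself is elementary; a sketch (Python/SciPy, illustrative names) is given below. Up to binning it is equivalent to the moment estimate $\Theta=\langle\cos 2\psi\rangle/\pi$.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def order_from_fit(psi, nbins=36):
    counts, edges = np.histogram(psi, bins=nbins, range=(-np.pi, np.pi),
                                 density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    f = lambda x, theta: 0.5/np.pi + theta*np.cos(2*x)
    (theta_fit,), _ = curve_fit(f, centers, counts, p0=[0.0])
    return theta_fit
\end{verbatim}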
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Passive.pdf}
\caption{\label{fig:passive} Snapshot of a typical nematic director (a) and vorticity (b) at the point of measurement for a passive nematic with anisotropic viscosity. (c) The number of defects as a function of simulation time. The introduction of anisotropic hydrodynamics does not appear to affect the coarsening dynamics. (d) The probability density function for the orientation of positive defects in a passive nematic at the point of measurement containing 96-104 total defects. We see no emergent global order.}
\end{figure}
In order to test the hypothesis that the active flow drives a global alignment, we perform a similar study in passive nematic systems with anisotropic viscosity and friction. In the absence of an active stress, a passive nematic will relax toward the lowest energy state of the system: a uniform director field. If the nematic starts from an initially disordered state containing many defects, this process of minimising the internal energy involves significant rearrangement of the director and the annihilation of many defects, see Fig.~\ref{fig:passive}a. This motion of the nematic generates a flow in the suspending fluid, see Fig.~\ref{fig:passive}b. The introduction of anisotropic friction or viscosity does not appear to affect this relaxation process of a passive nematic, with the number of defects in all samples decaying at a very similar rate, Fig.~\ref{fig:passive}c. By simulating many passive nematics from independent, random initial conditions for the same amount of time, it is possible to create many samples of a passive nematic all at the same stage of relaxation containing a similar number of defects. This approach can be used to study large numbers of interacting defects in passive nematics, and has been used here to confirm that there is no global nematic orientation of defects, even in cases with anisotropic friction or viscosity, see Fig.~\ref{fig:passive}d. These observations indicate that the hydrodynamic flows generated by elastic interactions alone are insufficient to generate any net defect ordering in our system.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Single.pdf}
\caption{\label{fig:single} (a) Flow pattern around a single positive defect for different orientations (column) and different anisotropies (row). We see that the mirror symmetry of flow around the defect core can be lost due to anisotropic viscosity or friction for certain orientations. (b) Kinetic energy ($E$) of the flow around each defect as a function of orientation for various degrees of anisotropy in either the viscosity (solid lines) or friction (dashed lines). (c) Kinetic energy
of the flow around a defect for various values of activity ($\alpha$) for systems with anisotropic viscosity (solid lines) and friction (dashed lines). The lowest energy configuration is always for the defect to be parallel to the $x$ axis.}
\end{figure}
The results presented in Figs.~\ref{fig:active_res},\ref{fig:passive} indicate that the ordering of defects observed in active nematics is due to the interactions between the active flow and the anisotropic viscosity. The flow is generated by gradients in the nematic director and usually maximised around defects which generate characteristic flow patterns \cite{Giomi:2015}. The positive defects generate strong polar flows which lead them to `swim' through the fluid giving them a self propelled particle like behaviour. By simulating the flow field generated by a fixed nematic texture containing a single defect with a predetermined orientation, we can observe directly how the flow interacts with the anisotropic viscosity and friction. In an isotropic fluid, this is of course independent of the orientation of the defect, see Fig.~\ref{fig:single}a (top row). The introduction of anisotropic viscosity and friction distort these flow patterns, as they adapt to the dissipative forces of the substrate, see Fig.~\ref{fig:single}a (middle and bottom row, respectively). It is immediately apparent that when a defect is not aligned with either principal direction, the flow pattern around the defect loses its mirror symmetry in anisotropic cases, Fig.~\ref{fig:single}a (middle column).
Fig.~\ref{fig:single}b shows the net kinetic energy of the flow around a defect $E = \rho\int v^2dA$, which for the isotropic case is independent of defect orientation. When anisotropic viscosity or friction are introduced we observe a clear dependence of the kinetic energy on the defect orientation, with the system being in a minimum energy configuration when the defect is aligned parallel to the direction of minimal shear viscosity $\psi = 0$. We see that in both cases the magnitude of the energy difference depends on the magnitude of the anisotropy $\Delta$ but is significantly larger for the anisotropic viscosity case, Fig.~\ref{fig:single}b. As the active stress is increased, we see that the dependence of the energy on the defect orientation increases for systems with anisotropic viscosity, but not in cases with anisotropic friction, Fig.~\ref{fig:single}c. This supports the hypothesis that the global ordering of defects is driven by activity.
In active nematics, the active stress drives the system toward `active turbulence', a chaotic state at low Reynolds number featuring many topological defects with no net order. This highlights a common feature: the insertion of active stresses often acts to reduce order, in this case destroying the order of the nematic director and proliferating defects. When the viscosity of the active nematic has an anisotropy defined by an external frame of reference, in this case the substrate, the active flows can lead to the emergence of global nematic order of the topological defects driven by the active stress. This is evidenced by the fact that such order is not observed in systems with no active stress. Topological defects generate active flows in the fluid, the kinetic energy of which must be dissipated by the friction and viscosity of the fluid. When the fluid viscosity defined by the substrate is anisotropic, the rate of energy dissipation depends on the orientation of a defect relative to the substrate. This energy difference generates a torque on the core of the defect leading to a preferential orientation. This torque depends on the magnitude of the anisotropy and the active stress. However the active stress also defines the inter defect spacing and the defect lifetime. When the active stress is increased, the inter defect spacing is reduced. This leads to a relative increase in the elastic torques the defects exert on each other, which eventually overcomes the ordering effects of the anisotropic viscosity. In the case of anisotropic friction, the active stress does not increase the torque on the defects, so when the system is in a state of active turbulence, the disordering effects of the activity outweigh the ordering effects of the anisotropic friction in all observed cases.
\begin{acknowledgments}
I would like to thank Carles Blanch-Mercader, Karsten Kruse and Nicholas Ecker for insightful discussions.
\end{acknowledgments}
\section{Introduction}
One of the important tasks at the LHC is to search for
physics beyond the Standard Model (SM), where the
Minimal Supersymmetric Standard Model (MSSM)~\cite{mssm} is one of the
leading candidates.
Two related important tasks are the investigation of the mechanism of electroweak
symmetry breaking and the production and measurement of
the properties of Cold Dark Matter (CDM).
The most frequently investigated models for electroweak symmetry
breaking are the Higgs mechanism within the SM and within the MSSM.
The latter also offers a natural candidate for CDM, the
Lightest Supersymmetric Particle (LSP), i.e.\ the lightest neutralino,~$\neu{1}$~\cite{EHNOS}.
Supersymmetry (SUSY) predicts two scalar partners for all SM fermions as well
as fermionic partners to all SM bosons.
Contrary to the case of the SM, in the MSSM
two Higgs doublets are required.
This results in five physical Higgs bosons instead of the single Higgs
boson in the SM. These are the light and heavy ${\cal CP}$-even Higgs bosons, $h$
and $H$, the ${\cal CP}$-odd Higgs boson, $A$, and the charged Higgs bosons,
$H^\pm$.
In the MSSM with complex parameters (cMSSM) the three neutral Higgs
bosons mix~\cite{mhiggsCPXgen,mhiggsCPXRG1,mhiggsCPXFD1},
giving rise to the states $h_1, h_2, h_3$.
If SUSY is realized in nature and the scalar quarks and/or the gluino
are in the kinematic reach of the LHC, it is expected that these
strongly interacting particles are copiously produced.
The primarily produced strongly interacting particles subsequently
decay via cascades to
SM particles and (if $R$-parity conservation is assumed, as we do) the
LSP. One step in these decay chains is often the decay of a chargino,
$\cha{1,2}$,
to a SM particle and the LSP, or, as a competing process, the
chargino decay to another SUSY particle accompanied by a
SM particle. Also neutral and charged Higgs bosons can be produced this way.
Via these decays some characteristics of the LSP and/or Higgs bosons can be
measured, see, e.g., \citeres{atlas,cms} and references therein.
At any future $e^+e^-$ collider (such as ILC or CLIC)
a precision determination of the properties of the observed particles is
expected~\cite{teslatdr,ilc}. (For combined LHC/ILC analyses and further
prospects see \citere{lhcilc}.)
Thus, if kinematically accessible, the pair production of charginos
with a subsequent decay to the LSP and/or Higgs bosons
can yield important information about the lightest neutralino and
the Higgs sector of the model.
In order to yield a sufficient accuracy, one-loop corrections to
the various chargino decay modes have to be considered.
In this paper we evaluate full one-loop corrections to chargino decays
in the cMSSM.
If scalar quarks are sufficiently heavy (as in many GUT based models
such as CMSSM, GMSB or AMSB, see for instance
\citere{newbenchmark}) a chargino decay to a quark and a scalar
quark is kinematically forbidden. Assuming heavy squarks
we calculate the full one-loop correction to
all two body decay modes (which are non-zero at the tree-level),
\begin{align}
\label{CNH}
&\Gamma(\DecayCmNH{i}{j}) \qquad (i = 1,2,\; j = 1,2,3,4)~, \\
\label{CNW}
&\Gamma(\DecayCmNW{i}{j}) \qquad (i = 1,2,\; j = 1,2,3,4)~, \\
\label{CCh}
&\Gamma(\DecayCmCh{k}) \qquad (k = 1,2,3)~, \\
\label{CCZ}
&\Gamma(\DecayCmCZ) ~, \\
\label{CSln}
&\Gamma(\DecayCmnSl{i}{l}{k}) \qquad (i = 1,2,\; l = e, \mu, \tau,\; k = 1,2)~, \\
\label{CSnl}
&\Gamma(\DecayCmlSn{i}{l}) \qquad (i = 1,2,\; l = e, \mu, \tau)~.
\end{align}
The total width is defined as the sum of the channels (\ref{CNH}) to
(\ref{CSnl}), where for a given parameter point several channels may
be kinematically forbidden.
As explained above,
we are especially interested in the branching ratios (BR) of the
decays involving a Higgs boson, \refeqs{CNH}, (\ref{CCh})
as part of an evaluation of a Higgs production cross section, and/or
involving the LSP, \refeqs{CNH}, (\ref{CNW}) as part of the measurement
of CDM properties at the LHC.
Consequently, it is not necessary to investigate three- or four-body decay
modes. These only play a significant role once the two-body modes
are kinematically forbidden, and thus the relevant BR's are zero.
The same applies to two-body decay modes that exist only at the
one-loop level, such as $\cha{2} \to \cha{1} \gamma$ (see, for instance,
\citere{chachaga}). While this channel is
of \order{\alpha^2}, the size of the one-loop corrections to \refeqs{CNH}
to (\ref{CSnl}) is of \order{\alpha}. We have numerically verified that the
contribution of $\Gamma(\cha{2} \to \cha{1} \gamma)$ to the total width is
completely negligible.
Tree-level results for the decays of charginos in the MSSM
were presented in \citeres{chadectree,chachaga,haber3}.
Higher-order corrections to chargino decays have been evaluated in
various analyses over the last
decade.
However, they were
either restricted to one specific channel
or only a very restricted set of parameters were analyzed,
and in many cases only parts of the one-loop calculation have been performed.
More specifically, the available literature comprises the following.
First order electroweak corrections to the partial decay widths of charginos
in the MSSM with real parameters (rMSSM) were derived:
for the two-body decays into a neutralino/chargino and $W/Z$~boson
including only third generation quark-squark exchange
diagrams~\cite{ChaDecCorr1},
for the three-body decays into the LSP and quarks,
including corrections to the masses of third generation fermions and SUSY
particles~\cite{ChaDecCPC}, and for the three-body leptonic decays at full
one-loop order in \citere{ChaDec3Body}.
The one-loop electroweak corrections to all two-body decay channels
of charginos, evaluated in an on-shell renormalization scheme,
have been implemented in the code SloopS~\cite{BaroII}.
In \citere{GRACESUSY} a large set of two-body and three-body decay channels of
charginos including full one-loop corrections has been calculated using the
code GRACE/SUSY-loop, but only a very limited set of numerical results has
been published. However, \citere{BaroII} has compared its results on the
partial widths with those of \citere{GRACESUSY} and concluded that the latter
uses a renormalization scheme which leads to too large corrections.
The code SDECAY~\cite{SDECAY} also includes all two-body decays of
charginos. However, no radiative corrections to these decay channels have been
included so far. A full one-loop calculation of the electroweak
corrections to the partial width of the decay of a chargino into a neutralino
and a $W$~boson in the MSSM and NMSSM is presented in \citere{liebler},
and made available with the code CNNDecays.
A brief comparison with this calculation can be found in
\refse{sec:calc}.
In the cMSSM only the decay of charginos into a neutralino and a $W$ boson has
been studied. In \citere{ChaDecCPVYang} a partial one-loop calculation of
rate asymmetries of $\champ{i} \to \neu{1} W^\mp$ has been performed,
including contributions from the third generation quarks, while
\citere{ChaDecCPVEberl} evaluated this ${\cal CP}$-violating asymmetry
at the full one-loop level, highlighting the relevance of the contribution
from the chargino wave function corrections.
However, a complete one-loop result for the two-body total decay width in the cMSSM
is missing so far.
In this paper we present
for the first time a full one-loop calculation for all non-hadronic
two-body decay channels of a chargino, taking into
account soft and hard QED radiation, simultaneously and
consistently evaluated in the cMSSM.
In \refse{sec:cMSSM} we review the relevant sectors of the cMSSM
and their renormalization.
Details about the calculation can be
found in \refse{sec:calc}, and the numerical results for all decay
channels are presented in \refse{sec:numeval}. The conclusions can be
found in \refse{sec:conclusions}.
The evaluation of the branching ratios of the charginos will be
implemented into the Fortran code
{\tt FeynHiggs}~\cite{feynhiggs,mhiggslong,mhiggsAEC,mhcMSSMlong}.
\section{The relevant sectors of the complex MSSM}
\label{sec:cMSSM}
All the channels (\ref{CNH}) -- (\ref{CSnl}) are calculated at the
one-loop level, including real QED radiation. This requires the
simultaneous renormalization of several sectors of the cMSSM. In
the following
subsections we briefly review these sectors.
Details about the renormalization of most of the sectors can be found in
\citere{Stop2decay}. Here we only review the renormalization that cannot
be found explicitly in \citere{Stop2decay}.
\subsection{The lepton/slepton sector of the cMSSM}
\label{sec:slepton}
For the evaluation of the one-loop contributions to the decay channels
in \refeqs{CSln}, (\ref{CSnl}) a renormalization of the scalar lepton
($\tilde{l}$) and scalar neutrino ($\tilde{\nu_l}$) sector is needed (we assume no generation
mixing and discuss the case for one generation only).
The bilinear part of the $\tilde{l}$ and $\tilde{\nu_l}$ Lagrangian,
\begin{align}
{\cal L}_{\tilde{l}/\tilde{\nu_l}}^{\text{mass}} &= - \begin{pmatrix}
\tilde{l}_L^{\dagger}, \tilde{l}_R^{\dagger} \end{pmatrix}
\matr{M}_{\tilde{l}} \begin{pmatrix} \tilde{l}_L \\ \tilde{l}_R
\end{pmatrix}
- \begin{pmatrix} \tilde{\nu_l}^{\dagger} \end{pmatrix}
\matr{M}_{\tilde{\nu_l}}\begin{pmatrix} \tilde{\nu_l} \end{pmatrix}~,
\end{align}
contains the slepton and sneutrino mass matrices
$\matr{M}_{\tilde{l}}$ and $\matr{M}_{\tilde{\nu_l}}$,
given by
\begin{align}\label{Sfermionmassenmatrix}
\matr{M}_{\tilde{l}} &= \begin{pmatrix}
M_{\tilde{l}_L}^2 + m_l^2 + M_Z^2 c_{2 \beta} (I_l^3 - Q_l s_\mathrm{w}^2) &
m_l X_l^* \\[.2em]
m_l X_l &
M_{\tilde{l}_R}^2 + m_l^2 +M_Z^2 c_{2 \beta} Q_l s_\mathrm{w}^2
\end{pmatrix}~, \\[.5em]
\matr{M}_{\tilde{\nu_l}} &= M_{\tilde{l}_L}^2
+ I_\nu^3 c_{2\beta} M_Z^2
\end{align}
with
\begin{align}
X_l &= A_l - \mu^* \tan \beta~.
\end{align}
$M_{\tilde{l}_L}$ and $M_{\tilde{l}_R}$ are the soft SUSY-breaking mass
parameters, where $M_{\tilde{l}_L}$ is equal for all members of an
$SU(2)_L$ doublet.
$m_l$ and $Q_{l}$ are, respectively, the mass and the charge of the
corresponding lepton, $I_{l/\nu}^3$ denotes the isospin of $l/\nu$,
and $A_l$ is the trilinear soft-breaking parameter.
$M_Z$ and $M_W$ are the masses of the $Z$~and $W$~boson,
$c_\mathrm{w} = M_W/M_Z$, and $s_\mathrm{w} = \sqrt{1 - c_\mathrm{w}^2}$. Finally we use the
short-hand notations $c_{x} = \cos(x)$, $s_x = \sin(x)$.
The mass matrix $\matr{M}_{\tilde{l}}$ can be diagonalized with the help of a unitary
transformation ${\matr{U}}_{\tilde{l}}$,
\begin{align}\label{transformationkompl}
\matr{D}_{\tilde{l}} &=
\matr{U}_{\tilde{l}}\, \matr{M}_{\tilde{l}} \, {\matr{U}}_{\tilde{l}}^\dagger =
\begin{pmatrix} m_{\tilde{l}_1}^2 & 0 \\ 0 & m_{\tilde{l}_2}^2 \end{pmatrix}~, \qquad
{\matr{U}}_{\tilde{l}}=
\begin{pmatrix} U_{\tilde{l}_{11}} & U_{\tilde{l}_{12}} \\
U_{\tilde{l}_{21}} & U_{\tilde{l}_{22}} \end{pmatrix}
~.
\end{align}
The mass eigenvalues depend only on $|X_l|$.
The scalar lepton masses will always be mass ordered, i.e.\
$m_{\tilde{l}_1} \le m_{\tilde{l}_2}$:
\begin{align}
\label{MSlep}
m_{\tilde{l}_{1,2}}^2 &= \frac{1}{2} \left( M_{\tilde{l}_L}^2 + M_{\tilde{l}_R}^2 \right)
+ m_l^2 + \frac{1}{2} I_l^3 c_{2\beta} M_Z^2 \\
&\quad \mp \frac{1}{2} \sqrt{\left[ M_{\tilde{l}_L}^2 - M_{\tilde{l}_R}^2
+ M_Z^2 c_{2\beta} (I_l^3 - 2 Q_l s_\mathrm{w}^2) \right]^2 + 4 m_l^2 |X_l|^2}~,
\nonumber\\[.5em]
m_{\tilde{\nu_l}}^2 &= M_{\tilde{l}_L}^2 + I_\nu^3 c_{2\beta} M_Z^2~.
\end{align}
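As a purely illustrative numerical cross-check of \refeq{MSlep} (a minimal
sketch, independent of our actual setup, which is based on
{\tt FeynArts}/{\tt FormCalc}, see \refse{sec:calc}), the tree-level
$\tilde\tau$ and $\tilde\nu_\tau$ masses can be evaluated for the parameters
of \refta{tab:para}, with $\mu$ set to its derived value in scenario
${\cal S}_>$ of \refse{sec:numeval}; the printed values reproduce the
corresponding stau and tau-sneutrino masses of \refta{tab:higgsslep}:
\begin{verbatim}
import numpy as np

# Tree-level stau and tau-sneutrino masses, eq. (MSlep); GeV units.
# mu = 581.8 is the derived value of scenario S_> (an assumption here).
MZ, MW = 91.1876, 80.399
sw2 = 1 - (MW/MZ)**2
ML2, MR2 = 300.0**2, 310.0**2              # M_{lL}^2, M_{lR}^2
ml, Al, mu, tb = 1.77684, 400.0, 581.8, 20.0
I3l, Ql, I3nu = -0.5, -1.0, +0.5
c2b = (1 - tb**2)/(1 + tb**2)              # cos(2 beta)

Xl = Al - mu*tb                            # X_l for real mu
Msl = np.array([[ML2 + ml**2 + MZ**2*c2b*(I3l - Ql*sw2), ml*Xl],
                [ml*Xl, MR2 + ml**2 + MZ**2*c2b*Ql*sw2]])
msl1, msl2 = np.sqrt(np.linalg.eigvalsh(Msl))   # mass ordered
msnu = np.sqrt(ML2 + I3nu*c2b*MZ**2)
print(msl1, msl2, msnu)                    # ~273.8, 339.5, 293.0
\end{verbatim}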
\subsubsection{Renormalization}
The parameter renormalization can be performed as follows,
\begin{align}
\matr{M}_{\tilde{l}} &\to \matr{M}_{\tilde{l}} + \delta\matr{M}_{\tilde{l}}~, \\
\matr{M}_{\tilde{\nu_l}} &\to \matr{M}_{\tilde{\nu_l}} + \delta\matr{M}_{\tilde{\nu_l}}
\end{align}
which means that the parameters in the mass matrix $\matr{M}_{\tilde{l}}$
are replaced by the renormalized parameters and a counterterm. Expanding to
one-loop order, the counterterm matrix $\delta\matr{M}_{\tilde{l}}$ has the entries
\begin{align}
\label{proc1a}
\delta\matr{M}_{\tilde{l}_{11}} &= \delta M_{\tilde{l}_L}^2 + 2 m_l \delta m_l
- M_Z^2 c_{2 \beta}\, Q_l \, \delta s_\mathrm{w}^2 + (I_l^3 - Q_l s_\mathrm{w}^2)
( c_{2 \beta}\, \delta M_Z^2 + M_Z^2\, \delta c_{2\beta})~, \\
\label{proc1b}
\delta\matr{M}_{\tilde{l}_{12}} &= (A_l^* - \mu \tan \beta)\, \delta m_l
+ m_l (\delta A_l^* - \mu\, \delta \tan \beta - \tan \beta \, \delta \mu)~, \\
\label{proc1c}
\delta\matr{M}_{\tilde{l}_{21}} &=\delta\matr{M}_{\tilde{l}_{12}}^*~, \\
\label{proc1d}
\delta\matr{M}_{\tilde{l}_{22}} &= \delta M_{\tilde{l}_R}^2
+ 2 m_l \delta m_l + M_Z^2 c_{2 \beta}\, Q_l \, \delta s_\mathrm{w}^2
+ Q_l s_\mathrm{w}^2 ( c_{2 \beta}\, \delta M_Z^2+ M_Z^2\, \delta c_{2 \beta})~, \\
\label{proc1e}
\delta\matr{M}_{\tilde{\nu_l}} &= \delta M_{\tilde{l}_L}^2 + I_\nu^3
(c_{2 \beta}\, \delta M_Z^2 + M_Z^2\, \delta c_{2\beta})~.
\end{align}
Another possibility for the parameter renormalization of the sleptons is
to start out with the physical parameters which corresponds to
the replacement:
\begin{align} \label{proc2}
\matr{U}_{\tilde{l}}\, \matr{M}_{\tilde{l}} \,
{\matr{U}}_{\tilde{l}}^\dagger &\to\matr{U}_{\tilde{l}}\, \matr{M}_{\tilde{l}} \,
{\matr{U}}_{\tilde{l}}^\dagger + \matr{U}_{\tilde{l}}\, \delta \matr{M}_{\tilde{l}} \,
{\matr{U}}_{\tilde{l}}^\dagger =
\begin{pmatrix} m_{\tilde{l}_1}^2 & Y_l \\ Y_l^* & m_{\tilde{l}_2}^2 \end{pmatrix} +
\begin{pmatrix}
\delta m_{\tilde{l}_1}^2 & \delta Y_l \\ \delta Y_l^* & \delta m_{\tilde{l}_2}^2
\end{pmatrix}
\end{align}
where $\delta m_{\tilde{l}_1}^2$ and $\delta m_{\tilde{l}_2}^2$ are the counterterms of the
squared slepton masses. $\delta Y_l$ is the counterterm%
\footnote{The unitary matrix $\matr{U}_{\tilde{l}}$ can be expressed by a
mixing angle and a corresponding phase. Then the counterterm $\delta
Y_l$ can be related to the counterterms of the mixing angle and the
phase (see \citere{mhcMSSM2L}).}%
~to the slepton mixing parameter $Y_l$ (which vanishes
at tree-level, $Y_l = 0$, and corresponds to the
off-diagonal entries in $\matr{D}_{\tilde{l}} =\matr{U}_{\tilde{l}}\,
\matr{M}_{\tilde{l}} \,
{\matr{U}}_{\tilde{l}}^\dagger$, \refeq{transformationkompl}). Using
\refeq{proc2}
one can express $\delta\matr{M}_{\tilde{l}}$ by the counterterms $\delta m_{\tilde{l}_1}^2$,
$\delta m_{\tilde{l}_2}^2$ and $\delta Y_l$. Especially for $\delta\matr{M}_{\tilde{l}_{12}}$
one finds
\begin{align}\label{dMsq12physpar}
\delta\matr{M}_{{\tilde{l}}_{12}} &=
U^*_{\tilde{l}_{11}} U_{\tilde{l}_{12}}
(\delta m_{\tilde{l}_1}^2 - \delta m_{\tilde{l}_2}^2) +
U^*_{\tilde{l}_{11}} U_{\tilde{l}_{22}} \delta Y_l + U_{\tilde{l}_{12}}
U^*_{\tilde{l}_{21}} \delta Y_l^*~.
\end{align}
In the following, the relation between \refeqs{proc1b} and
\eqref{dMsq12physpar} will be used to express either $\delta Y_l$, $\delta
A_l$ or $\delta m_l$ in terms of the other counterterms.
For the field renormalization the following procedure is applied,
\begin{align}
\begin{pmatrix} \tilde{l}_1 \\ \tilde{l}_2 \end{pmatrix} &\to
\left( \id + \frac{1}{2} \delta\matr{Z}_{\tilde{l}} \right)
\begin{pmatrix} \tilde{l}_1 \\ \tilde{l}_2 \end{pmatrix}
~~{\rm with}~~
\delta\matr{Z}_{\tilde{l}} = \begin{pmatrix}
\dZ{\tilde{l}_{11}} & \dZ{\tilde{l}_{12}} \\
\dZ{\tilde{l}_{21}} & \dZ{\tilde{l}_{22}}
\end{pmatrix}~, \\
\tilde{\nu_l} &\to \left( 1 + \tfrac{1}{2} \dZ{\tilde{\nu_l}} \right) \tilde{\nu_l}~.
\end{align}
This yields for the renormalized self-energies
\begin{align}
\hat{\Sigma}_{\tilde{l}_{11}}(k^2) &= \Sigma_{\tilde{l}_{11}}(k^2)
+ \tfrac{1}{2} (k^2 - m_{\tilde{l}_1}^2) (\dZ{\tilde{l}_{11}} + \dZ{\tilde{l}_{11}}^*)
- \dem_{\tilde{l}_1}^2~, \\
\hat{\Sigma}_{\tilde{l}_{12}}(k^2) &= \Sigma_{\tilde{l}_{12}}(k^2)
+ \tfrac{1}{2} (k^2 - m_{\tilde{l}_1}^2) \dZ{\tilde{l}_{12}}
+ \tfrac{1}{2} (k^2 - m_{\tilde{l}_2}^2) \dZ{\tilde{l}_{21}}^*
- \delta Y_l~, \\
\hat{\Sigma}_{\tilde{l}_{21}}(k^2) &= \Sigma_{\tilde{l}_{21}}(k^2)
+ \tfrac{1}{2} (k^2 - m_{\tilde{l}_1}^2) \dZ{\tilde{l}_{12}}^*
+ \tfrac{1}{2} (k^2 - m_{\tilde{l}_2}^2) \dZ{\tilde{l}_{21}}
- \delta Y_l^*~, \\
\hat{\Sigma}_{\tilde{l}_{22}}(k^2) &= \Sigma_{\tilde{l}_{22}}(k^2)
+ \tfrac{1}{2} (k^2 - m_{\tilde{l}_2}^2) (\dZ{\tilde{l}_{22}} + \dZ{\tilde{l}_{22}}^*)
- \dem_{\tilde{l}_2}^2~, \\
\hat{\Sigma}_{\tilde{\nu_l}}(k^2) &= \Sigma_{\tilde{\nu_l}}(k^2)
+ \tfrac{1}{2} (k^2 - m_{\tilde{\nu}_l}^2) (\dZ{\tilde{\nu_l}} + \dZ{\tilde{\nu_l}}^*)
- \dem_{\tilde{\nu}_l}^2~.
\end{align}
In order to complete the renormalization of the lepton/slepton sector,
renormalization constants also have to be introduced for the corresponding
lepton, i.e.\ for its mass, $m_l$, and the lepton fields $l_L$, $l_R$, $\nu_L$:
\begin{align}
m_l &\to m_l + \delta m_l~,\\
l_{L/R} &\to (1 + \tfrac{1}{2} \dZ{l}^{L/R})\, l_{L/R}~, \\
\nu_L &\to (1 + \tfrac{1}{2} \dZ{\nu})\, \nu_L~,
\end{align}
with $\delta m_l$ being the lepton mass counterterm and $\dZ{l}^L$ and
$\dZ{l}^R$ being the $Z$~factors of the left-handed and the right-handed
charged lepton fields, respectively; $\dZ{\nu}$ is the neutrino
field renormalization.
The renormalized self energy $\hat{\Sigma}_{l}$ can then be decomposed
into left/right-handed and scalar left/right-handed parts,
$\hat{\Sigma}_{l}^{L/R}$ and $\hat{\Sigma}_{l}^{SL/SR}$,
respectively, while only the left-handed part exists for the self energy
$\hat{\Sigma}_\nu$ of the massless neutrino,
\begin{align}\label{decomposition}
\hat{\Sigma}_{l} (k) &= \not\! k\, {\omega}_{-} \hat{\Sigma}_l^L (k^2)
+ \not\! k\, {\omega}_{+} \hat{\Sigma}_l^R (k^2)
+ {\omega}_{-} \hat{\Sigma}_l^{SL} (k^2)
+ {\omega}_{+} \hat{\Sigma}_l^{SR} (k^2)~, \\[.3em]
\hat{\Sigma}_{\nu} (k) &= \not\! k\, {\omega}_{-} \hat{\Sigma}_\nu^L (k^2)
~,
\end{align}
where the components are given by
\begin{align}
\hat{\Sigma}_l^{L/R} (k^2) &= {\Sigma}_l^{L/R} (k^2)
+ \frac{1}{2} (\dZ{l}^{L/R} + {\dZ{l}^{L/R}}^*)~, \\
\hat{\Sigma}_l^{SL} (k^2) &= {\Sigma}_l^{SL} (k^2)
- \frac{m_l}{2} (\dZ{l}^L + {\dZ{l}^R}^*) - \delta m_l~, \\
\hat{\Sigma}_l^{SR} (k^2) &= {\Sigma}_l^{SR} (k^2)
- \frac{m_l}{2} (\dZ{l}^R + {\dZ{l}^L}^*) - \delta m_l~, \\[.3em]
\hat{\Sigma}_\nu^{L} (k^2) &= {\Sigma}_\nu^{L} (k^2)
+ \frac{1}{2} (\dZ{\nu}^{L} + {\dZ{\nu}^{L}}^*)~,
\end{align}
and ${\omega}_{\pm} = \frac{1}{2}(\id \pm \gamma_5)$
are the right- and left-handed projectors, respectively.
Note that
$\widetilde\re\hat{\Sigma}_{l}^{SR} (k^2) = (\widetilde\re\hat{\Sigma}_{l}^{SL} (k^2))^*$
holds due to ${\cal CPT}$ invariance.
\subsubsection{The neutrino/sneutrino sector}
\label{sec:sneutrino}
We follow closely the renormalization presented in
\citeres{SbotRen,Stop2decay}, slightly modified to be applicable to the
lepton/slepton sector.
\begin{itemize}
\item[(i)] The neutrino is defined on-shell (OS), yielding the one-loop
field renormalization
\begin{align}
\label{RedZnu}
\mathop{\mathrm{Re}} \dZ{\nu} &= - \widetilde\re \Sigma_\nu(0)~, \\
\label{ImdZnu}
\mathop{\mathrm{Im}} \dZ{\nu} &= 0~.
\end{align}
$\widetilde\re$ denotes the real part with respect to
contributions from the loop integral, but leaves the complex
couplings unaffected.
\item[(ii)]
The $\tilde{\nu_l}$ mass is defined OS,
\begin{align}
\widetilde\re\hat{\Sigma}_{\tilde{\nu_l}}(m_{\tilde{\nu}_l}^2) = 0~.
\end{align}
This yields for the sneutrino mass counter terms
\begin{align}
\dem_{\tilde{\nu}_l}^2 = \widetilde\re\Sigma_{\tilde{\nu_l}}(m_{\tilde{\nu}_l}^2)~.
\end{align}
\item[(iii)]
Due to $m_\nu \equiv 0$ no off-diagonal parameters in the sneutrino mass
matrix have to be renormalized.
\item[(iv)]
We now determine the $Z$~factors in the sneutrino sector in the OS scheme.
The diagonal $Z$~factor is determined such that the real part of the
residue of the propagator is set to unity,
\begin{align}
\label{residuumSneutOS}
\widetilde\re \hat{\Sigma}'_{\tilde{\nu_l}}(k^2)\big|_{k^2 = m_{\tilde{\nu}_l}^2} = 0~,
\end{align}
with $\Sigma'(k^2) \equiv \frac{\partial \Sigma(k^2)}{\partial k^2}$.
This condition fixes the real parts of the diagonal $Z$~factor to
\begin{align}
\mathop{\mathrm{Re}}\,\dZ{\tilde{\nu_l}} = - \widetilde\re \Sigma'_{\tilde{\nu_l}}(k^2)\big|_{k^2 = m_{\tilde{\nu}_l}^2}~,
\end{align}
which is correct, since the imaginary part of the diagonal
$Z$~factor does not contain any divergences and can be
(implicitly) set to zero,
\begin{align}
\mathop{\mathrm{Im}} \dZ{\tilde{\nu_l}} &= 0~.
\end{align}
\item[(v)]
Due to $m_\nu \equiv 0$ no off-diagonal field renormalization for the
sneutrinos has to be performed.
\end{itemize}
\subsubsection{The charged lepton/slepton sector}
We choose the slepton masses $m_{\tilde{l}_1}$, $m_{\tilde{l}_2}$ and the
lepton mass $m_l$ as independent parameters.
Since we also require an independent renormalization of the scalar
neutrino, an explicit restoration of the $SU(2)_L$ relation is
needed, achieved via a shift in the $M_{\tilde{l}_L}$ parameter entering
the $\tilde{l}$~mass matrix (see also \citeres{stopsbot_phi_als,dr2lA}).
Requiring the $SU(2)_L$ relation
to be valid at the loop level induces the following shift in
$M^2_{\tilde{l}_L}(\tilde{l})$
\begin{align}
M_{\tilde{l}_L}^2(\tilde{l}) = M_{\tilde{l}_L}^2(\tilde{\nu_l})
+ \delta M_{\tilde{l}_L}^2(\tilde{\nu_l}) - \delta M_{\tilde{l}_L}^2(\tilde{l})
\label{MSnushift}
\end{align}
with
\begin{align}
\delta M_{\tilde{l}_L}^2(\tilde{l}) &= |U_{\tilde{l}_{11}}|^2 \dem_{\tilde{l}_1}^2
+ |U_{\tilde{l}_{12}}|^2 \dem_{\tilde{l}_2}^2
- U_{\tilde{l}_{22}} U_{\tilde{l}_{12}}^* \delta Y_l
- U_{\tilde{l}_{12}} U_{\tilde{l}_{22}}^* \delta Y_l^* - 2 m_l \dem_l \nonumber \\
&\quad + M_Z^2\, c_{2\beta}\, Q_l\, \delta s_\mathrm{w}^2
- (I_l^3 - Q_l s_\mathrm{w}^2) (c_{2\beta}\, \delta M_Z^2 + M_Z^2\, \delta c_{2\beta})~,
\\[.5em]
\delta M_{\tilde{l}_L}^2(\tilde{\nu_l}) &= \dem_{\tilde{\nu}_l}^2
- I_\nu^3(c_{2\beta}\, \delta M_Z^2 + M_Z^2\, \delta c_{2\beta})~.
\label{MSnushift-detail}
\end{align}
This choice avoids problems concerning UV- and IR-finiteness as
discussed in detail in \citere{SbotRen}, but also leads to shifts
in both slepton masses, which are therefore slightly shifted away
from their on-shell values.
An additional shift in $M_{\tilde{l}_R}$ recovers at least one on-shell
slepton mass:
\begin{align}
M_{\tilde{l}_R}^2(\tilde{l}_i) = \frac{m_l^2\, |A_l^* - \mu \tan \beta|^2}
{M_{\tilde{l}_L}^2(\tilde{l}) + m_l^2
+ M_Z^2\, c_{2\beta} (I_l^3 - Q_l s_\mathrm{w}^2) - m_{\tilde{l}_i}^2}
- m_l^2 - M_Z^2\, c_{2\beta}\, Q_l\, s_\mathrm{w}^2+ m_{\tilde{l}_i}^2~.
\label{backshift}
\end{align}
The choice of slepton for this additional shift, which relates its mass to
the slepton parameter $M_{\tilde{l}_R}$, also represents a choice of scenario,
with the chosen slepton having a dominantly right-handed character.
A ``natural'' choice is to preserve the character of the sleptons in the
renormalization process.
With our choice of mass ordering, $m_{\tilde{l}_1} \le m_{\tilde{l}_2}$ (see
above), this suggests to recover
$m_{\tilde{l}_1}$ for $M_{\tilde{l}_L}^2 > M_{\tilde{l}_R}^2$, and to recover
$m_{\tilde{l}_2}$ for the other mass hierarchy. Consequently, for our numerical
choice given below in \refta{tab:para}, we insert $m_{\tilde{l}_2}$ into
\refeq{backshift} and recover its original value from the
re-diagonalization after applying this shift.
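For illustration, the backshift \refeq{backshift} can be coded directly (a
minimal sketch assuming real parameters; the mass counterterm shifts
themselves require the full loop calculation and enter via
$M_{\tilde{l}_L}^2(\tilde{l})$):
\begin{verbatim}
# Sketch of eq. (backshift), real parameters assumed: given the shifted
# M_{lL}^2 and the on-shell target mass (here m_sl2), return M_{lR}^2.
def MlR2_backshift(ML2_shift, msl, ml, Al, mu, tb,
                   MZ=91.1876, MW=80.399, I3l=-0.5, Ql=-1.0):
    sw2 = 1 - (MW/MZ)**2
    c2b = (1 - tb**2)/(1 + tb**2)
    num = ml**2 * abs(Al - mu*tb)**2       # m_l^2 |A_l^* - mu tan(beta)|^2
    den = ML2_shift + ml**2 + MZ**2*c2b*(I3l - Ql*sw2) - msl**2
    return num/den - ml**2 - MZ**2*c2b*Ql*sw2 + msl**2
# Re-diagonalizing the mass matrix built with this M_{lR}^2 (and the
# shifted M_{lL}^2) recovers msl as an eigenvalue by construction.
\end{verbatim}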
\bigskip
For the scalar lepton sector we can now employ a ``full'' on-shell
scheme, where the following renormalization conditions are imposed:
\begin{itemize}
\item[(i)] The lepton mass is defined on-shell, yielding the one-loop
counterterm $\delta m_l$:
\begin{align}\label{dmt}
\delta m_l &= \tfrac{1}{2} \widetilde\re \left\{
m_l \left[\Sigma_l^L (m_l^2) + \Sigma_l^R (m_l^2) \right]
+ \left[ \Sigma_l^{SL} (m_l^2) + \Sigma_l^{SR} (m_l^2) \right] \right\}~,
\end{align}
referring to the Lorentz decomposition of the self energy
${\hat{\Sigma}}_{l}(k)$, see \refeq{decomposition}.\\
The field renormalization constants are given by
\begin{align}
\label{RedZl}
\mathop{\mathrm{Re}} \dZ{l}^{L/R} &= - \widetilde\re \Big\{ {\Sigma}_l^{L/R} (m_l^2) \nonumber \\
&\qquad + m_l^2 \left[ {{\Sigma}_l^{L}}'(m_l^2) + {{\Sigma}_l^{R}}'(m_l^2) \right]
+ m_l \left[ {{\Sigma}_l^{SL}}'(m_l^2) + {{\Sigma}_l^{SR}}'(m_l^2) \right]
\Big\}~, \\
\label{ImdZl}
\mathop{\mathrm{Im}} \dZ{l}^{L/R} &= \pm \frac{i}{2\, m_l}
\widetilde\re \left\{ {\Sigma}_l^{SR}(m_l^2) - {\Sigma}_l^{SL}(m_l^2) \right\}
= \pm \frac{1}{m_l} \mathop{\mathrm{Im}} \left\{ \widetilde\re {\Sigma}_l^{SL}(m_l^2) \right\}~.
\end{align}
with
$\Sigma'(m^2) \equiv \frac{\partial \Sigma(k^2)}{\partial k^2}\big|_{k^2 = m^2}$.
\item[(ii)]
The slepton masses are also determined via on-shell
conditions~\cite{mhiggslong,hr}, yielding
\begin{align}
\label{dmsl}
\dem_{\tilde{l}_i}^2 &= \widetilde\re\Sigma_{\tilde{l}_{ii}}(m_{\tilde{l}_i}^2) \qquad (i = 1,2)~.
\end{align}
\item[(iii)]
The non-diagonal entry of \refeq{proc2} is fixed
as~\cite{mhiggsFDalbals,hr,SbotRen}
\begin{align}
\delta Y_l = \tfrac{1}{2} \widetilde\re
\big\{ \Sigma_{\tilde{l}_{12}}(m_{\tilde{l}_1}^2) + \Sigma_{\tilde{l}_{12}}(m_{\tilde{l}_2}^2) \big\}~,
\end{align}
which corresponds to two separate conditions in the case of a complex
$\delta Y_l$.
The counterterm of the trilinear coupling $\deA_l$ can be obtained from the
relation of \refeqs{proc1b} and~\eqref{dMsq12physpar},
\begin{align}
\delta A_l &= \frac{1}{m_l}\bigl[U_{\tilde{l}_{11}} U_{\tilde{l}_{12}}^*
(\delta m_{\tilde{l}_1}^2 - \delta m_{\tilde{l}_2}^2)
+ U_{\tilde{l}_{11}} U_{\tilde{l}_{22}}^{*} \delta Y_l^*
+ U_{\tilde{l}_{12}}^{*} U_{\tilde{l}_{21}} \delta Y_l
- (A_l - \mu^* \tan \beta)\, \dem_l \bigr] \nonumber \\
&\quad + (\delta\mu^* \tan \beta + \mu^* \delta\!\tan\!\beta\,)~.
\end{align}
So far undetermined are $\delta\!\tan\!\beta\,$ and $\delta\mu$, which are defined
via the Higgs sector and the chargino/neutralino sector, see
\citere{Stop2decay} for details.
\item[(iv)]
We now determine the $Z$~factors of the scalar lepton sector in
the OS scheme.
The diagonal $Z$~factors are determined such that the real parts of the
residua of the propagators are set to unity,
\begin{align}
\label{residuumSlepOS}
\widetilde\re \hat{\Sigma}'_{\tilde{l}_{ii}}(k^2) \big|_{k^2 = m_{\tilde{l}_i}^2} = 0 \qquad (i = 1,2)~.
\end{align}
This condition fixes the real parts of the diagonal $Z$~factors to
\begin{align}
\mathop{\mathrm{Re}}\,\dZ{\tilde{l}_{ii}} =
- \widetilde\re \Sigma'_{\tilde{l}_{ii}}(k^2)\big|_{k^2 = m_{\tilde{l}_i}^2} \qquad (i = 1,2)~,
\end{align}
which is correct, since the
imaginary parts of the diagonal $Z$~factors
do not contain any divergences and can be
(implicitly) set to zero,
\begin{align}
\mathop{\mathrm{Im}} \dZ{\tilde{l}_{ii}} &= 0 \qquad (i = 1,2)~.
\end{align}
\item[(v)]
For the non-diagonal $Z$~factors we impose the condition that for
on-shell sleptons no transition from one slepton to the other occurs,
\begin{align}
\widetilde\re\hat{\Sigma}_{\tilde{l}_{12}}(m_{\tilde{l}_i}^2) = 0~, \qquad
\widetilde\re\hat{\Sigma}_{\tilde{l}_{21}}(m_{\tilde{l}_i}^2) = 0 \qquad (i = 1,2)~.
\end{align}
This yields
\begin{align}
\dZ{\tilde{l}_{12}} = + 2 \frac{\widetilde\re\Sigma_{\tilde{l}_{12}}(m_{\tilde{l}_2}^2) - \delta Y_l}
{(m_{\tilde{l}_1}^2 - m_{\tilde{l}_2}^2)}~, \qquad
\dZ{\tilde{l}_{21}} = - 2 \frac{\widetilde\re\Sigma_{\tilde{l}_{21}}(m_{\tilde{l}_1}^2) - \delta Y_l^*}
{(m_{\tilde{l}_1}^2 - m_{\tilde{l}_2}^2)}~.
\label{dZslepoffdiagOS}
\end{align}
\end{itemize}
\bigskip
Alternative field renormalizations can be constructed if absorptive
parts of self-energy type corrections are included into them,
see \citere{Stop2decay} for more details.
These new combined factors ${\mathcal Z}$ are (in general) different
for incoming particles/outgoing antiparticles (unbarred) and
outgoing particles/incoming antiparticles (barred).
\begin{itemize}
\item[(a)]
The alternative diagonal slepton and sneutrino $Z$~factors read
\begin{align}
\delta{\mathcal Z}_{\tilde{l}} &= - \Sigma'_{\tilde{l}}(k^2)\big|_{k^2 = m_{\tilde{l}}^2}~,\hspace{-2.5cm}
& \delta \bar{\mathcal Z}_{\tilde{l}} &= \delta{\mathcal Z}_{\tilde{l}}~, \\
\delta{\mathcal Z}_{\tilde{\nu_l}} &= - \Sigma'_{\tilde{\nu_l}}(k^2)\big|_{k^2 = m_{\tilde{\nu}_l}^2}~,\hspace{-2.5cm}
& \delta \bar{\mathcal Z}_{\tilde{\nu_l}} &= \delta{\mathcal Z}_{\tilde{\nu_l}}~.
\end{align}
\item[(b)]
For the non-diagonal $Z$~factors we impose the condition that for
on-shell sleptons no transition from one slepton to the other occurs,
\begin{align}
\hat{\Sigma}_{\tilde{l}_{12}}(m_{\tilde{l}_i}^2) = 0~, \qquad
\hat{\Sigma}_{\tilde{l}_{21}}(m_{\tilde{l}_i}^2) = 0 \qquad (i = 1,2)~.
\end{align}
This yields the following alternative field renormalization constants,
\begin{align}
\delta{\mathcal Z}_{\tilde{l}_{12}}&=
+ 2 \frac{\Sigma_{\tilde{l}_{12}}(m_{\tilde{l}_2}^2) - \delta Y_l}{(m_{\tilde{l}_1}^2 - m_{\tilde{l}_2}^2)}~,
& \delta\bar{\mathcal Z}_{\tilde{l}_{12}} &=
+ 2 \frac{\Sigma_{\tilde{l}_{21}}(m_{\tilde{l}_2}^2) - \delta Y_l^*}{(m_{\tilde{l}_1}^2 - m_{\tilde{l}_2}^2)}~, \\
\delta{\mathcal Z}_{\tilde{l}_{21}} &=
- 2 \frac{\Sigma_{\tilde{l}_{21}}(m_{\tilde{l}_1}^2) - \delta Y_l^*}{(m_{\tilde{l}_1}^2 - m_{\tilde{l}_2}^2)}~,
& \delta\bar{\mathcal Z}_{\tilde{l}_{21}} &=
- 2 \frac{\Sigma_{\tilde{l}_{12}}(m_{\tilde{l}_1}^2) - \delta Y_l}{(m_{\tilde{l}_1}^2 - m_{\tilde{l}_2}^2)}~.
\end{align}
\end{itemize}
\subsection{The Higgs and gauge boson sector of the cMSSM}
\label{sec:higgs}
The two Higgs doublets of the cMSSM are decomposed in the following way,
\begin{align}
\label{eq:higgsdoublets}
{\cal H}_1 = \begin{pmatrix} H_{11} \\ H_{12} \end{pmatrix} &=
\begin{pmatrix} v_1 + \tfrac{1}{\sqrt{2}} (\phi_1-i \chi_1) \\
-\phi^-_1 \end{pmatrix}, \notag \\
{\cal H}_2 = \begin{pmatrix} H_{21} \\ H_{22} \end{pmatrix} &= e^{i \xi}
\begin{pmatrix} \phi^+_2 \\ v_2 + \tfrac{1}{\sqrt{2}} (\phi_2+i
\chi_2) \end{pmatrix}.
\end{align}
Besides the vacuum expectation values $v_1$ and $v_2$, in
\refeq{eq:higgsdoublets} a possible new phase $\xi$ between the two
Higgs doublets is introduced.
The Higgs potential $V_H$ can be written in powers of the Higgs fields,
\begin{align}
V_H &= \ldots + T_{\phi_1}\, \phi_1 +T_{\phi_2}\, \phi_2 +
T_{\chi_1}\, \chi_1 + T_{\chi_2}\, \chi_2 \nonumber \\
&\quad - \frac{1}{2} \begin{pmatrix} \phi_1,\phi_2,\chi_1,\chi_2
\end{pmatrix}
\matr{M}_{\phi\phi\chi\chi}
\begin{pmatrix} \phi_1 \\ \phi_2 \\ \chi_1 \\ \chi_2 \end{pmatrix} -
\begin{pmatrix} \phi^{+}_1,\phi^{+}_2 \end{pmatrix}
\matr{M}^{\top}_{\phi^\pm\phi^\pm}
\begin{pmatrix} \phi^{-}_1 \\ \phi^{-}_2 \end{pmatrix} + \ldots~,
\end{align}
where the coefficients of the linear terms are called tadpoles and
those of the bilinear terms are the mass matrices
$\matr{M}_{\phi\phi\chi\chi}$ and $\matr{M}_{\phi^\pm\phi^\pm}$.
After a rotation to the physical fields
one obtains
\begin{align}
\label{VHiggs}
V_H &= \ldots + T_{h}\, h + T_{H}\, H + T_{A}\, A \nonumber \\
&\quad - \frac{1}{2} \begin{pmatrix} h, H, A, G
\end{pmatrix}
\matr{M}_{hHAG}^{\rm diag}
\begin{pmatrix} h \\ H \\ A \\ G \end{pmatrix} -
\begin{pmatrix} H^{+}, G^{+} \end{pmatrix}
\matr{M}_{H^\pm G^\pm}^{\rm diag}
\begin{pmatrix} H^{-} \\ G^{-} \end{pmatrix} + \ldots~,
\end{align}
where the tree-level masses are denoted as
$m_h$, $m_H$, $m_A$, $m_G$, $M_{H^\pm}$, $m_{G^\pm}$.
With the help of a Peccei-Quinn
transformation~\cite{Peccei} $\mu$ and the complex soft SUSY-breaking
parameters in the Higgs sector can be
redefined~\cite{MSSMcomplphasen} such that the complex phases
vanish at tree-level.
Concerning the renormalization we follow the usual approach where the
gauge-fixing term does not receive a net contribution from the
renormalization transformations.
As input parameter we choose the mass of the charged Higgs boson, $M_{H^\pm}$.
All details can be found in \citeres{Stop2decay,mhcMSSMlong}%
\footnote{
Corresponding to the convention used in {\tt FeynArts}/{\tt FormCalc}, we exchanged in
the charged part the positive Higgs fields with the negative ones,
which is in contrast to \cite{mhcMSSMlong}.
As we keep the definition of the matrix
$\matr{M}_{\phi^\pm\phi^\pm}$ used in \cite{mhcMSSMlong} the
transposed matrix will appear in
the expression for $\matr{M}_{H^\pm G^\pm}^{\rm diag}$.
}
~(see also \citere{Demir} for the alternative effective potential
approach and \citere{mhcMSSMother} for the renormalization group improved
effective potential approach including Higgs pole mass effects).
Including higher-order corrections the three neutral Higgs bosons can
mix~\cite{mhiggsCPXgen,mhiggsCPXRG1,mhiggsCPXFD1,mhcMSSMlong},
\begin{align}
\left( h, H, A \right) \quad \longrightarrow \quad \left( h_1, h_2, h_3 \right)~,
\end{align}
where we define the loop corrected masses according to
\begin{align}
M_{\He} \le M_{\Hz} \le M_{\Hd}~.
\end{align}
A vertex with an external on-shell Higgs boson $h_{k}$ (${k} = 1,2,3$)
is obtained from the decay widths to the tree-level Higgs bosons via the
complex matrix $\matr{Z}$~\cite{mhcMSSMlong},
\begin{align}
\Gamma_{h_{k}} &=
[\matr{Z}]_{k1} \Gamma_h +
[\matr{Z}]_{k2} \Gamma_H +
[\matr{Z}]_{k3} \Gamma_A + \ldots ~,
\label{eq:zfactors123}
\end{align}
where the ellipsis represents contributions from the mixing with the
Goldstone boson and the $Z$~boson, see \refse{sec:calc}.
It should be noted that the `rotation' with $\matr{Z}$ is not a
unitary transformation, see \citere{mhcMSSMlong} for details.
Also the charged Higgs boson appearing as an external particle in a
chargino decay has to obey the proper on-shell conditions. This leads to
an extra $Z$~factor,
\begin{align}
\hat Z_{H^-H^+} =
\left[ 1 + \mathop{\mathrm{Re}} \hat{\Sigma}'_{H^-H^+}(p^2)\big|_{p^2 = M_{H^\pm}^2} \right]^{-1}~.
\end{align}
Expanding to one-loop order yields the $Z$~factor that has to be applied
to the process with external charged Higgs boson,
\begin{align}
\sqrt{\hat Z_{H^-H^+}} = 1 + \frac{1}{2} \delta\hat Z_{H^-H^+}
\end{align}
with
\begin{align}
\label{dhZHpHm}
\delta\hat Z_{H^-H^+} = - \mathop{\mathrm{Re}}\hat{\Sigma}'_{H^-H^+}(p^2)\big|_{p^2 = M_{H^\pm}^2} =
- \mathop{\mathrm{Re}}\Sigma'_{H^-H^+}(M_{H^\pm}^2) - \dZ{H^-H^+}~.
\end{align}
As for the neutral Higgs bosons, there are contributions from the
mixing with the Goldstone boson and the $W$~boson.
This $Z$~factor is by definition UV-finite. However, it contains
IR-divergences that cancel with the soft photon contributions from
the loop diagrams, see \refse{sec:calc}.
For the renormalization of $\tan \beta$ and the Higgs field
renormalization the \ensuremath{\overline{\mathrm{DR}}}\ scheme is
chosen~\cite{mhcMSSMlong,Stop2decay}. This leads to the introduction
of the scale $\mu_R$, which will be fixed later to the
mass of the decaying particle.
\subsection{The chargino/neutralino sector of the cMSSM}
\label{sec:chaneu}
The mass eigenstates of the charginos can be determined from the matrix
\begin{align}
\matr{X} =
\begin{pmatrix}
M_2 & \sqrt{2} \sin \beta\, M_W \\
\sqrt{2} \cos \beta\, M_W & \mu
\end{pmatrix}.
\end{align}
In addition to the higgsino mass parameter $\mu$ it
contains the soft breaking term $M_2$,
which can also be complex in the cMSSM.
The rotation to the chargino mass eigenstates is done by transforming
the original wino and higgsino fields with the help of two unitary 2$\times$2
matrices $\matr{U}$ and $\matr{V}$,
\begin{align}
\label{eq:charginotransform}
\tilde{\chi}^-_i =
\begin{pmatrix} \psi^L_i
\\ \overline{\psi}^R_i \end{pmatrix}
\quad \text{with} \quad \psi^L_{i} = U_{ij} \begin{pmatrix} \tilde{W}^-
\\ \tilde{H}^-_1 \end{pmatrix}_{j} \quad \text{and} \quad
\psi^R_{i} = V_{ij} \begin{pmatrix} \tilde{W}^+
\\ \tilde{H}^+_2 \end{pmatrix}_{j}~,
\end{align}
where the $i$th mass eigenstate can be expressed in terms of either the Weyl
spinors $\psi^L_i$ and $\psi^R_i$ or the Dirac spinor $\tilde{\chi}^-_i$.
These rotations lead to the diagonal mass matrix
\begin{align}
\matr{M}_{\cham{}} =
\matr{V}^* \, \matr{X}^{\top} \, \matr{U}^{\dagger} =
\matr{diag}(m_{\tilde{\chi}^\pm_1}, m_{\tilde{\chi}^\pm_2})~.
\end{align}
{}From this relation, it becomes clear that the mass ordered
chargino masses $\mcha{1} < \mcha{2}$ can be determined as the (real and
positive) singular values of $\matr{X}$,
\begin{align}
\mcha{1,2}^2 &= \frac{1}{2} \left( |M_2|^2 + |\mu|^2\right) + M_W^2 \\[.2em]
& \mp \frac{1}{2}\sqrt{\left( |M_2|^2 - |\mu|^2 \right)^2
+ 4 M_W^2 \left( |M_2|^2 +|\mu|^2
+ 2 |M_2| |\mu| s_{2 \beta} \cos(\varphi_{\mu} + \varphi_{M_2})
+ M_W^2 c_{2 \beta}^2 \right) }~. \nonumber
\label{eq:mcha}
\end{align}
The singular value decomposition of $\matr{X}$
also yields results for $\matr{U}$ and~$\matr{V}$.
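Numerically, this amounts to a few lines (an illustrative sketch only; with
the derived ${\cal S}_>$ parameters of \refse{sec:numeval} the input values
$\mcha{1} = 350 \,\, \mathrm{GeV}$, $\mcha{2} = 600 \,\, \mathrm{GeV}$ are
recovered up to the rounding of \refta{tab:chaneu}):
\begin{verbatim}
import numpy as np

# Chargino masses as singular values of X; M2, mu taken real here.
MW, tb = 80.399, 20.0
M2, mu = 362.1, 581.8                      # derived values of S_>
sb, cb = tb/np.sqrt(1 + tb**2), 1/np.sqrt(1 + tb**2)

X = np.array([[M2, np.sqrt(2)*sb*MW],
              [np.sqrt(2)*cb*MW, mu]])
u, s, vh = np.linalg.svd(X)                # X = u @ diag(s) @ vh
mcha2, mcha1 = s                           # s is sorted in descending order
# U = u.T and V = vh realize V^* X^T U^dagger = diag(s) in the
# conventions above (after reordering to mcha1 <= mcha2).
print(mcha1, mcha2)                        # ~350.0, ~600.0
\end{verbatim}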
A similar procedure is used for the determination of the neutralino masses and
mixing matrix, which can both be calculated from the mass matrix
\begin{align}
\matr{Y} =
\begin{pmatrix}
M_1 & 0 & -M_Z \, s_\mathrm{w} \cos \beta\,
& M_Z \, s_\mathrm{w} \sin \beta\, \\
0 & M_2 & \quad M_Z \, c_\mathrm{w} \cos \beta\,
& -M_Z \, c_\mathrm{w} \sin \beta\, \\
-M_Z \, s_\mathrm{w} \cos \beta\, & M_Z \, c_\mathrm{w} \cos \beta\, & 0
& -\mu \\
\quad M_Z \, s_\mathrm{w} \sin \beta\, & -M_Z \, c_\mathrm{w} \sin \beta\, & -\mu & 0
\end{pmatrix}.
\end{align}
This symmetric matrix contains the additional complex soft-breaking
parameter $M_1$.
The diagonalization of the matrix
is achieved by a transformation starting from the original
bino/wino/higgsino basis,
\begin{align}
\tilde{\chi}^0_i = \begin{pmatrix} \psi^0_i \\ \overline{\psi}^0_i
\end{pmatrix} \qquad \text{with} \qquad
\psi^0_{i} = N_{ij}\,
(\tilde{B}^0, \tilde{W}^0, \tilde{H}^0_1,\tilde{H}^0_2)_{j}^{\top}~,
\\
\matr{M}_{\neu{}} = \matr{N}^* \, \matr{Y} \, \matr{N}^{\dagger} =
\matr{diag}(\mneu{1}, \mneu{2}, \mneu{3}, \mneu{4})~,
\end{align}
where $\psi^0_i$ denotes the two component Weyl spinor and $\tilde{\chi}^0_i$
the four component Majorana spinor of the $i$th neutralino field.
The unitary 4$\times$4 matrix $\matr{N}$ and the physical neutralino
masses again result
from a numerical singular value decomposition of $\matr{Y}$.
The symmetry of $\matr{Y}$ allows one to use a single
matrix $\matr{N}$ for its diagonalization (a Takagi
factorization), in contrast to the chargino
case shown above.
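As an illustration (a minimal sketch for real parameters, i.e.\ vanishing
phases; the general complex case requires a proper Takagi factorization),
the tree-level neutralino masses for the derived ${\cal S}_>$ parameters of
\refse{sec:numeval} follow from an eigendecomposition, where a factor $i$
in a row of $\matr{N}$ renders a negative eigenvalue positive:
\begin{verbatim}
import numpy as np

MZ, MW, tb = 91.1876, 80.399, 20.0
sw = np.sqrt(1 - (MW/MZ)**2); cw = MW/MZ
sb, cb = tb/np.sqrt(1 + tb**2), 1/np.sqrt(1 + tb**2)
M1, M2, mu = 172.8, 362.1, 581.8           # derived values of S_>

Y = np.array([[M1,         0,         -MZ*sw*cb,  MZ*sw*sb],
              [0,          M2,         MZ*cw*cb, -MZ*cw*sb],
              [-MZ*sw*cb,  MZ*cw*cb,   0,        -mu      ],
              [ MZ*sw*sb, -MZ*cw*sb,  -mu,        0       ]])
eps, O = np.linalg.eigh(Y)                 # O.T @ Y @ O = diag(eps)
idx = np.argsort(np.abs(eps))
eps, O = eps[idx], O[:, idx]               # mass ordered
N = np.diag(np.where(eps < 0, 1j, 1.0)) @ O.T
# check: N.conj() @ Y @ N.conj().T is approximately diag(abs(eps))
print(np.abs(eps))                         # ~(171, 350, 586, 599) GeV
\end{verbatim}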
Concerning the renormalization we use the results of
\citeres{Stop2decay,dissAF,diplTF,dissTF}.
This includes the contributions from absorptive parts of self-energy
type corrections into `combined' ${\mathcal Z}$~factors (which in general
can be different for incoming and outgoing particles). The explicit
expressions can be found in the Appendix of \citere{Stop2decay}.
Since in our renormalization the chargino masses $\mcha{1}, \mcha{2}$
and the lightest neutralino
mass $\mneu{1}$ have been chosen as independent parameters, the one-loop masses
of the heavier neutralinos
are obtained from the tree-level ones with the shifts
\begin{align}
\Delta \mneu{i} = -\mathop{\mathrm{Re}} \left\{ \mneu{i} \hat\Sigma_{\neu{i}}^{L}(\mneu{i}^2)
+ \hat\Sigma_{\neu{i}}^{SL}(\mneu{i}^2) \right\}
\qquad (i = 2,3,4)~,
\label{Deltamneu}
\end{align}
where the renormalized self energies of the neutralino have been decomposed
into their left/right-handed and scalar left/right-handed parts as in
\refeq{decomposition}.
$\Delta \mneu{1} = 0$ is just the real part of one of
our renormalization conditions.
Special care has to be taken in the regions of the cMSSM parameter space where
the gaugino-higgsino mixing in the chargino sector is maximal,
i.e.\ where $\mu \approx M_2$.
Here $\delta M_2$ (see Eq.~(180) in \cite{Stop2decay})
and $\delta \mu$ (see Eq.~(181) in \cite{Stop2decay}) diverge as
$(U^*_{11}U^*_{22}V^*_{11}V^*_{22} - U^*_{12}U^*_{21}V^*_{12}V^*_{21})^{-1}$
and the loop calculation does not yield a reliable result.
An analysis of various renormalization schemes was recently
published in \citere{onshellCNmasses}, where this kind of divergence
was discussed.%
\footnote{Similar divergences appearing in the on-shell
renormalization in the sbottom sector, occurring for ``maximal sbottom
mixing'', have been observed and discussed in
\citeres{SbotRen,Stop2decay}.}%
~In \citere{onshellCNmasses} it was furthermore emphasized that, in the
case of the renormalization of two chargino masses and one neutralino
mass, always the most bino-like neutralino has to be renormalized in order
to find a numerically stable result (see also \citere{BaroII}).
In our numerical set-up, see
\refse{sec:numeval}, the lightest neutralino is nearly always rather
bino-like. If required, however, it would be trivial to change our
prescription from the lightest neutralino to any other neutralino.
As will be outlined in \refse{sec:parameter}, we choose the two
chargino masses as independent numerical input, which also fixes their
mass difference.
In the case of maximal mixing in the chargino sector this mass
difference tends to reach a minimum, where the counterterms depend on
the variation of this difference.
Consequently, our results will be less reliable where this minimum is
reached, and we will exclude a small range of parameters, about
$5 \,\, \mathrm{GeV}$ in mass, from our analysis, see below.
In \citere{onshellCNmasses} it was also suggested that the
numerically most stable result is obtained via the renormalization of
one chargino and two neutralinos.
This choice is well suited for tree-level masses.
However, in our approach to calculate chargino decays, including their
renormalization, this choice leads to IR divergences,
since an electrically charged particle (the chargino)
changes its mass by the renormalization procedure
via an analogous shift to \refeq{Deltamneu}.
Using the shifted mass for the external particle, but the
tree-level mass for internal particles results in the IR divergence.
On the other hand,
inserting the shifted chargino mass everywhere yields a UV divergence,
see the corresponding discussion in
\citere{Stop2decay}.
Consequently, we choose to stick to our choice of imposing
on-shell conditions for the two charginos and one neutralino.
\section{Calculation of loop diagrams}
\label{sec:calc}
In this section we give some details about the calculation of the
higher-order corrections to the chargino decays. Sample diagrams are
shown in \reffis{fig:fdCCh} -- \ref{fig:fdCSnl}.
We only show the diagrams for the $\cham{i}$~decays; the same
set of diagrams exists for the decays of~$\chap{i}$.
Not shown are the diagrams for real (hard or soft) photon
radiation. They are obtained from the corresponding tree-level diagrams
by attaching a photon to the electrically charged
particles. The internal generically depicted particles in
\reffis{fig:fdCCh} -- \ref{fig:fdCSnl} are labeled as follows:
$F$ can be a SM fermion, chargino or neutralino,
$S$ can be a sfermion or a Higgs, $V$ can be a $\gamma$,
$Z$ or $W^\pm$.
Internally appearing Higgs bosons do not receive higher-order
corrections in their masses or couplings, which would correspond to
effects beyond one-loop. Furthermore, we found that using loop corrected
Higgs boson masses and couplings for the internal Higgs bosons leads to
a divergent result.
For external Higgs bosons, as described in
\refse{sec:higgs}, the appropriate $\matr{Z}$~factors are applied.
Not shown are the diagrams with a gauge boson (Goldstone)--Higgs
self-energy contribution on the external Higgs boson leg. They appear in
the decay $\cham{2} \to \cham{1} h_{k}$ (${k} = 1, 2, 3$),
\reffi{fig:fdCCh},
with a $Z/G$--$h_{k}$ transition, and in
the decay {$\cham{i} \to \neu{j} H^-$}, ($i = 1,2, \; j = 1,2,3,4$),
\reffi{fig:fdCNH}, with a $W^-$/$G^-$--$H^-$ transition.%
\footnote{From a technical point of view, the $W^-$/$G^-$--$H^-$
transitions have been absorbed into the respective counterterms,
while the $Z/G$--$h_k$ transition has been calculated explicitly.}%
On the other hand, the self-energy corrections for the chargino decay to
a chargino/neutralino and a gauge boson, $\cham{2} \to \cham{1} Z$ or
$\cham{i} \to \neu{j} W^-$ ($i = 1,2, \; j = 1,2,3,4$), vanish on mass shell,
i.e.\ for $p^2 = M_Z^2$ ($p^2 = M_W^2$) due to $\varepsilon \cdot p = 0$,
where $p$ denotes the external momentum and $\varepsilon$ the polarization
vector of the gauge boson.
In the figures we have furthermore omitted, in general, diagrams of
self-energy type on
external (on-shell) particles. While the real part of
such a loop
does not contribute to the decay width due to the on-shell
renormalization, the imaginary part, multiplied by the imaginary part
of a complex coupling (such as $A_l$), can give a real contribution to
the decay width. While these diagrams are not shown explicitly, they
have been taken into account in the analytical and numerical
evaluation.
The impact of those contributions will be discussed in
\refse{sec:numeval}.
The diagrams and corresponding amplitudes have been obtained with
{\tt FeynArts}~\cite{feynarts}. The model file, including the MSSM counter
terms, is discussed in more detail in \citere{Stop2decay}.
The further evaluation has been performed with
{\tt FormCalc}\ (and {\tt LoopTools})~\cite{formcalc}.
As regularization scheme for the UV-divergences we
have used constrained differential renormalization~\cite{cdr},
which has been shown to be equivalent to
dimensional reduction~\cite{dred} at the one-loop\ level~\cite{formcalc}.
Thus the employed regularization preserves SUSY~\cite{dredDS,dredDS2}.
All UV-divergences cancel in the final result.
The IR-divergences from diagrams with an internal photon have
to cancel with the ones from the corresponding real soft radiation,
where we have included the soft photon contribution
following the description given in \citere{denner}.
The IR-divergences arising from the diagrams involving a $\gamma$
are regularized by introducing a finite photon mass,
$\lambda$.
All IR-divergences, i.e.\ all divergences in the limit
$\lambda \to 0$, cancel to all orders
once virtual and real diagrams for one decay
channel are added.%
\footnote{
The only exception are the decays $\cham{i} \to \neu{2,3,4} W^-$.
The shift to the neutralino on-shell masses via \refeq{Deltamneu}
results in an IR divergence at the two-loop level, i.e.\ here we find a
cancellation of the divergences ``only'' at the one-loop level, as
required for our one-loop calculation.
The remaining IR divergences could be eliminated by a symmetry restoring
counterterm in the $\cha{i}\neu{2,3,4}W^\mp$ vertex, similar to the
evaluation of the decay $\tilde{t}_2 \to \tilde{b}_{1,2} W^+$ in \citere{Stop2decay}.
}%
~We have furthermore checked that our result does not depend on $\Delta E$
defining the energy cut that separates the soft from the hard
radiation. Our numerical results have been obtained for
$\Delta E = 10^{-5} \times \mcha{i}$
for all channels except for $\DecayCmlSn{2}{e}$,
for which $\Delta E = 10^{-3} \times \mcha{2}$ has been used.%
\footnote{
The larger cut is necessary to obtain a better convergence of the
integration over the three body phase space.
The contribution from nearly collinear photons
(along the direction of the electron) leads to numerical instabilities
in the integration.
This problem is more acute for the heavier chargino decay, with a larger
phase space and thus a larger electron energy.
}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{cha2cha1h}
\caption{
Generic Feynman diagrams for the decay
$\DecayCmCh{k}$ ($k = 1,2,3$).
$F$ can be a SM fermion, chargino or neutralino, $S$ can be a
sfermion or a Higgs boson, $V$ can be a $\gamma$, $Z$ or $W^\pm$.
Not shown are the diagrams with a $Z$--$h_{k}$ or $G$--$h_{k}$
transition contribution on the external Higgs boson leg.
}
\label{fig:fdCCh}
\end{center}
\end{figure}
\begin{figure}[ht!]
\vspace{2em}
\begin{center}
\includegraphics[width=0.90\textwidth]{cha2cha1Z}
\caption{
Generic Feynman diagrams for the decay
$\DecayCmCZ$.
$F$ can be a SM fermion, chargino or neutralino, $S$ can be a
sfermion or a Higgs boson, $V$ can be a $\gamma$, $Z$ or $W^\pm$.
}
\label{fig:fdCCZ}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{cha2neu1H}
\caption{
Generic Feynman diagrams for the decay
$\DecayCmNH{i}{j}$ ($i = 1,2, \; j = 1,2,3,4$).
$F$ can be a SM fermion, chargino or neutralino, $S$ can be a
sfermion or a Higgs boson, $V$ can be a $\gamma$, $Z$ or $W^\pm$.
Not shown are the diagrams with a $W^-$--$H^-$ or
$G^-$--$H^-$ transition
contribution on the external Higgs boson leg.
}
\label{fig:fdCNH}
\end{center}
\vspace{-2em}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{cha2neu1W}
\caption{
Generic Feynman diagrams for the decay
$\DecayCmNW{i}{j}$ ($i = 1,2, \; j = 1,2,3,4$).
$F$ can be a SM fermion, chargino or neutralino, $S$ can be a
sfermion or a Higgs boson, $V$ can be a $\gamma$, $Z$ or $W^\pm$.
}
\label{fig:fdCNW}
\end{center}
\vspace{-2em}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{cha2nslep}
\caption{
Generic Feynman diagrams for the decay
$\DecayCmnSl{i}{l}{k}$ ($i = 1, 2, \; l = e, \mu, \tau, \; k = 1,2$).
$F$ can be a SM fermion, chargino or neutralino, $S$ can be a
sfermion or a Higgs boson, $V$ can be a $\gamma$, $Z$ or $W^\pm$.
}
\label{fig:fdCSln}
\end{center}
\vspace{-2em}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{cha2lepsn}
\caption{
Generic Feynman diagrams for the decay
$\DecayCmlSn{i}{l}$ ($i = 1, 2, \; l = e, \mu, \tau$).
$F$ can be a SM fermion, chargino or neutralino, $S$ can be a
sfermion or a Higgs boson, $V$ can be a $\gamma$, $Z$ or $W^\pm$.
}
\label{fig:fdCSnl}
\end{center}
\vspace{-2em}
\end{figure}
\clearpage
\subsubsection*{Tree-level results}
For completeness we show here also the formulas that have been
used to calculate the tree-level decay widths:
\begin{align}
\label{CNHtree}
\Ga^{\rm tree}(\DecayCmNH{i}{j}) =&
\left[
\left( |C(\cham{i},\neu{j},H^+)_L|^2 + |C(\cham{i},\neu{j},H^+)_R|^2 \right)
(\mcha{i}^2+\mneu{j}^2-M_{H^\pm}^2)
\right.
\nonumber \\ &
\left.
+ 4 \mathop{\mathrm{Re}} \left\{ C(\cham{i},\neu{j},H^+)_L^* C(\cham{i},\neu{j},H^+)_R \right\}
\mcha{i}\mneu{j}
\right]
\nonumber \\ &
\times \frac{\lambda^{1/2}(\mcha{i}^2,\mneu{j}^2,M_{H^\pm}^2)}{32\pi \mcha{i}^3}
\quad (i = 1,2, \; j = 1,2,3,4)~, \\
\label{CNWtree}
\Ga^{\rm tree}(\DecayCmNW{i}{j}) =&
\left[
\left( |C(\cham{i},\neu{j},W^{+})_L|^2 + |C(\cham{i},\neu{j},W^{+})_R|^2 \right)
\right.
\nonumber \\ &
\left.
\times
\left( \mcha{i}^2+\mneu{j}^2-2M_W^2+\frac{(\mcha{i}^2-\mneu{j}^2)^2}{M_W^2} \right)
\right.
\nonumber \\ &
\left.
- 12\mathop{\mathrm{Re}} \left\{ C(\cham{i},\neu{j},W^{+})_L^* C(\cham{i},\neu{j},W^{+})_R \right\}
\mcha{i}\mneu{j}
\right]
\nonumber \\ &
\times \frac{\lambda^{1/2}(\mcha{i}^2,\mneu{j}^2,M_W^2)}{32\pi \mcha{i}^3}
\quad (i = 1,2, \; j = 1,2,3,4)~, \\
\label{CCHtree}
\Ga^{\rm tree}(\DecayCmCh{k}) =&
\left[
\left( |C(\cham{2},\chap{1},h_k)_L|^2 + |C(\cham{2},\chap{1},h_k)_R|^2 \right)
(\mcha{2}^2+\mcha{1}^2-m_{h_k}^2)
\right.
\nonumber \\ &
\left.
+ 4\mathop{\mathrm{Re}} \left\{ C(\cham{2},\chap{1},h_k)_L^* C(\cham{2},\chap{1},h_k)_R \right\}
\mcha{2}\mcha{1}
\right]
\nonumber \\ &
\times \frac{\lambda^{1/2}(\mcha{2}^2,\mcha{1}^2,m_{h_k}^2)}{32\pi \mcha{2}^3}
\quad (k = 1,2,3)~, \\
\label{CCZtree}
\Ga^{\rm tree}(\DecayCmCZ) =&
\left[
\left( |C(\cham{2},\chap{1},Z)_L|^2 + |C(\cham{2},\chap{1},Z)_R|^2 \right)
\right.
\nonumber \\ &
\left.
\times
\left( \mcha{2}^2+\mcha{1}^2-2M_Z^2+\frac{(\mcha{2}^2-\mcha{1}^2)^2}{M_Z^2} \right)
\right.
\nonumber \\ &
\left.
- 12\mathop{\mathrm{Re}} \left\{ C(\cham{2},\chap{1},Z)_L^* C(\cham{2},\chap{1},Z)_R \right\}
\mcha{2}\mcha{1}
\right]
\nonumber \\ &
\times \frac{\lambda^{1/2}(\mcha{2}^2,\mcha{1}^2,M_Z^2)}{32\pi \mcha{2}^3}
~, \\
\label{CSlntree}
\Ga^{\rm tree}(\DecayCmnSl{i}{l}{k}) =&
|C(\nu_l,\cham{i},\tilde{l}_k^\dagger)_L|^2
(\mcha{i}^2 - m_{\tilde{l}_k}^2)
\nonumber \\ &
\times \frac{\lambda^{1/2}(\mcha{i}^2,0,m_{\tilde{l}_k}^2)}{32\pi \mcha{i}^3}
\quad (i = 1,2, \; l = e, \mu, \tau, \; k = 1,2)~, \\
\Ga^{\rm tree}(\DecayCmlSn{i}{l}) =&
\left[
\left( |C(\cham{i},\bar l,\tilde{\nu_l})_L|^2 + |C(\cham{i},\bar l,\tilde{\nu_l})_R|^2 \right)
(\mcha{i}^2 + m_l^2 - m_{\tilde{\nu}_l}^2)
\right.
\nonumber \\ &
\left.
+ 4\mathop{\mathrm{Re}} \left\{ C(\cham{i},\bar l,\tilde{\nu_l})_L^* C(\cham{i},\bar l,\tilde{\nu_l})_R \right\} \mcha{i} m_l
\right]
\nonumber \\ &
\times \frac{\lambda^{1/2}(\mcha{i}^2,m_l^2,m_{\tilde{\nu}_l}^2)}{32\pi \mcha{i}^3}
\quad (i = 1,2, \; l = e, \mu, \tau)~,
\end{align}
where $\lambda(x,y,z) = (x - y - z)^2 - 4yz$ and the couplings
$C(a, b, c)$ can be found in the {\tt FeynArts}~model files~\cite{feynarts-mf}.
$C(a, b, c)_{L,R}$ denote the part of the coupling which
is proportional to $\omega_\mp = \ed{2}(\id \mp \gamma_5)$.
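To make the generic structure explicit, the following sketch evaluates
\refeq{CNHtree} for given couplings (the values below are hypothetical; in
our calculation $C(\cham{i},\neu{j},H^+)_{L,R}$ are taken from the
{\tt FeynArts} model files):
\begin{verbatim}
import numpy as np

def kallen(x, y, z):
    # lambda(x,y,z) = (x - y - z)^2 - 4 y z
    return (x - y - z)**2 - 4*y*z

def gamma_cha_to_neu_Hm(mcha, mneu, MHp, CL, CR):
    # Tree-level width of cha_i -> neu_j H^-, eq. (CNHtree)
    if mcha < mneu + MHp:                  # channel kinematically closed
        return 0.0
    amp2 = ((abs(CL)**2 + abs(CR)**2)*(mcha**2 + mneu**2 - MHp**2)
            + 4*np.real(np.conj(CL)*CR)*mcha*mneu)
    return amp2*np.sqrt(kallen(mcha**2, mneu**2, MHp**2))/(32*np.pi*mcha**3)

# hypothetical O(0.1) couplings, purely for illustration:
print(gamma_cha_to_neu_Hm(600.0, 171.4, 160.0, 0.3 + 0.1j, 0.05))
\end{verbatim}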
\subsubsection*{Comparison with other calculations}
We have performed a detailed comparison with \citere{liebler} for
the decay $\DecayCmNW{2}{1}$, where the chargino/neutralino sector
is renormalized differently from our prescription.
After a correction of the charge
renormalization in \citere{liebler} we found good agreement at the
level expected for different renormalization schemes in the
chargino/neutralino sector, see \citeres{Stop2decay,liebler} for details.
\section{Numerical analysis}
\label{sec:numeval}
In this section we present a numerical analysis of all decay
channels ($\DecayCmxy{i}$).
We restrict ourselves here to the decay of the charginos with
{\em negative} charge. Small differences with respect to $\chap{i}$
decays occur for complex parameters~\cite{ChaDecCPVYang,ChaDecCPVEberl}.
These effects are beyond
the scope of this paper, and we have checked that for the parameter
choices made here they are small.
In the various figures we show the decay width and its
relative correction at the tree-level (``tree'') and at the one-loop
level (``full''),
\begin{align}
\Ga^{\rm tree} &\equiv \Ga^{\rm tree}(\DecayCmxy{i})~, \\
\Ga^{\rm full} &\equiv \Gamma^{\rm full}(\DecayCmxy{i})~, \\
\Delta\Gamma/\Gamma &\equiv \frac{\Ga^{\rm full} - \Ga^{\rm tree}}{\Ga^{\rm tree}}~.
\end{align}
The total decay width is defined as the sum of all 38 decay widths,
\begin{align}
\Gamma_{\rm tot}^{\rm tree} &\equiv \sum_{{\rm xy}} \Ga^{\rm tree}(\DecayCmxy{i})~, \\
\Gamma_{\rm tot}^{\rm full} &\equiv \sum_{{\rm xy}} \Ga^{\rm full}(\DecayCmxy{i})~.
\end{align}
We also show the absolute and relative changes of the branching ratios,
\begin{align}
{\rm BR}^{\rm tree} &\equiv
\frac{\Gamma^{\rm tree}(\DecayCmxy{i})}{\Gamma_{\rm tot}^{\rm tree}}~, \\
{\rm BR}^{\rm full} &\equiv
\frac{\Gamma^{\rm full}(\DecayCmxy{i})}{\Gamma_{\rm tot}^{\rm full}}~, \\
\Delta{\rm BR}/{\rm BR} &\equiv \frac{{\rm BR}^{\rm full} - {\rm BR}^{\rm tree}}{{\rm BR}^{\rm full}}~.
\label{brrel}
\end{align}
The last quantity is crucial to analyze the impact of the one-loop
corrections on the phenomenology at the LHC and the ILC,
see below.
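Note that \refeq{brrel} is normalized to the one-loop BR. A trivial
numerical illustration of these definitions (with placeholder widths, not
our results):
\begin{verbatim}
import numpy as np

g_tree = np.array([0.40, 0.25, 0.10])      # placeholder widths [GeV]
g_full = np.array([0.44, 0.24, 0.12])
br_tree = g_tree/g_tree.sum()              # BR^tree
br_full = g_full/g_full.sum()              # BR^full
print((br_full - br_tree)/br_full)         # Delta BR / BR per channel
\end{verbatim}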
\subsection{Parameter settings}
\label{sec:parameter}
The renormalization scale, $\mu_R$, has been set to the mass of the
decaying particle, i.e.\ $\mu_R~=~\mcha{i}$.
The SM parameters are chosen as follows, see also \cite{pdg}:
\begin{itemize}
\item Fermion masses\index{leptonmasses}:
\begin{align}
m_e &= 0.51099891\,\, \mathrm{MeV}~, & m_{\nu_e} &= 0\,\, \mathrm{MeV}~, \nonumber \\
m_\mu &= 105.658367\,\, \mathrm{MeV}~, & m_{\nu_{\mu}} &= 0\,\, \mathrm{MeV}~, \nonumber \\
m_\tau &= 1776.84\,\, \mathrm{MeV}~, & m_{\nu_{\tau}} &= 0\,\, \mathrm{MeV}~, \nonumber \\
m_u &= 53.8\,\, \mathrm{MeV}~, & m_d &= 53.8\,\, \mathrm{MeV}~, \nonumber \\
m_c &= 1.27\,\, \mathrm{GeV}~, & m_s &= 104\,\, \mathrm{MeV}~, \nonumber \\
m_t &= 172.0\,\, \mathrm{GeV}~, & m_b(m_b) &= 4.25\,\, \mathrm{GeV}~.
\end{align}
$m_u$ and $m_d$ are effective parameters, calculated through the hadronic
contributions to:
\begin{align}
\Delta\alpha_{\text{had}}^{(5)}(M_Z) &=
\frac{\alpha}{\pi}\sum_{f = u,c,d,s,b}
Q_f^2 \Bigl(\ln\frac{M_Z^2}{m_f^2} - \frac 53\Bigr)~.
\end{align}
\item The CKM matrix has been set to unity.
\item Gauge boson masses:
\begin{align}
M_Z & = 91.1876 \,\, \mathrm{GeV}~, \quad M_W = 80.399 \,\, \mathrm{GeV}~,
\end{align}
\item Coupling constants:
\begin{align}
\alpha & = \frac{e^2}{4\pi} = 1/137.035999679 ~.
\end{align}
\end{itemize}
The Higgs sector quantities (masses, mixings, etc.) have been
evaluated using {\tt FeynHiggs} version\,2.7.4
\cite{feynhiggs,mhiggslong,mhiggsAEC,mhcMSSMlong}, where we used
the running top mass for the evaluation.
When performing an analysis involving complex parameters
it should be noted that the results for physical observables are
affected only
by certain combinations of the complex phases of the
parameters $\mu$, the trilinear couplings $A_t$, $A_b$, $A_\tau$, \ldots, and
the gaugino mass parameters $M_1$, $M_2$,
$M_3$~\cite{MSSMcomplphasen,SUSYphases}.
It is possible, for instance, to rotate the phase $\varphi_{M_2}$ away.
Experimental constraints on the (combinations of) complex phases
arise in particular from their contributions to electric dipole moments of
heavy quarks~\cite{EDMDoink}, of the electron and
the neutron, see \citeres{EDMrev2,EDMPilaftsis} and references therein,
and of the deuteron~\cite{EDMRitz}.
A recent review can be found in \citere{EDMrev3}.
Using the convention that $\varphi_{M_2} = 0$ (i.e.\ $M_2$ real
and positive) as done in this paper, in particular
the phase $\varphi_{\mu}$ is tightly constrained~\cite{plehnix} to be close to
zero or~$\pi$. Accordingly, we also choose $\mu$ to be real. To be in
agreement with the anomalous magnetic moment of the muon, $(g-2)_\mu$,
we furthermore choose $\mu$ to be positive~\cite{newBNL,g-2,newDavier}.
On the other hand, the bounds on the phases of the third generation
trilinear couplings are much weaker.
The phases of $\mu$ and $A_\tau$ (the scalar top and bottom sector as
well as the gluino enter only as virtual particles,
i.e.\ subleading, in the decays evaluated here) appear
only in the combination $(\varphi_{\Atau} + \varphi_{\mu})$
(or in different combinations together with phases of $M_1$ or $M_3$).
Setting $\varphi_{\mu} = 0$ (see above) as well as $\varphi_{\gl} = 0$
(we do not consider the gluino phase in this paper) leaves us with
$\varphi_{\Atau}$ and $\varphi_{M_1}$ as the only complex valued
parameters. (The dependence of decays involving
SUSY particles on $\varphi_{\At}$ and $\varphi_{\Ab}$ has recently been analyzed in detail in
\citeres{SbotRen,Stop2decay}.)
We will show the results for some representative numerical examples.
The SUSY parameters are chosen according to the scenario, ${\cal S}$,
shown in \refta{tab:para}, but with one of the parameters varied.
\begin{table}[t!]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
Scen.\ & $\tan \beta$ & $M_{H^\pm}$ & $\mcha{2}$ & $\mcha{1}$
& $M_{\tilde{l}_L}$ & $M_{\tilde{l}_R}$ & $A_l$
\\ \hline\hline
${\cal S}$ & 20 & 160 & 600 & 350 & 300 & 310 & 400
\\ \hline
\end{tabular}
\caption{MSSM parameters for the initial numerical
investigation; all
masses are in GeV. $M_1$, $M_2$ and $\mu$ are chosen such that the
values for $\mcha{1}$ and $\mcha{2}$ are reproduced and \refeq{M1M2} is fulfilled
(see text).
The diagonal soft SUSY-breaking parameters in the squark sector
are set to $1200 \,\, \mathrm{GeV}$
and the corresponding trilinear couplings to $2400 \,\, \mathrm{GeV}$.
}
\label{tab:para}
\end{center}
\renewcommand{\arraystretch}{1.0}
\end{table}
\noindent
The absolute value of $M_1$ (see above) is fixed via the GUT
relation (with $|M_2| \equiv M_2$)
\begin{align}
|M_1| &= \frac{5}{3} \tan^2 \theta_\mathrm{w} M_2 \approx \frac{1}{2} M_2~.
\label{M1M2}
\end{align}
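Numerically, with $s_\mathrm{w}^2 \approx 0.22$ one has
$\tfrac{5}{3} \tan^2 \theta_\mathrm{w} \approx 0.48$, cf.\ the values of
$M_1$ and $M_2$ in \refta{tab:chaneu}.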
For the numerical analysis we fix $\mcha{1,2}$. From the two chargino
masses and \refeq{M1M2} the numerical values for $|M_1|$, $M_2$ and $\mu$
can be evaluated avoiding any ambiguity (leaving $\varphi_{M_1}$ as a free
parameter), see below.
As default we use $\varphi_{M_1} = 0$.
This ensures that parameter variations keep the
variation of the phase space at a minimum level and the numerical
results mainly show the effects from the higher-order corrections
to the decay widths.
We invert the mass relations of \refeq{eq:mcha}
in order to express the parameters
$\mu$ and $M_2$ (which are taken to be real, see above) as a
functions of the chargino masses.
The resulting quartic equation leads to two sets of solutions.
Each set of solutions satisfies the relations
\begin{align}
|M_2 \mu - M_W^2 \sin 2\beta| &=
\eta_{\chi}(M_2 \mu - M_W^2 \sin 2\beta) = \mcha{1}\mcha{2},
\label{eq:mcha1mcha2}
\\
M_2^2 + \mu^2 + 2 M_W^2 &= \mcha{1}^2 + \mcha{2}^2
=: 2 \bmcha{}^2~,
\end{align}
where $\eta_\chi=\pm 1$ is defined by \refeq{eq:mcha1mcha2}.
Choosing $\mu$ and $M_2$ real and positive and a lower experimental
bound on $\mcha{1}$ of $\sim 100 \,\, \mathrm{GeV}$ (see below) yields $\eta_\chi = +1$.
The above two relations are symmetric under an exchange of $M_2$
and $\mu$. One finds two solutions,
\begin{align}
\label{mu-gt-M2}
\{\mu, M_2\} &= \{x_+,x_- \}~, \\
\label{mu-lt-M2}
\{\mu, M_2\} &= \{x_-,x_+ \}~,
\end{align}
with
\begin{align}
x^2_{\pm} & =
\bmcha{}^2 - M_W^2
\pm \left[
\left( \bmcha{}^2 - M_W^2 \right)^2
- \left( \mcha{1}\mcha{2} + M_W^2 s_{2 \beta} \right)^2
\right]^{\frac{1}{2}}\, .
\label{eq.x2pm}
\end{align}
The two choices (\ref{mu-gt-M2}) and (\ref{mu-lt-M2}) correspond to a
more higgsino- or gaugino-like heavy chargino, respectively (and the
reverse for the lighter chargino).
While the phase space of a chargino decay is not
affected by this choice, the various branching ratios are. Consequently,
for our numerical analysis we define two scenarios,
\begin{align}
\label{eq.SE}
\ensuremath{{\cal S}_>} &: \mu > M_2 \quad (\cha{2} \mbox{~more higgsino-like})~, \\
\label{eq.SZ}
\ensuremath{{\cal S}_<} &: \mu < M_2 \quad (\cha{2} \mbox{~more gaugino-like})~.
\end{align}
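The inversion is easily checked numerically; the following sketch evaluates
\refeq{eq.x2pm} for the input values of \refta{tab:para} and reproduces the
derived $\mu$ and $M_2$ of \refta{tab:chaneu}:
\begin{verbatim}
import numpy as np

# x_+ and x_- of eq. (eq.x2pm) with eta_chi = +1; GeV units.
MW, tb = 80.399, 20.0
mcha1, mcha2 = 350.0, 600.0
s2b = 2*tb/(1 + tb**2)                     # sin(2 beta)
mbar2 = 0.5*(mcha1**2 + mcha2**2)
rad = np.sqrt((mbar2 - MW**2)**2 - (mcha1*mcha2 + MW**2*s2b)**2)
xp = np.sqrt(mbar2 - MW**2 + rad)          # ~581.8: mu in S_>, M2 in S_<
xm = np.sqrt(mbar2 - MW**2 - rad)          # ~362.1: M2 in S_>, mu in S_<
print(xp, xm)
\end{verbatim}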
The numerical scenarios are defined such that many decay modes are open
simultaneously to permit an analysis of as many channels as possible.
Only the channels $\cha{2} \to \neu{4} H^\pm/W^\pm$ and
$\cha{1} \to \neu{2,3,4} H^\pm/W^\pm$ are closed, mostly due to
\refeq{M1M2}.
We will start with a variation of $\mcha{2}$, and
analyze later the results for varying $\varphi_{M_1}$.
The scenarios are in agreement with the
MSSM Higgs boson searches at LEP~\cite{LEPHiggsSM,LEPHiggsMSSM}.
Too small values of the lightest Higgs boson mass would be reached for
$\tan \beta \lsim 5$ within ${\cal S}$\ as given in \refta{tab:para}.
Furthermore,
the following exclusion limits for neutralinos~\cite{pdg} hold in
our numerical scenarios:
\begin{align}
\mneu{1} &> 46 \,\, \mathrm{GeV}, \;
\mneu{2} > 62 \,\, \mathrm{GeV}, \;
\mneu{3} > 100 \,\, \mathrm{GeV}, \;
\mneu{4} > 116 \,\, \mathrm{GeV}~.
\end{align}
It should be noted that the limit for $\mneu{1}$ arises solely from
\refeq{M1M2}. In the absence of this condition, no limit on a light
neutralino mass exists, see \citere{masslessx} and references therein.
A few examples of the chargino and neutralino masses are shown in
\refta{tab:chaneu}, while Higgs and slepton masses are shown in
\refta{tab:higgsslep}. The values of $\mcha{1,2}$ allow copious
production of the charginos in SUSY cascades at the LHC.
Furthermore, the production of $\cha{1}\champ{2}$ or $\chap{1}\cham{1}$
at the ILC(1000), i.e.\ with $\sqrt{s} = 1000 \,\, \mathrm{GeV}$, via
$e^+e^- \to \cha{1}\champ{1,2}$ will be possible,
with all the subsequent decay modes (\ref{CNH}) -- (\ref{CSnl})
being (in principle) open. The clean environment of the ILC would
permit a detailed study of the chargino decays~\cite{ilc,lhcilc}.
For the parameters of scenarios $\ensuremath{{\cal S}_>}$ and $\ensuremath{{\cal S}_<}$, see \refta{tab:para}
and \refeqs{eq.SE}, (\ref{eq.SZ}), we show the cross sections for
chargino pair production at the ILC(1000), varying the chargino
masses. The calculation has been performed at the tree-level, which
is sufficient to get an overview about the expected number of events.
Higher-order corrections could change these numbers by
\order{10\%}~\cite{ChaProd,BaroII}.
For the values in \refta{tab:para} and unpolarized beams
we find, for $\ensuremath{{\cal S}_>}$ ($\ensuremath{{\cal S}_<}$),
$\sigma(e^+e^- \to \cha{1}\champ{2}) \approx 4\, (12)~{\rm fb}$, and
$\sigma(e^+e^- \to \chap{1}\cham{1}) \approx 55\, (80)~{\rm fb}$.
Choosing appropriate polarized beams these cross sections can be
enhanced by a factor of approximately $2$ to $3$.
An integrated luminosity of $\sim 1\, \ensuremath{\mbox{ab}^{-1}}$ would yield about
$(4\mbox{--}12) \times 10^3$ $\cha{1}\champ{2}$ events and about
$(55\mbox{--}80) \times 10^3$ $\chap{1}\cham{1}$ events, with appropriate
enhancements in the case of polarized beams.
The ILC environment would result in an accuracy of
the relative branching ratio \refeq{brrel} close to the statistical
uncertainty. The statistical precisions for the various mass and
polarization assumptions, assuming a (hypothetical) 10\%~BR and $1\, \ensuremath{\mbox{ab}^{-1}}$,
are shown in the two right-most columns in \refta{tab:ILCproduction}.
Depending on the combination of allowed decay channels a determination of
the branching ratios at the per-cent level might be achievable in the
high-luminosity running of the ILC(1000).
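These statistical precisions follow from simple Poisson counting. As a
sketch (assuming the uncertainty is purely statistical, i.e.\
$\Delta{\rm BR}/{\rm BR} \approx 1/\sqrt{N \cdot {\rm BR}}$ with $N$ the
number of produced chargino pairs), the first row of
\refta{tab:ILCproduction} gives
\begin{align}
N = \sigma_{\rm pol} \cdot {\cal L}_{\rm int}
= 167.7~{\rm fb} \times 1\, \ensuremath{\mbox{ab}^{-1}} \approx 1.7 \times 10^5~, \qquad
\frac{\Delta {\rm BR}}{{\rm BR}} \approx \frac{1}{\sqrt{0.1 \cdot N}}
\approx 0.8\%~, \notag
\end{align}
quoted as $1\%$ in the table; the smallest entry,
$\sigma_{0,0} = 4.1~{\rm fb}$ for $\cha{1}\champ{2}$ in \ensuremath{{\cal S}_>},
correspondingly gives $1/\sqrt{410} \approx 5\%$.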
\begin{table}[htb!]
\renewcommand{\arraystretch}{1.6}
\begin{center}
\begin{tabular}{|c||c|r|r|r|r|r|r||c|c|c|}
\hline
Scenario
&
$\tan \beta$ &
$\mcha{2}$ & $\mcha{1}$
& $\mneu{4}$ & $\mneu{3}$ & $\mneu{2}$ & $\mneu{1}$
& $\mu$ & $M_2$ & $M_1$
\\ \hline\hline
\ensuremath{{\cal S}_>} & 20 & 600.0 & 350.0 & 599.4 & 586.0 & 350.1 & 171.4 &
581.8 & 362.1 & 172.8
\\ \hline
\ensuremath{{\cal S}_<} & 20 & 600.0 & 350.0 & 600.1 & 366.5 & 358.7 & 267.2 &
362.1 & 581.8 & 277.7
\\ \hline
\end{tabular}
\caption{The chargino and neutralino masses in \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}.
We also show the values for the ``derived'' parameters $M_1$,
$M_2$ and $\mu$.
All masses are in GeV, rounded to $0.1 \,\, \mathrm{GeV}$ to show the
size of small mass
differences, which can determine whether a certain decay channel is
kinematically closed or open.
}
\label{tab:chaneu}
\end{center}
\renewcommand{\arraystretch}{1.0}
\end{table}
\begin{table}[htb!]
\renewcommand{\arraystretch}{1.6}
\begin{center}
\begin{tabular}{|c||c|r|r|r|r|r||r|r|r|r|}
\hline
Scenario
&
$m_{\tilde{\mu}_1}$ &
$m_{\tilde{\mu}_2}$ &
$m_{\tilde{\tau}_1}$ &
$m_{\tilde{\tau}_2}$ &
${m_{\tilde{\nu}_\mu}} $ &
${m_{\tilde{\nu}_\tau}} $ &
$M_{H^\pm}$ & $m_{\He}$ & $m_{\Hz}$ & $m_{\Hd}$
\\ \hline\hline
\ensuremath{{\cal S}_>} & 303.4 & 313.3 & 273.8 & 339.5 & 293.0 & 293.0 & 160.0 & 127.4 &
137.7 & 140.0
\\ \hline
\ensuremath{{\cal S}_<} & 303.7 & 313.1 & 287.5 & 328.0 & 293.0 & 293.0 & 160.0 & 127.2 &
137.5 & 140.4
\\ \hline
\end{tabular}
\caption{The slepton and Higgs masses in \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}.
The selectron and electron sneutrino masses
are equal to those of the corresponding smuon and muon sneutrino
up to a few tenths of $\,\, \mathrm{GeV}$.
All masses are in GeV, rounded to $0.1 \,\, \mathrm{GeV}$.
}
\label{tab:higgsslep}
\end{center}
\renewcommand{\arraystretch}{1.0}
\end{table}
\begin{table}[htb!]
\renewcommand{\arraystretch}{1.6}
\begin{center}
\begin{tabular}{|c||c|c|r|r||c|c|}
\hline
Scen. & $\mcha{1},\mcha{2}\,[{\rm GeV}]$ & process & $\sigma_{0,0}\,[{\rm fb}]$
& $\sigma_{\rm pol}\,[{\rm fb}]$ &
stat.\ prec.$_{0,0}$ & stat.\ prec.$_{\rm pol}$
\\ \hline\hline
\ensuremath{{\cal S}_>} & 350, 600 & $e^+e^- \to \chap{1}\cham{1}$ & 58.3 & 167.7 & 1\% & 1\%
\\ \hline
\ensuremath{{\cal S}_>} & 450, 600 & $e^+e^- \to \chap{1}\cham{1}$ & 19.8 & 56.0 & 2\% & 1\%
\\ \hline
\ensuremath{{\cal S}_<} & 350, 600 & $e^+e^- \to \chap{1}\cham{1}$ & 77.7 & 185.0 & 1\% & 1\%
\\ \hline
\ensuremath{{\cal S}_<} & 450, 600 & $e^+e^- \to \chap{1}\cham{1}$ & 29.1 & 64.2 & 2\% & 1\%
\\ \hline\hline
\ensuremath{{\cal S}_>} & 350, 500 & $e^+e^- \to \cha{1}\champ{2}$ & 21.5 & 56.5 & 2\% & 1\%
\\ \hline
\ensuremath{{\cal S}_>} & 350, 600 & $e^+e^- \to \cha{1}\champ{2}$ & 4.1 & 10.5 & 5\% & 3\%
\\ \hline
\ensuremath{{\cal S}_<} & 350, 500 & $e^+e^- \to \cha{1}\champ{2}$ & 34.2 & 93.1 & 2\% & 1\%
\\ \hline
\ensuremath{{\cal S}_<} & 350, 600 & $e^+e^- \to \cha{1}\champ{2}$ & 11.5 & 31.9 & 3\% & 2\%
\\ \hline
\end{tabular}
\caption{
Chargino production cross sections at the ILC(1000).
Here $\sigma_{0,0}$ denotes the cross section for
unpolarized beams, while $\sigma_{\rm pol}$
denotes that with electron and positron polarization $-80\%$ and
$+60\%$, respectively.
The two right-most columns show the statistical precision for a
(hypothetical) branching ratio of 10\% assuming an integrated
luminosity of $1\, \ensuremath{\mbox{ab}^{-1}}$, rounded to 1\%.
}
\label{tab:ILCproduction}
\end{center}
\renewcommand{\arraystretch}{1.0}
\end{table}
The numerical results we show in the next subsections are of
course dependent on the choice of the SUSY parameters. Nevertheless, they
give an idea of the relevance of the full one-loop corrections.
Channels (and their respective one-loop corrections) that may look
unobservable due to the smallness of their BR in the plots shown below
could become important if other channels are kinematically forbidden.
Consequently, the one-loop corrections to {\em all} channels are
evaluated analytically, but in the numerical analysis we only show the
channels that are kinematically open in our numerical scenarios.
The results shown in this and the following subsections consist of
``tree'', which denotes the tree-level value, and ``full'', which denotes
the decay width including {\em all} one-loop corrections as described
in \refse{sec:calc}.
We start the numerical analysis with $\cham{2}$~decay widths evaluated as
a function of $\mcha{2}$.
For the ``tree'' contributions,
we start at $\mcha{2} = 469.3 \,\, \mathrm{GeV}$,
its lowest value (for fixed $\mcha{1} = 350 \,\, \mathrm{GeV}$),
up to $\mcha{2} = 1000\,\, \mathrm{GeV}$.
For the ``full'' results we start at $\mcha{2} = 475 \,\, \mathrm{GeV}$.
For lower values of $\mcha{2}$ the on-shell renormalization scheme
adopted here leads to unreliable results,
as $M_2$ approaches $\mu$, and the potential problems described in
\refse{sec:chaneu} start to take effect. However, this affects only a
parameter range of $\sim 5 \,\, \mathrm{GeV}$.
In the figures below
the upper panels contain the results for the absolute
value of the various decay widths, $\Gamma(\DecayCmxy{i})$ (left) and
the relative correction from the full one-loop contributions
(right). The lower panels show the same results for
${\rm BR}(\DecayCmxy{i})$.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
Since all parameters are chosen real, absorptive parts of self-energy
type corrections on external legs do not contribute at the one-loop
level. This will be different in
\refse{sec:1LphiMe}.
In order to understand the qualitative behavior of the various decay widths
we first briefly summarize the composition of the relevant charginos and
neutralinos in the two numerical scenarios and their couplings to other
particles. In our notation the charginos are a mixture of gaugino
($\tilde G$) and higgsino ($\tilde H$), while the neutralinos are
mixtures of bino ($\tilde B$), wino ($\tilde W$), and higgsino,
\begin{align}
\cha{i} = [\tilde{H}^\pm + \tilde{W}^\pm]_i, \qquad
\neu{j}= [\tilde{H}^0 + \tilde{W}^3 + \tilde{B} ]_j~.
\end{align}
For the two numerical scenarios and depending on the relative size of
$\mcha{2}$ or $\mcha{1}$ we show the decomposition in
\refta{tab:chaneu_character}.
\begin{table}[htb!]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c||c||c|c|c|c||c|c|}
\hline
Scenario & $\mu,M_2$ & $\quad\neu{1}\quad$ & $\quad\neu{2}\quad$ & $\quad\neu{3}\quad$ & $\quad\neu{4}\quad$
& $\quad\cha{1}\quad$ & $\quad\cha{2}\quad$
\\ \hline\hline
\ensuremath{{\cal S}_>}, ``low $\mcha{2}$'' & $\mu \gsim M_2$ & $\tilde{B}$ & $\tilde{W}|\tilde{H}$ & $\tilde{H}$ & $\tilde{H}|\tilde{W}$
& $\tilde{G}|\tilde{H}$ & $\tilde{G}|\tilde{H}$
\\ \hline
\ensuremath{{\cal S}_>}, ``high $\mcha{2}$'' & $\mu \gg M_2$ & $\tilde{B}$ & $\tilde{W}$ & $\tilde{H}$ & $\tilde{H}$
& $\tilde{G}$ & $\tilde{H}$
\\ \hline
\ensuremath{{\cal S}_<}, ``low $\mcha{2}$'' & $\mu \lsim M_2$ & $\tilde{B}$ & $\tilde{H}|\tilde{W}$ & $\tilde{H}$ & $\tilde{W}|\tilde{H}$
& $\tilde{G}|\tilde{H}$ & $\tilde{G}|\tilde{H}$
\\ \hline
\ensuremath{{\cal S}_<}, ``high $\mcha{2}$'' & $\mu \ll M_2$ & $\tilde{H}$ & $\tilde{H}$ & $\tilde{B}$ & $\tilde{W}$
& $\tilde{H}$ & $\tilde{G}$
\\ \hline\hline
\ensuremath{{\cal S}_>}, ``low $\mcha{1}$'' & $\mu \gg M_2$ & $\tilde{B}$ & $\tilde{W}$ & $\tilde{H}$ & $\tilde{H}$
& $\tilde{G}$ & $\tilde{H}$
\\ \hline
\ensuremath{{\cal S}_>}, ``high $\mcha{1}$'' & $\mu \gsim M_2$ & $\tilde{B}$ & $\tilde{W}|\tilde{H}$ & $\tilde{H}$ & $\tilde{H}|\tilde{W}$
& $\tilde{G}|\tilde{H}$ & $\tilde{G}|\tilde{H}$
\\ \hline
\ensuremath{{\cal S}_<}, ``low $\mcha{1}$'' & $\mu \ll M_2$ & $\tilde{B}$ & $\tilde{H}$ & $\tilde{H}|\tilde{B}$ & $\tilde{W}$
& $\tilde{H}$ & $\tilde{G}$
\\ \hline
\ensuremath{{\cal S}_<}, ``high $\mcha{1}$'' & $\mu \lsim M_2$ & $\tilde{B}$ & $\tilde{H}|\tilde{W}$ & $\tilde{H}$ & $\tilde{W}|\tilde{H}$
& $\tilde{G}|\tilde{H}$ & $\tilde{G}|\tilde{H}$
\\ \hline
\end{tabular}
\caption{
Character of the charginos and neutralinos in the analyzed regions of parameter space,
indicating their main electroweak eigenstate component(s).
We introduce the short-hand notation:
$\tilde{B}$ = bino,
$\tilde{W}$ = wino,
$\tilde{H}$ = higgsino,
$\tilde{G}$ = gaugino,
$\tilde{G}|\tilde{H}$ = mixed gaugino-higgsino (for charginos),
and
$\tilde{W}|\tilde{H}$, $\tilde{H}|\tilde{W}$, $\tilde{H}|\tilde{B}$ = mixed wino-higgsino, mixed higgsino-wino, mixed higgsino-bino (for neutralinos).
}
\label{tab:chaneu_character}
\end{center}
\renewcommand{\arraystretch}{1.0}
\end{table}
The coupling structure relevant in the chargino decays can be read off
from the interaction Lagrangians, which are symbolically given by
\begin{align}
{\cal L}_{\cha{}\neu{}W^\pm} = ~&
{\cal L}_{\tilde{H}^\pm\tilde{H}^0 W^\pm}
+
{\cal L}_{\tilde{W}^\pm\tilde{W}^3W^\pm }
\label{eq:Lagr.CNW}
~,\\
{\cal L}_{\cha{}\neu{}H^\pm} = ~&
{\cal L}_{\tilde{W}^\pm\tilde{H}^0 H^\pm }
+
{\cal L}_{\tilde{H}^\pm \tilde{W}^3 H^\pm}
+ {\cal L}_{\tilde{H}^\pm\tilde{B}H^\pm }
~,
\label{eq:Lagr.CNH}
\\
{\cal L}_{\cha{i}\cha{j}Z} = ~&
{\cal L}_{\tilde{W}^\pm \tilde{W}^\pm Z} +
{\cal L}_{\tilde{H}^\pm \tilde{H}^\pm Z}
+ \delta_{ij} {\cal L}_{\ldots}
\label{eq:Lagr.CCZ} ~,
\\
{\cal L}_{\cha{i}\cha{j}h_k} = ~&
{\cal L}_{\tilde{W}^\pm \tilde{H}^\pm h_k }
\label{eq:Lagr.CCH} ~,\\
{\cal L}_{\cha{} \nu_l \tilde{l}_k} = ~&
{\cal L}_{\tilde{W}^\pm \nu_l \tilde{l}_L}
+ {\cal L}_{\tilde{H}^\pm \nu_l \tilde{l}_R} (\propto m_l\tan \beta)
\label{eq:Lagr.CSln} ~,\\
{\cal L}_{\cha{} l \tilde{\nu}_l} = ~&
{\cal L}_{\tilde{W}^\pm l \tilde{\nu}_l}
+ {\cal L}_{\tilde{H}^\pm l \tilde{\nu}_l} (\propto m_l\tan \beta)~,
\label{eq:Lagr.CSnl}
\end{align}
where all other field combinations correspond to ``forbidden''
interactions. The allowed combinations can be summarized as follows,
\begin{itemize}
\item Decay into $W^\pm$: only gaugino-gaugino and
higgsino-higgsino interaction, but no bino-gaugino-$W$.
\item Decay into $H^\pm$: only gaugino-higgsino interaction.
\item Decay into $Z$: only gaugino-gaugino and higgsino-higgsino
interaction.
\item Decay into $h_k$: only gaugino-higgsino interaction.
\item Decay into $\nu_l\, \tilde{l}$: gaugino-$\tilde{l}_L$
(EW), higgsino-$\tilde{l}_R$ (Yukawa, suppressed with $m_l$).
\item Decay into $l\, \tilde{\nu}_l$: gaugino (EW), higgsino
(Yukawa, suppressed with $m_l$; see the numerical estimate below).
\end{itemize}
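The numerical impact of the Yukawa suppression mentioned in the last two
items can be estimated as follows (a rough estimate, using the pole
masses $m_\tau \approx 1.78 \,\, \mathrm{GeV}$, $m_\mu \approx 0.106 \,\, \mathrm{GeV}$,
$m_e \approx 5.1 \times 10^{-4} \,\, \mathrm{GeV}$; the actual evaluation may employ
running masses). With $\tan \beta = 20$ one finds
\begin{align}
m_\tau \tan \beta \approx 35.5 \,\, \mathrm{GeV}~, \qquad
m_\mu \tan \beta \approx 2.1 \,\, \mathrm{GeV}~, \qquad
m_e \tan \beta \approx 0.01 \,\, \mathrm{GeV}~, \notag
\end{align}
so that purely Yukawa-mediated decay widths, scaling as $m_l^2$, are
suppressed by $(m_\mu/m_\tau)^2 \approx 3.5 \times 10^{-3}$ and
$(m_e/m_\tau)^2 \approx 8 \times 10^{-8}$ relative to the $\tau$ channel
(up to mixing and phase-space effects), explaining the strong generation
dependence of the slepton channels discussed below.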
\subsection{Full one-loop results for varying \boldmath{$\mcha{2}$}}
\label{sec:1Lmcha2}
We start our numerical analysis with the decays $\DecayCmNH{2}{i}$
($i = 1, 2, 3$, where $\DecayCmNH{2}{4}$ is kinematically forbidden).
The results for $\DecayCmNH{2}{1}$ are presented in
\reffi{fig:mC2.cha2neu1hp}.
The dips, best visible in the upper right plots, are due to thresholds in
the vertex corrections.
At $\mcha{2} = 488 \,\, \mathrm{GeV}$ the $\cha{1} h_2$
threshold can be seen in \ensuremath{{\cal S}_>}\ (the $\cha{1} h_3$ threshold, only $\sim 0.5 \,\, \mathrm{GeV}$
above, remains nearly invisible). A second dip can be seen at
$\mcha{2} = 510\ (513) \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}) corresponding to the
$\neu{2} H^-$ threshold, and a third dip in \ensuremath{{\cal S}_<}\ at $\mcha{2} = 533 \,\, \mathrm{GeV}$
corresponding to the $\neu{3} H^-$ threshold.
In \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}\ for low $\mcha{2}$ the higgsino
component of the heavy chargino allows the decay to the bino-like
$\neu{1}$ and the charged Higgs. At large $\mcha{2}$ within \ensuremath{{\cal S}_<}\
the lightest neutralino changes from bino to higgsino dominated (cf.\ \refta{tab:chaneu_character}) and
the decay proceeds via the ``fully allowed'' higgsino-wino interaction with a
corresponding increase in the decay width.
Within \ensuremath{{\cal S}_>}, on the other
hand, we find a pure higgsino-bino induced decay.
The loop corrections
can be larger than $20\%$
at the smallest $\mcha{2}$ in both scenarios,
see the discussion in \refse{sec:chaneu} on the $\mu\simeq M_2$ region.
At large $\mcha{2}$ the size of the loop corrections levels out at
$\sim -6\%\ (+9\%)$ in \ensuremath{{\cal S}_<}\ (\ensuremath{{\cal S}_>}). The BR in \ensuremath{{\cal S}_<}\ rises from zero
at $\mcha{2}\simeq 485\,\, \mathrm{GeV}$
to above $4.5\%$, whereas in \ensuremath{{\cal S}_>}\ it is found around $4\%$ for
most of the parameter space.
The corrections to the BR's reach the level of $30\%$
at the smallest $\mcha{2}$. In \ensuremath{{\cal S}_>}\ they are found around
$\sim -1\%$ for $\mcha{2} \gsim 600 \,\, \mathrm{GeV}$, whereas in \ensuremath{{\cal S}_<}\ they remain
at the $10\%$ level. In view of the ILC precision at the per-cent
level, at least in \ensuremath{{\cal S}_<}\ the loop corrections are highly relevant.
The results for $\DecayCmNH{2}{j}$ ($j = 2,3$) are shown in
\reffi{fig:mC2.cha2neujhp}.
The dip in \ensuremath{{\cal S}_<}\ at $\mcha{2} = 533 \,\, \mathrm{GeV}$ visible in $\DecayCmNH{2}{2}$
stems from the $\neu{3} H^-$ threshold.
Finally, dips in $\DecayCmNH{2}{2(3)}$ can be observed at
$\mcha{2} \sim 604, 606, (873, 886) \,\, \mathrm{GeV}$,
which correspond, respectively, to the thresholds
$\neu{2}\to\neu{1}Z $, $\neu{2}\to\neu{1}h $
($\neu{3}\to \neu{1}h$, $\neu{3}\to \cha{1} W^{\mp}$).
As before the general behavior of the decay
widths can be understood from the decomposition of the heavy chargino
and the neutralinos. In \ensuremath{{\cal S}_>}\ only the decay $\DecayCmNH{2}{2}$ is
kinematically allowed, and the width rises up to $\sim 1.7 \,\, \mathrm{GeV}$ for
large $\mcha{2}$. In \ensuremath{{\cal S}_<}\ two observations can be made.
At $\mcha{2} = 652 \,\, \mathrm{GeV}$ the $\neu{2}$ and $\neu{3}$ ``switch
character''.%
\footnote{
The fact that $\mcha{2} + \mcha{1} = 652 + 350 \approx 1000 \,\, \mathrm{GeV}$ (and
consequently, the ``character switch'' occurs nearly at the vertical
line) is a pure numerical coincidence.}
For this chargino mass one finds $\mneu{2} = \mneu{3}$, and
the neutralino mixing matrix exhibits a discontinuity. As a consequence,
as can be observed in \reffi{fig:mC2.cha2neujhp}, the $\neu{2}$-curves
continue for $\neu{3}$ and vice versa.
At low masses we find ``partially''
allowed wino/higgsino interaction for $\DecayCmNH{2}{2,3}$. At high
masses the $\neu{2}$ couples as a higgsino to the charged wino, whereas
the $\neu{3}$ becomes a bino and decouples from the wino-like $\cha{2}$,
and the
decay width goes to zero. The relative size of the one-loop corrections
to the decay width varies roughly between $-4\%$ and $+4\%$.
The ${\rm BR}(\DecayCmNH{2}{2})$ in \ensuremath{{\cal S}_>}\ reaches $\sim 12\%$ already close to
threshold, whereas in \ensuremath{{\cal S}_<}\ it rises only up to $\sim 5\%$
due to the
larger total width, see below. The ${\rm BR}(\DecayCmNH{2}{3})$ in \ensuremath{{\cal S}_<}\
reaches a
maximum of $4\%$ around $\mcha{2} \approx 600 \,\, \mathrm{GeV}$ and goes to zero for
larger chargino masses. The relative size of the one-loop effects on the
BR's remains below $3\%$ in the ILC(1000) relevant mass region,
which, however, can still be relevant, see \refta{tab:ILCproduction}.
\medskip
Next we analyze the decays $\DecayCmNW{2}{1,2,3}$ shown in
\reffis{fig:mC2.cha2neu1w}, \ref{fig:mC2.cha2neujw}. The general
behavior of the decays to $W^-$ is very similar to the decays to $H^-$
discussed above.
The decay involving the lightest neutralino is presented in \reffi{fig:mC2.cha2neu1w}.
The dip at $\mcha{2} = 488 \,\, \mathrm{GeV}$ corresponds to the $\cha{1} h_2$
threshold.
A second dip can be seen at
$\mcha{2} = 510\ (513) \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}), corresponding to the
$\neu{2} H^-$ threshold, as in $\DecayCmNH{2}{1}$,
and a third dip, corresponding to the $\neu{3} H^-$ threshold, at $\mcha{2} = 533 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_<}.
The size of the decay widths for the decay $\DecayCmNW{2}{1}$ can be
understood as follows. The dominant components of $\cha{2}$ and
$\neu{1}$ have a vanishing coupling according to \refeq{eq:Lagr.CNW}.
The size of the width is then driven by the ``small'' components with a
generic size of $\sim M_W/M_2$. However, their smallness is
compensated by factors of $M_2/M_W$, as can be seen in the tree-level
expression, \refeq{CNWtree}. In \ensuremath{{\cal S}_>}\ the width rises from
$0.1 \,\, \mathrm{GeV}$ at the lowest $\mcha{2}$ up to $\sim 0.6 \,\, \mathrm{GeV}$ at high $\mcha{2}$ values.
In \ensuremath{{\cal S}_<}\ we find a minimum (reaching zero) around $\mcha{2} = 540 \,\, \mathrm{GeV}$,
with the width rising up to $\sim 1.6 \,\, \mathrm{GeV}$ once one-loop corrections are included. The
relative size of the corrections varies between $\sim +10\%$ and $+4\%$
in \ensuremath{{\cal S}_>}\ and around $-10\%$ in \ensuremath{{\cal S}_<}\ (apart from the region where the
width is negligibly small). The BR's behave accordingly, reaching $\sim 8\%$
in \ensuremath{{\cal S}_>}\ for $\mcha{2} \approx 500 \,\, \mathrm{GeV}$ and going down to $\sim 4.5\%$
for large $\mcha{2}$, whereas in \ensuremath{{\cal S}_<}\ values up to $5\%$ are found. The
relative size of the one-loop effects on the BR's in the ILC(1000)
relevant mass region varies between $+10\%$ and $+3\%$ in \ensuremath{{\cal S}_>}\ and
between $\sim -20\%$ and $-7\%$ in \ensuremath{{\cal S}_<}. This corresponds to several
times the anticipated ILC(1000) precision.
The decays involving the heavier neutralinos, $\DecayCmNW{2}{2,3}$
(where the decay $\DecayCmNW{2}{4}$ is kinematically forbidden) are
presented in \reffi{fig:mC2.cha2neujw}.
The two dips in \ensuremath{{\cal S}_>}\ for
$\DecayCmNW{2}{2}$
stem from the $\cha{1} h_2$ and the $\neu{2} H^-$ thresholds, which
can also be observed in the two other curves in \ensuremath{{\cal S}_<}.
In \ensuremath{{\cal S}_<}\ an additional dip from the $\neu{3} H^-$ threshold can be seen
at $\mcha{2} = 533 \,\, \mathrm{GeV}$ for $\DecayCmNW{2}{2,3}$.
Finally, the dips at
$\mcha{2} \sim 604, 606, 873, 886 \,\, \mathrm{GeV}$
have already been described for $\DecayCmNH{2}{2,3}$.
The behavior of the decays $\DecayCmNW{2}{2,3}$, as stated above, is
similar to the one of $\DecayCmNH{2}{2,3}$, where again the small
chargino/neutralino components determine the size of the widths. As for
$\DecayCmNH{2}{1}$ the smallness of the components is compensated by
factors $\sim M_2/M_W$ in the tree-level expressions, see
\refeq{CNWtree}. Also the ``character switch'' between $\neu{2}$ and
$\neu{3}$ in \ensuremath{{\cal S}_<}\ appears as in \reffi{fig:mC2.cha2neujhp}. The relative
size of the one-loop corrections, shown in the upper right plot, is
found between $\sim -10\%$ and $\sim -5\%$ in \ensuremath{{\cal S}_<}, except where the width
becomes very small. In \ensuremath{{\cal S}_>}\ the one-loop corrections are small close to
threshold and grow to $\sim -9\%$ at large $\mcha{2}$.
The ${\rm BR}(\DecayCmNW{2}{2})$ in \ensuremath{{\cal S}_>}\ reaches a maximum of $19\%$ around
$\mcha{2} = 500 \,\, \mathrm{GeV}$ and settles at $\sim 15\%$ at large $\mcha{2}$. In
\ensuremath{{\cal S}_<}\ the BR's are between $\sim 5\%$ and $10\%$ in the ILC(1000)
relevant region, while $\sim 5\%$ is also found for
${\rm BR}(\DecayCmNW{2}{2})$ for large $\mcha{2}$. The relative size of the
one-loop effects on the BR's in the ILC(1000) region (and not directly
at the production threshold) is found between $\sim -10\%$ and $\sim -5\%$.
Again, this corresponds to several times the anticipated ILC(1000)
precision.
\medskip
Now we turn to the decays involving neutral Higgs bosons.
The channels $\DecayCmCh{k}$ ($k = 1,2,3$) can serve as a source of Higgs
production from SUSY cascades at the LHC, and are therefore of
particular interest.
The decay $\DecayCmCh{1}$ is shown in \reffi{fig:mC2.cha2cha1h1}.
The dips are due to the $\cha{1} h_2$, $\neu{2} H^-$ and $\neu{3} H^-$ thresholds
and have been described for $\DecayCmNW{2}{2,3}$, see above.
The tree-level results show a very small difference between \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}.
This holds also after the full one-loop corrections are included.
The widths rise from zero at threshold to $\sim 2.6 \,\, \mathrm{GeV}$.
The soft rise at threshold is due to the p-wave suppression
of the decay into a ${\cal CP}$-even scalar.
As for the decays
to a charged Higgs boson the admixture of the charginos is crucial for
the size of the decay widths: from the wino/higgsino mixtures at low
$\mcha{2}$ nearly pure wino and higgsino states are reached at large
$\mcha{2}$, corresponding to an ``allowed'' coupling in
\refeq{eq:Lagr.CCH}. The size of the one-loop corrections slightly above
the production threshold is relatively large,
$\gsim +10\%$, and dominated by the s-wave contribution%
\footnote{\label{threshold-loop}
It should be noted that a calculation very close to threshold requires
the inclusion of additional (non-relativistic) contributions, which is
far beyond the scope of this paper. Consequently, very close to threshold
our calculation (at the tree- or at the loop-level) does not provide a
very accurate description of the decay width.}%
, %
and reaches $\sim -5\%$ at large $\mcha{2}$.
The BR's reach $20\%$ in \ensuremath{{\cal S}_>}\ and $9\%$
in \ensuremath{{\cal S}_<}\ at the highest $\mcha{2}$ values in the ILC(1000) relevant
region, and remain nearly flat for higher $\mcha{2}$ values. The
relative size of the one-loop effects on the BR's is only sizable close
to threshold, and is found at the $\sim 1\%$ level for
$\mcha{2} \gsim 600 \,\, \mathrm{GeV}$, which corresponds roughly to the
anticipated ILC(1000) precision, see \refta{tab:ILCproduction}.
The decay $\DecayCmCh{2}$ is shown in \reffi{fig:mC2.cha2cha1h2}, where
a qualitatively similar result to $\DecayCmCh{1}$ can be found, with the
main difference being a somewhat smaller decay width and branching ratio,
mostly due to the smaller and negative contribution of the chirally
violating part in the corresponding coupling, see \refeq{CCHtree}.
The steep rise at threshold is due to the unsuppressed s-wave contribution of
this channel, where $h_2$ is the ${\cal CP}$-odd Higgs boson.
The dips at $\mcha{2}=510 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ and $\mcha{2}=513, 533 \,\, \mathrm{GeV}$
correspond to the thresholds described above for $\DecayCmCh{1}$.
Finally, the dip at $\mcha{2}=606\,\, \mathrm{GeV}$ corresponds to the
$\DecayCNW{1}{1}$ threshold.
It should be noted that the dips correspond
to the thresholds of tree-level processes, while the widths are
evaluated with one-loop masses,
leading to the effect of the $\cha{1} h_2$ threshold in this decay
(see, however, footnote~\ref{threshold-loop}).
The relative size of the one-loop corrections to the decay width varies
in \ensuremath{{\cal S}_>}\ between $+8\%$ at $\mcha{2} = 500 \,\, \mathrm{GeV}$ and very small negative
values for large $\mcha{2}$ (where possibly larger negative values can
be reached for even larger $\mcha{2}$).
In \ensuremath{{\cal S}_<}\ the corrections vary around $\sim -3\%$.
Concerning the one-loop effects on the BR's, in
\ensuremath{{\cal S}_>}\ at $\mcha{2} = 500 \,\, \mathrm{GeV}$ around $+6\%$ are found, going down to
$\sim +3\%$ at large $\mcha{2}$. In \ensuremath{{\cal S}_<}\ the corrections vary between
$-2\%$ at low $\mcha{2}$ and $+2\%$ at high $\mcha{2}$. For small
$\mcha{2}$ values the corrections can be substantially larger than the
ILC(1000) precision.
The last channel involving a neutral Higgs, $\DecayCmCh{3}$, is shown in
\reffi{fig:mC2.cha2cha1h3}.
We observe the same dips as for $\DecayCmCh{2}$.
A steady rise of the decay width in
\ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}\ is found, reaching $\sim 1 \,\, \mathrm{GeV}$ at $\mcha{2} = 1000 \,\, \mathrm{GeV}$.
The smaller results with respect to $\DecayCmCh{1}$ are due to the
opposite sign of the chirally violating contributions in the
corresponding coupling, see \refeq{CCHtree}.
As in the decay into $h_1$, the threshold behavior is p-wave suppressed
at tree-level with a small s-wave dominated contribution to the loop
corrections.
A corresponding rise in the BR up to $7.5\%$ ($3\%$) is found in
\ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}). Within \ensuremath{{\cal S}_>}\ the relative size of the one-loop corrections is
small above threshold, reaching $\sim -5\%$ at large $\mcha{2}$. Within
\ensuremath{{\cal S}_<}\ the corrections can be very large slightly above threshold,
reaching $\sim 30\%$ at $\mcha{2} \gsim 500 \,\, \mathrm{GeV}$, going down to zero
for large $\mcha{2}$. The relative effect on the BR's is similar, where
values around $5\%$--$10\%$ ($\sim 2.5\%$) are found in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}),
exceeding by far the anticipated ILC(1000) precision.
\medskip
The last channel involving SM gauge bosons, $\cham{2} \to \cham{1} Z$, is presented
in \reffi{fig:mC2.cha2cha1z}.
The same dips as in the decays into neutral Higgs bosons can be observed.
The tree-level widths are equal in the two scenarios \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}, since
both the $Z\cha{1}\cha{2}$~coupling as well as the two chargino masses
are symmetric under the exchange of $\mu \leftrightarrow M_2$.
As for the decays $\DecayCmNW{2}{1,2,3}$,
the couplings of the dominant chargino components largely vanish, yielding
a small decay width. However, as above, the smallness of the ``allowed couplings'' is
compensated by factors in the tree-level expression $\sim M_2/M_Z$,
see \refeq{CCZtree}.
Including one-loop corrections the decay widths rise up to
$\sim 1.9 \,\, \mathrm{GeV}$
for $\mcha{2} = 1000 \,\, \mathrm{GeV}$, where, contrary to the tree-level widths, a
small difference between \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}\ can be observed. The branching
ratios are, except at the smallest $\mcha{2}$,
relatively flat around
$14.5\%$ $(6\%)$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}).
The size of the one-loop corrections to
the decay width grows from $\sim -2\%$ $(-6\%)$ at small $\mcha{2}$ to
$-7.5\%$ $(-9.5\%)$ at large $\mcha{2}$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}). The effects on
the branching ratios are mostly at the $-5\%$ level, which is larger
than the ILC(1000) precision.
\medskip
Now we turn to the decays involving (scalar) leptons.
All these decay widths follow the same pattern, which can be understood
from \refeqs{eq:Lagr.CSln}, (\ref{eq:Lagr.CSnl}).
We have chosen $M_{\tilde{l}_L} < M_{\tilde{l}_R}$, leading to a mostly left-handed lighter and a
mostly right-handed heavier slepton, with significant mixing
in the scalar tau sector.
In \reffis{fig:mC2.cha2stau1nu}, \ref{fig:mC2.cha2smu1nu},
\ref{fig:mC2.cha2sel1nu} we show the decays
$\DecayCmnSl{2}{\tau}{1}, \DecayCmnSl{2}{\mu}{1}, \DecayCmnSl{2}{e}{1}$
respectively.
The dips, best visible in the upper right panels, are due to
the $\cha{1} h_2$, $\neu{2} H^-$ and $\neu{3} H^-$ thresholds
and have been already described for $\DecayCmNW{2}{2,3}$.
Within \ensuremath{{\cal S}_>}\ the $\cha{2}$
turns from a mixed higgsino/wino state at low $\mcha{2}$ to a pure
higgsino state at large $\mcha{2}$ with a vanishing coupling to the
left-handed slepton.
Consequently, these widths are very small. Only in the case of
$\tilde{\tau}_1$, due to the mixture of left- and right-handed states, the width
does not vanish for large $\mcha{2}$, but passes through zero at
$\mcha{2} \approx 600 \,\, \mathrm{GeV}$ due to a cancellation of the
higgsino (suppressed by the Yukawa term) and the small gaugino
contributions
to the $\cham{2}\nu_\tau\tilde{\tau}_1^\dagger$ coupling.
In \ensuremath{{\cal S}_<}, on the other hand,
$\cha{2}$ changes from the mixed wino/higgsino state to a wino-like
state at large $\mcha{2}$, and the EW coupling (which is flavor
independent) dominates the decay to the
left-handed sleptons, leading to a rise of the decay widths to
$\sim 3.2 \,\, \mathrm{GeV}$ in the case of
$\DecayCmnSl{2}{\mu}{1}, \DecayCmnSl{2}{e}{1}$ and a reduced
value of $2 \,\, \mathrm{GeV}$ for $\DecayCmnSl{2}{\tau}{1}$, again due to the mixing
in the stau sector.
The ${\rm BR}(\DecayCmnSl{2}{\tau}{1})$ in \ensuremath{{\cal S}_>}\ is large
only at the smallest $\mcha{2}$ and below $\sim 0.5 \%$ for most
$\mcha{2}$ values. For the first and second slepton generation we also
find a monotonic decrease, although somewhat weaker than for the $\tilde{\tau}_1$.
Within \ensuremath{{\cal S}_<}\ the ${\rm BR}(\DecayCmnSl{2}{\tau}{1})$ is mostly found at
$\sim 6\%$, whereas
${\rm BR}(\DecayCmnSl{2}{\mu}{1})$ and ${\rm BR}(\DecayCmnSl{2}{e}{1})$ are
somewhat larger,
due to the absence of mixing, and reach $\sim 9.5\%$. The size of the
one-loop corrections to the decay widths and BR's is only substantial
where the decay widths are small, reaching nearly $-12\%$ at
$\mcha{2} \approx 500 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}. In the case of a large BR, i.e.\ in
\ensuremath{{\cal S}_<}, the corrections stay at the level of $\sim 1\%$, below
the precision anticipated for the ILC(1000).
The decays to the heavier sleptons,
$\DecayCmnSl{2}{\tau}{2}, \DecayCmnSl{2}{\mu}{2}, \DecayCmnSl{2}{e}{2}$,
shown in \reffis{fig:mC2.cha2stau2nu}, \ref{fig:mC2.cha2smu2nu},
\ref{fig:mC2.cha2sel2nu}, follow a similar pattern.
The dips visible in
the upper right panels stem from the same thresholds as those observed
for the decays to the lighter sleptons.
At low $\mcha{2}$ values the mixed higgsino/wino state couples to the
right-handed sleptons through the Yukawa term~$\propto m_l$. At large
$\mcha{2}$ in \ensuremath{{\cal S}_>}\ only the higgsino part of $\cha{2}$ survives, leading
to a (Yukawa term) suppressed coupling to the right-handed slepton and
the corresponding decay widths are very small, below
$0.3$, $0.01$ and $2 \times 10^{-7} \,\, \mathrm{GeV}$ for
$\DecayCmnSl{2}{\tau}{2}, \DecayCmnSl{2}{\mu}{2}, \DecayCmnSl{2}{e}{2}$,
respectively. In \ensuremath{{\cal S}_<}, on the other hand, the small wino component
couples to the small left-handed admixture of the heavier slepton.
For $\DecayCmnSl{2}{\tau}{2}$, due to the relatively large mixing, this
still yields a loop-corrected decay width up to $1.2 \,\, \mathrm{GeV}$ for large
$\mcha{2}$, while for the first and second generation sleptons the
widths stay below $0.045$ and $1.1 \times 10^{-6} \,\, \mathrm{GeV}$, respectively.
Substantial branching ratios are only found for $\DecayCmnSl{2}{\tau}{2}$,
where values between $\sim 6\%$ and $\sim 2\%$ are realized. In this
case the size of the one-loop corrections varies between $0$~and
$+5\%$. At the high end, this exceeds the anticipated ILC(1000)
accuracy.
The last decays involving scalar leptons are $\DecayCmlSn{2}{l}$
($l = \tau, \mu, e$), presented in \reffis{fig:mC2.cha2snutau} --
\ref{fig:mC2.cha2snuel}.
The dips visible in the upper right panels correspond to the same
thresholds as in the decays into charged sleptons.
The behavior of the decay widths is understood from
\refeq{eq:Lagr.CSnl}. The higgsino part of $\cha{2}$, dominating in \ensuremath{{\cal S}_>},
couples $\propto m_l$, whereas the wino part, dominating in \ensuremath{{\cal S}_<}, couples
with electroweak strength, which is the same for all three
generations. Consequently, we find very similar results for
$\Gamma(\DecayCmlSn{2}{l})$ for $l = \tau, \mu, e$ in \ensuremath{{\cal S}_<}, while the decay
widths are suppressed with $m_l^{2}$ in \ensuremath{{\cal S}_>}. The BR's in \ensuremath{{\cal S}_>}\ are at
(or above) the $10\%$ level, whereas in \ensuremath{{\cal S}_<}\ values above
$2\%$ are only
reached for ${\rm BR}(\DecayCmlSn{2}{\tau})$, and tiny BR's are found in the
other two cases. The one-loop effects in \ensuremath{{\cal S}_>}\ are at the $2\%$ level in
all three generations, and vary between $-5\%$ and $+11\%$ for
${\rm BR}(\DecayCmlSn{2}{\tau})$ in \ensuremath{{\cal S}_<}, exceeding by far the ILC(1000)
accuracy.
\bigskip
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2neu1hpsq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2neu1hpsq.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2neu1hpsq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2neu1hpsq.eps}
\end{tabular}
\vspace{2em}
\caption{$\Gamma(\DecayCmNH{2}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2neu1hp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2neujhpNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2neujhpNS.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2neujhpNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2neujhpNS.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNH{2}{j})$ for $j = 2,3$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2neujhp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2neu1wsq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2neu1wsq.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2neu1wsq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2neu1wsq.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{2}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2neu1w}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2neujwNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2neujwNS.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2neujwNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2neujwNS.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{2}{j})$ for $j = 2,3$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2neujw}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2cha1h1.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2cha1h1.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2cha1h1.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2cha1h1.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmCh{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2cha1h1}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2cha1h2.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2cha1h2.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2cha1h2.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2cha1h2.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmCh{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2cha1h2}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2cha1h3.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2cha1h3.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2cha1h3.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2cha1h3.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmCh{3})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2cha1h3}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2cha1z.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2cha1z.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2cha1z.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2cha1z.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\cham{2} \to \cham{1} Z)$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2cha1z}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2stau1nusq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2stau1nusq.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2stau1nusq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2stau1nusq.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{2}{\tau}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2stau1nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2stau2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2stau2nu.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2stau2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2stau2nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{2}{\tau}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2stau2nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2smu1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2smu1nu.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2smu1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2smu1nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{2}{\mu}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2smu1nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2smu2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2smu2nu.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2smu2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2smu2nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{2}{\mu}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2smu2nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2sel1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2sel1nu.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2sel1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2sel1nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{2}{e}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2sel1nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2sel2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2sel2nu.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2sel2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2sel2nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{2}{e}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2sel2nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2snutau.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2snutau.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2snutau.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2snutau.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmlSn{2}{\tau})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2snutau}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2snumu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2snumu.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2snumu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2snumu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmlSn{2}{\mu})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2snumu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2snuel.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2snuel.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.br.cha2snuel.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.relbr.cha2snuel.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmlSn{2}{e})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{2}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000).
}
\label{fig:mC2.cha2snuel}
\end{center}
\end{figure}
\clearpage
\subsection{Full one-loop results for varying \boldmath{$\mcha{1}$}}
\label{sec:1Lmcha1}
In this section we analyze the $\cham{1}$~decay widths evaluated as
a function of $\mcha{1}$, starting at $\mcha{1} = 300 \,\, \mathrm{GeV}$.
For the ``tree'' contributions we show results up to
$\mcha{1} = 480.2 \,\, \mathrm{GeV}$, approximately the highest value reachable for the heavy chargino
mass fixed at $\mcha{2} = 600 \,\, \mathrm{GeV}$.
The ``full'' results are only shown up to $\mcha{1} = 475 \,\, \mathrm{GeV}$.
For larger values of $\mcha{1}$ the on-shell renormalization scheme
adopted here leads to unreliable results,
as $M_2$ approaches $\mu$, and the potential problems described in
\refse{sec:chaneu} start to take effect.
The leading production channel of the lightest charginos at the
ILC(1000), $e^+e^- \to \chap{1}\cham{1}$,
is open for the full parameter range.
In general the line of argument for the behavior of a certain decay width
is identical to the one given in detail in
\refse{sec:1Lmcha2}. Consequently, we will be very brief about these
arguments here and mainly discuss the size of the effects.
\medskip
We start with the only $\cham{1}$ decay involving Higgs bosons,
$\DecayCmNH{1}{1}$, shown in \reffi{fig:mC1.cha1neu1hp}.
The decay widths reach values up to $\sim 0.1\ (0.15) \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}).
The ${\rm BR}(\DecayCmNH{1}{1})$ in \ensuremath{{\cal S}_>}\ varies around $2\%$,
while in \ensuremath{{\cal S}_<}\ it steeply rises above threshold and goes up to
nearly $10\%$ at $\mcha{1} \approx 450 \,\, \mathrm{GeV}$. Within \ensuremath{{\cal S}_>}\ the one-loop
effects hardly exceed $-10\%$, while in \ensuremath{{\cal S}_<}, where the BR is large, a
variation of $\sim 15\%$ is found.
The corresponding decay involving the $W$~boson, $\DecayCmNW{1}{1}$, is
shown in \reffi{fig:mC1.cha1neu1w}.
The dip at $\mcha{1}=428\,\, \mathrm{GeV}$ visible in \ensuremath{{\cal S}_<}\ in the upper right panel
is due to the $\DecayCmNH{1}{1}$ threshold. The two decay
widths in \ensuremath{{\cal S}_>}\ and \ensuremath{{\cal S}_<}\ rise above threshold to reach values between
$\sim 0.1$ and $\sim 0.15 \,\, \mathrm{GeV}$.
The BR's behave somewhat differently: in \ensuremath{{\cal S}_>}\ values larger than $10\%$
are reached for small $\mcha{1}$, whereas ${\rm BR}(\DecayCmNH{1}{1})$ reaches about
$2\%$ for larger $\mcha{1}$. In \ensuremath{{\cal S}_<}, on the other hand, intermediate
values larger than $25\%$ are reached.
The size of the one-loop effects on the BR's is substantial.
They vary around $-15\%$ in \ensuremath{{\cal S}_>}\ and between $3\%$ and more than $10\%$
in \ensuremath{{\cal S}_<}. Consequently, these corrections have to be taken into account
in a reliable ILC analysis.
\medskip
Next we discuss the $\cha{1}$ decays into scalar leptons. The results for
$\DecayCmnSl{1}{\tau}{1}$, $\DecayCmnSl{1}{\mu}{1}$, $\DecayCmnSl{1}{e}{1}$
are shown in \reffis{fig:mC1.cha1stau1nu}, \ref{fig:mC1.cha1smu1nu},
\ref{fig:mC1.cha1sel1nu}, respectively.
The dips at $\mcha{1}=347, 428 \,\, \mathrm{GeV}$, best visible in the upper right panels,
are due to the $\DecayCmNW{1}{1}$ and $\neu{1}H^-$ thresholds, respectively.
The decay widths grow monotonically up to values of $\sim 0.5 \,\, \mathrm{GeV}$
in \ensuremath{{\cal S}_>}\ for all three decays. Within \ensuremath{{\cal S}_<}\ they
rise up to $\sim 0.25 \,\, \mathrm{GeV}$ for $\DecayCmnSl{1}{\tau}{1}$ due to the
non-vanishing mixing in the scalar tau sector. For the second and first
generation values of $\sim 0.15 \,\, \mathrm{GeV}$ are
reached.
The ${\rm BR}(\DecayCmnSl{1}{\tau}{1})$ is found between $50\%$ ($30\%$)
and $\sim 15\%$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}). The size of the one-loop effects exceeds
$1\%$ only in \ensuremath{{\cal S}_<}, where it varies between $\sim 2\%$ and
$3\%$, which is roughly at the level of the anticipated ILC(1000)
accuracy.
The decays to the heavier scalar leptons are, as for the $\cha{2}$
decays, determined by the size of the Yukawa couplings $\propto m_l$.
The results for
$\DecayCmnSl{1}{\tau}{2}, \DecayCmnSl{1}{\mu}{2}, \DecayCmnSl{1}{e}{2}$
are shown in \reffis{fig:mC1.cha1stau2nu}, \ref{fig:mC1.cha1smu2nu},
\ref{fig:mC1.cha1sel2nu}. In the case of the scalar tau the highest
values reached are $\Gamma(\DecayCmnSl{1}{\tau}{2}) \lsim 0.075 \,\, \mathrm{GeV}$ in
\ensuremath{{\cal S}_>}, corresponding to a branching ratio below $\sim 2\%$. All other
decays have a very small decay width and a correspondingly small BR.
Finally, the decays $\DecayCmlSn{1}{l}$ ($l = \tau, \mu, e$)
are presented in
\reffis{fig:mC1.cha1snutau} -- \ref{fig:mC1.cha1snuel}. These decays
proceed mainly with electroweak strength and thus are very similar for
the three generations, and can indeed be substantial. The dips
are due to the $\neu{1}W^-$ and $\neu{1}H^-$ thresholds.
The size of the decay widths reaches about $0.6 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_<}\ and
$\sim 0.3 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}. The ${\rm BR}(\DecayCmlSn{1}{\tau})$ is nearly $18\%$
for most $\mcha{1}$ values in \ensuremath{{\cal S}_<}, while in \ensuremath{{\cal S}_>}\ it drops from $40\%$ at small
$\mcha{1}$ to about $18\%$ at large $\mcha{1}$.
effects in the latter case varies between $\sim 3\%$ and $\sim 0.5\%$.
For ${\rm BR}(\DecayCmlSn{1}{l})$ ($l = \mu, e$)
values between $\sim 18\%$ in \ensuremath{{\cal S}_>}\ and
$\sim 12\%$ in \ensuremath{{\cal S}_<}\ are found. In the latter case the one-loop
corrections can be sizable, around $-6\%$, which is relevant
for a reliable ILC analysis.
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1neu1hp.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1neu1hp.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1neu1hp.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1neu1hp.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNH{1}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1neu1hp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1neu1w.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1neu1w.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1neu1w.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1neu1w.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{1}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1neu1w}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1stau1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1stau1nu.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1stau1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1stau1nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{1}{\tau}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1stau1nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1stau2nusq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1stau2nusq.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.logcha1stau2nusq.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1stau2nusq.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{1}{\tau}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1stau2nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1smu1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1smu1nu.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1smu1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1smu1nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{1}{\mu}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1smu1nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1smu2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1smu2nu.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1smu2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1smu2nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{1}{\mu}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1smu2nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1sel1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1sel1nu.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1sel1nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1sel1nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{1}{e}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1sel1nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1sel2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1sel2nu.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1sel2nu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1sel2nu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmnSl{1}{e}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1sel2nu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1snutau.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1snutau.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1snutau.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1snutau.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmlSn{1}{\tau})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1snutau}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1snumu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1snumu.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1snumu.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1snumu.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmlSn{1}{\mu})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1snumu}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1snuel.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1snuel.eps}
\\[5em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.br.cha1snuel.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.relbr.cha1snuel.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmlSn{1}{e})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{1}$ varied.
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:mC1.cha1snuel}
\end{center}
\end{figure}
\subsection{Full one-loop results for varying \boldmath{$\varphi_{M_1}$}}
\label{sec:1LphiMe}
As in the previous sections, the results shown in this
subsection consist of ``tree'', which denotes the tree-level
value, and ``full'', which denotes the decay width including {\em all} one-loop
corrections, as described in \refse{sec:calc}.
We also show the result leaving out the contributions from absorptive
parts of the one-loop self-energy corrections as discussed in
\refse{sec:cMSSM}, labelled as ``full R''.
We concentrate on the dependence on $\varphi_{M_1}$ of the decays with a
neutralino in the final state. In all other decays the
neutralinos appear only as virtual particles in the loops, resulting in a
negligible dependence on $\varphi_{M_1}$.
It should be noted, however, that all decay channels must be computed to
obtain the correct branching ratios.
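This is simply because each branching ratio is normalised to the total width,
\[
{\rm BR}(\cham{i} \to X) = \frac{\Gamma(\cham{i} \to X)}{\sum_{X'} \Gamma(\cham{i} \to X')}\,,
\]
so that the loop corrections to any individual channel enter all branching
ratios via the denominator.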
The parameters are chosen according to \refta{tab:para}, and
consequently the full parameter range is accessible at the ILC(1000).\\
In \reffis{fig:PhiM1.cha2neu1hp} -- \ref{fig:PhiM1.cha2neu3hp}
we present the results for the decays involving the charged Higgs boson,
$\DecayCmNH{2}{1,2,3}$. The decay to the lightest neutralino
(see \reffi{fig:PhiM1.cha2neu1hp}) reaches
decay widths around $0.25\ (0.07) \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}), where the
dependence on $\varphi_{M_1}$ in the latter is substantial, varying between
$-1\%$ and $+9\%$.
The inclusion of the absorptive self-energy parts, on
the other hand, yields only a small effect for the parameters chosen
here. The ${\rm BR}(\DecayCmNH{2}{1})$ stays below $1\%$ in \ensuremath{{\cal S}_<}\
and reaches around $4\%$ in \ensuremath{{\cal S}_>}. Here also the
one-loop effects on the BR are substantial, around $+9\%$.
Consequently, an analysis of $\varphi_{M_1}$ at the ILC(1000) requires the
inclusion of the full one-loop corrections.
The case of $\DecayCmNH{2}{2}$ is shown in \reffi{fig:PhiM1.cha2neu2hp}.
The dips, again best visible in the upper right panel, are due to
the $\DecayNNZ{2}{1}$ threshold at $\varphi_{M_1}=15^\circ$
and the $\DecayNNh{2}{1}{1}$ threshold at $\varphi_{M_1}=28^\circ$.
Due to ${\cal CPT}$-invariance the masses are invariant under
$\varphi_{M_1}\to -\varphi_{M_1}$ and
mirrored dips are observed at
$\varphi_{M_1}=332^\circ$ and $\varphi_{M_1}=345^\circ$.
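Explicitly, a threshold at $\varphi_{M_1} = \varphi_0$ is mirrored at
\[
\varphi_{M_1} = 360^\circ - \varphi_0\,, \qquad \mbox{i.e.} \qquad
360^\circ - 15^\circ = 345^\circ\,, \quad 360^\circ - 28^\circ = 332^\circ\,.
\]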
Notice that at tree level the lightest Higgs boson $h_1$ and the $Z$~boson
are typically almost degenerate in mass.
The widths reach values around $\sim 0.7 \,\, \mathrm{GeV}$ in both scenarios,
leading to branching ratios at the level of $12\%$ in \ensuremath{{\cal S}_>}\ and $4.5\%$
in \ensuremath{{\cal S}_<}\ with a small variation due to $\varphi_{M_1}$. Again the effects of the
absorptive self-energy contributions are small.
The relative effect of the one-loop corrections is also small at the
level of $\pm 1\%$, roughly at the level of the anticipated
ILC(1000) precision.
Since the decay $\DecayCmNH{2}{3}$ (\reffi{fig:PhiM1.cha2neu3hp})
is kinematically forbidden in \ensuremath{{\cal S}_>}\ we
show it only for \ensuremath{{\cal S}_<}. We find ${\rm BR}(\DecayCmNH{2}{3}) \sim 4\%$, again
with a small variation with $\varphi_{M_1}$. The effect of the absorptive
self-energy contributions is visible at the level of $\sim 1\%$ in the
corrections to $\Gamma(\DecayCmNH{2}{3})$, whereas the one-loop effect on
the BR is below $\pm 1\%$.
\medskip
Now we turn to the decays involving a $W$~boson, $\DecayCmNW{2}{1,2,3}$,
as shown in \reffis{fig:PhiM1.cha2neu1w} -- \ref{fig:PhiM1.cha2neu3w}.
For $\Gamma(\DecayCmNW{2}{1})$ we find values of $\sim 0.34 \,\, \mathrm{GeV}$ in
\ensuremath{{\cal S}_>}\ with a small dependence on $\varphi_{M_1}$, and values between $0.03 \,\, \mathrm{GeV}$
and $0.18 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_<}\ with a large dependence on $\varphi_{M_1}$. The one-loop
effects appear at the level of $+5\%$ and $-5\%$, respectively, where in
the latter case a sizable effect of the absorptive self-energy
contributions can be observed. The ${\rm BR}(\DecayCmNW{2}{1})$ yields
$\sim 5\%$ in \ensuremath{{\cal S}_>}\ and $\sim 1\%$ in \ensuremath{{\cal S}_<}, where the one-loop effects
are found to be $\sim 4\%$ (\ensuremath{{\cal S}_>}) and between $-9\%$ and $+2\%$
(\ensuremath{{\cal S}_<}). Especially in the latter case a reliable ILC analysis
requires the inclusion of the full one-loop calculation.
The decay $\DecayCmNW{2}{2}$ (see \reffi{fig:PhiM1.cha2neu2w})
yields decay widths around $\sim 1 \,\, \mathrm{GeV}$ in
both scenarios, corresponding to BR's of $\sim 16\%$ in \ensuremath{{\cal S}_>}\ and
$\sim 7\%$ in \ensuremath{{\cal S}_<}, with a small dependence on $\varphi_{M_1}$.
The same dips as in $\DecayCmNH{2}{2}$ can be observed.
The one-loop
effect on the BRs is found to be
$\sim -4\%$, and the variation with $\varphi_{M_1}$
is small {\em after} the inclusion of the absorptive self-energy
contributions, as can be seen in the lower
right panel. However, the size of the corrections still exceeds the
anticipated ILC(1000) accuracy.
The last decay, $\DecayCmNW{2}{3}$ (\reffi{fig:PhiM1.cha2neu3w}),
which is again only realized in \ensuremath{{\cal S}_<}, gives a BR around $\sim 6\%$,
where the one-loop corrections can be substantial at the level of $-7\%$.
Again, the variation with $\varphi_{M_1}$ becomes small {\em after} the
inclusion of the absorptive self-energy contributions.
\medskip
Finally we discuss the two relevant $\cham{1}$ decays.
In \reffi{fig:PhiM1.cha1neu1hp} we present the results for
$\DecayCmNH{1}{1}$. The decay is kinematically allowed only in
\ensuremath{{\cal S}_>}\ (with the other parameters chosen according to \refta{tab:para}).
The decay width is small, not exceeding $0.017 \,\, \mathrm{GeV}$,
corresponding to a ${\rm BR}(\DecayCmNH{1}{1})$ below $2.5\%$. The variation
with $\varphi_{M_1}$ is small.
The effect of the absorptive self-energy contributions is negligible in
both scenarios.
In view of the anticipated ILC(1000) accuracy at the per-cent level
these corrections should still be taken into account for a reliable
analysis.
The last decay, $\DecayCmNW{1}{1}$, is shown in
\reffi{fig:PhiM1.cha1neu1w}. The decay is kinematically
allowed for the full range of $\varphi_{M_1}$ only in \ensuremath{{\cal S}_>}, while in \ensuremath{{\cal S}_<}\ it
remains kinematically forbidden for $120^\circ \le \varphi_{M_1} \le 240^\circ$.
The decay widths are below $0.012\ (0.004) \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ (\ensuremath{{\cal S}_<}),
corresponding to BR's at the level of $\sim 1\%$ (\ensuremath{{\cal S}_>}) and between
$\sim 4\%$ and $0\%$ (\ensuremath{{\cal S}_<}). Here a strong dependence on $\varphi_{M_1}$ is
visible. The one-loop effects exceed $-15\%$ in \ensuremath{{\cal S}_>}\ and are
around $+4\%$ in \ensuremath{{\cal S}_<}.
The effect of the absorptive self-energy contributions is negligible.
As before, the corrections are potentially relevant for an
ILC(1000) analysis.
\bigskip
We have also analyzed the effect of a variation of $\varphi_{\Atau}$, but
found negligible effects for the parameters in \refta{tab:para}. The
situation would be different if the off-diagonal element in the scalar
tau mass matrix, $X_\tau = A_\tau - \mu\tan \beta$, depended strongly on
$A_\tau$, i.e.\ for small $\mu$ and/or $\tan \beta$.
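To illustrate this (taking, as a simple illustration, $\mu$ real and writing
$A_\tau = |A_\tau|\, e^{i \varphi_{\Atau}}$), the relative variation of the
stau mixing with the phase is
\[
\frac{1}{|X_\tau|} \frac{\partial |X_\tau|}{\partial \varphi_{\Atau}}
= \frac{|A_\tau|\, \mu\tan \beta\, \sin\varphi_{\Atau}}{|X_\tau|^2}
= {\cal O}\!\left(\frac{|A_\tau|}{\mu\tan \beta}\right)
\quad \mbox{for} \quad \mu\tan \beta \gg |A_\tau|\,,
\]
so that a sizable phase dependence indeed requires small $\mu$ and/or
$\tan \beta$. However, a detailed
analysis of these effects is beyond the scope of our paper.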
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha2neu1hp.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha2neu1hp.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha2neu1hp.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha2neu1hp.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNH{2}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha2neu1hp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha2neu2hpNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha2neu2hpNS.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha2neu2hpNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha2neu2hpNS.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNH{2}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha2neu2hp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha2neu3hpNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha2neu3hpNS.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha2neu3hpNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha2neu3hpNS.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNH{2}{3})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha2neu3hp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha2neu1w.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha2neu1w.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha2neu1w.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha2neu1w.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{2}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha2neu1w}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha2neu2wNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha2neu2wNS.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha2neu2wNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha2neu2wNS.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{2}{2})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha2neu2w}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha2neu3wNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha2neu3wNS.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha2neu3wNS.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha2neu3wNS.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{2}{3})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha2neu3w}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha1neu1hp.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha1neu1hp.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha1neu1hp.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha1neu1hp.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNH{1}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha1neu1hp}
\end{center}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.absf.cha1neu1w.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relf.cha1neu1w.eps}
\\[4em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.brf.cha1neu1w.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{PhiM1.relbrf.cha1neu1w.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\DecayCmNW{1}{1})$.
Tree-level (``tree'') and full one-loop (``full'') corrected
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\varphi_{M_1}$ varied.
Also shown are the full one-loop corrected decay widths omitting
the absorptive contributions (``full R'').
The upper left plot shows the decay width, the upper right plot shows
the relative size of the corrections.
The lower left plot shows the BR, the lower right plot shows
the relative size of the BR.
}
\label{fig:PhiM1.cha1neu1w}
\end{center}
\end{figure}
\clearpage
\newpage
\subsection{Full one-loop results: total decay widths}
\label{sec:gatot}
In this final subsection we briefly show the results for the total decay
widths in \reffi{fig:mCi.chaitotal}. The results for $\cha{2}$ are shown
in the upper row. The total width rises from its lowest values at
$\mcha{2} = 475 \,\, \mathrm{GeV}$ to about
$13 \,\, \mathrm{GeV}$ at $\mcha{2} = 1000 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}. In \ensuremath{{\cal S}_<}\ the width goes up to
$34 \,\, \mathrm{GeV}$, once loop corrections are included. The overall size of the
one-loop corrections varies strongly with $\mcha{2}$ as can be seen in
the upper right plot. Values of $\pm 5\%$ can easily be
reached.
In the lower row of \reffi{fig:mCi.chaitotal} we show the total width of
the lighter chargino. It rises only up to about $3.3 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_>}\ and
$\sim 1.5 \,\, \mathrm{GeV}$ in \ensuremath{{\cal S}_<}. Again the size of the one-loop corrections
varies with $\mcha{1}$, where away from threshold we find corrections at
the level of $+2\%$ in \ensuremath{{\cal S}_>}\ and between $+2\%$ and $\sim -10\%$ in \ensuremath{{\cal S}_<}.
\begin{figure}[htb!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.abs.cha2all.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC2.rel.cha2all.eps}
\\[2em]
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.abs.cha1all.eps}
\hspace{-4mm}
\includegraphics[width=0.49\textwidth,height=7.5cm]{mC1.rel.cha1all.eps}
\end{tabular}
\vspace{2em}
\caption{
$\Gamma(\cham{i} \to all)$, $i=1,2$.
Tree-level (``tree'') and full one-loop (``full'') corrected total
decay widths are shown with the parameters chosen according to ${\cal S}$\
(see \refta{tab:para}), with $\mcha{i}$ varied.
The upper left plot shows the decay width of $\cham{2}$ and
the upper right plot shows the corresponding relative size of the
corrections, both as a function of its mass.
The lower left and right plots show the same observables for $\cham{1}$.
The vertical lines indicate where $\mcha{1} + \mcha{2} = 1000 \,\, \mathrm{GeV}$,
i.e.\ the maximum reach of the ILC(1000)
for $\cha{1}\champ{2}$ pair production.
}
\vspace{2em}
\label{fig:mCi.chaitotal}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have evaluated two-body decay widths of charginos in
the Minimal Supersymmetric Standard Model with complex parameters
(cMSSM). Assuming heavy scalar quarks we take into account all decay channels
involving charginos, neutralinos, (scalar) leptons, Higgs bosons and SM
gauge bosons.
The decay modes are given in \refeqs{CNH} -- (\ref{CSnl}).
The evaluation of the decay widths is based on a full one-loop calculation
including hard and soft QED radiation.
Such a calculation is necessary to derive a reliable
prediction of any two-body branching ratio.
Three-body decay modes can become sizable only if all the two-body channels
are kinematically (nearly) closed and have thus been neglected
throughout the paper. The same applies to two-body decay modes that
appear only at the one-loop level.
We first reviewed the one-loop renormalization of the cMSSM, which is
relevant for our calculation.
We have given details for the lepton/slepton sector, whereas the details
for the chargino/neutralino and the Higgs boson sector can be found in
\citere{Stop2decay}.
We have discussed the calculation of the one-loop diagrams and the
treatment of the UV and IR divergences, where the latter are canceled by
the inclusion of soft QED radiation.
Our calculation set-up can easily be extended to other two-body decays
involving (scalar) quarks.
For the numerical analysis we have chosen a parameter set that allows
simultaneously {\em all} two-body decay modes under investigation
(but could potentially be in conflict with the most recent SUSY
search results from the LHC).
The masses of the charginos in this scenario are $600$ and $350 \,\, \mathrm{GeV}$.
This scenario allows copious
production of the charginos in SUSY cascades at the LHC.
Furthermore, the production of $\cha{1}\champ{2}$ or $\chap{1}\cham{1}$
at the ILC(1000), i.e.\ with $\sqrt{s} = 1000 \,\, \mathrm{GeV}$, via
$e^+e^- \to \cha{1,2}\champ{1}$ will be possible,
with all the subsequent decay modes (\ref{CNH}) -- (\ref{CSnl})
being (in principle) open. The clean environment of the ILC would
then permit a detailed, statistically dominated study of the chargino
decays. Depending on the channel and the polarization, a precision at
the per-cent level seems to be achievable.
Special attention is paid to chargino decays involving the
Lightest Supersymmetric Particle (LSP), i.e.\ the lightest
neutralino, or a neutral or charged Higgs boson.
In our numerical analysis we have shown results for varying $\mcha{1,2}$ and
$\varphi_{M_1}$, the phase of the soft SUSY-breaking parameter~$M_1$.
In the results with varied chargino masses only the lighter values allow
$\cha{1}\champ{2}$ production at the ILC(1000), whereas the results with
varied $\varphi_{M_1}$ have sufficiently light charginos to permit
$e^+e^- \to \cha{1}\champ{2}$.
In the numerical analysis we compared the tree-level width with the one-loop
corrected decay width. In the analysis with $\varphi_{M_1}$ varied we
explicitly took into account contributions from the absorptive parts of
self-energy contributions on external legs.
We also analyzed the relative change of the width
to demonstrate the size of the loop corrections on each individual
channel. In order to see the effect on the experimentally accessible
quantities we also show the various branching ratios at tree-level (all
channels are evaluated at tree-level) and at the one-loop level (with
all channels evaluated including the full one-loop
contributions). Furthermore we presented the relative change of the BRs
that can directly be compared with the anticipated experimental
accuracy.
We found sizable corrections in many of the decay channels.
Especially, the higher-order corrections of the chargino decay widths
involving the LSP can easily reach
a level of about $\pm 10\%$.
Decay modes involving Higgs bosons turn out to have slightly smaller
corrections. The size of the full one-loop corrections to the decay
widths and the branching ratios also depends strongly on $\varphi_{M_1}$. The
one-loop contributions, again being roughly of \order{5\%}, often vary
by a factor of $2-3$ as a function of $\varphi_{M_1}$.
All results on partial decay widths are given in detail in
\refses{sec:1Lmcha2} -- \ref{sec:1LphiMe}, while the total decay widths
are shown in \refse{sec:gatot}.
The numerical results we have shown are of course dependent on the choice of
the SUSY parameters. Nevertheless, they give an idea of the relevance
of the full one-loop corrections.
For other choices of SUSY masses the
corrections to the decay widths would stay the same, but the branching
ratios would look very different.
Channels (and their respective one-loop corrections) that may look
unobservable due to the smallness of their BR in our numerical examples
could become important if other channels are kinematically forbidden.
Following our analysis it is evident that the full one-loop corrections
are mandatory for a precise prediction of the various branching ratios.
This applies to LHC analyses, but even more to analyses at the ILC,
where a precision at the per-cent level is anticipated for the
determination of chargino branching ratios (depending on the chargino
masses, the center-of-mass energy and the integrated luminosity).
The results for the chargino decays will be implemented into the
Fortran code {\tt FeynHiggs}.
\subsection*{Acknowledgements}
We thank
A.~Bharucha,
M.~Drees,
A.~Fowler,
H.~Haber,
T.~Hahn,
O.~Kittel,
S.~Liebler,
H.~Rzehak
and
G.~Weiglein
for helpful discussions.
The work of S.H.\ was partially supported by CICYT (grant FPA
2007--66387 and FPA 2010--22163-C02-01).
F.v.d.P.\ was supported by
the Spanish MICINN's Consolider-Ingenio 2010 Programme under grant MultiDark CSD2009-00064.
\section{Introduction}
There is a strong tradition in X-ray astronomy of using data taken during
slewing manoeuvres to perform shallow surveys of the sky.
The Einstein (\cite{elvis}), Exosat (\cite{reynolds}) and RXTE (\cite{revnivtsev}) slew surveys all provide a useful
complement to dedicated all-sky surveys such as ROSAT (\cite{voges}) and
HEAO-1 (\cite{piccinotti}) and the
smaller area, medium sensitivity ASCA survey (\cite{ueda}) and pencil-beam {XMM-{\em Newton} }
(\cite{hasinger}) and Chandra (\cite{brandt}) deep looks. It has long been recognised that
XMM-Newton (\cite{jansen}), with its large collecting area, efficient CCDs,
wide energy band and tight point-spread function (PSF), has the potential to
make an important contribution to our knowledge of the local universe
from its slew data. Early estimates (\cite{Lumb98}; \cite{jonesandlumb}), based on an expected slewing speed of 30 degrees per hour,
and a slightly lower
background level than that actually encountered in orbit, predicted a 0.5--2 keV
flux limit of $2\times10^{-13}$ erg cm$^{-2}$s$^{-1}$. While the chosen in-orbit
slew speed of 90 degrees per hour reduces the sensitivity,
initial assessments of the data showed that the quality of the data is
good and that many sources are detected (\cite{freyberg}). A review of properties shows that the
{XMM-{\em Newton} } slew survey compares favourably with other large area surveys in terms of depth and positional accuracy (Table 1).
During slews all three imaging EPIC cameras take data in the observing mode
set in the previous pointed observation and with the Medium filter in
place. The slew speed of 90 degrees per hour combined with the slow
readout time of the MOS detectors (2.6s; \cite{turner}) means that
sources appear as
long streaks in the MOS cameras but are
well formed in the fast observing modes of the pn camera (\cite{struder}).
For this reason, only the EPIC-pn data have been analysed.
In this paper we present a catalogue drawn from slews taken between revolutions
314 and 978 covering a sky region of 6240 square degrees. The main
properties of the slew survey discussed in this paper are given in Table 2.
\begin{table}
\caption{Properties of large area X-ray surveys}
\label{table:SurveySumm}
\begin{center}
\begin{tabular}{l c c c c}
\hline\hline
Satellite & Energy range & Coverage$^{a}$ & Flux lim. & Position \\
& (keV) & \% of sky & $^{b}$ & error \\
\hline
RASS & 0.2-2.4 & 92 & 0.03 & 12\arcsec \\
Einstein slew & 0.2-3.5 & 50 & 0.3 & 1.2\arcmin \\
{\bf XMM slew (soft)} & {\bf 0.2-2} & {\bf 14} & {\bf 0.06} & {\bf 8\arcsec} \\
\hline
EXOSAT slew & 1-8 & 98 & 3 & 20\arcmin \\
HEAO-1/A2 & 2-10 & 100 & 3 & 60\arcmin \\
RXTE slew & 3-20 & 95 & 1.8 & 60\arcmin \\
{\bf XMM slew (hard)} & {\bf 2-12} & {\bf 14} & {\bf 0.4} & {\bf 8\arcsec} \\
\hline
\end{tabular}
\\
\end{center}
$^{a}$ The {XMM-{\em Newton} } slew sky coverage has been computed by adding the area
contained in all of the images used in source searching with an exposure time
greater than 1 second. \\
$^{b}$ Flux limit, units of $10^{-11}$ erg s$^{-1}$ cm$^{-2}$ \\
\end{table}
\begin{table}
\caption{Properties of the {XMM-{\em Newton} } slew survey}
\label{table:slewprops}
\begin{center}
\begin{tabular}{l c c c}
\hline\hline
Property & \multicolumn{3}{c}{Range (keV)}\\
& 0.2--2 & 2--12 & 0.2--12 \\
\hline
Observing time (s) & $5.4\times10^{5}$ & $5.4\times10^{5}$ & $5.4\times10^{5}$ \\
Mean exposure time$^{a}$ & 6.2 s & 6.0 s & 6.1 s\\
Total number of photons & $1.6\times10^{6}$ & $2.2\times10^{6}$ & $3.8\times10^{6}$ \\
Mean background$^{b}$ & 0.09 & 0.14 & 0.23\\
Median source count rate$^{c}$ & 0.68 & 0.90 & 0.81\\
Limiting source flux$^{d}$ & $6.0\times10^{-13}$ & $4.0\times10^{-12}$ & $1.2\times10^{-12}$\\
Num. sources (full cat)$^{e}$ & 2606 & 692 & 3863\\
Num. sources (clean cat)$^{e}$ & 1874 & 257 & 2364\\
\hline
\end{tabular}
\\
\end{center}
$^{a}$ The mean exposure time, after correcting for the energy-dependent
vignetting.\\
$^{b}$ cnts/arcmin$^{2}$. \\
$^{c}$ cnts/second. \\
$^{d}$ ergs/s/cm$^{2}$. Based on a detection of 4 photons from a source passing near the detector centre (exp. time = 10s), with a power-law spectrum of slope 1.7 and galactic absorption of 3$\times10^{20}$ atoms cm$^{-2}$. \\
$^{e}$ The number of sources flagged as good in the full and clean catalogues (see text). \\
\end{table}
Slews have been source searched down to a likelihood threshold of 8, which
after manual rejection of false detections gives 4710 candidate sources. Using simulations we have been able to identify a subset of high significance sources, with a likelihood
threshold dependent upon the background conditions, that gives 2692 sources
with a spurious fraction of $\sim4\%$ (the "clean" catalogue). Of these,
2621 are from unique sources. In the hard (2--12 keV) band the clean catalogue
contains 257 sources (253 unique) of which $\sim9\%$ are expected to be due to
statistical fluctuations.
The slew catalogue and accompanying images and exposure maps have been
made available through the XMM Science Archive (XSA) as a queryable
database and as FITS files\footnote{The catalogue was initially released
on May 3 2006. In this paper we discuss the updated version released in October
2006.}. A summary of scientific highlights from the slew survey has been
published in \cite{Read06}.
\section{Data selection}
The {XMM-{\em Newton} } satellite moves between targets by performing
an open-loop slew along the roll and pitch axes,
followed by a closed-loop slew, in which measurements from the star tracker are used in addition to the
Sun-sensor measurements to provide a controlled slew about all three axes,
correcting for residual errors accumulated in the long open-loop phase.
The open-loop slew is performed at a steady rate
of about 90 degrees per hour and it is data from this phase which may be
used to give a uniform survey of the X-ray sky.
Slew Data Files (SDF) have been stored in the XSA
from revolution 314 onwards (before this date slews were performed
with the CLOSED filter in place and no scientifically useful data
was taken). Data from revolutions 314 to 978 have been
used for this first catalogue but slews continue to be accumulated
and increments to the catalogue are planned to be released on a regular
basis.
Only science data from slews with a duration of greater than 30 minutes
were downlinked during the majority of these revolutions\footnote{This policy was changed from revolution 921 onwards to include all slews longer than 15 minutes}.
For the {\em Slew Survey} catalogue
we have selected only EPIC-pn exposures
performed in {\em Full Frame} (FF),
{\em Extended Full Frame} (eFF), and {\em Large Window}
(LW) modes, i.e.\ modes where all 12 CCDs are
integrating (in LW mode only half of each CCD).
The corresponding cycle times are
73.36\,ms, 199.19\,ms, and 47.66\,ms, which converts
to a scanned distance of 6.6 arcseconds, 17.9 arcseconds, and
4.3 arcseconds per cycle time, respectively for a slew speed of 90 degrees
per hour.
In the {\em Small Window} mode only the
central CCD is operated and a window of $64\times64$
pixels is read out, i.e.\ only about 1/3 of the single, prime CCD.
In the fast modes, {\em Timing} and {\em Burst},
only 1-dimensional spatial information
for the central CCD is available
and thus these modes are not well suited for
source detection. It was discovered in initial tests that slews
with a high background gave a large number
of false detections and resulted in extremely long execution times
for the source searching software. For this reason, slews with
an average 7.5--12 keV (FLAG=0, PATTERN=0-4) count rate in
excess of 5.5 c/s (25\% of all slews) were
excluded from the analysis (Fig.~\ref{fig:bckhist}). This leaves
a nominal 312 slews potentially useful for scientific analysis.
In practice a significant number of slews could not be processed
for a variety of reasons, including failures to create exposure maps,
unreasonably high exposure times, attitude reconstruction problems,
missing keywords and excessively large output images.
While it is strongly hoped that some of these
datasets may be recoverable in the future by improvements to the processing
system, for the purposes of this first catalogue they have been left out
and the catalogue constructed from 218 slews. A list of the observation numbers of these slews is available at
{\it http://xmm.esac.esa.int/external/xmm\_science/slew\_survey}
{\it/obsid\_tab.html}
and the slew paths are shown in Fig.~\ref{fig:slewpath}.
About 4\% of the covered area has been
slewed over more than once and eventually a deeper survey
will be available by combining overlapping slew data, especially near the ecliptic poles.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=9cm]{9193fg1.ps}}
\end{tabular}
\end{center}
\caption[Slew background histogram]
{ \label{fig:bckhist} The background level distribution measured from the
mean 7.5--12 keV count rate in each slew. The dashed line shows the
cut-off used to exclude high background slews in the tail of the distribution. }
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=9cm]{9193fg2.ps}}
\end{tabular}
\end{center}
\caption[Slew paths in Galactic coordinates]
{ \label{fig:slewpath} The paths of the slews used in the construction of
XMMSL1 in Galactic coordinates.}
\end{figure}
The EPIC-pn detector passes over a source in about 14 seconds, depending
on the position of the source and the angle between the detector Y-axis
and the slew path (the impact angle). Normally this angle is close to zero,
but an impact angle of up to 20 degrees is possible. If the source
passes through the detector optical-axis an effective on-axis exposure time of
$\sim 11$ seconds is achieved. In Fig.~\ref{fig:expsky} we show the exposure time of the
slew survey as a function of sky coverage. Due to the
vignetting function, the mean exposure time is energy-dependent being
6.2 seconds in the soft (0.2--2 keV) band and 6.0 seconds in the hard (2--12 keV) band.
Small ripples in the histogram are caused by gaps between
the EPIC-pn CCDs
and also by the different observing modes; in LW mode only
half of the CCD is exposed and the maximum effective exposure time
is $\sim$6 seconds.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg3.ps}}
\end{tabular}
\end{center}
\caption[Exposure time v Sky area]
{ \label{fig:expsky} A histogram of the cumulative sky area covered
for a given effective on-axis exposure time in
the total (0.2-12 keV) band.
}
\end{figure}
\section{Data processing}
The data have been used to perform three independent surveys; a soft band
(0.2--2 keV) X-ray survey with strong parallels to the ROSAT all-sky survey (\cite{voges}; RASS), a hard band (2--12 keV) survey and an {XMM-{\em Newton} } full-band
(0.2--12 keV) survey.
Data reduction was performed as detailed below, with the
public {XMM-{\em Newton} } Science Analysis Software (SAS), version 6.1 plus the following modifications:
\begin{itemize}
\item a modification for
the {\tt oal} library to handle the relatively large time gaps in the
Raw Attitude File (RAF); subsequently released with
{\tt SAS v7.0}.
\item an increase in the maximum number of attitude points which may be used
by {\tt eexpmap}; released with {\tt SAS v6.5}.
\end{itemize}
\subsection{Initial reduction}
The {XMM-{\em Newton} } Slew Data Files (SDFs) for EPIC-pn
were processed using the {\tt epchain} package of the SAS.
For diagnostic reasons a few parameters were set to
non-default values (e.g. events below 150\,eV were kept).
\subsection{Slew division}
Photon events are recorded initially in RAW or detector coordinates and have to
be transformed, using the satellite attitude history, into sky coordinates.
The tangential plane geometry commonly used to define a coordinate grid for
flat images is only valid for distances of 1--2 degrees from a reference
position, usually placed at the centre of the image. To avoid this
limitation, slew datasets have been divided into event files covering
roughly one degree by half a degree,
which were attitude corrected using the task {\tt attcalc}. Images and exposure maps were then
extracted from the event files using the tasks {\tt evselect} and {\tt eexpmap}.
This procedure relies on the attitude history of the satellite being accurately known
during the slew; a point which is addressed in section~\ref{sect:attitude}.
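The $\sim$1 degree image size keeps the tangential-plane (gnomonic)
distortion well below the positional accuracy of the survey, as the
following rough estimate suggests (an illustrative sketch, assuming the
error is dominated by the $\tan\theta - \theta$ radial term):
\begin{verbatim}
# Sketch: radial positional error of a tangent-plane projection,
# approximately tan(theta) - theta at field angle theta.
import math
for deg in (0.5, 1.0, 2.0):
    t = math.radians(deg)
    print(deg, "deg ->",
          round(math.degrees(math.tan(t) - t) * 3600.0, 2), "arcsec")
# -> 0.05, 0.37 and 2.93 arcsec: negligible at ~0.5-1 degree,
#    approaching the ~4 arcsec PSF accuracy only beyond ~2 degrees.
\end{verbatim}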
Software, based on SAS and ftools (http://heasarc.gsfc.nasa.gov/ftools; \cite{blackburn}), to perform the procedure of
dividing and attitude-correcting slew data has been made available
via the {XMM-{\em Newton} } web-site\footnote{http://xmm.esac.esa.int/sas}.
The procedure has been repeated in three
separate energy bands: full band (0.2$-$0.5\,keV [pattern=0] +
0.5$-$12.0\,keV [pattern=0$-$4]), soft band (0.2$-$0.5\,keV
[pattern=0] + 0.5$-$2.0\,keV [pattern=0$-$4]), and hard band
(2.0$-$12.0\,keV [pattern=0$-$4]).
During the data processing stage severe problems with the transfer and
storage of files from long slews were encountered. To alleviate this,
only exposure maps for the full energy band were produced.
This means that an approximation is needed for the exposure times in the different
energy bands, which is addressed in section~\ref{sect:countrates}.
\subsection{Source searching}
Pilot studies were performed to investigate the optimum
processing and source-search strategies.
Uneven (and heightened) slew exposure is
observed at the end of some slews (the `closed-loop' phase) and images
with an exposure time greater than 20 seconds have been removed to
ensure the uniformity of the survey. We
tested a number of source-searching techniques and found that the
optimum strategy was to use a semi-standard
`eboxdetect (local) + esplinemap + eboxdetect (map) + emldetect'
method, tuned to $\sim$zero background, and performed on a single
image containing just the single events (pattern=0) in the
0.2$-$0.5\,keV band, plus single and double events (pattern=0$-$4) in
the 0.5$-$12.0\,keV band. This is similar to the technique used for
producing the RASS catalogue (\cite{Cruddace88}) and resulted in the
largest numbers of
detected sources, whilst minimising the number of spurious sources
due to detector anomalies (usually caused by non-single, very soft
($<$0.5\,keV) events). The source density was found to be $\approx$0.5
sources per square degree down to an emldetect detection likelihood threshold
({DET\_ML }) of 10 ($\sim3.9\sigma$).
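The quoted significances are consistent with the (single-band) emldetect
likelihood definition $L = -\ln p$, with $p$ the probability of the
detection arising by chance; a sketch of the conversion to an equivalent
one-sided Gaussian significance (requires {\tt scipy}):
\begin{verbatim}
# Sketch: emldetect likelihood L = -ln(p) expressed as an
# equivalent one-sided Gaussian significance.
import math
from scipy.stats import norm
for L in (8, 10):
    p = math.exp(-L)                      # chance probability
    print("DET_ML", L, "->", round(norm.isf(p), 1), "sigma")
# -> DET_ML 8 -> 3.4 sigma, DET_ML 10 -> 3.9 sigma
\end{verbatim}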
\section{Attitude reconstruction and positional accuracy}
\label{sect:attitude}
The good point spread function of the X-ray telescopes
(\cite{aschenbach}) should allow slew source positions to be determined to an
accuracy of around 4 arcseconds, similar to that found for faint
objects in the 1XMM catalogue of serendipitous sources detected in pointed
observations \footnote{The First XMM-Newton Serendipitous Source Catalogue, XMM-Newton Survey Science Centre (SSC), 2003.}. Any errors in the attitude reconstruction
for the slew could seriously degrade this performance and a major technical
challenge of the data processing has been to achieve the nominal accuracy.
The attitude information of the XMM-Newton satellite is provided by
the Attitude and Orbit Control Subsystem (AOCS). A star tracker co-aligned
with the telescopes allows up to five stars to be tracked
continuously, giving accurate star position data every 0.5 seconds; it
operates in addition to the Sun
sensor, which provides a precise Sun-line determination. Such information is
processed, resulting in an absolute accuracy of the reconstructed astrometry
of typically 1 arcsecond (1 sigma) for pointed observations.
For the open-loop slews, large slews outside the
star-tracker field of view of $3\times4$ degrees, the on-board software generates a
three axis momentum reference profile and a two-axis (roll and pitch)
Sun-sensor profile, both based on the ground slew telecommanding. During
slew manoeuvring a momentum correction is superimposed onto the reference
momentum profile and, as there are no absolute measurements for the yaw axis,
a residual yaw attitude error exists at the end of each slew that may be
corrected in the final closed-loop slew (Elfving 1999).
Two types of attitude data may be used as the primary
source of spacecraft positioning during event file processing.
They are the Raw Attitude File (RAF) and the Attitude History File (AHF).
For pointed observations, the RAF provides the attitude information at the maximum possible rate,
with one entry every 0.5 seconds, while the AHF is a smoothed and filtered version
of the RAF, with times rounded to the nearest second. In slew datasets the
RAF stores attitude information every 40--60 seconds while the AHF
contains the same records as the RAF with identical positions and again
with timing information in integer seconds.
The user can select which one to use for data processing by
setting an environment variable.
In a pilot study where the AHF was used for attitude reconstruction,
source detection was performed and correlations with the ROSAT and 2MASS
catalogues indicated a slew relative pointing accuracy of $\sim10$ arcseconds.
However, an absolute
error of 0--60 arcseconds (mean 30 arcseconds) was found in the slew direction,
resulting in a thin, slew-oriented error ellipse around each source.
This error appears to be consistent with the error introduced by the
quantisation of the time to 1 second in the attitude file, and led us to modify the processing software
in the expectation that a better accuracy could be obtained. As a test, the
RAF was used to compute the astrometry for some observations.
Here, an offset of $\sim1$ arcminute from the ROSAT positions was found,
but with a smaller scatter compared with the positions returned by the AHF
processing. The consistency of these offsets suggested that they could be due to a timing issue. After discussions with the flight dynamics group it
was realised that the star tracker CCD integration time of 0.75 seconds
is not included in the times in the attitude history.
When this 0.75 seconds is subtracted from every entry in the
RAF we obtain an optimal attitude file for the processing.
Note that this offset remains in all XMM raw attitude files but an
automatic correction has been applied in the SAS software from version 7
onwards. Also note that this discrepancy has no practical effect on normal
stable-pointed observational data.
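The magnitude of these timing effects follows directly from the slew rate
(an illustrative sketch):
\begin{verbatim}
# Sketch: positional error along the slew direction caused by
# attitude timing errors, at a slew rate of 90 deg/h = 90 arcsec/s.
rate = 90.0                                # arcsec/s
print("1 s AHF time quantisation : up to", 1.0 * rate, "arcsec")
print("0.75 s star-tracker offset:", 0.75 * rate, "arcsec")
# -> 67.5 arcsec, matching the ~1 arcminute RAF offset; the 1 s
#    quantisation is broadly consistent with the 0-60 arcsec
#    scatter seen with the AHF.
\end{verbatim}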
Other issues affecting the astrometric performance emerged from a careful
visual examination of the RAF files, in which two types of peculiarity
were found in some of the slews, affecting either a localised region or the totality
of the slew.
Five of the slews presented sharp discontinuities in the attitude
reconstruction, revealing the
existence of a single bad RAF point.
As an example,
a source in slew 9073300002 was discovered to have its closest ROSAT counterpart at
a distance of 8 arcminutes. Investigation showed that the source was observed at a time
coincident with a large error in the attitude file (Fig.~\ref{fig:peak}).
A test involving the removal of the bad point and recalculation of the
attitude, while showing an improvement in source positions, did not improve
the astrometry to the level of the other slews. Therefore, sources falling
in a region of bad attitude have been flagged in the catalogue
with the `Position Suspect'
flag (see section~\ref{sect:flags}).
Sections of seven other slews displayed an attitude reconstruction
that can best be described as turbulent
(Fig.~\ref{fig:turbulence}). Again, sources falling in these slew
sections have been marked with the `Position Suspect'
flag.
A subsample of 1260 non-extended sources (defined as having an extent parameter
$<2$ from the emldetect source fitting) with ${DET\_ML }>10$ has been correlated with
several catalogues within a 60 arcsecond offset.
The correlation with the RASS reveals that 63\% of the slew sources
have an X-ray counterpart, of which 68\% (90\%) lie within 16 (31) arcseconds
(Fig.~\ref{fig:ROSAThist}).
This gives confidence that the
majority of slews have well reconstructed attitude.
To form a sample of catalogues with highly accurate positions,
while minimising the number of false matches, we used the Astronomical Virtual Observatory (AVO)
to correlate the slew positions against non-X-ray SIMBAD catalogues. This
gave 508 matches, of which 68\% (90\%) were contained within 8 (17) arcseconds (Fig.~\ref{fig:Simbadhist}).
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{90}{\includegraphics[height=9.5cm]{9193fg4.ps}}
\end{tabular}
\end{center}
\caption[Rev 0733 peak]
{ \label{fig:peak}
A zoom into the problematic region of the attitude file in the slew 9073300002.
The points show the generated attitude information.
}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{90}{\includegraphics[height=9.5cm]{9193fg5.ps}}
\end{tabular}
\end{center}
\caption[Rev 0841 turbulence]
{ \label{fig:turbulence}
A plot showing non-smooth, or turbulent, attitude reconstruction in
the revolution 0841 attitude file.}
\end{figure}
\begin{figure}
\centering
\rotatebox{90}{\includegraphics[height=9.5cm]{9193fg6.ps}}
\rotatebox{90}{\includegraphics[height=9.5cm]{9193fg7.ps}}
\caption[RASS comparison]
{ \label{fig:ROSAThist}
A comparison of slew source positions with those from
the RASS catalogue; 68\% of the sources lie within 16 arcseconds.
The upper panel shows a histogram of the offset magnitude while the
lower panel gives the absolute offset in arcseconds of the slew source
from the ROSAT position.}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{90}{\includegraphics[height=9.5cm]{9193fg8.ps}}
\end{tabular}
\end{center}
\caption[Simbad cross-correlation]
{ \label{fig:Simbadhist}
A histogram of the distribution of the angular separation
of the slew sources from their Simbad counterpart;
68\% of the matches lie within 8 arcseconds.}
\end{figure}
\section{The catalogue}
Source lists were produced by searching each slew down to a likelihood
${DET\_ML }>8$ and combined to produce an initial catalogue of 5180
detections. This is available as the ``full'' catalogue, where known spurious
detections have been flagged out according to a set of criteria
laid out in the next section.
\subsection{Causes of spurious detections}
\label{sect:flags}
\subsubsection{Multiple detections of an extended source}
The source detection software attempts to parameterise source extents
up to 20 pixels (82 arcseconds here) in radius. Larger sources, or sources with
discontinuous or lumpy emission are reported as multiple small sources.
This is particularly evident for the bright supernovae remnants (SNR),
e.g. Puppis-A
which results in 81 separate, confused, detections (Fig.~\ref{fig:pupa}).
All affected sources
have the VER\_INEXT flag set true.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=5cm]{9193fg9.ps}}
\end{tabular}
\end{center}
\caption[Large extended sources: Puppis-A]
{ \label{fig:pupa} The slew image of the large SNR, Puppis-A. It is
detected as many small sources (circles).
}
\end{figure}
\subsubsection{The wings of the PSF of a bright source}
It was noticed during the construction of the 1XMM serendipitous
source catalogue
that, due to the imperfect modelling of the PSF, a halo of false detections
is often seen around bright sources. The same effect is seen in slew
exposures but due to the reduced exposure time is only important for
very bright sources $\gg 10$ c/s. All affected sources
have the VER\_HALO flag set true.
\subsubsection{High background}
Flares in the background, due to solar protons (\cite{lumb02}, \cite{carter07}), cause a sudden
increase in the number of events seen in a slew, which mimics the effect
of slewing over an SNR (Fig.~\ref{fig:bgndflare}). These flares typically last between
10 and 40 seconds and hence affect between 15 arcminutes and 1 degree
of a slew. No automatic flare screening has been performed on the
data, but light curves of all slews have been manually inspected
and 175 sources falling within flare-affected sections have been flagged
as bad. All affected sources have the VER\_HIBGND flag set true.
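The affected slew length follows directly from the slew rate of 90 degrees
per hour, i.e.\ 1.5 arcminutes per second:
\[
10\,{\rm s} \times 1.5\,{\rm arcmin/s} = 15\,{\rm arcmin}, \qquad
40\,{\rm s} \times 1.5\,{\rm arcmin/s} = 60\,{\rm arcmin} = 1\,{\rm degree}.
\]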
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg10.ps}}
\end{tabular}
\end{center}
\caption[Effect of background flare]
{ \label{fig:bgndflare} A heavily smoothed section of the slew 9097100002.
The bright patch of events in the centre has been produced
by a background flare.
}
\end{figure}
\subsubsection{Bright sources outside detector field-of-view}
Reflections from the CRAB SNR, about 10 arcminutes outside the
EPIC-pn field-of-view during the slew 9041000004, and the TYCHO SNR,
5 arcminutes outside the field-of-view during the slew 9058600002,
caused 5 false detections; VER\_HALO flag set true.
\subsubsection{Bad position}
Seven sources are found near the edge of the detector or the edge of an image,
which leaves both the true count rate of the
source and the position of its centre uncertain.
These sources have the VER\_NREDG flag set true and also the
VER\_PSUSP flag set true to indicate that the position is suspect.
In addition, sources in sections of slew with bad attitude
(see section~\ref{sect:attitude})
have their VER\_PSUSP flag set true.
\subsubsection{Zero exposure}
Two sources are flagged as false because they lie at the very edge of the
slew and are reported as having zero exposure time in one or more
of the energy bands. These have the VER\_FALSE flag set true.
\subsubsection{Optical Loading}
Despite initial concerns that the lack of a detector offset map during slews
would lead to optical loading problems, in practice little or no effect
was found. In Fig.~\ref{fig:optload} we show a first magnitude star
which is brilliant in raw slew data but completely disappears when the
default event filtering is applied. None of the source fluxes in the slew
catalogue are believed to be contaminated by optical photons.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg11.ps}}
\end{tabular}
\end{center}
\caption[Optical loading check]
{ \label{fig:optload} Left: A raw image of gamma Cru, an M star with
m$_{v}$=1.6, detected in slew 90130900002. Right: The same image after
applying the filter (FLAG==0, PATTERN==0, PI$>$200).
}
\end{figure}
\subsection{Statistical fluctuations}
The number of detections, after removing the spurious sources
highlighted in the previous section, rises steeply with
decreasing detection likelihood (Fig.~\ref{fig:srccntvdetml}) as expected.
The number of detections also depends on the background rate within
the image in which the source was found (Fig.~\ref{fig:srccntvbckgnd}),
where the background is defined as the event
count rate above 10 keV ($PI>10000$).
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg12.ps}}
\end{tabular}
\end{center}
\caption[Sources as function of likelihood]
{ \label{fig:srccntvdetml} Number of detections as a function of detection likelihood
(total band; 0.2--12 keV).
}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg13.ps}}
\end{tabular}
\end{center}
\caption[The source density as a function of the
background rate in the image in which the source was found]
{ \label{fig:srccntvbckgnd} The mean density of total band (0.2-12 keV)
sources, flagged good in the full catalogue, plotted as a
function of the background count rate (PI$>10000$).
}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg14.ps}}
\end{tabular}
\end{center}
\caption[False source rate from simulations]
{ \label{fig:falserate} The false source rate as determined from simulations.
The histogram represents the distribution of counts in the real slew images.
}
\end{figure}
\begin{table}
\caption{The number of sources found in slew images and in
simulated slew images for combinations of minimum detection likelihood
and background count rate}
\label{tab:spurFrac}
\begin{center}
\begin{tabular}{l l l l l l}
\hline\hline
ML$^{a}$ & bkg$^{b}$ & \multicolumn{4}{c}{Band$^{c}$} \\
& c/s & Combined & Total & Hard & Soft \\
\hline
\hline
8 & - & 4710 / 929 & 3863 / 580 & 692 / 272 & 2606 / 186 \\
8 & 3.0 & 3451 / 456 & 3018 / 348 & 427 / 93 & 1981 / 69 \\
10 & - & 2998 / 195 & 2580 / 118 & 312 / 61 & 2031 / 46 \\
10 & 3.0 & 2419 / 106 & 2155 / 86 & 239 / 24 & 1638 / 13 \\
12 & - & 2161 / 40 & 1875 / 28 & 185 / 12 & 1661 / 10 \\
12 & 3.0 & 1782 / 21 & 1587 / 20 & 157 / 5 & 1354 / 2 \\
14 & - & 1700 / 9 & 1470 / 7 & 139 / 4 & 1361 / 1 \\
14 & 3.0 & 1423 / 6 & 1257 / 4 & 119 / 3 & 1122 / 1 \\
10/14$^{d}$ & 3.0/- & 2696 / 109 & 2368 / 89 & 259 / 25 & 1877 / 13 \\
\hline
\end{tabular}
\end{center}
$^{a}$ The minimum source likelihood. \\
$^{b}$ The maximum background rate accepted within an image, defined
as the count rate of events with energy greater than 10 keV (PI$>10000$). \\
$^{c}$ The number of detected sources (first number) and the number of expected false sources (second number) from simulations, in this energy band, for this combination of
minimum detection likelihood and maximum background count rate.
The "combined" band is made from the unique distinct sources detected in
any of the total, soft or hard bands.\\
$^{d}$ A selection of all sources with DET\_ML$>$14 and sources with
DET\_ML$>$10 from images where the background count rate is less than 3 c/s.\\
\end{table}
Simulations have been conducted to investigate the
relationship between the number of spurious
sources expected from background fluctuations and the number of
events in an image. Simulated
slew images have been created by inserting events into a template
slew image, of 842 by 600 pixels and area 0.5 square degrees, at
random positions. A flat spatial distribution of events
has been used
because the background is likely dominated by charged particle induced
events, which show little variation across the detector, and internal
fluorescent emission lines, which map the distribution of metals in the
detector itself (\cite{lumb02}).
The resultant simulated images have been source searched
and reveal a strong increase in the number of spurious detections
with image counts, that rises steeply until
around 500 events and then flattens out (Fig.~\ref{fig:falserate}).
We have overlaid the distribution of counts in the real slew
images in figure~\ref{fig:falserate}
to show that the majority of real images should generate few spurious sources
for {DET\_ML }$>12$.
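For illustration, a minimal sketch (Python/NumPy) of the event-randomisation
step used in these simulations is given below; the image dimensions follow
the template described above, while the event counts and the final
source-detection step are placeholders for the actual chain.
\begin{verbatim}
import numpy as np

def simulate_slew_image(n_events, nx=842, ny=600, seed=None):
    """Insert n_events at uniformly random pixel positions in an
    842 x 600 template image (a flat spatial distribution, as
    appropriate for a particle-dominated background)."""
    rng = np.random.default_rng(seed)
    image = np.zeros((ny, nx), dtype=int)
    x = rng.integers(0, nx, size=n_events)
    y = rng.integers(0, ny, size=n_events)
    np.add.at(image, (y, x), 1)  # several events may share a pixel
    return image

for n in (100, 500, 2000):     # span the observed range of counts
    img = simulate_slew_image(n, seed=1)
    # ... run the source-detection chain on img and record the
    #     number of (spurious) detections above each DET_ML ...
\end{verbatim}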
To assess the absolute number of false sources found by the source detection
chain as a function of {DET\_ML }, the positions of the photons in all 11137
slew images have been randomised and the images source-searched again.
Results show that a significant number of sources with low {DET\_ML } can
be expected to be false. From Fig.~\ref{fig:falserate} we know that
the false source rate is influenced by the number of events in the image
which is in turn
related to the background rate. Selections of {DET\_ML } and image background rate
can be made from the simulation results to choose a particular
spurious source fraction for a given purpose (Table~\ref{tab:spurFrac}).
It is clear that the hard band, having
typically a higher background than the soft band and a lower signal to
noise ratio than either the soft or total band, is most affected by
statistical fluctuations. A source selection [{DET\_ML }$>14$ or
({DET\_ML }$>10$ and image\_bckgnd\_rate $\leq 3.0$)] gives an expected 25 false
detections from 259 hard band sources ($\sim 9\%$), 13 from 1877 (0.7\%)
for the soft band and 109 from 2696 (4\%) for all sources.
Note that 80\% of images have
a background rate $\leq 3.0$ c/s. A list of sources has been produced from this
selection and is termed the 'clean' catalogue. Finally, four sources which
showed large position errors, due to the attitude file problems
discussed in section~\ref{sect:attitude} had their VER\_PSUSP flag
set true in the 'full' catalogue and were removed from the 'clean'
catalogue. This leaves
a total of 2692 sources (257 in the hard band) in the 'clean' catalogue.
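As an illustration, the 'clean' selection can be reproduced from the
catalogue columns in a few lines of Python; the column names used below
(det\_ml, bg\_rate and the ver\_* flags) are indicative only and should be
checked against the released catalogue headers.
\begin{verbatim}
import numpy as np

def clean_mask(det_ml, bg_rate, ver_inext, ver_hibgnd,
               ver_halo, ver_false, ver_psusp):
    """Boolean mask implementing DET_ML > 14, or DET_ML > 10 with
    an image background rate <= 3.0 c/s, with all spurious-source
    flags (including the suspect-position flag) set false."""
    likelihood = (det_ml > 14) | ((det_ml > 10) & (bg_rate <= 3.0))
    flagged = ver_inext | ver_hibgnd | ver_halo | ver_false | ver_psusp
    return likelihood & ~flagged
\end{verbatim}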
\subsection{Released Catalogues}
Both the 'full' catalogue, with 5180 detections with {DET\_ML }$>8$ and
the 'clean' catalogue, with 2692 sources having {DET\_ML }$>14$ or
({DET\_ML }$>10$ and image\_bckgnd\_rate $\leq 3.0$) and the VER\_INEXT, VER\_HIBGND,
VER\_HALO and VER\_FALSE flags set false (see section ~\ref{sect:flags})
are available from {\it http://xmm.esac.esa.int/external/xmm\_data\_acc/xsa}.
The catalogues contain columns with the detection
threshold, background
rate and spurious source flags sufficient to allow the user to select
a subsample for a particular scientific purpose. The 'clean' catalogue
is conservative for the soft band but may not be strict enough for some
applications in the hard band.
From hereon we discuss the properties of sources drawn from the 'clean'
catalogue unless otherwise stated.
\section{Source properties}
\subsection{Naming convention}
The adopted name for sources detected in the XMM-Newton slew survey
starts with the prefix, XMMSL1, and
then encodes the J2000 sky position, e.g. XMMSL1 J010537.6+364858. The name
is assigned in two passes. When the three independent energy band source
lists are combined to form one catalogue the source name is set using
the position in the band where the DET\_ML likelihood is the highest.
A second pass is then performed such that sources which have been
observed in more than one slew are given the same name.
Again, priority is given depending on the detection likelihood.
Detections are deemed to be from the same source if their
centres lie within 30 arcseconds of each other.
Given the scarcity of slew sources on the sky (0.8 detections in the 'full'
catalogue per square degree),
30 arcseconds was found to be a reasonably robust match radius
for point sources. It is not so good for extended sources
and the catalogue in some cases contains multiple detections of the
same extended source with different names.
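For illustration, a name of this form can be generated from a J2000
position as sketched below; whether the coordinates are truncated or
rounded to the quoted precision is an assumption made here and should be
verified against the catalogue itself.
\begin{verbatim}
import math

def xmmsl1_name(ra_deg, dec_deg):
    """Encode a J2000 position (degrees) as XMMSL1 Jhhmmss.s+ddmmss,
    truncating (not rounding) at the quoted precision."""
    hh, rest = divmod(ra_deg * 240.0, 3600.0)   # deg -> sec of time
    mm, ss = divmod(rest, 60.0)
    ss = math.floor(ss * 10.0) / 10.0
    sign = '+' if dec_deg >= 0 else '-'
    dd, rest = divmod(abs(dec_deg) * 3600.0, 3600.0)  # deg -> arcsec
    dm, ds = divmod(rest, 60.0)
    return ('XMMSL1 J%02d%02d%04.1f%s%02d%02d%02d'
            % (hh, mm, ss, sign, dd, dm, ds))

# xmmsl1_name(16.40667, 36.81611) -> 'XMMSL1 J010537.6+364858'
\end{verbatim}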
\subsection{Source Extent}
The source search algorithm attempts to parameterise detections as a
point source or as an extended source with a radius of up to 20 pixels (82 arcseconds).
A large number of sources (30\%) are detected as being extended
(extension parameter $>1$).
The measured extension is related to the extension likelihood and
to the number of source counts as shown in Fig.~\ref{fig:extvcnts}.
Here it can
be seen that there is an upper branch where an increasing number of
photons are needed to detect larger source extensions and a lower
branch where small extensions are found for a large range of source
strengths and extension likelihoods. The point spread function (PSF)
for the EPIC-pn detector is a function of off-axis angle
(the distance between the optical-axis and the source). In the data analysis
we have used the average off-axis angle of the path of a source
through the detector, to calculate the appropriate PSF.
This is reasonably accurate for the LW and FF modes where the frame time
is short and the extension of the PSF along the slew direction small,
but introduces an inaccuracy for the $\sim20\%$ of observations taken in
eFF mode, where the extension along the slew direction is $\sim18\arcsec$
(Table~\ref{tab:obsMode}). The lower branch is contaminated by false detections
of extension caused by this effect, as demonstrated by the
fraction of eFF sources in this branch (35\%) compared with the expected
20\% in the upper branch. At large count rates, photon pile-up depresses the
counts in the central pixels of the source profile, also causing an apparent
extension. A correct treatment of the slew PSF is needed to properly
parameterise source extension; nevertheless, sources falling in the upper
branch of Fig.~\ref{fig:extvcnts} are considered to be genuinely extended.
\begin{table}
\caption{Observing mode statistics}
\label{tab:obsMode}
\begin{center}
\begin{tabular}{l c c c}
\hline\hline
Mode & Frame Time & Extension$^{a}$ & Fraction$^{b}$ \\
& (ms) & (arcseconds) & \% \\
\hline
\hline
FF & 73 & 6.6 & 69 \\
eFF & 199 & 17.9 & 21 \\
LW & 48 & 4.3 & 10 \\
\hline
\end{tabular}
\end{center}
$^{a}$ The extension of the PSF along the slew direction caused by the
satellite movement during CCD integration. \\
$^{b}$ The percentage of the slew sky area covered in this mode. \\
\end{table}
\begin{figure}
\centering
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg15.ps}}
\\
\vspace{0.5cm}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg16.ps}}
\caption[Source extent v source counts]
{ \label{fig:extvcnts} Source extension, in units of 4.1-arcsecond image
pixels, plotted against the extension likelihood (upper) and number of
source counts (lower) for extended sources detected in the total energy band.
Sources identified as clusters of galaxies are marked with circles.
}
\end{figure}
\subsection{Count rates}
\label{sect:countrates}
The count rates are calculated from the background subtracted counts within
a circle about the source, corrected for the
encircled energy fraction and divided by the PSF-weighted exposure time
within the source region (see the {\em emldetect} user guide for more details
\footnote{http://xmm.esac.esa.int/sas/6.1.0/doc/emldetect/index.html}).
Owing to limited resources, source searching was performed in
all bands using the exposure map for the total energy band. This produces
incorrect exposures in the soft and hard bands due to the energy-dependent
vignetting. Correction factors:
\begin{equation}
e_{\mathrm{h}} = 0.0013806 \times e_{\mathrm{t}}^{3} + 0.0085632 \times e_{\mathrm{t}}^{2} + 0.84282 \times e_{\mathrm{t}}
\end{equation}
\begin{equation}
e_{\mathrm{s}} = 0.0014723 \times e_{\mathrm{t}}^{3} - 0.058884 \times e_{\mathrm{t}}^{2} + 1.3509\times e_{\mathrm{t}}
\end{equation}
where e$_{s}$, e$_{h}$ and e$_{t}$ are the soft, hard and total band
exposure times respectively, have been applied to the exposure times
to correct for this effect. These factors were calculated by comparing
several sets of total, soft and hard-band exposure maps and fitting a function
to the relationship between the bands.
This introduces a systematic uncertainty
of $\sim5$\% into the soft and hard band exposure times and count rates.
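In code, applying this correction amounts to evaluating the two cubic
polynomials on the total-band exposure; a minimal sketch follows, with the
coefficients copied from the equations above.
\begin{verbatim}
import numpy as np

def corrected_exposures(e_t):
    """Convert total-band exposure(s) e_t into the hard- and
    soft-band exposures, using the cubic fits quoted above."""
    e_t = np.asarray(e_t, dtype=float)
    e_h = 0.0013806*e_t**3 + 0.0085632*e_t**2 + 0.84282*e_t
    e_s = 0.0014723*e_t**3 - 0.058884*e_t**2 + 1.3509*e_t
    return e_h, e_s
\end{verbatim}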
Due to the limit of 82 arcseconds on source extent used by the search algorithm, sources extended beyond this value will have their count rate underestimated.
Very bright sources are affected by photon pile-up which tends to reduce
the count rate. The source strength limits for the observing modes are
given by \cite{struder} for FF mode as 6 c/s and for LW mode as 9 c/s.
For eFF mode the pile-up limit is in principle 2 c/s for a pointed observation
but will be higher here as the slewing movement, together with the relatively
long frame time, will reduce the count rate
on the central pixel of the PSF.
A comparison of the soft band count rates against RASS count rates
is presented in Fig.~\ref{fig:roscnts} for 894 non-extended sources
with a ROSAT counterpart. The count rate ratio,
XMM/RASS, is typically $\sim10$ but varies considerably with the
source spectrum. The comparison of these two surveys, coupled with
upper-limits analysis represents a powerful tool for finding high
variability X-ray sources. An initial analysis of high variability extragalactic
sources found in this way has been published in \cite{esquej}.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=8cm]{9193fg17.ps}}
\end{tabular}
\end{center}
\caption[A comparison of RASS and slew soft band count rates]
{ \label{fig:roscnts} A comparison of RASS and slew soft band count rates
(c/s). The
solid line represents a ratio of 10:1 and the dotted lines represent
a factor of 10 variation from this value.
}
\end{figure}
\subsection{Flux conversion}
Source fluxes have been calculated from count rates using energy conversion factors that assume a spectral model of an absorbed power law with $N_{H} = 3 \times 10^{20}$ cm$^{-2}$ and photon index 1.7 (see the XMM Science Survey Centre memo,
SSC-LUX TN-0059, for a general description of the technique). The energy
conversion factors are given in Table~\ref{tab:fluxConv}.
\begin{table}
\caption{Flux conversion factors}
\label{tab:fluxConv}
\begin{center}
\begin{tabular}{c c c}
\hline\hline
Band & energy range & Conversion factor$^{a}$ \\
& (keV) & \\
\hline
\hline
Total & 0.2--12.0 & 3.16 \\
Hard & 2.0--12.0 & 9.14 \\
Soft & 0.2--2.0 & 1.44 \\
\hline
\end{tabular}
\end{center}
$^{a}$ Converts from source count rate (c/s) to flux in the given energy
band in units of $10^{-12}$ ergs/s/cm$^{2}$ \\
\end{table}
The soft-band fluxes are particularly dependent on the spectral model
used and can be quite discrepant for stars where the absorbing
column may be small.
At the fluxes probed here, source confusion within the 20 arcseconds extraction
radius is almost absent.
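Applying these factors is a one-line operation per band, as sketched
below; the unit handling follows the table note ($10^{-12}$
ergs/s/cm$^{2}$ per c/s).
\begin{verbatim}
# Energy conversion factors from the table above: count rate (c/s)
# to flux, assuming an absorbed power law with N_H = 3e20 cm^-2
# and photon index 1.7.
ECF = {'total': 3.16, 'hard': 9.14, 'soft': 1.44}

def flux_cgs(rate, band):
    """Band flux in ergs/s/cm^2 from the band count rate."""
    return rate * ECF[band] * 1e-12
\end{verbatim}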
\subsection{Hardness ratio}
In Fig.~\ref{fig:allsrchr} we plot the positions in Galactic coordinates of the soft
and hard band sources and separately all the sources, colour-coded by
hardness ratio. The hardness ratio is defined as
\begin{equation}
H_{R} = \frac{S_{H} - S_{S}}{S_{H} + S_{S}}
\end{equation}
where $S_{H}$ is defined as the source count rate in the hard band
and $S_{S}$ as the count rate in the soft band.
As expected, hard sources, notably the bright low-mass X-ray binaries (LMXBs), predominate in the
Galactic plane and soft sources at higher Galactic latitudes.
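A sketch of the hardness-ratio computation is given below, including the
three-photon upper limit adopted later (see Fig.~\ref{fig:hrmag}) for
non-detections; converting that limit into a count rate requires the band
exposure time, which is therefore passed in explicitly here.
\begin{verbatim}
def hardness_ratio(s_hard, s_soft, exp_hard=None, exp_soft=None):
    """HR = (S_H - S_S)/(S_H + S_S) from the band count rates.
    For a non-detection, pass None for the rate together with the
    band exposure time, so that an upper limit of three photons
    is adopted for that band."""
    if s_hard is None:
        s_hard = 3.0 / exp_hard   # 3-photon upper limit (c/s)
    if s_soft is None:
        s_soft = 3.0 / exp_soft
    return (s_hard - s_soft) / (s_hard + s_soft)
\end{verbatim}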
\begin{figure}
\begin{center}
\rotatebox{90}{\includegraphics[bb=25 -675 500 -50, height=9cm]{9193fg18.ps}}
\rotatebox{90}{\includegraphics[bb=25 -675 500 -50, height=9cm]{9193fg19.ps}}
\rotatebox{90}{\includegraphics[bb=25 -675 500 -50, height=9cm]{9193fg20.ps}}
\end{center}
\caption[An AITOFF projection in Galactic coordinates of the slew
sources.]
{ \label{fig:allsrchr} An AITOFF projection in Galactic coordinates of
sources from the first {XMM-{\em Newton} } slew survey, where the circle size scales
logarithmically with the count rate. Top panel: the total band sources;
the hardness ratio is colour-coded such
that light red is soft and blue is hard. Middle panel: the soft band
sources. Bottom panel: the hard band sources. The two bright sources seen
in the hard-band plot at
latitude $\sim -30$ are LMC~X-1 and LMC~X-3.
}
\end{figure}
\subsection{Spectra}
Spectral analysis of slew data is fundamentally complicated by
the fact that a source bright enough to produce a reasonable spectrum
will suffer from photon pile-up.
A quantitative analysis of these spectra awaits the implementation of a
realistic pile-up correction and the development of an energy-dependent
point spread function, which will be a vignetting-weighted average of
the PSF at each detector position traversed by the motion of the source
through the detector.
\section{Sample properties}
In Fig.~\ref{fig:logns} we show the cumulative number count distribution for the
full energy band for sources within ($|b|\le20^\circ$) and outside
($|b|>20^\circ$)
the Galactic plane. A linear fit to sources outside the Galactic plane,
in the central part of the distribution (log counts between -0.2 and 1.0)
gives a slope of $-1.41\pm0.01$. This is a little flatter than the
Euclidean slope of 1.5 found by ASCA (\cite{ueda}) and earlier by
HEAO-1 A2 (\cite{piccinotti}) and Ginga (\cite{kondo}).
The difference in slope may be due to incompleteness, which appears to become evident in
the source distribution below $\sim 0.8$ c/s.
The distribution of sources within the
Galactic
plane shows a break at about 12 c/s (F$_{0.2-12}=3.6\times10^{-11}$
ergs s$^{-1}$ cm$^{-2}$ for an absorbed power-law with $N_{H} = 3 \times 10^{20}$ cm$^{-2}$ and photon index 1.7). At fainter fluxes the slope
is $-1.29\pm0.02$ while for brighter sources it is $-0.43\pm0.02$.
The bright source population agrees well with the previous result of UHURU
(\cite{forman}), except at fluxes above F$_{0.2-12}\sim 8\times10^{-10}$ ergs s$^{-1}$ cm$^{-2}$ where gross pile-up effects significantly reduce the observed count rate (see section~\ref{sect:countrates}).
The log {\it N}--log {\it S} relation below F$_{0.2-12}=3.6\times10^{-11}$
is steeper than the $0.79\pm0.07$ observed by ASCA in a survey with
$|b|<0.3^\circ$ in the 2--10 keV band (\cite{sugizaki}). It is likely that
the difference is at least partly due to the sky regions sampled.
The wider {XMM-{\em Newton} } slew sample will,
for example, contain a higher fraction of extragalactic sources.
A thorough analysis of the log {\it N}--log {\it S} relation will require a
careful analysis of the effects of pile-up, completeness, sky coverage
and the Eddington bias and we defer this to a later date when a larger
fraction of the sky has been covered.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=9cm]{9193fg21.ps}}
\end{tabular}
\end{center}
\caption[LogN/logS plot for total band sources]
{ \label{fig:logns} The cumulative number count distribution of total band (0.2-12 keV) sources inside and outside the Galactic plane, plotted against count rate and absorbed flux. A spectral model of an absorbed power-law of photon index 1.7
and $N_{H} = 3 \times 10^{20}$ cm$^{-2}$ has been used to convert the slew count rate into flux.
The UHURU Galactic plane (dotted line; \cite{forman})
and the ASCA Galactic plane survey (dotdash line; \cite{sugizaki}) log {\it N}--log {\it S} relations
have been displayed for comparison. In both of these cases the fluxes have been converted to an energy range 0.2--12 keV and to the above spectral model.
}
\end{figure}
\section{Duplicate detections and variability}
In the full catalogue 96 sources have been detected in more than one slew.
Count rate variability up to a factor 5, 5 and 2 is seen in the total, soft
and hard bands respectively. The greatest variability is seen in the total
and soft band
fluxes of XMMSL1 J125616.0-114632 (2MASXJ12561595-1146367); which
varied by a factor 5 between slews 9037400004 and 9083600004 (a baseline
of 2.5 years). An upper
limits analysis will likely find greater variability.
\section{Identifications}
All sources detected in the survey have been correlated with different
catalogues in order to identify the XMM-Newton slew sources with
previously known objects (See Table~\ref{tab:cats} for a summary of
the resources used). The catalogues used for this aim comprise two
astronomical databases, a catalogue of clusters of galaxies and nine other
catalogues (some which have been queried through the HEASARC astronomical
database). Although the astrometric uncertainty of the slew sources was found
to be 8 arcseconds, the offset radius for the correlations was
30 arcseconds (with a few exceptions described below) in order
to include sources from catalogues with worse accuracy or truncated coordinates.
For the EXOSAT CMA catalogue the offset radius was 45 arcseconds,
and for the Einstein IPC it was 2 arcmin, both due
to the larger uncertainty in source coordinates. A radius of 5
arcmin was chosen for the clusters catalogue due to the extension of
this type of object.
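The cross-correlation itself reduces to a nearest-neighbour search within
a per-catalogue radius; a minimal sketch using a small-angle approximation
is given below (the tools actually used for the correlation are not
specified here).
\begin{verbatim}
import numpy as np

def separation_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation (arcseconds) between J2000 positions
    in degrees; adequate for arcminute-scale offsets away from
    the celestial poles."""
    dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
    return np.hypot(dra, dec1 - dec2) * 3600.0

def closest_match(ra, dec, cat_ra, cat_dec, radius=30.0):
    """Index of the closest catalogue entry within radius
    (arcseconds), or None; multiple matches are resolved by
    taking the smallest offset."""
    sep = separation_arcsec(ra, dec, np.asarray(cat_ra),
                            np.asarray(cat_dec))
    i = int(np.argmin(sep))
    return i if sep[i] <= radius else None
\end{verbatim}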
The identification process results in unidentified sources, sources with a
single counterpart and sources with multiple matches. For sources with
several counterparts, a hierarchical selection scheme has been applied
to derive the most plausible identification candidate,
as described below. Firstly, the SIMBAD and
NED astronomical databases have been used for the cross-correlation and
results from both databases have been compared in detail (SIMBAD has
been queried in the frame of the Astronomical Virtual Observatory (AVO)).
When a source has the same counterpart in both databases, the one
selected for the identification is the entry that gives the more specific
source category. When contradictory identification candidates have
been found, the one with the smallest positional offset has been chosen.
These two
databases provide the large majority (90\%) of the total number
of identifications. Then, a correlation with a clusters table
(Abell and Zwicky Clusters of Galaxies) has been performed.
The final identification for sources with a reported extension greater
than 2 pixels, which have a SIMBAD/NED and also a cluster counterpart,
comes from the
clusters table. The remaining catalogues used for the cross-match are
listed below ordered in priority for the preferred identification. For sources
with multiple matches in a catalogue, the identification was selected as
the closest match. These catalogues are: All-Sky Optical
Catalog of Radio/X-Ray Sources, Catalog of PSPC
WGA Sources, Einstein IPC Sources Catalog, EXOSAT CMA Images/Lightcurves,
ROSAT All-Sky Survey catalogue, ROSAT Results Archive Sources for
the PSPC, ROSAT Results Archive Sources for the HRI, RXTE Master
Catalog, XMM-Newton Serendipitous Source Catalog, Version 1.1.0,
INTEGRAL Bright Source Catalog. Results from the identification process
appear in the final catalogue in the columns:
\begin{table}
\caption{Catalogues used for source identification}
\label{tab:cats}
\begin{center}
\begin{tabular}{l c l}
\hline\hline
Catalogue & match radius & Reference \\
& (arcseconds) & \\
\hline
\hline
SIMBAD & 30 & CDS$^{a}$ \\
NED & 30 & NED$^{b}$ \\
Abell clusters & 300 & \cite{Abell} \\
Zwicky clusters & 300 & \cite{zwicky}\\
All-Sky Optical Catalog & & \\
of Radio/X-Ray Sources & 30 & \cite{fleschAndHard}\\
Catalog of PSPC WGA sources & 30 & \cite{white04}\\
Einstein IPC & 120 & \cite{harris}\\
EXOSAT CMA & 45 & HEASARC$^{c}$\\
RASS & 30 & \cite{voges} \\
ROSAT PSPC Results Archive & 30 & HEASARC$^{c}$\\
ROSAT HRI Results Archive & 30 & HEASARC$^{c}$\\
RXTE Master Catalog & 30 & HEASARC$^{c}$\\
1XMM V1.1.0 & 30 & \cite{watson}\\
INTEGRAL BSC & 30 & HEASARC$^{c}$\\
\hline
\end{tabular}
\end{center}
$^{a}$ CDS, Strasbourg (2006) \\
$^{b}$ The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\\
$^{c}$ Data obtained from the High
Energy Astrophysics Science Archive Research Center (HEASARC), provided
by NASA's Goddard Space Flight Center.
\end{table}
IDENT: name of the source \\
ALTIDENT: alternative name of the source \\
ID\_DIST: distance in arcminutes between the slew source and the identification. Distances have been rounded to the nearest 0.1 arcminutes to ensure a uniform accuracy across the catalogues. \\
ID\_CATEGORY: the type of the identified source, extracted, where
available, from the catalogues (the types are not fully homogeneous
because the type convention differs between catalogues) \\
RASSNAME: the closest RASS match \\
RASSDIST: distance in arcseconds to the closest RASS match.
Of the full catalogue sources, not flagged as spurious, 51\% are identified.
The fraction rises to 71\% when only sources from the 'clean' catalogue are
considered. Of these, 48\% are extragalactic, 30\% are galactic and the
remainder are of unknown type, e.g. "X-ray source".
The identification list is expected to improve as more catalogues come on-line
and with follow-up observations. The list is maintained at
{\it http://xmm.esac.esa.int/external/xmm\_science/slew\_survey}
{\it/ident\_tab.html}
and suggestions for counterparts are welcomed.
In figure~\ref{fig:magflux} we plot the ratio of the X-ray to optical flux against the optical magnitude. Here we define the X-ray to optical flux ratio
using the total band X-ray flux and the optical blue-band flux. Optical
fluxes have been obtained either from the Simbad counterpart or
from the USNO B1 magnitude for the cases where a single plausible match
is found within the slew error circle. Distinct source types lie in
distinct regions of the
plane, and statistically it can be seen that the unknown
sources with an unambiguous optical counterpart predominantly fall
in the region occupied by AGN; a significant fraction of the
unclassified sources, however, lie in the region occupied by stars and
binaries.
Source types may be further distinguished by considering the X-ray
hardness ratio (HR; e.g. \cite{delaceca}). In general the low number of counts
precludes an accurate measurement of the HR for slew sources.
In figure~\ref{fig:hrmag} we show sources which have either a detection
in the hard band or $\ge10$ counts in the soft band and use an upper limit
of three photons in the case of a non-detection in one of the bands.
The two points with the
highest X-ray to optical ratio are two detections of the isolated neutron star
RXJ~1856.6-3754. Unidentified sources are present in both the hard and soft
regions of the plot.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=9cm]{9193fg22.ps}}
\end{tabular}
\end{center}
\caption[The fx/fopt ratio against optical magnitude]
{ \label{fig:magflux} The log of the ratio of the X-ray to optical flux
plotted against the optical magnitude of the counterpart.
}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\rotatebox{-90}{\includegraphics[height=9cm]{9193fg23.ps}}
\end{tabular}
\end{center}
\caption[The ratio of X to optical fluxes plotted against the hardness ratio]
{ \label{fig:hrmag} The ratio of X to optical fluxes plotted against the X-ray hardness ratio. In the case of a non-detection in the hard or soft band, the hardness
ratio has been calculated by assuming an upper limit of three photons in
the non-detection band. For reasons of simplicity neither error-bars nor
upper limit arrows are shown on the plot.
}
\end{figure}
\section{Summary}
The {XMM-{\em Newton} } slew data represent the deepest near all-sky hard band X-ray survey made to date, while the soft band survey is comparable with the RASS.
The source density, from the clean catalogue, is $\sim0.45$ per square
degree and $\sim70\%$ have plausible identifications.
The {XMM-{\em Newton} } slew survey catalogue will continue to grow as the mission
continues and it is expected that a sky coverage in excess of 50\%
will eventually be achieved.
With the excellent attitude reconstruction
this will leave a powerful legacy for future variability studies in
the soft and hard X-ray bands.
\acknowledgements
We would like to thank Mark Tuttlebee, Pedro Rodriguez and Aitor Ibarra
for their patient explanations of how {XMM-{\em Newton} } performs slews.
We thank Ramon Munoz and his team for the compilation of the slew datasets
and Mike Watson and Norbert Schartel for many useful discussions.
This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France; the NASA/IPAC Extragalactic
Database (NED), which is operated by the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the National
Aeronautics and Space Administration; and data obtained from the High
Energy Astrophysics Science Archive Research Center (HEASARC), provided
by NASA's Goddard Space Flight Center.
The XMM-Newton project is an ESA science mission with instruments and contributions directly funded by ESA member states and the USA (NASA).
The {XMM-{\em Newton} } project is supported by the Bundesministerium f\"{u}r Wirtschaft und Technologie/Deutsches Zentrum f\"{u}r Luft- und Raumfahrt (BMWI/DLR, FKZ 50 OX 0001),
the Max-Planck Society and the Heidenhain-Stiftung. AMR acknowledges the
support of PPARC funding and PE the support of MPE funding.
\section{Introduction}\label{intro}
Coagulation and fragmentation play a fundamental role in a number of diverse phenomena arising both in natural science and in industrial processes. Specific examples can be found in ecology, human biology, polymer and aerosol sciences, astrophysics and the powder production industry; see \cite{BLL} for further details and references. A feature shared by these examples is that each involves an identifiable population of inanimate or animate objects that are capable of forming larger or smaller objects through, respectively, coalescence or breakup. The earliest mathematical investigation into processes governed by coagulation or fragmentation was carried out by Smoluchowski in two papers \cite{Smoluch, Smoluch17}, published in 1916 and 1917. Smoluchowski introduced, and investigated, a coagulation model in the form of an infinite set of ordinary differential equations that describes the time-evolution of a system of particle clusters that, as a result of Brownian motion, become sufficiently close to enable binary coagulation of clusters to occur. In this discrete-size model, it is assumed that the clusters are comprised of a finite number of identical fundamental particles, and so a discrete (positive integer) variable can be used to distinguish between cluster sizes. Over the past one hundred years, the pioneering work of Smoluchowski has been extended considerably, and various models, both deterministic and stochastic, and incorporating both coagulation and fragmentation, have been produced and studied.
In certain applications, such as droplet growth in clouds and fogs \cite{Schu40,Scot68}, where it is more realistic to have a continuous particle size variable which can take any positive real value, the standard deterministic coagulation-fragmentation (C-F) model is given by
\begin{equation}\label{contscfeqn}
\partial_t f(x,t) = {\mathfrak F}f(x,t) +
\mathfrak{K}f(x,t)\ , \ \ (x,t)\in \mbb{R}_+^2\ , \ \ f(x,0) = \mr f(x)\ , \ \ x\in \mathbb{R}_+\ ,
\end{equation}
where $\mathbb{R}_+ : = (0,\infty)$, and
\begin{align}
{\mathfrak F}f(x,t) &= -a(x) f(x,t) + \ \int_x^{\infty}a(y)b(x,y) f(y,t)\,\md y \,, \label{wlcontsfrag}\\
\mathfrak{K}f(x,t) &= \frac{1}{2}\,\int_0^x
k(x-y,y)f(x-y,t)f(y,t)\,\md y - f(x,t)\,\int_0^{\infty}
k(x,y)f(y,t)\,\md y \, \label{wlcontscoag}
\end{align}
model fragmentation and coagulation respectively; see \cite{vizi89}. Here, it is assumed that only a single size variable, such as particle mass, is required to differentiate between the reacting particles, with $f(x,t)$ denoting the density of particles of size $x \in \mbb{R}_+$ at time $t \geq 0$. The coagulation kernel $k(x,y)$ gives the rate at which particles of size $x$ coalesce with particles of size $y$, and $a(x)$ represents the overall rate of fragmentation of an $x$-sized particle. The coefficient $b(x,y)$, often called the fragmentation kernel or daughter distribution function, can be interpreted as giving the number
of size $x$ particles produced by the fragmentation of a size $y$
particle; more precisely, it is the distribution function of the
sizes of the daughter particles. In most investigations into \eqref{contscfeqn}, $b$ is assumed to be nonnegative and measurable, with $b(x,y)=0$ for $x >y$ and
\begin{equation}
\int_0^y xb(x,y)\,\md x = y, \ \mbox{ for each } y > 0, \label{baleq1}
\end{equation}
but is otherwise arbitrary. Note that
equation~\eqref{baleq1} can be viewed as a local mass conservation property, as it expresses the fact that, when the size variable is taken to be particle mass, the total mass of all the daughter particles produced by a fragmentation event is the same as that of the parent particle.
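For example, the binary fragmentation kernel $b(x,y) = 2/y$, which
reappears in the Aizenman--Bak model recalled below, satisfies
\eqref{baleq1}, since
\[
\int_0^y x\,b(x,y)\,\md x = \frac{2}{y}\int_0^y x\,\md x = y,
\qquad\text{while}\qquad
\int_0^y b(x,y)\,\md x = 2
\]
records that each fragmentation event produces exactly two daughter
particles.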
In the case of deterministic models, either discrete or continuous size, two main approaches have been used extensively in their analysis, with one involving weak compactness arguments and the other utilising the well-developed theory of operator semigroups. Comprehensive treatments of each are given in \cite{BLL}, and there is also an excellent account in \cite[Chapter 36]{bobrowski2016convergence} of the semigroup approach to the discrete C-F equation. We focus here on the application of semigroup techniques to continuous C-F models, where the strategy is to express the pointwise initial-value problem \eqref{contscfeqn} as a semilinear abstract Cauchy problem (ACP) of the form
\begin{equation}\label{ACPFK}
\frac{d}{dt}f(t) = Ff(t) + Kf(t),\ t \in \mathbb{R}_+; \quad f(0) = \mr f,
\end{equation}
posed in a physically relevant Banach space $X$. In \eqref{ACPFK}, $F$ and $K$ are operator realisations in $X$ of the formal expressions
\begin{align}
(\mathcal{F}f)(x) &:= -a(x) f(x) + \ \int_x^{\infty}a(y)b(x,y)f(y)\,\md y,\ x \in \mbb{R}_+, \label{formalF}\\
(\mathcal{K}f)(x) &:= \frac{1}{2}\,\int_0^x\!\!
k(x-y,y)f(x-y)f(y)\,\md y \!-\! f(x)\,\int_0^{\infty}\!\!
k(x,y)f(y)\,\md y,\ x \in \mbb{R}_+. \label{formalK}
\end{align}
Initially, only the linear fragmentation part of \eqref{ACPFK} is examined, and a representation $F$ is sought such that $F$ generates a strongly continuous semigroup \sem{S_{F}} on $X$. If this is possible, then the full abstract C-F problem is recast as the fixed point equation
\begin{equation}\label{Fixedpt}
f(t) = S_{F}(t)\mr f + \int_0^t S_{F}(t-s)Kf(s)\,\md s, \ t \in \mathbb{R}_+,
\end{equation}
to which standard results can be applied to yield the existence and uniqueness of mild and classical solutions
$f: [0,\tau_{\max}) \to X$. The identification $[f(t)](x) = f(x,t)$ then leads, after some further analysis, to a solution of the pointwise problem
\eqref{contscfeqn}.
Historically, the semigroup approach to C-F problems originated in 1979 with the publication of a seminal paper by Aizenman and Bak \cite{AizBak} for the specific case where the coagulation kernel $k$ is constant, and the fragmentation rate and the fragmentation kernel are given by $a(x) = x$ and $b(x,y) = 2/y$. The work presented in \cite{AizBak} was later extended in 1997 to bounded coagulation kernels and more general fragmentation rates and kernels \cite{MLM97a, LML}. Common to these early semigroup investigations is the use of more tractable, truncated versions of the fragmentation problem to generate a sequence of semigroups that converge, in an appropriate manner, to the semigroup for the original problem; for example, see \cite[Sections~3 \& 4]{LML}. In contrast, the year 2000 saw the introduction, in \cite{BaT01}, of a novel approach to the fragmentation problem that relies on the theory of substochastic semigroups. In recent years, this substochastic semigroup approach has been developed further and used to prove many important properties of the fragmentation semigroup such as its analyticity and, in the discrete case, compactness, \cite{BaLa12a, BaLa12b}. These properties have made it possible to extend earlier semigroup derived results on the well-posedness of C-F equations to the case where the coagulation kernel may be unbounded; see \cite{BaLa12a, BLL13, Ban2020}. Moreover, it is shown in \cite{Ban2020} that whenever the semigroup and weak compactness approaches are both applicable to a C-F problem, they both lead to the same solutions.
With regard to the choice of an appropriate space $X$, the early semigroup (and also weak compactness) analyses of \eqref{contscfeqn} used the spaces $X_0 := L_1(\mathbb{R}_+, \md x)$, $X_1 := L_1(\mathbb{R}_+, x\md x)$ and also $X_{0,1} := L_1(\mathbb{R}_+, (1+x) \md x)$, with respective norms
\[
\|f\|_{[0]} := \int_0^\infty |f(x)| \md x; \ \|f\|_{[1]} := \int_0^\infty |f(x)|x \md x; \ \|f\|_{[0,1]} := \int_0^\infty |f(x)| (1+x) \md x.
\]
These spaces were chosen due to the fact that, for a nonnegative solution $f$ of \eqref{ACPFK}, $\|f(t)\|_{[0]}$ gives the total number of particles in the system, while $\|f(t)\|_{[1]}$ gives the total mass. However, in later investigations it was found that improved results could be obtained by imposing some additional control on the evolution of large particles. A convenient way of introducing such a control is to consider the C-F problem in the more general weighted $L_1$ spaces $X_{m}:= L_1(\mathbb{R}_+, x^m \md x)$ and $X_{0,m}:=L_1(\mathbb{R}_+, (1+ x^m) \md x)$. The norms on these spaces are defined by
\begin{equation}\label{norms}
\|f\|_{[m]} := \int_0^\infty |f(x)|x^m \md x; \ \|f\|_{[0,m]} := \int_0^\infty |f(x)|w_m(x) \md x, \
\end{equation}
where $w_m(x) := 1+x^m$. We shall also use the notation
\begin{equation}\label{Moments}
M_m(t):= \int_0^\infty f(x,t)x^m\,\md x; \ \ M_{0,m}(t) := \int_0^\infty f(x,t) w_m(x)\md x,
\end{equation}
when discussing the norms of nonnegative solutions to \eqref{contscfeqn}. Clearly, $M_m(t)$ and $M_{0,m}(t)$ are finite provided $f(\cdot, t) \in X_m$ and $f(\cdot,t) \in X_{0,m}$.
For ease of exposition, we have restricted our attention in the above discussion to situations involving only the opposing processes of fragmentation and coagulation, and in which the total mass in the system of particles should be a conserved quantity. In many cases, however, these two processes may be complemented by other events which can change the total mass in the system. For example, mass loss can arise due to oxidation, melting, sublimation and dissolution of matter on the exposed particle surfaces. The reverse process of mass gain can also occur due to the precipitation of matter from the environment. Continuous coagulation and fragmentation processes, combined with a mass transport term that leads to either mass loss or mass gain, have also been studied using functional analytic and, in particular, semigroup methods; for example, see \cite{BaAr, BaLa09, Bana12a} and \cite[Section 5.2]{BLL}, or \cite{DoGa10, Ber2019, PerTr} where, however, the focus is on the long-term behaviour of the linear growth-fragmentation processes. The discrete version of the models have been comprehensively analysed in \cite{Banasiak2018, Banasiak2019}. In the case when the growth rate of a particle of mass $x$ is $r(x)$, the appropriate modified version of \eqref{contscfeqn} is
\begin{align}
\partial_t f(x,t) &= - \p_x[r(x)f(x,t)] + {\mathfrak F}f(x,t) +
\mathfrak{K}f(x,t)\ , \ \ (x,t)\in \mbb{R}_+^2\ ,\nn\\ f(x,0) & = \mr f(x)\ , \ \ x\in \mathbb{R}_+\ .\label{initprof}
\end{align}
The main goal of the paper is to prove global classical solvability of \eqref{initprof} in the spaces $X_{0,m}$ for sufficiently large $m,$ when the coagulation rate $k$ is unbounded (though controlled by the fragmentation rate). In this way we extend the results of \cite{Bana12a}, where only bounded coagulation operators were considered. We use the standard semigroup theory based approach of re-writing \eqref{initprof} as an abstract Volterra equation with the kernel given by the linear growth--fragmentation semigroup. The main tool is the moment improving property of this semigroup, proven in \cite{Ber2019}, that makes it a little like an analytic semigroup and allows for an approach similar to that used in \cite{BaLa12a, Ban2020} for pure fragmentation--coagulation problems, where the fragmentation semigroup is indeed analytic. In other words, the growth--fragmentation semigroup retains the moment regularization property of the fragmentation semigroup but, since it is not regularizing with respect to the differentiation operator, it fails to be analytic. Thus, while the well-posedness proof for \eqref{initprof} follows standard steps, particular estimates must be tailor made for this specific case to yield the desired result. More precisely, while the existence of the mild solution is obtained by a typical fixed point argument, the involved integral operator is weakly singular, in contrast to the standard theory where it is assumed to be continuous, see e.g. \cite[Theorem 6.1.2]{Pa}. Similarly, the proof that the mild solution is a classical solution cannot be obtained, as in other cases where unbounded nonlinearities occur, by using the differentiability of the semigroup, since the growth--fragmentation semigroup is not analytic. Instead, the approach we adopt is to follow \cite[Theorem 6.1.5]{Pa}, where a regularity result is established for the case of a continuous nonlinearity, but again we have to show that the result can be extended to an appropriately restricted singular nonlinearity.
The paper is organized as follows. Section 2 deals with the linear growth--fragmentation equation. In particular, we use the Miyadera perturbation theorem to show that the growth--fragmentation operator is the generator of a positive semigroup on $X_{0,m}$ and provide a precise characterization of its domain, without imposing any restriction on the behaviour of the growth rate $r$ at $x=0.$ In this way we improve the corresponding results of \cite{Bana12a, Ber2019}. The improved generation theorem is further used to slightly simplify the proof of the moment regularization property, given in \cite{Ber2019}. Section 3 is devoted to the full equation \eqref{initprof}. The existence of local mild and classical solutions is proved under quite general conditions, while the global solvability, done along the lines of \cite{Ban2020}, requires some additional assumptions to control the growth term.
\section{Fragmentation with growth}
Adopting the semigroup based strategy described in Section 1, we begin our analysis of equation \eqref{initprof} by considering the linear equation that is obtained on ignoring the coagulation terms. For technical reasons, which will become clear later, it is convenient to introduce an additional absorption term, $-a_1f$. This results in the linear equation
\begin{align}
\begin{split}
\p_t f(x,t) &=
-\p_x[r(x)f(x,t)] -q(x)f(x,t) + \cl{x}{\infty}a(y)b(x,y)f(y,t)\,\mathrm{d}y,\ \ (x,t) \in \mbb{R}_+^2,\\
f(x,0)&=\mr f(x),\ \ x \in \mathbb{R}_+,
\label{reml}
\end{split}
\end{align}
where $q(x) = a(x)+a_1(x)$.
The aim is to express \eqref{reml} as an ACP of the form
\begin{equation}\label{ACPFG}
\frac{d}{dt}f(t)= T_{0,m}f(t) + B_{0,m}f(t), \ t > 0;\ \ f(0) = \mr f,
\end{equation}
where $T_{0,m}$ and $B_{0,m}$, respectively, are operator realisations in $X_{0,m}$ of the formal expressions
\begin{equation}\label{formalTB}
(\mathcal{T}f)(x) := -\p_x[r(x)f(x)] -q(x)f(x); \ \ (\mathcal{B}f)(x) := \cl{x}{\infty}a(y)b(x,y)f(y)\,\mathrm{d}y.
\end{equation}
The ACP \eqref{ACPFG} will be well posed in $X_{0,m}$ provided the operator $G_{0,m}:= T_{0,m}+B_{0,m}$ is the infinitesimal generator of a strongly continuous semigroup, \sem{S_{G_{0,m}}}, on $X_{0,m}$. To show that it is possible, under suitable conditions, to define such an operator $G_{0,m}$, we first use the Hille-Yosida theorem to establish that $T_{0,m}$, when defined appropriately, generates a strongly continuous semigroup, \sem{S_{T_{0,m}}} (the absorption semigroup), on $X_{0,m}$. The operator $B_{0,m}$ is then shown to be a Miyadera perturbation of $T_{0,m}$ and this leads immediately to the existence of \sem{S_{G_{0,m}}}.
\subsection{The absorption semigroup}
The transport part of the problem is given by
\begin{align}
\begin{split}
\p_t f(x,t) &=
-\p_x[r(x)f(x,t)] -q(x)f(x,t), \ \ (x,t) \in \mbb{R}_+^2,\\
f(x,0)&=\mr f(x),\ \ x \in \mbb{R}_+,
\label{remla}
\end{split}
\end{align}
where, as stated above, $q = a + a_1$. We assume throughout that the fragmentation and growth rates, $a$ and $r$ respectively, satisfy
\begin{eqnarray}
&& 0 \leq a \in L_{\infty,loc}([0,\infty)); \label{aloc}\\
&& 1/r \in L_{1,loc}(\mathbb{R}_+) \mbox{ and } 0<r(x)\leq r_0+r_1 x \leq \ti r(1+x)\quad {\rm on}\quad \mbb R_+, \label{fmlras}
\end{eqnarray}
for some nonnegative constants $r_0,r_1$ and $\ti r =\max\{r_0,r_1\}$. With regard to the additional absorption term, $a_1$, it is assumed that
\begin{equation}\label{a1con}
0 \leq a_1 \in L_{\infty,loc}([0,\infty)) \mbox{ and } a_1(x)/a(x) \mbox{ remains bounded as } x \to \infty.
\end{equation}
On defining operators $A_{0,m}$ and $A^{(1)}_{0,m}$ on their maximal domains in $X_{0,m}$ by
\begin{eqnarray}
&& A_{0,m}f : = -af; \ \ D(A_{0,m}):= \{f \in X_{0,m} : af \in X_{0,m}\}, \label{defnA}\\
&& A^{(1)}_{0,m}f : = -a_1f; \ \ D(A^{(1)}_{0,m}):= \{f \in X_{0,m} : a_1f \in X_{0,m}\}, \label{defnA1}
\end{eqnarray}
the second assumption in \eqref{a1con} guarantees that $D(A_{0,m}) \subseteq D(A^{(1)}_{0,m})$.
In the following treatment of \eqref{remla} we have to distinguish between two distinct cases that may arise due to the behaviour of $r(x)$ close to $x = 0$. If we use the symbol $\cl{0^+}{}$ to denote an integral over some right neighbourhood of $0$, then we may have either
\begin{equation}
\cl{0^+}{}\frac{\md x}{r(x)}=+\infty, \label{assr2}
\end{equation}
or
\begin{equation}
\cl{0^+}{}\frac{\md x}{r(x)}<+\infty. \label{assr1}
\end{equation}
When \eqref{assr2} is satisfied, the characteristics associated with the transport equation do not reach $x=0$ and therefore the problem does not require a boundary condition to be specified. This case has been thoroughly researched in \cite{Bana12a, BLL}, and, as in \emph{op. cit.}, we define $T_{0,m}$ by
\begin{equation}\label{dtkmax}
T_{0,m}f : = \mathcal{T}f; \ \ D(T_{0,m}) := \left\{f \in X_{0,m}\; :\; rf \in AC(\mbb R_+)\;{\rm
and}\;\frac{d}{dx}(rf), qf \in X_{0,m}\right\},
\end{equation}
where $AC(\mbb R_+)$ denotes the class of functions that are absolutely continuous on all compact subintervals of $\mbb{R}_+$. On the other hand, when \eqref{assr1} holds, the characteristics do reach $x = 0$ and therefore a boundary condition is required. Here, following \cite{Ber2019}, we impose the homogeneous condition
\begin{equation}
\lim\limits_{x\to 0^+} r(x)f(x,t) = 0
\label{bc1}
\end{equation}
but note that more general cases can also be considered, \cite{BaLa09}. It follows that $D(T_{0,m})$ is then given by
\begin{align}
D(T_{0,m})&:= \left\{f \in X_{0,m}\; :\; rf \in AC(\mbb R_+),\, \frac{d}{dx}(rf), qf \in X_{0,m}\; \right.\nn\\&\left.\mbox{ and } r(x)f(x) \to 0 \mbox{ as } x \to 0^+ \right\}.\label{dtbcmax}
\end{align}
To make the Hille-Yosida theorem applicable, we must determine the resolvent operator, $R(\lambda, T_{0,m})$. Following \cite[Section 5.2]{BLL}, we begin by solving
\begin{equation}\label{resolveq}
\lambda f(x) + \frac{d}{dx}(r(x)f(x)) + q(x)f(x) = g(x),\ \ x \in \mathbb{R}_+,
\end{equation}
where $g \in X_{0,m}$. On introducing antiderivatives, $R$ and $Q$, of $1/r$ and $q/r$ respectively, defined on $\mathbb{R}_+$ by
\begin{equation}
R(x) := \cl{1}{x}\frac{1}{r(s)}\md s, \qquad Q(x):=
\cl{1}{x}\frac{q(s)}{r(s)}\md s,
\label{RQ}
\end{equation}
we can proceed formally to obtain the general solution of \eqref{resolveq} in the form
\begin{equation}
f(x) = v_\la(x)\, \cl{0}{x}e^{\la
R(y)+Q(y)}g(y) \mdm{d}y + C\,v_\la(x),
\label{Ph10}
\end{equation}
where $C$ is an arbitrary constant and
\begin{equation}\label{vlambda}
v_\la(x) = \frac{e^{-\la R(x)-Q(x)}}{r(x)},\ \ x \in \mathbb{R}_+.
\end{equation}
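Indeed, the substitution $u = rf$ reduces the homogeneous version of
\eqref{resolveq} to $u'(x) = -\frac{\la + q(x)}{r(x)}\,u(x)$, whose
solutions are $u(x) = Ce^{-\la R(x)-Q(x)}$, that is, $f = Cv_\la$; the
integral term in \eqref{Ph10} is then the particular solution produced by
variation of constants.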
An immediate consequence of \eqref{fmlras} and (\ref{RQ}) is that
$R$ is strictly increasing (and hence invertible) on $\mathbb{R}_+$,
and $Q$ is nondecreasing on $\mathbb{R}_+$. Consequently, if we define
\begin{equation}\begin{split}
\lio{x} R(x) =: m_R,&\quad \lii{x} R(x) =: M_R,\\
\lio{x} Q(x) =: m_Q, &\quad\lii{x} Q(x) =: M_Q,\label{mr}\end{split}
\end{equation}
then $m_R$ is finite and negative if \eqref{assr1} holds and $m_R=-\infty$ otherwise. Furthermore, $M_R=\infty$ due to \eqref{fmlras}, whereas $m_Q$ and $M_Q$ can be finite or infinite depending on the interplay between $q$ and $r$. In what follows we need the following result that is a slight modification of \cite[Lemma 2.1]{BaLa09} and \cite[Corollary 5.2.9]{BLL}.
\begin{lemma}\label{lemomrm}
Let $m\geq 1$ be fixed and define $\omega_{r,m}:=2m\ti r$, where $\ti r$ is the positive constant in \eqref{fmlras}. Then, for any $\la> \omega_{r,m}$ and $0<\alpha<\beta \leq \infty$,
\begin{align}
&I_{0,m}(\alpha,\beta):=\cl{\alpha}{\beta} \frac{e^{-\la R(s)}}{r(s)}
w_m(s)\md s \,\leq \, \frac{e^{-\la
R(\alpha)}}{\la-\omega_{r,m}}\,w_m(\alpha)\,, \label{ixest0m}\\
&J_{0,m}(\alpha,\beta) := \cl{\alpha}{\beta}\frac{(\la+q(s))e^{-\la R(s)-Q(s)}}{r(s)}w_m(s)\md s \,\leq \, \frac{\la e^{-\la
R(\alpha)-Q(\alpha)}}{\la-\omega_{r,m}}\,w_m(\alpha),\label{Jest0m}
\end{align}
where, as in \eqref{norms}, $w_m(x) = 1+ x^m.$
\end{lemma}
\proof Since
\[
I_{0,m}(\alpha,\beta) = -\frac{1}{\la}\cl{\alpha}{\beta}w_m(s)\frac{d}{ds} e^{-\la R(s)}\md s,
\]
we can integrate by parts, and then use \eqref{fmlras}, to obtain
\begin{eqnarray}
&&I_{0,m}(\alpha,\beta)= \frac{1}{\la}e^{-\la R(\alpha)}w_m(\alpha) - \frac{1}{\la}e^{-\la R(\beta)}w_m(\beta) +\frac{m}{\la}\cl{\alpha}{\beta}e^{-\la R(s)}s^{m-1}\md s\nn\\
&&\phantom{xx}\leq \frac{1}{\la}e^{-\la R(\alpha)}w_m(\alpha) +\frac{m\ti r}{\la}\cl{\alpha}{\beta}\frac{e^{-\la R(s)}}{r(s)}(1+s)s^{m-1}\md s.\nn
\end{eqnarray}
The inequality $(1+s)s^{m-1} \leq 2(1+s^m)$, which holds for all $s > 0$ and each fixed $m \geq 1$, yields
\begin{equation}
I_{0,m}(\alpha,\beta) \leq \frac{1}{\la}e^{-\la
R(\alpha)}w_m(\alpha)+\frac{2m\ti r}{\la}I_{0,m}(\alpha,\beta),\label{Ik}
\end{equation}
and \eqref{ixest0m} follows.
To prove \eqref{Jest0m}, we note first that
\begin{equation}
\frac{\la + q(x)}{r(x)}e^{-\la R(x) -Q(x)} = - \frac{d}{dx}e^{-\la R(x) -Q(x)}. \label{difeeab}
\end{equation}
Hence, on integrating by parts and using \eqref{fmlras}, together with the monotonicity of $e^{-Q}$, we obtain similarly to \eqref{Ik},
\begin{eqnarray}
J_{0,m}(\alpha,\beta)
&\leq& e^{-\la R(\alpha)-Q(\alpha)}w_m(\alpha)+ m\cl{\alpha}{\beta}e^{-\la R(s)-Q(s)}s^{m-1}\md s\nn \\
&\leq& e^{-\la R(\alpha)-Q(\alpha)}w_m(\alpha)+ 2m\ti r e^{-Q(\alpha)}\cl{\alpha}{\beta}\frac{e^{-\la R(s)}}{r(s)}w_m(s)\md s\nn \\
&=& e^{-\la
R(\alpha)-Q(\alpha)}w_m(\alpha) +e^{-Q(\alpha)}\omega_{r,m}I_{0,m}(\alpha,\beta).
\end{eqnarray}
The stated inequality, \eqref{Jest0m}, now follows from \eqref{ixest0m}.
\qed
\begin{lemma} \label{tgin} Let $\la>0$ and let $v_\lambda$ be defined by \eqref{vlambda}. \newline
(a) \ If \eqref{assr1} holds, then $v_\la$ does not satisfy \eqref{bc1}. \newline
(b) \ If \eqref{assr2} holds, then $v_\la\notin X_{0,m}$ for any $m \geq 1$.
\end{lemma}
\proof
(a) For $0<x<1$ we have
\[
r(x)v_\lambda(x) = e^{\cl{x}{1}\frac{\la +q(s)}{r(s)}\md s},
\]
and so $r(x)v_\lambda(x)$ does not converge to $0$ as $x \to 0^+.$
(b) Let \eqref{assr2} be satisfied. Then, for each $\la > 0$,
\begin{equation}\label{infinitelimit}
\lim_{x \to 0^+} e^{-\la R(x)} = \lim_{x \to 0^+} e^{\cl{x}{1}\frac{\la}{r(s)}\md s} = \infty.
\end{equation}
Consequently, since $e^{-Q(x)}\geq 1$ for $x \in [0,1]$, and $R(1) =0$, we obtain
\[
\cl{0}{\infty} v_\la(x)w_m(x)\md x \geq \cl{0}{1} \frac{e^{-\la R(x)}}{r(x)} \md x = -\frac{1}{\la} \cl{0}{1} \frac{d}{dx} e^{-\la R(x)} \md x = \frac{1}{\la} \left(\lim\limits_{x\to 0^+} e^{-\la R(x)} - 1\right),
\]
and, from \eqref{infinitelimit}, it follows that $v_\lambda \notin X_{0,m}$.
\qed
Motivated by \eqref{Ph10} and Lemma \ref{tgin}, we are led, as in \cite[Section 5.2.2]{BLL}, to
\begin{equation}
[\mc R(\la)g](x) := \frac{e^{-\la R(x)-Q(x)}}{r(x)} \cl{0}{x}e^{\la
R(y)+Q(y)}g(y) \mdm{d}y \label{defres0}
\end{equation}
as a natural candidate for the resolvent, $R(\la,T_{0,m})$, of $T_{0,m}$.
\begin{theorem}\label{th.5.2.11}
Let \eqref{aloc}, \eqref{fmlras} and \eqref{a1con} be satisfied. Then, for each $m\geq 1$ and $\la > \omega_{r,m}$, the resolvent of $(T_{0,m},D(T_{0,m}))$ (in both cases \eqref{dtkmax} and \eqref{dtbcmax}) is given by $R(\la,T_{0,m})g = \mc R(\la)g, \ g \in X_{0,m}$. Moreover,
\begin{equation}
\| R(\la,T_{0,m})g\|_{[0,m]} \leq \frac{1}{\la-\omega_{r,m}}\,\| g \|_{[0,m]}, \ \mbox{ for all } g \in X_{0,m},
\label{resest0m}
\end{equation}
and therefore $(T_{0,m},D(T_{0,m}))$
is the generator of a strongly continuous, positive,
quasi-contractive semigroup, \sem{S_{T_{0,m}}}, on $X_{0,m}$ with type not exceeding
$\omega_{r,m}$; that is,
$$
\|S_{T_{0,m}}(t)f \|_{[0,m]}\leq e^{\omega_{r,m}t}\|f\|_{[0,m]}\,, \ \mbox{ for all } f\in X_{0,m}.
$$
\end{theorem}
\begin{proof} Let $g \in X_{0,m}$, where $m \geq 1$. Then, by \eqref{ixest0m} and the monotonicity of $e^{-Q}$,
\begin{equation}\begin{split}
\| \mc R(\la)g\|_{[0,m]} &\leq
\cl{0}{\infty}\frac{e^{-\la
R(x)-Q(x)}}{r(x)} \left(\cl{0}{x}e^{\la
R(y)+Q(y)}|g(y)| \mdm{d}y\right)w_m(x) \mdm{d}x \\
&=
\cl{0}{\infty}\left(|g(y)|e^{\la
R(y)+Q(y)}\cl{y}{\infty}\frac{e^{-\la R(x)-Q(x)}}{r(x)}w_m(x) \md x\right) \mdm{d}y\\
& \leq \int_0^\infty e^{\lambda R(y)} I_{0,m}(y,\infty) |g(y)| \md y \leq \frac{1}{\la-\omega_{r,m}} \|g\|_{[0,m]}\,.\end{split}
\label{resest00}
\end{equation}
Similarly, using (\ref{Jest0m}) and
(\ref{resest00}), we obtain
\begin{equation}\label{2.29}
\begin{split}
\|q \mc R(\la)g\|_{[0,m]} &\leq
\cl{0}{\infty}\left({e^{\la
R(y)+Q(y)}}\cl{y}{\infty}\frac{w_m(x)q(x)e^{-\la R(x)-Q(x)}}{r(x)} \mdm{d}x\right)|g(y)| \mdm{d}y\\
&\leq
\cl{0}{\infty}{e^{\la
R(y)+Q(y)}}J_{0,m}(y,\infty)|g(y)| \mdm{d}y \leq
\frac{\la}{\la-\omega_{r,m}}\|g\|_{[0,m]}.\end{split}
\end{equation}
To establish that $r \mc R(\la)g \in
AC(\mbb R_+)$, we observe that $\exp(-\la R-Q)$ is a bounded function
that is differentiable a.e. on $(0,\infty)$, and also that the function defined by the integral in \eqref{defres0} is absolutely continuous on $(0,\infty)$. Furthermore, direct substitution shows that
$$
\la [\mc R(\la)g](x) + \frac{d}{dx}(r(x)[\mc R(\la)g](x)) + q(x)[\mc R(\la)g](x) = g(x)$$
for almost all $x>0$ and hence, by \eqref{resest00} and \eqref{2.29}, $ \frac{d}{dx}(r\mc R(\la)g) \in X_{0,m}$. Since
\[
r(x)[\mc R(\la)g](x) \to 0 \mbox{ as } x \to 0^+
\]
whenever \eqref{assr1} is satisfied, it follows that
$\mc R(\la)g \in D(T_{0,m})$. On the other hand, thanks
to Lemma~\ref{tgin}, the operator $\la I-T_{0,m}$ is injective,
which shows that \eqref{defres0} defines the resolvent of $T_{0,m}$. That $T_{0,m}$ generates a strongly continuous, positive, quasi-contractive semigroup can then be deduced from the Hille-Yosida theorem together with the positivity of $R(\la,T_{0,m})$.
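Indeed, iterating the resolvent estimate \eqref{resest0m} gives
\[
\|R(\la,T_{0,m})^k g\|_{[0,m]} \leq \frac{1}{(\la-\omega_{r,m})^k}\,\|g\|_{[0,m]}, \qquad k \in \mathbb{N},\ \la>\omega_{r,m},\ g \in X_{0,m},
\]
which is precisely the Hille-Yosida condition for the generation of a semigroup of type not exceeding $\omega_{r,m}$.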
\end{proof}
\subsection{The growth-fragmentation semigroup}
We now consider the growth-fragmentation equation \eqref{reml}. In addition to the restrictions \eqref{aloc}, \eqref{fmlras} and \eqref{a1con} imposed on $a$, $r$ and $a_1$, respectively, we assume that the fragmentation kernel, $b$, satisfies \eqref{baleq1} and, further, for each $m \geq 0$ we define
\begin{align}
n_m(y)&=\cl{0}{y}b(x,y)x^m\md x, \label{nmy}\\
N_m(y)& = y^m-n_m(y).
\label{Nmy}
\end{align}
The local mass conservation condition in \eqref{baleq1} then leads to
\begin{equation}
n_0(y) > 1, \; N_m(y) > 0, \; m>1; \quad N_1(y) = 0; \quad N_m(y) <0, \; 0\leq m<1;
\label{Nm}
\end{equation}
see \cite[Eqns. (2.2.53) \& (2.3.16)]{BLL}. The function $n_0$ is also assumed to satisfy
\begin{equation}
n_0(y) \le b_0 (1+y^l)\ , \qquad y \in \mathbb{R}_+\ , \label{PhPr005}
\end{equation}
for constants $b_0 > 0$ and $l\ge 0$.
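The sign pattern in \eqref{Nm} can be verified directly from the local mass conservation condition in \eqref{baleq1}, namely $\cl{0}{y}x\,b(x,y)\md x = y$: for $m>1$ we have $x^{m-1}<y^{m-1}$ on $(0,y)$, and hence
\[
n_m(y) = \cl{0}{y}x^{m-1}\,x\,b(x,y)\md x < y^{m-1}\cl{0}{y}x\,b(x,y)\md x = y^m,
\]
that is, $N_m(y)>0$; the inequalities $n_0(y)>1$ and $N_m(y)<0$ for $0\leq m<1$ follow in the same way.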
A crucial role in the analysis is played by the further assumption that there exists $m_0>1$ such that
\begin{equation}
\liminf\limits_{y\to \infty}\frac{N_{m_0}(y)}{y^{m_0}} >0.
\label{goodchar1}
\end{equation}
It follows, \cite[Theorem 2.2]{Ban2020}, that for any fixed $y>0$, $(1,\infty)\ni m \mapsto \frac{N_{m}(y)}{y^{m}}$ is an increasing and concave function. Hence, if \eqref{goodchar1} holds for some $m_0>1$, then
\begin{equation}
\liminf\limits_{y\to \infty}\frac{N_{m}(y)}{y^{m}} >0,
\label{goodchar}
\end{equation}
for all $m >1$. For a given $m>0$, \eqref{goodchar} yields the existence of $y_m>0$ and $c_m<1$ such that
\begin{equation}
n_m(y) \leq c_my^m, \quad y\geq y_m.
\label{bmom}
\end{equation}
The monotonicity and concavity of $(1,\infty)\ni m \mapsto \frac{N_{m}(y)}{y^{m}}$ imply further that there is $y_0>0$ such that for any $m>1$ there is $c_m' <1$ such that
\begin{equation}
n_m(y) \leq c'_my^m, \quad y\geq y_0
\label{bmom1}
\end{equation}
and, for any $m>1$, we can take $c_l' = c_m'$ for $l\geq m$.
We note that \eqref{goodchar} is satisfied for a large class of fragmentation kernels $b,$ including the homogeneous ones used in \cite{Ber2019}; there are, however, cases when it does not hold, \cite[Example 5.1.51]{BLL}.
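By way of a standard illustration (included here only as an example), consider the power-law kernel $b(x,y) = (\nu+2)x^{\nu}y^{-\nu-1}$ with $-1<\nu\leq 0$. Then, for $m\geq 0$,
\[
n_m(y) = (\nu+2)\cl{0}{y}\frac{x^{m+\nu}}{y^{\nu+1}}\md x = \frac{\nu+2}{m+\nu+1}\,y^m, \qquad \frac{N_m(y)}{y^m} = \frac{m-1}{m+\nu+1},
\]
so that $N_1 \equiv 0$, \eqref{PhPr005} holds with $l=0$ and $b_0 = \frac{\nu+2}{\nu+1}$, and \eqref{goodchar1} is satisfied for every $m_0>1$.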
Henceforth, we assume that
\begin{equation}
m > \max\{1,l\},\label{massump}
\end{equation}
and for each $m$ we define an operator realisation, $B_{0,m}$, of the formal expression $\mathcal{B}$ in \eqref{formalTB} by
\begin{align}
(B_{0,m}f)(x) &:= \cl{x}{\infty}a(y)b(x,y)f(y)\md y,\ x \in \mathbb{R}_+; \nn\\
D(B_{0,m})&:= \{f \in X_{0,m}: B_{0,m}f \in X_{0,m}\}. \label{B0m}
\end{align}
\begin{theorem} Let \eqref{PhPr005}, \eqref{goodchar} and \eqref{massump}, together with the assumptions of Theorem \ref{th.5.2.11}, be satisfied. Then
$(G_{0,m},D(T_{0,m})) = (T_{0,m}+B_{0,m}, D(T_{0,m}))$ generates a strongly continuous, positive semigroup, \sem{S_{G_{0,m}}}, on $X_{0,m}$. \label{Miy}
\end{theorem}
\begin{proof}
We use a version, \cite[Lemma 5.12]{BaAr}, of a theorem due to Desch that is applicable to positive operators in $L_1$ spaces. Thus, we must prove that $\|B_{0,m} R(\la, T_{0,m}) \| < 1$ for some
$\la > \omega_{r,m}$. Since $B_{0,m}R(\la, T_{0,m})$ is positive, we need only establish that
$\|B_{0,m} R(\la, T_{0,m}) f\|_{[0,m]} < \|f\|_{[0,m]}$ for all $f$ in the positive cone, $X_{0,m,+}$, and some
$\la > \omega_{r,m}$; see \cite[Proposition 2.67]{BaAr}. Given that $f \in X_{0,m,+}$ and $\la > \omega_{r,m}$, we have
\begin{align*}
\|B_{0,m} R(\la, T_{0,m}) f\|_{[0,m]} &= \cl{0}{\infty} \left(\cl{x}{\infty} a(y) b(x,y) [R(\la, T_{0,m}) f](y)\md y \right)w_m(x)\md x \\
&=\cl{0}{\infty}a(y) [R(\la, T_{0,m}) f](y)(n_0(y) + n_m(y))\md y,
\end{align*}
where we have used the notation introduced in \eqref{nmy}. On setting
$a_\rho = \mathrm{ess}\!\!\!\!\sup\limits_{x \in [0,\rho]}a(x)$ for each fixed $\rho > 0$, we obtain, by \eqref{PhPr005}, \eqref{Nm} and \eqref{resest0m},
\begin{align*}
\cl{0}{\rho}a(y) [R(\la, T_{0,m}) f](y)(n_0(y) + n_m(y))\md y&\leq a_\rho \cl{0}{\infty}[R(\la, T_{0,m}) f](y)(b_0 w_l(y) + y^m)\md y \\&\leq C_m a_\rho \cl{0}{\infty}[R(\la, T_{0,m}) f](y)w_m(y)\md y\\
&\leq \frac{C_m a_\rho}{\la-\omega_{r,m}}\|f\|_{[0,m]},
\end{align*}
where
$$C_m := \sup\limits_{0\leq y < \infty} \left(b_0\frac{w_l(y)}{w_m(y)} + \frac{y^m}{w_m(y)}\right) \leq 2b_0+1,$$ the last bound holding because $w_l(y) \leq 2w_m(y)$ (since $m>l$) and $y^m \leq w_m(y)$.
To obtain a suitable estimate on the integral over the infinite interval $[\rho, \infty)$, we now use \eqref{bmom}. Since $\rho > y_m$ can be chosen sufficiently large so that
\begin{equation}
b_0\frac{w_l(y)}{w_m(y)} <\delta, \ \ \mbox{ for all } y \geq \rho,
\label{bmom2}
\end{equation}
where $c_m + \delta < 1$, we can argue as in \eqref{2.29} to obtain
\begin{align*}
&\cl{\rho}{\infty}a(y) [R(\la, T_{0,m}) f](y)(n_0(y) + n_m(y))\md y \\
&\leq (\delta + c_m)\cl{0}{\infty}a(y) [R(\la, T_{0,m}) f](y)w_m(y)\md y\\
&= (\delta + c_m) \cl{0}{\infty}\left({e^{\la
R(x)+Q(x)}}\cl{x}{\infty}\frac{w_m(y) a(y)e^{-\la R(y)-Q(y)}}{r(y)} \mdm{d}y\right)f(x) \mdm{d}x \\
&\leq (\delta + c_m) \cl{0}{\infty}\left({e^{\la
R(x)+Q(x)}}\cl{x}{\infty}\frac{w_m(y) (\la + q(y))e^{-\la R(y)-Q(y)}}{r(y)} \mdm{d}y\right)f(x) \mdm{d}x \\
&=
(\delta + c_m)\cl{0}{\infty}{e^{\la
R(x)+Q(x)}}J_{0,m}(x,\infty)f(x) \mdm{d}x \leq
\frac{\la(\delta + c_m)}{\la-\omega_{r,m}}\|f\|_{[0,m]}.
\end{align*}
Hence
\begin{align*}
\|B_{0,m} R(\la, T_{0,m}) f\|_{[0,m]} \leq \left(\frac{C_m a_\rho}{\la-\omega_{r,m}} + \frac{\la}{\la-\omega_{r,m}}(\delta + c_m)\right)\|f\|_{[0,m]}.
\end{align*}
Since $ \frac{\la}{\la-\omega_{r,m}}(\delta + c_m) \to \delta + c_m < 1$ and $\frac{C_m a_\rho}{\la-\omega_{r,m}}\to 0$ as $\la \to \infty$, it follows that there exists $\la_0$ such that for all $\la>\la_0$
$$
\frac{C_m a_\rho}{\la-\omega_{r,m}} + \frac{\la}{\la-\omega_{r,m}}(\delta + c_m)<1.
$$
Therefore $B_{0,m}$ is a Miyadera perturbation of $T_{0,m}$, and the stated result follows.
\end{proof}
Under the conditions of Theorem \ref{Miy}, it follows that constants $C(m)$ and $\theta(m)$ exist such that
\begin{equation}
\|S_{G_{0,m}}(t)f\|_{[0,m]} \leq C(m) e^{\theta(m)t}\|f\|_{[0,m]}, \ \mbox{ for all } f \in X_{0,m} \mbox{ and } t \geq 0.
\label{Stype}
\end{equation}
Moreover, an alternative, but equivalent, representation of the generator $G_{0,m}$ is
\begin{equation}
G_{0,m}:= T^0_{0,m} + A^{(1)}_{0,m} + A_{0,m} +B_{0,m} = T^0_{0,m} + A^{(1)}_{0,m} + F_{0,m},
\label{genrep}
\end{equation}
where $ A^{(1)}_{0,m}\,,A_{0,m}$ and $ B_{0,m}$ are defined by \eqref{defnA}, \eqref{defnA1} and \eqref{B0m} respectively, and
\begin{align*}
[T^0_{0,m}f](x) &:= -\p_x[r(x)f(x)]\,; \\ D(T^0_{0,m}) &:= \left\{f \in X_{0,m}\; :\; rf \in AC(\mbb R_+)\;{\rm
and}\;\frac{d}{dx}(rf)\in X_{0,m}\right\}.
\end{align*}
As with the operator $T_{0,m}$, the homogeneous boundary condition must also be incorporated in the above definition of $D(T^0_{0,m})$ when \eqref{assr1} holds. In \cite[Theorem 2.2]{Ban2020}, it is shown that the fragmentation operator, $(F_{0,m},D(A_{0,m})):= (A_{0,m}+B_{0,m},D(A_{0,m}))$, is the generator of an analytic semigroup on $X_{0,m}$.
We now establish a regularising property of the growth-fragmentation semigroup that holds under an additional assumption on the fragmentation rate function $a$. The proof involves the adjoint semigroup,
$\left(S^*_{G_{0,m}}(t)\right)_{t \geq 0}$, defined on the dual space $X_{0,m}^*$, where the latter can be identified with the function space
\[
L_{\infty,1/w_m}:= \left\{f : f \mbox{ is measurable on } \mbb{R}_+ \mbox{ and } \|f\|_{\infty,m}:= \mbox{ess}\!\sup_{x \in \mbb{R}_+} \frac{|f(x)|}{w_m(x)} < \infty\right\}
\]
via the duality pairing
\[
\langle f,g \rangle := \cl{0}{\infty} f(x)g(x)\md x,\ \ f \in L_{\infty,1/w_m},\, g \in X_{0,m}.
\]
Since $w_m \in L_{\infty,1/w_m}$, we can define
\begin{equation}\label{Psi}
\Psi_m(x,t) := [S^*_{G_{0,m}}(t)w_m](x),\ (x,t) \in \mathbb{R}_+^2.
\end{equation}
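By the definition of the adjoint semigroup and of the duality pairing, $\Psi_m$ satisfies
\[
\cl{0}{\infty}\Psi_m(x,t)g(x)\md x = \cl{0}{\infty}w_m(x)[S_{G_{0,m}}(t)g](x)\md x = \|S_{G_{0,m}}(t)g\|_{[0,m]}, \qquad g \in X_{0,m,+},
\]
and it is in this form that $\Psi_m$ is used below.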
The following result is an extension of \cite[Lemma 2.7]{Ber2019} to the more general setting of this paper. The proof, while using the better characterization of the generator obtained in Theorem \ref{Miy}, essentially follows the lines of \textit{op.~cit.}
\begin{theorem}
In addition to the conditions required for Theorem \ref{Miy} to hold, assume that positive constants $a_0\,,\gamma_0$ and $x_0$ exist such that
\begin{equation}
a(x)\geq a_0x^{\gamma_0},\quad \mbox{ for all } x\geq x_0.
\label{assa1}
\end{equation}
Then, for any $n,\, p$ and $m$ satisfying $\max\{1,l\} < n < p < m$, there are constants $C=C(m,n,p)>0$ and $\theta=\theta(m,n)>0$ such that
\begin{equation}
\|S_{G_{0,p}}(t)\mr f\|_{[0,m]} \leq Ce^{\theta t} t^{\frac{n-m}{\gamma_0}}\|\mr f\|_{[0,p]},\ \mbox{ for all } \mr f \in X_{0,p}.
\label{regest}
\end{equation}\label{regthm}
\end{theorem}
\begin{proof}
First we note that $X_{0,m} \hookrightarrow X_{0,p} \hookrightarrow X_{0,n}$, where $\hookrightarrow$ denotes a continuous embedding. Moreover, for each $j=n,p,m$, the operator $G_{0,j}$ generates a positive strongly continuous semigroup \sem{S_{G_{0,j}}} on $X_{0,j}.$ Suppose, initially, that
$\mr f \in D(T_{0,m})_+ = D(G_{0,m})_+$. Then, for all $t \geq 0$,
\[
f(\cdot,t) := [S_{G_{0,m}}(t)\mr f](\cdot)= [S_{G_{0,p}}(t)\mr f](\cdot)\in D(G_{0,m}) = D(T^0_{0,m})\cap D(A_{0,m}).
\]
Consequently, we can multiply \eqref{reml} by $w_m(x) = 1+x^m$ and then integrate term by term to obtain, as in \cite[Lemma 5.2.17]{BLL},
\begin{align}
\frac{d}{dt} M_{0,m}(t) &= \cl{0}{\infty}\left(mr(x)x^{m-1} -(N_0(x)+N_m(x))a(x)\right)f(x,t) \mdm{d}x \nn\\
&\phantom{xx}- \cl{0}{\infty} a_1(x) f(x,t)w_m(x)\md x \leq \cl{0}{\infty} \Phi_m(x)f(x,t)\md x\,,
\label{subfuncta'}
\end{align}
where
\begin{equation}\label{Phi}
\Phi_m(x):= mr(x)x^{m-1} -(N_0(x)+N_m(x))a(x),\ \ x \in \mathbb{R}_+.
\end{equation}
Recalling from \eqref{bmom} that $n_m(y) \leq c_my^m$ for all $y \geq y_m$, where $0 < c_m < 1$, we choose a positive constant $R_m > \max\{1,x_0,y_m\}$ such that
\begin{equation}
(b_0(1+x^l) -1) -(1-c_m)x^m \leq 0, \ \mbox{ for all } x \geq R_m.
\label{Rm}
\end{equation}
It then follows from \eqref{fmlras}, \eqref{PhPr005}, \eqref{bmom} and \eqref{assa1} that, for any fixed $R \geq R_m$ and for all $x\geq R$, we have
\begin{align*}
\Phi_m(x)&\leq m\ti r (1+x)x^{m-1} +\left((b_0(1+x^l) -1) -(1-c_m)x^m\right)a_0 R^{\gamma_0}\\
& \leq (2m\ti r - (1-c_m)a_0R^{\gamma_0})w_m(x) + (b_0w_l(x) -c_m)a_0R^{\gamma_0}.
\end{align*}
If we now impose the further restriction that $R_m$ is also chosen so that
$$
2m\ti r - (1-c_m)a_0R^{\gamma_0}\leq -d_m R^{\gamma_0}, \ \mbox{ for each } R \geq R_m,
$$
where $d_m>0$, then, for any $x$ and $R$ satisfying $x \geq R \geq R_m$, we have
\begin{equation}
\Phi_m(x) \leq -d_mR^{\gamma_0}w_m(x) + b_0 a_0R^{\gamma_0}w_n(x).
\label{Phi1}
\end{equation}
Turning to the case when $x\leq R$, we have $N_m(x) \geq 0$ for all $x$, by \eqref{Nm}, and know also that \eqref{Rm} holds for $x \in [R_m,R]$. Consequently, on setting $a_{R_m} = \mathrm{ess}\!\!\!\!\sup\limits_{x \in [0,R_m]}a(x)$, we obtain, for $0 < x \leq R$,
\begin{align*}
&\Phi_m(x) \leq 2 m\ti r w_m(x) + (b_0(1+R_m^l) -1)a_{R_m} \\
&= -d_mR^{\gamma_0}w_m(x) +(d_mR^{\gamma_0}+ 2 m\ti r )w_m(x) + (b_0(1+R_m^l) -1)a_{R_m}\\
&\leq -d_mR^{\gamma_0}w_m(x) +\left((d_mR^{\gamma_0}+ 2 m\ti r )\frac{w_m(x)}{w_n(x)} + \frac{(b_0(1+R_m^l) -1)a_{R_m}}{w_n(x)}\right)w_n(x)\\
&\leq -d_mR^{\gamma_0}w_m(x)\! +\!\left(\!(d_mR^{\gamma_0}\!+\! 2 m\ti r )(1+R^{m-n})\! +\! \frac{(b_0(1\!+\!R_m^l) -1)a_{R_m}}{w_n(x)}\!\right)w_n(x),
\end{align*}
where we have used the inequality $w_m(x)/w_n(x) \leq 1 + x^{m-n},\, x > 0$. It follows that, for any fixed $R \geq R_m$, there exists a positive constant $D_m$ such that
$$
\Phi_m(x) \leq -d_m R^{\gamma_0} w_m(x) + D_m R^{\gamma_0+m-n}w_{n}(x),\ \mbox{ for all } x\in \mbb R_+,
$$
and therefore, from \eqref{subfuncta'},
\begin{equation}
\frac{d}{dt}M_{0,m}(t) \leq - d_m R^{\gamma_0} M_{0,m}(t) + D_m R^{\gamma_0+m-n} M_{0,n}(t).
\label{Mom1}
\end{equation}
Since Theorem \ref{Miy} ensures that
\[
M_{0,n}(t)\! =\! \|S_{G_{0,m}}(t)\mr f\|_{[0,n]}\! =\! \|S_{G_{0,n}}(t)\mr f\|_{[0,n]}\!\leq\! C(n) e^{\theta(n) t}\|\mr f\|_{[0,n]}\! =:\! \sigma_n(t)\|\mr f\|_{[0,n]},
\]
\eqref{Mom1} leads to
$$
\frac{d}{dt}(e^{d_m R^{\gamma_0} t}M_{0,m}(t)) \leq D_m C(n)R^{\gamma_0+m-n}e^{(d_mR^{\gamma_0} + \theta(n))t}\|\mr f\|_{[0,n]}.
$$
Hence, for any fixed $R\geq R_m$, and with $\Psi_m$ defined by \eqref{Psi},
\begin{equation}\label{Mom3}\begin{split}
M_{0,m}(t) &= \cl{0}{\infty} [S_{G_{0,m}}(t)\mr f](x)w_m(x) \md x = \cl{0}{\infty} \mr f(x)[S^*_{G_{0,m}}(t)w_m](x) \md x\\& = \cl{0}{\infty}\Psi_m(x,t)\mr f(x) \md x \\
&\leq e^{-d_m R^{\gamma_0} t} \| \mr f\|_{[0,m]} + \frac{D_m R^{\gamma_0}}{d_mR^{\gamma_0} + \theta(n)}R^{m-n}(\sigma_n(t)-e^{-d_m R^{\gamma_0} t})\| \mr f\|_{[0,n]}\\
&\leq e^{-d_m R^{\gamma_0} t} \| \mr f\|_{[0,m]} + D'_mR^{m-n}\sigma_n(t)\| \mr f\|_{[0,n]}\\
&= \cl{0}{\infty}(e^{-d_m R^{\gamma_0} t} w_m(x) + D'_m R^{m-n} \sigma_n(t) w_n(x))\mr f(x) \md x.
\end{split}
\end{equation}
Since all positive $C^\infty_0(\mbb R_+)$ functions are in $D(G_{0,m})_+$, this leads to
\begin{equation}
\Psi_m(x,t) \leq e^{-d_m R^{\gamma_0} t} w_m(x) + D'_m R^{m-n} \sigma_n(t) w_n(x),
\label{Psi1}
\end{equation}
for almost any $x>0$ and each $R\geq R_m$.
Next, as in the proof of \cite[Lemma 2.7]{Ber2019}, we use the fact that $t, x$ and $R$ are independent. Consequently, for fixed $t$ and $x$, with $x \geq e^{\frac{d_m R_m^{\gamma_0} t}{m-n}}$, we can define $R \,(=R(x,t))$ by $R:= \left(\frac{m-n}{d_m}\frac{\log x}{t}\right)^{1/\gamma_0}$, so that $R\geq R_m$ and $e^{-d_m R^{\gamma_0} t} = x^{n-m}$. It then follows from \eqref{Psi1} that
\begin{equation}
\label{Mom5}
\begin{split}
\Psi_m(x,t) &\leq x^{n-m}w_m(x) + D'_m \left(\frac{m-n}{d_m}\right)^{\frac{m-n}{\gamma_0}}t^{\frac{n-m}{\gamma_0}}(\log x)^{\frac{m-n}{\gamma_0}} \sigma_n(t) w_n(x)\\
&\leq D_{m,n,p}\, \widehat{\sigma}_n(t)t^{\frac{n-m}{\gamma_0}} w_{p}(x),
\end{split}
\end{equation}
where $p$ is any number greater than $n$, the function $\widehat{\sigma}_n(t)$ is bounded as $t\to 0^+$ and exponentially bounded as $t\to \infty$, and $D_{m,n,p}$ is a constant depending on $m,n,p$. For $x < e^{\frac{d_m R_m^{\gamma_0} t}{m-n}},$ we take $R=R_m$ and use the fact that $w_m(x)$ and $w_n(x)$ are increasing functions to obtain
\begin{equation}
\label{Mom6}
\begin{split}
\Psi_m(x,t) &\leq e^{-d_m R_m^{\gamma_0} t} w_m(x) + D'_m R_m^{m-n} \sigma_n(t) w_n(x)\\& \leq e^{-d_m R_m^{\gamma_0} t} w_m\left(e^{\frac{d_m R_m^{\gamma_0} t}{m-n}}\right) + D'_m R_m^{m-n} \sigma_n(t) w_n\left(e^{\frac{d_m R_m^{\gamma_0} t}{m-n}}\right)\\
&\leq D_{m,n} \widetilde{\sigma}_n(t) e^{\frac{md_m R_m^{\gamma_0} t}{m-n}} \leq D_{m,n}\widetilde{\sigma}_n(t) e^{\frac{md_m R_m^{\gamma_0} t}{m-n}} w_{p}(x).
\end{split}
\end{equation}
Summarising, there are constants $C=C(m,n,p)$ and $\theta = \theta(m,n)$ such that, for almost all $x>0$ and $t>0$,
$$
\Psi_m(x,t) \leq Ce^{\theta t} t^{\frac{n-m}{\gamma_0}} w_{p}(x)
$$
and hence, using \eqref{Mom3},
$$
\|S_{G_{0,p}}(t)\mr f\|_{[0,m]} \leq Ce^{\theta t} t^{\frac{n-m}{\gamma_0}}\cl{0}{\infty} \mr f(x) w_{p}(x) \md x.
$$
The inequality can be extended to $\mr f \in X_{0,p}$ by linearity and density.
\end{proof}
\begin{corollary}
Let the assumptions of Theorem \ref{regthm} be satisfied. Then $S_{G_{0,p}}(t):D(G_{0,p}) \to D(G_{0,m})$ for all $t>0$.\label{correg}
\end{corollary}
\begin{proof}
Let $m,n$ and $p$ be as in Theorem \ref{regthm}, and let $f \in D(G_{0,p})$. Since $f$ and $G_{0,p}f$ belong to $X_{0,p}$, both $S_{G_{0,p}}(t)f$ and $S_{G_{0,p}}(t)G_{0,p}f$ are in $X_{0,m}$ for $t>0$, and therefore we can evaluate
$$\frac{S_{G_{0,m}}(h) - I}{h}S_{G_{0,p}}(t)f = \frac{S_{G_{0,p}}(h) - I}{h}S_{G_{0,p}}(t)f=S_{G_{0,p}}(t)\frac{S_{G_{0,p}}(h) - I}{h}f.$$
It then follows from Theorem \ref{regthm} that
\begin{align}
&\lim\limits_{h\to 0^+}\left\|\frac{S_{G_{0,m}}(h) - I}{h}S_{G_{0,p}}(t)f - S_{G_{0,p}}(t)G_{0,p} f\right\|_{[0,m]}\nn\\
&\phantom{xx}=\lim\limits_{h\to 0^+}\left\|S_{G_{0,p}}(t)\left(\frac{S_{G_{0,p}}(h) - I}{h}f - G_{0,p} f\right)\right\|_{[0,m]} \nn\\ &\phantom{xx} \leq \lim\limits_{h\to 0^+}Ce^{\theta t}t^{-\frac{m-n}{\gamma_0}}\left \|\frac{S_{G_{0,p}}(h) - I}{h}f - G_{0,p} f\right\|_{[0,p]}
=0,\label{247}
\end{align}
which establishes that $S_{G_{0,p}}(t)f \in D(G_{0,m})$ for all $t > 0$.
\end{proof}
\begin{corollary} Assume that \eqref{fmlras}, \eqref{PhPr005}, \eqref{goodchar}, \eqref{massump} and \eqref{assa1} are all satisfied, and let $p>\max\{1,l\}$. Then, for each
$\mr f \in X_{0,m}\cap D(G_{0,p}),$ problem \eqref{reml} has a classical solution in $X_{0,m}$.
\end{corollary}
\begin{proof}
Let $f(t) = S_{G_{0,m}}(t)\mr f$. We can assume that $p< m$, as otherwise $\mr f \in D(G_{0,m})$. Then, for all $t>0$, $S_{G_{0,p}}(t)\mr f=S_{G_{0,m}}(t)\mr f$ and, by Corollary \ref{correg}, $S_{G_{0,p}}(t)\mr f\in D(G_{0,m})$ so that, as in \eqref{247},
\begin{align*}
\lim\limits_{h\to 0^+} \!\frac{f(t+h)\! -\! f(t)}{h} \! &=\! \lim\limits_{h\to 0^+}\!\frac{S_{G_{0,m}} (h)\! -\! I}{h} S_{G_{0,m}} (t)\mr f =\!
\lim\limits_{h\to 0^+}\frac{S_{G_{0,p}} (h)\! -\! I}{h} S_{G_{0,p}}(t)\mr f \\
& = G_{0,p} S_{G_{0,p}} (t)\mr f = G_{0,m} S_{G_{0,m}} (t)\mr f
\end{align*}
in $ X_{0,m},$ where the last equality follows from Corollary \ref{correg}.
\end{proof}
\section{Coagulation-fragmentation with growth}
The results obtained in the previous section can now be exploited to establish the well-posedness of the initial value problem (IVP)
\eqref{initprof}. The restrictions placed on $r,a$ and $b$ for Theorem \ref{regthm} to hold continue to be assumed, but now we specify that $a_1(x) := \beta(1+x^\alpha)$, where $\beta$ is a constant that will be determined later, and $0 < \alpha < \gamma_0$, so that, due to \eqref{assa1}, $a_1(x)/a(x)$ remains bounded as $x \to \infty$. The coagulation kernel is required to satisfy
\begin{equation}
k(x,y) \leq k_0 (1+ x^\alpha)(1+y^\alpha),
\label{kass}
\end{equation}
for some positive constant $k_0$. It is convenient to express \eqref{initprof} in the form
\begin{eqnarray}
\partial_t f(x,t) &=& - \p_x[r(x)f(x,t)] - \beta(1+x^\alpha)f(x,t) + {\mathfrak F}f(x,t)\nn\\
&&\phantom{xx} +
\mathfrak{K}_\beta f(x,t)\ , \ (x,t)\in \mbb{R}_+^2, \label{rem2} \\
f(x,0) &=& \mr f(x)\ , \ \ x\in \mathbb{R}_+\ , \label{rem3}
\end{eqnarray}
where
\begin{equation}\label{modifiedcoag}
\mathfrak{K}_\beta f(x,t):= \beta(1+x^\alpha)f(x,t) + \mathfrak{K}f(x,t),
\end{equation}
and $\mathfrak{F},\,\mathfrak{K}$ are given by \eqref{wlcontsfrag} and \eqref{wlcontscoag} respectively. Denoting the generator of the growth-fragmentation semigroup in this case by $G_{0,m}^{(\beta)}$, the corresponding abstract formulation of the IVP \eqref{rem2} - \eqref{rem3} can be written as
\begin{equation}\label{ACPbeta}
\frac{d}{dt}f(t) = G_{0,m}^{(\beta)}f(t) + K_{0,m}^{(\beta)}f(t),\ t > 0; \ \ f(0) = \mr f,
\end{equation}
where the operator $K_{0,m}^{(\beta)}$ is defined on $X_{0,m}$ via
\begin{eqnarray}
(K_{0,m}^{(\beta)}f)(x) &:=& \beta(1+x^\alpha)f(x) + \frac{1}{2}\,\int_0^x
k(x-y,y)f(x-y)f(y)\,\md y \nn \\
&& \quad - f(x)\,\int_0^{\infty}
k(x,y)f(y)\,\md y,\ x \in \mbb{R}_+. \label{coagop}
\end{eqnarray}
The following inequalities will often be used. For $0\leq \delta \leq \eta$ and $x\geq 0$,
\begin{equation}
(1+x^\delta)\leq 2(1+x^\eta)
\label{in1}
\end{equation}
and, for $\delta,\eta\geq 0$ and $x\geq 0$,
\begin{equation}
(1+x^\delta)(1+x^\eta)\leq 4(1+x^{\delta+\eta}).
\label{in2}
\end{equation}
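Both inequalities are elementary; for \eqref{in2}, say, each of $1$, $x^{\delta}$, $x^{\eta}$ and $x^{\delta+\eta}$ is bounded by $1+x^{\delta+\eta}$, whence
\[
(1+x^{\delta})(1+x^{\eta}) = 1 + x^{\delta} + x^{\eta} + x^{\delta+\eta} \leq 4(1+x^{\delta+\eta}),
\]
and \eqref{in1} follows similarly upon considering $x\leq 1$ and $x\geq 1$ separately.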
\subsection{Local existence}
We begin by proving the local (in time) existence and uniqueness of a mild solution to \eqref{ACPbeta}.
\begin{theorem}\label{lm3.1}
Let $r, a, b$ satisfy the conditions for Theorem \ref{regthm} to hold, and define $a_1(x) := \beta(1+x^\alpha)$, where $\beta > 0$ is appropriately chosen and $0 < \alpha < \gamma_0$, with $\gamma_0$ the constant given in \eqref{assa1}. Further, let $k$ satisfy \eqref{kass} and let $m > \alpha+\max\{1,l\}$. Then, for each $\mr f \in X_{0,m,+}$, the semilinear ACP \eqref{ACPbeta} has a unique nonnegative mild solution $f \in C([0, \tau_{\max}), X_{0,m})$ defined on its maximal interval of existence $[0,\tau_{\max})$, where $\tau_{\max} = \tau_{\max}(\mr f)$. If $\tau_{\max}< \infty$, then $\|f(t)\|_{[0,m]}$ is unbounded as $t\to \tau_{\max}^-$.
\end{theorem}
\begin{proof}
Let $p$ be defined by
\begin{equation}
p:= m-\alpha,\label{p}
\end{equation}
and, noting that $(m-n)/\gamma_0 = (p-n)/\gamma_0 + \alpha/\gamma_0$, we are then able to choose $n < p$ such that $(m-n)/\gamma_0 < 1$ and $n>\max\{1,l\}$, so that $m,n,p$ satisfy the assumptions of Theorem \ref{regthm}. We begin by showing that the bilinear form $\mc K_{0,m}^{(\beta)}$, defined by
\begin{align}
[\mc K_{0,m}^{(\beta)}(f,g)](x) &:= \beta(1 +x^\alpha)f(x)- f(x)\!\!\cl{0}{\infty}\!\!k(x,y)g(y)\md y\nn\\
&\phantom{xx}+
\frac{1}{2}\cl{0}{x}\!\!k(x-y,y)f(x-y)g(y)\md y,\label{mcK}
\end{align}
is a continuous mapping from $X_{0,m} \times X_{0,m}$ into $X_{0,p}$. From \eqref{kass}, \eqref{in2} and \eqref{p}, we obtain, for all $f,g \in X_{0,m}$,
\begin{equation}\label{wllinear}
\beta \cl{0}{\infty}(1+x^\alpha)|f(x)|w_p(x) \md x \leq 4\beta \|f\|_{[0,m]},
\end{equation}
\begin{equation}\label{cest1}
\int_{0}^\infty |f(x)|\left(\int_{0}^\infty
k(x,y) |g(y)|\md y\right)w_p(x)\md x \leq 4k_0 \|f\|_{[0,m]}\|g\|_{[0,m]}
\end{equation}
and, in a similar way,
\begin{equation}\label{cest2}\begin{split}
&\frac{1}{2}\int_{0}^{\infty}\!\!\! \left(\int_{0}^{x}
k(x-y,y) |f(y)||g(x-y)|\md y\right)w_{p}(x)\md x\\&\phantom{xxx}= \frac{1}{2}\int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\! k(x,y)|f(y)||g(x)|w_{p}(x+y)\md x\md y\\
&\phantom{xxx}\leq 2^{p-1}k_0\cl{0}{\infty}\cl{0}{\infty} |f(y)||g(x)| (1+x^\alpha)(1+y^\alpha) (w_{p}(x)+w_{p}(y))\md x\md y \\
&\phantom{xxx}\leq 2^{p-1} k_0(4\|g\|_{[0,m]} \|f\|_{[0,\alpha]} + 4\|g\|_{[0,\alpha]}\|f\|_{[0,m]})\leq 2^{p+3} k_0\|f\|_{[0,m]}\|g\|_{[0,m]}.\end{split}
\end{equation}
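In the second step of \eqref{cest2} we have also used the standard convexity bound, valid for $p\geq 1$ and $x,y\geq 0$,
\[
w_p(x+y) = 1+(x+y)^p \leq 1+2^{p-1}(x^p+y^p) \leq 2^{p-1}\left(w_p(x)+w_p(y)\right).
\]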
Hence,
\begin{equation}\label{boundedbl}
\|\mc K_{0,m}^{(\beta)}(f,g) \|_{[0,p]} \leq \left(4\beta +(4+2^{p+3})k_0\|g\|_{[0,m]}\right)\|f\|_{[0,m]},
\end{equation}
for all $f,g \in X_{0,m}$. Since $K_{0,m}^{(\beta)}f = \mc K_{0,m}^{(\beta)}(f,f)$, it follows that $K_{0,m}^{(\beta)}$ is a continuous mapping from $X_{0,m}$ into $X_{0,p}$. Consequently, the integral equation that arises as the mild formulation of \eqref{ACPbeta} can be written as
\begin{equation}\label{eq3.2}
f(t) = S_{G_{0,m}^{(\beta)}}(t) \mr f + \int_{0}^{t} S_{G_{0,p}^{(\beta)}} (t - s) K_{0,m}^{(\beta)} f(s) \md s.
\end{equation}
Next consider the set
\begin{equation}
\mc U := \{ f \in X_{0,m,+}:\;\| f\|_{[0,m]}\leq 1+b\},
\label{ball}
\end{equation}
for some arbitrarily fixed $b>0$. For each $ f\in \mc U$, we can use \eqref{kass}, \eqref{in1} and the fact that $\alpha < m$, to obtain
$$
\int_{0}^{\infty} k(x,y)f(y)\md y \leq 2k_0 (1 + x^\alpha)\|f\|_{[0,m]} \leq \beta(1+x^\alpha), \ \mbox{ for all } x > 0,
$$
where we now define \begin{equation}
\beta := 2k_0(1 + b),
\label{gammak}\end{equation}
and therefore, with this choice of $\beta$,
\begin{equation}\label{Cpos}
(K_{0,m}^{(\beta)} f)(x) \geq \frac{1}{2}\cl{0}{x}k(x-y,y) f(x-y)f(y)\md y \geq 0.
\end{equation}
Also, from \eqref{boundedbl}, we have
\begin{equation}
\|K_{0,m}^{(\beta)} f\|_{[0,p]} \leq K(\mc U), \ \mbox{ for all } f \in \mc U,
\label{cest2a}
\end{equation}
where $K(\mc U) = \frac{\beta^2}{k_0}(2 +(1+2^{p+1}))$, and
\begin{align}
&\Vert K_{0,m}^{(\beta)} f - K_{0,m}^{(\beta)} g\Vert_{[0,p]} \nn\\
&\leq 4\beta \Vert f-g\Vert_{[0,m]} + (4+ 2^{p+3})k_0 \left( \Vert f \Vert_{[0,m]}+ \Vert g \Vert_{[0,m]}\right)\Vert f - g \Vert_{[0,m]}\nn\\&\leq L(\mathcal{U})\Vert f - g \Vert_{[0,m]}, \label{K1B}
\end{align}
for all $f,g \in \mc U$, where, by \eqref{gammak}, $
L(\mathcal{U}) = 8\beta(1 + 2^{p}).
$
For $\mr f \in X_{0,m,+}$ satisfying
\begin{equation}\label{wlfin}
\|\mr f\|_{[0,m]} \leq b,
\end{equation}
we define the operator
\begin{equation}\label{wlSL}
\textsl{T}f(t) = S_{G_{0,m}^{(\beta)}}(t)\mr f +
\int_{0}^{t}S_{G_{0,p}^{(\beta)}}(t-s)K_{0,m}^{(\beta)} f(s)\md s
\end{equation}
in the space $Y_m=C([0,\tau], \mc U),$ with $\mc U$ defined by (\ref{ball}) and $\tau$ to be determined so that $\textsl{T}$ is a contraction on $Y_m$, when $Y_m$ is equipped with the metric induced by the norm from $C([0,\tau], X_{0,m})$. First, observe that $\textsl{T}f \in C([0,\tau], X_{0,m,+})$ for all $f \in Y_m$. Indeed, for any $t\geq 0$ and $h>0$, with $t + h \leq \tau$,
\begin{align*}
&\|\textsl{T}f(t+h) - \textsl{T}f(t)\|_{[0,m]}\\
= &\left\|\int_{0}^{t+h}S_{G_{0,p}^{(\beta)}}(t+h-s)K_{0,m}^{(\beta)} f(s)\md s-\int_{0}^{t}S_{G_{0,p}^{(\beta)}}(t-s)K_{0,m}^{(\beta)} f(s)\md s\right\|_{[0,m]}\\
\leq & \int_{t}^{t+h}\left\|S_{G_{0,p}^{(\beta)}}(t+h-s)K_{0,m}^{(\beta)} f(s)\right\|_{[0,m]}\md s \\
&+ \int_{0}^{t}\left\|S_{G_{0,p}^{(\beta)}}(t-s)(S_{G_{0,p}^{(\beta)}}(h)-I)K_{0,m}^{(\beta)} f(s)\right\|_{[0,m]}\md s= I_1(h)+I_2(h).
\end{align*}
Now, by \eqref{regest} and \eqref{cest2a},
$$
\|S_{G^{(\beta)}_{0,p}}(t+h-s)K_{0,m}^{(\beta)} f(s)\|_{[0,m]}\leq C(m,n,p)e^{\theta(m,n) (t+h-s)} (t+h-s)^{\frac{n-m}{\gamma_0}} K(\mc U).
$$
Since $(n-m)/\gamma_0 > -1$, it follows that
$$
\cl{t}{t+h}(t+h-s)^{\frac{n-m}{\gamma_0}} ds = \cl{0}{h} \sigma^{\frac{n-m}{\gamma_0}} d\sigma \to 0 \ \mbox{ as } h \to 0^+,
$$
and therefore $\lim_{h\to 0^+} I_1(h) = 0$.
Similarly,
\begin{align}
&\|S_{G_{0,p}^{(\beta)}}(t-s)(S_{G_{0,p}^{(\beta)}}(h)-I)K_{0,m}^{(\beta)} f(s)\|_{[0,m]} \nn \\
\leq & C(m,n,p)e^{\theta(m,n) (t-s)} (t-s)^{\frac{n-m}{\gamma_0}}\|(S_{G_{0,p}^{(\beta)}}(h)-I)K_{0,m}^{(\beta)} f(s)\|_{[0,p]}\nn\\
&\phantom{xx}\leq C(m,n,p)e^{\theta(m,n) (t-s)} (t-s)^{\frac{n-m}{\gamma_0}}(C(p)
e^{\theta(p)h}+1)\|K_{0,m}^{(\beta)} f(s)\|_{[0,p]}\nn\\
& \phantom{xx}\leq C(m,n,p,\tau)(t-s)^{\frac{n-m}{\gamma_0}},\label{osz1}
\end{align}
where $C(m,n,p,\tau)$ is a constant that is independent of $h$. Thus, from the second line in the above calculation, we see that the integrand in $I_2(h)$ converges to zero as $h \to 0^+$ for each $0\leq s< t$, and the last line ascertains that this convergence is dominated by an integrable function. Hence, an application of the Lebesgue dominated convergence theorem shows that $\textsl{T}f$ is right continuous at $t$ for all $t \in [0,\tau)$. When $0<h<t \leq \tau$, we have
\begin{align*}
&\|\textsl{T}f(t-h) - \textsl{T}f(t)\|_{[0,m]} \\
= &\left\|\int_{0}^{t-h}S_{G_{0,p}^{(\beta)}}(t-h-s)K_{0,m}^{(\beta)} f(s)\md s-\int_{0}^{t}S_{G_{0,p}^{(\beta)}}(t-s)K_{0,m}^{(\beta)} f(s)\md s\right\|_{[0,m]}\\
\leq &\int_{t-h}^{t}\left\|S_{G_{0,p}^{(\beta)}}(t-s)K_{0,m}^{(\beta)} f(s)\right\|_{[0,m]}\md s \\
&+ \int_{0}^{t-h}\left\|S_{G_{0,p}^{(\beta)}}(t-h-s)(I-S_{G_{0,p}^{(\beta)}}(h))K_{0,m}^{(\beta)} f(s)\right\|_{[0,m]} \md s= I'_1(h)+I'_2(h).
\end{align*}
Arguing as before, we obtain $\lim_{h\to 0^+}I'_1(h) =0$. As for $I'_2(h),$ we rewrite it as
\begin{align}
I'_2(h) &= \int_{h}^{t}\left\|S_{G_{0,p}^{(\beta)}}(t-\sigma)(I-S_{G_{0,p}^{(\beta)}}(h))K_{0,m}^{(\beta)} f(\sigma - h)\right\|_{[0,m]}\md \sigma\nn\\
&= \int_{0}^{t}\chi_{[h,t]}\left\|S_{G_{0,p}^{(\beta)}}(t-\sigma)(I-S_{G_{0,p}^{(\beta)}}(h))K_{0,m}^{(\beta)} f(\sigma - h)\right\|_{[0,m]} \md \sigma,\label{I'}
\end{align}
where $\chi_\Omega$ is the characteristic function of $\Omega$. Since $t \mapsto K_{0,m}^{(\beta)} f(t)$ is a continuous function in $X_{0,p}$, $\lim\limits_{h\to 0^+} K_{0,m}^{(\beta)} f(\sigma - h) = K_{0,m}^{(\beta)} f(\sigma)$ for each $\sigma > 0$. Then, on account of the local uniform boundedness of \sem{S_{G_{0,p}^{(\beta)}}}, a corollary of the Banach-Steinhaus theorem ensures that $\lim\limits_{h\to 0^+} (I-S_{G_{0,p}^{(\beta)}}(h))K_{0,m}^{(\beta)} f(\sigma - h) = 0$ for any fixed $\sigma>0$, and we see that the integrand in \eqref{I'} converges to zero on $[0,t]$. Moreover, from \eqref{osz1},
\begin{align*}
\|S_{G_{0,p}^{(\beta)}}(t-\sigma)(I-S_{G_{0,p}^{(\beta)}}(h))K_{0,m}^{(\beta)} f(\sigma-h)\|_{[0,m]}&\leq
C(m,n,p,\tau)(t-\sigma)^{\frac{n-m}{\gamma_0}}
\end{align*}
for all $\sigma \in [h,t]$, where, by \eqref{cest2a}, $K_{0,m}^{(\beta)} f(\sigma-h)$ is estimated by the $Y_m$ norm of $f$, and this is independent of $h$. Consequently, on applying the Lebesgue dominated convergence theorem once again, we obtain $\lim_{h\to 0^+}I'_2(h) =0$. Further, thanks to \eqref{gammak},
$\textsl{T}f(t) \geq 0$ since $f(s)\geq 0$ for all $s \in [0,\tau]$.
By continuity, for any initial condition $\mr f$ satisfying \eqref{wlfin} we can choose $\tau'$ such that $\|\textsl{T}f(t)\|_{[0,m]} \leq 1+b$ for $0\leq t \leq \tau'$. We need, however, a more uniform estimate. For this, using \eqref{regest}, we get
\begin{equation}\label{inter}
\begin{split}
\|\textsl{T}f(t)\|_{[0,m]}&\leq \|S_{G_{0,m}^{(\beta)}}(t)\mr f\|_{[0,m]} + \int_{0}^{t}\|S_{G_{0,p}^{(\beta)}}(t-s)K_{0,m}^{(\beta)}f(s)\|_{[0,m]}\md s\\
&\leq \|S_{G_{0,m}^{(\beta)}}(t)\mr f\|_{[0,m]} + C(m,n,p)e^{\theta(m,n)\tau} K(\mc U)\int_0^t (t-s)^{-\frac{m-n}{\gamma_0}} \md s \\
&\leq \|S_{G_{0,m}^{(\beta)}}(t)\mr f\|_{[0,m]}+ C(m,n,p)e^{\theta(m,n)\tau} K(\mc U)\frac{\gamma_0}{\gamma_0+n-m} \tau^{\frac{\gamma_0+n-m}{\gamma_0}}\\
&\leq 1+b,
\end{split}
\end{equation}
provided that $f(s) \in \mathcal{U}$ for $0 \leq s \leq \tau_{1}(\mc U),$ where the existence of such a $\tau_{1}$ is ensured by the fact that
\begin{equation}
\lim\limits_{\tau\to 0^+}\|S_{G_{0,m}^{(\beta)}}(\tau)\mr f\|_{[0,m]} =\|\mr f\|_{[0,m]} \quad \text{and}\quad \lim\limits_{\tau\to 0^+}e^{\theta(m,n)\tau} \tau^{\frac{\gamma_0+n-m}{\gamma_0}}=0.
\label{lims}
\end{equation}
Finally, to establish that $\textsl{T}$ is a contraction on $Y_m$ when $\tau$ is sufficiently small, we use (\ref{K1B}) to obtain
\begin{eqnarray*}
&&\|\textsl{T}f(t)-\textsl{T}g(t) \|_{[0,m]} \leq \cl{0}{t}\|S_{G_{0,p}^{(\beta)}}(t-s)(K_{0,m}^{(\beta)} f(s)- K_{0,m}^{(\beta)} g(s))\|_{[0,m]}\md s \\
&&\phantom{xx}\leq C(m,n,p)e^{\theta(m,n)\tau} L(\mc U)\sup_{0\leq s\leq \tau}\| f(s)- g(s)\|_{[0,m]} \cl{0}{t}(t-s)^{-\frac{m-n}{\gamma_0}} \md s\\
&&\phantom{xx}\leq C(m,n,p)e^{\theta(m,n)\tau} L(\mc U)\frac{\gamma_0}{\gamma_0+n-m} \tau^{\frac{\gamma_0+n-m}{\gamma_0}}\sup_{0\leq s\leq \tau}\| f(s)-g(s)\|_{[0,m]}.
\end{eqnarray*}
We now choose $\tau_{2}$ such that
$$
C(m,n,p)e^{\theta(m,n)\tau} L(\mc U)\frac{\gamma_0}{\gamma_0+n-m} \tau^{\frac{\gamma_0+n-m}{\gamma_0}}<1,
$$
for any $0< \tau\leq \tau_2$. Hence, by taking $\tau = \min\{\tau_1,\tau_2\}>0$, we see that $\textsl{T}$ is a contractive mapping on $Y_m$. We note that $\tau$ is uniform on bounded subsets of $X_{0,m}$. Hence, in the usual way, we can extend the solution to the maximal interval $[0,\tau_{\max})$. The last statement of the theorem follows from the preceding observation that the length of the interval of existence is uniform on bounded subsets, and thus, if the solution is bounded in any left neighbourhood of $\tau_{\max}$, it can be extended beyond it. \end{proof}
The next objective is to prove that the mild solution of the previous theorem is, in fact, a classical solution of \eqref{ACPbeta} under an additional restriction on $\mr f$. We require the following three lemmas.
\begin{lemma}
$K_{0,m}^{(\beta)} :X_{0,m} \to X_{0,p}$ is continuously Fr\'{e}chet differentiable. \label{lemdif}
\end{lemma}
\begin{proof}
Recall that $K_{0,m}^{(\beta)} f= \mc K_{0,m}^{(\beta)}(f,f)$, where $\mc K_{0,m}^{(\beta)}: X_{0,m}\times X_{0,m} \mapsto X_{0,p}$ is the bilinear form defined by \eqref{mcK}.
Using \eqref{cest1} and \eqref{cest2}, we see that
$K_{0,m}^{(\beta)}$ is Fr\'{e}chet differentiable at each $f \in X_{0,m}$, with Fr\'{e}chet derivative given by
\[
[\p K_{0,m}^{(\beta)} f]h := \beta w_{\alpha} h + \mc K_{0,m}^{(0)}(f,h) + \mc K_{0,m}^{(0)}(h,f)\,, \quad h \in X_{0,m}.
\]
Moreover, again by \eqref{cest1} and \eqref{cest2}, for any $f,g,h \in X_{0,m}$,
\begin{eqnarray*}
\Vert [\p K_{0,m}^{(\beta)} f]h - [\p K_{0,m}^{(\beta)} g]h\Vert_{[0,p]}
&=& \Vert \mc K_{0,m}^{(0)}(f-g,h) + \mc K_{0,m}^{(0)}(h,f-g) \Vert_{[0,p]} \\
&\leq& 8\beta(1 + 2^{p}) \Vert h \Vert_{[0,m]}\,\Vert f - g \Vert_{[0,m]} \to 0,
\end{eqnarray*}
as $\Vert f - g \Vert_{[0,m]} \to 0$, uniformly in $\|h\|_{[0,m]}\leq 1$. Hence, the Fr\'{e}chet derivative is continuous.
\end{proof}
\begin{lemma}
Let $1<p<m$ and $0<T<\infty$ be arbitrary. Let \sem{S} be a strongly continuous semigroup on $X_{0,p}$ and $\{P(t)\}_{t\geq 0}$ be a family of bounded linear operators from $X_{0,m}$ to $X_{0,p}$ such that, for all $u \in X_{0,p}$ and $f \in X_{0,m}$,
\[
\|S(t)u\|_{[0,m]}\leq M(t)t^{-\kappa}\|u\|_{[0,p]}\,, \quad \|P(t)f\|_{[0,p]}\leq L(t)\|f\|_{[0,m]},
\]
where $M,L \in L_{\infty}([0,T])$ and $0<\kappa<1$. Moreover, let $g \in C((0,T], X_{0,m})$ be such that
$\|g(t)\|_{[0,m]} \leq G(t)t^{-\delta}$, where $G \in L_{\infty}([0,T])$ and $0<\delta<1$. Then the integral equation
\begin{equation}
f(t) = g(t) + \cl{0}{t}S(t-s)P(s)f(s)\md s
\label{inteq}
\end{equation}
has a unique solution $f$ in $C((0,T],X_{0,m})$ that satisfies $\|f(t)\|_{[0,m]}\leq F(t)t^{-\delta}$ for some function $F\in L_{\infty}([0,T]).$\label{leminteq}
\end{lemma}
\begin{proof}We use some ideas from \cite[Lemma 3.2]{Banasiak2019}. Denoting Laplace convolution by $\ast$ and defining $\theta_{r}(t) := t^{-r}$, a simple argument shows that $\theta_\delta\ast \theta_{\kappa}$ exists for any choice of $\delta < 1$ and $\kappa < 1$, with
\begin{equation}\label{WLconv}
(\theta_{\delta}\ast \theta_{\kappa})(t) = B(1-\delta,1-\kappa)\,t^{1-\delta-\kappa} = B(1-\delta,1-\kappa)\,\theta_{\delta + \kappa - 1}(t),
\end{equation}
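where $B$ is the Beta function; indeed, the substitution $s = tu$ shows that
\[
(\theta_{\delta}\ast\theta_{\kappa})(t) = \cl{0}{t}s^{-\delta}(t-s)^{-\kappa}\md s = t^{1-\delta-\kappa}\cl{0}{1}u^{-\delta}(1-u)^{-\kappa}\md u = B(1-\delta,1-\kappa)\,t^{1-\delta-\kappa}.
\]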
Let us denote by $M_T,L_T$ and $G_T$ the (essential) suprema of the respective functions $M, L$ and $G$ on $[0,T]$. Then, for all $t \in (0,T]$, we have
\begin{eqnarray*}
\|g_1(t)\|_{[0,m]} &:=& \left\|\cl{0}{t}S(t-s)P(s)g(s)\md s\right\|_{[0,m]} \leq M_T L_T G_T \cl{0}{t} (t-s)^{-\kappa} s^{-\delta}\md s \\&=& M_T L_T G_T B(1-\kappa, 1-\delta)t^{-(\delta+\kappa-1)},
\end{eqnarray*}
and, by induction,
\begin{eqnarray*}
\|g_n(t)\|_{[0,m]} &:=&\left\|\cl{0}{t}S(t-s)P(s)g_{n-1}(s)\md s\right\|_{[0,m]} \\
&\leq& (M_T L_T )^nG_T \prod\limits_{i=1}^{n}B(1-\kappa, i-(i-1)\kappa -\delta)t^{-(n\kappa -n +\delta)},
\end{eqnarray*}
for all $n \in \mathbb{N}$. Since $n -n\kappa -\delta = n(1 - \kappa - \delta/n)$, there exists $n_0 \in \mathbb{N}$ such that $n -n\kappa -\delta > 0$ for all $n \geq n_0$. Then, denoting $g(t) =g_0(t)$, we can re-write \eqref{inteq} as
\begin{equation}
f(t) - \sum\limits_{i=0}^{n_0-1} g_i(t) = g_{n_0}(t)+\cl{0}{t} S(t-s)P(s)\left(f(s) - \sum\limits_{i=0}^{n_0-1} g_i(s)\right)\md s,
\label{ineq1}
\end{equation}
where we have $g_{n_0} \in C([0,T], X_{0,m})$. Consider now an operator on $C([0,T], X_{0,m})$ given by the formula
$$
Qu(t) = g_{n_0}(t)+\cl{0}{t} S(t-s)P(s)u(s)\md s.
$$
The argument used in Theorem \ref{lm3.1} to prove the continuity of the operator $\textsl{T}$ can be applied again to establish the continuity of $Q$. Then, for $u,v \in C([0,T], X_{0,m})$ we obtain
\begin{align*}
\|Qu(t) -Qv(t)\|_{[0,m]} &\leq M_TL_T \sup\limits_{s\in [0,T]}\|u(s)-v(s)\|_{[0,m]}\cl{0}{t} (t-s)^{-\kappa}\md s \\&= M_TL_T \sup\limits_{s\in [0,T]}\|u(s)-v(s)\|_{[0,m]}B(1-\kappa,1)t^{1-\kappa}
\end{align*}
and, again by induction,
\begin{align*}
&\|Q^ku(t) -Q^kv(t)\|_{[0,m]}\\ &\leq M^k_TL^k_T \sup\limits_{s\in [0,T]}\|u(s)-v(s)\|_{[0,m]}\prod_{i=0}^{k-1} B(1-\kappa, i+1 -i\kappa) t^{-i(\kappa-1)}.
\end{align*}
Now, using the fact that $B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ and \cite[Inequality 6.1.47]{abramowitz1964handbook}
$$
\frac{\Gamma(y)}{\Gamma(x+y)} \leq c_x\, y^{-x}, \qquad x>0,\ y\geq 1,
$$
we see, on account of $ i+1 -i\kappa = i(1-\kappa) + 1\geq 1$ for $i\geq 0$, that
\begin{align*}
\prod_{i=0}^{k-1} B(1-\kappa, i+1 -i\kappa)&\leq \Gamma^k(1-\kappa)c_{1-\kappa}^k \prod_{i=0}^{k-1} (i(1-\kappa)+1)^{-(1-\kappa)}\\
&\leq \Gamma^k(1-\kappa)c_{1-\kappa}^k \left(\frac{1}{1-\kappa}\right)^{k(1-\kappa)}\left(\frac{1}{(k-1)!}\right)^{1-\kappa}.
\end{align*}
Hence, for some constant $C_T,$
\begin{align*}
\sup\limits_{t\in [0,T]}\|Q^ku(t) -Q^kv(t)\|_{[0,m]} &\leq \left(\frac{C_T^{\frac{k}{1-\kappa}}}{(k-1)!}\right)^{1-\kappa} \sup\limits_{s\in [0,T]}\|u(s)-v(s)\|_{[0,m]}
\end{align*}
and therefore there exists $k$ such that $Q^k$ is a contraction. Thus, the equation $u = Qu$ has a unique solution $u \in C([0,T], X_{0,m})$ (the uniqueness follows from the Gronwall-Henry inequality, see \cite[Lemma 3.2]{Banasiak2019}) and therefore
$$
f(t) = u(t) + \sum\limits_{i=0}^{n_0-1} g_i(t)
$$
is the unique solution of \eqref{inteq} satisfying the stated growth condition at $t=0$.
\end{proof}
We now give the following lemma which seems to belong to mathematical folklore.
\begin{lemma}
Let $X,Y$ be Banach spaces and let $K$ be a continuously Fr\'{e}chet differentiable operator from $X$ to $Y$ with Fr\'{e}chet derivative $\p K\in C( X, \mc L(X,Y))$. Further, let the remainder $\omega: X\times X\mapsto Y$ be defined by
$$
K(x+h)- K(x) - \p K(x) h = \omega(h,x), \quad x, h\in X.
$$
Then the function defined by $\omega_0(h,x) := \frac{\omega(h,x)}{\|h\|_X}$ for $h\neq 0$, and by $\omega_0(0,x) := 0$, is continuous.\label{rem}\end{lemma}
\begin{proof}
By the definition of Fr\'{e}chet differentiability, $\lim_{h\to 0}\frac{\omega(h,x)}{\|h\|_X} = 0$ for any $x\in X$.
The only questionable points are $(h,x) =(0,x)$. Let us consider $(h_n,x_n)\to (0,x)$, where $h_n\neq 0$ for $n \in \mathbb{N}$. Then
\begin{align*}
\|\omega_0(h_n,x_n)\|_Y& = \frac{\left\|K(x_n+h_n)- K(x_n) - \p K(x_n) h_n\right\|_Y }{\|h_n\|_X}\\
& \leq \cl{0}{1}\|\p K(x_n + th_n) - \p K(x_n) \| \md t.
\end{align*}
Now, $ \|\p K(x_n + th_n) - \p K(x_n) \| \to 0$ for any $t\in [0,1]$, and $\|\p K(x_n + th_n) - \p K(x_n) \|$ is bounded uniformly in $n$ since $\p K$ is continuous and the sets $\{x_n \}_{n \in \mbb N}$ and $\{x_n + th_n\}_{n\in \mbb N, t\in [0,1]}$ are relatively compact. Consequently, $\|\omega_0(h_n,x_n)\|_Y\to 0$ by the Lebesgue dominated convergence theorem. \end{proof}
In the next theorem we address the differentiability of the mild solution constructed in Theorem \ref{lm3.1} and show that it is a classical solution to \eqref{ACPbeta}. The result is similar to that for analytic semigroups, in that the mild solution in a smaller space (here $X_{0,m}$) is a classical solution in a bigger space (here $X_{0,p}$); see \cite[Definitions 7.0.1 \& 7.0.2]{Lun} or \cite[Section 4.7.1]{SY}.
\begin{theorem}\label{th3.4} Let the assumptions of Theorem \ref{lm3.1} hold and assume also that $\mr f \in X_{0,m} \cap D(G_{0,p}^{(\beta)})$, where $p = m-\alpha$. Then the mild solution $f$, defined on its maximal interval of existence $[0,\tau_{\max})$, satisfies $f \in C([0, \tau_{\max}), X_{0,m}) \cap C^{1}((0, \tau_{\max}), X_{0,m})\cap C((0, \tau_{\max}), D(G_{0,p}^{(\beta)}))$ and is a classical solution to \eqref{ACPbeta} in $X_{0,p}$.\end{theorem}
\begin{proof} The proof follows the lines of \cite[Theorem 6.1.5]{Pa} but additional steps are required due to the unboundedness of the nonlinear term. To simplify the notation we observe that it suffices to prove the additional regularity on $(0,\tau)$ of the local solution constructed in Theorem \ref{lm3.1}. If $\tau \ne \tau_{\max}$, then we extend the result in the usual manner to a larger interval $(0,\tau_1),\, \tau_1 > \tau$, by taking $f(t_0)$ as a new initial value, for some $0< t_0 < \tau$. Provided the theorem holds on $(0,\tau)$, we know that $f(t_0)\in D(G_{0,m}^{(\beta)}) \subset D(G_{0,p}^{(\beta)})$ and we repeat the proof on $(t_0, \tau_1)$ which overlaps with $(0,\tau)$ on an open interval and thus the theorem is valid on $(0,\tau_1)$. Continuing in this manner, we eventually reach $\tau_{\max}$.
As in the proof of Theorem \ref{lm3.1}, we choose $n$ so that $\kappa := \frac{m-n}{\gamma_0} \in (0,1).$
Since $\mr f \in D(G_{0,p}^{(\beta)})$, the mild solution $f$ satisfies the integral equation
\begin{align}
f(t) &= S_{G_{0,m}^{(\beta)}}(t)\mr f + \cl{0}{t} S_{G_{0,p}^{(\beta)}}(t-s)K_{0,m}^{(\beta)} f(s)\md s\nn\\
& = S_{G_{0,p}^{(\beta)}}(t)\mr f + \cl{0}{t} S_{G_{0,p}^{(\beta)}}(s)K_{0,m}^{(\beta)} f(t-s)\md s.
\label{mild}
\end{align}
We first consider the Lipschitz continuity of $f$. Let $t>0$ and $h>0$. We have
\begin{eqnarray*}
&& \frac{f(t+h) - f(t)}{h} \ = \ \frac{1}{h}\left(S_{G_{0,m}^{(\beta)}}(h) - I\right) S_{G_{0,m}^{(\beta)}} (t)\mr f \\
&+& \frac{1}{h} \int_{0}^{h} S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s)\md s\\&
+& \frac{1}{h} \int_{0}^{t} S_{G_{0,p}^{(\beta)}} (t - s)(K_{0,m}^{(\beta)} f(s+h)-K_{0,m}^{(\beta)} f(s)) \md s =:\ I_1(h) + I_2(h) + I_3(h).
\end{eqnarray*}
Arguing as in Corollary \ref{correg}, we have
\begin{eqnarray*}
&& \left\|\frac{1}{h}\left(S_{G_{0,m}^{(\beta)}} (h) - I\right)\, S_{G_{0,m}^{(\beta)}} (t)\mr f\right\|_{[0,m]} \ = \ \ \left\|\frac{1}{h} S_{G_{0,p}^{(\beta)}}(t)\left(S_{G_{0,p}^{(\beta)}} (h) - I\right) \mr f\right\|_{[0,m]}\\
&& \ \leq C(\tau)t^{-\kappa} \left\|\frac{1}{h}\left(S_{G_{0,p}^{(\beta)}} (h) - I\right) \mr f\right\|_{[0,p]}
\leq C_1(\tau)t^{-\kappa} \|G_{0,p}^{(\beta)}\mr f\|_{[0,p]},
\end{eqnarray*}
where $C(\tau) = Ce^{\theta \tau}$, see \eqref{regest}, and $C_1(\tau) = C(\tau)\max\limits_{0\leq t\leq \tau}\|S_{G_{0,p}^{(\beta)}}(t)\|_{[0,p]}$.
Next, using \eqref{cest2a},
$$
\| S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s)\|_{[0,m]}\leq C(\tau) K(\mc U) (t+h-s)^{-\kappa}
$$
and
\begin{align*}
\frac{1}{h} \cl{0}{h} \| S_{G_{0,p}^{(\beta)}} (t + h -s) K_{0,m}^{(\beta)} f(s)\|_{[0,m]} \md s &\leq C(\tau) K(\mc U)\frac{1}{h} \cl{0}{h}(t+h-s)^{-\kappa}\md s \\
\leq C(\tau) K(\mc U)t^{-\kappa} \frac{1}{h} \cl{0}{h}\md s &= C(\tau) K(\mc U)t^{-\kappa}.
\end{align*}
Finally, as in \eqref{K1B},
\begin{align*}
&\frac{1}{h} \int_{0}^{t}\| S_{G_{0,p}^{(\beta)}} (t - s)(K_{0,m}^{(\beta)} f(s+h)-K_{0,m}^{(\beta)} f(s))\|_{[0,m]} \md s\\& \leq C(\tau)L(\mc U) \cl{0}{t}(t-s)^{-\kappa}\frac{\|f(s +h) - f(s)\|_{[0,m]}}{h} \md s. \end{align*}
Thus, for some constants $C_1, C_2$
$$
\frac{\|f(t +h) - f(t)\|_{[0,m]}}{h} \leq \frac{C_1}{t^\kappa} + C_2\cl{0}{t}(t-s)^{-\kappa}\frac{\|f(s +h) - f(s)\|_{[0,m]}}{h} \md s
$$
and, by the Gronwall--Henry inequality \cite[Lemma 7.1]{BLL}, for some constant $C_3$,
\begin{equation}
\frac{\|f(t +h) - f(t)\|_{[0,m]}}{h} \leq C_3t^{-\kappa}.
\label{LC1}
\end{equation}
We note that, in the estimates above, we can use the same bounds for $f(t)$ and $f(t+h)$, as the function $t\mapsto f(t+h)$ can be treated as the solution for the initial value $f(h)$, which is in $\mc U$ for $h$ small enough.
To prove the differentiability of $f$, first we observe that formally differentiating \eqref{mild} gives, for $t \in (0,\tau),$
\begin{equation}
\p_tf(t) = S_{G_{0,p}^{(\beta)}}(t)G_{0,p}^{(\beta)}\mr f + S_{G_{0,p}^{(\beta)}}(t)K_{0,m}^{(\beta)} \mr f + \cl{0}{t} S_{G_{0,p}^{(\beta)}}(t-s)\partial K_{0,m}^{(\beta)}f(s)\p_sf(s)\md s.
\label{mild1}
\end{equation}
On defining $g(t) :=G_{0,p}^{(\beta)} S_{G_{0,p}^{(\beta)}}(t)\mr f + S_{G_{0,p}^{(\beta)}}(t)K_{0,m}^{(\beta)} \mr f$ and $P(s)=\partial K_{0,m}^{(\beta)} f(s),$ we see that the derivative of $f$, if it exists, satisfies the linear integral equation
\begin{equation}
w(t) = g(t) + \cl{0}{t} S_{G_{0,p}^{(\beta)}}(t-s)P(s)w(s)\md s.
\label{mild2}
\end{equation}
Now, for $t>0, h>0,$
\begin{align*}
&\|S_{G_{0,p}^{(\beta)}}(t+h)(G_{0,p}^{(\beta)}\mr f +K_{0,m}^{(\beta)} \mr f )- S_{G_{0,p}^{(\beta)}}(t)(G_{0,p}^{(\beta)}\mr f+K_{0,m}^{(\beta)} \mr f )\|_{[0,m]}\\
& = \|S_{G_{0,p}^{(\beta)}}(t)(S_{G_{0,p}^{(\beta)}}(h)-I)(G_{0,p}^{(\beta)}\mr f+K_{0,m}^{(\beta)} \mr f )\|_{[0,m]}\\& \leq C(\tau)t^{-\kappa}\|(S_{G_{0,p}^{(\beta)}}(h)-I)(G_{0,p}^{(\beta)}\mr f+K_{0,m}^{(\beta)} \mr f )\|_{[0,p]}
\end{align*}
and, analogously, for left-hand limits. Hence $t\mapsto g(t)$ is in $C((0,\tau), X_{0,m})$ and is $O(t^{-\kappa})$ near $t=0$. Next, by Lemma \ref{lemdif}, $s\mapsto P(s)$ is a continuous function that takes values in $\mc L(X_{0,m}, X_{0,p})$. Hence, Lemma \ref{leminteq} yields the existence of a solution $w \in C((0,T], X_{0,m})$ to \eqref{mild2} for any $0<T<\tau$, with $\|w(t)\|_{[0,m]} = O(t^{-\kappa})$ as $t\to 0^+$.
Next, we prove that $f$ is differentiable in $X_{0,m}$ for $0<t<\tau$. From \eqref{eq3.2}, we obtain
\[
\frac{f(t+h) - f(t)}{h} -w(t) = J_1(h) + J_2(h) + J_3(h),
\]
where
\begin{eqnarray*}
J_1(h) &:=& \frac{1}{h}\left(S_{G_{0,p}^{(\beta)}}(h) - I\right) S_{G_{0,p}^{(\beta)}} (t)\mr f - S_{G_{0,p}^{(\beta)}}(t)G_{0,p}^{(\beta)}\mr f, \\
J_2(h) &:=& \frac{1}{h} \int_{0}^{h}\left( S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s) - S_{G_{0,p}^{(\beta)}}(t)K_{0,m}^{(\beta)} \mr f\right)\md s,\\
J_3(h) &:=& \frac{1}{h} \int_{0}^{t} S_{G_{0,p}^{(\beta)}} (t - s)(K_{0,m}^{(\beta)} f(s+h)-K_{0,m}^{(\beta)} f(s)) \md s\\
&& -\cl{0}{t} S_{G_{0,p}^{(\beta)}}(t-s)P(s)w(s)\md s.
\end{eqnarray*}
Clearly $\lim_{h\to 0^+} J_1(h) =0$ by \eqref{247}. For $J_2(h)$, we take $t > 0$ and $0 \leq s \leq h \leq t/2$. Then
\begin{align*}
&\|S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s) - S_{G_{0,p}^{(\beta)}} (t ) K_{0,m}^{(\beta)} \mr f\|_{[0,m]}\\
&\leq \|S_{G_{0,p}^{(\beta)}} (t - s)(S_{G_{0,p}^{(\beta)}} ( h) -I)K_{0,m}^{(\beta)} f(s) \|_{[0,m]}\\&\phantom{xx}+\|S_{G_{0,p}^{(\beta)}} (t - s) (K_{0,m}^{(\beta)} f(s) - K_{0,m}^{(\beta)} \mr f)\|_{[0,m]}\\&\phantom{xx}+\|(S_{G_{0,p}^{(\beta)}} (t - s) - S_{G_{0,p}^{(\beta)}} (t )) K_{0,m}^{(\beta)}\mr f\|_{[0,m]}
=:\mathcal{J}_1(s,h)+\mathcal{J}_2(s)+\mathcal{J}_3(s).
\end{align*}
Now
\begin{align*}
\mathcal{J}_1(s,h) &\leq Ce^{\theta (t-s)} (t-s)^{-\kappa} \|(S_{G_{0,p}^{(\beta)}} ( h) -I)K_{0,m}^{(\beta)} f(s)\|_{[0,p]}.
\end{align*}
Since $t\mapsto S_{G_{0,p}^{(\beta)}}(t)$ is strongly continuous in $X_{0,p}$, it is uniformly continuous on compact sets of $X_{0,p}$; that is, for any compact set $\Omega\subset X_{0,p}$ and each $\e>0$, there exists $h_0 > 0$ such that $\sup_{u\in \Omega} \|S_{G_{0,p}^{(\beta)}}(h)u-u\|_{[0,p]}\leq \e$ for all $0<h<h_0$. Moreover, as the function $s \mapsto K_{0,m}^{(\beta)} f(s)$ is $X_{0,p}$-continuous for any $X_{0,m}$-continuous function $f$, and the continuous image of the compact interval $\left[0,\frac{t}{2}\right]$ is compact, we see that for any $\e>0$ there is $h_0<\frac{t}{2}$ such that for all $0<h\leq h_0$
\begin{equation}
\mathcal{J}_1(s,h)\leq \e
\label{J1}
\end{equation}
uniformly in $s \in [0,h_0]$. Similarly, by \eqref{K1B},
\begin{align*}
\mathcal{J}_2(s)&\leq \|S_{G_{0,p}^{(\beta)}} (t - s) (K_{0,m}^{(\beta)} f(s) - K_{0,m}^{(\beta)} \mr f)\|_{[0,m]}\\
& \leq Ce^{\theta (t-s)} (t-s)^{-\kappa}\|K_{0,m}^{(\beta)} f(s) - K_{0,m}^{(\beta)} \mr f\|_{[0,p]}\\&\leq Ce^{\theta (t-s)} (t-s)^{-\kappa}L(\mc U)\|f(s) - \mr f\|_{[0,m]}
\end{align*}
and for any $\e$ there is $0<h_0<\frac{t}{2}$ such that for any $0\leq s \leq h\leq h_0$ we have
\begin{equation}
\mathcal{J}_2(s)\leq \e.
\label{J2}
\end{equation}
Finally, as with $\mathcal{J}_1$,
\begin{align*}
\mathcal{J}_3(s)&\leq \|S_{G_{0,p}^{(\beta)}} (t - s)(S_{G_{0,p}^{(\beta)}} (s )-I) K_{0,m}^{(\beta)} \mr f\|_{[0,m]} \\
&\leq Ce^{\theta (t-s)} (t-s)^{-\kappa} \|(S_{G_{0,p}^{(\beta)}} ( s) -I)K_{0,m}^{(\beta)} \mr f\|_{[0,p]},
\end{align*}
hence $\mathcal{J}_3$ is a continuous function at $0$ and therefore,
\begin{equation}
\lim\limits_{h\to 0^+} \frac{1}{h}\cl{0}{h}\mathcal{J}_3(s)\md s = 0.
\label{J3}
\end{equation}
Summarizing,
$$
\lim\limits_{h\to 0^+} J_2(h) =\lim\limits_{h\to 0^+}\left( \frac{1}{h} \int_{0}^{h} S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s) \md s - S_{G_{0,p}^{(\beta)}} (t ) K_{0,m}^{(\beta)} \mr f\right) =0.
$$
Finally, by Lemma \ref{lemdif}, and with $\omega$ defined as in Lemma \ref{rem},
$$
K_{0,m}^{(\beta)} f(s+h)-K_{0,m}^{(\beta)} f(s) - P(s) (f(s+h) - f(s)) = \omega(f(s+h) - f(s),f(s)).
$$
Now
$$
\frac{\|\omega(f(s+h)\! -\! f(s),f(s))\|_{[0,p]}}{h} = \frac{\|\omega(f(s+h)\! -\! f(s),f(s))\|_{[0,p]}}{\|f(s+h)\! - \! f(s)\|_{[0,m]}}\frac{\|f(s+h) \!- \!f(s)\|_{[0,m]}}{h}.
$$
By Lemma \ref{rem}, the function $$(h,s)\mapsto \frac{\|\omega(f(s+h) - f(s), f(s))\|_{[0,p]}}{\|f(s+h) - f(s)\|_{[0,m]}}$$ is continuous on $[0,h_0]\times [0,s']$ for any $s'<\tau$ and $0<h_0<\tau-s'$, and hence it is uniformly continuous there. Thus, for any $\e>0$ there is $h_0$ such that for all $0<h\leq h_0$ and $s \in [0,s']$
$$
\frac{\|\omega(f(s+h) - f(s), f(s))\|_{[0,p]}}{\|f(s+h) - f(s)\|_{[0,m]}}\leq \e.
$$
Hence, by \eqref{LC1} and \eqref{WLconv}
\begin{align*}
&\left\|\frac{1}{h} \int_{0}^{t}\!\! S_{G_{0,p}^{(\beta)}} (t \!-\! s)(K_{0,m}^{(\beta)} f(s+h)\!-\!K_{0,m}^{(\beta)} f(s)) \md s \!-\!\!\cl{0}{t}\!\! S_{G_{0,p}^{(\beta)}}(t-s)P(s)w(s)\md s\right\|_{[0,m]}\\
&\leq \cl{0}{t} \left\|S_{G_{0,p}^{(\beta)}} (t - s)\frac{\omega(f(s+h) - f(s),f(s))}{h}\right\|_{[0,m]} \md s \\&\phantom{x}+ \cl{0}{t}\left\|S_{G_{0,p}^{(\beta)}} (t - s)P(s)\left( \frac{f(s+h) - f(s)}{h} - w(s)\right )\right\|_{[0,m]}\md s\\
&\leq C_1\cl{0}{t} (t - s)^{-\kappa}\left\|\frac{\omega(f(s+h) - f(s),f(s))}{h}\right\|_{[0,p]} \md s\\&\phantom{x} + C_2\cl{0}{t} (t - s)^{-\kappa}\left\| \frac{f(s+h) - f(s)}{h} - w(s)\right\|_{[0,m]}\md s \\
&\leq C_1C_3\e\cl{0}{t} (t - s)^{-\kappa}s^{-\kappa} \md s + C_2\cl{0}{t} (t - s)^{-\kappa}\left\| \frac{f(s+h) - f(s)}{h} - w(s)\right\|_{[0,m]}\!\!\!\md s \\
&= C_1C_3B(1-\kappa,1-\kappa) \e t^{1-2\kappa} + C_2\!\!\cl{0}{t} (t - s)^{-\kappa}\left\| \frac{f(s+h) - f(s)}{h} - w(s)\right\|_{[0,m]}\!\!\!\md s.
\end{align*}
Since for small $t$ we have $t^{1-2\kappa}\leq t^{-\kappa}$, it follows that, on any time interval $(0,s')$ with $s' < \tau$, and for any $\e>0$, there is $h_0$ such that for any $0<h<h_0$
\begin{align*}
&\left\| \frac{f(t+h) - f(t)}{h} - w(t)\right\|_{[0,m]}\\
&\phantom{xx}\leq \e t^{-\kappa}C_5 + C_2 \cl{0}{t} (t - s)^{-\kappa}\left\| \frac{f(s+h) - f(s)}{h} - w(s)\right\|_{[0,m]}\md s
\end{align*}
and thus, by \cite[Lemma 3.2]{Banasiak2019},
$$
\left\| \frac{f(t+h) - f(t)}{h} - w(t)\right\|_{[0,m]}\leq \e t^{-\kappa} C_6.
$$
Hence the right-hand derivative of $f$ exists on $(0,\tau)$ and satisfies \eqref{mild1}. As in the proof of Theorem \ref{lm3.1}, the right-hand side of \eqref{mild1} is continuous on $(0,\tau)$ and thus the left-hand derivative also exists.
Hence $f \in C^1((0,\tau), X_{0,m})$.
To show that $f(t) \in D(G_{0,p}^{(\beta)})$ for $t>0$, we evaluate
\begin{align*}
& \frac{1}{h}(S_{G_{0,p}^{(\beta)}} (h) - I)f(t) \\
&= \frac{1}{h}S_{G_{0,p}^{(\beta)}} (t)(S_{G_{0,p}^{(\beta)}} (h) - I) \mr f + \frac{1}{h} \int_{0}^{h} S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s)\md s \\
&\phantom{x}- \frac{1}{h} \int_{t}^{t+h} S_{G_{0,p}^{(\beta)}} (t + h - s) K_{0,m}^{(\beta)} f(s)\md s\\&\phantom{x}
+ \frac{1}{h} \int_{0}^{t} S_{G_{0,p}^{(\beta)}} (t - s)(K_{0,m}^{(\beta)} f(s+h)-K_{0,m}^{(\beta)} f(s)) \md s \\&
=: L_1(h) + L_2(h) + L_3(h) +L_4(h).
\end{align*}
Using again \eqref{247}, $L_1(h) \to S_{G_{0,p}^{(\beta)}}(t) G_{0,p}^{(\beta)}\mr f$ in $X_{0,m}$ for $t>0$. Also, as above,
$$
\lim\limits_{h\to 0^+} L_2(h) = S_{G_{0,p}^{(\beta)}} (t ) K_{0,m}^{(\beta)} \mr f
$$
and
$$
\lim\limits_{h\to 0^+} L_4(h) = \cl{0}{t} S_{G_{0,p}^{(\beta)}}(t-s)\partial K_{0,m}^{(\beta)}f(s)\p_sf(s)\md s.
$$
Then, in the same way as for $L_2$, we have
$$
\lim\limits_{h\to 0^+} L_3(h) = -K_{0,m}^{(\beta)} f(t)
$$
in $X_{0,p}$. Hence $f(t) \in D(G_{0,p}^{(\beta)})$ for $t>0$ and
\begin{align}
G_{0,p}^{(\beta)} f(t) &= - K_{0,m}^{(\beta)} f(t) \nn\\
&\phantom{x}+S_{G_{0,p}^{(\beta)}}(t) G_{0,p}^{(\beta)}\mr f + S_{G_{0,p}^{(\beta)}} (t ) K_{0,m}^{(\beta)} \mr f +\!\cl{0}{t}\! S_{G_{0,p}^{(\beta)}}(t\!-\!s)\partial K_{0,m}^{(\beta)}f(s)\p_sf(s)\md s\nn\\
&= - K_{0,m}^{(\beta)} f(t) + \p_t f(t).\label{classsol}
\end{align}
\end{proof}
\subsection{Global solvability}
To establish the existence of global (in time) solutions to the growth C-F equation we must impose the more restrictive condition
\begin{equation}
k(x,y) \leq k_0 (1+ x^\alpha+y^\alpha)
\label{kass1}
\end{equation}
on the coagulation kernel. As in \eqref{kass}, $k_0$ is a positive constant and $0 < \alpha<\gamma_0$, where $\gamma_0$ is given in \eqref{assa1}. Also, since the inclusion of the term $a_1(x) = \beta(1+ x^\alpha)$ was required only to prove the nonnegativity of mild solutions in Theorem \ref{lm3.1}, we now set $\beta = 0$, in which case, from \eqref{genrep} and Theorem \ref{th3.4}, there exists a unique solution $f$ to
\begin{equation}\label{feq0}
\frac{d}{dt}f(t) = T_{0,p}^0f(t) + A_{0,p}f(t) + B_{0,p}f(t) + K_{0,m}f(t),\ \quad f(0) = \mr f \in X_{0,m}\cap D(G_{0,p}),
\end{equation}
in $C([0, \tau_{\max}), X_{0,m}) \cap C^{1}((0, \tau_{\max}), X_{0,m})\cap C((0, \tau_{\max}), D(G_{0,p}))$,
where $K_{0,m} := K_{0,m}^{(0)}$ and $G_{0,p} := G_{0,p}^{(0)}$. We emphasize that, once $\alpha$ is given, we can use an arbitrary $p>\max\{1,l\}$ and then take $m = p+\alpha$.
\begin{theorem}\label{th3.3}
Let all the assumptions of Theorem \ref{lm3.1} hold, but with \eqref{kass} replaced by \eqref{kass1}. If either \begin{description}
\item {(i)} there are constants $m_0$ and $m_1$ such that $(n_0(x)-1)a(x) \leq m_0+m_1x$, for all $x \geq 0$,
where $n_0$ is defined by \eqref{nmy}, or
\item {(ii)} $r(x)\leq \ti r x$, for all $x > 0$ (i.e. $r_0 = 0$ in \eqref{fmlras}),
\end{description}
then the solutions of Theorem \ref{lm3.1} are global in time.
\end{theorem}
\begin{proof} The proof follows similar lines to that of \cite[Theorem 5.1]{Ban2020}, but some of the technicalities are slightly different. The hypotheses of the theorem guarantee the existence of a mild solution to
\begin{equation}
\frac{d}{dt}f(t) = T^0_{0,m} f(t)+ A_{0,m} f(t) +B_{0,m} f(t) + K_{0,m} f(t),\ \quad t\in (0,\tau_{\max}).
\label{feq}
\end{equation}
Using the classical identities and estimates, \cite[Eqn. (8.1.22) \& Lemma 7.4.2]{BLL}, and \eqref{kass1},
\begin{align}
\cl{0}{\infty}x^i\mc Kf(x)\md x &=
\frac{1}{2}\cl{0}{\infty}\cl{0}{\infty}((x+y)^i-x^i-y^i)k(x,y)f(x)f(y)\md x\md y, \nn\\
&\leq \frac{C_ik_0}{2}\cl{0}{\infty}\cl{0}{\infty}(yx^{i-1}+xy^{i-1})(1+ x^\alpha+y^\alpha)f(x)f(y)\md x\md y,\nn\\
&\leq K_i (\|f\|_{[1]}\|f\|_{[i-1]} + \|f\|_{[1]} \|f\|_{[\alpha+i-1]} + \|f\|_{[\alpha+1]}\|f\|_{[i-1]}),
\label{coag1}
\end{align}
for $i\geq 1$, where $K_i=C_ik_0$ and the norms are defined by \eqref{norms}. First we consider $\mr f$ to be a $C^\infty(\mbb R_+)$ function with bounded support. Then $\mr f \in D(G_{0,i})$ for any $i$ and, if additionally $i>\max\{1,l\}$, then, by Theorem \ref{th3.4}, the corresponding solution $(0,\tau_{\max})\ni t\mapsto f(t)= f(t,\mr f)$ is differentiable in any space $X_{i}$. Hence, using \eqref{subfuncta'} (with $a_1(x) \equiv 0$), and recalling that $M_m(t)$ is given by \eqref{Moments}, we obtain
\begin{align}
\frac{d}{dt} M_0(t) &= -\cl{0}{\infty}N_0(x)a(x)f(x,t)\md x-\frac{1}{2}\cl{0}{\infty}\cl{0}{\infty}k(x,y)f(x,t)f(y,t)\md x\md y \label{M0}\\
\frac{d}{dt} M_1(t) &= \cl{0}{\infty}r(x) f(x,t) \md x\label{M1}\\
\frac{d}{dt} M_i(t) &= \cl{0}{\infty}\left(ir(x)x^{i-1} -N_i(x)a(x)\right)f(x,t) \mdm{d}x \nn\\&\phantom{xx}+ \frac{1}{2}\cl{0}{\infty}\int_{0}^{\infty}((x+y)^i-x^i-y^i)k(x,y)f(x,t)f(y,t)\md x\md y, \quad i>1.\label{feco1}
\end{align}
As pointed out earlier, $N_0(y) = 1-n_0(y) <0$ due to \eqref{Nm}.
Let us consider first the term in \eqref{feco1} containing $N_i$, and recall that $a_0,\gamma_0$ and $x_0$ are the constants given in \eqref{assa1}. Similarly to \eqref{bmom1} (see also \cite[Theorem 2.2]{Ban2020}), if $N_{m_0}(x)/x^{m_0} \geq \delta'_{m_0}$ holds for some $m_0 > 1$, some $\delta'_{m_0}>0$ and all $x\geq x_0$, then for any $i >1$ there is $\delta'_i>0$ such that $N_i(x)/x^i \geq \delta'_i$ for all $x\geq x_0$. Hence,
\begin{align}
-\cl{0}{\infty}N_i(x)a(x)f(x)\md x &= -\cl{0}{x_0}a(x) N_i(x)f(x)\md x - \cl{x_0}{\infty} a(x)f(x) x^i \frac{N_i(x)}{x^i}\md x\nn\\
&\leq - \delta'_i\cl{0}{\infty} a(x)f(x) x^i \md x + \delta'_i\cl{0}{x_0}a(x) x^if(x)\md x\nn\\
&\leq - \delta_i \|f\|_{[i+\gamma_0]} + \nu_i \| f\|_{[i]},\label{fragest}
\end{align}
where $\delta_i= \delta'_i a_0$ and $\nu_i = \delta_i x_0^{\gamma_0} + \delta'_i\,\mathrm{ess}\sup_{0\leq x\leq x_0} a(x)$.
First, let us consider an integer $i \geq 2$. Then, from \eqref{fragest}, together with \eqref{M0} and \eqref{M1}, \begin{align}
\frac{d}{dt} M_0(t) &\leq \cl{0}{\infty}(n_0(x)-1)a(x)f(x,t)\md x \nn\\
\frac{d}{dt} M_1(t) &\leq \ti r M_0(t) + \ti r M_1(t)\nn\\
\frac{d}{dt} M_{i}(t) &\leq i\ti rM_{i-1}(t) + (\nu_i+i\ti r) M_{i}(t) - \delta_i M_{i+\gamma_0}(t) \nn \\
& \quad + K_i(M_{1}(t)M_{i-1}(t) + M_{1}(t) M_{\alpha+i-1}(t) + M_{\alpha+1}(t)M_{i-1}(t)).\label{firstmom}
\end{align}
To simplify \eqref{firstmom}, we use the following auxiliary inequalities. For $i \geq 2$ and $1\leq r \leq i-1,$ we apply H\"{o}lder's inequality with $p=\gamma_0/\alpha$ and $q =\gamma_0/(\gamma_0-\alpha)$ to obtain
\begin{align}
\|f\|_{[r+\alpha]} &= \int_0^\infty x^r x^\alpha f(x) \md x = \int_0^1 x^r x^\alpha f(x) \md x + \int_1^\infty x^r x^\alpha f(x) \md x\nn\\
&\leq c_\alpha \int_0^1 x f(x) \md x + \int_1^\infty x^{(i-1)/q}f^{1/q}(x)x^{r-\frac{i-1}{q}} x^{\frac{\gamma_0}{p}} f^{1/p}(x) \md x\nn\\
&\leq c_\alpha\| f\|_{[1]} + \left(\int_0^\infty x^{i-1}f(x)\md x\right)^{\frac{1}{q}}\left(\int_1^\infty x^{pr-\frac{p(i-1)}{q}} x^{\gamma_0} f(x) \md x\right)^{\frac{1}{p}}\nn\\
&\leq c_\alpha\| f\|_{[1]} + \|f\|_{[i-1]}^{\frac{\gamma_0-\alpha}{\gamma_0}}\|f\|_{[i+\gamma_0]}^{\frac{\alpha}{\gamma_0}}, \label{wl866}
\end{align}
where we used the fact that for $1\leq r\leq i-1$
$$
pr-\frac{p(i-1)}{q} = \frac{\gamma_0}{\alpha}r - \left(\frac{\gamma_0}{\alpha}-1\right)(i-1) \leq i-1 < i$$ and hence
$$
x^{pr-\frac{p(i-1)}{q}} \leq x^i, \qquad x \in [1,\infty).
$$
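Here we also used that, since $\gamma_0>\alpha$ and $r\leq i-1$,
$$
\frac{\gamma_0}{\alpha}r - \left(\frac{\gamma_0}{\alpha}-1\right)(i-1) \leq \frac{\gamma_0}{\alpha}(i-1) - \left(\frac{\gamma_0}{\alpha}-1\right)(i-1) = i-1.
$$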
Then Young's inequality gives, for any $\e > 0$,
\begin{align}
\|f\|_{[i+\alpha -1]}\|f\|_{[1]} &\leq c_\alpha\|f\|^2_{[1]} + \|f\|_{[1]}\|f\|_{[i-1]}^{\frac{\gamma_0-\alpha}{\gamma_0}}\|f\|_{[i+\gamma_0]}^{\frac{\alpha}{\gamma_0}}\nn\\
&\leq c_\alpha\|f\|^2_{[1]} + \|f\|_{[1]}\left(\frac{\gamma_0-\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha-\gamma_0}}\|f\|_{[i-1]} + \frac{\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha}}\|f\|_{[i+\gamma_0]}\right)
\end{align}
and
\begin{align}
\|f\|_{[i-1]}\|f\|_{[1+\alpha]} &\leq c_\alpha\|f\|_{[1]} \|f\|_{[i-1]}+ \|f\|_{[i-1]}^{\frac{2\gamma_0-\alpha}{\gamma_0}}\|f\|_{[i+\gamma_0]}^{\frac{\alpha}{\gamma_0}}\nn\\
&\leq c_\alpha\|f\|_{[1]} \|f\|_{[i-1]}\! +\! \left(\!\frac{\gamma_0-\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha-\gamma_0}}\|f\|_{[i-1]}^{\frac{2\gamma_0-\alpha}{\gamma_0-\alpha}} + \frac{\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha}}\|f\|_{[i+\gamma_0]}\!\!\right).
\end{align}
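For the reader's convenience, we note that in both estimates Young's inequality has been used in the form
$$
ab \leq \frac{a^{p'}}{p'} + \frac{b^{q'}}{q'}, \qquad \frac{1}{p'}+\frac{1}{q'}=1,
$$
with $p' = \frac{\gamma_0}{\gamma_0-\alpha}$, $q' = \frac{\gamma_0}{\alpha}$, $b=\e\|f\|_{[i+\gamma_0]}^{\frac{\alpha}{\gamma_0}}$ and $a=\e^{-1}\|f\|_{[i-1]}^{\frac{\gamma_0-\alpha}{\gamma_0}}$ (respectively, $a=\e^{-1}\|f\|_{[i-1]}^{\frac{2\gamma_0-\alpha}{\gamma_0}}$); this is the origin of the factors $\e^{\frac{\gamma_0}{\alpha-\gamma_0}}$ and $\e^{\frac{\gamma_0}{\alpha}}$.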
We now apply these inequalities to the solution $t\mapsto f(t)$, transforming the last inequality in \eqref{firstmom} into
\begin{align}
&\frac{d}{dt} M_{i}(t) \leq \ti rM_{i-1}(t) + (\nu_i+\ti r) M_{i}(t) - \delta_i M_{i+\gamma_0}(t) \nn \\
& \quad + K_i\left(\phantom{\frac{a}{b}}\!\!\!\!\!M_{1}(t)M_{i-1}(t) + c_\alpha M^2_{1}(t) \right.\nn\\
&\quad \left.+ M_{1}(t)\left(\frac{\gamma_0-\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha-\gamma_0}}M_{i-1}(t) + \frac{\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha}}M_{i+\gamma_0}(t)\right)\right. \nn\\
&\quad + \left. c_\alpha M_1(t)M_{i-1}(t) + \left(\frac{\gamma_0-\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha-\gamma_0}}M_{i-1}^{\frac{2\gamma_0-\alpha}{\gamma_0-\alpha}}(t) + \frac{\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha}}M_{i+\gamma_0}(t)\right)\right).\label{mom2}
\end{align}
There remains the problem that the estimates derived above require some control of $M_1(t)$. This presents no difficulties for the standard, mass-conserving C-F models, as then $M_{1}(t) = \|\mr f\|_{[1]}$ for all $t \in [0, \tau_{\max})$. Here, however, the second inequality of \eqref{firstmom} shows that $M_1(t)$ is coupled with $M_0(t)$, and the latter in general depends on higher order moments. There are two easy ways to remedy this situation, related to assumptions (i) and (ii), respectively. If (i) is satisfied, then
\begin{align*}
\frac{d}{dt} M_0(t) &\leq \cl{0}{\infty}(n_0(x)-1)a(x)f(x,t)\md x \leq m_0 M_0(t) + m_1 M_1(t),\nn\\
\frac{d}{dt} M_1(t) &\leq r_0 M_0(t) + r_1M_1(t),
\end{align*}
which yields $M_0(t) \leq \mr M_0 e^{\mu t}$ and $M_1(t) \leq \mr M_1 e^{\mu t}$ for some constant $\mu$ (for instance, by applying Gronwall's inequality to $M_0+M_1$), and thus neither moment blows up in finite time. If (ii) is satisfied, then obviously $M_1(t) \leq \mr M_1 e^{\ti r t}$ and the inequalities for the moments of order greater than one become decoupled from the zeroth order moment. In both cases $M_1(t)\leq M_{1,\tau_{\max}}$ on $[0,\tau_{\max})$
and, by choosing $\e$ so that $\frac{\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha}} K_i(M_{1,\tau_{\max}} +1)\leq \delta_i$, we see that there are positive constants $D_{0,i}, D_{1,i}, D_{2,i}, D_{3,i}$ such that \eqref{mom2} can be written as
\begin{equation}
\frac{d}{dt} M_{i}(t) \leq D_{0,i}+ D_{1,i} M_{i}(t) + D_{2,i} M_{i-1}(t) + D_{3,i} M_{i-1}^{\frac{2\gamma_0-\alpha}{\gamma_0-\alpha}}(t), \label{firstmom1}
\end{equation}
for $t\in [0,\tau_{\max})$. In particular, for $i = 2$ we obtain
\begin{equation}
\frac{d}{dt} M_{2}(t) \leq D_{0,2}+ D_{1,2} M_{2}(t) + D_{2,2} M_{1}(t) + D_{3,2} M_{1}^{\frac{2\gamma_0-\alpha}{\gamma_0-\alpha}}(t), \label{firstmom2}
\end{equation}
for $ t\in [0,\tau_{\max})$, and thus $t \mapsto M_{2}(t)$ is bounded on $[0, \tau_{\max})$. Then we can use \eqref{firstmom1} to proceed inductively to establish the boundedness of $t \mapsto M_{i}(t)$ for any integer $i$ (for the chosen initial condition). Further, since for any $i>1$ we have $x^i \leq x$ for $x \in [0,1]$ and $x^i \leq x^{\lfloor i\rfloor +1}$ for $x \geq 1$, we obtain
$$
\|f\|_{[i]} \leq \|f\|_{[1]} + \|f\|_{[\lfloor i\rfloor +1]},
$$
and we find that all moments of the solution of order $i\geq 1$ are bounded on the maximal interval of its existence.
It remains to prove that $t\mapsto M_0(t)$ is bounded on $[0,\tau_{\max})$ (in case (ii)). Let us fix an integer $i>\max\{1,l\}.$ Using the fact that
$$
\cl{0}{\infty} \mc K f(x,t)\md x \leq 0$$
and, from \eqref{PhPr005},
\begin{align}
\cl{0}{\infty} \mc F f(x,t)\md x &\leq \cl{0}{\infty} (n_0(y)-1)a(y) f(y,t)\md y \leq 2b_0\cl{0}{\infty} a(y) f(y,t) w_i(y)\md y \nn\\
&\leq \ti a\cl{0}{x_0} f(y,t) \md y + 2b_0 R(t),
\label{eqP1}
\end{align}
on $[0,\tau_{\max})$, where $\ti a = 2b_0\,\text{ess}\sup_{y\in [0,x_0]}a(y)w_i(y)$, we obtain, for the zeroth moment,
$$
\frac{d}{dt}M_0(t) \leq \ti a M_0(t) + 2b_0 R(t),
$$
where we denoted
$$
R(t)= \cl{x_0}{\infty} a(x) f(x,t)w_i(x)\md x.
$$
Hence
\begin{equation}
M_{0}(t) \leq e^{\ti a t}\left(\|\mr f\|_{[0]} + 2b_0\cl{0}{t} R(s)\md s\right).
\label{Pt0}
\end{equation}
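Indeed, \eqref{Pt0} follows from the preceding differential inequality by the standard integrating factor argument: since $\ti a \geq 0$,
$$
\frac{d}{dt}\left(e^{-\ti a t}M_0(t)\right) \leq 2b_0 e^{-\ti a t}R(t) \leq 2b_0 R(t),
$$
and it remains to integrate over $[0,t]$ and use $M_0(0)=\|\mr f\|_{[0]}$.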
We have the estimate
\begin{align}
\cl{0}{t}R(s) \md s&=\cl{0}{t}\cl{x_0}{\infty} a(x) f(x,s)w_i(x)\md x\md s \leq (1+x_0^{-i})\cl{0}{t}\cl{x_0}{\infty} a(x) f(x,s)x^i\md x\md s.
\label{Pt1}
\end{align}
Now, as in \eqref{fragest},
\begin{align}
\cl{0}{\infty} \mc F f(x)x^i\md x & = -\cl{0}{\infty}N_i(x)a(x)f(x)\md x \leq - \cl{x_0}{\infty} a(x)f(x) x^i \frac{N_i(x)}{x^i}\md x\nn\\
&\leq - \frac{\delta'_i}{2}\cl{x_0}{\infty} a(x)f(x) x^i \md x - \frac{\delta'_i}{2}\cl{0}{\infty} a(x)f(x) x^i \md x + \frac{\delta'_i}{2}\cl{0}{x_0}a(x) x^if(x)\md x\nn\\
&\leq - \frac{\delta'_i}{2}\cl{x_0}{\infty} a(x)f(x) x^i \md x - \frac{\delta_i}{2} \|f\|_{[i+\gamma_0]} + \nu_i \| f\|_{[i]},
\label{Fest1}
\end{align}
where $\delta_i$ and $\nu_i$ were defined previously.
Now, knowing that all lower order moments are finite on $[0,\tau_{\max})$ and selecting $\e$ so that $\frac{\alpha}{\gamma_0}\e^{\frac{\gamma_0}{\alpha}} K_i(M_{1,\tau_{\max}} +1)\leq \frac{\delta_i}{2},$ we can refine \eqref{firstmom1} to
\begin{equation}
\frac{d}{dt} M_{i}(t) \leq - \frac{\delta'_i}{2}\cl{x_0}{\infty} a(x)f(x,t) x^i \md x + D_{0,i}+ D_{1,i} M_{i}(t) + \Theta(t), \label{firstmom1'}
\end{equation}
where $\Theta$ is bounded on $[0,\tau_{\max})$. This can be rewritten as
\begin{align*}
&\frac{d}{dt} \left(M_{i}(t)+ \frac{\delta'_i}{2}\cl{0}{t}\cl{x_0}{\infty} a(x)f(x,s) x^i \md x \md s\right) \leq D_{0,i}+ D_{1,i} M_{i}(t) + \Theta(t) \\
&\phantom{xxx}\leq D_{0,i}+ D_{1,i} \left(M_{i}(t)+ \frac{\delta'_i}{2}\cl{0}{t}\cl{x_0}{\infty} a(x)f(x,s) x^i \md x \md s\right) + \Theta(t).
\end{align*}
Denoting
$$
\Phi(t) = M_{i}(t)+ \frac{\delta'_i}{2}\cl{0}{t}\cl{x_0}{\infty} a(x)f(x,s) x^i \md x \md s
$$
and integrating,
\begin{align*}
\Phi(t) &\leq e^{D_{1,i} t}\left(\Phi(0) + \frac{D_{0,i}}{D_{1,i}}(1-e^{-D_{1,i}t}) + \cl{0}{t}\Theta(s)e^{-D_{1,i} s}\md s\right)
\end{align*}
and we see that neither $\Phi$ nor
$$
t\mapsto \cl{0}{t}\cl{x_0}{\infty} a(x) f(x,s)x^i\md x\md s
$$
can blow up at $t=\tau_{\max}$. Hence, by \eqref{Pt1} and \eqref{Pt0}, neither can $t\mapsto M_0(t)$.
This shows that solutions emanating from compactly supported differentiable initial conditions are global in time. Consider now $\mr f\in X_{0,m,+}$ and a sequence of such regular initial conditions $(\mr f_k)_{k\geq 1}$ approximating $\mr f$, and assume that $t\mapsto f(t, \mr f)$ blows up in finite time at $\tau_{\max}$. By the moment estimates above, the bounds of $\|f(t, \mr f_k)\|_{[0,m]}$ over any finite time interval depend continuously on $\mr f_k$ and thus are uniform in $k$ on $[0,\tau_{\max}]$. On the other hand,
there is a sequence $(t_n)_{n\geq 1}$ such that $t_n\to \tau_{\max}$ as $n\to \infty$ and $\|f(t_n, \mr f)\|_{[0,m]}$ is unbounded; that is, the distance between $f(t_n,\mr f)$ and all $f(t_n,\mr f_k)$ becomes arbitrarily large. This contradicts the continuous dependence of solutions on the initial conditions which follows, on each $[0,t_n]$, from the Gronwall--Henry inequality \cite[Lemma 3.2]{Banasiak2019}; see also \cite[Theorem 8.1.1]{BLL}.
\end{proof}
\begin{remark} The additional restrictions in Theorem \ref{th3.3} are due to the fact that, in the general case, we cannot control the production of particles; that is, the zeroth moment. In principle, there is a positive feedback loop in which $M_0$ contributes to $M_1$, which in turn amplifies, in a nonlinear way, the higher order moments that determine the rate of growth of $M_0$. The adopted assumptions, which postulate that either $M_0$ is controlled by $M_1$, or that the evolution of mass is not influenced by other mechanisms ($r_0\neq 0$ implies that there is a production of mass independent of the existing one), although technical, seem to be the simplest ones that break this cycle. We do not claim that these assumptions are optimal, but at present we do not have any examples of a finite time blow-up of solutions in this setting. It is, however, worthwhile to note that there are known cases of finite time blow-up of solutions to growth--fragmentation--coagulation equations even with bounded coagulation kernels but with the renewal boundary condition, \cite{Bana12c}.
\end{remark}
\section*{Introduction}
It is common when working in abstract homotopy theory to deal with several equivalent Quillen model categories, each with their own strengths and weaknesses.
This even extends to Quillen model structures on a single category --- for instance, in categories of diagrams of a fixed shape in a model category $\mathcal{M}$, there are often both projective and injective model structures, each of which have their place.
It is in this spirit that we present the discovery of several new model structures for $\infty$-categories and $\infty$-groupoids.
In particular, we give model structures supported on categories of cubical sets, prederivators, bisimplicial sets, and marked simplicial sets.
For the purposes of the introduction, we focus on the first of these.
Cubical sets are presheaves on a category of cubes.
But there are many possible categories of cubes, and there is a tension between the simplicity of the cube category and the expressivity of the corresponding category of cubical sets.
We take this opportunity to point the reader to the introduction of \cite{CavalloMortbergSwan:UCMUTT} for an overview of what is known about various choices in the context of Homotopy Type Theory and Univalent Foundations.
In our case, we consider the category of cubes as a full subcategory of the category of posets, which is the same rich context that Kapulkin and Voevodsky operate under in \cite{KV}.
This is a homotopically challenging indexing category to work with, as it does not admit a natural generalized Reedy structure.
Our first main result is \cref{cor cubical QE}, which establishes the existence of model structures on cubical sets induced along the triangulation functor from the category of cubical sets to the category of simplicial sets.
These model structures are (left- or right-) induced either from the Joyal or the Kan--Quillen model structure on the category of simplicial sets.
In all cases, the triangulation functor becomes both a left and a right Quillen equivalence.
On the other hand, in \cite{DohertyKapulkinLindseySattler:CMI1C}, a model structure for $(\infty,1)$-categories was given on a different category of cubical sets; in \cref{theorem comparison with dkls} we show that the natural comparison between the two categories of cubical sets is a Quillen equivalence between this one and the model structure left-induced from the Joyal model structure.
The main technical tools we use are lifting theorems for model structures in the presence of adjoint strings.
Let $\mathcal{M}$ be a model category, $\mathcal{N}$ be a (bicomplete) category, and suppose we have a string of adjoint functors
\begin{equation*}\label{adjoint string}
\begin{tikzcd}[column sep=large]
\mathcal{N} \rar["F" description]
\rar[phantom, bend right=18, "\scriptscriptstyle\perp"]
\rar[phantom, bend left=18, "\scriptscriptstyle\perp"]
& \mathcal{M}
\lar[bend right=30, "L" swap]
\lar[bend left=30, "R"]
\end{tikzcd}
\end{equation*}
(which below we will write more compactly as $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R$).
In \cite{DrummondColeHackney}, Drummond-Cole and the first author showed that if $\mathcal{M}$ is cofibrantly generated and if the adjunction $FL\dashv FR$ on $\mathcal{M}$ is a Quillen adjunction, then there exists a \emph{right-induced} model structure on $\mathcal{N}$, where weak equivalences and fibrations in $\mathcal{N}$ are created by $F$.
In that paper, the question was posed about whether one could guarantee a \emph{left-induced} model structure on $\mathcal{N}$, that is, one where the weak equivalences and \emph{co}fibrations are created by $F$.
We give a partial answer.
\begin{utheorem}[Theorem~\ref{existenceleft} and Proposition~\ref{Quillenidempotent2}]\label{thm intro left induced}
Suppose that we have an adjoint string $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R$ with $\mathcal{N}$ a locally presentable category and $\mathcal{M}$ an accessible model category.
If $FL\dashv FR$ is a Quillen adjunction and the adjoint string between homotopy categories
$\ho\mathcal{N} \lrl \ho\mathcal{M}$
is an \emph{idempotent} adjoint string, then $\mathcal{N}$ admits a model structure left-induced along $F$.
This occurs in the special case when $L$ or $R$ is fully faithful.
\end{utheorem}
In the theorem statement, the homotopy category $\ho\mathcal{N}$ is obtained by inverting all morphisms which are sent by $F$ to weak equivalences in $\mathcal{M}$.
An \emph{idempotent} adjoint string is an adjoint string where the two constituent adjunctions are idempotent adjunctions.
We should make clear that, just as the aforementioned Drummond-Cole--Hackney result rests on Kan's theorem for right-induced model structures \cite[Theorem 11.3.2]{Hirschhorn:MCL}, so too does our present theorem rest on the Acyclicity Theorem of \cite{GarnerKedziorekRiehl,HKRS}.
\subsection*{Acknowledgements}
We would like to thank Gabriel C.~Drummond-Cole, Richard Garner, Chris Kapulkin, Viktoriya Ozornova, Emily Riehl and Christian Sattler for useful input, suggestions, and encouragement.
\section{The abstract framework and results}
\subsection{Strings of adjoint functors}
We start by recalling the terminology related to strings of adjoint functors.
A pair of adjoint functors with $F$ left adjoint to $G$ will be denoted by either $F \lcolon \mathcal{M} \rightleftarrows \mathcal{N} \rcolon G$ or $F \dashv G$, depending on whether or not we wish to emphasize the (co)domains of the functors.
\begin{defn}
Let $\mathcal{M}$ and $\mathcal{N}$ be categories. A \emph{string of adjoint functors} (or \emph{adjoint string}) $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ consists of functors $F\colon\mathcal{N}\to\mathcal{M}$ and $L,R\colon\mathcal{M}\to\mathcal{N}$ that form adjunctions $L\dashv F$ and $F\dashv R$.
\end{defn}
\begin{rmk}
Given any string of adjoint functors $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $, the adjunctions
$F\lcolon\mathcal{N}\rightleftarrows\mathcal{M}\rcolon R$ and $L\lcolon\mathcal{M}\rightleftarrows\mathcal{N}\rcolon F$
can be composed to obtain an adjunction
$F L\lcolon\mathcal{M}\rightleftarrows\mathcal{M}\rcolon F R$.
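Explicitly, the composite adjunction is witnessed by the natural isomorphisms
\[
\mathcal{M}(FLA,B)\cong\mathcal{N}(LA,RB)\cong\mathcal{M}(A,FRB), \qquad A,B\in\mathcal{M},
\]
where the first isomorphism comes from $F\dashv R$ and the second from $L\dashv F$.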
\end{rmk}
The following gives a large source of examples of strings of adjoint functors.
\begin{rmk}\label{remark left right kan extension adjoint triple}
Let $\mathcal{S}$ be a bicomplete category.
Any functor $f\colon {\mathcal{I}}\to \mathcal{J}$ between small categories induces a string of adjoint functors $f^*\lcolon\mathcal{S}^{\mathcal{J}}\lrl\mathcal{S}^{{\mathcal{I}}}\rcolon f_!,f_*$ where $f^*\colon\mathcal{S}^{\mathcal{J}}\to\mathcal{S}^{\mathcal{I}}$ denotes the restriction functor and $f_!\colon\mathcal{S}^{{\mathcal{I}}}\to\mathcal{S}^{\mathcal{J}}$ (resp.\ $f_*\colon\mathcal{S}^{{\mathcal{I}}}\to\mathcal{S}^{\mathcal{J}}$) denotes the left (resp.\ right) Kan extension along $f$.
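As $\mathcal{S}$ is bicomplete and ${\mathcal{I}}$, $\mathcal{J}$ are small, these Kan extensions exist and are computed pointwise: for $j\in\mathcal{J}$,
\[
(f_!X)(j)\cong\operatorname{colim}\big((f\downarrow j)\to{\mathcal{I}}\xrightarrow{\,X\,}\mathcal{S}\big)
\qquad\text{and}\qquad
(f_*X)(j)\cong\lim\big((j\downarrow f)\to{\mathcal{I}}\xrightarrow{\,X\,}\mathcal{S}\big).
\]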
\end{rmk}
We consider the following two properties that adjoint strings of functors may have.
The first is studied more deeply in \cite[\textsection2]{Johnstone:RPLC}.
\begin{defn}
\label{characterizationidempotent}
Let $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R$ be a string of adjoint functors, let $\eta$ and $\epsilon$ be the unit and counit of the adjunction $L\dashv F$, and let $\eta'$ and $\epsilon'$ be the unit and counit of the adjunction $F\dashv R$.
We say that the adjoint string is \emph{idempotent} if all of the maps in the following diagram are isomorphisms.
\[ \begin{tikzcd}[column sep=small]
& FLF \ar[dr,Rightarrow,"F\epsilon"] \\
F \ar[rr,Rightarrow,"\id_F"] \ar[dr, Rightarrow,"F\eta'"'] \ar[ur,Rightarrow,"\eta F"] & & F \\
& FRF \ar[ur,Rightarrow,"\epsilon'F"']
\end{tikzcd} \]
Both triangles automatically commute, by the triangle identities for the adjunctions $L\dashv F$ and $F\dashv R$; moreover, this happens if and only if any one of the four outer maps is an isomorphism.
\end{defn}
If $F$ happens to be fully faithful, then $F\epsilon$ is an isomorphism by \cite[Theorem IV.3.1]{MacLane}, hence the string is idempotent
(if $F$ is conservative then the converse also holds).
A different special case of idempotency is that of a `fully faithful' string of adjoint functors, which we now recall.
\begin{defn}
\label{characterizationfullyfaithful}
A string of adjoint functors $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ is said to be \emph{fully faithful} if both $L$ and $R$ are fully faithful.
If one of $L$ or $R$ is fully faithful, so is the other (see \cite[Lemma~1.3]{DyckhoffTholen:EMPPPC}).
\end{defn}
\begin{rmk}
Any fully faithful string of adjoint functors $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ is an idempotent string of adjoint functors.
This is because $L$ being fully faithful is equivalent to the unit $\eta\colon \id_{\mathcal{M}}\Rightarrow FL$ of the adjunction $L\dashv F$ being an isomorphism (alternatively, $R$ being fully faithful is equivalent to the counit $\epsilon'$ of the adjunction $F \dashv R$ being an isomorphism) by \cite[Theorem IV.3.1]{MacLane}.
\end{rmk}
\begin{rmk}
\label{adjointtriplefromchangeofshape}
Let $\mathcal{S}$ be a bicomplete category. If $f\colon{\mathcal{I}}\to\mathcal{J}$ is a fully faithful functor between small categories, then the string of adjoint functors $f^*\lcolon\mathcal{S}^{\mathcal{J}}\lrl\mathcal{S}^{{\mathcal{I}}}\rcolon f_!,f_*$ from Remark~\ref{remark left right kan extension adjoint triple} is fully faithful.
\end{rmk}
\subsection{Model structures induced via adjoint strings}
We discuss situations in which one can transfer a model structure along the middle functor of a string of adjoint functors.
In this paper, model categories will admit all small limits and colimits (which we refer to as `bicomplete') and will be assumed to come equipped with functorial factorizations.
In particular, a model category $\mathcal{M}$ comes equipped with a natural transformation $(-)^c \Rightarrow \id_{\mathcal{M}}$ so that each component $X^c \to X$ is an acyclic fibration from a cofibrant object (and dually for $\id_{\mathcal{M}} \Rightarrow (-)^f$).
The term `left Quillen functor' will be synonymous with `left adjoint in a Quillen adjunction', and `left Quillen equivalence' will mean `left adjoint in a Quillen equivalence' (and similarly for right adjoints).
We will often be interested in the accessible model categories of \cite[Definition 3.1.6]{HKRS}, which are a generalization of Jeff Smith's combinatorial model categories (see, e.g., \cite[\S A.2.6]{htt}).
\begin{defn}
Let $F\colon\mathcal{N}\to\mathcal{M}$ be a functor, and $\mathcal{M}$ a model category. The \emph{left-induced model structure} $\lims{\mathcal{N}}$ (resp.~\emph{right-induced model structure} $\rims{\mathcal{N}}$) on $\mathcal{N}$, when it exists, is the model structure in which the cofibrations (resp.~the fibrations) and the weak equivalences are created by $F$.
\end{defn}
The following criterion for existence of right-induced model structure along $F$ in the presence of a string of adjoint functors $L\dashv F\dashv R$ was given by Drummond-Cole and the first author.
\begin{thm}[{\cite[Theorem~2.3]{DrummondColeHackney}}]
\label{existenceright}
Let $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a string of adjoint functors, where $\mathcal{M}$ is a cofibrantly generated model category and $\mathcal{N}$ is a bicomplete category.
If $F L\lcolon\mathcal{M}\rightleftarrows\mathcal{M} \rcolon F R$ is a Quillen adjunction, then the category $\mathcal{N}$ supports the right-induced model structure $\rims{\mathcal{N}}$.
Further, the functor $F\colon\rims{\mathcal{N}}\to\mathcal{M}$ is both left and right Quillen.
\end{thm}
Under the additional hypothesis that $F$ is fully faithful, Campbell observed in \cite[Proposition 2.2]{Campbell:JCC} that the model structure on $\mathcal{N}$ is also left-induced.
The following makes no reference to the functor $R$, but we state it this way as it is most meaningful in the case when $FL \dashv FR$ is a Quillen adjunction on $\mathcal{M}$.
\begin{definition}[Homotopy idempotent string]\label{def homotopy idempotent}
Let $\mathcal{M}$ be a model category and let $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a string of adjoint functors.
\begin{itemize}[leftmargin=*]
\item For each $X$ in $\mathcal{N}$, define a map $\epsilon_X^h$ of $\mathcal{N}$ as the composite
\[
\epsilon_X^h\colon L((FX)^c) \to LFX \to X
\]
whose first map is $L$ applied to the cofibrant replacement $(FX)^c \overset\sim\to FX$ in $\mathcal{M}$, and whose second map is the counit of $L\dashv F$.
Notice that $\epsilon^h \colon L((F-)^c) \Rightarrow \id_{\mathcal{N}}$ is a natural transformation.
\item We say that the string of adjoint functors is \emph{homotopy idempotent} if the map $F\epsilon_X^h \colon FL((FX)^c) \to FX$ is a weak equivalence in $\mathcal{M}$ for every object $X$ in $\mathcal{N}$.
\end{itemize}
\end{definition}
\begin{remark}\label{remark derived unit}
We have chosen this definition because it is the most convenient for the proof of Theorem~\ref{existenceleft}, but of course there are other formulations (see also Proposition~\ref{Quillenidempotent2} below).
For example, by combining the naturality square for the unit $\eta$ of the adjunction $L\dashv F$ with a triangle identity, we obtain the following commutative diagram in $\mathcal{M}$:
\[ \begin{tikzcd}
(FX)^c \dar{\eta_{(FX)^c}} \rar[two heads, "\sim"] & FX \dar{\eta_{FX}} \arrow[dr, bend left=15, "="' swap] \\
FL((FX)^c) \rar & FLFX \rar["F\epsilon_X"] & FX.
\end{tikzcd} \]
By two-of-three, one sees that $F\epsilon_X^h$ is a weak equivalence in $\mathcal{M}$ if and only if the map $\eta_{(FX)^c}\colon(FX)^c\to FL((FX)^c)$ is a weak equivalence.
\end{remark}
\begin{rmk}
A string of adjoint functors $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ is automatically
a homotopy idempotent string in the following cases:
\begin{itemize}[leftmargin=*]
\item when the adjoint string $L\dashv F\dashv R$ is idempotent, the adjunction $FL\dashv FR$ is a Quillen pair and every object in $\mathcal{M}$ is cofibrant, or
\item when the adjoint string $L\dashv F\dashv R$ is fully faithful.
\end{itemize}
\end{rmk}
We now give a criterion for the existence of the left-induced model structure along $F$ in presence of a string of adjoint functors $L\dashv F\dashv R$.
\begin{thm}
\label{existenceleft}
Let $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a string of adjoint functors, where $\mathcal{M}$ is an accessible model category and $\mathcal{N}$ is a locally presentable category.
If $FL \dashv FR$ is a Quillen adjunction and $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ is a homotopy idempotent string of adjoint functors, then the category $\mathcal{N}$ supports the left-induced model structure $\lims{\mathcal{N}}$.
Further, the functor $F\colon\lims{\mathcal{N}}\to\mathcal{M}$ is both left and right Quillen.
\end{thm}
The proof of the preceding theorem uses the cylinder object argument, whose statement we recall here for the convenience of the reader.
The conclusion of Theorem~\ref{cylinderobject} relies on the Garner--Hess--K\k{e}dziorek--Riehl--Shipley Acyclicity Theorem, whose corrected proof may be found in \cite{GarnerKedziorekRiehl}.
\begin{thm}[{\cite[Theorem 2.2.1]{HKRS}}]
\label{cylinderobject}
Let $F\lcolon\mathcal{N}\rightleftarrows\mathcal{M}\rcolon G$ be an adjoint pair, $\mathcal{M}$ an accessible model category and $\mathcal{N}$ a locally presentable category. Suppose that the following conditions hold.
\begin{enumerate}[leftmargin=*]
\item For every object $X$ of $\mathcal{N}$, there exists a morphism $\phi_X \colon QX \to X$ such that $F\phi_{X}$ is a weak equivalence and $F(QX)$ is cofibrant in $\mathcal{M}$, \label{cylinder item number one}
\item For each morphism $f\colon X \to Y$ in $\mathcal{N}$ there exists a morphism $Qf\colon QX \to QY$ satisfying $\phi_{Y}\circ Qf =f\circ \phi_{X}$, and \label{cylinder item number two}
\item For every object $X$ of $\mathcal{N}$, there exists a factorization $QX \amalg QX \xrightarrow j\mathrm{Cyl}(QX) \xrightarrow p QX$ of the fold map such that $Fj$ is a cofibration and $Fp$ is a weak equivalence.\label{cylinder item number three}
\end{enumerate}
Then $\mathcal{N}$ supports the model structure $\lims{\mathcal{N}}$ left-induced along $F$.
\end{thm}
The following proof includes a verification of the conditions of \cref{cylinderobject}.
\begin{proof}[Proof of \cref{existenceleft}]
For an object $X$ of $\mathcal{N}$, we define $QX$ to be $L((FX)^c)$ and let $\phi_X$ denote the map $\epsilon_X^h$ from \cref{def homotopy idempotent}:
\[ \phi_X \coloneqq \epsilon^h_X \colon QX = L((FX)^c) \xrightarrow{L(\overset\sim\twoheadrightarrow)} LFX \xrightarrow{\epsilon_X} X.\]
We now verify the conditions to apply \cref{cylinderobject}.
The map $F\epsilon^h_X$ is a weak equivalence in $\mathcal{M}$ by assumption, and, since $FL$ is left Quillen, $FQX=FL((FX)^c)$ is a cofibrant object of $\mathcal{M}$. Thus we have established Condition \eqref{cylinder item number one}.
Condition \eqref{cylinder item number two} holds because $\epsilon^h$ is a natural transformation $Q\Rightarrow \id_{\mathcal{N}}$.
For Condition \eqref{cylinder item number three}, notice that we have a factorization
\[
\begin{tikzcd}
(FX)^c \amalg (FX)^c \rar[rightarrowtail, "J"] & \mathrm{Cyl}((FX)^c) \rar[two heads, "\sim" swap, "P"] & (FX)^c
\end{tikzcd}
\]
of the fold map of $(FX)^c$ in $\mathcal{M}$.
The objects $\mathrm{Cyl}((FX)^c)$ and $(FX)^c$ are cofibrant in $\mathcal{M}$, and by Ken Brown's lemma \cite[Corollary 7.7.2(1)]{Hirschhorn:MCL} the left Quillen functor $FL$ preserves weak equivalences between cofibrant objects, hence $FLP$ is a weak equivalence in $\mathcal{M}$.
Further, $FLJ$ is again a cofibration since $FL$ is left Quillen.
Thus the factorization
\[
\begin{tikzcd}
QX \amalg QX \cong L\left[(FX)^c \amalg (FX)^c\right] \rar["LJ"] & L\mathrm{Cyl}((FX)^c) \rar["LP"] & QX
\end{tikzcd}
\]
of the fold map of $QX$ has the desired properties.
We conclude from \cref{cylinderobject} that the left-induced model structure exists. The functor $F$ is left Quillen by construction, and the proof that $F$ is right Quillen is an evident variation of the one appearing in \cite[Theorem 2.3]{DrummondColeHackney} and does not depend on homotopy idempotency.
That is, one uses that $FL$ preserves (acyclic) cofibrations and $F$ reflects them to infer that $L$ is left Quillen.
\end{proof}
\begin{rmk}
A dual version of \cite[Theorem 5.6]{DrummondColeHackney} holds, which allows one to efficiently lift Quillen equivalences in circumstances where Theorem~\ref{existenceleft} holds.
As we will not need this result here we will not repeat the statement (the proof is formally dual), and instead merely note that the last two bullet points of \cite[Theorem 5.6]{DrummondColeHackney} should be replaced with `$F'$ reflects cofibrations and preserves fibrant objects' and `$F$ preserves cofibrations and cofibrant objects', respectively.
\end{rmk}
By the dual of \cite[Proposition 2.4]{DrummondColeHackney}, if $\mathcal{M}$ is left or right proper, then the same is true for $\lims{\mathcal{N}}$.
The following proposition guarantees that the left- and right-induced model structures, if they exist, have equivalent homotopy theories.
\begin{prop}\label{prop comparison of left and right}
Let $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a string of adjoint functors, and $\mathcal{M}$ a model category. If the left-induced model structure $\lims{\mathcal{N}}$ and the right-induced model structure $\rims{\mathcal{N}}$ both exist,
then the adjunction
$\id_{\mathcal{N}}\lcolon\rims{\mathcal{N}}\rightleftarrows\lims{\mathcal{N}}\rcolon \id_{\mathcal{N}}$
is a Quillen equivalence.
\end{prop}
\begin{proof}
As the two model structures share the same class of weak equivalences, it is enough to show that $\id_{\mathcal{N}} \colon \lims{\mathcal{N}} \to \rims{\mathcal{N}}$ preserves fibrations.
This holds because $F\colon \lims{\mathcal{N}} \to \mathcal{M}$ preserves fibrations and $F\colon \rims{\mathcal{N}} \to \mathcal{M}$ reflects them.
\end{proof}
\begin{cor}
\label{corollary fully faithful string}
Let $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a fully faithful string of adjoint functors, $\mathcal{M}$ a combinatorial model category and $\mathcal{N}$ a locally presentable category.
Then the category $\mathcal{N}$ admits both the left-induced model structure $\lims{\mathcal{N}}$ and the right-induced model structure $\rims{\mathcal{N}}$.
Further, the left diagram below is a diagram of left Quillen equivalences, and the right diagram below is a diagram of right Quillen equivalences.
\[ \begin{tikzcd}[column sep=small]
\rims{\mathcal{N}} \ar[rr,"\id_{\mathcal{N}}"] \ar[dr,"F"'] & & \lims{\mathcal{N}} \ar[dl, "F"]
&
\rims{\mathcal{N}} \ar[dr,"F"'] & & \lims{\mathcal{N}} \ar[dl, "F"] \ar[ll,"\id_{\mathcal{N}}"']
\\
& \mathcal{M} & & & \mathcal{M}
\end{tikzcd} \]
\end{cor}
\begin{proof}
As $\mathcal{M}$ is combinatorial, it is both accessible and cofibrantly generated.
We assumed that $L$ is fully faithful, implying that the functor $FL\cong \id_{\mathcal{M}}$ is left Quillen, so both Theorem~\ref{existenceleft} and Theorem~\ref{existenceright} apply to show the existence of the indicated model structures.
By \cite[Theorem 3.2]{FKKR}, the functor $F \colon \rims{\mathcal{N}} \to \mathcal{M}$ is both a left and right Quillen equivalence.
The fact that $F \colon \lims{\mathcal{N}} \to \mathcal{M}$ is both a left and a right Quillen equivalence now follows from \cref{prop comparison of left and right} and two out of three for Quillen equivalences \cite[Corollary 1.3.15]{Hovey:MC}.
\end{proof}
We devote the remainder of this section to Proposition~\ref{Quillenidempotent2}, which gives an interpretation of what being a homotopy idempotent adjoint string (in the sense of \cref{def homotopy idempotent}) means at the level of homotopy categories.
To give the most natural statement of this proposition, we make use of the language of deformable adjunctions from \cite[\S2.2]{RiehlCHT} (originally from \cite[\textsection44.2]{DwyerHirschhornKanSmith}), which will not be needed elsewhere.
The reader who prefers to stay within the world of Quillen model categories should be comforted to know that with some adjustments this is possible, and such a reader is advised to immediately skip down to Remark~\ref{remark stay within Quillen world} and the statement of Proposition~\ref{Quillenidempotent2}.
\begin{rmk}
\label{adjunctionhomotopycategory}
Let $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a string of adjoint functors and $\mathcal{M}$ a model category such that $F L\lcolon\mathcal{M}\rightleftarrows\mathcal{M}\rcolon F R$ is a Quillen pair.
We endow $\mathcal{N}$ with the class of weak equivalences created by $F$, and denote by $\ho\mathcal{N}$ the corresponding homotopy category (which is potentially not locally small).
In this situation, one can then show that $F\colon\mathcal{N}\to\mathcal{M}$ is homotopical, while $L\colon\mathcal{M}\to\mathcal{N}$ (resp.~$R\colon\mathcal{M}\to\mathcal{N}$) preserves weak equivalences between cofibrant (resp.~fibrant) objects, by Ken Brown's lemma \cite[Corollary 7.7.2(1)]{Hirschhorn:MCL}.
In particular, the adjunctions $L\lcolon\mathcal{M}\rightleftarrows\mathcal{N}\rcolon F$ and $F\lcolon\mathcal{N}\rightleftarrows\mathcal{M}\rcolon R$ are \emph{deformable adjunctions}, and by \cite[Theorem~2.2.11]{RiehlCHT} they induce adjunctions $\overline L\lcolon\ho\mathcal{M}\rightleftarrows\ho\mathcal{N}\rcolon\overline F$ and $\overline F\lcolon\ho\mathcal{N}\rightleftarrows\ho\mathcal{M}\rcolon\overline R$ at the level of homotopy categories, where the values of the functors on objects are $\overline LA = L(A^c)$, $\overline RA=R(A^f)$ and $\overline FX = FX$.
There is a string of adjoint functors at the level of homotopy categories $\overline F\lcolon \ho\mathcal{N} \lrl \ho\mathcal{M} \rcolon \overline L,\overline R$.
\end{rmk}
\begin{prop}
\label{Quillenidempotent2}
Let $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ be a string of adjoint functors, and $\mathcal{M}$ a model category such that $F L\lcolon\mathcal{M}\rightleftarrows\mathcal{M}\rcolon F R$ is a Quillen pair.
When $\mathcal{N}$ is endowed with the class of weak equivalences created by $F$, the following are equivalent.
\begin{enumerate}[leftmargin=*]
\item The functor $\overline F \colon \ho\mathcal{N} \to \ho\mathcal{M}$ is fully faithful.
\item The string $\overline F \lcolon \ho\mathcal{N} \lrl \ho\mathcal{M} \rcolon \overline L,\overline R $ is an idempotent adjoint string.\label{item idempotent at homotopy level}
\item The string $F\lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R $ is a homotopy idempotent adjoint string.\label{item homotopy idempotent}
\end{enumerate}
\end{prop}
\begin{proof}
As observed in \cref{adjunctionhomotopycategory}, the adjunction $L\lcolon\mathcal{M}\rightleftarrows \mathcal{N}\rcolon F$ induces an adjunction $\overline{L}\lcolon\ho\mathcal{M}\rightleftarrows \ho\mathcal{N}\rcolon \overline{F}$ at the level of homotopy categories.
The natural bijections coming from the total derived adjunction $\overline L \dashv \overline F$ and the adjunction $L\dashv F$ fit into the square
\[
\begin{tikzcd}[row sep=small]
\mathcal{N}(LA,X) \dar \rar[leftrightarrow,"\cong"] & \mathcal{M}(A,FX) \dar \\
\ho\mathcal{N}(LA,X) \dar & \ho\mathcal{M}(A,FX) \dar{=} \\
\ho\mathcal{N}(\overline LA,X) \rar[leftrightarrow,"\cong"] & \ho\mathcal{M}(A,\overline FX).
\end{tikzcd}
\]
One sees that the counit $\overline \epsilon \colon \overline L \phantom{.}\overline F \Rightarrow \id_{\ho \mathcal{N}}$ of the adjunction $\overline{L}\dashv\overline{F}$ is represented at an object $X$ by the map $\epsilon_X^h \colon L (FX)^c \to LFX \to X$ in $\mathcal{N}$, obtained by applying $L$ to the cofibrant replacement map of $FX$ and by composing with $\epsilon_X$, the counit of the adjunction $L\dashv F$.
As $F$ is homotopical, the natural transformation $\overline F \overline \epsilon \colon \overline F \phantom{.}\overline L\phantom{.} \overline F \Rightarrow \overline F$ at $X$ is then represented by $F\epsilon_X^h\colon FL (FX)^c \to FLFX \to FX$ in $\mathcal{M}$.
Since $F$ creates weak equivalences, $\epsilon_X^h$ is a weak equivalence if and only if $F\epsilon_X^h$ is.
The equivalence now follows from \cref{characterizationidempotent} and \cref{def homotopy idempotent}.
\end{proof}
\begin{remark}\label{remark stay within Quillen world}
Suppose we are in the situation of Proposition~\ref{Quillenidempotent2}, and additionally assume that $\mathcal{M}$ is cofibrantly generated and that $\mathcal{N}$ is bicomplete.
One then has access to the right-induced model structure $\rims{\mathcal{N}}$ from Theorem~\ref{existenceright}, so that the functor $F \colon \rims{\mathcal{N}} \to \mathcal{M}$ is both left and right Quillen.
Then the category $\ho\mathcal{N} = \ho\rims{\mathcal{N}}$ is defined as usual, and the induced adjoint string at the level of homotopy categories uses that the left and right derived functors of $F$ coincide (see \cite[Corollary 7.8]{Shulman:CCLRDF}).
In that special case, this proposition may be proved by explicitly comparing the map from \cref{def homotopy idempotent} to the derived unit of the Quillen pair $L\dashv F$.
\end{remark}
\begin{remark}[Bousfield--Friedlander (co)localization]
Suppose $\mathcal{M}$ and $\mathcal{N}$ are model categories and $F \lcolon \mathcal{N} \lrl \mathcal{M} \rcolon L,R$ is a homotopy idempotent string where $F$ creates weak equivalences.
It would be interesting to investigate further conditions on the adjoint string which guarantee that the Bousfield--Friedlander localization $\mathcal{M}^{FL}$ of $\mathcal{M}$ at the $FL$-equivalences exists \cite{MR2427416}, since one can prove that $F\colon \mathcal{N} \to \mathcal{M}^{FL}$ is a right Quillen equivalence.
Dually, $F$ will give a left Quillen equivalence from $\mathcal{N}$ to the colocalization of $\mathcal{M}$ at the $FR$-equivalences \cite[\S3]{MR2277698}, provided it exists.
\end{remark}
\section{Cubical sets}
In this section and the next, we denote by $\mathit{s}\set_{(\infty,0)}$ the Kan--Quillen model structure on the category $\mathit{s}\set$ for $(\infty,0)$-categories (namely $\infty$-groupoids), and by $\mathit{s}\set_{(\infty,1)}$ the Joyal model structure on $\mathit{s}\set$ for $(\infty,1)$-categories (see e.g.~\cite{DuggerSpivakMapping,htt}). In these model structures, the cofibrations are precisely the monomorphisms and the fibrant objects are respectively the Kan complexes and the quasi-categories.
We uniformly denote these two model structures by $\mathit{s}\set_{(\infty,\varepsilon)}$, where $\varepsilon=0,1$.
The goal of the present section is to discuss several model structures on the category of cubical sets.
As mentioned in the introduction, there are many useful categories of cubical sets, depending on the choice of the underlying cube category; several of these are discussed in \cite{GrandisMauri,BuchholtzMorehouse}.
For example, one can use the minimal structure, where only the face and degeneracy maps are present; one can add in either positive or negative connections; or one can consider all poset maps between cubes.
These cube categories are Grothendieck test categories, so each is suitable for modeling $\infty$-groupoids (see \cite{Jardine:CHT,cisinski,Maltsiniotis:CCCUCT,StreicherWeinberger}).
More recently, model structures for higher categories have been developed for cubical sets with connection: for $\infty$-categories this was done in \cite{DohertyKapulkinLindseySattler:CMI1C}, and using marked cubical sets a model for $(\infty,n)$-categories was given in \cite{CampionKapulkinMaehara:ACMINC}.
The methods of the last references break down when working with the full cube category (which is not even a generalized Reedy category in the sense of \cite{BergerMoerdijk:OENRC}), but below we will show that one can nevertheless obtain interesting results.
We compare one of our new model structures to a type-theoretical model structure from \cite{SattlerIdempotent,StreicherWeinberger} and another to the cubical Joyal model structure of \cite{DohertyKapulkinLindseySattler:CMI1C}.
\subsection{New model structures on cubical sets}\label{subsec cube}
Let $\square$ denote the full subcategory of $\cC\!\mathit{at}$ of cubes $[1]^n$ for $n\ge0$,
and let $\mathit{c}\set=\cS\!\mathit{et}^{\square^{\mathrm{op}}}$ denote the category of \emph{cubical sets}, namely presheaves $X\colon\square^{\mathrm{op}}\to\cS\!\mathit{et}$.
This category of cubes, and the corresponding category of cubical sets, are those studied in \cite{SattlerIdempotent, KV} (see \cite{CCHM} for a closely related cube category).
There is a triangulation functor $T\colon\mathit{c}\set\to\mathit{s}\set$, defined on representables by $T(\square[1]^n) \coloneqq \Delta[1]^n$ and extended cocontinuously to all cubical sets; it admits a right adjoint $C\colon\mathit{s}\set\to\mathit{c}\set$.
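As for any such nerve--realization adjunction, the right adjoint is determined by the Yoneda lemma and adjointness:
\[
(CX)([1]^n)\cong\mathit{c}\set(\square[1]^n,CX)\cong\mathit{s}\set(T(\square[1]^n),X)=\mathit{s}\set(\Delta[1]^n,X).
\]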
Kapulkin--Voevodsky show in \cite[\textsection1]{KV} that the functor $C$ is fully faithful.
Sattler shows in \cite[Theorem 2.1]{SattlerIdempotent} that for this specific choice of category of cubical sets the functor $T$ also admits a left adjoint $L\colon\mathit{s}\set\to\mathit{c}\set$. In particular, there is a fully faithful adjoint string
\[T\lcolon\mathit{c}\set\lrl\mathit{s}\set\rcolon L,C\]
between the category of simplicial sets and this specific choice of category of cubical sets.
From \cref{corollary fully faithful string}, we obtain the following, which endows $\mathit{c}\set$ with two Quillen equivalent model structures for $(\infty,\varepsilon)$-categories for any fixed $\varepsilon=0,1$.
\begin{prop}\label{cor cubical QE}
Let $\varepsilon=0,1$.
The category $\mathit{c}\set$ admits both the left-induced model structure $\mathit{c}\set_{\ell,\varepsilon}$ and the right-induced model structure $\mathit{c}\set_{r,\varepsilon}$ along $T\colon\mathit{c}\set\to\mathit{s}\set_{(\infty,\varepsilon)}$.
Further, the left diagram below is a diagram of left Quillen equivalences, and the right diagram below is a diagram of right Quillen equivalences.
\[ \begin{tikzcd}[column sep=tiny]
\mathit{c}\set_{r,\varepsilon}\ar[rr,"\id_{\mathit{c}\set}"] \ar[dr,"T"'] & & \mathit{c}\set_{\ell,\varepsilon} \ar[dl, "T"]
&
\mathit{c}\set_{r,\varepsilon} \ar[dr,"T"'] & & \mathit{c}\set_{\ell,\varepsilon} \ar[dl, "T"] \ar[ll,"\id_{\mathit{c}\set}"']
\\
& \mathit{s}\set_{(\infty,\varepsilon)} & & & \mathit{s}\set_{(\infty,\varepsilon)}
\end{tikzcd} \]
\end{prop}
\begin{proof}
As mentioned above, $T$ admits both adjoints, and the right adjoint $C$ was shown to be fully faithful by Kapulkin--Voevodsky in \cite[\S1]{KV}, so this is a fully faithful adjoint string.
Corollary~\ref{corollary fully faithful string} implies the result.
\end{proof}
The following lemma, which we learned from Sattler, clarifies the relation between the
classes of cofibrations in the model structures $\mathit{c}\set_{r,\varepsilon}$ and $\mathit{c}\set_{\ell,\varepsilon}$.
\begin{lem}\label{lemma monos cofib}
The class of monomorphisms of $\mathit{c}\set$ contains the cofibrations of $\mathit{c}\set_{r,\varepsilon}$
and is contained in the class of cofibrations of $\mathit{c}\set_{\ell,\varepsilon}$.
\end{lem}
\begin{proof}
The generating cofibrations of $\mathit{c}\set_{r,\varepsilon}$ are given by applying $L$ to a set of generating cofibrations for $\mathit{s}\set_{(\infty,\varepsilon)}$, and
Sattler shows in \cite[Proposition~3.3]{SattlerIdempotent} that the functor $L$ preserves monomorphisms. Since $\mathit{c}\set$ is a presheaf category, the class of monomorphisms of $\mathit{c}\set$ is saturated. It follows that the class of cofibrations of $\mathit{c}\set_{r,\varepsilon}$ is contained in the class of monomorphisms of $\mathit{c}\set$.
Further, being a right adjoint, the functor $T$ preserves monomorphisms.
It follows, using the description of the cofibrations in $\mathit{c}\set_{\ell,\varepsilon}$, that the class of monomorphisms of cubical sets is contained in the class of cofibrations of $\mathit{c}\set_{\ell,\varepsilon}$.
\end{proof}
\begin{prop}
\label{corcsets}
Let $\varepsilon=0,1$.
There is a model structure on cubical sets, denoted by $\mathit{c}\set_{m,\varepsilon}$, in which the cofibrations are the monomorphisms and the weak equivalences are created by $T\colon\mathit{c}\set\to\mathit{s}\set_{(\infty,\varepsilon)}$. Further, the left diagram below is a diagram of left Quillen equivalences, and the right diagram below is a diagram of right Quillen equivalences.
\[ \begin{tikzcd}[column sep=small]
\mathit{c}\set_{r,\varepsilon}\rar["\id"] \ar[dr,"T"'] & \mathit{c}\set_{m,\varepsilon} \dar["T"] \rar["\id"] & \mathit{c}\set_{\ell,\varepsilon} \ar[dl, "T"]
\\
& \mathit{s}\set_{(\infty,\varepsilon)} &
\end{tikzcd}
\quad
\begin{tikzcd}[column sep=small]
\mathit{c}\set_{r,\varepsilon} \ar[dr,"T"'] &\lar["\id",swap] \mathit{c}\set_{m,\varepsilon} \dar["T"] & \lar["\id",swap] \mathit{c}\set_{\ell,\varepsilon} \ar[dl, "T"]
\\
& \mathit{s}\set_{(\infty,\varepsilon)} &
\end{tikzcd}
\]
\end{prop}
\begin{proof}
By a theorem of Jardine \cite{JardineIntermediate} (following the interpretation from \cite[Remark 2.3.4]{HKRS}), we know that if $\mathcal{N}$ is a category endowed with two model structures $\mathcal{N}_1$ and $\mathcal{N}_2$ having the same class of weak equivalences, and if $\mathcal{K}$ is a class of maps which contains the cofibrations of $\mathcal{N}_1$ and is contained in the cofibrations of $\mathcal{N}_2$, then $\mathcal{N}$ supports a model structure in which the class of weak equivalences is the same as for $\mathcal{N}_1$ and $\mathcal{N}_2$, and the class of cofibrations is $\mathcal{K}$.
By \cref{lemma monos cofib}, we can apply this when $\mathcal{K}$ is the class of monomorphisms of $\mathit{c}\set$ to obtain the indicated model structures.
\end{proof}
The model structure $\mathit{c}\set_{m,0}$ should be related to the \emph{test model structure} (see \cite[Remark 1.2]{KV}) which has the same cofibrations and also has weak equivalences created in $\infty$-groupoids.
The model structures $\mathit{c}\set_{r,\varepsilon}$ for $\varepsilon=0,1$ were known to Sattler, while the model structures $\mathit{c}\set_{\ell,\varepsilon}$ for $\varepsilon=0,1$, as well as the model structure $\mathit{c}\set_{m,1}$ are new.
\subsection{A comparison with cubical models of homotopy type theory}
The other model structure on the category $\mathit{c}\set$ considered in the literature, motivated by homotopy type theory, is the minimal Cisinski model structure on $\mathit{c}\set$ from \cite[\textsection3]{StreicherWeinberger}, which was also studied in \cite[\textsection3.3]{SattlerIdempotent}.
\begin{thm}
There is a model structure on $\mathit{c}\set$, which we denote by $\mathit{c}\set_{\mathrm{HoTT}}$, in which the cofibrations are the monomorphisms, and the fibrations are the maps that have the right lifting property with respect to the class of maps
\[(\square[0]\times B)\amalg_{\square[0]\times A}(\square[1]\times A)\to\square[1]\times B\]
where $A\to B$ is a monomorphism of cubical sets and $\square[0]\to\square[1]$ is either of the two canonical inclusions.
Further, the functor $T\colon\mathit{c}\set_{\mathrm{HoTT}}\to\mathit{s}\set_{(\infty,0)}$ is left and right Quillen.
\end{thm}
It is mentioned in \cite{Sattler:DCMTTMHT} and in \cite[\textsection5]{StreicherWeinberger} that, unlike for other categories of cubical sets, it is an open problem whether the homotopy theoretic model structure $\mathit{c}\set_{\mathrm{HoTT}}$ is Quillen equivalent to the model structure $\mathit{s}\set_{(\infty,0)}$ for $\infty$-groupoids.
We now explore how the model structure $\mathit{c}\set_{\mathrm{HoTT}}$ compares with $\mathit{c}\set_{m,0}$.
\begin{prop}
\label{remark quillen pair hott}
The identity is a left Quillen functor $\mathit{c}\set_{\mathrm{HoTT}} \to \mathit{c}\set_{m,0}$.
\end{prop}
\begin{proof}
The two model structures have the same class of cofibrations, so it suffices to show that pushout-products
\[(\square[0]\times B)\amalg_{\square[0]\times A}(\square[1]\times A)\to\square[1]\times B\]
of monomorphisms $A\hookrightarrow B$ and inclusions $\square[0]\hookrightarrow\square[1]$ of $\mathit{c}\set_{\mathrm{HoTT}}$ are weak equivalences in $\mathit{c}\set_{m,0}$, namely are sent by $T$ to weak equivalences of $\mathit{s}\set_{(\infty,0)}$.
Since the functor $T$ is both a left and a right adjoint, it preserves pushouts, products and monomorphisms, and therefore it sends this map to the pushout product in $\mathit{s}\set$
\[(\Delta[0]\times TB)\amalg_{\Delta[0]\times TA}(\Delta[1]\times TA)\to\Delta[1]\times TB\]
of the monomorphism $TA\hookrightarrow TB$ (which is a cofibration of $\mathit{s}\set_{(\infty,0)}$) and the inclusion $\Delta[0]\hookrightarrow\Delta[1]$ (which is an acyclic cofibration of $\mathit{s}\set_{(\infty,0)}$). Since $\mathit{s}\set_{(\infty,0)}$ is a cartesian closed model category, the desired map is a weak equivalence of $\mathit{s}\set_{(\infty,0)}$.
\end{proof}
However, understanding whether the identity functor from \cref{remark quillen pair hott} is a left Quillen equivalence\footnote{As these model structures share the same class of cofibrations, this is a left Quillen equivalence if and only if the two model structures are equal.} is a non-trivial matter. Indeed, it is equivalent to the functor $T \colon \mathit{c}\set_{\mathrm{HoTT}} \to \mathit{s}\set_{(\infty,0)}$ being a left Quillen equivalence, which as mentioned earlier is an open question.
\subsection{A comparison between cubical models for \texorpdfstring{$(\infty,1)$}{(∞,1)}-categories}
In \cite{DohertyKapulkinLindseySattler:CMI1C}, a different model structure was constructed on cubical sets which also serves as a model for $(\infty,1)$-categories.
They use smaller cube categories than we have used above, and they give a much more explicit description of their model structure than we have given; this process is helped along by the fact that their cube categories are EZ-Reedy categories in the sense of \cite{elegant}.
For concreteness, write $\square'$ for the wide subcategory of $\square$ which is generated by faces, degeneracies, and both positive and negative connections \cite[1.2]{DohertyKapulkinLindseySattler:CMI1C}, and let $k\colon \square' \to \square$ be the inclusion functor.
The main theorem of \cite{DohertyKapulkinLindseySattler:CMI1C} is that there is a Quillen model structure $\mathit{c}\set'_{\mathrm{cJ}}$, dubbed the \emph{cubical Joyal model structure}, on $\mathit{c}\set' \coloneqq \cS\!\mathit{et}^{(\square')^\mathrm{op}}$ so that the triangulation functor $T' \colon \mathit{c}\set'_{\mathrm{cJ}} \to \mathit{s}\set_{(\infty,1)}$ is a left Quillen equivalence.
\begin{theorem}\label{theorem comparison with dkls}
The restriction functor $k^* \colon \mathit{c}\set_{\ell,1} \to \mathit{c}\set'_{\mathrm{cJ}}$ is a right Quillen equivalence.
\end{theorem}
\begin{proof}
For the proof, we show that the left Kan extension $k_! \colon \mathit{c}\set'_{\mathrm{cJ}} \to \mathit{c}\set_{\ell,1}$ is a left Quillen equivalence.
The triangulation functors are given by sending the object $[1]^n$ to $\Delta[1]^{n}$ in $\mathit{s}\set$, and then extending using that the category of presheaves is the free cocompletion \cite[Theorem 4.51]{Kelly}.
In particular, we have the following commutative diagram,
\[ \begin{tikzcd}
\square' \rar{k} \dar & \square \dar \rar & \mathit{s}\set \\
\mathit{c}\set' \rar{k_!} \ar[urr, bend right=40,"T'"' near end] & \mathit{c}\set \ar[ur,"T"]
\end{tikzcd} \]
whose vertical morphisms are Yoneda embeddings.
If we knew that $k_!$ was a left Quillen functor, then we would have a diagram
\[ \begin{tikzcd}[column sep=tiny]
\mathit{c}\set'_{\mathrm{cJ}} \ar[rr,"k_!"] \ar[dr,"T'"'] & & \mathit{c}\set_{\ell,1} \ar[dl,"T"] \\
& \mathit{s}\set_{(\infty,1)}
\end{tikzcd} \]
of left Quillen functors with $T$ and $T'$ left Quillen equivalences by \cref{cor cubical QE} and \cite[Theorem 6.1]{DohertyKapulkinLindseySattler:CMI1C}.
By two out of three for Quillen equivalences \cite[Corollary 1.3.15]{Hovey:MC}, this would imply that $k_!$ is a left Quillen equivalence as well.
It remains to check that $k_!$ is a left Quillen functor.
But this is automatic: $T k_! \cong T'$ by the diagram above, $T'$ preserves cofibrations and acyclic cofibrations, and $T$ reflects cofibrations and weak equivalences, so $k_!$ preserves both classes.
\end{proof}
\begin{remark}
It is also true that $k^* \colon \mathit{c}\set_{m,1} \to \mathit{c}\set'_{\mathrm{cJ}}$ is a right Quillen equivalence. To establish this, it is enough to show that $k_!$ preserves monomorphisms.
This uses that $\square'$ is an EZ-Reedy category by \cite[Corollary 1.17]{DohertyKapulkinLindseySattler:CMI1C}, so that monomorphisms are generated by boundary inclusions of representables.
These generators are sent to monomorphisms by $k_!$, hence the same is true for all monomorphisms.
\end{remark}
\section{Further applications}
In this section we give model structures, induced from the Joyal and Kan--Quillen model structures on simplicial sets, on several other categories.
We will study three fully faithful strings of adjoint functors of the form $F\lcolon\mathcal{N}\lrl\mathit{s}\set_{(\infty,\varepsilon)}\rcolon L,R$, and apply \cref{corollary fully faithful string} to obtain new model structures on $\mathcal{N}$ for $(\infty,\varepsilon)$-categories.
\subsection{Model structures on prederivators}\label{subsec pred}
Let $cat$ denote the $2$-category of \emph{homotopically finite categories}, namely those categories whose nerve has only finitely many nondegenerate simplices, and let $\mathit{p}\cD\!\mathit{er}$ denote the category of \emph{small prederivators}, namely $2$-functors $\mathbb D\colon cat^{\mathrm{op}}\to\cC\!\mathit{at}$, and strict natural transformations. As discussed in \cite[\textsection1]{FKKR}, this category is a locally presentable category of prederivators, as opposed to the more traditional (large and not locally presentable) category of $2$-functors $\mathbb D\colon\cC\!\mathit{at}^{\mathrm{op}}\to\mathcal{C} AT$.
There is an underlying functor $U\colon\mathit{p}\cD\!\mathit{er}\to\mathit{s}\set$, defined by $(U\mathbb D)_n \coloneqq \operatorname{ob} (\mathbb D([n]))$.
It is shown in \cite[\textsection1.15, 1.16]{FKKR} that the functor $U$ admits both a left and a right adjoint, and it is shown in \cite[Proposition~1.18]{FKKR} that the left adjoint is fully faithful.
In particular, there is a fully faithful string of adjoint functors $U\colon\mathit{p}\cD\!\mathit{er}\lrl\mathit{s}\set$ between the category of simplicial sets and the category of small prederivators.
From \cref{corollary fully faithful string}, we obtain the following, which endows $\mathit{p}\cD\!\mathit{er}$ with two Quillen equivalent model structures for $(\infty,\varepsilon)$-categories for any fixed $\varepsilon=0,1$.
\begin{prop}
Let $\varepsilon=0,1$.
The category $\mathit{p}\cD\!\mathit{er}$ admits both the left-induced model structure $\mathit{p}\cD\!\mathit{er}_{\ell,\varepsilon}$ and the right-induced model structure $\mathit{p}\cD\!\mathit{er}_{r,\varepsilon}$ along $U\colon\mathit{p}\cD\!\mathit{er}\to\mathit{s}\set_{(\infty,\varepsilon)}$. Further, the left diagram below is a diagram of left Quillen equivalences, and the right diagram below is a diagram of right Quillen equivalences.
\[ \begin{tikzcd}[column sep=tiny]
\mathit{p}\cD\!\mathit{er}_{r,\varepsilon}\ar[rr,"\id_{\mathit{p}\cD\!\mathit{er}}"] \ar[dr,"U"'] & & \mathit{p}\cD\!\mathit{er}_{\ell,\varepsilon} \ar[dl, "U"]
&
\mathit{p}\cD\!\mathit{er}_{r,\varepsilon} \ar[dr,"U"'] & & \mathit{p}\cD\!\mathit{er}_{\ell,\varepsilon} \ar[dl, "U"] \ar[ll,"\id_{\mathit{p}\cD\!\mathit{er}}"']
\\
& \mathit{s}\set_{(\infty,\varepsilon)} & & & \mathit{s}\set_{(\infty,\varepsilon)}
\end{tikzcd} \]
\end{prop}
The model structure $\mathit{p}\cD\!\mathit{er}_{r,1}$ is the one considered in \cite[\textsection3]{FKKR}, while the others are new.
\subsection{Model structures on marked simplicial sets}\label{subsec marked}
Let $\mathit{s}\set^{+}$ denote the category of marked simplicial sets, namely simplicial sets endowed with a specified set of \emph{marked} $1$-simplices containing the degenerate ones, and maps that preserve the marking, as in \cite[\textsection3.1]{htt}.
There is an underlying functor $U\colon\mathit{s}\set^{+}\to\mathit{s}\set$ which just forgets the marking.
This functor admits both a left adjoint and a right adjoint which are given by the minimal and maximal marking respectively, $(-)^{\flat},(-)^{\sharp}\colon\mathit{s}\set\to\mathit{s}\set^{+}$.
The minimal and maximal marking functors are fully faithful, since the unit of the adjunction $(-)^{\flat}\dashv U$ and the counit of the adjunction $U\dashv(-)^{\sharp}$ are identities.
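Explicitly, both markings leave the underlying simplicial set untouched: for any simplicial set $X$ one has
\[U(X^{\flat}) = X = U(X^{\sharp}),\]
which exhibits the unit of $(-)^{\flat}\dashv U$ and the counit of $U\dashv(-)^{\sharp}$ as identities.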
In particular, there is a fully faithful string of adjoint functors
\[U\lcolon \mathit{s}\set^{+}\lrl\mathit{s}\set\rcolon (-)^{\flat},(-)^{\sharp}\]
between the category of simplicial sets and the category of marked simplicial sets.
From \cref{corollary fully faithful string}, we obtain the following, which endows $\mathit{s}\set^{+}$ with two Quillen equivalent model structures for $(\infty,\varepsilon)$-categories for any fixed $\varepsilon=0,1$.
\begin{prop}
Let $\varepsilon=0,1$.
The category $\mathit{s}\set^{+}$ admits both the left-induced model structure $\mathit{s}\set^{+}_{\ell,\varepsilon}$ and the right-induced model structure $\mathit{s}\set^{+}_{r,\varepsilon}$ along $U\colon\mathit{s}\set^{+}\to\mathit{s}\set_{(\infty,\varepsilon)}$. Further, the left diagram below is a diagram of left Quillen equivalences, and the right diagram below is a diagram of right Quillen equivalences.
\[ \begin{tikzcd}[column sep=tiny]
\mathit{s}\set^{+}_{r,\varepsilon}\ar[rr,"\id_{\mathit{s}\set^{+}}"] \ar[dr,"U"'] & & \mathit{s}\set^{+}_{\ell,\varepsilon} \ar[dl, "U"]
&
\mathit{s}\set^{+}_{r,\varepsilon} \ar[dr,"U"'] & & \mathit{s}\set^{+}_{\ell,\varepsilon} \ar[dl, "U"] \ar[ll,"\id_{\mathit{s}\set^{+}}"']
\\
& \mathit{s}\set_{(\infty,\varepsilon)} & & & \mathit{s}\set_{(\infty,\varepsilon)}
\end{tikzcd} \]
\end{prop}
These model structures on $\mathit{s}\set^{+}$ are all new, and seemingly different from the model structure for $\infty$-categories on $\mathit{s}\set^{+}$ constructed by Lurie in \cite{htt}.
\begin{rmk}\label{lurie model structure remark}
Recall from \cite[\S3.1.3]{htt} the \emph{Cartesian model structure} on $\mathit{s}\set^{+}$, which we denote by $\mathit{s}\set^{+}_{\mathrm{qcat}}$, in which the cofibrations are the monomorphisms and the fibrant objects are the naturally marked quasi-categories.
Lurie shows in \cite[Proposition 3.1.5.3]{htt} that the functor $U\colon\mathit{s}\set^{+}_{\mathrm{qcat}}\to\mathit{s}\set_{(\infty,1)}$
is a right Quillen equivalence.
Although the model structures $\mathit{s}\set^{+}_{\ell,1}$ and $\mathit{s}\set^{+}_{r,1}$ do not seem to be comparable with $\mathit{s}\set^{+}_{\mathrm{qcat}}$ via the identity functor, we have the following composable chain of Quillen equivalences.
\[ \begin{tikzcd}
\mathit{s}\set^{+}_{r,1} \rar[shift left, "\id"] &
\mathit{s}\set^{+}_{\ell,1} \lar[shift left, "\id"] \rar[shift left, "U"] &
\mathit{s}\set_{(\infty,1)} \lar[shift left, "(-)^{\sharp}"] \rar[shift left, "(-)^{\flat}"] &
\mathit{s}\set^{+}_{\mathrm{qcat}} \lar[shift left, "U"]
\end{tikzcd} \]
\end{rmk}
\begin{rmk}
There is an adjoint string $\mathcal St_{\leq n} \lrl \mathcal St_{(\infty,n)}$ given in \cite[\S2.5]{EmilyNotes}, where $\mathcal St_{(\infty,n)}$ is the category of stratified simplicial sets equipped with the model structure for $(\infty,n)$-categories \cite[4.25]{EmilyNotes}, and $\mathcal St_{\leq n}$ consists of those objects marked only through dimension $n$.
This is an idempotent and homotopy idempotent adjoint string, though it is not fully faithful.
The left- and right-induced model structures on $\mathcal St_{\leq n}$ exist and coincide, and the inclusion is a right Quillen equivalence.
Moreover, when $n=0$ we have $\mathcal St_{\leq 0} = \mathit{s}\set_{(\infty,0)}$ and when $n=1$ we have $\mathcal St_{\leq 1} = \mathit{s}\set^{+}_{\mathrm{qcat}}$.
\end{rmk}
\subsection{Model structures on bisimplicial sets} \label{subsec bisimp}
Let $\mathit{ss}\set$ denote the category of bisimplicial sets, and $i_1^*\colon\mathit{ss}\set\to\mathit{s}\set$ the zeroth row functor, defined by restricting a bisimplicial set to its zeroth row as in \cite[\textsection4]{JT}.
The functor $i_1^*$ can be seen as the functor induced by
the fully faithful inclusion $i_1\colon \Delta\hookrightarrow\Delta\times\Delta$, given by $[n]\mapsto[n]\times[0]$. In particular, $i_1^*$ admits a left and a right adjoint, $(i_1)_!,(i_1)_*\colon\mathit{s}\set\to\mathit{ss}\set$ obtained as left and right Kan extension along the fully faithful functor $i_1$, and they are therefore fully faithful. In particular, there is a fully faithful string of adjoint functors
\[i_1^*\lcolon \mathit{ss}\set\lrl\mathit{s}\set\rcolon (i_1)_!,(i_1)_*\]
between the category of simplicial sets and the category of bisimplicial sets.
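Explicitly, for a bisimplicial set $X$ the row functor is computed levelwise by
\[(i_1^* X)_n = X([n]\times[0]) = X_{n,0},\]
so that $i_1^*$ simply extracts the simplicial set sitting in the zeroth row of $X$.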
From \cref{corollary fully faithful string}, we obtain the following, which endows $\mathit{ss}\set$ with two Quillen equivalent model structures for $(\infty,\varepsilon)$-categories for any fixed $\varepsilon=0,1$.
\begin{prop}
Let $\varepsilon=0,1$.
The category $\mathit{ss}\set$ admits both the left-induced model structure $\mathit{ss}\set_{\ell,\varepsilon}$ and the right-induced model structure $\mathit{ss}\set_{r,\varepsilon}$ along $i_1^* \colon\mathit{ss}\set\to\mathit{s}\set_{(\infty,\varepsilon)}$. Further, the left diagram below is a diagram of left Quillen equivalences, and the right diagram below is a diagram of right Quillen equivalences.
\[ \begin{tikzcd}[column sep=tiny]
\mathit{ss}\set_{r,\varepsilon}\ar[rr,"\id_{\mathit{ss}\set}"] \ar[dr,"i_1^*"'] & & \mathit{ss}\set_{\ell,\varepsilon} \ar[dl, "i_1^*"]
&
\mathit{ss}\set_{r,\varepsilon} \ar[dr,"i_1^*"'] & & \mathit{ss}\set_{\ell,\varepsilon} \ar[dl, "i_1^*"] \ar[ll,"\id_{\mathit{ss}\set}"']
\\
& \mathit{s}\set_{(\infty,\varepsilon)} & & & \mathit{s}\set_{(\infty,\varepsilon)}
\end{tikzcd} \]
\end{prop}
These model structures on $\mathit{ss}\set$ are all new, and seemingly different from the model structure for $\infty$-categories on $\mathit{ss}\set$ constructed by Rezk in \cite{rezkhomotopy}.
\begin{rmk}\label{Rezk model structure remark}
Recall from \cite[Theorem 7.2]{rezkhomotopy} the \emph{complete Segal space model structure} on $\mathit{ss}\set$, which we denote by $\mathit{ss}\set_{\mathrm{css}}$, in which the cofibrations are the monomorphisms and the fibrant objects are the (injectively fibrant) complete Segal spaces.
Joyal--Tierney show in \cite[Theorem 4.11]{JT} that the functor $i_1^*\colon\mathit{ss}\set_{\mathrm{css}}\to\mathit{s}\set_{(\infty,1)}$
is a right Quillen equivalence.
Although the model structures $\mathit{ss}\set_{\ell,1}$ and $\mathit{ss}\set_{r,1}$ do not seem to be comparable with $\mathit{ss}\set_{\mathrm{css}}$ via the identity functor, we have the following composable chain of Quillen equivalences.
\[ \begin{tikzcd}
\mathit{ss}\set_{r,1} \rar[shift left, "\id"] &
\mathit{ss}\set_{\ell,1} \lar[shift left, "\id"] \rar[shift left, "i_1^*"] &
\mathit{s}\set_{(\infty,1)} \lar[shift left, "(i_1)_*"] \rar[shift left, "(i_1)_!"] &
\mathit{ss}\set_{\mathrm{css}} \lar[shift left, "i_1^*"]
\end{tikzcd} \]
\end{rmk}
\bibliographystyle{amsalpha}
\section{Introduction}
Unambiguous experimental realization of a quantum spin liquid (QSL) state remains an enduring challenge \cite{Balents10:464,Zhou17:89,broholm20:367}. Characterized by a ground state featuring highly entangled spins exhibiting no long-range magnetic order, QSL states are born out of an intricate and often subtle interplay of comparable, often competing, energy scales and are thought to be quenched by relatively small perturbations. Thus, understanding and controlling crystalline disorder, structural distortions, chemical impurities, and intrinsic defects are critical challenges when developing QSL phenomenology in real materials.
NaRuO$_2$ is a newly proposed candidate QSL host that straddles a unique energy landscape -- one where Heisenberg-Kitaev interactions as well as extended exchange foster a native, quantum disordered ground state \cite{ortizNaRuO2}. NaRuO$_2$ is a member of the layered family of $AB$O$_2$ delafossite-like oxides, a larger family of $R\overline{3}m$ quasi-two-dimensional materials that support ideal antiferromagnetic triangular lattices on the $B$-site sublattice. Specifically, NaRuO$_2$ (Figure \ref{fig:Crystal}) features a triangular lattice of Ru$^{3+}$ ions separated by planes of Na$^+$. The edge-sharing RuO$_6$ octahedra place the Ru$^{3+}$ (4d$^5$) ions in a lightly trigonally distorted cubic crystal field. With appreciable spin-orbit coupling $\lambda$ and Coulomb repulsion $U$, the system is capable of supporting a half-filled $J_\text{eff}=1/2$ orbital. The result is a weak $J_\text{eff}=1/2$ Mott state with a disordered magnetic ground state and energetic antiferromagnetic interactions \cite{ortizNaRuO2}.
Although NaRuO$_2$ lacks native chemical disorder such as that present in triangular lattice compounds like YbMgGaO$_{4}$ \cite{Paddison17:13,li19:2}, off-stoichiometry and the resulting defects are a persistent concern among the alkali metal delafossite variants \cite{Dally17:459,clarke1998synthesis}. The typical culprit tends to be alkali-metal vacancies, whose presence is traditionally countered by the introduction of an excess of alkali precursors during growth. However, the historical precedent for alkali vacancies as the dominant defect often neglects complex structure-defect-property relationships that can dominate in real systems -- NaRuO$_2$ is one such example.
In this work, we examine the defect chemistry of the Heisenberg-Kitaev candidate material NaRuO$_{2}$, mapping the Na--Ru--O phase diagram in the vicinity of NaRuO$_{2}$ to understand the extent and type of off-stoichiometry supported by the compound. We demonstrate the formation of a single solid-solution Na$_{3+x}$Ru$_{3-x}$O$_6$~between the triangular lattice compound NaRuO$_{2}$ and the disordered honeycomb lattice compound Na$_{2}$RuO$_{3}$ \cite{mogare2004syntheses}, highlighting the tendency for NaRuO$_{2}$ to form Na-rich Na$_\text{Ru}$ defects. A combination of bulk magnetization and electron transport measurements reveals strong property changes as a function of Na loading, underscoring both the importance and the feasibility of stoichiometry control in NaRuO$_{2}$.
\begin{figure}
\includegraphics[width=1\linewidth]{CrystalStructure.png}
\caption{Delafossite ($R\bar{3}m$) crystal structure assumed by the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution between the ternary end members NaRuO$_{2}$ ($x$=0) and disordered Na$_{2}$RuO$_{3}$ ($x$=1). Na$_{3+x}$Ru$_{3-x}$O$_6$~forms a triangular sublattice comprised of edge-sharing Ru$^{3+}$ (4$d^{5}$) octahedra. Na-rich conditions overwhelmingly favor formation of Na$_\text{Ru}$ anti-site defects, diluting the Ru$^{3+}$ sublattice with nonmagnetic Na$^{+}$.}
\label{fig:Crystal}
\end{figure}
\section{Experimental Methods}
\subsection{Synthesis}
Polycrystalline members of the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution were synthesized using mechanochemical methods. Na$_{2}$O$_{2}$ beads (Sigma, 97\%), RuO$_{2}$ powder (Alfa, 99.95\%), and Na metal (Alfa, 99.8\%) were
combined in a pre-seasoned tungsten carbide ball mill vial and sealed under Ar. Due to the volatility of Na and the potential oxygen off-stoichiometry in RuO$_{2-x}$, adjustments to the nominal Na:Ru:O ratios are required. Specifically, the compositions for both Na$_2$RuO$_3$ and NaRuO$_2$ were empirically tuned to yield phase-pure material at Na$_{1.07}$(RuO$_2$)$_{1.13}$(Na$_2$O$_2$)$_{0.70}$ (Na$_{2.0}$Ru$_{0.9}$O$_{3.0}$) and Na$_{1.07}$(RuO$_2$)$_{1.37}$(Na$_2$O$_2$)$_{0.37}$ (Na$_{1.0}$Ru$_{0.8}$O$_{2.0}$), respectively. Using a combination of excess Na metal, Na$_2$O$_2$, and RuO$_2$, we iteratively narrowed down the single-phase region of the NaRuO$_2$--Na$_2$RuO$_3$ alloy, adjusting the compositional vectors until secondary phases were eliminated. All alloys were then generated through linear interpolation of the \textit{tuned} compositions of Na$_2$RuO$_3$ and NaRuO$_2$ (see the sketch below). Empirical tuning and interpolation are essential, as the compensating ratio of Na:Ru:O that yields phase-pure NaRuO$_2$ differs from the compensation required for Na$_2$RuO$_3$.
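For concreteness, the interpolation can be summarized by the following minimal Python sketch (our own illustration; the variable names are ours, and the ratios are the tuned values quoted above):
\begin{verbatim}
# Linear interpolation of the empirically tuned precursor ratios
# (Na metal : RuO2 : Na2O2) quoted in the text for the end members.
NARUO2  = (1.07, 1.37, 0.37)   # tuned nominal NaRuO2  (x = 0)
NA2RUO3 = (1.07, 1.13, 0.70)   # tuned nominal Na2RuO3 (x = 1)

def precursor_ratios(x):
    """Precursor mix for Na_{3+x}Ru_{3-x}O_6 with 0 <= x <= 1."""
    return tuple((1.0 - x) * a + x * b
                 for a, b in zip(NARUO2, NA2RUO3))

for x in (0.0, 1/3, 2/3, 1.0):
    na, ruo2, na2o2 = precursor_ratios(x)
    print(f"x = {x:.3f}: Na {na:.2f} : RuO2 {ruo2:.2f} "
          f": Na2O2 {na2o2:.2f}")
\end{verbatim}
By construction, the end points reproduce the tuned ratios quoted above.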
The resulting mixture was milled for 60~min in a Spex 8000D Mixer/Mill using four 7.9~mm tungsten carbide balls. The reaction generates a substantial amount of heat, and care must be taken with large sample volumes. The resulting precursor is confirmed amorphous by powder x-ray diffraction. The milled powder was then lightly ground in an agate mortar under Ar to disperse any agglomerates, sieved through a 100 micron sieve, and loaded into 2~mL alumina cylindrical crucibles (CoorsTek). In addition, a small portion of the milled powder was cold-pressed into 5~mm diameter pellets and buried within the powder bed. The crucibles were subsequently sealed under 1~atm of Ar in fused silica ampoules and placed within a 900$^{\circ}$C preheated furnace. Samples were annealed for 30~min and then immediately air-quenched before extracting powders under Ar. The final powders and sintered pellets are largely phase pure with trace amounts of Ru metal ($<$2~\%). Powders are black and moisture sensitive, with sensitivity increasing dramatically with additional Na content.
\subsection{Structural Characterization}
Phase purity was initially examined with powder x-ray diffraction (XRD) measurements at room temperature on a Panalytical Empyrean diffractometer (Cu K$_{\alpha_{1,2}}$) in Bragg-Brentano ($\theta$-$\theta$) geometry. Na$_{3+x}$Ru$_{3-x}$O$_6$~powders were placed on a Si zero-diffraction plate under argon and capped with a 12~mm$\times$12~mm piece of Kapton film to shield against atmospheric moisture. Pawley and Rietveld refinements were performed using \texttt{TOPAS Academic} v6 \cite{Coelho}. Structural models and visualization utilized the \textsc{VESTA} software package \cite{Momma2011}.
\subsection{Magnetization and Electron Transport Measurements}
Temperature dependent dc-magnetization data under zero-field-cooled (ZFC) and field-cooled (FC) conditions were collected on a 7~T Quantum Design Magnetic Property Measurement System (MPMS3) SQUID magnetometer. Samples were sealed in polypropylene holders under argon to minimize absorption of atmospheric moisture. Data was collected continuously in sweep mode with a ramp rate of 2~K/min in the presence of an external DC field of 1000~Oe. Isothermal dc-magnetization measurements at 2~K were collected continuously in sweep mode with a ramp rate of 100~Oe/sec.
Resistivity measurements were performed on sintered pellets of Na$_{3+x}$Ru$_{3-x}$O$_6$~that were sectioned into rectangular bars with approximate dimensions of 1$\times$2$\times$0.5~mm. Electrical contacts were made in a standard four-point geometry with contacts being made with a combination of gold wire and silver paint. Thermal contact and electrical isolation was ensured using layers of GE varnish and cigarette paper. The temperature dependence of the electrical resistivity was measured with the Electrical Transport Option (ETO) in a 9~T Quantum Design Dynacool Physical Property Measurement System (PPMS) using a drive current of 10 $\mu$A and drive frequency of 100~Hz. Data was collected continuously in sweep mode with a ramp rate of 2~K/min.
\section{Results \& Discussion}
\subsection{Synthesis \& Structure}
Motivated by the combination of strong spin-orbit coupling, the expanded nature of the Ru $d$-orbitals, and remnant Coulomb interaction effects, ruthenates have continued to garner substantial attention. Owing to the many stable oxidation states of Ru, the Na--Ru--O phase diagram is remarkably complex. Within a relatively narrow set of chemical potentials there are at least 7 reported Na--Ru--O ternary compounds: NaRuO$_2$ \cite{shikano2004naruo2}, NaRu$_2$O$_4$ \cite{shikano2004synthesis}, Na$_2$RuO$_3$ \cite{mogare2004syntheses}, Na$_3$RuO$_4$ \cite{regan2005isolated}, Na$_2$RuO$_4$ \cite{mogare2004syntheses}, Na$_{27}$Ru$_{14}$O$_{48}$ \cite{allred2011na27ru14o48}, and Na$_{3-x}$Ru$_4$O$_9$ \cite{regan2006structure}.
NaRuO$_2$ is of particular interest due to the triangular sublattice of Ru$^{3+}$ and its potential as a QSL candidate material \cite{ortizNaRuO2}. Remarkably, a survey of phases adjacent to NaRuO$_2$ reveals that the ``disordered'' ($R\bar{3}m$) polymorph of Na$_2$RuO$_3$ is structurally identical to NaRuO$_2$, except for the random dilution of the Ru$^{3+}$ triangular sublattice with nonmagnetic Na$_\text{Ru}$ defects. It is important to note that while Na$_2$RuO$_3$ can also crystallize in an ordered $C2/c$ monoclinic structure, it is not clear which phase is the thermodynamic ground state.
Such a relationship and the resulting potential for off-stoichiometry in NaRuO$_{2}$ is supported by a comparison of the available crystallographic data. The original synthesis procedure reported for NaRuO$_2$ involves a three-step decomposition process where: 1) \ce{Na2RuO4} was synthesized from a stoichiometric mixture of \ce{Na2O2} and \ce{RuO2}, 2) stoichiometric amounts of \ce{Na2RuO4} and Ru metal were mixed, dried, and sealed inside gold tubing, and finally 3) the mixture was heated at 1173~K for 12~h and then 1273~K for 120~h \cite{shikano2004synthesis}. This processing route produces material with lattice parameters [$a,c$] : [3.02~\AA, 16.49~\AA]. We have developed a new, rapid, mechanochemical route for the synthesis of NaRuO$_{2}$ \cite{ortizNaRuO2}, which is the method utilized in the present study. This processing route yields NaRuO$_2$ with lattice parameters [3.06~\AA, 16.18~\AA].
The difference observed in the $c$-axis lattice parameters reported in this work \cite{ortizNaRuO2} and prior work by Shikano et al. \cite{shikano2004naruo2} is substantial and noteworthy. One potential origin of this discrepancy is Na off-stoichiometry, which would naturally impact the interlayer spacing. Looking to the analogous titanate structure (Na$_{1-x}$TiO$_{2}$), detailed structural studies have identified a contraction along \textit{c} and an expansion in \textit{a} as Na vacancies were eliminated and the composition approached nominal NaTiO$_2$ \cite{clarke1998synthesis}. We suggest that the smaller \textit{c}-axis lattice parameter of NaRuO$_{2}$ synthesized via the mechanochemical route presented herein indicates a composition closer to the ideal 1:1:2 stoichiometry. This is further supported by our previous neutron powder diffraction refinement \cite{ortizNaRuO2}, which indicates that the \textit{tuned} NaRuO$_2$ composition is stoichiometric within the resolution of our measurement. The discrepancy between the prior report and our results suggests that off-stoichiometry and defect control are important factors in NaRuO$_2$.
Drawing inspiration from the thermoelectric community and the concept of ``phase boundary mapping" \cite{pbmortiz2019carrier, pbmohno2017achieving,pbmohno2018phase,pbmcrawford2018experimental}, we sought to map the phase space surrounding NaRuO$_2$. Wide swaths of the space immediately surrounding NaRuO$_2$ are dominated by 2-phase equilibria, which is unexpected if NaRuO$_2$ is a prototypical line compound. This is instead consistent with the formation of a large single-phase region or an extended alloy. Furthermore, NaRuO$_2$ shows an unusual proclivity to incorporate excess Na into the structure. Considering the structural similarity of disordered Na$_2$RuO$_3$, an extended solid solution between NaRuO$_2$ and Na$_2$RuO$_3$ could exist. In support of this conjecture, synthesizing Na$_{2}$RuO$_{3}$ using the same synthetic conditions as NaRuO$_{2}$ results in the formation of disordered $R\bar{3}m$ Na$_{2}$RuO$_{3}$. This disordered Na$_{2}$RuO$_{3}$ polymorph persists after extended annealing and appears to be the stable structure under our processing conditions.
\begin{figure}
\includegraphics[width=1\linewidth]{Scattering.png}
\caption{X-ray patterns of the Na$_{3+x}$Ru$_{3-x}$O$_6$ alloy series demonstrate successful alloying of NaRuO$_{2}$ ($x$=0) and Na$_2$RuO$_3$ ($x$=1) through continuous shifts in the peak positions and intensities. Black traces indicate the resulting Pawley refinements in the $R\overline{3}m$ structure. All samples up to $x$=1 are predominantly phase-pure Na$_{3+x}$Ru$_{3-x}$O$_6$ with trace Ru metal. Samples extending beyond nominal Na$_{2}$RuO$_{3}$ ($x$=1) exhibit increased Ru formation, suggesting a geometrical shift in the single-phase boundary.}
\label{fig:Scattering}
\end{figure}
To verify the solid-solution hypothesis, a series of samples spanning NaRuO$_{2}$--Na$_{2}$RuO$_{3}$ was synthesized. For the sake of convenience, we will refer to the series using the renormalized stoichiometry Na$_{3+x}$Ru$_{3-x}$O$_6$, where the end members $x$=0 and $x$=1 correspond to nominal NaRuO$_{2}$ and Na$_{2}$RuO$_{3}$, respectively. As illustrated in Fig.~\ref{fig:Scattering}, x-ray and neutron diffraction data confirm that the series of alloys constructed along the NaRuO$_2$--Na$_2$RuO$_3$ pseudobinary phase diagram is predominantly single phase, with only a small secondary fraction of Ru metal. In the spirit of phase-boundary mapping \cite{pbmcrawford2018experimental,pbmohno2017achieving,pbmohno2018phase,pbmortiz2019carrier}, this impurity was intentionally introduced to pin the samples to the Ru-rich edge of the single-phase region. Significant changes in the peak positions and the corresponding lattice parameters (Fig.~\ref{fig:Vegard}) are clearly observed in the x-ray scattering measurements.
A summary of the changes in the crystallographic parameters accompanying the transition from NaRuO$_{2}$ to Na$_{2}$RuO$_{3}$ is presented in Fig.~\ref{fig:Vegard}. The cell volume increases both monotonically and linearly from NaRuO$_{2}$ ($x$=0) to Na$_{2}$RuO$_{3}$ ($x$=1), consistent with Vegard's law. This serves as confirmation of a solid solution, and further highlights the propensity for the formation of Na$_\text{Ru}$ antisite defects in NaRuO$_{2}$. Unexpectedly, the off-stoichiometry of disordered Na$_{2}$RuO$_{3}$ is similarly complex, with the ability to absorb excess Na up to $x$=4/3. Past this point, samples become multiphase and exhibit a mixture of Na-rich Na$_{3+x}$Ru$_{3-x}$O$_6$ and Na$_{3}$RuO$_{4}$. It is interesting to note that the symmetry of Na$_{3}$RuO$_{4}$ (space group $C2/m$) is a subgroup of $R\bar{3}m$, and the compound is structurally similar to NaRuO$_{2}$ and Na$_{2}$RuO$_{3}$ ($e.g.$ 6-coordinate Na/Ru, approximate planes of metal cations).
\begin{figure}
\includegraphics[width=1\linewidth]{VegardLaw.png}
\caption{(a) Compositional dependence of the lattice parameters and cell volume for the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution extracted from Pawley refinements of room temperature pXRD data. (b) Tentative processing ternary phase diagram at 900$^{\circ}$C isotherm for Na--Ru--O space surrounding the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution. }
\label{fig:Vegard}
\end{figure}
The volumetric expansion of the lattice observed in Fig.~\ref{fig:Vegard} with additional Na loading can be rationalized through simple ionic radii arguments. In a 6-coordinate environment, the Shannon radius of Ru$^{3+}$ is 0.68~\AA\ and that of Ru$^{4+}$ is 0.62~\AA. While excess Na is expected to convert Ru$^{3+}$ to Ru$^{4+}$, the effect of substituting the much larger Na$^+$ (1.02~\AA) onto Ru$^{3+}$ sites dominates. Thus, a general expansion of the lattice is expected as Na$_\text{Ru}$ defects accumulate.
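This argument can be made semi-quantitative with a simple hard-sphere estimate. The short Python sketch below is our own back-of-the-envelope model (assuming full ionicity and charge balance with Na$^+$ and O$^{2-}$), not part of the original analysis:
\begin{verbatim}
# Back-of-the-envelope estimate of the mean B-site ionic radius in
# Na_{3+x}Ru_{3-x}O_6, assuming full ionicity and charge balance.
R_NA1, R_RU3, R_RU4 = 1.02, 0.68, 0.62   # Shannon radii (angstrom)

def mean_b_site_radius(x):
    """Average radius of the (3 - x) Ru + x Na cations on the B site.

    Charge balance with Na+ and O2- fixes the average Ru valence at
    q = (9 - x) / (3 - x), i.e. a Ru4+ fraction f = 2x / (3 - x)."""
    f_ru4 = 2.0 * x / (3.0 - x)
    r_ru = (1.0 - f_ru4) * R_RU3 + f_ru4 * R_RU4
    return (x * R_NA1 + (3.0 - x) * r_ru) / 3.0

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: <r_B> = {mean_b_site_radius(x):.3f} A")
\end{verbatim}
Within this crude model the mean B-site radius grows monotonically from 0.68~\AA\ at $x$=0 to roughly 0.75~\AA\ at $x$=1, consistent with the observed volumetric expansion despite the partial oxidation of Ru to the smaller Ru$^{4+}$.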
The Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution poses a synthetic challenge, particularly when the stoichiometry of polycrystalline NaRuO$_2$ needs to be tightly controlled. As illustrated in Fig.~\ref{fig:Vegard}(b), the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution creates several large 2-phase (blue) regions where Na$_{3+x}$Ru$_{3-x}$O$_6$~ is at equilibrium with NaRu$_{2}$O$_{4}$ under O-rich conditions, Ru metal under O-poor conditions, and Na$_{3}$RuO$_{4}$ under Na-rich conditions. Three unique three-phase (gray) equilibria were identified between Na$_{3+x}$Ru$_{3-x}$O$_6$--NaRu$_2$O$_4$--Ru, Na$_{3+x}$Ru$_{3-x}$O$_6$--Na$_2$RuO$_4$--Na$_3$RuO$_4$, and Na$_{3+x}$Ru$_{3-x}$O$_6$--Na$_3$RuO$_4$--Ru. In our experience, the NaRuO$_2$--Na$_2$RuO$_3$ alloy does not readily support off-stoichiometry in the Ru-rich direction beyond NaRuO$_2$. Employing the principles of phase boundary mapping, we would aim to synthesize NaRuO$_2$ under conditions that place it in equilibrium with NaRu$_2$O$_4$ and Ru metal. A convenient metric would be to minimize the cell volume of NaRuO$_2$.
Attempts to make samples in the O-rich region above nominal Na$_{2}$RuO$_{3}$ indicate the presence of \textit{at least one} unknown Na--Ru--O ternary, complicating the mapping process. Although we would na\"{i}vely expect samples to contain Na$_{27}$Ru$_{14}$O$_{48}$ \cite{allred2011na27ru14o48}, this phase could not be reproduced using the processing techniques described here. Considering the potential complexity in this region of the diagram, we refrain from postulating on the phase equilibria here. The situation is further complicated by the existence of the Na$_{3-x}$Ru$_4$O$_9$ solid solution, which creates large swaths of 2-phase regions. Future work will be required to fully understand the O-rich side of the Na--Ru--O phase diagram.
Regardless of the additional complexities present in the O-rich regime, the isothermal phase diagram presented here establishes a reliable method for Ru-rich processing of NaRuO$_{2}$, minimizing the substitution of nonmagnetic Na$_\text{Ru}$ defects on the Ru triangular lattice. Compositions located in the three-phase NaRuO$_2$--NaRu$_2$O$_4$--Ru Alkemade triangle will reliably produce NaRuO$_{2}$ at the compositional invariant point where the ternary Alkemade triangle adjoins the vertex of the Na$_{3+x}$Ru$_{3-x}$O$_6$~single-phase region. Tuning the composition to produce NaRuO$_{2}$ at this vertex with minimal contributions from Ru-metal and NaRu$_2$O$_4$ enables stoichiometry control in a system with a complex phase diagram containing volatile elements.
\begin{figure}
\includegraphics[width=\linewidth]{Resistivity.png}
\caption{Temperature dependence of electronic resistivity of Na$_{3+x}$Ru$_{3-x}$O$_6$~alloys up to $x=2/3$ is consistent with a lightly doped insulator, with (inset) resistivity increasing exponentially with Na incorporation.}
\label{fig:Transport}
\end{figure}
\begin{figure*}
\includegraphics[width=1\textwidth]{MagnetizationData.png}
\caption{(a) Temperature dependence of the ZFC and FC dc magnetic susceptibility for Na$_{3+x}$Ru$_{3-x}$O$_6$ alloys in an external applied field of 1000~Oe. Black triangles denote bifurcation temperatures of the ZFC/FC curves. (b) Compositional dependence of the ZFC/FC bifurcation temperature. Peaking for intermediate compositions, ZFC/FC splitting falls below 2~K for the nominal end members $x$=0 and 1. (c) Field dependence of the dc isothermal magnetization at 2~K with (d) magnified view about $H=0$, highlighting non-zero coercivity for intermediate Na loading. Note that the coercivity vanishes to within the level of background for $x$=0 and 1. (e) Temperature dependence of the in-phase component $\chi'$ of the ac susceptibility in the absence of an external dc field with (f) corresponding Arrhenius plot.}
\label{fig:Mag}
\end{figure*}
\subsection{Magnetization and Electrical Transport}
Our prior investigation of both the magnetic and electronic properties of stoichiometric NaRuO$_{2}$ identified the system as a magnetic insulator with a quantum disordered ground state \cite{ortizNaRuO2}. Given that Na$_{2}$RuO$_{3}$ has to date been regarded as a distinct compound, the discovery of the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution should provide an experimental route to exploring the physical properties and possibly unique crossovers ($e.g.$ metal-to-insulator) between the end members. However, literature reports on the magnetic and electronic properties of Na$_{2}$RuO$_{3}$ are varied. Much of the variation stems from the ambiguity over whether the ordered or disordered polymorph is present. Even within studies focused predominantly on disordered Na$_{2}$RuO$_{3}$ or mixtures of the ordered/disordered phase, there are conflicting reports. Some works suggest insulating behavior with long-range antiferromagnetic order \cite{Wang14:90,gapontsev2017spectral}, while others report a paramagnetic, moderately correlated electron metal with no observable magnetic excitations \cite{Veiga20:4}.
This lack of consensus on Na$_{2}$RuO$_{3}$ is likely driven by the existence of the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution. Since Na$_{2}$RuO$_{3}$ is not a line compound, the stoichiometry of a given synthesis is not well-defined. In the case of disordered Na$_{2}$RuO$_{3}$, the majority of samples were produced as a product of decomposition reactions, yielding lattice parameters \textit{a} : [3.11--3.17~\AA] and \textit{c} : [15.94--16.04~\AA] \cite{mogare2004syntheses,tamaru2013layered,Veiga20:4}. One of the ``hallmark'' features of disordered Na$_2$RuO$_3$ in prior work is the merger of the (101) and (006) peak positions. In good agreement with prior literature, we find that the peak merger occurs with \textit{a}=3.11~\AA~and \textit{c}=15.94~\AA. However, our nominal stoichiometry at that point is only $x$=2/3 instead of $x$=1. This is conceptually consistent with our findings that the Na--Ru--O systems require additional Na and O to compensate for volatility issues. Furthermore, Na incorporation continues well past the point of peak merger -- and well beyond nominal Na$_{2}$RuO$_{3}$ (Fig.~\ref{fig:Vegard}).
The Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution presents an opportunity to study the defect sensitivity of NaRuO$_2$ and the consequences of diluting the Ru sublattice. We first address the electrical resistivity to determine whether all members of the Na$_{3+x}$Ru$_{3-x}$O$_6$~solid solution remain insulating, or whether the Na$_\text{Ru}$ defects cause any increase in the free carrier concentration. As illustrated in Fig.~\ref{fig:Transport}, the room-temperature resistivity for much of the series falls within the lightly doped semiconducting regime (10--100\,m$\Omega$-cm), and rises exponentially with decreasing temperature. Both observations suggest that members of the Na$_{3+x}$Ru$_{3-x}$O$_6$ solid solution up to $x$=2/3 are insulators or small-gap semiconductors.
The isothermal resistivity at 300~K (Fig.~\ref{fig:Transport}(inset)) exhibits an exponential \textit{increase} with Na content, contradicting the most facile defect-formation reaction ($e.g.$, $\text{Na}_\text{Ru} + 2\text{h}$) and instead supporting the localization of holes via a shift of Ru into a higher oxidation state. The influence of poorly screened, more highly charged Ru$^{4+}$, coupled with increased alloy/disorder scattering, likely contributes to the strong increase in resistivity. More complex compensation reactions involving, $e.g.$, oxygen vacancies could also be present, and more research ($e.g.$, DFT defect studies) will be important for fully understanding the defect energetics in the alloys. We note here that members with higher Na content ($x\geq$1) become progressively deliquescent and condense atmospheric water on their surfaces, precluding reliable measurement of the resistivity.
The dc susceptibility data for select Na$_{3+x}$Ru$_{3-x}$O$_6$ compositions are plotted in Fig.~\ref{fig:Mag}(a). A manual vertical offset has been introduced to facilitate a qualitative visual comparison, and an unscaled set of magnetization curves is included in the supplementary information for comparison \cite{ESI}. Notably, an onset of irreversibility in the ZFC/FC curves appears in compositions with noninteger $x$. This irreversibility is absent in the stoichiometric $x$=0 end member above 2~K. As summarized in Fig.~\ref{fig:Mag}(b), ZFC/FC irreversibility sets in at finite $x$ and moves to higher temperature as further disorder is introduced. Near the midpoint between NaRuO$_2$ and Na$_2$RuO$_3$, the irreversibility temperature reaches a local maximum and then begins to decrease again as $x=1$ is approached. In the nominal $x$=1 composition with uniform Ru$^{4+}$ sites, the system naively assumes a $J_\text{eff}=0$ nonmagnetic singlet state and the irreversibility vanishes. With continued Na loading beyond $x=1$, moments are reintroduced and a sharp reemergence of irreversibility occurs. Note that, since the $x=0,1/6,1$ samples exhibit no discernible splitting down to 2~K, this lower limit on the onset of irreversibility is denoted by open circles in Fig.~\ref{fig:Mag}(b).
As illustrated in Figs.~\ref{fig:Mag}(c,d), the main qualitative trends presented in Fig.~\ref{fig:Mag}(b) are also reflected in the compositional dependence of the isothermal dc magnetization. Compositions with higher irreversibility temperatures exhibit larger coercivity, particularly those samples where $x>1$ (Fig.~\ref{fig:Mag}(d)). Irreversibility in the FC/ZFC data reflects the freezing of local Ru moments, and Fig.~\ref{fig:Mag}(e) illustrates this freezing further on the Na-rich side of the phase diagram with $x=7/6$. The ac-susceptibility data reveal a clear frequency dependence associated with local moment freezing and an activation energy $E_{a}/R \sim 150$~K for the higher-temperature feature. Further work exploring this freezing process, and whether long-range correlations form, will require neutron scattering measurements on single crystals.
It is worth stressing here that even in the nominal $x=0$ composition, a low-temperature cusp appears in the ac-susceptibility below 2~K \cite{ortizNaRuO2}. Near 1.7~K, signs of partial moment freezing were observed, indicating a weak spin freezing transition and a crossover in the low-frequency spin dynamics. We attribute this crossover/partial freezing to a small percentage of remnant Na defects ($\sim$1\%). This is consistent with the amplification of the freezing onset upon the intentional introduction of additional Na defects along the solid-solution line between NaRuO$_2$ and Na$_2$RuO$_3$.
\section{Conclusions}
Born from the need to control and understand defect relationships in the Heisenberg-Kitaev candidate material NaRuO$_{2}$, we studied the chemical potential phase space surrounding NaRuO$_{2}$. We discovered the existence of a full solid-solution Na$_{3+x}$Ru$_{3-x}$O$_6$~between NaRuO$_{2}$ ($x$=0) and disordered Na$_2$RuO$_3$ ($x$=1). While resistivity measurements demonstrate that all members of Na$_{3+x}$Ru$_{3-x}$O$_6$~are insulators, increased Na-incorporation into the alloy results in a glass-like freezing of local Ru moments between stoichiometric endpoints. At small $x$, this is conceptually consistent with moment dilution/induced freezing on a highly frustrated Ru$^{3+}$ sublattice. Our study provides key information needed to control chemical disorder and off-stoichiometry in the Heisenberg-Kitaev candidate material NaRuO$_2$.
\section{Acknowledgments}
We acknowledge fruitful conversations with A.~A.~Aczel, G.~Pokharel, and A.~R.~Ericks. This work was supported by the US Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Grant No. DE-SC0017752. B.R.O. and P.M.S. both acknowledge financial support from the California NanoSystems Institute through the Elings Fellowship program. The research made use of the shared facilities of the NSF Materials Research Science and Engineering Center at UC Santa Barbara (DMR-1720256). The UC Santa Barbara MRSEC is a member of the Materials Research Facilities Network (www.mrfn.org). This work also used facilities supported via the UC Santa Barbara NSF Quantum Foundry funded via the Q-AMASE-i program under award DMR-1906325. Use of the Advanced Photon Source at Argonne National Laboratory was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. A portion of this research used resources at the High Flux Isotope Reactor (HFIR), which is a DOE Office of Science User Facility operated by Oak Ridge National Laboratory.
\section{Introduction}
The recent technological breakthroughs in the manipulation of many-body systems coupled to an external bath
are setting the stage for careful tests of a wealth
of physical phenomena in the quantum realm~\cite{kasprzak2006, syassen2008, baumann2010}.
Specifically, several promising experimental platforms aimed at investigating the scenario emerging
from driven-dissipative quantum many-body systems have been recently proposed and realized in the lab.
The most remarkable ones are atomic and molecular optical systems through the use of Rydberg atoms, trapped
ions or atomic ensembles coupled to a condensate reservoir~\cite{Muller_2012},
arrays of coupled QED cavities~\cite{Houck_2012}, or coupled optomechanical resonators~\cite{Ludwig_2013}.
These implementations are scalable enough to enable the construction of tunable
and interacting artificial lattice structures with hundreds of sites.
The coupling between different unit cells can give rise to a plethora of cooperative phenomena determined
by the interplay of on-site interactions, nonlocal (typically nearest-neighbor) processes,
and dissipation~\cite{Tomadin_rev, Hartmann_rev, Sieberer_2016, LeHur_rev, Angelakis_rev}.
Recently, a large body of theoretical works has been devoted to the investigation of the collective behavior
emerging in dynamical response~\cite{Tomadin2010}, many-body spectroscopy~\cite{Carusotto_2009, Grujic_2012, Rivas_2014},
transport~\cite{biella2015, angelakis2015, mertz2016, savona2017_01, savona2017_02},
as well as stationary properties.
In the latter context, a careful engineering of the coupling between the system and the environment
can stabilize interesting many-body phases in the steady state~\cite{Diehl2008, Verstraete2009}.
The phase-diagram of such lattice systems has been predicted to be incredibly
rich~\cite{hartmann2010, umucalilar2012, jin2013, Yuge_2014, hoening2014, chan2015, wilson2016, ff2017} and can display
spontaneous ordering associated with the breaking of a discrete~\cite{Lee_2011, Lee_2013, savona2017}
or continuous symmetry~\cite{jose2017, biella2017} possessed by the model.
Recently, the critical behavior emerging at the onset of phase transitions started to be investigated
by means of different analytical and numerical approaches~\cite{torre2012, sieberer2013, marino2016, Rota_2017}.
Theoretically, while at equilibrium we have reached a fairly good understanding
of several aspects of the many-body problem under the framework of textbook statistical mechanics,
this is no longer the case for quantum systems coupled to some external bath.
In such case, we are indeed facing an inherently out-of-equilibrium situation, where the Hamiltonian
of the system $\hat H$ is no longer capable to describe it in its whole complexity,
and the environmental coupling needs to be accounted for and suitably modeled.
Due to the intrinsic difficulty of the problem, a number of approximations are usually considered,
which assume a weak system-bath coupling, neglect memory effects in the bath, and discard fast oscillating terms.
In most of the experimental situations with photonic lattices, these assumptions
are typically met~\cite{Houck_2012, Fitzpatrick_2017}.
As a result, in many cases of relevance, the coupling to the environment leads to a Markovian dynamics of the system's
density matrix $\rho$, according to a master equation in the Lindblad form~\cite{Petruccione_book}:
\begin{equation}
\partial_t \rho = \mathbb{L} [\rho] = - {\rm {i}} [ \hat H,\rho] + \mathbb{D}[\rho],
\label{eq:Master}
\end{equation}
where $\mathbb{L}$ denotes the so called Liouvillian superoperator (we will work in units of $\hbar = 1$).
While the commutator on the r.h.s.~of Eq.~\eqref{eq:Master} accounts for the unitary part of the dynamics, the dissipative processes are governed by
\begin{equation}
\mathbb{D}[\rho] = \sum_j \Big[ \hat L_j \rho \hat L_j^\dagger - \tfrac12 \big\{ \hat L_j^\dagger \hat L_j , \rho \big\} \Big],
\end{equation}
where $\hat L_j$ are suitable local jump operators that describe the incoherent coupling to the environment.
The master equation~\eqref{eq:Master} covers a pivotal role in the treatment of open quantum systems,
since it represents the most general completely-positive trace preserving dynamical semigroup~\cite{Rivas_book}.
In the following we will restrict our attention to it, and specifically address
the steady-state (long-time limit) solution $\rho_{\rm SS} = \lim_{t \to \infty} \exp(\mathbb{L}t) \rho(0)$
(and thus $\partial_t\rho_{\rm SS} =0$) in situations where the steady state is guaranteed to be unique~\cite{albert2014}.
Solving the long-time dynamics ruled by Eq.~\eqref{eq:Master} for a many-body system is a formidable,
yet important, task.
Indeed, contrary to equilibrium situations, the effect of short-range correlations can be dramatic in a driven-dissipative context,
and thus they deserve an accurate treatment through the (in principle) full many-body problem.
Exact solutions are restricted to very limited classes of systems,
which are typically represented by quadratic forms in the field operators and specific jump terms~\cite{Prosen_2008}.
A number of viable routes have been thus proposed, in the recent few years.
Under certain hypotheses, analytic approaches such as perturbation theory~\cite{Li_2016} or renormalization-group techniques
based on the Keldysh formalism~\cite{Sieberer_2016, Maghrebi2015} are possible.
However, their limited regime of validity calls for more general numerical methods which do not suffer these limitations.
From a computational point of view, the main difficulty resides in the exponential growth of the many-body
Hilbert space with the number $N$ of lattice sites. Moreover, the non-Hermitian Liouvillian superoperator $\mathbb{L}$
acts on the space of density matrices (whose dimension is the square of the corresponding Hilbert space dimension),
and its spectral properties are generally much more difficult to be addressed
than the low-lying eigenstates of a Hamiltonian system.
The difficulty remains even for the fixed point of the dynamics $\rho_{\rm SS}$,
that is the density matrix associated with the zero-eigenvalue of $\mathbb{L}$.
While in one dimension tensor-network approaches based on a straightforward generalization of
matrix product states to operators can be effective~\cite{Verstraete_2004, Zwolak_2004, Prosen_2009}
and alternative strategies have been proposed in order to improve
their performances~\cite{Cui_2015, Mascarenhas_2015, Werner_2016},
going to higher dimensions is much harder.
Numerical strategies specifically suited for this purpose have been recently put forward,
including cluster mean-field~\cite{Jin_2016},
correlated variational Ans\"atze~\cite{Degenfeld_2014, Weimer_2015},
truncated correlation hierarchy schemes~\cite{Casteels_2016},
corner-space renormalization methods~\cite{Finazzi_2015}, and even two-dimensional tensor-network structures~\cite{Orus_2016}.
The nonequilibrium extension of the dynamical mean-field theory (which works directly in the thermodynamic limit)
has been also proved to be very effective in a wide class of lattice systems~\cite{tsuji2009,amaricci2012,aoki2014}.
Each of such methods presents advantages and limitations, and typically performs better on specific regimes.
In this paper we will adapt a class of techniques that, in the past, has proven extremely useful
and versatile in the study of thermal and quantum phase transitions~\cite{Oitmaa_book}.
The key idea consists in computing extensive properties of lattice systems in the thermodynamic limit,
out of certain numerical series expansions. The method, dubbed linked-cluster expansion (LCE),
sums over the different contributions associated with clusters of physical sites.
In combination with perturbation theories, LCEs have already proved their worth
in the context of equilibrium statistical mechanics,
both in classical and quantum systems (see Ref.~\onlinecite{Oitmaa_book} and references therein).
Their predictive power lies beyond the range of validity of the perturbation expansion:
using established tools for the analysis of truncated series~\cite{Yang_1952}, it has been possible
to study equilibrium quantum phase transitions, and extract critical exponents.
Here we focus on numerical linked-cluster expansions (NLCEs), where the $k$-th order contribution in the LCE
is obtained by means of exact diagonalization techniques on finite-size clusters with $k$ sites~\cite{Rigol_2006}.
The NLCE has been successfully employed in order to evaluate static properties at zero and finite
temperature~\cite{Rigol_2007}, as well as to study the long-time dynamics and thermalization
in out-of-equilibrium closed systems~\cite{Rigol_2014, Mallayya_2017}.
Moreover it has also revealed its flexibility in combination with other numerical methods that can be used to
address finite-size clusters, such as density-matrix renormalization group algorithms~\cite{Bruognolo_2017}.
Nonetheless, to the best of our knowledge, it has never been applied in the context of open quantum systems.
Here we see NLCE at work in an interacting two-dimensional spin-1/2 model with incoherent spin relaxation~\cite{Lee_2013},
which is believed to exhibit a rich phase diagram, and represents a testing ground
for strongly correlated open quantum systems~\cite{Jin_2016, Rota_2017, Orus_2016}.
We will test our method both far from critical points, and in the proximity of a phase transition:
in the former case the NLCE allows us to accurately compute the value of the magnetization, while in the latter
we are able to estimate the critical point as well as the critical exponent $\gamma$
for the divergent susceptibility.
The paper is organized as follows.
In Sec.~\ref{sec:method} we introduce our NLCE method and discuss how it can be applied
to the study of the steady-state of a Markovian Lindblad master equation.
The NLCE is then benchmarked in a dissipative two-dimensional spin-1/2 XYZ model (Sec.~\ref{sec:Model}).
By properly tuning the coupling constants of the Hamiltonian, we are able to
study steady-state properties far away from any phase boundary (Sec.~\ref{sec:XXX}),
and a more interesting scenario exhibiting a quantum phase transition from a paramagnetic
to a ferromagnetic phase (Sec.~\ref{sec:XYZ}).
In the latter case we discuss a simple strategy (based on the Pad\'e analysis of the expansion)
in order to locate the critical point and to extrapolate the critical exponent $\gamma$.
Finally, Sec.~\ref{sec:concl} is devoted to the conclusions.
\section{Linked-cluster method}
\label{sec:method}
We start with a presentation of the NLCE formalism~\cite{Rigol_2006}, unveiling its natural applicability
to the study of driven-dissipative quantum systems whose dynamics is governed by a Lindblad master equation.
We follow an approach that is routinely employed in series expansions
for lattice models, such as high-temperature classical expansions~\cite{Oitmaa_book}.
Since we are interested in the steady-state properties of the system, our target
objects will be the expectation values of generic extensive observables $\hat {\cal O}$ onto
the asymptotic long-time limit solution $\rho_{\rm SS}$ of the master equation:
${\cal O} = {\rm Tr} \big[\hat {\cal O} \rho_{\rm SS} \big]$.
In practice, for each cluster appearing in the expansion, the steady-state density matrix $\rho_{\rm SS}$ is reached
by time-evolving a random initial state according to the master equation~\eqref{eq:Master} by means of a fourth-order
Runge-Kutta method.
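For concreteness, a minimal Python/NumPy sketch of this steady-state search is reported below; it is our own illustration, and the column-stacking vectorization, the fixed step size, and the convergence criterion are implementation choices rather than prescriptions:
\begin{verbatim}
import numpy as np

def liouvillian(H, jump_ops):
    """Matrix of the Lindbladian acting on column-stacked density
    matrices, using vec(A rho B) = (B^T kron A) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for Lj in jump_ops:
        LdL = Lj.conj().T @ Lj
        L += (np.kron(Lj.conj(), Lj)
              - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I)))
    return L

def steady_state(H, jump_ops, dt=0.02, tol=1e-10, max_steps=10**7):
    """Evolve a random initial density matrix with fourth-order
    Runge-Kutta until the residual ||L vec(rho)|| drops below tol."""
    d = H.shape[0]
    A = np.random.randn(d, d) + 1j * np.random.randn(d, d)
    rho = A @ A.conj().T
    rho /= np.trace(rho)               # random full-rank initial state
    v = rho.reshape(-1, order="F")     # column stacking
    L = liouvillian(H, jump_ops)
    for _ in range(max_steps):
        k1 = L @ v
        if np.linalg.norm(k1) < tol:   # rho is (numerically) stationary
            break
        k2 = L @ (v + 0.5 * dt * k1)
        k3 = L @ (v + 0.5 * dt * k2)
        k4 = L @ (v + dt * k3)
        v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v.reshape(d, d, order="F")
\end{verbatim}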
We stress that this approach is not restricted to steady states: for homogeneous systems it can be
straightforwardly extended to the case of generic non-Markovian master equations and/or
non-equilibrium states $\rho(t)$.
Homogeneity, however, is essential: boundary-driven systems~\cite{biella2015, mertz2016, savona2017_01, savona2017_02, buca2017}
and disordered lattices~\cite{biondi2015} do not fit within this framework.
Let us first write the Liouvillian operator ${\mathbb L}$ as a sum of local terms ${\mathbb L}_k$,
each of them supposedly acting on few neighbouring sites.
For the sake of simplicity and without loss of generality,
each term ${\mathbb L}_k$ only couples two neighboring sites:
\begin{equation}
{\mathbb L} = \sum_{k} \alpha_k {\mathbb L}_k = \sum_{\langle i,j \rangle} \alpha_{ij} {\mathbb L}_{ij} ,
\end{equation}
where $\alpha_{ij}$ denotes the local coupling strength, and the index $k=(i,j)$ is
a short-hand notation for the couple of $i$-$j$ sites.
The terms of ${\mathbb L}$ acting exclusively on the $i$th site can be arbitrarily absorbed into the terms of the sum such that $i\in k$.
The observable ${\cal O}$ can be always arranged in a multivariable expansion in powers of $\alpha_k$:
\begin{equation}
{\cal O} \big( \{\alpha_k\} \big) = \sum_{\{n_k\}}{\cal O}_{\{n_k\}}\prod_k \alpha_k^{n_k}
\label{M1}
\end{equation}
where $n_k$ runs over all non-negative integers for each $k$,
such that any possible polynomial in the $\alpha_k$ couplings is included.
The expansion~\eqref{M1} can be then reorganized in clusters:
\begin{equation}
{\cal O} = \sum_{c} W_{[{\cal O}]}(c),
\label{M2}
\end{equation}
where each $c$ represents a non-empty set of $k$-spatial indexes,
which identify the links belonging to the given cluster.
Specifically, the so called cluster weight $W_{[{\cal O}]}(c)$ contains all terms of the expansion~\eqref{M1},
which have at least one power of $\alpha_k, \; \forall k\in c$, and no powers of $\alpha_k$ if $k \notin c$.
Conversely, each term in Eq.~\eqref{M1} is included in exactly one such cluster weight.
Using the inclusion-exclusion principle, one can take $W_{[{\cal O}]}(c)$ out of the sum~\eqref{M2}
obtaining the recurrence relation:
\begin{equation}
W_{[{\cal O}]}(c) = {\cal O}(c) - \sum_{s \subset c}W_{[{\cal O}]}(s),
\label{WMc}
\end{equation}
where ${\cal O}(c) = {\rm Tr} \big[ \hat {\cal O} \rho_{\rm SS}(c)\big]$
is the steady-state expectation value of the observable calculated for the finite cluster $c$,
the sum runs over all the subclusters $s$ contained in $c$, and $\rho_{\rm SS}(c)$ is the steady state
of the Liouvillian ${\mathbb L}(c)$ over the cluster $c$.
An important property of Eq.~\eqref{WMc} is that, if $c$ is formed out of two
disconnected clusters $c_1$ and $c_2$, its weight $W_{[{\cal O}]}(c)$ is zero.
This follows from the fact that ${\cal O}$ is an extensive property (${\cal O}(c) = {\cal O}(c_1) + {\cal O}(c_2)$)
and $c = c_1 + c_2$.
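The recursion~\eqref{WMc} is straightforward to implement. Below is a minimal Python sketch (our own illustration, not code from the references): clusters are encoded as frozen sets of bonds, \texttt{observable} stands for any routine returning ${\cal O}(c)$ on a finite cluster ($e.g.$ the steady-state solver sketched above), and, since disconnected clusters carry zero weight, only connected subclusters are enumerated. The last function anticipates the per-site resummation~\eqref{M3} over topologically distinct clusters discussed below.
\begin{verbatim}
from functools import lru_cache
from itertools import combinations

def is_connected(bonds):
    """True if the bonds form a single linked component."""
    bonds = list(bonds)
    reached, grew = set(bonds[0]), True
    while grew:
        grew = False
        for i, j in bonds:
            if (i in reached) != (j in reached):
                reached |= {i, j}
                grew = True
    return all(i in reached and j in reached for i, j in bonds)

def connected_subclusters(cluster):
    """Connected, non-empty proper subsets of the bonds of `cluster`."""
    for size in range(1, len(cluster)):
        for sub in combinations(sorted(cluster), size):
            if is_connected(sub):
                yield frozenset(sub)

@lru_cache(maxsize=None)
def weight(cluster, observable):
    """Inclusion-exclusion weight W(c) = O(c) - sum_{s in c} W(s)."""
    return observable(cluster) - sum(
        weight(s, observable) for s in connected_subclusters(cluster))

def nlce_sum(distinct_clusters, multiplicities, observable):
    """Per-site estimate O/L = sum_n l(c_n) W(c_n)."""
    return sum(l * weight(c, observable)
               for c, l in zip(distinct_clusters, multiplicities))
\end{verbatim}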
The symmetries of the Liouvillian ${\mathbb L}$ may drastically simplify the summation~\eqref{M2},
since one typically does not need to compute the contributions of every individual cluster.
This can be immediately seen, e.g., for situations where the interaction term $\alpha_k$
between different couples of sites is homogeneous throughout the lattice.
In such cases, it is possible to identify the topologically distinct (linked) clusters, so that a representative $c_n$
for each class can be chosen and counted according to its multiplicity $\ell(c_n)$
per lattice site (the lattice constant of the graph $c_n$).
Here the subscript ${}_n$ denotes the number of $k$-spatial indexes that are grouped in the cluster,
that is, its size.
The property ${\cal O}$ per lattice site can be thus written directly in the thermodynamic limit $L \to \infty$ as:
\begin{equation}
\frac{\cal O}{L} = \sum_{n=1}^{+\infty} \bigg[ \sum_{\{ c_n \}} \ell(c_n) \, W_{[{\cal O}]}(c_n) \bigg] \,.
\label{M3}
\end{equation}
The outer sum runs over all possible cluster sizes, while the inner one accounts for all topologically
distinct clusters $\{ c_n \}$ of a given size $n$.
Let us emphasize that, if the series expansion~\eqref{M3} is truncated up to order $n=R$,
only clusters $c$ at most of size $R$ have to be considered.
Indeed each of them should include at least one power of $\alpha_k, \; \forall k\in c$.
Therefore a cluster of size $R+1$ or larger does not contribute to the expansion, up to order $\alpha^R$.
As a matter of fact, dealing with open many-body systems significantly reduces our ability to compute large orders
in the expansion, with respect to the {\it closed}-system scenario.
The size of the Liouvillian superoperator governing the dynamics scales as $\dim(\mathbb{L})=d^{2n}$,
where $d$ is the dimension of the local Hilbert space and
$n$ is the number of sites of a given cluster.
In isolated systems, one would need to evaluate the ground state of the cluster Hamiltonian, of size $\dim(\hat H)=d^n$.
Therefore, for the case of spin-$1/2$ systems ($d=2$), we are able to compute the steady state for clusters up to $n=8$,
such that $\dim(\mathbb{L})=2^{2 \times 8} = 65536$.
The complexity of the problem is thus comparable to what has been done for spin systems at equilibrium,
where the NLCE has been computed up to $n=15$ (see, for example, Refs.~\onlinecite{Rigol_2007,Tang_2013}).
In graph theory, there are established algorithms
to compute all topologically distinct clusters, for a given size and lattice geometry.
This could drastically increase the efficiency of the NLCE algorithm, since for highly symmetric systems
the number of topologically distinct clusters is exponentially smaller than the total number of connected clusters.
Explaining how to optimize the cluster generation lies beyond the scope of the present work.
The basic cluster generation scheme we used is explained in full detail in Ref.~\onlinecite{Tang_2013}.
Notice that once all the topologically distinct $n$-site clusters and their multiplicities
have been generated for a given lattice geometry, one can employ NLCE for any observable
and Liouvillian within the same spatial symmetry class of the considered lattice.
A remarkable advantage of NLCE over other numerical methods is that it enables direct access to the thermodynamic limit,
up to order $R$ in the cluster size, by only counting the cluster contributions of sizes equal or smaller than $R$
(i.e. using a limited amount of resources).
We should stress that, contrary to standard perturbative expansions, the NLCE is not organized
around a small parameter whose size controls its accuracy.
Properly speaking, the actual control parameter is the amount of correlations present in the system:
the convergence of the series~\eqref{M3} with $n$ is ensured beyond an order $R^\star$
which is larger than the typical length scale of correlations~\cite{Rigol_2006, Tang_2013}.
In the next sections we give two illustrative examples of how NLCE performs for 2D dissipative
quantum lattice models of interacting spin-1/2 particles.
\section{Model}
\label{sec:Model}
Our model of interest is a spin-$1/2$ lattice system in two dimensions, whose coherent internal dynamics is governed
by the anisotropic XYZ-Heisenberg Hamiltonian:
\begin{equation}
\hat H = \sum_{\langle i,j\rangle} \left( J_x \hat \sigma_i^x \hat \sigma_j^x
+ J_y \hat \sigma_i^y \hat \sigma_j^y + J_z \hat \sigma_i^z \hat \sigma_j^z \right) \,,
\label{eq:Hamiltonian}
\end{equation}
where $\hat \sigma_j^\beta$ ($\beta = x,y,z$) denote the Pauli matrices for the $j$th spin of the system
and $\langle i,j \rangle$ restricts the summation over all couples of nearest neighboring spins.
Each spin is subject to an incoherent dissipative process that tends to
flip it down along the $z$ direction, in an independent way with respect to all the other spins.
In the Markovian approximation, such mechanism is faithfully described by the Lindblad jump operator
$\hat L_j = \sqrt{\Gamma} \ \hat \sigma^-_j$ acting on each spin:
\begin{equation}
\mathbb{D}[\rho] = \Gamma \sum_{j} \Big[ \hat \sigma_j^- \rho \, \hat \sigma_j^+
- \tfrac{1}{2} \big\{ \hat \sigma_j^+ \hat \sigma_j^- , \rho \big\} \Big] \,,
\label{eq:Lindblad}
\end{equation}
where $\hat \sigma_j^\pm = \frac{1}{2} \left( \hat \sigma_j^x \pm {\rm {i}} \, \hat \sigma_j^y \right)$
stands for the corresponding raising and lowering operator along the $z$ axis,
while $\Gamma$ is the rate of the dissipative processes.
In the following we will always work in units of $\Gamma$.
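As a concrete illustration, the following minimal sketch (our own, assuming the QuTiP library; the cluster geometry is specified as a list of bonds) builds $\hat H$ and the jump operators for a small cluster and extracts $\rho_{\rm SS}$, here directly from the null space of the Liouvillian rather than by time evolution:
\begin{verbatim}
import qutip as qt

def xyz_steady_state(n, bonds, Jx, Jy, Jz, gamma=1.0):
    """Steady state of the dissipative XYZ model on an n-site
    cluster whose nearest-neighbor pairs are listed in `bonds`."""
    def op(single, site):
        ops = [qt.qeye(2)] * n
        ops[site] = single
        return qt.tensor(ops)

    sx = [op(qt.sigmax(), j) for j in range(n)]
    sy = [op(qt.sigmay(), j) for j in range(n)]
    sz = [op(qt.sigmaz(), j) for j in range(n)]

    H = sum(Jx * sx[i] * sx[j] + Jy * sy[i] * sy[j]
            + Jz * sz[i] * sz[j] for i, j in bonds)
    # one incoherent spin-lowering channel per site
    c_ops = [gamma ** 0.5 * op(qt.sigmam(), j) for j in range(n)]
    return qt.steadystate(H, c_ops)

# Example: a 2x2 plaquette, in units of the dissipation rate.
rho_ss = xyz_steady_state(4, [(0, 1), (1, 2), (2, 3), (3, 0)],
                          Jx=0.9, Jy=-0.9, Jz=0.9)
\end{verbatim}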
The outlined model is particularly relevant, as it is considered a prototypical dissipative quantum many-body
system: its phase diagram is very rich and has been the subject of a number of studies
at the mean-field level~\cite{Lee_2013} and even beyond such regime,
by means of the cluster mean-field~\cite{Jin_2016}, the corner-space renormalization group~\cite{Rota_2017},
and the dissipative PEPS~\cite{Orus_2016}.
Remarkably, the Lindblad master equation with the Hamiltonian in Eq.~\eqref{eq:Hamiltonian}
and the dissipator in Eq.~\eqref{eq:Lindblad} presents a $\mathbb{Z}_2$ symmetry
which is associated to a $\pi$ rotation along the $z$ axis: $\hat \sigma^x \to -\hat \sigma^x$,
$\hat \sigma^y \to - \hat \sigma^y$.
For certain values of the couplings $J_\beta$, this symmetry can be spontaneously broken, leading
to a dissipative phase transition from a paramagnetic (PM) to a ferromagnetic (FM) phase,
the order parameter being the in-plane $xy$ magnetization.
We stress that an XY anisotropy ($J_x \neq J_y$) is necessary to counteract the incoherent spin flips;
otherwise the steady-state solution of the master equation would be
perfectly polarized, with all the spins pointing down along the $z$ direction.
The existing literature allows us to benchmark our approach, both far from criticality (Sec.~\ref{sec:XXX})
where correlations grow in a controllable way, and in proximity of a $\mathbb{Z}_2$-symmetry breaking
phase transition (Sec.~\ref{sec:XYZ}), where correlations diverge in the thermodynamic limit.
In the latter case we show how it is possible to exploit the NLCE method, in combination with a Pad\'e-approximant
analysis, in order to calculate the location of the critical point as well as the critical exponent
$\gamma$ of the transition, which is associated with a power-law divergence of the magnetic susceptibility to an external field.
Contrary to all the other known methods, either being mean-field or dealing with finite-length systems,
the NLCE directly addresses the thermodynamic limit and thus, to the best of our knowledge,
at present it represents the only unbiased numerical method to calculate such exponent.
\subsection{Isotropic case}
\label{sec:XXX}
Let us start our analysis by considering a cut in the parameter space which does not cross any critical line.
Specifically we set
\begin{equation}
\alpha = J_x = -J_y = J_z.
\end{equation}
For $\alpha=0$ the coherent dynamics is switched off; the coupling in the $x$-$y$ plane is then isotropic,
and the dissipative processes cannot be counteracted, regardless of the value of the local relaxation rates~\cite{Lee_2013}.
As a consequence, regardless of the initial conditions, the steady-state is the pure state
having all spins pointing down along the $z$-axis:
\begin{equation}
\label{trivial_ss}
\left. \rho_{\rm SS} \right|_{\alpha=0}= \bigotimes_i | \!\! \downarrow \rangle \langle \downarrow \!\! | .
\end{equation}
Thus we expect the NLCE to give the exact thermodynamic limit already at first order in the cluster size.
As the parameter $\alpha$ is increased, correlations progressively build up on top of the fully factorizable
density matrix~\eqref{trivial_ss}, therefore higher orders in the expansion of Eq.~\eqref{M3} are needed.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{XXX_bare}
\caption{(color online). Steady-state average magnetization along the $z$ direction for the isotropic
Heisenberg model, evaluated by means of the NLCE (bare sum) at different orders $R$
in the cluster size, as a function of $1/\alpha$.
The arrows indicate the values of $\alpha^\star$ at which each curve at the $R$-th order ($R<8$)
starts deviating significantly from the highest accuracy curve ($R=8$; thick black line)
that we have.}
\label{fig:XXX_bare}
\end{figure}
This is exactly what we observe in Fig.~\ref{fig:XXX_bare}, where we show the steady-state value
of the average magnetization along the $z$ direction, $\mathcal{O}/L = \langle \hat \sigma^z_j\rangle$,
evaluated by means of the NLCE in Eq.~\eqref{M3} up to a given order $R$, as a function of $\alpha$.
Note that, as $R$ is increased, the convergence of the NLCE to the most accurate
data (the highest order that we have) progressively improves.
This shows that, in the region where different curves overlap, correlations among the different sites
are well captured by the clusters that we are considering in the expansion, up to a given order.
When $\alpha$ is increased the range of correlations grows as well, and one needs to
perform the expansion to larger orders.
For $\alpha \gtrsim 0.075$ orders higher than $R=8$ are needed to obtain a good convergence
in the bare data.
\begin{figure}[!b]
\centering
\includegraphics[width=0.9\columnwidth]{XXX_resum}
\caption{(color online). Steady-state average $z$-magnetization as a function of $\alpha$,
after implementing two resummation techniques on the bare data at order $R=8$ (black curve,
same as in Fig.~\ref{fig:XXX_bare}): Wynn's algorithm (colored curves in the upper panel)
and Euler transformation (colored curves in the lower panel).
The symbols denote the results of QT simulations for a finite system using periodic boundary
conditions, with a $4\times4$ square plaquette (red circles) and a $4\times3$ plaquette (black circles)
constructed from the previous one after removing the four sites at the corners.}
\label{fig:XXX_resum}
\end{figure}
It is however possible to improve the convergence of the expansion without increasing
the size of the considered clusters, by simply exploiting two resummation algorithms
that have already been shown to be very useful in the context of NLCEs
for thermodynamic properties~\cite{Rigol_2006, Tang_2013}.
Specifically, we employ Wynn's algorithm~\cite{wynn_book} and the Euler transformation~\cite{euler_book}.
A detailed explanation on how such resummation schemes can be exploited
in the context of NLCE can be found in Ref.~\onlinecite{Tang_2013}.
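For the reader's convenience, we also provide a minimal Python sketch of the two schemes as we use them: Wynn's algorithm acts on the sequence of bare partial sums, while the Euler transformation resums the tail of the series of terms. The implementation below is only a schematic illustration (assuming \texttt{numpy}), with conventions analogous to those of Ref.~\onlinecite{Tang_2013}.
\begin{verbatim}
import numpy as np

def wynn_epsilon(S):
    # Wynn's epsilon algorithm on the partial sums S; the even entries
    # eps_2, eps_4, ... (the returned rows) are the accelerated estimates.
    n = len(S)
    eps = np.zeros((n + 1, n))
    eps[1] = S
    for k in range(2, n + 1):
        for j in range(n - k + 1):
            d = eps[k - 1, j + 1] - eps[k - 1, j]
            eps[k, j] = eps[k - 2, j + 1] + (1.0 / d if d != 0 else np.inf)
    return eps[3::2]        # unused trailing entries of each row stay zero

def euler_transform(a, k0=2):
    # Keep the first k0 terms bare; resum the tail, assumed alternating
    # t_k = (-1)^k u_k, as sum_m (-1)^m (Delta^m u)_0 / 2^(m+1).
    total = sum(a[:k0])
    u = np.array([(-1) ** k * t for k, t in enumerate(a[k0:])], dtype=float)
    for m in range(len(u)):
        total += (-1) ** m * u[0] / 2.0 ** (m + 1)
        u = np.diff(u)
    return total
\end{verbatim}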
The results for $\langle \hat \sigma^z \rangle$ as a function of $\alpha$ are shown in
Fig.~\ref{fig:XXX_resum} for various orders in the two resummation schemes (see legends for details).
It is immediate to see that the convergence of the expansion is drastically improved,
by about one order of magnitude in $\alpha$.
A comparison of NLCE data with the outcome of simulations obtained by means of quantum
trajectories (QT)~\cite{Dalibard_1992} for finite-size plaquettes shows that the resummed data
give qualitatively analogous results up to $\alpha \approx 1$, despite a slight discrepancy between them.
Such a difference is due to the fact that, even if correlations are very small for small $\alpha$,
finite-size effects are non-negligible: while NLCE data are obtained directly in the thermodynamic
limit, QT simulations are inevitably affected by such effects.
As $\alpha$ is decreased, the discrepancy between the two approaches shrinks,
both leading to $\langle \hat \sigma^z \rangle \to -1$ in the limit $\alpha\to 0$ of Eq.~\eqref{trivial_ss}.
\subsection{Anisotropic case and the paramagnetic to ferromagnetic phase transition}
\label{sec:XYZ}
We now discuss the more interesting scenario of an anisotropic Heisenberg model
($J_x \neq J_y \neq J_z$), where the system can cross a critical line and exhibit
a dissipative phase transition~\cite{Lee_2013}.
To this purpose, we set
\begin{equation}
J_x = 0.9, \; J_y = 0.9 + \alpha, \; J_z = 1, \,
\label{eq:parXYZ}
\end{equation}
with $\alpha \in [0,0.25]$.
For $\alpha=0$ (i.e., $J_x = J_y$), we come back to the trivial situation where the Hamiltonian
conserves the magnetization along the $z$ direction, and the steady state is
the pure state in Eq.~\eqref{trivial_ss}, with all the spins pointing down in the $z$ direction.
Away from this singular point, for a certain $\alpha_c > 0$ the system undergoes
a second-order phase transition associated with the spontaneous breaking of the $\mathbb{Z}_2$
symmetry possessed by the master equation~\eqref{eq:Master}, from a paramagnetic (PM) phase
for $\alpha < \alpha_c$ to a ferromagnetic (FM) phase for $\alpha > \alpha_c$.
In the FM phase, a finite magnetization in the $x$-$y$ plane develops:
$\langle \hat \sigma^x \rangle, \langle \hat \sigma^y \rangle \neq 0$,
which also defines the order parameter of the transition.
The phenomenology of this phase transition has recently received a lot of attention,
and has been investigated at a Gutzwiller mean-field level~\cite{Lee_2013} and by means
of more sophisticated methods, including the cluster mean-field approach~\cite{Jin_2016},
the corner-space renormalization technique~\cite{Rota_2017}, and
the projected entangled pair operators~\cite{Orus_2016}.
The phase transition point for the same choice of parameters of Eq.~\eqref{eq:parXYZ}
has been estimated to be $\alpha_c = 0.1$~\cite{Lee_2013}, $0.14 \pm 0.01$~\cite{Jin_2016}
and $0.17 \pm 0.02$~\cite{Rota_2017}.
Here we follow the approach of Rota {\it et al.}~\cite{Rota_2017} and discuss the magnetic
linear response to an applied magnetic field in the $x$-$y$ plane,
which modifies the Hamiltonian in Eq.~\eqref{eq:Hamiltonian} according to:
\begin{equation}
\hat H \to \hat H + \sum_j h \big( \hat \sigma^x_j \cos \theta + \hat \sigma^y_j \sin \theta \big),
\label{eq:field}
\end{equation}
where $\theta$ denotes the field direction and $\vec h(\theta) = (h_x, \, h_y)^T$ with
$h_x = h \cos \theta$ and $h_y = h \sin \theta$.
Such response is well captured by the susceptibility tensor $\boldsymbol{\chi}$, with matrix
elements $\chi_{\alpha\beta} = \lim_{h_\beta \to 0} \langle \hat \sigma^\alpha \rangle / h_\beta$.
In particular we concentrate on the angularly averaged magnetic susceptibility
\begin{equation}
\label{chiave}
\chi_{\rm ave} = \lim_{h \to 0} \, \frac{1}{2 \pi} \int_0^{2 \pi} d \theta \, \frac{|\vec M(\theta)|}{h} \,,
\end{equation}
where $\vec M(\theta) = \boldsymbol{\chi} \cdot \vec h(\theta)$ is the induced magnetization along
an arbitrary direction of the field.
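In practice, once the susceptibility tensor is known, the angular average of Eq.~\eqref{chiave} reduces to a one-dimensional integral over the field direction, as in the following schematic Python snippet (the tensor entries below are placeholders standing for the NLCE output):
\begin{verbatim}
import numpy as np

def chi_ave(chi, n_theta=2000):
    # (1 / 2 pi) * integral over theta of |chi . h_hat(theta)|
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    h_hat = np.stack([np.cos(thetas), np.sin(thetas)])  # unit directions
    M = chi @ h_hat                      # induced magnetization per unit h
    return np.mean(np.linalg.norm(M, axis=0))

chi = np.array([[1.3, 0.2], [0.2, 0.8]])  # placeholder tensor chi_ab
print(chi_ave(chi))
\end{verbatim}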
We start by computing the NLCE for the magnetic susceptibility $\chi_{\rm ave}$
in the parameter range $0 \le \alpha \le 0.25$, improving the convergence of the series
up to a given order by exploiting the Euler algorithm.
Along this specific cut in the parameter space, the latter has proven to be the most
effective scheme (contrary to what we observed far from criticality -- see Fig.~\ref{fig:XXX_resum}).
The relevant numerical data are shown in Fig.~\ref{fig:chiave}, and are put in direct comparison
with those obtained with an alternative method (the corner-space renormalization group)
in Ref.~\onlinecite{Rota_2017}.
We observe a fairly good agreement between the two approaches in the small-$\alpha$ parameter range ($0\le\alpha\lesssim0.02$),
and point out that in both cases a sudden increase of $\chi_{\rm ave}$ for $\alpha \gtrsim 0.1$
supports the presence of a phase transition in that region.
It is important to remark that the result of the expansion at different orders in the unconverged region $\alpha\gtrsim0.02$
has no physical meaning.
However, as we will show in the next section, by analyzing how the expansion behaves when approaching criticality,
it is possible to provide an estimate of the critical point $\alpha_c$, as well as of the critical exponent $\gamma$.
We also note that, contrary to the isotropic case, here we do not observe an {\it exact}
data collapse of the NLCE for $\chi_{\rm ave}$, even for $\alpha=0$.
The reason resides in the fact that the presence of an external field~\eqref{eq:field} makes
the structure of the steady state nontrivial as soon as $h \neq 0$, thus allowing
correlations to set in.
\begin{figure}[!t]
\centering
\includegraphics[width=0.92\columnwidth]{XYZ_chiave_bis}
\caption{(color online). Angularly averaged magnetic susceptibility to an external field
in the $x$-$y$ plane, as a function of $\alpha = J_y-0.9$. The continuous curves denote
Euler resummed data, to the best achievable expansion, of the bare NLCE results up to the order $R=8$.
The dashed lines are the results of the bare expansions.
Symbols are the results from the corner-space renormalization method,
taken from Ref.~\onlinecite{Rota_2017}.
The dotted area highlights the region $0\le\alpha\le0.02$, in which the NLCE converges.}
\label{fig:chiave}
\end{figure}
\subsubsection{Critical behavior}
\label{sec:critical}
We now show how to exploit the above NLCE data (in combination with a Pad\'e analysis~\cite{Oitmaa_book})
in order to locate the critical point $\alpha_c$ for the PM-FM transition,
and extract the critical exponent $\gamma$ of the magnetic susceptibility~\cite{Sachdev_book}
$\chi_{\rm ave} \sim |\alpha-\alpha_c|^{-\gamma}$.
The possibility of extrapolating the critical exponents of a dissipative quantum phase transition
is very intriguing since, to the best of our knowledge, the only numerical work in this context
present in the literature is Ref.~\onlinecite{Rota_2017}.
However, since finite-size systems are considered there, it was only possible to estimate the
ratio $\gamma/\nu$, where $\nu$ denotes the critical exponent associated with
the divergent behavior of the correlation length.
The present work offers a complementary point of view since here we are able, for the first time,
to provide an independent estimate of the critical exponent $\gamma$ by directly accessing the thermodynamic limit.
To achieve this goal we study the logarithmic derivative of the averaged magnetic susceptibility,
which converts an algebraic singularity into a simple pole~\cite{Oitmaa_book}:
\begin{equation}
\label{defdlog}
{\rm Dlog} \ \chi_{\rm ave} (\alpha) \equiv \frac{\chi_{\rm ave}' (\alpha)}{\chi_{\rm ave} (\alpha)}.
\end{equation}
If $\chi_{\rm ave} \sim |\alpha-\alpha_c|^{-\gamma}$ for $|\alpha-\alpha_c|\ll1$, the logarithmic derivative behaves as
\begin{equation}
\label{dlog_chiave}
{\rm Dlog} \ \chi_{\rm ave} (\alpha) \sim \frac{\gamma}{|\alpha-\alpha_c|}.
\end{equation}
Studying the divergent behavior of Eq.~\eqref{dlog_chiave} simplifies the problem,
since the function ${\rm Dlog} \ \chi_{\rm ave} (\alpha)$ has a simple pole at the critical point $\alpha=\alpha_c$,
whose residue encodes the critical exponent $\gamma$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{XYZ_pade}
\caption{(color online). Logarithmic derivative of $\chi_{\rm ave}$ as a function of $\alpha$.
The black line is obtained from Euler resummed data to the order $R=8$ (blue line of Fig.~\ref{fig:chiave}).
The red and blue lines are the results of the Pad\'e analysis with the $[3|3]$ and $[3|4]$
approximants, respectively.}
\label{fig:XYZ_pade}
\end{figure}
In Fig.~\ref{fig:XYZ_pade} we show the behavior of the logarithmic derivative
calculated from the Euler resummed data to the order $R=8$ (blue line in Fig.~\ref{fig:chiave}), which represents
our best approximation for $\chi_{\rm ave}$ at small $\alpha$.
The behavior at large $\alpha$ of the function ${\rm Dlog} \ \chi_{\rm ave} (\alpha)$ is extrapolated by exploiting Pad\'e approximants.
A Pad\'e approximant represents a finite power series as a ratio of two polynomials,
\begin{equation}
\label{PQpoly}
{\rm Dlog} \ \chi_{\rm ave} (\alpha) = \sum_{n=0}^R a_n\alpha^n \simeq \frac{P_L(\alpha)}{Q_M(\alpha)},
\end{equation}
where $P_L(\alpha)$ and $Q_M(\alpha)$ are polynomials of degree $L$ and $M$ (with $L+M\le R$), respectively.
This is denoted as the $[L|M]$ approximant, and it can represent functions with simple poles {\it exactly}.
Next, we fit ${\rm Dlog} \ \chi_{\rm ave}(\alpha)$ (black line in Fig.~\ref{fig:XYZ_pade}) with an $8$th-degree polynomial
in the window from $\alpha_{\rm in}=0.05$ to $\alpha_{\rm fin}$, with $0.06\le\alpha_{\rm fin}\le0.1$, in order to obtain
the coefficients $\{a_n\}_{n=0,\dots,R}$ (with $R=8$).
Once the coefficients $\{a_n\}$ are known, it is straightforward to evaluate the coefficients of the polynomials
$P_L$ and $Q_M$ through Eq.~\eqref{PQpoly}.
Further details about this procedure can be found in App.~\ref{app:pade}.
As is clear from Eq.~\eqref{PQpoly}, the position of the critical point $\alpha_c$ can be deduced by studying the zeroes of $Q_M(\alpha)$.
Typically, only one of the $M$ zeros is real and located in the region of interest.
Finally, the critical exponent is evaluated by computing minus the residue of ${\rm Dlog} \ \chi_{\rm ave}$ at $\alpha=\alpha_c$:
\begin{equation}
\gamma = - \lim_{\alpha\to\alpha_c} \, (\alpha-\alpha_c) \, \frac{P_L(\alpha)}{Q_M(\alpha)}.
\end{equation}
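The whole procedure amounts to a few lines of numerics. The following Python sketch (assuming \texttt{numpy} and \texttt{scipy}; the series coefficients are placeholders) builds the $[L|M]$ approximant from fitted coefficients $\{a_n\}$, locates the real zeros of $Q_M$ and evaluates $\gamma$ as minus the residue:
\begin{verbatim}
import numpy as np
from scipy.interpolate import pade

# placeholder values standing for the fitted coefficients a_0, ..., a_8
a = np.array([0.5, 3.0, 7.0, 15.0, 28.0, 50.0, 85.0, 140.0, 230.0])
L, M = 3, 4
P, Q = pade(a, M, L)    # poly1d pair: P/Q = sum_n a_n x^n + O(x^(L+M+1))

roots = Q.roots
for alpha_c in roots[np.isreal(roots)].real:
    if alpha_c > 0:     # keep only candidates in the physical region
        gamma = -P(alpha_c) / Q.deriv()(alpha_c)  # minus residue of P/Q
        print(alpha_c, gamma)
\end{verbatim}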
Of course, the values of $\alpha_c$ and $\gamma$ will depend on the specific choice of the approximant $[L|M]$ and on the region
over which the fit is performed.
The dependence of the results on $\alpha_{\rm fin}$ is shown in App.~\ref{app:pade}.
We found that the Pad\'e analysis gives stable results for $0.06\lesssim\alpha_{\rm fin}\lesssim0.08$ and $0.06\lesssim\alpha_{\rm fin}\lesssim0.095$
for the approximants $[3|3]$ and $[3|4]$ respectively.
The results of the Pad\'e analysis hint at a divergence at $\alpha_c=0.179\pm0.001$ with $\gamma=1.85\pm0.05$ for $[3|3]$,
and at $\alpha_c=0.1665\pm0.0005$ with $\gamma=1.5\pm0.05$ for $[3|4]$.
The other approximants $[L|M]$ such that $L+M\le R=8$ do not give physical results in this range of parameters.
The error bar is underestimated, since it accounts only for the error introduced in the fitting procedure and neglects the propagation of the numerical error made on the steady-state evaluation.
Furthermore, the Pad\'e analysis has been performed over a range of $\alpha$ for which the resummed NLCE is not exactly converged (see Fig.~\ref{fig:chiave}).
To overcome this issue, one should be able to compute higher orders in the expansion and to perform a more accurate analysis of the criticality.
Nevertheless, the value of the critical point we found is in agreement with the results reported
in Refs.~\onlinecite{Jin_2016} and \onlinecite{Rota_2017}.
\section{Conclusions}
\label{sec:concl}
In this work we have proposed a numerical algorithm based on the generalization of the linked-cluster expansion
to open quantum systems on a lattice, which allows one to directly access the thermodynamic limit
and to evaluate extensive properties of the system.
Specifically, we extended the formalism to the Liouvillian case and showed how the basic properties
of the expansion are translated to the open-system realm.
Given its generality, this method can be applied to open fermionic, bosonic and spin systems
in an arbitrary lattice geometry.
We tested our approach with a study of the steady-state properties of the paradigmatic dissipative spin-1/2 XYZ model
on a two-dimensional square lattice.
Far away from the critical boundaries of the model, we accurately computed the spin magnetization.
Upon increasing the order of the expansion, we were able to progressively access regions of the phase diagram
that are characterized by a larger amount of correlations among distant sites.
The convergence properties of the expansion can be dramatically improved by employing more sophisticated resummation schemes.
We then used the numerical linked-cluster expansion across a phase transition in order to study its critical properties.
By means of a Pad\'e analysis of the series, we located the critical point and provided the first estimate of the
critical exponent $\gamma$, which determines the divergent behavior of the (average) magnetic susceptibility
close to the phase transition.
At present, this method and the one in Ref.~\onlinecite{Orus_2016}
are the only (non-mean-field) numerical approaches that allow one to compute
the steady-state properties of an open lattice model in two spatial dimensions in the thermodynamic limit.
Here the intrinsic limitation is that, in order to compute high-order terms in the expansion
(and thus to access strongly correlated regions of the phase space),
the evaluation of the steady state on a large number of connected sites is required.
Furthermore, in the case of bosonic systems, an additional complication arises from the large local Hilbert space dimension.
We believe that a very interesting perspective left for the future is the combination of the linked-cluster expansion
with the corner-space renormalization method~\cite{Rota_2017}, and possibly also with Monte Carlo approaches~\cite{savona_private}.
Additionally, a careful identification of the internal symmetries of the model may help in decreasing the effective dimension of the Liouvillian space.
\acknowledgments
We thank M. C\`e, L. Mazza, and R. Rota for fruitful discussions.
We acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support.
AB and CC acknowledge support from ERC (via Consolidator Grant CORPHO No.~616233).
RF acknowledges support by EU-QUIC, CRF, Singapore Ministry of Education, CPR-QSYNC, SNS-Fondi interni 2014, and the Oxford Martin School.
JJ acknowledges support from the National Natural Science Foundation of China No.~11605022,
Natural Science Foundation of Liaoning Province No.~2015020110, and the Xinghai Scholar Cultivation Plan
and the Fundamental Research Funds for the Central Universities.
OV thanks Fundaci\'on Rafael del Pino, Fundaci\'on Ram\'on Areces and RCC Harvard.
\section{Introduction}
Let $p:X\rightarrow Y$ be a proper surjective holomorphic mapping between complex manifolds $X$ and $Y$ whose differential has maximal rank everywhere such that every fiber $X_y:=p^{-1}(y)$ is a compact K\"ahler manifold. This is called a \emph{smooth family of compact K\"ahler manifolds} or a \emph{compact K\"ahler fibration}. If every fiber $X_y$ is a Calabi-Yau manifold, i.e., a compact K\"ahler manifold whose canonical line bundle $K_{X_y}$ is trivial, then the family is called a \emph{smooth family of Calabi-Yau manifolds} or a \emph{Calabi-Yau fibration}.
If $(X,\omega)$ is a K\"ahler manifold, then a celebrated theorem due to Calabi and Yau implies that on each fiber $X_y$ there exists a unique Ricci-flat metric $\omega_{KE,y}$ in the cohomology class $[\omega\vert_{X_y}]$.
This family of Ricci-flat metrics induces a fiberwise Ricci-flat metric on the total space $X$.
\medskip
The main theorem of this paper is the following:
\begin{theorem} \label{T:main_theorem}
Let $p:X\rightarrow Y$ be a smooth family of Calabi-Yau manifolds.
Suppose that $X$ is a K\"ahler manifold equipped with a K\"ahler form $\omega$.
Let $\omega_{KE,y}$ be the unique Ricci-flat form in the cohomology class $[\omega\vert_{X_y}]$. Then there exists a unique smooth function $\varphi\in C^\infty(X)$ which satisfies the following properties:
\begin{itemize}
\item[(\romannumeral1)] $\int_{X_y}\varphi(\omega_{KE,y})^n=0$ for every $y\in Y$,
\item[(\romannumeral2)] $\omega+dd^c\varphi\vert_{X_y}$ is a Ricci-flat K\"ahler form on $X_y$ for every $y\in Y$ and
\item[(\romannumeral3)] $p_*\paren{\omega+dd^c\varphi}^{n+1}$ is a positive $(1,1)$-form on $Y$.
\end{itemize}
\end{theorem}
Here $d^c$ means the real operator defined by
\begin{equation*}
d^c=\frac{\im}{2}\paren{\partial-\bar\partial}.
\end{equation*}
Then we have $dd^c=\im\ddbar$.
We call the $(1,1)$-form $\rho:=\omega+dd^c\varphi$ which satisfies the property $(\romannumeral2)$ a \emph{fiberwise Ricci-flat metric} or a \emph{fiberwise Ricci-flat K\"ahler form} on a Calabi-Yau fibration $p:X\rightarrow Y$.
Note that a real $(1,1)$-form on $X$ satisfying $(\romannumeral2)$ is not uniquely determined.
With the normalization condition $(\romannumeral1)$, the fiberwise Ricci-flat metric is uniquely determined. From now on, the fiberwise Ricci-flat metric on a Calabi-Yau fibration means the real $(1,1)$-form which satisfies $(\romannumeral1)$ and $(\romannumeral2)$. It is remarkable to note the following:
\begin{itemize}
\item Theorem \ref{T:main_theorem} basically deals with a smooth family of polarized Calabi-Yau manifolds in the sense of deformation theory.
\item Theorem \ref{T:main_theorem} does not assume the compactness of the base $Y$.
\end{itemize}
\medskip
For a family of canonically polarized compact K\"ahler manifolds, we have a fiberwise K\"ahler-Einstein metric in a similar way.
The positivity of the fiberwise K\"ahler-Einstein metric on a family of compact K\"ahler manifolds was first studied by Schumacher.
In his paper \cite{Schumacher}, he proved that the fiberwise K\"ahler-Einstein metric on a family of canonically polarized compact K\"ahler manifolds is semi-positive.
Moreover, he also proved that it is strictly positive if the family is effectively parametrized.
This is equivalent to the semi-positivity or positivity of the relative canonical line bundle of the family, respectively.
P\v aun showed that if the relative adjoint line bundle is positive on each fiber, then it is semi-positive on the total space, by generalizing the method of Schumacher (\cite{Paun2}).
Guenancia also proved the semi-positivity of the fiberwise conic singular K\"ahler-Einstein metric (\cite{Guenancia}).
In the case of a family of complete K\"ahler manifolds, Choi proved that the fiberwise K\"ahler-Einstein metric on a family of bounded pseudoconvex domains is semi-positive or positive if the total space is pseudoconvex or strongly pseudoconvex, respectively (\cite{Choi1, Choi2}).
\medskip
The proof of Schumacher's theorem starts with the following identity from \cite{Semmes}: For a real $(1,1)$-form $\tau$ on $X$,
\begin{equation}\label{E:Semmes0}
\tau^{n+1}
=
c(\tau)\tau^n\im ds\wedge d\bar s
\end{equation}
where $\tau^n$ is the $n$-fold exterior power divided by $n!$. Here $c(\tau)$ is called the \emph{geodesic curvature} of $\tau$ (for the details, see Section \ref{SS:horizontal_lift}). Now suppose that $\tau$ is positive-definite on each fiber $X_y$. Then \eqref{E:Semmes0} says that $\tau$ is semi-positive or positive if and only if $c(\tau)\ge0$ or $c(\tau)>0$, respectively. Schumacher proved that the geodesic curvature of the fiberwise K\"ahler-Einstein metric on a family of canonically polarized compact K\"ahler manifolds satisfies a certain second order linear elliptic partial differential equation. This PDE gives a lower bound of the geodesic curvature by the maximum principle or a lower bound estimate on the heat kernel.
However, in the case of a Calabi-Yau fibration, the PDE satisfied by the geodesic curvature of the fiberwise Ricci-flat metric is of a different type from the previous one
(see Section \ref{S:fiberwiseRFf}).
In particular, it does not give a lower bound of the geodesic curvature. This is why Schumacher's method does not yield positivity or semi-positivity of the fiberwise Ricci-flat metric.
Nevertheless, by the approximation procedure of complex Monge-Amp\`ere equations, it is possible to obtain a lower bound of the direct image of the fiberwise Ricci-flat metric (see Section \ref{S:proof}).
This is the main contribution of this paper.
This difference between the PDEs satisfied by the fiberwise K\"ahler-Einstein metric on a family of canonically polarized manifolds and by the fiberwise Ricci-flat metric on a family of Calabi-Yau manifolds arises from the difference between the complex Monge-Amp\`ere equations which give the K\"ahler-Einstein metrics. More precisely, the complex Monge-Amp\`ere equation of type:
\begin{equation}\label{E:1}
\paren{\omega+dd^c\varphi}^n
=
e^{\lambda\varphi+f}\omega^n,
\end{equation}
for some constant $\lambda>0$ and some suitable smooth function $f$, gives the K\"ahler-Einstein metric on a canonically polarized compact K\"ahler manifold. On the other hand, the complex Monge-Amp\`ere equation of type:
\begin{equation}\label{E:0}
\paren{\omega+dd^c\varphi}^n
=
e^{\tilde{f}}\omega^n
\end{equation}
for some suitable smooth function $\tilde f$, gives the K\"ahler-Einstein (in this case Ricci-flat) metric on a Calabi-Yau manifold. It is remarkable to note that if $f$ and $\tilde f$ coincide, then \eqref{E:1} converges to \eqref{E:0} as $\lambda\rightarrow0$.
Then, by the a priori estimates for complex Monge-Amp\`ere equations, it is well known that the solutions $\varphi_\lambda$ of \eqref{E:1} converge to the solution of \eqref{E:0} (see Section \ref{S:approximation}).
This is the key observation behind the approximation procedure mentioned above.
\medskip
Although we cannot obtain the positivity of the fiberwise Ricci-flat metric $\rho$ on $X$, the following theorem gives a lower bound of $\rho$ in terms of the Green kernel of each fiber and the Weil-Petersson metric on the base $Y$.
\begin{theorem}\label{T:main_theorem2}
Let $G_y(z,w)$ be the Green kernel of $-\Delta_{\omega_{KE,y}}$ on $X_y$ which is normalized by
\begin{equation*}
\int_{X_y}
G_y(z,w)dV_{\omega_{KE,y}}(z)=0.
\end{equation*}
Let $-K(y)$ with $K(y)\ge0$ be the lower bound of the Green kernel, i.e.,
\begin{equation*}
\inf_{(z,w)\in X_y\times X_y}G_y(z,w)=-K(y).
\end{equation*}
Then $\rho+K(y)\omega^{WP}$ is positive on $X$, where $\omega^{WP}$ is the Weil-Petersson metric on $Y$. (About the Green kernel, see \cite{Aubin2}.)
\end{theorem}
It is remarkable to note that the Green kernel of a compact K\"ahler manifold is bounded from below by a constant which depends only on the geometry of the manifold, more precisely, on the Ricci curvature, the diameter and the volume (Theorem 3.2 in \cite{Bando:Mabuchi}). Since the Ricci curvature of every fiber vanishes and the volume of every fiber is the same (see Subsection \ref{SS:AFCMAE}), the fiberwise constant $K(y)$ is uniformly bounded from above if the diameter of every fiber is bounded.
In the meantime, the second-order elliptic PDE satisfied by the geodesic curvature $c(\rho)$ of the fiberwise Ricci-flat metric of a Calabi-Yau fibration yields several pieces of information about Calabi-Yau fibrations.
Among them, there is a result about the local triviality of Calabi-Yau fibrations.
\begin{theorem}\label{T:local_triviality}
Let $p:X\rightarrow Y$ be a smooth family of Calabi-Yau manifolds. Let $E:=p_*(K_{X/Y})$ be the direct image bundle of the relative canonical line bundle $K_{X/Y}$. We denote by $\Theta(E)$ the curvature of the natural $L^2$ metric of $E$. If $\Theta(E)$ vanishes along a complex curve, then the family is trivial along the complex curve.
\end{theorem}
A similar result was obtained by Tosatti in \cite{Tosatti} (cf. see also \cite{Fujiki:Schumacher}). Jolany informed the author that he also proved Theorem \ref{T:local_triviality} and some estimates of this paper (\cite{Jolany}).
\medskip
\noindent{\bf Acknowledgement.}
The author happily acknowledges his thanks to Mihai P\v aun, who suggested this problem and shared his ideas, and to Dano Kim for very helpful discussions. He is also indebted to Hoang Lu Chinh for teaching him the approximation process of complex Monge-Amp\`ere equations, Bo Berndtsson for enlightening discussions about many topics, including the direct image bundles and Griffiths' theorem, Henri Guenancia for helpful comments about Proposition \ref{P:approximation2}, Long Li for many helpful comments about Green kernels, Philippe Eyssidieux for many helpful comments and discussions, and Jean-Pierre Demailly for enlightening comments including the natural hermitian metric on a family of Calabi-Yau manifolds. Finally, he would like to thank Yuxin Ge for informing him of the error in the previous version.
\section{Preliminaries}
Let $p:X^{n+d}\rightarrow Y^d$ be a smooth family of K\"ahler manifolds. Taking a local coordinate $(s^1,\dots,s^d)$ of $Y$ and a local coordinate $(z^1,\dots,z^n)$ of a fiber of $p$, $(z^1,\dots,z^n,s^1,\dots,s^d)$ forms a local coordinate of $X$ such that under this coordinate, the holomorphic mapping $p$ is locally given by
\begin{equation*}
p(z^1,\dots,z^n,s^1,\dots,s^d)
=
(s^1,\dots,s^d).
\end{equation*}
We call this an \emph{admissible coordinate of $p$}.
Throughout this paper we use small Greek letters, $\alpha,\beta,\dots=1,\dots,n$ for indices on $z=(z^1,\dots,z^n)$ and small roman letters, $i,j,\dots=1,\dots,d$ for indices on $s=(s^1,\dots,s^d)$ unless otherwise specified. For a differentiable function $f$ on $X$, we denote by
\begin{equation} \label{E:convention}
f_\alpha
=\pd{f}{z^\alpha},\;\;
f_{\bar\beta}
=\pd{f}{z^{\bar\beta}},\;\;
\;\;\text{and}\;\;
f_{i}
=\pd{f}{s^i},\;\;
f_{\bar{j}}
=\pd{f}{s^{\bar{j}}},
\end{equation}
where $z^{\bar\beta}$ and $s^{\bar{j}}$ mean $\overline{z^\beta}$ and $\overline{s^j}$, respectively. In case $d=1$, we denote by
\begin{equation*}
f_{s}
=\pd{f}{s}\;\;
\text{and}\;\;
f_{\bar{s}}
=\pd{f}{\bar{s}}.
\end{equation*}
If there is no confusion, we always use the Einstein convention. For simplicity we denote by $v_i:=\partial/\partial{s^i}$. If $d=1$, then we denote by $v:=\partial/\partial{s}$.
\subsection{Horizontal lifts and geodesic curvatures}\label{SS:horizontal_lift}
For a complex manifold $M$, we denote by $T'M$ the complex tangent bundle of type $(1,0)$.
\begin{definition} \label{D:lift&curvature}
Let $V\in T'Y$ and $\tau$ be a real $(1,1)$-form on $X$. Suppose that $\tau$ is positive definite on each fiber $X_y$.
\begin{itemize}
\item[1.] A vector field $V_\tau$ of type $(1,0)$ is called a \emph{horizontal lift} of $V$ if $V_\tau$ satisfies the following:
\begin{itemize}
\item [(\romannumeral1)]$\inner{V_\tau,W}_\tau=0$ for all $W\in{T'X_y}$,
\item [(\romannumeral2)]$dp(V_\tau)=V$.
\end{itemize}
\item[2.] The \emph{geodesic curvature} $c(\tau)(V)$ of $\tau$ along $V$ is defined by the norm of $V_\tau$ with respect to the sesquilinear form $\inner{\cdot,\cdot}_\tau$ induced by $\tau$, namely,
\begin{equation*}
c(\tau)(V)=\inner{V_\tau,V_\tau}_\tau.
\end{equation*}
\end{itemize}
\end{definition}
\begin{remark} \label{R:horizontal_lift}
Let $(z^1,\dots,z^n,s^1,\dots,s^d)$ be an admissible coordinate of $p$. Then we can write $\tau$ as follows:
\begin{equation*}
\tau
=
\im\paren{\tau_{i\bar{j}}ds^i\wedge{ds}^{\bar{j}}
+\tau_{i\bar\beta}ds^i\wedge{dz}^{\bar\beta}
+\tau_{\alpha\bar{j}}dz^\alpha\wedge{ds}^{\bar{j}}
+\tau_{\alpha\bar\beta}dz^\alpha\wedge{dz}^{\bar\beta}
}.
\end{equation*}
Since $\tau$ is positive-definite on each fiber $X_y$, the matrix $(\tau_{\alpha\bar\beta})$ is invertible. We denote by $(\tau^{\bar\beta\alpha})$ the inverse matrix. Then it is easy to see that the horizontal lift of $\partial/\partial{s^i}$ is given as follows.
\begin{equation*}
\paren{\pd{}{s^i}}_\tau
=
\pd{}{s^i}-\tau_{i\bar\beta}\tau^{\bar\beta\alpha}\pd{}{z^\alpha},
\end{equation*}
in particular, any horizontal lift with respect to $\tau$ is uniquely determined.
On the other hand, the geodesic curvature $c(\tau)(v_i)$ is computed as follows:
\begin{equation*}
\begin{aligned}
c(\tau)(v_i)
&=
\inner{(v_i)_\tau,(v_i)_\tau}_\tau \\
&=
\inner{\pd{}{s^i}-\tau_{i\bar\beta}\tau^{\bar\beta\alpha}\pd{}{z^\alpha},
\pd{}{s^i}-\tau_{i\bar\delta}\tau^{\bar\delta\gamma}\pd{}{z^\gamma}
}_\tau \\
&=
\tau_{i\bar i}
-\overline{\tau_{i\bar\delta}\tau^{\bar\delta\gamma}}\tau_{i\bar\gamma}
-\tau_{i\bar\beta}\tau^{\bar\beta\alpha}\tau_{\alpha\bar i}
+\tau_{i\bar\beta}\tau^{\bar\beta\alpha}
\overline{\tau_{i\bar\delta}\tau^{\bar\delta\gamma}}\tau_{\alpha\bar\gamma} \\
&=
\tau_{i\bar i}
-\tau_{i\bar\beta}\tau^{\bar\beta\alpha}\tau_{\alpha\bar i},
\end{aligned}
\end{equation*}
because $\tau$ is a real $(1,1)$-form.
\end{remark}
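The formulas above are straightforward to evaluate numerically. As a sanity check, the following Python snippet (with placeholder data, $d=1$ and \texttt{numpy} assumed) computes the coefficients of the horizontal lift and the geodesic curvature from the coefficient blocks of $\tau$ at a point:
\begin{verbatim}
import numpy as np

n = 3
A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
G = A @ A.conj().T + n * np.eye(n)  # (tau_{alpha beta-bar}): pos. definite
t = np.random.randn(n) + 1j * np.random.randn(n)  # row (tau_{s beta-bar})
tau_ss = 5.0                                      # tau_{s s-bar}

lift = -np.linalg.solve(G.T, t)     # coefficients a^alpha of the lift
c_tau = tau_ss - t @ np.linalg.solve(G, t.conj())
print(c_tau.real)                   # c(tau)(v); imaginary part ~ 0
\end{verbatim}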
\begin{remark} \label{R:metric_K_X/Y}
The real $(1,1)$-form $\tau$ in Definition \ref{D:lift&curvature} induces a hermitian metric on the relative canonical line bundle $K_{X/Y}$ as follows:
Let $(z^1,\dots,z^n,s^1,\dots,s^d)$ be an admissible coordinate in $X$.
Since $\tau$ is positive-definite on each fiber, $(\tau_{\alpha\bar\beta})$ is positive-definite. Hence
$$
\sum\tau_{\alpha\bar\beta}(z,s)dz^\alpha\wedge dz^{\bar\beta}
$$
gives a K\"ahler metric on each fiber $X_s$. It follows that
\begin{equation}\label{E:metric_on_RCLB}
\det(\tau_{\alpha\bar\beta}(z,s))^{-1}
\end{equation}
gives a hermitian metric on the relative canonical line bundle $K_{X/Y}$. We denote this metric by $h^\tau_{X/Y}$. The curvature form $\Theta_{h^\tau_{X/Y}}(K_{X/Y})$ of $h^\tau_{X/Y}$ is given by
\begin{equation*}
\Theta_{h^\tau_{X/Y}}(K_{X/Y})
=
dd^c\log\det(\tau_{\alpha\bar\beta}(z,s)).
\end{equation*}
It is obvious that the curvature can also be written as follows:
\begin{equation*}
\Theta_{h^\tau_{X/Y}}(K_{X/Y})
=
dd^c\log\det\paren{\tau^n\wedge dV_s},
\end{equation*}
where we denote by $\tau^n$ the $n$-fold exterior power divided by $n!$.
\end{remark}
Suppose that $Y$ is $1$-dimensional.
Then it is well known (cf, see \cite{Semmes}) that
\begin{equation}\label{E:Semmes}
\tau^{n+1}
=
c(\tau)\cdot \tau^n\wedge\im{d}s\wedge{d}\bar{s}.
\end{equation}
It follows that if $c(\tau)>0 \; (\ge0)$, then $\tau$ is a positive (semi-positive) real $(1,1)$-form as $\tau$ is positive definite when restricted to $X_y$. On the other hand, \eqref{E:Semmes} says that
\begin{equation*}
p_*\tau^{n+1}
=
\int_{X_s}\tau^{n+1}
=
\int_{X_s}c(\tau)\cdot \tau^n\wedge\im{d}s\wedge{d}\bar{s}.
\end{equation*}
Hence $p_*\tau^{n+1}$ is positive or semi-positive if and only if $\int_{X_s}c(\tau)\tau^n$ is positive or nonnegative, respectively.
For later use, we introduce the following lemma.
\begin{lemma}\label{L:contraction}
The following identity holds:
\begin{equation*}
i_{v_\tau} \tau
=
\im c(\tau)d\bar s.
\end{equation*}
\end{lemma}
\begin{proof}
The computation is quite straightforward.
\begin{align*}
i_{v_\tau} \tau
&=
\im\paren{
\tau_{s\bar s}d\bar s
+
\tau_{s\bar\beta}dz^{\bar\beta}
-
\tau_{s\bar\beta}\tau^{\bar\beta\alpha}\tau_{\alpha\bar s}d\bar s
-
\tau_{s\bar\delta}\tau^{\bar\delta\alpha}\tau_{\alpha\bar\beta}dz^{\bar\beta}
}
\\
&=
\im\paren{
\tau_{s\bar s}d\bar s
-
\tau_{s\bar\beta}\tau^{\bar\beta\alpha}\tau_{\alpha\bar s}d\bar s
}
\\
&=
\im c(\tau)d\bar s.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Kodaira-Spencer classes and direct image bundles}
Let $p:X\rightarrow Y$ be a smooth family of compact K\"ahler manifolds. We denote the Kodaira-Spencer map for the family $p:X\rightarrow Y$ at a given point $y\in Y$ by
\begin{equation*}
K_y:T'_y Y\rightarrow
H^1(X_y,T'X_y).
\end{equation*}
The Kodaira-Spencer map is induced by the edge homomorphism for the short exact sequence
\begin{equation*}
0\rightarrow
T'_{X/Y}
\rightarrow
T'X
\rightarrow
p^*T'Y
\rightarrow0.
\end{equation*}
If $V\in T'_y Y$ is a tangent vector, and if
\begin{equation*}
V+b^\alpha\pd{}{z^\alpha}
\end{equation*}
is any smooth lifting of $V$ along $X_y$, then
\begin{equation*}
\bar\partial
\paren{
V+b^\alpha\pd{}{z^\alpha}
}
=
\pd{b^\alpha}{z^{\bar\beta}}
\pd{}{z^\alpha}
\otimes
dz^{\bar\beta}
\end{equation*}
is a $\bar\partial$-closed form on $X$, which represents $K_y(V)$, i.e.,
\begin{equation*}
K_y(V)
=
\left[
\pd{b^\alpha}{z^{\bar\beta}}
\pd{}{z^\alpha}
\otimes
dz^{\bar\beta}
\right]
\in H^{0,1}(X_y,T'X_y).
\end{equation*}
This cohomology class $K_y(V)$ is called the \emph{Kodaira-Spencer class} of $V$. The celebrated theorem of Kodaira and Spencer says that if the Kodaira-Spencer class vanishes locally, then the family is locally trivial (\cite{Kodaira:Spencer}, see also \cite{Kodaira}).
\medskip
The direct image sheaf $E:=p_*(K_{X/Y})$ of $K_{X/Y}$ is defined by the sheaf over $Y$ whose fiber $E_y$ is given by
\begin{equation*}
E_y=H^0(X_y,K_{X_y}).
\end{equation*}
It is remarkable to note that this sheaf is indeed a holomorphic vector bundle by the Ohsawa-Takegoshi extension theorem (for more details, see Section 4 in \cite{Berndtsson1}). $E$ is a hermitian vector bundle with the $L^2$ metric defined as follows: for $u_y, v_y\in E_y$, define $\inner{u_y,v_y}_y$ by
\begin{equation*}
\inner{u_y,v_y}_y
=
\int_{X_y}c_n u_y\wedge \overline{v_y}
\end{equation*}
where $c_n=(\im)^{n^2}$ is chosen to make the form positive. The Kodaira-Spencer class acts on $u_y\in E_y$ as follows: let $k_y(V)$ be any representative of $K_y(V)$, i.e., a $T'X_y$-valued $(0,1)$-form in the class $K_y(V)$, which locally decomposes as
\begin{equation*}
k_y=\zeta\otimes w
\end{equation*}
where $\zeta$ is a $(0,1)$-form and $w$ is a vector field of type $(1,0)$. Then $k_y(V)$ acts on $u_y$ by
\begin{equation*}
k_y(V)\cdot u_y=\zeta\wedge(i_w(u_y)),
\end{equation*}
where $i_w$ is the contraction. This gives a globally defined $\bar\partial$-closed form of type $(n-1,1)$ and
\begin{equation*}
K_y(V)\cdot u_y
:=
\bparen{k_y(V)\cdot u_y
}
\in H^{n-1,1}(X_y).
\end{equation*}
The following theorem due to Griffiths says that the curvature of $E$ can be computed in terms of Kodaira-Spencer classes (\cite{Griffiths}, see also \cite{Berndtsson2}).
\begin{theorem}
Let $\Theta(E)$ be the curvature of $E$ with $L^2$-metric. Then for $V\in T_y'Y$,
\begin{equation}\label{E:Griffiths}
\inner{\Theta_{V\bar V}(E)u,u}=\norm{K_y(V)\cdot u}^2,
\end{equation}
where $\norm{K_y(V)\cdot u}$ is the norm of its unique harmonic representative. It does not depend on the choice of K\"ahler metric.
\end{theorem}
\section{Approximations of complex Monge-Amp\`ere equations}\label{S:approximation}
In this section, we discuss approximations of a solution of the complex Monge-Amp\`ere equation \eqref{E:0} in terms of the solutions of \eqref{E:1}. First we consider the approximation on a single compact K\"ahler manifold. After that, we apply the approximation procedure to a family of complex Monge-Amp\`ere equations. We begin by recalling the existence and uniqueness theorem for complex Monge-Amp\`ere equations due to Aubin and Yau.
Let $(X,\omega)$ be a compact K\"ahler manifold. Let $f$ be a smooth function on $X$. The complex Monge-Amp\`ere equation is given by the following:
\begin{equation} \label{E:CMAE}
\begin{aligned}
\paren{\omega+dd^c\varphi}^n
&=
e^{\lambda\varphi+f}\omega^n,
\\
\omega+dd^c\varphi
&>0.
\end{aligned}
\end{equation}
This fully nonlinear complex partial differential equation was first posed by E. Calabi. The easiest case, $\lambda>0$, was solved by Aubin and Yau independently (\cite{Aubin1, Yau}). The next case, $\lambda=0$, was solved by Yau (\cite{Yau}). The last case, $\lambda<0$, is not solvable in general. This is why a compact K\"ahler manifold with positive first Chern class does not admit a K\"ahler-Einstein metric in general (cf, see \cite{Tian}).
\begin{theorem} \label{T:AY}
The following holds:
\begin{itemize}
\item [1.] (Aubin/Yau) If $\lambda>0$, then there exists a unique smooth function $\varphi$ satisfying \eqref{E:CMAE} for every smooth function $f\in C^\infty(X)$.
\item [2.] (Yau) If $\lambda=0$, then there exists a smooth function $\varphi$ satisfying \eqref{E:CMAE} for $f\in C^\infty(X)$ such that $\int_X e^f\omega^n=\int_X \omega^n$. Moreover, the solution is unique up to the addition of constants.
\end{itemize}
\end{theorem}
\subsection{Approximation on a compact K\"ahler manifold}
\label{SS:approximation1}
Let $(M,\omega)$ be a compact K\"ahler manifold and $f$ be a smooth function on $M$ satisfying
\begin{equation*}
\int_M e^f\omega^n=\int_M\omega^n.
\end{equation*}
Consider the following complex Monge-Amp\`ere equation:
\begin{equation} \label{E:CMAE0}
\begin{aligned}
\paren{\omega+dd^c\varphi}^n &= e^f\omega^n, \\
\omega+dd^c&\varphi>0.
\end{aligned}
\end{equation}
By Theorem \ref{T:AY}, we already know that there exists a solution which is unique up to addition of constants.
Let $\{f_\varepsilon\}$ be a sequence of smooth functions on $M$ which converges to $f$ in the $C^{k,\alpha}(M)$-topology, for any $k\in\NN$ and $\alpha\in(0,1)$, as $\varepsilon$ goes to $0$. We want to approximate a solution of \eqref{E:CMAE0} by the solutions $\varphi_\varepsilon$ of the following complex Monge-Amp\`ere equations:
\begin{equation} \label{E:CMAE1}
\begin{aligned}
\paren{\omega+dd^c\varphi_\varepsilon}^n &=
e^{\varepsilon\varphi_\varepsilon+f_\varepsilon}\omega^n \\
\omega+dd^c&\varphi_\varepsilon>0,
\end{aligned}
\end{equation}
as $\varepsilon\rightarrow0$. Note that, as $\varepsilon\rightarrow0$, Equation \eqref{E:CMAE1} formally converges to Equation \eqref{E:CMAE0}.
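Before turning to the a priori estimates, let us illustrate this convergence in the simplest possible setting. In complex dimension one on a flat torus with $\omega=\im\, dz\wedge d\bar z$, Equation \eqref{E:CMAE1} with $f_\varepsilon=f$ reduces to the semilinear equation $1+\frac{1}{4}\Delta\varphi_\varepsilon = e^{\varepsilon\varphi_\varepsilon+f}$. The following Python sketch (a toy illustration assuming \texttt{numpy} and \texttt{scipy}; it is not a substitute for the estimates of this section) solves it for decreasing $\varepsilon$:
\begin{verbatim}
import numpy as np
from scipy.optimize import newton_krylov

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi  # integer wavenumbers
K2 = k[:, None] ** 2 + k[None, :] ** 2

def lap(u):                          # periodic Laplacian via FFT
    return np.real(np.fft.ifft2(-K2 * np.fft.fft2(u)))

f = 0.2 * np.cos(X) * np.cos(Y)
f -= np.log(np.mean(np.exp(f)))      # normalization: mean of e^f is 1

phi = np.zeros((N, N))
for eps in [1.0, 0.1, 0.01]:
    res = lambda u, e=eps: 1 + lap(u) / 4 - np.exp(e * u + f)
    phi = newton_krylov(res, phi, f_tol=1e-10)
    print(eps, np.abs(res(phi)).max())   # phi converges as eps -> 0
\end{verbatim}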
The convention throughout this paper is that we use the same letter ``$C$'' to denote a generic constant, which may change from one line to another, but is independent of the pertinent parameters involved (especially $\varepsilon$).
\begin{proposition} \label{P:approximation1}
For each $\varepsilon$ with $0<\varepsilon\le1$, let $\varphi_\varepsilon$ be the solution of \eqref{E:CMAE1}. Then for any $k\in\NN$ and $\alpha\in(0,1)$, there exists a constant $C>0$ which depends only on $k$, $\alpha$, the geometry of $(M,\omega)$ and the function $f$ such that
\begin{equation*}
\norm{\varphi_\varepsilon}_{C^{k,\alpha}(M)}<C.
\end{equation*}
In particular, $\{\varphi_\varepsilon\}$ is a relatively compact subset of $C^{k,\alpha}(M)$ for any positive integer $k$ and $\alpha\in(0,1)$.
\end{proposition}
\begin{proof}
We may assume that
\begin{equation*}
\mathrm{Vol}(M)
=
\int_M\omega^n
=1.
\end{equation*}
The first step is obtaining a uniform upper bound for $\varphi_\varepsilon$. For each $\varepsilon>0$, the solution $\varphi_\varepsilon$ of \eqref{E:CMAE1} satisfies that
\begin{equation*}
1=\int_M\paren{\omega+dd^c\varphi_\varepsilon}^n
=
\int_M e^{\varepsilon\varphi_\varepsilon} e^{f_\varepsilon}\omega^n
\end{equation*}
Then Jensen's inequality implies that
\begin{equation*}
1\ge
\exp\paren{\int_M\varepsilon\varphi_\varepsilon e^{f_\varepsilon}\omega^n},
\end{equation*}
which is equivalent to
\begin{equation*}
\int_M\varphi_\varepsilon e^{f_\varepsilon}\omega^n
\le0.
\end{equation*}
Note that $f_\varepsilon$ converges to $f$ as $\varepsilon\rightarrow0$. The Hartogs lemma for quasi-plurisubharmonic functions implies that
\begin{equation}\label{E:upper}
\sup_M\varphi_\varepsilon
<
C,
\end{equation}
where $C$ is a constant which depends only on the geometry of $(M,\omega)$ and $f$ (\cite{Guedj_Zerihai}). Here we recall the simple version of Ko{\l}odziej's uniform estimates (for the general theorem, see \cite{Kolodziej1, Kolodziej2}).
\begin{theorem}\label{T:uniform}
Let $(M,\omega)$ be a compact K\"ahler manifold. Assume that $\varphi$ satisfies the following complex Monge-Amp\`ere equation:
\begin{align*}
\paren{\omega+dd^c\varphi}^n
&=
F\omega^n, \\
\omega+dd^c\varphi
&>
0.
\end{align*}
Then
\begin{equation*}
\norm{\varphi}_{C^0(M)}\le
C
\end{equation*}
where $C>0$ depends only on $(M,\omega)$ and on an upper bound for $\norm{F}_p$ for some $1<p\le\infty$.
\end{theorem}
If we set $F=e^{\varepsilon\varphi_\varepsilon+f_\varepsilon}$, then $\abs{F}<C$ for some $C>0$ by \eqref{E:upper}. Then it follows from Theorem \ref{T:uniform} that
\begin{equation}\label{E:uniform}
\norm{\varphi_\varepsilon}_{C^0(M)}<C
\end{equation}
for some $C>0$ which depends only on $M$ and the function $f$.
\medskip
The second step is obtaining the Laplacian estimates. We recall the following theorem in \cite{Di Nezza_Lu}, which is essentially due to M. P\v{a}un (\cite{Paun1}, cf. see \cite{Siu}).
\begin{theorem}\label{T:Laplacian}
Let $\psi^+$ and $\psi^-$ be smooth quasi-plurisubharmonic functions on $M$. Let $\varphi\in~C^\infty(M)$ be such that $\sup_M\varphi=0$ and
\begin{equation*}
(\omega+dd^c\varphi)^n
=
e^{\psi^+-\psi^-}\omega^n.
\end{equation*}
Assume given a constant $C>0$ such that
\begin{equation*}
dd^c\psi^\pm\ge-C\omega,
\;\;\;
\sup_M\psi^+\le C.
\end{equation*}
Assume also that the holomorphic bisectional curvature of $\omega$ is bounded from below by $-C$. Then there exists $A>0$ depending on $C$ and $\int_Me^{-2(4C+1)\varphi}\omega^n$ such that
\begin{equation*}
0\le
n+\Delta_\omega\varphi
\le
Ae^{-2\psi^-}.
\end{equation*}
\end{theorem}
We take $\psi^+=\varepsilon\varphi_\varepsilon+f_\varepsilon$ and $\psi^-=0$. Since $f_\varepsilon$ converges to $f$ as $\varepsilon\rightarrow0$ and every $\varphi_\varepsilon$ satisfies that
\begin{equation*}
dd^c\varphi_\varepsilon > -\omega,
\end{equation*}
it follows from \eqref{E:uniform} that $\psi^+$ satisfies the hypothesis of Theorem~\ref{T:Laplacian}. Note that $\{\varphi_\varepsilon\}_{0<\varepsilon\le1}$ is a relatively compact subset of $L^1(M,\omega)$. This implies the Laplacian estimates for $\varphi_\varepsilon$:
\begin{equation*}
\abs{\Delta_\omega\varphi_\varepsilon}<C
\end{equation*}
for some constant $C>0$ which depends only on the geometry of $(M,\omega)$ and the function $f$ by the Uniform Skoda Integrability Theorem due to Zeriahi (\cite{Zeriahi}).
\medskip
The final step is the $C^{k,\alpha}(M)$-estimate. For $k\ge2$ and $\alpha\in(0,1)$, the standard Evans-Krylov method (\cite{Evans, Krylov}) and Schauder estimates (cf, see \cite{Aubin2, Gilbarg_Trudinger}) imply
\begin{equation*}
\norm{\varphi_\varepsilon}_{C^{k,\alpha}(M)}
\le
C,
\end{equation*}
where $C$ is a positive constant which depends only on $k,\alpha$, the geometry of $(M,\omega)$ and the function $f$. This completes the proof.
\end{proof}
Proposition \ref{P:approximation1} implies that there exists $\hat\varphi\in C^\infty(M)$ such that $\varphi_\varepsilon\rightarrow\hat\varphi$ as $\varepsilon\rightarrow0$, after passing to a subsequence. In fact, $\varphi_\varepsilon$ converges without choosing a subsequence.
\begin{corollary}\label{C:convergence_vp}
The solutions $\varphi_\varepsilon$ converge, as $\varepsilon\rightarrow0$, to the solution $\varphi$ of \eqref{E:CMAE0} which satisfies the normalization condition
\begin{equation*}
\int_M\varphi e^f\omega^n=0.
\end{equation*}
\end{corollary}
\begin{proof}
Consider the complex Monge-Amp\`ere equation \eqref{E:CMAE0}.
By Theorem \ref{T:AY}, we already know that there exists a solution which is unique up to addition of constants. Let $\varphi_0$ be the unique solution which satisfies
\begin{equation*}
\int_M\varphi_0e^f\omega^n=0.
\end{equation*}
For $0<\varepsilon\le1$, we consider the following equation:
\begin{equation}
\begin{aligned}
\paren{\omega+dd^c\varphi_\varepsilon}^n &=
e^{\varepsilon\varphi_\varepsilon+f}\omega^n \\
\omega+dd^c&\varphi_\varepsilon>0,
\end{aligned}
\end{equation}
Now we want to show that $\varphi_\varepsilon\rightarrow\varphi_0$ in the $C^{k,\alpha}(M)$-topology.
It is enough to show that
\begin{equation*}
\lim_{\varepsilon\rightarrow0}
\int_M\varphi_\varepsilon e^f\omega^n=0.
\end{equation*}
By Proposition \ref{P:approximation1}, there exists a uniform constant $C>0$ such that
\begin{equation*}
\norm{\varphi_\varepsilon}_{C^{k,\alpha}(M)}<C.
\end{equation*}
It follows that
\begin{equation*}
e^{\varepsilon\varphi_\varepsilon}
=
1+\varepsilon\varphi_\varepsilon+o(\varepsilon).
\end{equation*}
On the other hand, we have
\begin{equation*}
1
=
\int_M\omega^n
=
\int_M\paren{\omega+dd^c\varphi_\varepsilon}^n
=
\int_M e^{\varepsilon\varphi_\varepsilon} e^{f}\omega^n
=
\int_M
\paren{1+\varepsilon\varphi_\varepsilon+o(\varepsilon)}
e^{f}\omega^n,
\end{equation*}
so we have
\begin{equation*}
\varepsilon\int_M\varphi_\varepsilon e^{f}\omega^n
=
o(\varepsilon)\int_M e^{f}\omega^n.
\end{equation*}
This implies the conclusion.
\end{proof}
\subsection{Approximation on a family of complex Monge-Amp\`ere equations}
\label{SS:AFCMAE}
Let $p:X^{n+d}\rightarrow Y^d$ be a smooth family of compact K\"ahler manifolds and $\omega$ be a fixed K\"ahler form on $X$. Let $\xi$ be a differential form of degree $2n+r$ on $X$. Then the fiber integral is a differential form of degree $r$ on $Y$, which is defined as follows: Fix a point $y\in Y$ and let $(U,s=(s^1,\dots,s^d))$ be a coordinate centered at $y$ such that there exists a $C^\infty$ trivialization of the family:
\begin{equation*}
\Phi:X_0\times U\rightarrow p^{-1}(U).
\end{equation*}
In an admissible coordinate $(z,s)$, the pull-back $\Phi^*\xi$ is of the form
\begin{equation*}
\sum\xi_k(z,s) dV_z\wedge d\sigma^{k_1}\wedge\cdots\wedge d\sigma^{k_r},
\end{equation*}
where the $\sigma^{k_j}$ run through the real and imaginary parts of $s^j$ and $dV_z$ denotes the relative Euclidean volume form. Now the fiber integral is defined by
\begin{equation*}
\int_{X/Y}\xi
=
\int_{X_0\times U/U}
\Phi^*\xi
=
\sum\paren{\int_{X_s}\xi_k(z,s)dV_z}
d\sigma^{k_1}\wedge\cdots\wedge d\sigma^{k_r}.
\end{equation*}
Note that this definition is independent of the choice of coordinates and differentiable trivializations. The fiber integral coincides with the push-forward of the corresponding current. Hence, if $\xi$ is a differentiable form of type $(n+r,n+s)$, then the fiber integral is of type $(r,s)$. In particular, if $\xi$ is a differentiable form of type $(n,n)$ on $X$, then $\int_{X_s}\xi$ is a smooth function on $Y$. Moreover, we have the following properties (for the details, see \cite{Schumacher}):
\begin{itemize}
\item [(\romannumeral1)] Fiber integration coincides with the push forward of a form, which is defined as follows:
For a form $\xi$ on $X$, $p_*\xi$ is defined by the form on $Y$ which satisfies
\begin{equation*}
\int_Y (p_*\xi)\wedge\zeta
=
\int_X \xi\wedge(p^*\zeta)
\end{equation*}
for any form $\zeta$ on $Y$.
\item [(\romannumeral2)]Fiber integration commutes with taking exterior derivatives:
\begin{equation*}
d\int_{X_s}\xi = \int_{X_s}d\xi
\end{equation*}
\item [(\romannumeral3)] For a smooth form $\xi$ of type $(n,n)$,
\begin{equation*}
\pd{}{s^i}\int_{X_s}\xi=\int_{X_s}L_V(\xi)
\end{equation*}
for any smooth lifting $V$ of $\partial/\partial s^i$ on $X$.
\end{itemize}
Note that the volume of a fiber does not change, namely, (\romannumeral2) implies that
\begin{equation*}
d\mathrm{Vol}_{\omega\vert_{X_s}}(X_s)
=
d\int_{X_s}\omega^n
=
\int_{X_s}d\omega^n
=0.
\end{equation*}
Hence we may assume that $\mathrm{Vol}_{\omega\vert_{X_y}}(X_y)=1$ for every $y\in Y$. The third property (\romannumeral3) will be used in Section \ref{S:app_geo_curv}.
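As an elementary illustration, the following Python snippet (a toy example with one real fiber variable and one real base variable, with placeholder data) checks numerically that differentiation in the base commutes with fiber integration, in the spirit of property (\romannumeral3) with the trivial lift:
\begin{verbatim}
import numpy as np

z = np.linspace(0, 2 * np.pi, 256, endpoint=False)   # fiber variable
s = np.linspace(-1.0, 1.0, 101)                      # base variable
dz = z[1] - z[0]

g = lambda zz, ss: np.exp(ss * np.cos(zz))           # placeholder density
F = np.array([np.sum(g(z, si)) * dz for si in s])    # fiber integral F(s)

dF_num = np.gradient(F, s)                           # d/ds of F
dF_fib = np.array([np.sum(np.cos(z) * g(z, si)) * dz for si in s])
print(np.max(np.abs(dF_num - dF_fib)))               # small (grid error)
\end{verbatim}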
\medskip
From now on, we consider a smooth family $p:X\rightarrow\DD$ of compact K\"ahler manifolds over the unit disc $\DD$ in $\CC$.
Let $\omega$ be a K\"ahler form on $X$.
Under an admissible coordinate $(z^1,\dots,z^n,s)$ in $X$, $\omega$ is written as follows:
\begin{equation}\label{E:local_omega}
\omega
=
\im\paren{g_{s\bar s}ds\wedge d\bar s
+g_{s\bar\beta}ds\wedge{dz}^{\bar\beta}
+g_{\alpha\bar s}dz^\alpha\wedge d\bar s
+g_{\alpha\bar\beta}dz^\alpha\wedge{dz}^{\bar\beta}
}.
\end{equation}
For $0<\varepsilon\le1$, let $\{f_\varepsilon\}$ be a sequence of smooth functions on $X$. We consider the following fiberwise complex Monge-Amp\`ere equations:
\begin{equation} \label{E:ACMAE}
\begin{aligned}
\paren{\omega_y+dd^c\varphi_y}^n
&=
e^{\varepsilon\varphi_y+f_\varepsilon\vert_{X_y}}(\omega_y)^n,
\\
\omega_y+dd^c\varphi_y
&>0
\end{aligned}
\end{equation}
on $X_y$ for $y\in\DD$. Theorem \ref{T:AY} implies that for each $y$, there exists a unique solution of \eqref{E:ACMAE}, call it $\varphi_{y,\varepsilon}\in C^\infty(X_y)$. It is remarkable to note that the function $\varphi_\varepsilon$ defined by
\begin{equation*}
\varphi_\varepsilon(x)=\varphi_{y,\varepsilon}(x),
\end{equation*}
where $y=p(x)$, is a smooth function on $X$. This follows from the openness analysis of the continuity method for complex Monge-Amp\`ere equations and the implicit function theorem (\cite{Yau}). By Section \ref{SS:approximation1}, there exists a constant $C_y>0$ such that
\begin{equation}\label{E:single_estimate}
\norm{\varphi_\varepsilon}_{C^{k,\alpha}(X_y)}
\le
C_y
\end{equation}
where $C_y$ does not depend on $\varepsilon$. Since we are now considering a local property on $y$, we may assume that $C=C_y$ does not depend on $y$.
In this section, we consider the $C^{k,\alpha}$-estimates for $V\varphi_\varepsilon$ and $\bar VV\varphi_\varepsilon$ on a fixed fiber $X_y$, where $V$ is any smooth lifting of $\partial/\partial s$ written as follows:
\begin{equation*}
V=\pd{}{s}+a\ind{s}{\gamma}{}\pd{}{z^\gamma}.
\end{equation*}
Before going further, we introduce the following proposition.
\begin{proposition}\label{P:Key_Prop}
Let $(X,\omega)$ be a compact K\"ahler manifold.
Let $\{\rho_\varepsilon\}_{\varepsilon\in I}$ be a family of K\"ahler metrics on $X$ which are uniformly equivalent to $\omega$, i.e., there exists a constant $C_1>0$ such that
\begin{equation*}
\frac{1}{C_1}\omega<\rho_\varepsilon<C_1\omega
\;\;\;
\text{for all}\;\;\;
\varepsilon\in I.
\end{equation*}
Let $u_\varepsilon$ be a solution of the following PDE:
\begin{equation}\label{E:laplacian}
-\Delta_{\rho_\varepsilon}u_\varepsilon
+
\varepsilon u_\varepsilon
=
R_\varepsilon,
\end{equation}
where $R_\varepsilon$ is a smooth function on $X$ with
$$
\norm{R_\varepsilon}_{C^{k,\alpha}(X)}<C_2.
$$
Suppose that
\begin{equation*}
\abs{\int_X u_\varepsilon\omega^n}<C_3.
\end{equation*}
Then there exists a uniform constant $C>0$ which depends only on $C_1$, $C_2$, $C_3$ and the geometry of $(X,\omega)$ such that
\begin{equation*}
\norm{u_\varepsilon}_{C^{k,\alpha}(X)}<C.
\end{equation*}
\end{proposition}
\begin{proof}
In this proof, we shall use the Schauder estimate, the Poincar\'e inequality and the Sobolev inequality with respect to the K\"ahler metric $\rho_\varepsilon$ (cf, see \cite{Gilbarg_Trudinger, Aubin2}).
It is remarkable to note that the constants in those inequalities do not depend on $\varepsilon\in I$, since all $\rho_\varepsilon$ are uniformly equivalent to $\omega$.
Once we have the uniform estimate, i.e., the $C^0$-estimate of $u_\varepsilon$, the Schauder estimate completes the proof.
\medskip
The Poincar\'e inequality says that there exists a constant $C$ which depends only on $C_1$ and the geometry of $(X,\omega)$ such that
\begin{equation*}
\norm{u_\varepsilon-\int_X u_\varepsilon{\rho_\varepsilon}^n}_{L^2_{\rho_\varepsilon}(X)}
<
C\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)},
\end{equation*}
where $D$ is a total derivative.
It follows from the assumption that
\begin{equation*}
\norm{u_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}
<C\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}+C_1C_3.
\end{equation*}
On the other hand, multiplying \eqref{E:laplacian} by $u_\varepsilon$ and integrating with respect to $(\rho_\varepsilon)^n$, we have
\begin{equation*}
\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}^2
+
\varepsilon\norm{u_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}^2
=
\int_X R_\varepsilon u_\varepsilon{\rho_\varepsilon}^n.
\end{equation*}
The H\"older inequality says that
\begin{equation}
\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}^2
\le
\norm{R_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}
\norm{u_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}
\end{equation}
Combining the last two displays, we see that there exists a uniform constant $C$ which depends only on $C_1, C_2, C_3$ and the geometry of $(X,\omega)$ such that
\begin{equation*}
\norm{u_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}<C.
\end{equation*}
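For completeness, here is the elementary step behind the last display, as a sketch: substituting the Poincar\'e-type bound into the H\"older estimate gives
\begin{equation*}
\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}^2
\le
\norm{R_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}
\paren{C\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}+C_1C_3},
\end{equation*}
a quadratic inequality in $\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}$; since $\norm{R_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}$ is controlled by $C_2$ and the uniform volume bound, this bounds $\norm{Du_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}$, and hence $\norm{u_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}$, uniformly.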
Now we follow the Moser iteration.
Multiplying \eqref{E:laplacian} by $\abs{u_\varepsilon}^{2p-2}u_\varepsilon$ and integrating, we have
\begin{equation*}
\frac{2p-1}{p^2}\int_{X}\abs{D\abs{u_\varepsilon}^p}^2(\rho_\varepsilon)^n
+
\varepsilon\int_{X}\abs{u_\varepsilon}^{2p}(\rho_\varepsilon)^n
=
\int_{X} R_\varepsilon\abs{u_\varepsilon}^{2p-2}u_\varepsilon\,(\rho_\varepsilon)^n.
\end{equation*}
The Sobolev inequality says that
\begin{equation*}
\norm{\abs{u_\varepsilon}^p}^2_{L^{2n/(n-1)}_{\rho_\varepsilon}(X)}
\le
C
\paren{
\norm{\abs{u_\varepsilon}^p}^2_{L^2_{\rho_\varepsilon}(X)}
+
\norm{D\abs{u_\varepsilon}^p}^2_{L^2_{\rho_\varepsilon}(X)}
}
\end{equation*}
for $p\ge1$ (\cite{Aubin2}). Combining the two displays, we have
\begin{equation*}
\norm{u_\varepsilon}_{L^{2p\cdot\frac{n}{n-1}}_{\rho_\varepsilon}(X)}
\le
(Cp)^{1/p}\norm{u_\varepsilon}_{L^{2p}_{\rho_\varepsilon}(X)}
\end{equation*}
for $p\ge1$.
The uniform estimate is obtained by the Moser iteration method (cf.\ \cite{Gilbarg_Trudinger}). Indeed, set
\begin{equation*}
p_1=1,
\;\;\;
p_k=\paren{\frac{n}{n-1}}^{k-1}.
\end{equation*}
Then it follows that
\begin{equation*}
\norm{u_\varepsilon}_{L^\infty(X)}
=
\lim_{k\rightarrow\infty}\norm{u_\varepsilon}_{L^{2p_k}_{\rho_\varepsilon}(X)}
\le \prod_{k=1}^\infty(Cp_k)^{1/p_k}\norm{u_\varepsilon}_{L^2_{\rho_\varepsilon}(X)}.
\end{equation*}
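The infinite product is finite: as a sketch, with $p_k=\paren{\frac{n}{n-1}}^{k-1}$ growing geometrically,
\begin{equation*}
\log\prod_{k=1}^\infty(Cp_k)^{1/p_k}
=
\sum_{k=1}^\infty\frac{\log C+(k-1)\log\frac{n}{n-1}}{p_k}
<\infty.
\end{equation*}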
This completes the proof.
\end{proof}
\begin{proposition}\label{P:approximation2}
Suppose that there exist constants $C_1>0$ and $C_2>0$ such that
\begin{equation*}
\abs{\int_{X_y}(V\varphi_\varepsilon)(\omega_y)^n}<C_1
\end{equation*}
and
\begin{equation*}
\norm{Vf_\varepsilon}_{C^{k,\alpha}(X_y)}<C_2.
\end{equation*}
Then there exists a constant $C$ which depends only on the constants $C_1$, $C_2$, the lift $V$ and the geometry of $(X_y,\omega_y)$ such that
\begin{equation*}
\norm{V\varphi_\varepsilon}_{C^{k,\alpha}(X_y)}<C
\end{equation*}
for $0<\varepsilon\le1$. In particular, $\{V\varphi_\varepsilon\}_{0<\varepsilon\le1}$ is a relatively compact subset of $C^{k,\alpha}(X_y)$ for any $k\in\NN$ and $\alpha\in(0,1)$.
\end{proposition}
\begin{proof}
Set $\rho_\varepsilon=\omega+dd^c\varphi_\varepsilon$. Note that Proposition \ref{P:approximation1} implies that there exists a uniform constant $C>0$ such that
\begin{equation}\label{E:equivalence}
\frac{1}{C}\omega_y
<
\rho_\varepsilon\vert_{X_y}
<
C\omega_y,
\end{equation}
for $0<\varepsilon\le1$. In an admissible coordinate system $(z^1,\dots,z^n,s)$, the first equation of \eqref{E:ACMAE} is written as follows:
\begin{equation} \label{E:CMAE_coord}
\det(g_{\alpha\bar\beta}+(\varphi_\varepsilon)_{\alpha\bar\beta})
=
e^{\varepsilon\varphi_{\varepsilon}+f_\varepsilon}
\det(g_{\alpha\bar\beta})
\end{equation}
on each $X_y$. Taking the logarithm of \eqref{E:CMAE_coord} and differentiating with respect to $V$, we have
\begin{equation*}
(\rho_\varepsilon)^{\alpha\bar\beta}
V\paren{g_{\alpha\bar\beta}+(\varphi_\varepsilon)_{\alpha\bar\beta}}
=
\varepsilon V\varphi_\varepsilon
+
V f_\varepsilon
+
g^{\alpha\bar\beta}
V\paren{g_{\alpha\bar\beta}}.
\end{equation*}
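Here we used Jacobi's formula for the derivative of a determinant, namely
\begin{equation*}
V\log\det A=\operatorname{tr}\paren{A^{-1}\,VA},
\end{equation*}
applied to $A=\paren{g_{\alpha\bar\beta}+(\varphi_\varepsilon)_{\alpha\bar\beta}}$ and to $A=\paren{g_{\alpha\bar\beta}}$.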
For a smooth function $\xi$, we set
\begin{align*}
[V,\xi]_{\alpha\bar\beta}
&=
V(\xi_{\alpha\bar\beta})-(V\xi)_{\alpha\bar\beta} \\
&=
-a\ind{s}{\gamma}{\alpha\bar\beta}\xi_\gamma
-a\ind{s}{\gamma}{\alpha}\xi_{\gamma\bar\beta}
-a\ind{s}{\gamma}{\bar\beta}\xi_{\alpha\gamma}.
\end{align*}
Note that $[V,\xi]_{\alpha\bar\beta}$ does not involve the $s$-derivative of $\xi$.
Then it follows that
\begin{equation}\label{E:pde_vpve}
\begin{aligned}
-\Delta_{\rho_\varepsilon\vert_{X_y}}\paren{V\varphi_\varepsilon}
+\varepsilon\paren{V\varphi_\varepsilon}
=&
-V f_\varepsilon
-
g^{\alpha\bar\beta}
V\paren{g_{\alpha\bar\beta}}\\
&+
(\rho_\varepsilon)^{\alpha\bar\beta}
\paren{
V\paren{g_{\alpha\bar\beta}}
+
[V,\varphi_\varepsilon]_{\alpha\bar\beta}
}
\end{aligned}
\end{equation}
on each fiber $X_y$, where $\Delta_{\rho_\varepsilon\vert_{X_y}}$ is the Laplace-Beltrami operator on $X_y$ with respect to $\rho_\varepsilon\vert_{X_y}$. Here $V\varphi_\varepsilon$ and $Vf_\varepsilon$ are understood as the restrictions
\begin{equation*}
V\varphi_\varepsilon
=
(V\varphi_\varepsilon)\vert_{X_y}
\;\;\;\text{and}\;\;\;
V f_\varepsilon
=
(Vf_\varepsilon)\vert_{X_y}.
\end{equation*}
From now on, when we consider a family of PDEs, we omit the subscript $X_y$ in the Laplace-Beltrami operator, i.e., we write as follows:
\begin{equation*}
\Delta_{\rho_\varepsilon}=\Delta_{\rho_\varepsilon\vert_{X_y}}.
\end{equation*}
The right-hand side of \eqref{E:pde_vpve} is a globally defined function on $X_y$; call it $R_\varepsilon$. Then we have
\begin{equation}\label{E:pde_vpve'}
-\Delta_{\rho_\varepsilon}(V\varphi_\varepsilon)
+
\varepsilon(V\varphi_\varepsilon)
=
R_\varepsilon.
\end{equation}
This is a second-order elliptic partial differential equation satisfying the hypotheses of Proposition \ref{P:Key_Prop}. This completes the proof.
\end{proof}
\begin{proposition}\label{P:approximation3}
Under the assumptions of Proposition \ref{P:approximation2}, suppose that there exist constants $C_3>0$ and $C_4>0$ such that
\begin{equation*}
\abs{\int_{X_y}\paren{\bar VV\varphi_\varepsilon}(\omega_y)^n}<C_3
\end{equation*}
and
\begin{equation*}
\norm{\bar VVf_\varepsilon}_{C^{k,\alpha}(X_y)}<C_4.
\end{equation*}
Then there exists a constant $C$ which depends only on the constants $C_1, C_2, C_3, C_4$, the lift $V$ and the geometry of $(X_y,\omega_y)$ such that
\begin{equation*}
\norm{\bar VV\varphi_\varepsilon}_{C^{k,\alpha}(X_y)}<C
\end{equation*}
for $0<\varepsilon\le1$. In particular, $\{\bar VV\varphi_\varepsilon\}_{0<\varepsilon\le1}$ is a relatively compact subset of $C^{k,\alpha}(X_y)$ for any $k\in\NN$ and $\alpha\in(0,1)$.
\end{proposition}
\begin{proof}
Differentiating \eqref{E:pde_vpve'} with respect to $\bar V$, we have
\begin{equation*}
-\Delta_{\rho_\varepsilon}\paren{\bar VV\varphi_\varepsilon}
+
\varepsilon\paren{\bar VV\varphi_\varepsilon}
=
\bar V\paren{(\rho_\varepsilon)^{\bar\beta\alpha}}\cdot\paren{V\varphi_\varepsilon}_{\alpha\bar\beta}
+
(\rho_\varepsilon)^{\bar\beta\alpha}[\bar V,V\varphi_\varepsilon]_{\alpha\bar\beta}
+
\bar V(R_\varepsilon).
\end{equation*}
Since $\norm{\varphi_\varepsilon}_{C^{k,\alpha}(X_y)}$ and $\norm{V\varphi_\varepsilon}_{C^{k,\alpha}(X_y)}$ are bounded, the same argument as in the proof of Proposition \ref{P:approximation2} shows that this PDE satisfies the hypotheses of Proposition \ref{P:Key_Prop}. This completes the proof.
\end{proof}
\section{Fiberwise Ricci-flat metrics on Calabi-Yau fibrations}\label{S:fiberwiseRFf}
In this section, we discuss the properties of the fiberwise Ricci-flat metric $\rho$. We first derive a partial differential equation satisfied by the geodesic curvature $c(\rho)$, and then give several applications of this PDE.
\medskip
Let $p:X\rightarrow Y$ be a smooth family of Calabi-Yau manifolds and $\omega$ be a K\"ahler form on $X$. We write $\omega$ as in \eqref{E:local_omega}. Since every fiber $X_y$ is a Calabi-Yau manifold, the first Chern class $c_1(X_y)$ vanishes for each fiber $X_y$. Since $c_1(X_y)$ is represented by the Ricci form of $\omega_y$, we know that
\begin{equation*}
\left[
-dd^c\log\det(g_{\alpha\bar\beta}(\cdot,y))
\right]
=0.
\end{equation*}
By the $dd^c$-lemma, there exists a unique function $\eta_y\in C^\infty(X_y)$ such that
\begin{itemize}
\item $\displaystyle dd^c\eta_y=dd^c\log\det(g_{\alpha\bar\beta})$ and
\item $\int_{X_y} e^{\eta_y}(\omega_y)^n=\int_{X_y}(\omega_y)^n$.
\end{itemize}
For each $y\in Y$, there exists a unique solution $\varphi_y\in C^\infty(X_y)$ of the following complex Monge-Amp\`ere equation on each fiber $X_y$:
\begin{equation}
\begin{aligned}
\paren{\omega_y+dd^c\varphi_y}^n
&=
e^{\eta_y}(\omega_y)^n, \\
\omega_y+dd^c\varphi_y
&>
0,
\end{aligned}
\end{equation}
which is normalized by
\begin{equation*}
\int_{X_y}\varphi_ye^{\eta_y}(\omega_y)^n=0.
\end{equation*}
Then it is easy to see that $\omega_y+dd^c\varphi_y$ is the Ricci-flat K\"ahler metric on $X_y$. As we already mentioned, we can consider $\varphi$ as a smooth function on $X$ by letting $\varphi(x)=\varphi_y(x)$ where $y=p(x)$. Define a real $(1,1)$-form $\rho$ on $X$ by
\begin{equation*}
\rho=\omega+dd^c\varphi.
\end{equation*}
Since $e^{\eta_y}(\omega_y)^n=(\omega_{KE,y})^n$, this is the fiberwise Ricci-flat metric in Theorem \ref{T:main_theorem}.
\medskip
Since every fiber $X_y$ is Calabi-Yau, $K_{X_y}$ is a trivial line bundle for every $y\in Y$.
Hence the direct image bundle $E=p_*(K_{X/Y})$ is a line bundle over $Y$.
Take an admissible coordinate system $(z^1,\dots,z^n,s^1,\dots,s^d)$ in $X$.
Let $u$ be a local holomorphic section of $E$ over an open set $U\subset Y$.
(Shrinking $U$ if necessary, $s=(s^1,\dots,s^d)$ can be considered as a local coordinate in $U$.) Since $E$ is a line bundle, the curvature of $(E,\norm\cdot)$ is given by
\begin{equation*}
\Theta(E)=-dd^c\log\norm{u}_s.
\end{equation*}
We say that $\mathbf{u}$ is a representative of $u$ if $\mathbf{u}$ is an $(n,0)$-form on $p^{-1}(U)$ such that $\mathbf{u}$ restricts to $u_s$ on each fiber $X_s$, i.e.,
\begin{equation*}
\iota_s^*(\mathbf{u}) = u_s
\end{equation*}
where $\iota_s$ is the natural inclusion map from $X_s$ to $X$ (for more details, see \cite{Berndtsson1, Berndtsson2}).
The representative is not uniquely determined, but any two representatives differ by $ds\wedge v$ for some $(n-1,0)$-form $v$.
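As a quick check of this independence: if $\mathbf{u}'=\mathbf{u}+ds\wedge v$ is another representative, then
\begin{equation*}
\mathbf{u}'\wedge\overline{\mathbf{u}'}\wedge dV_s
=
\mathbf{u}\wedge\overline{\mathbf{u}}\wedge dV_s,
\end{equation*}
since every cross term contains $ds\wedge dV_s=0$ or $d\bar s\wedge dV_s=0$.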
Hence if we set $u\wedge\overline u\wedge dV_s:=\mathbf{u}\wedge\overline{\mathbf{u}}\wedge dV_s$, where $dV_s=c_d\, ds\wedge d\bar s$, then this expression does not depend on the choice of the representative. Moreover, it also follows that
\begin{equation*}
\displaystyle{\norm{u}^2_s=c_n\int_{X_s}u\wedge\overline u=c_n\int_{X_s}\mathbf{u}\wedge\overline{\mathbf{u}}}
\end{equation*}
for any representative $\mathbf{u}$ of $u$. In terms of $u$, the function $\eta$ is written explicitly:
\begin{proposition} \label{P:key_p}
On $p^{-1}(U)$, $\eta$ is written as follows:
\begin{equation}\label{E:eta}
\eta(z,s)
=
-\log\frac{\omega^n\wedge dV_s}
{c_n u\wedge \overline{u}\wedge dV_s}
-\log\norm{u}^2_s.
\end{equation}
In particular, we have the following:
\begin{equation*}
dd^c\eta
=
-\Theta_{h^\omega_{X/Y}}(K_{X/Y})+\Theta(E).
\end{equation*}
\end{proposition}
\begin{proof}
Let $\mathbf{u}$ be a representative of $u$. Denote the right hand side of \eqref{E:eta} by $\tilde\eta$.
It is enough to show the following:
\begin{itemize}
\item [1.]$\int_{X_s}e^{\tilde\eta}(\omega_s)^n=1$.
\item [2.]$dd^c\tilde\eta\vert_{X_s}=-dd^c\log\det(g_{\alpha\bar\beta})\vert_{X_s}.$
\end{itemize}
First we compute
\begin{align*}
\int_{X_s}e^{\tilde\eta}(\omega_s)^n
&=
\int_{X_s}\exp\paren{-\log\frac{\omega^n\wedge dV_s}
{c_n \mathbf{u}\wedge \overline{\mathbf{u}}\wedge dV_s}
-\log\norm{u}_s^2}(\omega_s)^n.
\end{align*}
If we write $dz=dz^1\wedge\dots\wedge dz^n$, then
\begin{equation*}
(\omega_s)^n=\det(g_{\alpha\bar\beta})c_n dz\wedge d\bar z
\;\;\;\text{and}\;\;\;
\mathbf{u}\vert_{X_s}=\hat u(z,s)dz
\end{equation*}
for some local holomorphic function $\hat u(z,s)$. It follows that
\begin{align*}
\int_{X_s}e^{\tilde\eta}(\omega_s)^n
&=
\int_{X_s}\exp
\paren{-\log\frac{\det(g_{\alpha\bar\beta})}{c_n\abs{\hat u(z,s)}^2}-\log\norm{u}_s^2
}
(\omega_s)^n \\
&=
\frac{1}{\norm{u}_s^2}\int_{X_s}
\frac{c_n\abs{\hat u(z,s)}^2}{\det(g_{\alpha\bar\beta})}
{\det(g_{\alpha\bar\beta})}dz\wedge d\bar z \\
&=
\frac{1}{\norm{u}_s^2}
\cdot c_n\int_{X_s}
\frac{\abs{\hat u(z,s)}^2}{\det(g_{\alpha\bar\beta})}
{\det(g_{\alpha\bar\beta})}dz\wedge d\bar z \\
&=
\frac{1}{\norm{u}_s^2}
\cdot c_n\int_{X_s}
\hat u(z,s)dz\wedge \overline{\hat u(z,s)dz} \\
&=
\frac{1}{\norm{u}_s^2}
\cdot c_n\int_{X_s}
\mathbf{u}\wedge\overline{\mathbf{u}}
=1
\end{align*}
This yields the first assertion. For the second assertion,
\begin{align*}
dd^c\tilde\eta\vert_{X_s}
&=
-dd^c
\paren{\log\frac{\omega^n\wedge dV_s}
{c_n u\wedge \overline{u}\wedge dV_s}+\log\norm{u_s}^2
}\Big\vert_{X_s}
\\
&=
-dd^c
\paren{\log\det(g_{\alpha\bar\beta})
+\log\abs{\hat u(z,s)}^2
}\Big\vert_{X_s}
\\
&=
-dd^c\log\det(g_{\alpha\bar\beta})\vert_{X_s}.
\end{align*}
Finally, for the curvature formula in the proposition,
\begin{align*}
dd^c\eta
& =
-dd^c\log\frac{\omega^n\wedge dV_s}{c_n u\wedge \overline{u}\wedge dV_s}
-
dd^c\log\norm{u}^2_s
\\
& =
-
dd^c\log\frac{\det(g_{\alpha\bar\beta})c_n dz\wedge d\bar z\wedge dV_s}
{\abs{\hat u(z,s)}^2 c_n dz\wedge d\bar z\wedge dV_s}
-
dd^c\log\norm{u}^2_s
\\
& =
-dd^c\log\det\paren{g_{\alpha\bar\beta}(z,s)}
+
dd^c\log\abs{\hat u(z,s)}^2
-
dd^c\log\norm{u}^2_s \\
& =
-\Theta_{h^\omega_{X/Y}}(K_{X/Y})
+
dd^c\log\abs{\hat u(z,s)}^2
+
\Theta(E)
\\
& =
-\Theta_{h^\omega_{X/Y}}(K_{X/Y})+\Theta(E).
\end{align*}
where the last equality holds since $\hat u(z,s)$ is holomorphic and nonvanishing, so that $dd^c\log\abs{\hat u(z,s)}^2=0$. This completes the proof.
\end{proof}
Since $\rho$ is positive-definite on each fiber, it induces a hermitian metric $h^\rho_{X/Y}$ on $K_{X/Y}$ as in Remark \ref{R:metric_K_X/Y}. The curvature of $h_{X/Y}^\rho$ is computed by Proposition \ref{P:key_p} as follows:
\begin{align*}
\Theta_{h^\rho_{X/Y}}(K_{X/Y})
& =
dd^c\log\paren{\rho^n\wedge\im ds\wedge d\bar s} \\
& =
dd^c\log\paren{(\omega+dd^c\varphi)^n\wedge\im ds\wedge d\bar s}
\\
& =
dd^c\log\paren{e^{\eta}\omega^n\wedge\im ds\wedge d\bar s}
\\
& =
dd^c\eta+\Theta_{h^\omega_{X/Y}}(K_{X/Y}) \\
& =
-\Theta_{h^\omega_{X/Y}}(K_{X/Y})+\Theta(E)
+
\Theta_{h^\omega_{X/Y}}(K_{X/Y})
\\
& =
\Theta(E).
\end{align*}
Here $\Theta(E)$ means $p^*\Theta(E)$. This formula enables us to compute the Laplacian of $c(\rho)$ on each fiber $X_y$:
\begin{theorem}\label{T:PDE0}
Let $V\in T_yY$. Then the following PDE holds on $X_y$:
\begin{equation}\label{E:PDE0}
-\Delta_\rho c(\rho)(V)
=
\abs{\bar\partial V_\rho}_\rho^2-\Theta_{V\bar V}(E).
\end{equation}
\end{theorem}
The computation is quite straightforward. Later, we will prove this in a more general situation (see Theorem \ref{T:PDE}).
\begin{remark}\label{R:sufficient_condition}
To show that $p_*\rho^{n+1}$ is positive on $Y$, it is enough to consider a Calabi-Yau fibration over the unit disc by the following:
\begin{itemize}
\item [1.] Let $\sigma_1$ and $\sigma_2$ be real $(1,1)$-forms on $X$. Suppose that
\begin{equation*}
p_*\paren{\sigma_1\vert_{X_{\gamma(\DD)}}}^{n+1}
\ge
p_*\paren{\sigma_2\vert_{X_{\gamma(\DD)}}}^{n+1}
\end{equation*}
for each holomorphic disc $\gamma:\DD\rightarrow Y$, where $X_{\gamma(\DD)}:=p^{-1}(\gamma(\DD))$.
Then we have
$p_*(\sigma_1)^{n+1}\ge p_*(\sigma_2)^{n+1}$ on $Y$.
\item [2.] Every computation concerning the positivity of $p_*\rho^{n+1}$ is local in $s$-variable, which is a local coordinate in $Y$.
\end{itemize}
Therefore we only consider a family of Calabi-Yau manifolds over the unit disc in $\CC$ as long as we are interested in positivity properties of $p_*\rho^{n+1}$. In this case, \eqref{E:PDE0} becomes
\begin{equation}\label{E:PDE0'}
-\Delta_\rho c(\rho)
=
\abs{\bar\partial v_\rho}_\rho^2-\Theta_{s\bar s}(E),
\end{equation}
where $v=\partial/\partial s$ and $\Theta_{s\bar s}(E)=\Theta(E)(v,\bar v)$. As we mentioned in Section \ref{SS:horizontal_lift}, the positivity of $p_*\rho^{n+1}$ is equivalent to $\int_{X_y}c(\rho)\rho^n>0$.
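Behind this equivalence is the following identity (cf.\ \eqref{E:Semmes}), which we record as a sketch: up to a positive dimensional constant,
\begin{equation*}
p_*\rho^{n+1}
=
\paren{\int_{X_s}c(\rho)\,\rho^n}\,\im\, ds\wedge d\bar s,
\end{equation*}
so the positivity of $p_*\rho^{n+1}$ amounts to the positivity of the fiber integral of the geodesic curvature.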
\end{remark}
\begin{remark}
In the case of a family of canonically polarized compact complex manifolds $p:X\rightarrow\DD$, Schumacher proved that the geodesic curvature $c(\tilde\rho)$ of the form $\tilde\rho$, which is induced by the fiberwise K\"ahler-Einstein metrics of Ricci curvature $-1$, satisfies the following PDE:
\begin{equation}\label{E:Schumacher}
-\Delta_\rho c(\tilde\rho)+c(\tilde\rho)
=
\abs{\bar\partial v_{\tilde\rho}}_{\tilde\rho}^2
\end{equation}
for each fiber $X_y$ (\cite{Schumacher}). This PDE gives a lower bound for $c(\tilde\rho)$ directly by the maximum principle. (A more precise lower bound was also obtained by Schumacher using heat kernel estimates.) Hence the fiberwise K\"ahler-Einstein form $\tilde\rho$ is a semi-positive metric on $X$. However, \eqref{E:PDE0'} does not give a lower bound by the maximum principle.
It is worthwhile to note that the Weil-Petersson metric on the moduli space of canonically polarized manifolds is expressed by the fiberwise K\"ahler-Einstein metric $\tilde\rho$. More precisely, it follows from \eqref{E:Schumacher} and \eqref{E:Semmes} that the Weil-Petersson metric $\omega_{WP}$ is given by
\begin{equation}\label{E:Weil-Petersson}
\omega_{WP}=\int_{X_s}\tilde\rho^{n+1}.
\end{equation}
In the case of the moduli space of polarized Calabi-Yau manifolds, our fiberwise Ricci-flat metric does not give such an identity. Recently, Braun proved that on a family of polarized Calabi-Yau manifolds with vanishing first Betti number there exists a K\"ahler form $\omega_{SRF}$ whose restriction to each fiber is the Ricci-flat metric and which satisfies \eqref{E:Weil-Petersson} (\cite{Braun}).
\end{remark}
At the end of this section, we discuss some applications of Theorem \ref{T:PDE0}.
The \emph{Weil-Petersson form} $\omega^{WP}$ of a family $p:X\rightarrow Y$ is a real $(1,1)$-form on $Y$ which is induced by the following norm:
\begin{equation*}
\norm{V}_{WP}^2
=
\int_{X_y}\norm{\bar\partial V_\rho}_\rho^2
dV_\rho.
\end{equation*}
\begin{proposition} \label{P:norm_dbarv}
For $V\in T'Y$, the following holds:
\begin{equation*}
\norm{\bar\partial V_\rho}_\rho^2
=
\Theta_{V\bar V}(E).
\end{equation*}
In particular, $\omega^{WP}=\Theta(E)$.
\end{proposition}
\begin{proof}
Integrating \eqref{E:PDE0} on $X_y$ gives the conclusion.
\end{proof}
\begin{proposition}\label{P:harmonic_representative}
$\bar\partial V_\rho\cdot u_y$ is the harmonic representative of the cohomology class $K_y(V)\cdot u_y$ with respect to $\rho\vert_{X_y}$.
\end{proposition}
\begin{proof}
Since $E$ is a line bundle, Griffiths' theorem implies that
\begin{equation*}
\Theta_{V\bar V}(E)
=
\frac{\norm{K_y(V)\cdot u_y}^2}{\norm{u_y}^2}.
\end{equation*}
Note that
\begin{equation*}
\bar\partial V_\rho
\in
K_y(V).
\end{equation*}
It follows that
\begin{equation*}
\frac{\norm{K_y(V)\cdot u_y}^2}{\norm{u_y}^2}
\le
\frac{\norm{\bar\partial V_\rho\cdot u_y}^2}{\norm{u_y}^2}.
\end{equation*}
The following lemma is well known (cf.\ \cite{Popovici}).
\begin{lemma}
Let $(X,\omega)$ be a Calabi-Yau manifold. Let $u$ be a non-vanishing holomorphic $n$-form on $X$ such that
\begin{equation*}
\norm{u}^2_\omega
:=\int_X \abs{u}^2_\omega\;dV_\omega
=\int_X dV_\omega
=1.
\end{equation*}
For a holomorphic vector bundle $F$, denote by $A^{(p,q)}(F)$ the space of smooth $(p,q)$-forms with values in $F$. Define a map
$$
T_u:A^{(0,1)}(T'X)\rightarrow A^{(n-1,1)}(X)
$$
by $T_u(V)=V\cdot u$.
Then $T_u$ is an isometry with respect to the pointwise scalar product induced by $\omega$.
\end{lemma}
Hence Proposition \ref{P:norm_dbarv} together with the above lemma implies that
\begin{equation*}
\norm{\bar\partial V_\rho}_\rho^2
=
\Theta_{V\bar V}(E)
=
\frac{\norm{K_y(V)\cdot u_y}^2}{\norm{u_y}^2}
\le
\frac{\norm{\bar\partial V_\rho\cdot u_y}^2}{\norm{u_y}^2}
=
\norm{\bar\partial V_\rho}_\rho^2.
\end{equation*}
It follows that $\bar\partial V_\rho\cdot u_y$ is the harmonic representative with respect to $\rho\vert_{X_y}$ of $K_y(V)\cdot u_y$. This completes the proof.
\end{proof}
\begin{proposition}
Let $p:X\rightarrow Y$ be a Calabi-Yau fibration. If the curvature of the direct image bundle $p_*(K_{X/Y})$ vanishes along a complex curve, then the fibration is trivial along the complex curve.
\end{proposition}
\begin{proof}
Denote by $\gamma$ the complex curve in $Y$. Then $p\vert_\gamma:X_\gamma\rightarrow\gamma$ is a Calabi-Yau fibration over a $1$-dimensional base. If we take $s$ to be a holomorphic coordinate on $\gamma$, then we have Equation \eqref{E:PDE0'} on each fiber $X_y$ for $y\in\gamma$. By the hypothesis, $\Theta_{s\bar s}(E)$ vanishes on $\gamma$. Proposition \ref{P:norm_dbarv} implies that $v_\rho$ is a holomorphic vector field on $X_\gamma$. The flow of $v_\rho$ makes $X_\gamma$ a trivial fibration.
\end{proof}
\section{Proof of Theorem \ref{T:main_theorem} and Theorem \ref{T:main_theorem2}}\label{S:proof}
In this section we shall prove the main theorem. As we mentioned in Remark \ref{R:sufficient_condition}, it is enough to show that $\int_{X/\DD}c(\rho)\rho^n>0$ for a family of Calabi-Yau manifolds over the unit disc in $\CC$.
Let $p:X\rightarrow\DD$ be a smooth family of Calabi-Yau manifolds. For each $\varepsilon$ with $0<\varepsilon\le1$, we consider the following fiberwise complex Monge-Amp\`ere equation on each fiber $X_y$:
\begin{equation}\label{E:PDE1'}
\begin{aligned}
\paren{\omega_y+dd^c\varphi_y}^n
&=
e^{\varepsilon\varphi_y}e^{\eta_y}(\omega_y)^n
\;\;\text{and}
\\
\omega_y+dd^c\varphi_y
&>0,
\end{aligned}
\end{equation}
where $\eta$ is defined in Section \ref{S:fiberwiseRFf}.
Theorem \ref{T:AY} implies that there exists a unique solution $\varphi_{y,\varepsilon}\in C^\infty(X_y)$ of \eqref{E:PDE1'}.
As we mentioned, we can consider $\varphi_\varepsilon$ as a smooth function on $X$ by letting $\varphi_\varepsilon(x):=\varphi_{y,\varepsilon}(x)$, where $y=p(x)$.
We consider next the $(1,1)$-form
\begin{equation} \label{E:rho}
\rho_\varepsilon
:=\omega+dd^c\varphi_\varepsilon
\end{equation}
on the manifold $X$. Since $\rho_\varepsilon$ is positive definite when restricted to $X_y$, it induces a hermitian metric $h^{\rho_\varepsilon}_{X/Y}$ on the bundle $K_{X/Y}$.
By Proposition \ref{P:key_p}, the curvature is computed as follows:
\begin{align*}
\Theta_{h^{\rho_\varepsilon}_{X/Y}}(K_{X/Y})
& =
dd^c\log
\paren{
(\rho_\varepsilon)^n\wedge\im ds\wedge d\bar s
}
\\
& =
dd^c\log
\paren{
(\omega+dd^c\varphi_\varepsilon)^n
\wedge\im ds\wedge d\bar s
}
\\
& =
dd^c\log
\paren{
e^{\varepsilon\varphi_\varepsilon+\eta}
\omega^n\wedge\im ds\wedge d\bar s} \\
& =
dd^c\eta
+
\varepsilon dd^c\varphi_\varepsilon
+
\Theta_{h^\omega_{X/Y}}(K_{X/Y}) \\
& =
\Theta(E)
+
\varepsilon dd^c\varphi_\varepsilon.
\end{align*}
From \eqref{E:rho} we have $dd^c\varphi_\varepsilon=\rho_\varepsilon-\omega$, so
it follows that
\begin{equation}\label{E:Ricci}
\Theta_{h^{\rho_\varepsilon}_{X/Y}}(K_{X/Y})
=
\varepsilon{\rho_\varepsilon}
-
\varepsilon\omega
+
\Theta(E)
\end{equation}
or, equivalently,
\begin{equation*}
\varepsilon{\rho_\varepsilon}
=
\varepsilon\omega
+
\Theta_{h^{\rho_\varepsilon}_{X/Y}}(K_{X/Y})
-
\Theta(E).
\end{equation*}
Our next claim is that the geodesic curvature $c(\rho_\varepsilon)$ satisfies a certain second-order elliptic partial differential equation on each fiber $X_y$.
\medskip
In an admissible coordinate system $(z^1,\dots,z^n,s)$ on $X$, $\rho_\varepsilon$ is written as follows:
\begin{equation*}
\rho_\varepsilon
=
\im\paren{(h_\varepsilon)_{s\bar s}ds\wedge d\bar s
+(h_\varepsilon)_{s\bar\beta}ds\wedge{dz}^{\bar\beta}
+(h_\varepsilon)_{\alpha\bar s}dz^\alpha\wedge d\bar s
+(h_\varepsilon)_{\alpha\bar\beta}dz^\alpha\wedge{dz}^{\bar\beta}
}.
\end{equation*}
For each $y\in\DD$, $(h_\varepsilon)_{\alpha\bar\beta}(\cdot,y)$ gives a K\"{a}hler metric on $X_y$. (If there is no confusion, we simply write $(h_\varepsilon)_{\alpha\bar\beta}$.) Thus we can define contraction and covariant derivative on each $X_y$ with respect to $(h_\varepsilon)_{\alpha\bar\beta}$. We use raising and lowering of indices as well as the semi-colon for the contractions and the covariant derivatives with respect to the K\"{a}hler metric $(h_\varepsilon)_{\alpha\bar\beta}$, respectively, on the fiber $X_y$. We denote by $\Delta_{\rho_\varepsilon}=\Delta_{\rho_\varepsilon\vert_{X_y}}$ the Laplace-Beltrami operator with negative eigenvalues on the fiber $X_y$ with respect to $\rho_\varepsilon\vert_{X_y}$.
By raising of indices, we can write the horizontal lift $v_{\rho_\varepsilon}$ of $v=\partial/\partial s$ with respect to $\rho_\varepsilon$ by
\begin{equation*}
v_{\rho_\varepsilon}
=
\pd{}{s}
-(h_\varepsilon)_{s\bar\beta}(h_\varepsilon)^{\bar\beta\alpha}\pd{}{z^\alpha}
=
\pd{}{s}
-(h_\varepsilon)\ind{s}{\alpha}{}\pd{}{z^\alpha}.
\end{equation*}
Then $\bar\partial{v_{\rho_\varepsilon}}$ is a $T'X_y$-valued $(0,1)$-form which is defined by
\begin{align*}
\bar\partial{v_{\rho_\varepsilon}}
&=
\bar\partial\paren{\pd{}{s}
-(h_\varepsilon)_{s}^{\phantom{i}\alpha}\pd{}{z^\alpha}} \\
&=
\paren{-\bar\partial{(h_\varepsilon)}\ind{s}{\alpha}{}}\otimes\pd{}{z^\alpha} \\
&=
-\pd{(h_\varepsilon)\ind{s}{\alpha}{}}{z^{\bar\beta}}dz^{\bar\beta}\otimes\pd{}{z^\alpha}.
\end{align*}
Since $(h_\varepsilon)_{\alpha\bar\beta}$ is a K\"{a}hler metric and we use holomorphic coordinates, $\bar\partial v_{\rho_\varepsilon}$ is written as
\begin{equation*}
\bar\partial v_{\rho_\varepsilon}
=
-(h_\varepsilon)\ind{s}{\alpha}{;\bar\beta}dz^{\bar\beta}\otimes\pd{}{z^\alpha}.
\end{equation*}
Then Remark \ref{R:horizontal_lift} says that the geodesic curvature $c(\rho_\varepsilon):X\rightarrow\RR$ is given by
\begin{align*}
c(\rho_\varepsilon)(z,s)
&=
\inner{v_{\rho_\varepsilon},v_{\rho_\varepsilon}}_{\rho_\varepsilon}\\
&=
(h_\varepsilon)_{s\bar{s}}
-
(h_\varepsilon)_{s\bar\beta}(h_\varepsilon)^{\bar\beta\alpha}(h_\varepsilon)_{\alpha\bar{s}}.
\end{align*}
The following theorem is inspired by Schumacher's method in \cite{Schumacher}. P\v aun generalized the computation to the twisted K\"ahler-Einstein metric case (\cite{Paun2}). (See also \cite{Choi1}.)
\begin{theorem}\label{T:PDE}
The following partial differential equation holds on each fiber $X_y$:
\begin{equation*}
-\Delta_{\rho_\varepsilon} c(\rho_\varepsilon)
+
\varepsilon c(\rho_\varepsilon)
=
\varepsilon\omega(v_{\rho_\varepsilon},\overline{v_{\rho_\varepsilon}})
+
\abs{\bar\partial v_{\rho_\varepsilon}}_{\rho_\varepsilon}^2
-
\Theta_{s\bar s}(E),
\end{equation*}
where $\abs{\bar\partial v_{\rho_\varepsilon}}_{\rho_\varepsilon}$ is the pointwise norm of $\bar\partial v_{\rho_\varepsilon}$ with respect to the K\"ahler metric $\rho_\varepsilon\vert_{X_y}$.
\end{theorem}
\begin{proof}
We fix a fiber $X_y$ and $\varepsilon>0$. During this proof, if there is no confusion, we omit the subscript $\varepsilon$ in the components of $\rho_\varepsilon$ for simplicity; namely, we write as follows:
\begin{equation*}
h_{s\bar s}=(h_\varepsilon)_{s\bar s},
\quad
h_{s\bar\beta}=(h_\varepsilon)_{s\bar\beta}
\quad\text{and}\quad
h_{\alpha\bar\beta}=(h_\varepsilon)_{\alpha\bar\beta}.
\end{equation*}
We have to compute the following:
\begin{equation*}
\Delta_{\rho_\varepsilon} c(\rho_\varepsilon)
=h^{\bar\delta\gamma}(c(\rho_\varepsilon))_{;\gamma\bar\delta}
=h^{\bar\delta\gamma}\paren{
h_{s\bar{s}}-h_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar{s}}
}_{;\gamma\bar\delta}.
\end{equation*}
First we consider the term $h^{\bar\delta\gamma}h_{s\bar{s};\gamma\bar\delta}$.
Since $\omega$ is a K\"ahler form on $X$, $\rho_\varepsilon$ is locally $\ddbar$-exact. So we have that
\begin{align*}
h_{s\bar{s};\gamma\bar\delta}
& =\pd{^2h_{s\bar{s}}}{z^\gamma\partial{z}^{\bar\delta}}
= \pd{^2}{s\partial\bar{s}}h_{\gamma\bar\delta}.
\end{align*}
Then it follows that
\begin{align*}
h^{\bar\delta\gamma}h_{s\bar{s};\gamma\bar\delta}
& = h^{\bar\delta\gamma}\pd{^2}{s\partial\bar{s}}h_{\gamma\bar\delta} \\
& = \pd{}{s}\paren{h^{\bar\delta\gamma}\pd{}{\bar{s}}h_{\gamma\bar\delta}}
- \pd{h^{\bar\delta\gamma}}{s}\pd{h_{\gamma\bar\delta}}{\bar{s}}
\\
& =\pd{^2}{s\partial\bar{s}}\log{\det(h_{\alpha\bar\beta})}
+h^{\bar\delta\alpha}\pd{h_{\alpha\bar\beta}}{s}
h^{\bar\beta\gamma}\pd{h_{\gamma\bar\delta}}{\bar{s}}
\end{align*}
By \eqref{E:Ricci}, we have
\begin{equation*}
\pd{^2}{s\partial\bar{s}}\log{\det(h_{\alpha\bar\beta})}
=
\varepsilon\rho_\varepsilon\paren{\pd{}{s},\pd{}{\bar s}}
-
\varepsilon\omega\paren{\pd{}{s},\pd{}{\bar s}}
+
\Theta_{s\bar s}(E).
\end{equation*}
Hence it follows that
\begin{equation}\label{E:first_term}
h^{\bar\delta\gamma}h_{s\bar{s};\gamma\bar\delta}
=
\varepsilon
\paren{
h_{s\bar s}
-
g_{s\bar s}
}
+
\Theta_{s\bar s}(E)
+
h_{s\bar\beta;\alpha}
h_{\bar{s}\gamma;\bar\delta}
h^{\bar\beta\gamma}h^{\bar\delta\alpha}.
\end{equation}
Next we consider the term
$h^{\bar\delta\gamma}\paren{h_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar{s}}}_{;\gamma\bar\delta}$, which can be written as
$$
h^{\bar\delta\gamma}\paren{h\ind{s}{\alpha}{} h_{\alpha\bar{s}}}_{;\gamma\bar\delta}.
$$
Define a tensor $\{A\ind{s}{\alpha}{\bar\beta}\}$ by
$$
A\ind{s}{\alpha}{\bar\beta}=-h\ind{s}{\alpha}{;\bar\beta}.
$$
Then it follows that
\begin{equation*}
\bar\partial{v_\rho}=A\ind{s}{\alpha}{\bar\beta}\pd{}{z^\alpha}\otimes{dz}^{\bar\beta}.
\end{equation*}
Hence we have
\begin{align*}
h^{\bar\delta\gamma}\paren{h\ind{s}{\sigma}{}h_{\bar{s}\sigma}}_{;\gamma\bar\delta}
& = h^{\bar\delta\gamma}\paren{
h\ind{s}{\sigma}{;\gamma\bar\delta}h_{\bar{s}\sigma}
+A\ind{s}{\sigma}{\bar\delta}A_{\bar{s}\sigma\gamma}
+h\ind{s}{\sigma}{;\gamma}h_{\bar{s}\sigma;\bar\delta}
+h\ind{s}{\sigma}{}A_{\bar{s}\sigma\gamma;\bar\delta}
}
\\
& := I_1+I_2+I_3+I_4.
\end{align*}
First of all, it is obvious that
\begin{align*}
I_2
=
A\ind{s}{\sigma}{\bar\delta}A_{\bar{s}\sigma\gamma}h^{\bar\delta\gamma}
=
\abs{\bar\partial{v_{\rho_\varepsilon}}}_{\rho_\varepsilon}^2.
\end{align*}
The term $I_3$ equals $h_{s\bar\beta;\alpha}h_{\bar{s}\gamma;\bar\delta}h^{\bar\beta\gamma}h^{\bar\delta\alpha}$, which appeared in \eqref{E:first_term}; these terms cancel in the final computation.
Before computing $I_1$ and $I_4$, we introduce some ingredients. Let $R\ind{}{\delta}{\alpha\bar\beta\gamma}$ be the Riemann curvature tensor of $\rho_\varepsilon\vert_{X_y}$. Then, by the commutation formula for covariant derivatives, we have
\begin{equation} \label{E:commutation}
T\ind{}{\alpha}{;\bar\beta\gamma}
-T\ind{}{\alpha}{;\gamma\bar\beta}
=R\ind{}{\alpha}{\delta\bar\beta\gamma}T^\delta.
\end{equation}
Let $R_{\alpha\bar\beta}:=R\ind{}{\gamma}{\alpha\bar\beta\gamma}$ be the Ricci tensor of $\rho_\varepsilon\vert_{X_y}$. By the definition of $h^{\rho_\varepsilon}_{X/Y}$ in Remark \ref{R:metric_K_X/Y}, we have
\begin{equation*}
\Theta_{h^{\rho_\varepsilon}_{X/Y}}\vert_{X_y}=-\mathrm{Ric}(\rho_\varepsilon\vert_{X_y}).
\end{equation*}
Hence it follows from \eqref{E:Ricci} that
\begin{equation*}
R_{\alpha\bar\beta}
=
\varepsilon h_{\alpha\bar\beta}
-
\varepsilon g_{\alpha\bar\beta}.
\end{equation*}
\begin{lemma} \label{L:harmonic}
Let $\bar\partial^*_{\rho_\varepsilon}$ be the adjoint of $\bar\partial$ with respect to the $L^2$-inner product with $\rho_\varepsilon\vert_{X_y}$, which is defined by
\begin{equation*}
\bar\partial^*\paren{A\ind{s}{\alpha}{\bar\beta}
\pd{}{z^\alpha}\otimes{dz}^{\bar\beta}}
:=h^{\bar\beta\gamma}A\ind{s}{\alpha}{\bar\beta;\gamma}\pd{}{z^\alpha}
\end{equation*}
Then we have the following:
\begin{equation}\label{E:dbarstar}
\bar\partial^*
\paren{
\bar\partial v_{\rho_\varepsilon}
}
=
\varepsilon
\paren{
g_{s\bar\delta}h^{\bar\delta\alpha}
-h_{s\bar\delta}g^{\bar\delta\alpha}
}
\pd{}{z^\alpha}.
\end{equation}
In particular, we have
\begin{equation*}
h^{\bar\beta\gamma}A\ind{s}{\alpha}{\bar\beta;\gamma}
=
\varepsilon
\paren{
g_{s\bar\delta}h^{\bar\delta\alpha}
-h_{s\bar\delta}g^{\bar\delta\alpha}
}.
\end{equation*}
\end{lemma}
\begin{proof}
Since the Riemannian connection induced by a K\"{a}hler metric is torsion-free, we have
\begin{align*}
h^{\bar\beta\gamma}A\ind{s}{\alpha}{\bar\beta;\gamma}
= -h^{\bar\beta\gamma}h^{\bar\delta\alpha}h_{s\bar\delta;\bar\beta\gamma}
= -h^{\bar\beta\gamma}h^{\bar\delta\alpha}h_{s\bar\beta;\bar\delta\gamma}.
\end{align*}
By \eqref{E:Ricci} and \eqref{E:commutation}, it follows that
\begin{align*}
h^{\bar\beta\gamma}A\ind{s}{\alpha}{\bar\beta;\gamma}
&=
-h^{\bar\beta\gamma}h^{\bar\delta\alpha}
\bparen{h_{s\bar\beta;\gamma\bar\delta}
-h_{s\bar\tau}R\ind{}{\bar\tau}{\bar\beta\bar\delta\gamma}
}
\\
&=
-h^{\bar\delta\alpha}
\bparen{
\paren{h^{\bar\beta\gamma}\pd{h_{\bar\beta\gamma}}{s}}_{;\bar\delta}
-h_{s\bar\tau}h^{\bar\beta\gamma}R\ind{}{\bar\tau}{\bar\beta\bar\delta\gamma}
}
\\
&=
-h^{\bar\delta\alpha}
\bparen{
\paren{\pd{}{s}\log\det(h_{\alpha\bar\beta})}_{;\bar\delta}
+h_{s\bar\tau}R\ind{}{\bar\tau}{\bar\delta}
}
\\
&=
-h^{\bar\delta\alpha}
\bparen{
(\Theta_{h^{\rho_\varepsilon}_{X/Y}})_{s\bar\delta}
+h_{s\bar\tau}h^{\bar\tau\gamma}R_{\gamma\bar\delta}
}
\\
&=
-h^{\bar\delta\alpha}
\bparen{
(\Theta_{h^{\rho_\varepsilon}_{X/Y}})_{s\bar\delta}
-h_{s\bar\tau}h^{\bar\tau\gamma}(\Theta_{h^{\rho_\varepsilon}_{X/Y}})_{\gamma\bar\delta}
}
\\
&=
-\varepsilon
h^{\bar\delta\alpha}
\bparen{
h_{s\bar\delta}-g_{s\bar\delta}
-h_{s\bar\tau}h^{\bar\tau\gamma}
\paren{h_{\gamma\bar\delta}-g_{\gamma\bar\delta}}
}
\\
&=
\varepsilon
\paren{
g_{s\bar\delta}h^{\bar\delta\alpha}
-h_{s\bar\delta}g^{\bar\delta\alpha}
}
\end{align*}
This completes the proof.
\end{proof}
Next we compute the term $I_1$:
\begin{align*}
I_1
&=
h_{\bar{s}\sigma}h\ind{s}{\sigma}{;\gamma\bar\delta}h^{\bar\delta\gamma}
\\
&=
h_{\bar{s}\sigma}
\paren{
-A\ind{s}{\sigma}{\bar\delta;\gamma}h^{\bar\delta\gamma}
+h\ind{s}{\lambda}{}R\ind{}{\sigma}{\lambda\gamma\bar\delta}
h^{\bar\delta\gamma}
}
\\
&=
h_{\bar{s}\sigma}
\bparen{
-\varepsilon
\paren{
g_{s\bar\delta}h^{\bar\delta\sigma}
-h_{s\bar\delta}g^{\bar\delta\sigma}
}
-h\ind{s}{\lambda}{}R\ind{}{\sigma}{\lambda}
}
\\
&=
h_{\bar{s}\sigma}
\bparen{
-\varepsilon
\paren{
g_{s\bar\delta}h^{\bar\delta\sigma}
-h_{s\bar\delta}g^{\bar\delta\sigma}
}
-h_{s\bar\lambda}R^{\sigma\bar\lambda}
}
\\
&=
h_{\bar{s}\sigma}
\bparen{
-\varepsilon
\paren{
g_{s\bar\delta}h^{\bar\delta\sigma}
-h_{s\bar\delta}g^{\bar\delta\sigma}
}
+h_{s\bar\lambda}\varepsilon
\paren{h^{\sigma\bar\lambda}-g^{\sigma\bar\lambda}
}
}
\\
&=
\varepsilon
h_{\bar{s}\sigma}
\bparen{
-g_{s\bar\delta}h^{\bar\delta\sigma}
+h_{s\bar\delta}g^{\bar\delta\sigma}
+
h_{s\bar\lambda}\paren{
h^{\sigma\bar\lambda}-g^{\sigma\bar\lambda}
}
}
\\
&=
\varepsilon\paren{
h_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
-
g_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
}.
\end{align*}
Finally we compute the term $I_4$:
\begin{align*}
I_4
&=
h^{\gamma\bar\delta}h\ind{s}{\sigma}{}A_{\bar{s}\sigma\gamma;\bar\delta}
\\
&=
h_{s\bar\sigma}
h^{\gamma\bar\delta}A\ind{\bar{s}}{\bar\sigma}{\gamma;\bar\delta}
\\
&=
h_{s\bar\sigma}
\varepsilon\paren{
g_{\bar s\delta}h^{\delta\bar\sigma}
-h_{\bar s\delta}g^{\delta\bar\sigma}
}
\\
&=
\varepsilon\paren{
h_{s\bar\beta}h^{\bar\beta\alpha}g_{\alpha\bar s}
-
h_{s\bar\beta}g^{\bar\beta\alpha}h_{\alpha\bar s}
}.
\end{align*}
Putting all of the computations together, it follows that
\begin{align*}
\Delta_{\rho_\varepsilon} c(\rho_\varepsilon)
&=
\varepsilon(h_{s\bar s}-g_{s\bar s})
+
\Theta_{s\bar s}(E)
-
\abs{\bar\partial{v_{\rho_\varepsilon}}}_{\rho_\varepsilon}^2
\\
&\;\;\;-
\varepsilon
\paren{
h_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
-
g_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
}
\\
&\;\;\;-
\varepsilon
\paren{
h_{s\bar\beta}h^{\bar\beta\alpha}g_{\alpha\bar s}
-
h_{s\bar\beta}g^{\bar\beta\alpha}h_{\alpha\bar s}
}
\\
&=
\Theta_{s\bar s}(E)
-
\abs{\bar\partial{v_{\rho_\varepsilon}}}_{\rho_\varepsilon}^2
+
\varepsilon
\paren{
h_{s\bar s}
-h_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
}
\\
&\;\;\;+
\varepsilon\paren{
g_{s\bar s}
-g_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
-h_{s\bar\beta}h^{\bar\beta\alpha}g_{\alpha\bar s}
+h_{s\bar\beta}g^{\bar\beta\alpha}h_{\alpha\bar s}
}.
\end{align*}
Since
\begin{equation*}
\omega(v_{\rho_\varepsilon},\overline{v_{\rho_\varepsilon}})
=
g_{s\bar s}-g_{s\bar\beta}h^{\bar\beta\alpha}h_{\alpha\bar s}
-h_{s\bar\beta}h^{\bar\beta\alpha}g_{\alpha\bar s}
+h_{s\bar\beta}g^{\bar\beta\alpha}h_{\alpha\bar s},
\end{equation*}
it follows that
\begin{equation*}
-\Delta_{\rho_\varepsilon} c(\rho_\varepsilon)
+\varepsilon c(\rho_\varepsilon)
=
\varepsilon\omega(v_{\rho_\varepsilon},\overline{v_{\rho_\varepsilon}})
+
\abs{\bar\partial{v_{\rho_\varepsilon}}}_{\rho_\varepsilon}^2
-
\Theta_{s\bar s}(E).
\end{equation*}
Therefore, we have the conclusion.
\end{proof}
\begin{corollary}\label{C:PDE}
Let $\rho$ be the fiberwise Ricci-flat metric in Theorem \ref{T:main_theorem}. Then the following PDE holds on each fiber $X_y$:
\begin{equation*}
-\Delta_\rho c(\rho)=\abs{\bar\partial v_\rho}_\rho^2-\Theta_{s\bar s}(E).
\end{equation*}
\end{corollary}
\begin{proof}
Recall that the fiberwise Ricci-flat metric $\rho$ satisfies the following:
\begin{equation*}
\Theta_{h^\rho_{X/Y}}(K_{X/Y})
=
-dd^c\log\norm{u}_s^2
=
\Theta(E)
\end{equation*}
Applying the computation in the proof of Theorem \ref{T:PDE} to $\rho$, using the above equation, we obtain the conclusion.
On the other hand, it is also an easy consequence of the convergence of $\rho_\varepsilon$ to $\rho$ as $\varepsilon\rightarrow0$, after passing to a subsequence, for each $y\in Y$.
(More precisely, the function $\varphi_\varepsilon$ converges to $\varphi$ as $\varepsilon\rightarrow0$.) This will be proved in the next section.
\end{proof}
\begin{remark}\label{R:PDE}
The computations in Corollary \ref{C:PDE} do not use the normalization condition of $\varphi$.
Hence it is easy to see that for any $d$-closed smooth real $(1,1)$-form $\tau$ whose restriction to each fiber is the Ricci-flat metric, we have
\begin{equation*}
-\Delta_\tau c(\tau)
=
\abs{\bar\partial v_\tau}_\tau^2-\Theta_{s\bar s}(E).
\end{equation*}
\end{remark}
We are now in a position to prove the positivity of the direct image $p_*\rho^{n+1}$.
As we mentioned in Subsection \ref{SS:horizontal_lift}, it is enough to show that the fiber integral $\int_{X_s}c(\rho)\rho^n$ is positive. It follows from Theorem \ref{T:PDE} and Proposition \ref{P:harmonic_representative} that
\begin{align*}
\int_{X/\DD}
c(\rho_\varepsilon)\rho_\varepsilon^n
&=
\int_{X_s}
\frac{1}{\varepsilon}
\bparen{
\Delta_{\rho_\varepsilon}c(\rho_\varepsilon)
+
\abs{\bar\partial v_{\rho_\varepsilon}}^2_{\rho_\varepsilon}
-
\Theta_{s\bar s}(E)
+\varepsilon\omega(v_{\rho_\varepsilon},\bar v_{\rho_\varepsilon})
}
\rho_\varepsilon^n
\\
&=
\frac{1}{\varepsilon}
\bparen{
\norm{\bar\partial v_{\rho_\varepsilon}}^2_{L^2_{\rho_\varepsilon}(X_s)}
-
\Theta_{s\bar s}(E)
}
+
\int_{X_s}
\omega(v_{\rho_\varepsilon},\bar v_{\rho_\varepsilon})
\rho_\varepsilon^n
\\
&\ge
\int_{X_s}
\omega(v_{\rho_\varepsilon},\bar v_{\rho_\varepsilon})
\rho_\varepsilon^n.
\end{align*}
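Two facts were used here, as a sketch: first,
\begin{equation*}
\int_{X_s}\Delta_{\rho_\varepsilon}c(\rho_\varepsilon)\,\rho_\varepsilon^n=0,
\end{equation*}
since $X_s$ is a closed manifold; second, $\norm{\bar\partial v_{\rho_\varepsilon}}^2_{L^2_{\rho_\varepsilon}(X_s)}\ge\Theta_{s\bar s}(E)$, since $\bar\partial v_{\rho_\varepsilon}$ represents the Kodaira-Spencer class $K_s(v)$ and, by the argument of Proposition \ref{P:harmonic_representative}, the minimal $L^2$-norm in this class is attained by the harmonic representative and equals $\Theta_{s\bar s}(E)$. We also used the normalization $\int_{X_s}\rho_\varepsilon^n=1$ to take $\Theta_{s\bar s}(E)$ out of the integral.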
We already know that on each fiber $X_s$, $\rho_\varepsilon\vert_{X_s}$ converges to $\rho\vert_{X_s}$ by Corollary \ref{C:convergence_vp}.
Therefore, Proposition \ref{P:conv_geo_curv}, which will be proved in the next section, says that
\begin{equation}\label{E:lower_bound}
\int_{X/\DD}
c(\rho)\rho^n
\ge
\int_{X_s}
\omega(v_{\rho},\bar v_{\rho})
\rho^n.
\end{equation}
In particular, $p_*\rho^{n+1}$ is positive.
\begin{proposition} \label{P:conv_geo_curv}
On each fiber $X_y$, there exists a sequence $\{\varepsilon_j\}_{j\in\NN}$ converging to $0$ as $j\rightarrow\infty$ such that
\begin{equation*}
c(\rho_{\varepsilon_j})\rightarrow c(\rho)
\;\;\;\text{and}\;\;\;
\bar\partial v_{\rho_{\varepsilon_j}}
\rightarrow
\bar\partial v_{\rho}
\;\;\;\text{as}\;\;\;
j\rightarrow\infty.
\end{equation*}
\end{proposition}
We end this section with the proof of Theorem \ref{T:main_theorem2}.
\begin{proof}[Proof of Theorem \ref{T:main_theorem2}]
It is enough to prove when the family is over the unit disc $\DD$ in $\CC$.
Let $s\in\DD$. Recall that $c(\rho)$ satisfies
\begin{equation*}
-\Delta_\rho c(\rho)
=
\abs{\bar\partial v_\rho}_\rho^2-\Theta_{s\bar s}(E).
\end{equation*}
It follows from the Green kernel formula that
\begin{equation*}
c(\rho)
=
\int_{X_s}c(\rho)\rho^n
+
\int_{X_s}G_s(z,w)
\paren{
\abs{\bar\partial v_\rho}_\rho^2
-
\Theta_{s\bar s}(E)
}
\rho^n
\end{equation*}
Since the integral of $G_s(z,\cdot)$ is zero and $\Theta_{s\bar s}(E)$ is constant on $X_s$, the term involving $\Theta_{s\bar s}(E)$ drops out; using a constant $K(s)$ with $G_s\ge-K(s)$, it follows that
\begin{align*}
c(\rho)
&=
\int_{X_s}c(\rho)\rho^n
+
\int_{X_s}G_s(z,w)
\abs{\bar\partial v_\rho}_\rho^2
\rho^n
\\
&\ge
\int_{X_s}c(\rho)\rho^n
-
K(s)
\int_{X_s}\abs{\bar\partial v_\rho}_\rho^2
\rho^n
\\
&=
\int_{X_s}c(\rho)\rho^n
-
K(s)\,
\omega^{WP}(v,\bar v).
\end{align*}
Equation \eqref{E:lower_bound} implies that
\begin{equation*}
c(\rho)+K(s)\omega^{WP}(v,\bar v)
\ge
\int_{X_s}c(\rho)\rho^n
\ge
\int_{X_s}\omega(v_{\rho},\bar v_{\rho})
\rho^n>0.
\end{equation*}
On the other hand, it is easy to see that
\begin{equation*}
c\paren{\rho+K(s)\omega^{WP}}
=
c(\rho)+K(s)\omega^{WP}(v,\bar v).
\end{equation*}
This shows that $\rho+K(s)\omega^{WP}$ is positive on $X$.
\end{proof}
\section{Approximation of the geodesic curvature}\label{S:app_geo_curv}
In this section, we shall prove Proposition \ref{P:conv_geo_curv}.
First we recall the setting:
Let $p:X\rightarrow\DD$ be a Calabi-Yau fibration and let $\omega$ be a fixed K\"ahler form on $X$. For each fiber $X_y$, we have a unique solution $\varphi_{y,\varepsilon}$ of the following complex Monge-Amp\`ere equation:
\begin{equation}\label{E:CMAEvpve}
\begin{aligned}
\paren{\omega_y+dd^c\varphi_{y,\varepsilon}}^n
&=
e^{\varepsilon\varphi_{y,\varepsilon}}e^{\eta_y}(\omega_y)^n
\;\;\text{and}
\\
\omega_y+dd^c\varphi_{y,\varepsilon}
&>0,
\end{aligned}
\end{equation}
where $\eta$ is defined in Section \ref{S:fiberwiseRFf}.
As we mentioned, we can consider $\varphi_\varepsilon$ as a smooth function on $X$ by letting
\begin{equation*}
\varphi_\varepsilon(x):=\varphi_{y,\varepsilon}(x),
\end{equation*}
where $y=p(x)$.
Set $\rho_\varepsilon=\omega+dd^c\varphi_\varepsilon$.
On the other hand, for each fiber $X_y$, we have the solution $\varphi_y$ of the following complex Monge-Amp\`ere equation:
\begin{equation}\label{E:CMAEvp}
\begin{aligned}
\paren{\omega_y+dd^c\varphi_y}^n &= e^{\eta\vert_{X_y}}(\omega_y)^n, \\
\omega_y+dd^c&\varphi_y>0,
\end{aligned}
\end{equation}
which is normalized by
\begin{equation}\label{E:normalization}
\int_{X_y}\varphi_y e^{\eta_y}(\omega_y)^n=0.
\end{equation}
Then $\varphi$ is a smooth function on $X$. Set $\rho=\omega+dd^c\varphi$.
Note that $\rho_\varepsilon$ and $\rho$ are uniformly equivalent on $X_y$ by Proposition \ref{P:approximation1}.
In this section, we write the horizontal lift $v_\rho$ of $\partial/\partial s$ with respect to $\rho$ as
\begin{equation*}
v_\rho=\pd{}{s}+a\ind{s}{\alpha}{}\pd{}{z^\alpha}
=
\pd{}{s}-h_{s\bar\beta}h^{\bar\beta\alpha}\pd{}{z^\alpha}
\end{equation*}
in an admissible coordinate system $(z,s)$ on $X$.
\begin{theorem}\label{T:convergence}
For a fixed fiber $X_y$, the following holds:
\begin{equation*}
\varphi_\varepsilon\rightarrow \varphi,
\;\;\;
v_\rho\varphi_\varepsilon\rightarrow
v_\rho\varphi
\;\;\;
\text{and}
\;\;\;
\overline{v_\rho} v_\rho\varphi_\varepsilon
\rightarrow
\overline{v_\rho}v_\rho\varphi
\end{equation*}
as $\varepsilon\rightarrow0$ in the $C^{k,\alpha}(X_y)$-topology for any $k\in\NN$ and $\alpha\in(0,1)$, after passing to a subsequence.
\end{theorem}
It is obvious that this theorem implies Proposition \ref{P:conv_geo_curv}.
\medskip
In the proof, we fix a fiber $X_y$ and omit the subscript $y$ if there is no confusion. Every convergence is understood as convergence of a subsequence in the topology of $C^{k,\alpha}(X_y)$ for any $k\in\NN$ and $\alpha\in(0,1)$.
It is easy to see that Corollary \ref{C:convergence_vp} yields the first assertion.
This also implies that there exists a uniform constant $C>0$ such that
\begin{equation}\label{E:equivalence'}
\frac{1}{C}\omega_y
<
\rho_\varepsilon\vert_{X_y}
<
C\omega_y,
\end{equation}
for $0<\varepsilon\le1$.
\medskip
Before continuing with the proof of Theorem \ref{T:convergence}, we introduce the following proposition about fiber integrals.
\begin{proposition}\label{P:Lie_derivative}
Let $\tau$ be a $d$-closed real $(1,1)$-form on $X$ whose restriction on each fiber $X_s$ is positive definite.
For a smooth function $f$ on $X$, we have
\begin{equation*}
\pd{}{s}\int_{X_s}f\tau^n
=
\int_{X_s}L_{v_\tau}\paren{f\tau^n}
=
\int_{X_s}(v_\tau f)\tau^n.
\end{equation*}
In particular, if $\int_{X_s}f\tau^n=0$ for $s\in\DD$, then
$$
\int_{X_s}(v_\tau f)\tau^n=0.
$$
\end{proposition}
\begin{proof}
The first equality is mentioned in Section 3.2. Cartan's magic formula and Stokes' theorem imply that
\begin{align*}
\pd{}{s}\int_{X_s}f\tau^n
&=
\int_{X_s}L_{v_\tau}\paren{f\tau^n}\\
&=
\int_{X_s} \paren{d\circ i_{v_\tau}+i_{v_\tau}\circ d}
\paren{f\tau^n}\\
&=
\int_{X_s} d\paren{i_{v_\tau}\paren{f\tau^n}}
+
\int_{X_s} i_{v_\tau}\paren{df\wedge\tau^n}\\
&=
\int_{X_s} (v_\tau f)\tau^n
-
\int_{X_s} df\wedge i_{v_\tau}(\tau^n).
\end{align*}
On the other hand, Lemma \ref{L:contraction} implies that
\begin{equation*}
i_{v_\tau}(\tau^n)
=
i_{v_\tau}(\tau)\wedge\tau^{n-1}
=
\im c(\tau)\wedge\tau^{n-1}\wedge d\bar s.
\end{equation*}
Hence we have
\begin{equation*}
\int_{X_s} df\wedge i_{v_\tau}(\tau^n)
=
\int_{X_s} \im c(\tau)df\wedge\tau^{n-1}\wedge d\bar s=0.
\end{equation*}
This completes the proof.
\end{proof}
Now we go back to the proof of the second assertion. Taking the logarithm of \eqref{E:CMAEvpve} and differentiating with respect to $v_\rho$, we have
\begin{equation*}
(h_\varepsilon)^{\bar\beta\alpha}
v_\rho\paren{g_{\alpha\bar\beta}+(\varphi_\varepsilon)_{\alpha\bar\beta}}
=
\varepsilon v_\rho\varphi_\varepsilon
+
v_\rho\eta
+
g^{\bar\beta\alpha}v_\rho(g_{\alpha\bar\beta}).
\end{equation*}
As in Section 3, we have
\begin{equation*}
-\Delta_{\rho_\varepsilon}\paren{v_\rho\varphi_\varepsilon}
+\varepsilon\paren{v_\rho\varphi_\varepsilon}
=
-v_\rho\eta
+
(h_\varepsilon)^{\alpha\bar\beta}
\paren{
v_\rho\paren{g_{\alpha\bar\beta}}
+
[v_\rho,\varphi_\varepsilon]_{\alpha\bar\beta}
}
-
g^{\alpha\bar\beta}
v_\rho\paren{g_{\alpha\bar\beta}},
\end{equation*}
where $\Delta_{\rho_\varepsilon}$ is the Laplace-Beltrami operator of $\rho_\varepsilon$ and
\begin{align*}
[v_\rho,\varphi_\varepsilon]_{\alpha\bar\beta}
&=
v_\rho((\varphi_\varepsilon)_{\alpha\bar\beta})-(v_\rho(\varphi_\varepsilon))_{\alpha\bar\beta} \\
&=
-a\ind{s}{\gamma}{\alpha\bar\beta}(\varphi_\varepsilon)_\gamma
-a\ind{s}{\gamma}{\alpha}(\varphi_\varepsilon)_{\gamma\bar\beta}
-a\ind{s}{\gamma}{\bar\beta}(\varphi_\varepsilon)_{\alpha\gamma}.
\end{align*}
We denote the right hand side by $R_\varepsilon$. Hence $v_\rho\varphi_\varepsilon$ satisfies the following equation:
\begin{equation} \label{E:pde1}
-\Delta_{\rho_\varepsilon}(v_\rho\varphi_\varepsilon)
+
\varepsilon(v_\rho\varphi_\varepsilon)
=
R_\varepsilon.
\end{equation}
Then Proposition \ref{P:Key_Prop} implies that there exists a uniform constant $C>0$ such that
\begin{equation*}
\norm{v_\rho\varphi_\varepsilon}_{C^{k,\alpha}(X_s)}<C.
\end{equation*}
By the same computation to \eqref{E:CMAEvp}, $v_\rho\varphi$ satisfies that
\begin{equation}\label{E:pde1'}
-\Delta_\rho(v_\rho\varphi)
=
R,
\end{equation}
where
$$
R=
-v_\rho\eta
+
h^{\alpha\bar\beta}
\paren{
v_\rho\paren{g_{\alpha\bar\beta}}
+
[v_\rho,\varphi]_{\alpha\bar\beta}
}
-
g^{\alpha\bar\beta}
v_\rho\paren{g_{\alpha\bar\beta}}.
$$
Since $\varphi_\varepsilon$ converges to $\varphi$ and
$[v_\rho,\varphi_\varepsilon]_{\alpha\bar\beta}$ does not involve the $s$-derivative of $\varphi_\varepsilon$, we have
\begin{equation*}
(h_\varepsilon)^{\bar\beta\alpha}\rightarrow
h^{\bar\beta\alpha}
\;\;\;\text{and}\;\;\;
[v_\rho,\varphi_\varepsilon]_{\alpha\bar\beta}
\rightarrow
[v_\rho,\varphi]_{\alpha\bar\beta}
\;\;\;\text{as}\;\;\;
\varepsilon\rightarrow0.
\end{equation*}
It follows that Equation \eqref{E:pde1} converges to Equation \eqref{E:pde1'} as $\varepsilon\rightarrow0$.
Since Proposition \ref{P:Lie_derivative} says that $v_\rho\varphi$ is the unique solution of \eqref{E:pde1'} which satisfies that
\begin{equation*}
\int_{X_s}(v_\rho\varphi)\rho^n=0,
\end{equation*}
the following lemma completes the proof.
\begin{lemma}\label{L:initial1}
The following holds:
\begin{equation*}
\lim_{\varepsilon\rightarrow0}
\int_{X_s}(v_\rho\varphi_\varepsilon)\rho^n=0.
\end{equation*}
\end{lemma}
\begin{proof}
Integrating \eqref{E:CMAEvpve}, we have
\begin{equation*}
1=\int_{X_s}e^{\varepsilon\varphi_\varepsilon+\eta}\omega^n.
\end{equation*}
Differentiating with respect to $s$, we have
\begin{equation*}
0
=
\pd{}{s}\int_{X_s}e^{\varepsilon\varphi_\varepsilon+\eta}\omega^n
=
\int_{X_s}v_\rho(e^{\varepsilon\varphi_\varepsilon})\rho^n
=
\varepsilon\int_{X_s}(v_\rho\varphi_\varepsilon)e^{\varepsilon\varphi_\varepsilon}\rho^n.
\end{equation*}
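Here the middle equality is Proposition \ref{P:Lie_derivative} applied with $\tau=\rho$ and $f=e^{\varepsilon\varphi_\varepsilon}$, using that $e^{\varepsilon\varphi_\varepsilon+\eta}\omega^n=e^{\varepsilon\varphi_\varepsilon}\rho^n$ on each fiber:
\begin{equation*}
\pd{}{s}\int_{X_s}e^{\varepsilon\varphi_\varepsilon}\rho^n
=
\int_{X_s}v_\rho\paren{e^{\varepsilon\varphi_\varepsilon}}\rho^n.
\end{equation*}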
Since $e^{\varepsilon\varphi_\varepsilon}\rho^n=(\rho_\varepsilon)^n$ on each fiber $X_s$,
\begin{equation*}
\int_{X_s}(v_\rho\varphi_\varepsilon)(\rho_\varepsilon)^n=0.
\end{equation*}
Since $\rho_\varepsilon$ and $\rho$ are uniformly equivalent on $X_s$, this completes the proof.
\end{proof}
It remains only to prove the last assertion.
\medskip
Differentiating \eqref{E:pde1} with respect to $\overline{v_\rho}$, we have
\begin{equation}\label{E:PDEss1}
\begin{aligned}
-\Delta_{\rho_\varepsilon}(\overline{v_\rho}v_\rho\varphi_\varepsilon)
+
\varepsilon(\overline{v_\rho}v_\rho\varphi_\varepsilon)
=&
\overline{v_\rho}\paren{(h_\varepsilon)^{\bar\beta\alpha}}\cdot(v_\rho(\varphi_\varepsilon))_{\alpha\bar\beta}
+
\overline{v_\rho}(R_\varepsilon)\\
&+
(h_\varepsilon)^{\bar\beta\alpha}[\overline{v_\rho},v_\rho\varphi_\varepsilon]_{\alpha\bar\beta}.
\end{aligned}
\end{equation}
Then Proposition \ref{P:Key_Prop} implies that there exists a uniform constant $C>0$ such that
\begin{equation*}
\norm{\overline{v_\rho}v_\rho\varphi_\varepsilon}_{C^{k,\alpha}(X_s)}<C.
\end{equation*}
In the same way, $\overline{v_\rho}v_\rho\varphi$ satisfies
\begin{equation}\label{E:PDEss0}
-\Delta_\rho\overline{v_\rho}v_\rho\varphi
=
\overline{v_\rho}\paren{h^{\bar\beta\alpha}}\cdot(v_\rho\varphi)_{\alpha\bar\beta}
+
\overline{v_\rho}R
+
h^{\bar\beta\alpha}[\overline{v_\rho},v_\rho\varphi]_{\alpha\bar\beta}.
\end{equation}
We already know that $\varphi_\varepsilon\rightarrow\varphi$ and $v_\rho\varphi_\varepsilon\rightarrow v_\rho\varphi$ as $\varepsilon\rightarrow0$ on $X_y$.
Hence a similar argument shows that the right-hand side of \eqref{E:PDEss1} converges to the right-hand side of \eqref{E:PDEss0} as $\varepsilon\rightarrow0$.
Since Proposition \ref{P:Lie_derivative} says that $\overline{v_\rho}v_\rho\varphi$ is the unique solution of \eqref{E:PDEss0} satisfying
\begin{equation*}
\int_{X_s}(\overline{v_\rho}v_\rho\varphi)\rho^n=0,
\end{equation*}
the following lemma completes the proof, as in the previous argument.
\begin{lemma}
The following holds:
\begin{equation*}
\lim_{\varepsilon\rightarrow0}
\int_{X_s}(\overline{v_\rho}v_\rho\varphi_\varepsilon)\rho^n=0.
\end{equation*}
\end{lemma}
\begin{proof}
Integrating \eqref{E:CMAEvpve}, we have
\begin{equation*}
1=\int_{X_s}e^{\varepsilon\varphi_\varepsilon+\eta}\omega^n.
\end{equation*}
Differentiating with respect to $s$ and $\bar s$, we have
\begin{align*}
0
&=
\pd{^2}{\bar s\partial s}\int_{X_s}e^{\varepsilon\varphi_\varepsilon+\eta}\omega^n
=
\int_{X_s}(\overline{v_\rho}v_\rho e^{\varepsilon\varphi_\varepsilon})\rho^n\\
&=
\varepsilon\int_{X_s}(\overline{v_\rho}v_\rho\varphi_\varepsilon)e^{\varepsilon\varphi_\varepsilon}\rho^n
+
\varepsilon^2\int_{X_s}\abs{v_\rho\varphi_\varepsilon}^2e^{\varepsilon\varphi_\varepsilon}\rho^n.
\end{align*}
Since $\varphi_\varepsilon$ and $v_\rho\varphi_\varepsilon$ are uniformly bounded, it follows that
\begin{equation*}
\int_{X_s}(\overline{v_\rho}v_\rho\varphi_\varepsilon)(\rho_\varepsilon)^n
\rightarrow0
\;\;\;
\text{as}
\;\;\;
\varepsilon\rightarrow0.
\end{equation*}
This completes the proof as in the proof of Lemma \ref{L:initial1}.
\end{proof}
\section{Some remarks}
As we mentioned in the Introduction, our method does not show the positivity or semi-positivity of the fiberwise Ricci-flat metric itself. But in a special case we do have positivity. In the next section, we introduce an example of this situation.
\begin{corollary}
Suppose that $\abs{\bar\partial v_\rho}_\rho$ depends only on the $s$-variable (i.e., it is constant on each fiber).
Then $\rho$ is positive on $X$.
\end{corollary}
\begin{proof}
Since $\abs{\bar\partial v_\rho}_\rho$ is constant on each fiber $X_s$,
Proposition \ref{P:norm_dbarv} says that $\abs{\bar\partial v_\rho}^2_\rho=\Theta_{s\bar s}(E)$.
It follows that
\begin{equation*}
-\Delta_\rho c(\rho)=0,
\end{equation*}
i.e., $c(\rho)$ is a constant on each fiber. Hence we have
\begin{equation*}
c(\rho)=\int_{X_s}c(\rho)\rho^n.
\end{equation*}
Then Theorem \ref{T:main_theorem} completes the proof.
\end{proof}
Now we consider a different type of fiberwise Ricci-flat metric.
Let $p:X\rightarrow\DD$ be a Calabi-Yau fibration and let $\omega$ be a fixed K\"ahler form on $X$.
By the same argument, there exists a unique smooth function $\psi$ on $X$ such that
\begin{equation*}
\begin{aligned}
\paren{\omega_y+dd^c\psi_y}^n
&=
e^{\eta_y}(\omega_y)^n, \\
\omega_y+dd^c\psi_y
&>
0
\end{aligned}
\end{equation*}
on each fiber $X_y$ with the following normalization condition:
\begin{equation*}
\int_{X_y}\psi_y(\omega_y)^n=0.
\end{equation*}
Obviously, $\tilde\rho:=\omega+dd^c\psi$ gives another fiberwise Ricci-flat metric on $X$. This metric is called the \emph{semi-flat} or \emph{semi-Ricci-flat} metric on the polarized family of Calabi-Yau manifolds (cf.\ \cite{Song:Tian, Tosatti, Song:Weinkove}). By Remark \ref{R:PDE}, we have the same PDE:
\begin{equation*}
-\Delta c(\tilde\rho)
=
\abs{\bar\partial v_{\tilde\rho}}_{\tilde\rho}^2
-
\Theta_{s\bar s}(E).
\end{equation*}
By the uniqueness of the solution of the complex Monge-Amp\`ere equation, it is easy to see that $\psi=\varphi-A(y)$, where
\begin{equation*}
A(y)
=
\int_{X_y}\varphi\omega^n.
\end{equation*}
Then Theorem \ref{T:main_theorem} and Theorem \ref{T:main_theorem2} immediately imply the following.
\begin{corollary}
Under the hypothesis of Theorem \ref{T:main_theorem} and Theorem \ref{T:main_theorem2}, we have the following:
\begin{itemize}
\item[(1)]$\displaystyle p_*\tilde\rho^{n+1}+dd^cA$ is positive on $Y$.
\item[(2)]$\displaystyle \tilde\rho+dd^cA+K(y)\omega^{WP}$ is positive on $X$.
\end{itemize}
\end{corollary}
\begin{remark}
It was pointed out by Demailly and Eyssidieux that the fiberwise Ricci-flat metric in Theorem \ref{T:main_theorem} and the semi-Ricci-flat metric above are not uniquely determined by the cohomology class $[\omega]$.
More precisely, even if $\omega_1$ and $\omega_2$ are K\"ahler metrics on $X$ in the same cohomology class $[\omega]$, the associated fiberwise Ricci-flat metrics constructed in Theorem \ref{T:main_theorem} (or the semi-Ricci-flat metrics above) are in general different. Hence it is interesting to ask for a canonical way to define a fiberwise Ricci-flat metric on a Calabi-Yau fibration which is uniquely determined by each K\"ahler class $[\omega]$ on $X$.
\end{remark}
\section{An example : a family of elliptic curves}
In this section, we compute the fiberwise Ricci-flat metric on a well-known example, the universal family of elliptic curves.
The computation in this section is due to Magnusson.
For the details, we refer to \cite{Magnusson1, Magnusson2}.
\medskip
Let $\HH$ be the upper half-plane in $\CC$. Let $(z,s)$ be the Euclidean coordinates on $\CC\times\HH$. Define a group $G$ by
\begin{equation*}
G=\set{
g_{n,m}:g_{n,m}(z,s)=(z+n+ms,s).
}
\end{equation*}
Then $G$ acts on $\CC\times\HH$ properly discontinuously.
The quotient space $(\CC\times\HH)/G$ forms the universal family of elliptic curves; call it $X$.
The $(1,1)$-form $\frac{\im}{2}\dv{z}$ on $\CC$ descends to a Ricci-flat K\"ahler form on each $X_s$. Note that
\begin{equation*}
\vol(X_s)=\int_{X_s}\frac{\im}{2}\dv{z}
=
\ip{s}.
\end{equation*}
Since $dz$ is a nonvanishing holomorphic section of the direct image of the relative canonical line bundle, the curvature $\Theta(E)$ is
\begin{align*}
\Theta(E)
&=
-dd^c\log\norm{dz}^2
=
-dd^c\log\int_{X_s}\im dz\wedge d\bar z \\
&=
-dd^c\log\ip{s}
=
\frac{1}{\abs{s-\bar s}^2}\im ds\wedge d\bar s.
\end{align*}
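As a quick check of the last equality (up to the normalization convention for $dd^c$): since $s-\bar s$ is purely imaginary, $(s-\bar s)^2=-\abs{s-\bar s}^2$, and hence
\begin{equation*}
-\pd{^2}{s\partial\bar s}\log\ip{s}
=
-\frac{1}{(s-\bar s)^2}
=
\frac{1}{\abs{s-\bar s}^2}.
\end{equation*}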
There exists a K\"ahler form $\rho$ on $X$ such that $\pi^{-1}(\rho)=\hat\rho$ is written by the following:
\begin{equation*}
\hat\rho
=
\im\paren{h_{s\bar s}ds\wedge{d\bar s}
+h_{s\bar z}ds\wedge{d\bar z}
+h_{z\bar s}dz\wedge{d\bar s}
+h_{z\bar z}dz\wedge{d\bar z}
},
\end{equation*}
where
\begin{equation*}
\paren{
\begin{array}{cc}
h_{s\bar s} & h_{s\bar z} \\
h_{z\bar s} & h_{z\bar z}
\end{array}
}
=
\paren{
\begin{array}{cc}
\displaystyle\frac{1}{(\ip{s})^2}
+
\frac{1}{(\ip{s})}\cdot\paren{\frac{z-\bar z}{s-\bar s}}^2
&
\displaystyle-\frac{1}{\ip{s}}\cdot\frac{z-\bar z}{s-\bar s}
\\
\displaystyle-\frac{1}{\ip{s}}\cdot\frac{z-\bar z}{s-\bar s}
&
\displaystyle\frac{1}{\ip{s}}
\end{array}
}.
\end{equation*}
It is easy to see that $g^*\hat\rho=\hat\rho$ for all $g\in G$.
Denote by $v=\partial/\partial s$. The horizontal lift $v_\rho$ of $v$ with respect to $\rho$ is computed as
\begin{equation*}
v_\rho
=
\pd{}{s}-h_{s\bar z}h^{\bar zz}\pd{}{z}
=
\pd{}{s}+\frac{z-\bar z}{s-\bar s}\pd{}{z}.
\end{equation*}
It follows that
\begin{equation*}
\bar\partial v_\rho
=
-\frac{1}{s-\bar s}\pd{}{z}\otimes d\bar z.
\end{equation*}
It is easy to see that this is the harmonic representative of $K_s$.
Hence we have
\begin{equation*}
\abs{\bar\partial v_\rho}_\rho^2
=
\frac{1}{\abs{s-\bar s}^2}.
\end{equation*}
In particular, $\abs{\bar\partial v_\rho}_\rho$ is a function which depends only on the $s$-variable. The geodesic curvature $c(\rho)$ is computed as
\begin{align*}
c(\rho)
&=
h_{s\bar s}
-
h_{s\bar z}h^{z\bar z}h_{z\bar s}
\\
& =
\frac{1}{(\ip{s})^2}
+
\frac{1}{(\ip{s})}\cdot\paren{\frac{z-\bar z}{s-\bar s}}^2
-
\paren{\frac{1}{\ip{s}}\cdot\frac{z-\bar z}{s-\bar s}}^2
\cdot
\ip{s} \\
&=
\frac{1}{(\ip{s})^2} >0.
\end{align*}
Therefore, the fiberwise Ricci-flat metric $\rho$ is positive on $X$.
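As a symbolic sanity check (our addition, and independent of the precise normalization of $\ip{s}$), the last two computations can be verified with a few lines of SymPy, writing \texttt{V} for $\ip{s}$ and \texttt{t} for the real ratio $(z-\bar z)/(s-\bar s)$:
\begin{verbatim}
import sympy as sp

V = sp.Symbol('V', positive=True)   # V stands for \ip{s} > 0
t = sp.Symbol('t', real=True)       # t stands for (z - zbar)/(s - sbar)

h_ss = 1/V**2 + t**2/V   # h_{s sbar}
h_sz = -t/V              # h_{s zbar} = h_{z sbar}
h_zz = 1/V               # h_{z zbar}

# horizontal lift: -h_{s zbar} * h^{zbar z} equals t, as in v_rho above
assert sp.simplify(-h_sz/h_zz - t) == 0

# geodesic curvature: c(rho) = h_{s sbar} - h_{s zbar} h^{z zbar} h_{z sbar}
c = h_ss - h_sz*(1/h_zz)*h_sz
assert sp.simplify(c - 1/V**2) == 0
\end{verbatim}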
The direct image of $\rho^2$ is given as follows:
\begin{equation*}
p_*\rho^2
=
\int_{X_s}\rho^2=\int_{X_s}c(\rho)\rho\wedge\im ds\wedge d\bar s
=
\frac{1}{(\ip{s})^2}\im ds\wedge d\bar s.
\end{equation*}
\section{Introduction\label{sec:Intro}}
The Thomas--Fermi (TF) equation has proved useful for the treatment of many
physical phenomena that include atoms\cite{BCR74,CM50,M57,MT79,M83},
molecules\cite{M52,M57}, atoms in strong magnetic fields\cite{BCR74,MT79,M83}%
, crystals\cite{UT55} and dense plasmas\cite{YK89} among others. For that
reason there has been great interest in the accurate solution of that
equation, and, in particular, in the accurate calculation of the slope at
origin\cite{KMNU55,PP87,FO90}. Besides, the mathematical aspects of the TF
equation have been studied in detail\cite{H69,H70}. Some time ago Liao\cite
{L03} proposed the application of a technique called homotopy analysis
method (HAM) to the solution of the TF equation and stated that ``it is the
first time such an elegant and explicit analytic solution of the
Thomas--Fermi equation is given''. This claim is surprising because at first
sight earlier analytical approaches are apparently simpler and seem to have
produced much more accurate results\cite{PP87,FO90,T91,EFGP99}. Recently,
Khan and Xu\cite{KX07} improved Liao's HAM by the addition of adjustable
parameters that improve the convergence of the perturbation series.
The purpose of this paper is to compare the improved HAM with a
straightforward analytical procedure based on Pad\'{e} approximants\cite
{EFGP99} supplemented with a method developed some time ago\cite
{FMT89,F92,FG93,F95,F95b,F95c,F96,F96b,F97}. In Section \ref{sec:HAM} we
outline the main ideas of the HAM, in Section \ref{sec:HPM} we apply the
Hankel--Pad\'{e} method (HPM) to the TF equation, and in Section \ref
{sec:conclusions} we compare the HAM with the HPM and with other approaches.
\section{The homotopy analysis method \label{sec:HAM}}
In order to facilitate later discussion we outline the main ideas behind the
application of the HAM to the TF equation. The TF equation
\begin{equation}
u^{\prime \prime }(x)=\sqrt{\frac{u(x)^{3}}{x}},\;u(0)=1,\;u(\infty )=0
\label{eq:TF}
\end{equation}
is an example of a two--point nonlinear boundary--value problem. When solving
this ordinary differential equation one faces the problem of the accurate
calculation of the slope at origin $u^{\prime }(0)$ that is consistent with
the physical boundary conditions indicated in equation (\ref{eq:TF}).
In what follows we choose the notation of Khan and Xu\cite{KX07} whose
approach is more general than the one proposed earlier by Liao\cite{L03}.
They define the new solution $g(\xi )=\gamma u(x)$, where $\xi =1+\lambda x$
and rewrite the TF equation as
\begin{equation}
(\xi -1)\lambda ^{3}\gamma g^{\prime \prime }(\xi )^{2}-g(\xi )^{3}=0
\label{eq:TF2}
\end{equation}
where $\gamma $ is the inverse of the slope at origin ($u^{\prime
}(0)=1/\gamma $) and $\lambda $ is an adjustable parameter. Khan and Xu\cite
{KX07} state that the solution to Eq. (\ref{eq:TF2}) can be written in the
form
\begin{equation}
g(\xi )=\sum_{j=1}^{\infty }A_{j}\xi ^{-j} \label{eq:g_series}
\end{equation}
that reduces to Liao's expansion\cite{KX07} when $\lambda =1$.
In principle there is no reason to assume that the series (\ref{eq:g_series}%
) converges and no proof is given in that sense\cite{L03,KX07}. Besides, the
partial sums of the series (\ref{eq:g_series}) will not give the correct
asymptotic behaviour at infinity\cite{H69,H70,BO78} as other expansions do%
\cite{PP87,FO90}.
Liao\cite{L03} and Khan and Xu\cite{KX07} do not use the ansatz (\ref
{eq:g_series}) directly to solve the problem but resort to perturbation
theory. For example, Khan and Xu\cite{KX07} base their approach on the
modified equation
\begin{equation}
(1-q)\mathcal{L}\left[ \Phi (\xi ;q)-g_{0}(\xi )\right] =q\hbar \mathcal{N}%
\left[ \Phi (\xi ;q),\Gamma (q)\right] \label{eq:HAM}
\end{equation}
where $\mathcal{L}$ and $\mathcal{N}$ are linear and nonlinear operators,
respectively, $0\leq q\leq 1$ is a perturbation parameter and $\hbar $ is
another adjustable parameter. Besides, $g_{0}(\xi )$ is a conveniently
chosen initial function and $\Phi (\xi ;q)$ becomes the solution to equation
(\ref{eq:TF2}) when $q=1$\cite{KX07}. Both $\Phi (\xi ;q)$ and $\Gamma (q)$
are expanded in a Taylor series about $q=0$ as in standard perturbation
theory, and $\Gamma (0)=\gamma _{0}$ is another adjustable parameter\cite
{KX07}.
The authors state that HAM is a very flexible approach that enables one to
choose the linear operator and the initial solution freely\cite{L03,KX07}
and also to introduce several adjustable parameters\cite{KX07}. However, one
is surprised that with so many adjustable parameters the results are far
from impressive, even at remarkably great perturbation orders\cite{L03,KX07}%
. For example, the $[30/30]$ Pad\'{e} approximant of the HAM series yields $%
u^{\prime }(0)$ with three exact digits\cite{KX07}, while the $[1/1]$
Pad\'{e} approximant of the $\delta $ expansion\cite{BMPS89} provides
slightly better results\cite{L90,C93}. A more convenient expansion of the
solution of the TF equation leads to many more accurate digits\cite
{PP87,FO90} with less terms.
\section{The Hankel--Pad\'{e} method \label{sec:HPM}}
In what follows we outline a simple, straightforward analytical method for
the accurate calculation of $u^{\prime }(0)$. In order to facilitate the
application of the HPM we define the variables $t=x^{1/2}$ and $%
f(t)=u(t^{2})^{1/2}$, so that the TF equation becomes
\begin{equation}
T(f,t)=t\left[ f(t)f^{\prime \prime }(t)+f^{\prime }(t)^{2}\right]
-f(t)f^{\prime }(t)-2t^{2}f(t)^{3}=0 \label{eq:TF3}
\end{equation}
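Indeed, since $u(x)=f(t)^{2}$ with $t=x^{1/2}$, we have $d/dx=(2t)^{-1}\,d/dt$ and therefore
\begin{equation*}
u^{\prime }=\frac{ff^{\prime }}{t},\;u^{\prime \prime }=\frac{t\left[
f f^{\prime \prime }+f^{\prime \,2}\right] -ff^{\prime }}{2t^{3}},
\end{equation*}
where the primes on $f$ denote derivatives with respect to $t$. Substituting into equation (\ref{eq:TF}), whose right-hand side becomes $f^{3}/t$, and multiplying through by $2t^{3}$ yields equation (\ref{eq:TF3}).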
We expand the solution $f(t)$ to this differential equation in a Taylor
series about $t=0$:
\begin{equation}
f(t)=\sum_{j=0}^{\infty }f_{j}t^{j} \label{eq:f_series}
\end{equation}
where the coefficients $f_{j}$ depend on $f_{2}=f^{\prime \prime
}(0)/2=u^{\prime }(0)/2$. On substitution of the series (\ref{eq:f_series})
into equation (\ref{eq:TF3}) we easily calculate as many coefficients $f_{j}$
as desired; for example, the first of them are
\begin{equation}
f_{0}=1,\;f_{1}=0,\;f_{3}=\frac{2}{3},\;f_{4}=-\frac{f_{2}^{2}}{2},\;f_{5}=-%
\frac{4f_{2}}{15},\ldots
\end{equation}
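The recursion is easy to automate. For illustration (this sketch is our addition and not part of the method's original presentation; SymPy and all variable names are our choices), the coefficients above are reproduced by equating powers of $t$ in (\ref{eq:TF3}):
\begin{verbatim}
import sympy as sp

t, f2 = sp.symbols('t f2')
J = 8                                    # truncation order; raise as needed
coeffs = {0: sp.Integer(1), 1: sp.Integer(0), 2: f2}
for j, fj in enumerate(sp.symbols(f'f3:{J+1}'), start=3):
    coeffs[j] = fj                       # f3, ..., fJ still unknown

f = sum(c*t**j for j, c in coeffs.items())
T = sp.expand(t*(f*f.diff(t, 2) + f.diff(t)**2)
              - f*f.diff(t) - 2*t**2*f**3)

sol = {}
for k in range(2, J):                    # coefficient of t^k fixes f_{k+1}
    eq = sp.expand(T.coeff(t, k).subs(sol))
    sol[coeffs[k+1]] = sp.solve(eq, coeffs[k+1])[0]

print([sol[coeffs[j]] for j in range(3, J+1)])
# -> [2/3, -f2**2/2, -4*f2/15, ...]
\end{verbatim}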
The HPM is based on the transformation of the power series (\ref{eq:f_series}%
) into a rational function or Pad\'{e} approximant
\begin{equation}
\lbrack M/N](t)=\frac{\sum_{j=0}^{M}a_{j}t^{j}}{\sum_{j=0}^{N}b_{j}t^{j}}
\label{eq:[M/N]}
\end{equation}
One would expect that $M<N$ in order to have the correct limit at infinity;
however, in order to obtain an accurate value of $f_{2}$ it is more
convenient to choose $M=N+d$, $d=0,1,\ldots $ as in previous applications of
the approach to the Schr\"{o}dinger equation (in this case it was called
Riccati--Pad\'{e} method (RPM))\cite
{FMT89,F92,FG93,F95,F95b,F95c,F96,F96b,F97}.
The rational function (\ref{eq:[M/N]}) has $2N+d+1$ coefficients that we may
choose so that $T([M/N],t)=\mathcal{O}(t^{2N+d+1})$ and the coefficient $%
f_{2}$ remains undetermined. If we require that $T([M/N],t)=\mathcal{O}%
(t^{2N+d+2})$ we have another equation from which we obtain $f_{2}$.
However, it is convenient to proceed in a different (and entirely
equivalent) way and require that
\begin{equation}
\lbrack M/N](t)-\sum_{j=0}^{2N+d+1}f_{j}t^{j}=\mathcal{O}(t^{2N+d+2})
\label{eq:[M/N]2}
\end{equation}
In order to satisfy this condition it is necessary that the Hankel
determinant vanishes
\begin{equation}
H_{D}^{d}=\left| f_{i+j+d+1}\right| _{i,j=0,1,\ldots N}=0, \label{eq:Hankel}
\end{equation}
where $D=N+1$ is the dimension of the Hankel matrix. Each Hankel determinant
is a polynomial function of $f_{2}$ and we expect that there is a sequence
of roots $f_{2}^{[D,d]}$, $D=2,3,\ldots $ that converges towards the actual
value of $u^{\prime }(0)/2$ for a given value of $d$. We compare sequences
with different values of $d$ for inner consistency (all of them should give
the same limit). Notice that a somewhat similar idea was also proposed by Tu%
\cite{T91}, although he did not develop it consistently.
The present approach is simple and straightforward: we just obtain the Taylor
coefficients $f_{j}$ from the differential equation (\ref{eq:TF3}) in terms
of $f_{2}$, derive the Hankel determinant, and calculate its roots. Since $%
f_{4}$ is the first nonzero coefficient that depends on $f_{2}$ we choose
Hankel sequences with $d\geq 3$.
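Continuing the illustrative script above (again our addition; it assumes the previous sketch was run with $J\geq 2D+d-1$ for the largest dimension $D$ used), the determinants $H_{D}^{d}$ and their roots can be examined numerically:
\begin{verbatim}
def fcoef(j):                  # f_j as a polynomial in f2
    return coeffs[j] if j <= 2 else sol[coeffs[j]]

d = 3
for D in range(2, 7):          # Hankel matrices of dimension D = N + 1
    M = sp.Matrix(D, D, lambda i, j: fcoef(i + j + d + 1))
    H = sp.Poly(sp.expand(M.det()), f2)
    reals = sorted(r for r in H.nroots() if r.is_real)
    print(D, [sp.N(r, 10) for r in reals])
# one real root per dimension should approach f2 = u'(0)/2 ~ -0.7940355113,
# consistent with the estimate quoted below
\end{verbatim}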
The Hankel determinant $H_{D}^{d}$ exhibits many roots and their number
increases with $D$. If we compare the roots of $H_{D}^{d}$ with those of $%
H_{D-1}^{d}$ we easily identify the sequence $f_{2}^{[D,d]}$ that converges
towards the actual value of $f_{2}$. Fig. \ref{fig:logconv} shows $\log
\left| 2f_{2}^{[D,d]}-2f_{2}^{[D-1,d]}\right| $ for $D=3,4,\ldots $, which
provides a reasonable indication of the convergence of the sequence of
roots. We clearly appreciate the great convergence rate of the sequences
with $d=3$ and $d=4$. For example, for $d=3$ and $D\leq 30$ it is
approximately given by
$\left| 2f_{2}^{[D,3]}-2f_{2}^{[D-1,3]}\right|=14.2\times 10^{
-0.705D}$. From the sequences for $D\leq 30$ we estimate
$u^{\prime }(0)=-1.58807102261137531$ which we believe is accurate to the
last digit. We are not aware of a result of such accuracy in the literature
with which we can compare our estimate. It is certainly far more accurate
than the result obtained by Kobayashi et al\cite{KMNU55} by numerical
integration that is commonly chosen as a benchmark\cite{L03,KX07}.
The present rational approximation to the TF function is completely different
from previous applications of the Pad\'{e} approximants, where the slope at
origin was determined by the asymptotic behaviour at infinity\cite{EFGP99}%
. Our approach applies to $u(x)^{1/2}$, and the slope at origin is determined
by a local condition at that point, equation (\ref{eq:[M/N]2}), which results in the
Hankel determinant (\ref{eq:Hankel}). In this sense our approach is similar
to (although more systematic and consistent than) Tu's\cite{T91}, as
mentioned above.
Once we have the slope at origin we easily obtain an analytical expression
for $u(x)$ in terms of the rational approximation (\ref{eq:[M/N]}) to $f(t)$%
. In order to have the correct behaviour at infinity we choose $N=M+3$\cite
{EFGP99}. Table~\ref{tab:u(x)} shows values of $u(x)$ and its first
derivative for $1<x<1000$ (the approximation is obviously much better for $%
0<x<1$) given by the approximant $[5/8]$. Our results are in remarkable
agreement with the numerical calculation of Kobayashi et al\cite{KMNU55} and
are far more accurate than those provided by the HAM\cite{L03,KX07}.
Notice that we are comparing a $[5/8]$ Pad\'{e} approximant on the
straightforward series expansion (\ref{eq:f_series}) with $[50/50]$ and $%
[30/30]$ approximants on an elaborate perturbation series\cite{L03,KX07}.
\section{Conclusions \label{sec:conclusions}}
Any accurate analytical expression of the solution $u(x)$ to the TF equation
requires an accurate value of the unknown slope at origin $u^{\prime }(0)$,
and the HPM provides it in a simple and straightforward way. In this sense
the HPM appears to be preferable to other accurate approaches\cite
{PP87,FO90,KMNU55} and is far superior to the HAM\cite{L03,KX07}. Notice for
example that our estimate $2f_{2}^{[5,3]}=-1.588$, based on a rational
approximation $[7/4]$, is better than the result provided by a $[30/30]$ Pad%
\'{e} approximant on the improved HAM perturbation series\cite{KX07}.
Besides, by comparing Table 2 of Khan and Xu\cite{KX07} with our Fig. \ref
{fig:logconv} one realizes how different the convergence rates of the two
approaches are. One should also take into account that the HPM does not have any
adjustable parameter for tuning its convergence properties, while, on the
other hand, the ``flexible'' HAM with several such parameters plus a Pad\'{e}
summation results in a much smaller convergence rate\cite{L03,KX07}.
We also constructed a Pad\'{e} approximant $[5/8]$ from the series (\ref
{eq:f_series}) and obtained the TF function and its derivative with an
accuracy that outperforms the $[50/50]$ and $[30/30]$ Pad\'{e} approximants
on the HAM perturbation series\cite{L03,KX07}. It is clear that the HPM is
by far simpler, more straightforward, and much more accurate than the HAM.
In addition to the physical utility of the HPM we think that its
mathematical features are most interesting. Although we cannot provide a
rigorous proof of the existence of a convergent sequence of roots for each
nonlinear problem, or that the sequences will converge towards the correct
physical value of the unknown, a great number of successful applications to
the Schr\"{o}dinger equation\cite{FMT89,F92,FG93,F95,F95b,F95c,F96,F96b,F97}
suggests that the HPM is worth further investigation. Notice that we obtain a
global property of the TF equation, $u^{\prime }(0)$, from a local approach:
the series expansion about the origin (\ref{eq:f_series}). The fact that our
original rational approximation (\ref{eq:[M/N]}) does not have the correct
behaviour at infinity is not at all a problem because we may resort to a
more convenient expansion\cite{EFGP99} once we have an accurate value of
the unknown slope at origin.
Finally, we mention that the HPM has recently proved successful for the
treatment of other two--point nonlinear equations\cite{AF07} of interest in
some fields of physics\cite{BFG07,BBG08,BBG08b}.
\section{Introduction}
\label{sec:introduction}
Let $R$ be a commutative noetherian ring with a dualizing complex $D$.
Such complexes were introduced in \cite[chp.\ V]{H} where it was also
shown that the functor $\operatorname{RHom}_R(-,D)$ is a contravariant
autoequivalence of $\D^{\operatorname{f}}(R)$, the finite derived category of $R$.
Some time later, it was shown in \cite[sec.\ 3]{AF} that by
restricting to certain subcategories $\mathsf{A}(R)$ and $\mathsf{B}(R)$ of the
derived category $\mathsf{D}(R)$, the functors $D \stackrel{\operatorname{L}}{\otimes}_R -$ and
$\operatorname{RHom}_R(D,-)$ become quasi-inverse covariant equivalences
\[
\xymatrix{
\mathsf{A}(R) \ar@<1ex>[rrr]^-{D \stackrel{\operatorname{L}}{\otimes}_R -}
& & & \mathsf{B}(R). \ar@<1ex>[lll]^-{\operatorname{RHom}_R(D,-)}
}
\]
The categories $\mathsf{A}(R)$ and $\mathsf{B}(R)$ are known as the Auslander and
Bass categories of $R$. The precise definition is given in Remark
\ref{rmk:AB} below, but note that $\mathsf{A}(R)$ and $\mathsf{B}(R)$ contain the
bounded complexes of projective, respectively injective, modules.
This paper introduces the symmetric Auslander category $\mathsf{A}^{\operatorname{s}}(R)$
and the symmetric Bass category $\mathsf{B}^{\operatorname{s}}(R)$ which contain $\mathsf{A}(R)$,
respectively $\mathsf{B}(R)$, as full subcategories. While $\mathsf{A}(R)$ enjoys a
strong relation to Gorenstein projective modules, our main result is
that $\mathsf{A}^{\operatorname{s}}(R)$ has a similarly close relation to {\em
homomorphisms} of Gorenstein projective modules.
This result is set in the wider context of a theory which shows that
the two new categories inhabit a universe with strong symmetry
properties.
\medskip
\noindent
{\em Background on Auslander and Bass categories. }
Recall that the Auslander category $\mathsf{A}(R)$ can be characterized in
terms of totally acyclic complexes of projective modules. Such a
complex $P$ consists of projective modules, is exact, and has the
property that $\operatorname{Hom}_R(P,Q)$ is exact for each projective module $Q$.
It was proved in \cite[sec.\ 4]{CFH} that a complex is in $\mathsf{A}(R)$ if
and only if its homology is bounded and the left-tail of its
projective resolution is equal to the left-tail of a totally
acyclic complex of projective modules (all differentials point to the
right).
The left-tails of totally acyclic complexes of projective modules are
precisely the projective resolutions of so-called Gorenstein
projective mo\-du\-les; this is immediate from the definition of a
Gorenstein projective module as a cycle module of a totally acyclic
complex of projectives, see \cite{EJ}. This leads to the expectation
that if we remove from $\mathsf{A}(R)$ a suitable ``finite'' part, leaving
only the tails of projective resolutions, then we should get a
category of Gorenstein projective modules.
Indeed, the homotopy category $\mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R)$ of bounded complexes
of projective modules can be viewed as a subcategory of $\mathsf{A}(R)$, and
we can remove it by forming the Verdier quotient $\mathsf{A}(R) /
\mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R)$. On the other hand, the Gorenstein projective
modules form a Frobenius category $\mathsf{GProj}(R)$, and there is a stable
category $\underline{\mathsf{GProj}}(R)$ obtained by dividing out
homomorphisms which factor through projective modules. It is not
hard to show that there is an equivalence of triangulated categories
\begin{equation}
\label{equ:A}
\underline{\mathsf{GProj}}(R)
\stackrel{\simeq}{\rightarrow} \mathsf{A}(R) / \mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R).
\end{equation}
\medskip
\noindent
{\em Symmetric Auslander and Bass categories. }
The main result of this paper is a higher analogue of the above
phenomenon. Let $\mathsf{K}(\operatorname{Prj}\,R)$ be the homotopy category of complexes
of projective modules. We define the symmetric Auslander category
$\mathsf{A}^{\operatorname{s}}(R)$ to be the full subcategory of $\mathsf{K}(\operatorname{Prj}\,R)$ consisting
of complexes whose left- and right-tails are equal to the left-
and right-tails of totally acyclic complexes of projective modules.
Our main result is the following.
\smallskip
\noindent
{\bf Theorem A. }
{\em
There is an equivalence of triangulated categories
\[
\underline{\mathsf{GMor}}(R)
\stackrel{\simeq}{\rightarrow} \mathsf{A}^{\operatorname{s}}(R) / \mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R).
\]
}
Here $\underline{\mathsf{GMor}}(R)$ is the stable category of Gorenstein
projective objects in $\mathsf{Mor}(R)$, the abelian category of
homomorphisms of $R$-modules. Note that there is an equivalence of
categories between $\mathsf{Mor}(R)$ and $\mathsf{Mod}\, T_2(R)^{\operatorname{op}}$, the category
of right-modules over the upper triangular matrix ring $T_2(R)$; cf.\
\cite{Aus}. This implies that $\underline{\mathsf{GMor}}(R)$ is equivalent to
the stable category of Gorenstein projective right-modules over
$T_2(R)$.
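(Concretely, write $T_2(R) = \left( \begin{smallmatrix} R & R \\ 0 & R \end{smallmatrix} \right)$, and let $e_1$, $e_2$ be the diagonal idempotents and $e_{12}$ the off-diagonal matrix unit. A right $T_2(R)$-module $M$ decomposes as $M = Me_1 \oplus Me_2$ over $R$, and right multiplication by $e_{12}$ gives an $R$-linear map $Me_1 \rightarrow Me_2$; up to choices of conventions, the assignment $M \mapsto (Me_1 \rightarrow Me_2)$ is one way to realize this equivalence.)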
On the other hand, we will show that the objects in
$\underline{\mathsf{GMor}}(R)$ are precisely the injective homomorphisms
between Gorenstein projective $R$-modules which have Gorenstein
projective cokernels. Hence, whereas the Auslander category $\mathsf{A}(R)$
is related to Gorenstein projective mo\-du\-les via equation
\eqref{equ:A}, the symmetric Auslander category $\mathsf{A}^{\operatorname{s}}(R)$ is
si\-mi\-lar\-ly related to {\em homomorphisms} of Gorenstein
projective modules via Theorem A.
To prove the theorem, we develop a theory for the symmetric Auslander
and Bass categories. One of the highlights is that $\mathsf{A}^{\operatorname{s}}(R)$ is,
indeed, a highly sym\-me\-tric object. Namely, the quotient
$\mathsf{A}^{\operatorname{s}}(R) / \mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R)$ permits a so-called triangle of
recollements $(\mathsf{U},\mathsf{V},\mathsf{W})$ as introduced in \cite{IKM}. This means
that $\mathsf{U}$, $\mathsf{V}$, $\mathsf{W}$ are full subcategories of $\mathsf{A}^{\operatorname{s}}(R) /
\mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R)$, and that each of
\[
(\mathsf{U} \, , \, \mathsf{V}) \; , \; (\mathsf{V} \, , \, \mathsf{W}) \; , \; (\mathsf{W} \, , \, \mathsf{U})
\]
is a stable t-structure. It is not obvious, even in principle, that
such a configuration is possible, but we show that
\begin{align}
\label{equ:UVW}
\nonumber
\mathsf{U} & = \mathsf{A}(R) / \mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R), \\
\mathsf{V} & = \mathsf{K}_{\operatorname{tac}}(\operatorname{Prj}\,R), \\
\nonumber
\mathsf{W} & = S(\mathsf{B}(R)) / \mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R)
\end{align}
work, where $\mathsf{K}_{\operatorname{tac}}(\operatorname{Prj}\,R)$ is the full subcategory of
$\mathsf{K}(\operatorname{Prj}\,R)$ consisting of totally acyclic complexes and $S$ is a
certain functor introduced in \cite[sec.\ 4]{IK}.
There are also several other results, among them the following.
\smallskip
\noindent
{\bf Theorem B. }
{\em
There are quasi-inverse equivalences of triangulated
ca\-te\-go\-ri\-es
\[
\xymatrix{
\mathsf{A}^{\operatorname{s}}(R) \ar@<1ex>[r]
& \mathsf{B}^{\operatorname{s}}(R). \ar@<1ex>[l]
}
\]
}
Let $\mathsf{K}^{(\operatorname{b})}(\operatorname{Prj}\,R)$ denote the full subcategory of
$\mathsf{K}(\operatorname{Prj}\,R)$ consisting of complexes with bounded homology.
\smallskip
\noindent
{\bf Theorem C. }
{\em
There are inclusions
\[
\mathsf{A}(R) \subseteq \mathsf{A}^{\operatorname{s}}(R) \subseteq \mathsf{K}^{(\operatorname{b})}(\operatorname{Prj}\,R).
\]
The first inclusion is an equality if and only if each Gorenstein
projective $R$-module is projective.
The second inclusion is an equality if and only if $R$ is a Gorenstein
ring.
}
Thus, the property that $\mathsf{A}^{\operatorname{s}}(R)$ is minimal, respectively
maximal, cha\-rac\-te\-ri\-ses two interesting classes of rings.
Let us remark on two important sources of ideas for this paper.
First, \cite{IKM} originated the notion of a triangle of recollements
and used it to get a version of Theorem A for finitely generated
modules when $R$ is a Gorenstein ring. The present paper can be
viewed as extending these ideas. Secondly, while it is not obvious
from the description above, we make extensive use of the machinery
developed in \cite{IK} for homotopy categories of complexes of
projective, respectively, injective modules and their relation to
Auslander and Bass categories.
The paper is organised as follows: Section \ref{sec:background}
briefly sketches the definitions and results we will use; most of them
come from \cite{IK}. Section \ref{sec:AsBs} proves Theorems B and C
above (Theorems \ref{thm:I} and \ref{thm:II}) and establishes the
existence of the triangle of recollements described by equation
\eqref{equ:UVW} (Theorem \ref{thm:A_Ktac_SB2}). Section
\ref{sec:Morph} studies the category of homomorphisms $\mathsf{Mor}(R)$ and
its Gorenstein projective objects, and culminates in the proof of
Theorem A (Theorem \ref{thm:main}).
\bigskip
\noindent
{\em Acknowledgement. }
We thank Srikanth Iyengar for comments on a preliminary version which
led to Remark \ref{rmk:Sri}.
The first author wishes to express his sincere gratitude for the
hospitality of the second author and the Visitor Programme operated by
the Graduate School of Science at Osaka Prefecture University. The
second author was partially supported by JSPS grant 18540044.
\section{Background}
\label{sec:background}
This section recalls the tools we will use; most of them come from
\cite{IK}.
\begin{Setup}
\label{set:blanket}
Throughout, $R$ is a commutative noetherian ring with a dualizing
complex $D$ which is assumed to be a bounded complex of injective
modules.
\end{Setup}
Dualizing complexes were introduced in \cite{H}, but see e.g.\
\cite[sec.\ 1]{CFH} for a contemporary introduction.
\begin{Remark}
There are homotopy categories $\mathsf{K}(\operatorname{Prj}\,R)$ and $\mathsf{K}(\operatorname{Inj}\,R)$ of
complexes of projective, respectively, injective modules. They have
several important triangulated subcategories:
The subcategories of bounded complexes are denoted by
$\mathsf{K}^{\operatorname{b}}(\operatorname{Prj}\,R)$ and $\mathsf{K}^{\operatorname{b}}(\operatorname{Inj}\,R)$. The subcategories of
complexes with bounded homology are denoted by $\mathsf{K}^{(\operatorname{b})}(\operatorname{Prj}\,R)$
and $\mathsf{K}^{(\operatorname{b})}(\operatorname{Inj}\,R)$.
The subcategories of K-projective, respectively, K-injective complexes
are denoted by $\mathsf{K}_{\operatorname{prj}}(R)$ and $\mathsf{K}_{\operatorname{inj}}(R)$; see \cite{S}.
The subcategories of totally acyclic complexes are denoted
$\mathsf{K}_{\operatorname{tac}}(\operatorname{Prj}\,R)$ and $\mathsf{K}_{\operatorname{tac}}(\operatorname{Inj}\,R)$. Complexes $X$ in
$\mathsf{K}(\operatorname{Prj}\,R)$ and $Y$ in $\mathsf{K}(\operatorname{Inj}\,R)$ are called totally acyclic if
they are exact and $\operatorname{Hom}_R(X,P)$ and $\operatorname{Hom}_R(I,Y)$ are exact for each
projective module $P$ and each injective module $I$.
\end{Remark}
\begin{Remark}
\label{rmk:inc}
Consider the subcategories $\mathsf{K}_{\operatorname{prj}}(R) \subseteq \mathsf{K}(\operatorname{Prj}\,R)$ and
$\mathsf{K}_{\operatorname{inj}}(R) \subseteq \mathsf{K}(\operatorname{Inj}\,R)$. By \cite[sec.\ 7]{IK}, the
inclusion functors, which we will denote by $\operatorname{inc}$, are parts of
adjoint pairs of functors,
\[
\xymatrix{
\mathsf{K}_{\operatorname{prj}}(R) \ar@<1ex>[r]^{\operatorname{inc}}
& \mathsf{K}(\operatorname{Prj}\,R) \ar@<1ex>[l]^{p}
}
\;\;\mbox{and}\;\;
\xymatrix{
\mathsf{K}_{\operatorname{inj}}(R) \ar@<-1ex>[r]_{\operatorname{inc}}
& \mathsf{K}(\operatorname{Inj}\,R). \ar@<-1ex>[l]_{i}
}
\]
In the terminology of \cite[chp.\ 9]{Neemanbook}, the existence of the
right adjoint $p$ places us in a situation of Bousfield localization,
and accordingly, the counit morphism of the adjoint pair $(\operatorname{inc},p)$
can be completed to a distinguished triangle
\[
pX
\stackrel{\epsilon_X}{\longrightarrow} X
\longrightarrow aX
\longrightarrow
\]
which depends functorially on $X$. Both $p$ and $a$ are triangulated
functors. Dually, the unit morphism of the adjoint pair $(i,\operatorname{inc})$
can be completed to a distinguished triangle
\[
bY
\longrightarrow Y
\stackrel{\eta_Y}{\longrightarrow} iY
\longrightarrow
\]
which depends functorially on $Y$.
\end{Remark}
\begin{Remark}
\label{rmk:ST}
By \cite[thm.\ 4.2]{IK} there are quasi-inverse equivalences of
categories
\[
\xymatrix{
\mathsf{K}(\operatorname{Prj}\,R) \ar@<1ex>[r]^-{T}
& \mathsf{K}(\operatorname{Inj}\,R) \ar@<1ex>[l]^-{S}
}
\]
where $T(-) = D \otimes_R -$ and $S = q \circ \operatorname{Hom}_R(D,-)$. The
functor $q$ is right-adjoint to the inclusion $\mathsf{K}(\operatorname{Prj}\,R)
\rightarrow \mathsf{K}(\operatorname{Flat}\,R)$ where $\mathsf{K}(\operatorname{Flat}\,R)$ is the homotopy
category of complexes of flat modules.
\end{Remark}
\begin{Remark}
\label{rmk:AB}
Let us recall the following from \cite{AF}. The derived category
$\mathsf{D}(R)$ supports an adjoint pair of functors
\[
\xymatrix{
\mathsf{D}(R) \ar@<1ex>[rrr]^-{D \stackrel{\operatorname{L}}{\otimes}_R -}
& & & \mathsf{D}(R). \ar@<1ex>[lll]^-{\operatorname{RHom}_R(D,-)}
}
\]
The Auslander category of $R$ is the triangulated subcategory defined
in terms of the unit $\eta$ by
\[
\mathsf{A}(R)
= \Biggl\{ X \in \mathsf{D}(R)
\Bigg|
\begin{array}{l}
\mbox{\small{$X$ and $D \stackrel{\operatorname{L}}{\otimes}_R X$ have bounded homology;}} \\
\mbox{\small{$X \stackrel{\eta_X}{\longrightarrow} \operatorname{RHom}_R(D,D \stackrel{\operatorname{L}}{\otimes}_R X)$ is an isomorphism}} \\
\end{array}
\Biggr\}
\]
and the Bass category of $R$ is the triangulated subcategory defined
in terms of the counit $\epsilon$ by
\[
\mathsf{B}(R)
= \Biggl\{ Y \in \mathsf{D}(R)
\Bigg|
\begin{array}{l}
\mbox{\small{$Y$ and $\operatorname{RHom}_R(D,Y)$ have bounded homology;}} \\
\mbox{\small{$D \stackrel{\operatorname{L}}{\otimes}_R \operatorname{RHom}_R(D,Y) \stackrel{\epsilon_Y}{\longrightarrow} Y$ is an isomorphism}} \\
\end{array}
\Biggr\}.
\]
The functors $D \stackrel{\operatorname{L}}{\otimes}_R -$ and $\operatorname{RHom}_R(D,-)$ restrict to
quasi-inverse equivalences between $\mathsf{A}(R)$ and $\mathsf{B}(R)$.
The canonical functors $\mathsf{K}_{\operatorname{prj}}(R) \rightarrow \sD(R)$ and
$\mathsf{K}_{\operatorname{inj}}(R) \rightarrow \sD(R)$ are e\-qui\-va\-len\-ces, and this
permits us to view $\mathsf{A}(R)$ as a full subcategory of $\mathsf{K}_{\operatorname{prj}}(R)$
and hence of $\mathsf{K}(\operatorname{Prj}\,R)$, and $\mathsf{B}(R)$ as a full subcategory of
$\mathsf{K}_{\operatorname{inj}}(R)$ and hence of $\mathsf{K}(\operatorname{Inj}\,R)$. As such, the adjoint
functors
\[
\xymatrix{
\mathsf{K}_{\operatorname{prj}}(R) \ar@<1ex>[r]^-{iT}
& \mathsf{K}_{\operatorname{inj}}(R) \ar@<1ex>[l]^-{pS}
}
\]
restrict to a pair of quasi-inverse equivalences between $\mathsf{A}(R)$ and
$\mathsf{B}(R)$ by \cite[prop.\ 7.2]{IK}.
\end{Remark}
See \cite[sec.\ 1]{CFH} for an alternative review of Auslander and
Bass categories.
\begin{Definition}
Let $\mathsf{T}$ be a triangulated category. A stable t-structure on $\mathsf{T}$
is a pair of full subcategories $(\mathsf{U},\mathsf{V})$ such that
\begin{enumerate}
\item $\Sigma\mathsf{U} = \mathsf{U}$, $\Sigma\mathsf{V} = \mathsf{V}$.
\smallskip
\item $\operatorname{Hom}_{\mathsf{T}}(\mathsf{U},\mathsf{V}) = 0$.
\smallskip
\item For each $T$ in $\mathsf{T}$ there exist $U$ in $\mathsf{U}$ and $V$ in
$\mathsf{V}$ and a distinguished triangle $U \rightarrow T \rightarrow V
\rightarrow$.
\end{enumerate}
A triangle of recollements in $\mathsf{T}$ is a triple $(\mathsf{U},\mathsf{V},\mathsf{W})$ such
that each of $(\mathsf{U},\mathsf{V})$, $(\mathsf{V},\mathsf{W})$, $(\mathsf{W},\mathsf{U})$ is a stable
t-structure.
Let $\mathsf{T}'$ be another triangulated category with a triangle of
recollements $(\mathsf{U}',\mathsf{V}',\mathsf{W}')$ and let $F : \mathsf{T} \rightarrow \mathsf{T}'$ be
a triangulated functor. We say that $F$ sends $(\mathsf{U},\mathsf{V},\mathsf{W})$ to
$(\mathsf{U}',\mathsf{V}',\mathsf{W}')$ if $F(\mathsf{U}) \subseteq \mathsf{U}'$, $F(\mathsf{V}) \subseteq
\mathsf{V}'$, $F(\mathsf{W}) \subseteq \mathsf{W}'$.
\end{Definition}
\section{Symmetric Auslander and Bass categories}
\label{sec:AsBs}
This section develops a theory of symmetric Auslander and Bass
ca\-te\-go\-ri\-es. It proves Theorems B and C from the Introduction,
and establishes the existence of the triangle of recollements
described by equation \eqref{equ:UVW} (Theorems \ref{thm:I},
\ref{thm:II}, and \ref{thm:A_Ktac_SB2}).
For the rest of the paper, an unadorned $\mathsf{K}$ stands for
$\mathsf{K}(\operatorname{Prj}\,R)$. We combine this in an obvious way with various
embellishments to form $\mathsf{K}^{\operatorname{b}}$, $\mathsf{K}^{(\operatorname{b})}$, $\mathsf{K}_{\operatorname{prj}}$, and
$\mathsf{K}_{\operatorname{tac}}$. Likewise, unadorned categories such as $\mathsf{A}$, $\mathsf{B}$,
and $\mathsf{D}$ stand for $\mathsf{A}(R)$, $\mathsf{B}(R)$, and $\mathsf{D}(R)$.
In the following definition, $\mathsf{X} * \mathsf{Y}$ denotes the full subcategory
of objects $C$ which sit in distinguished triangles $X \rightarrow C
\rightarrow Y \rightarrow$ with $X$ in $\mathsf{X}$ and $Y$ in $\mathsf{Y}$.
\begin{Definition}
The {\em symmetric Auslander category} $\mathsf{A}^{\operatorname{s}}$ and the {\em
symmetric Bass category} $\mathsf{B}^{\operatorname{s}}$ of $R$ are the full subcategories
of $\mathsf{K}(\operatorname{Prj}\,R)$ and $\mathsf{K}(\operatorname{Inj}\,R)$ defined by
\[
\mathsf{A}^{\operatorname{s}} = S(\mathsf{B}) * \mathsf{A}
\;\; \mbox{and} \;\;
\mathsf{B}^{\operatorname{s}} = \mathsf{B} * T(\mathsf{A})
\]
where $S$ and $T$ are the functors from \cite{IK} described in Remark
\ref{rmk:ST}.
\end{Definition}
\begin{Remark}
By \cite[thm.\ 4.1]{CFH}, the subcategory $\mathsf{A}$ of $\mathsf{K}$
consists of complexes isomorphic to right-bounded complexes of
projective mo\-du\-les whose left-tail is equal to the left-tail of a
complete projective resolution.
Using the theory of \cite{IK}, one can show that similarly, $S(\mathsf{B})$
consists of complexes isomorphic to left-bounded complexes of
projective mo\-du\-les whose right-tail is equal to the right-tail of
a complete projective resolution.
From this it follows that $\mathsf{A}^{\operatorname{s}}$ consists of complexes isomorphic
to complexes of projective modules both of whose tails are equal to
the tails of complete projective resolutions.
Similar remarks apply to $\mathsf{B}^{\operatorname{s}}$, and this is one of the reasons
for the terminology ``symmetric Auslander and Bass categories''.
\end{Remark}
\begin{Remark}
The following lemma and most of the other results in this section will
only be given for $\mathsf{A}^{\operatorname{s}}$, but there are dual versions for
$\mathsf{B}^{\operatorname{s}}$ with similar proofs.
\end{Remark}
\begin{Lemma}
\label{lem:Kiriko}
Let $C$ be in $\mathsf{K}$. Then $C$ is in $\mathsf{A}^{\operatorname{s}}$ if and only
if the following conditions are satisfied.
\begin{enumerate}
\item $C$ and $TC$ have bounded homology.
\smallskip
\item The mapping cone of $pC
\stackrel{\epsilon_C}{\longrightarrow} C$ is totally acyclic.
\smallskip
\item The mapping cone of $TC \stackrel{\eta_{TC}}{\longrightarrow}
iTC$ is totally acyclic.
\end{enumerate}
\end{Lemma}
\begin{proof}
``Only if'': Suppose that $C$ is in $\mathsf{A}^{\operatorname{s}}$. By definition, there
is a distinguished triangle
\[
SB \rightarrow C \rightarrow A \rightarrow
\]
in $\mathsf{K}$ with $B$ in $\mathsf{B}$ and $A$ in $\mathsf{A}$. All of $SB$, $A$, $TSB
\cong B$, and $TA$ have bounded homology, so the same is true for $C$
and $TC$, proving condition (i).
By Remark \ref{rmk:inc}, the distinguished triangle induces the
following commutative diagram where each row and each column is a
distinguished triangle.
\[
\xymatrix{
pSB \ar[r] \ar[d]_{\epsilon_{SB}} & pC \ar[r] \ar[d]_{\epsilon_C} & pA \ar[r] \ar[d]_{\epsilon_A} & {} \\
SB \ar[r] \ar[d] & C \ar[r] \ar[d] & A \ar[r] \ar[d] & {} \\
aSB \ar[r]_{\alpha} \ar[d] & aC \ar[r] \ar[d] & aA \ar[r] \ar[d] & {} \\
{} & {} & {} &
}
\]
Since $A$ is K-projective, $\epsilon_A$ is an isomorphism. Hence $aA$
is zero so $\alpha$ is an isomorphism. But $B$ is in $\mathsf{B}$ so $aSB$
is totally acyclic by \cite[prop.\ 7.4]{IK}, and so $aC$ is totally
acyclic, proving condition (ii). A similar argument proves condition
(iii).
``If'': Suppose that conditions (i) through (iii) hold. Hard
truncation gives a distinguished triangle
\[
C^{\geq 0} \rightarrow C \rightarrow C^{<0} \rightarrow
\]
in $\mathsf{K}$. We aim to show that $C^{\geq 0}$ is in $S(\mathsf{B})$ and that
$C^{<0}$ is in $\mathsf{A}$ whence $C$ is in $\mathsf{A}^{\operatorname{s}}$.
Set
\[
B = T(C^{\geq 0}) = D \otimes_R C^{\geq 0}
\]
so $SB = ST(C^{\geq 0}) \cong C^{\geq 0}$. Since $C^{\geq 0}$ is a
left-bounded complex of projective modules and $D$ is a bounded
complex of injective modules, $B$ is a left-bounded complex of
injective modules. In particular, it is K-injective.
Since $D$ is bounded, the complexes $B$ and $TC = D \otimes_R C$ agree
in high cohomological degrees. But $B$ is left-bounded and $TC$ has
bounded homology by condition (i), so it follows that $B$ has bounded
ho\-mo\-lo\-gy. Also, $B$ is K-injective so $\operatorname{RHom}_R(D,B)$ can be
computed as $\operatorname{Hom}_R(D,B)$, but
\[
\operatorname{Hom}_R(D,B)
\stackrel{\rm (a)}{\simeq} q \circ \operatorname{Hom}_R(D,B)
= SB
\cong C^{\geq 0}
\]
where the quasi-isomorphism (a) is by \cite[thm.\ 2.7]{IK}. Since the
homology of $C^{\geq 0}$ is bounded, so is the homology of
$\operatorname{RHom}_R(D,B)$.
As above, the distinguished triangle induces the following commutative
diagram where each row and each column is a distinguished triangle.
\[
\xymatrix{
pC^{\geq 0} \ar[r] \ar[d]_{\epsilon_{C^{\geq 0}}} & pC \ar[r] \ar[d]_{\epsilon_C} & pC^{<0} \ar[r] \ar[d]_{\epsilon_{C^{<0}}} & {} \\
C^{\geq 0} \ar[r] \ar[d] & C \ar[r] \ar[d] & C^{<0} \ar[r] \ar[d] & {} \\
aC^{\geq 0} \ar[r]_{\beta} \ar[d] & aC \ar[r] \ar[d] & aC^{<0} \ar[r] \ar[d] & {} \\
{} & {} & {} &
}
\]
Since $C^{<0}$ is a right-bounded complex of projective modules it is
K-projective and so $\epsilon_{C^{<0}}$ is an isomorphism. Hence
$aC^{<0}$ is zero so $\beta$ is an isomorphism. But $aC$ is totally
acyclic by condition (ii), and so $aC^{\geq 0}$ is totally acyclic.
Since $SB \cong C^{\geq 0}$, it follows from \cite[prop.\ 7.4]{IK} that
$B$ is in $\mathsf{B}$ and so $C^{\geq 0}$ is in $S(\mathsf{B})$.
A similar argument proves that $C^{<0}$ is in $\mathsf{A}$.
\end{proof}
\begin{Proposition}
\label{pro:subcats}
The category $\mathsf{A}^{\operatorname{s}}$ is a triangulated subcategory of $\mathsf{K}$,
and there are inclusions of triangulated subcategories
\[
\mathsf{K}_{\operatorname{tac}} \subseteq \mathsf{A}^{\operatorname{s}} \subseteq \mathsf{K}^{(\operatorname{b})}.
\]
\end{Proposition}
\begin{proof}
It is well known that $\mathsf{K}_{\operatorname{tac}}$ and $\mathsf{K}^{(\operatorname{b})}$ are triangulated
subcategories of $\mathsf{K}$.
Conditions (i) through (iii) of Lemma \ref{lem:Kiriko} respect mapping
cones, so $\mathsf{A}^{\operatorname{s}}$ is a triangulated subcategory of $\mathsf{K}$.
The second inclusion of the proposition is immediate from Lemma
\ref{lem:Kiriko}(i), and the first one follows from Lemma
\ref{lem:Kiriko}(i)--(iii) combined with the fact that $T$ sends
totally acyclic complexes to totally acyclic complexes by \cite[prop.\
5.9(1)]{IK}.
\end{proof}
\begin{Remark}
\label{rmk:Sri}
We owe the following observations based on Lemma \ref{lem:Kiriko} to
Srikanth Iyengar.
The Auslander and Bass categories $\mathsf{A}$ and $\mathsf{B}$ also exist in
versions $\widehat{\mathsf{A}}$ and $\widehat{\mathsf{B}}$ without boundedness
conditions \cite[7.1]{IK}. With small modifications, the proof of
Lemma \ref{lem:Kiriko} shows that membership of
$S(\widehat{\mathsf{B}})*\widehat{\mathsf{A}}$ is characterised by conditions (ii)
and (iii) of the Lemma.
It is immediate from Lemma \ref{lem:Kiriko} that $\mathsf{A} * S(\mathsf{B})$ is
contained in $\mathsf{A}^{\operatorname{s}} = S(\mathsf{B}) * \mathsf{A}$. This is a bit surprising
since one would not normally expect any inclusion between categories
of the form $\mathsf{X} * \mathsf{Y}$ and $\mathsf{Y} * \mathsf{X}$.
We do not know if $\mathsf{A} * S(\mathsf{B})$ is triangulated, but it will often be
considerably smaller than $S(\mathsf{B}) * \mathsf{A}$ since $\mathsf{K}_{\operatorname{tac}}$ is
contained in $S(\mathsf{B}) * \mathsf{A}$ by Proposition \ref{pro:subcats} while it
is easy to show that the intersection of $\mathsf{A} * S(\mathsf{B})$ with
$\mathsf{K}_{\operatorname{tac}}$ is zero.
\end{Remark}
\begin{Theorem}
\label{thm:I}
The functors $T$ and $S$ restrict to quasi-inverse
e\-qui\-va\-len\-ces of triangulated categories
\[
\xymatrix{
\mathsf{A}^{\operatorname{s}} \ar@<1ex>[r]^-{T}
& \mathsf{B}^{\operatorname{s}}. \ar@<1ex>[l]^-{S}
}
\]
\end{Theorem}
\begin{proof}
This is immediate from the definition of $\mathsf{A}^{\operatorname{s}}$ and $\mathsf{B}^{\operatorname{s}}$
because $T$ and $S$ are quasi-inverse equivalences of triangulated
categories.
\end{proof}
\begin{Theorem}
\label{thm:A_Ktac_SB}
\begin{enumerate}
\item The category $\mathsf{A}^{\operatorname{s}}$ has stable t-structures
\[
(\mathsf{A},\mathsf{K}_{\operatorname{tac}}(\operatorname{Prj}\,R))
\;\; \mbox{and} \;\;
(\mathsf{K}_{\operatorname{tac}}(\operatorname{Prj}\,R),S(\mathsf{B})).
\]
\smallskip
\item The category $\mathsf{B}^{\operatorname{s}}$ has stable t-structures
\[
(\mathsf{K}_{\operatorname{tac}}(\operatorname{Inj}\,R),\mathsf{B})
\;\; \mbox{and} \;\;
(T(\mathsf{A}),\mathsf{K}_{\operatorname{tac}}(\operatorname{Inj}\,R)).
\]
\end{enumerate}
\end{Theorem}
\begin{proof}
The first of the stable t-structures in part (i) can be established as
follows.
The category $\mathsf{A}^{\operatorname{s}}$ contains $\mathsf{A}$ by definition and $\mathsf{K}_{\operatorname{tac}}$
by Proposition \ref{pro:subcats}. Each $A$ in $\mathsf{A}$ is K-projective,
so a morphism $A \rightarrow U$ with $U$ in $\mathsf{K}_{\operatorname{tac}}$ is zero.
Existence of the first stable t-structure will thus follow if we can
prove $\mathsf{A}^{\operatorname{s}} = \mathsf{A} * \mathsf{K}_{\operatorname{tac}}$.
For $C$ in $\mathsf{A}^{\operatorname{s}}$, there is a distinguished triangle $SB
\longrightarrow C \longrightarrow A \longrightarrow$ with $B$ in $\mathsf{B}$
and $A$ in $\mathsf{A}$. Turning the triangle gives a distinguished triangle
$\Sigma^{-1}A \stackrel{\alpha}{\longrightarrow} SB \longrightarrow C
\longrightarrow A$.
There is also a distinguished triangle $pSB
\stackrel{\epsilon_{SB}}{\longrightarrow} SB \longrightarrow U
\longrightarrow$ and $U$ is totally acyclic by \cite[prop.\ 7.4]{IK}.
Since $\Sigma^{-1}A$ is in $\mathsf{A}$, each morphism $\Sigma^{-1}A
\rightarrow U$ is zero, and hence $\alpha$ lifts through
$\epsilon_{SB}$.
By the octahedral axiom, there is hence a commutative diagram in which
each row and each column is a distinguished triangle,
\[
\xymatrix{
\Sigma^{-1}pSB \ar[r] \ar[d] & \Sigma^{-1}SB \ar[r] \ar[d] & \Sigma^{-1}U \ar[r] \ar[d] & pSB \ar[d] \\
\Sigma^{-1}A^{\prime} \ar[r] \ar[d] & 0 \ar[r] \ar[d] & A^{\prime} \ar@{=}[r] \ar[d] & A^{\prime} \ar[d] \\
\Sigma^{-1}A \ar[r]^{\alpha} \ar[d] & SB \ar[r] \ar@{=}[d] & C \ar[r] \ar[d] & A \ar[d] \\
pSB \ar[r]_{\epsilon_{SB}} & SB \ar[r] & U \ar[r] & \Sigma pSB.
}
\]
Since $B$ is in $\mathsf{B}$, the object $pSB$ is in $\mathsf{A}$ by \cite[prop.\
7.2]{IK}; see Remark \ref{rmk:AB}. Since $A$ is also in $\mathsf{A}$, it
follows that $A^{\prime}$ is in $\mathsf{A}$. So the third column of the
above diagram shows $\mathsf{A}^{\operatorname{s}} = \mathsf{A} * \mathsf{K}_{\operatorname{tac}}$, proving existence
of the first stable t-structure in the theorem.
The first of the stable t-structures in part (ii) follows by an
analogous argument using \cite[prop.\ 7.3]{IK} instead of \cite[prop.\
7.2]{IK}.
The second stable t-structure in part (i) is obtained by applying $S$
to the first stable t-structure in part (ii). The second stable
t-structure in part (ii) is obtained by applying $T$ to the first
stable t-structure in part (i).
\end{proof}
\begin{Theorem}
\label{thm:II}
There are inclusions
\[
\mathsf{A} \subseteq \mathsf{A}^{\operatorname{s}} \subseteq \mathsf{K}^{(\operatorname{b})}.
\]
The first inclusion is an equality if and only if each Gorenstein
projective $R$-module is projective.
The second inclusion is an equality if and only if $R$ is a Gorenstein
ring.
\end{Theorem}
\begin{proof}
The first inclusion is clear from the definition of $\mathsf{A}^{\operatorname{s}}$, and
the second holds by Proposition \ref{pro:subcats}.
The claim on the first inclusion: The first stable t-structure of
Theorem \ref{thm:A_Ktac_SB} shows that $\mathsf{A}^{\operatorname{s}} = \mathsf{A}$ is equivalent
to $\mathsf{K}_{\operatorname{tac}} = 0$. This happens if and only if each totally acyclic
complex is split exact, that is, if and only if each Gorenstein
projective module is projective.
The claim on the second inclusion: First, suppose that $\mathsf{A}^{\operatorname{s}} =
\mathsf{K}^{(\operatorname{b})}$. Let $M$ be an $R$-module with projective resolution $C$;
it follows that $C$ is in $\mathsf{A}^{\operatorname{s}}$. Consider the distinguished
triangle $A \rightarrow C \rightarrow U \rightarrow$ with $A$ in $\mathsf{A}$
and $U$ in $\mathsf{K}_{\operatorname{tac}}$ which exists by Theorem \ref{thm:A_Ktac_SB}.
Since $U$ is exact, the homology of $A$ is $M$ so the $K$-projective
complex $A$ is a projective resolution of $M$. This shows that for
each module $M$, the projective resolution is in $\mathsf{A}$, hence the
Gorenstein projective dimension of $M$ is finite by \cite[thm.\
4.1]{CFH}, and hence $R$ is Gorenstein.
Secondly, suppose that $R$ is Gorenstein and let $C$ be in
$\mathsf{K}^{(\operatorname{b})}$. We will show that $C$ is in $\mathsf{A}^{\operatorname{s}}$ by showing that
$C$ satisfies the three conditions of Lemma \ref{lem:Kiriko}.
In condition (i), by definition, $C$ has bounded homology. Since $R$
is Gorenstein, $D$ can be taken to be an injective resolution of $R$.
Hence there is a quasi-isomorphism $R \rightarrow D$ of bounded
complexes, and since $C$ consists of projective modules, it follows
that there is a quasi-isomorphism $R \otimes_R C \rightarrow D
\otimes_R C$. So $TC = D \otimes_R C$ also has bounded homology.
Conditions (ii) and (iii) hold because the relevant mapping cones are
acyclic, and over a Gorenstein ring this implies that they are
totally acyclic; see \cite[cor.\ (5.5)]{IK}.
\end{proof}
In the following theorem, note that $\mathsf{K}_{\operatorname{tac}}$ is a triangulated
subcategory of $\mathsf{A}^{\operatorname{s}}$ which can also be viewed as a triangulated
subcategory of the Verdier quotient $\mathsf{A}^{\operatorname{s}} / \mathsf{K}^{\operatorname{b}}$ since there
are only zero morphisms from $\mathsf{K}^{\operatorname{b}}$ to $\mathsf{K}_{\operatorname{tac}}$.
\begin{Theorem}
\label{thm:A_Ktac_SB2}
The category $\mathsf{A}^{\operatorname{s}}/\mathsf{K}^{\operatorname{b}}$ has a triangle of recollements
\[
(\mathsf{A}/\mathsf{K}^{\operatorname{b}} \, , \, \mathsf{K}_{\operatorname{tac}} \, , \, S(\mathsf{B})/\mathsf{K}^{\operatorname{b}}).
\]
That is, it has stable t-structures
\[
(\mathsf{A}/\mathsf{K}^{\operatorname{b}} \, , \, \mathsf{K}_{\operatorname{tac}}) \; , \;
(\mathsf{K}_{\operatorname{tac}} \, , \, S(\mathsf{B})/\mathsf{K}^{\operatorname{b}}) \; , \;
(S(\mathsf{B})/\mathsf{K}^{\operatorname{b}} \, , \, \mathsf{A}/\mathsf{K}^{\operatorname{b}}).
\]
\end{Theorem}
\begin{proof}
The first two stable t-structures follow from the stable
t-struc\-tu\-res of Theorem \ref{thm:A_Ktac_SB} by \cite{IKM}.
Let us show that the third structure exists. By definition, $\mathsf{A}^{\operatorname{s}}
= S(\mathsf{B}) * \mathsf{A}$, and this implies $\mathsf{A}^{\operatorname{s}}/\mathsf{K}^{\operatorname{b}} =
(S(\mathsf{B})/\mathsf{K}^{\operatorname{b}}) * (\mathsf{A}/\mathsf{K}^{\operatorname{b}})$.
It is therefore enough to show that each morphism $S(B) \rightarrow A$
in $\mathsf{K}^{(\operatorname{b})}/\mathsf{K}^{\operatorname{b}}$ with $S(B)$ in $S(\mathsf{B})/\mathsf{K}^{\operatorname{b}}$ and $A$ in
$\mathsf{A}/\mathsf{K}^{\operatorname{b}}$ must be zero. Such a morphism is represented by a
diagram $S(B) \rightarrow A^{\prime} \leftarrow A$ in $\mathsf{K}^{(\operatorname{b})}$
where the mapping cone of $A \rightarrow A^{\prime}$ is in $\mathsf{K}^{\operatorname{b}}$.
In particular, the mapping cone is in $\mathsf{A}$, so $A^{\prime}$ is also
in $\mathsf{A}$ whence $A^{\prime}$ is isomorphic to a right-bounded complex
of projective modules. However, $S(B)$ is isomorphic to a
left-bounded complex of projective modules, and it easily follows that
the morphism $S(B) \rightarrow A^{\prime}$ factors through an object
of $\mathsf{K}^{\operatorname{b}}$. Hence this morphism becomes zero in
$\mathsf{K}^{(\operatorname{b})}/\mathsf{K}^{\operatorname{b}}$, and so the original morphism $S(B) \rightarrow
A$ in $\mathsf{K}^{(\operatorname{b})}/\mathsf{K}^{\operatorname{b}}$ is zero as desired.
\end{proof}
\section{The category of homomorphisms}
\label{sec:Morph}
This section proves our main result, Theorem A from the Introduction
(Theorem \ref{thm:main}).
\begin{Definition}
We let $\mathsf{Mor}$ denote the category of homomorphisms of $R$-modules.
The objects of $\mathsf{Mor}$ are the homomorphisms of $R$-modules.
The morphisms of $\mathsf{Mor}$ are defined as follows: A morphism $f$ from
$X _\alpha \stackrel{\alpha}{\rightarrow} T_\alpha$ to $X_\beta
\stackrel{\beta}{\rightarrow} T_\beta$ is a pair $(f_X , f_T)$ of
homomorphisms of $R$-modules $X_\alpha \stackrel{f_X}{\rightarrow}
X_\beta$ and $T_\alpha \stackrel{f_T}{\rightarrow} T_\beta$ such that
there is a commutative square
\[
\xymatrix{
X_{\alpha} \ar[r]^{f_X} \ar[d]_{\alpha} & X_{\beta} \ar[d]^{\beta} \\
T_{\alpha} \ar[r]_{f_T} & T_{\beta} \lefteqn{.}
}
\]
\end{Definition}
\begin{Remark}
\label{rmk:induced_diagram}
Given an object $X _\alpha \stackrel{\alpha}{\rightarrow} T_\alpha$ in
$\mathsf{Mor}$, we will denote the cokernel of $\alpha$ by $N_{\alpha}$.
Observe that a morphism $f$ in $\mathsf{Mor}$ induces a commutative
diagram of $R$-modules with exact rows,
\[
\xymatrix{
X_{\alpha} \ar[r]^{\alpha} \ar[d]_{f_X} & T_{\alpha} \ar[r] \ar[d]_{f_T} & N_{\alpha} \ar[r] \ar[d]^{f_N} & 0 \\
X_{\beta} \ar[r]_{\beta} & T_{\beta} \ar[r] & N_{\beta} \ar[r] & 0\lefteqn{.}
}
\]
\end{Remark}
\begin{Remark}
\label{rmk:pi}
A complex $\pi = \cdots \to \pi ^i \stackrel{d_\pi ^i}{\to} \pi^{i+1} \to
\cdots$ in $\mathsf{Mor}$ amounts to a chain map $\pi$ between complexes of
$R$-modules,
\[
\xymatrix{
\cdots \ar[r] & X_{\pi ^i} \ar[r] \ar[d]_{\pi ^i} & X_{\pi ^{i+1}} \ar[r] \ar[d]_{\pi ^{i+1}} & \cdots \\
\cdots \ar[r] & T_{\pi ^i} \ar[r] & T_{\pi ^{i+1}} \ar[r] & \cdots \lefteqn{.} }
\]
It is not hard to check that the projective objects of $\mathsf{Mor}$ are
precisely the split injections between projective $R$-modules. Hence,
if $\pi$ is a complex of projective objects in $\mathsf{Mor}$, then there
is an exact sequence
\begin{equation}
\label{equ:XTN}
0 \to X_\pi \stackrel{\pi}{\to} T_\pi \to N_\pi \to 0
\end{equation}
of complexes of projective $R$-modules.
\end{Remark}
The proof of the following lemma is straightforward.
\begin{Lemma}
\label{dual}
Let $\alpha \stackrel{f}{\rightarrow} \beta$ be a morphism in the
category $\mathsf{Mor}$. Let $M$ be an $R$-module and consider the zero
homomorphism $0 \stackrel{0^M}{\rightarrow} M$ and the identity $M
\stackrel{1_M}{\rightarrow} M$ as objects of $\mathsf{Mor}$. Then we have the
following.
\begin{enumerate}
\item There are vertical isomorphisms giving a commutative square
\[
\xymatrix {
\operatorname{Hom}_R(N_\beta , M) \ar[rrr]^{\operatorname{Hom}_R(f_N , M)} \ar[d]_{\cong} &&&
\operatorname{Hom}_R(N_\alpha , M) \ar[d]^{\cong} \\
\operatorname{Hom}_{\mathsf{Mor}}(\beta , 0^M) \ar[rrr]_{\operatorname{Hom}_{\mathsf{Mor}}(f , 0^M)} &&&
\operatorname{Hom}_{\mathsf{Mor}}(\alpha , 0^M) \lefteqn{.} \\
}
\]
\smallskip
\item There are vertical isomorphisms giving a commutative square
\[
\xymatrix {
\operatorname{Hom}_R(T_\beta , M) \ar[rrr]^{\operatorname{Hom}_R(f_T , M)} \ar[d]_{\cong} &&&
\operatorname{Hom}_R(T_\alpha , M) \ar[d]^{\cong} \\
\operatorname{Hom}_{\mathsf{Mor}}(\beta , 1_M) \ar[rrr]_{\operatorname{Hom}_{\mathsf{Mor}}(f , 1_M)} &&&
\operatorname{Hom}_{\mathsf{Mor}}(\alpha , 1_M) \lefteqn{.} \\
}
\]
\end{enumerate}
\end{Lemma}
\begin{Lemma}
\label{tac of Morph}
A complex $\pi$ of projective objects in $\mathsf{Mor}$ is totally acyclic
if and only if each of the complexes
\begin{align*}
X_\pi & = \cdots
\longrightarrow X_{\pi ^i}
\longrightarrow X_{\pi ^{i+1}}
\longrightarrow \cdots, \\
T_\pi & = \cdots
\longrightarrow T_{\pi ^i}
\longrightarrow T_{\pi ^{i+1}}
\longrightarrow \cdots
\end{align*}
belongs to $\mathsf{K}_{\operatorname{tac}}$.
\end{Lemma}
\begin{proof}
Let $\varphi$ be a projective object of $\mathsf{Mor}$. Remark \ref{rmk:pi}
says that $\varphi$ is a split injection of projective $R$-modules, so
there are projective $R$-modules $P$ and $P^{\prime}$ such that
$\varphi = 0^P \oplus 1_{P^{\prime}}$. The complex $\operatorname{Hom}_{\mathsf{Mor}}(\pi
, \varphi)$ is acyclic if and only if both $\operatorname{Hom}_{\mathsf{Mor}}(\pi , 0^P)$
and $\operatorname{Hom}_{\mathsf{Mor}}(\pi , 1_{P^{\prime}})$ are acyclic. By Lemma
\ref{dual}, this is equivalent to having both complexes $\operatorname{Hom}_{R}(N_\pi , P)$
and $\operatorname{Hom}_{R}(T_\pi , P^{\prime})$ acyclic.
Therefore $\pi$ is totally acyclic if and only if $T_\pi$ and $N_\pi$
are both totally acyclic, which by the sequence \eqref{equ:XTN} is
equivalent to both of $T_\pi$ and $X_\pi$ being totally acyclic.
\end{proof}
\begin{Corollary}
The Gorenstein projective objects of $\mathsf{Mor}$ are the injective
homomorphisms between Gorenstein projective $R$-modules which have
Gorenstein projective cokernels.
\end{Corollary}
\begin{proof}
A Gorenstein projective object in $\mathsf{Mor}$ is a cycle of a totally
acyclic complex of projective objects of $\mathsf{Mor}$. It follows
easily from Lemma \ref{tac of Morph} that it is an injective
homomorphism between Gorenstein projective $R$-modules, and that the
cokernel is Gorenstein projective.
Conversely, let $X_\alpha$ and $T_\alpha$ be Gorenstein projective
$R$-modules and suppose that $X_\alpha \stackrel{\alpha}{\to}
T_\alpha$ is an injective homomorphism with Gorenstein projective
cokernel. Using the Horseshoe Lemma, the short exact sequence $0
\rightarrow X_{\alpha} \stackrel{\alpha}{\rightarrow} T_{\alpha}
\rightarrow N_{\alpha} \rightarrow 0$ gives a short exact sequence of
complete projective resolutions
\[
0 \longrightarrow P_{X_\alpha}
\stackrel{\pi_\alpha}{\longrightarrow} P_{T_\alpha}
\longrightarrow P_{N_{\alpha}}
\longrightarrow 0.
\]
Lemma \ref{tac of Morph} says that $P_{X_\alpha}
\stackrel{\pi_\alpha}{\longrightarrow} P_{T_\alpha}$ can be viewed as a
totally acyclic complex of projective objects of $\mathsf{Mor}$, and it
is clear that it is a complete projective resolution of $X_\alpha
\stackrel{\alpha}{\to} T_\alpha$ which is hence a Gorenstein
projective object of $\mathsf{Mor}$.
\end{proof}
\begin{Definition}
We denote the full subcategory of Gorenstein projective objects in
$\mathsf{Mor}$ by $\mathsf{GMor}$. Inside $\mathsf{GMor}$, we consider the
following full subcategories $\mathsf{GMor}^p$, $\mathsf{GMor}^0$, and
$\mathsf{GMor}^1$.
\begin{enumerate}
\item $\mathsf{GMor}^p$ consists of injective homomorphisms $X
\stackrel{\iota_X}{\rightarrow} P$ where $X$ is Gorenstein
projective and $P$ is projective.
\smallskip
\item $\mathsf{GMor}^0$ consists of zero homomorphisms $0
\stackrel{0^T}{\rightarrow} T$ where $T$ is Gorenstein projective.
\smallskip
\item $\mathsf{GMor}^1$ consists of identity homomorphisms $X
  \stackrel{1_X}{\rightarrow} X$ where $X$ is Gorenstein projective.
\end{enumerate}
There are corresponding stable categories which are defined by
di\-vi\-ding out the morphisms which factor through a projective
object. The stable categories are denoted by underlining. The
category $\underline{\mathsf{GMor}}$ is triangulated, and
$\underline{\mathsf{GMor}}^p$, $\underline{\mathsf{GMor}}^0$, and
$\underline{\mathsf{GMor}}^1$ are triangulated subcategories.
\end{Definition}
\begin{Theorem}
The category $\underline{\mathsf{GMor}}$ has a triangle of recollements
\[
(\underline{\mathsf{GMor}}^p
\, , \, \underline{\mathsf{GMor}}^1
\, , \, \underline{\mathsf{GMor}}^0).
\]
That is, it has stable t-structures
\[
(\underline{\mathsf{GMor}}^p \, , \, \underline{\mathsf{GMor}}^1) \; , \;
(\underline{\mathsf{GMor}}^1 \, , \, \underline{\mathsf{GMor}}^0) \; , \;
(\underline{\mathsf{GMor}}^0 \, , \, \underline{\mathsf{GMor}}^p).
\]
\end{Theorem}
\begin{proof}
It is enough to show that each of the following categories: (i)
$\underline{\mathsf{GMor}}^p * \underline{\mathsf{GMor}}^1$, (ii) $\underline{\mathsf{GMor}}^1
* \underline{\mathsf{GMor}}^0$, and (iii) $\underline{\mathsf{GMor}}^0 *
\underline{\mathsf{GMor}}^p$ is equal to $\underline{\mathsf{GMor}}$.
Let $X_{\alpha} \stackrel{\alpha}{\to} T_{\alpha}$ be an object of
$\mathsf{GMor}$ and consider the exact sequence $0 \to X_{\alpha}
\stackrel{\alpha}{\to} T_{\alpha} \stackrel{\beta}{\to} N_\alpha \to
0$ of Gorenstein projective $R$-modules. There exist injective homomorphisms
$\iota _{T_\alpha} : T_\alpha \to P$ and $\iota _{N_\alpha} :
N_\alpha \to P'$ with projective $R$-modules $P$ and $P'$.
(i) The commutative diagram with exact rows
\[
\xymatrix{
0 \ar[rr] && X_{\alpha} \ar[rr]^{\alpha} \ar[d]_{\alpha} && T_{\alpha} \ar[rr]^{\beta} \ar[d]_{1 \choose 0} && N_\alpha \ar[rr] \ar[d]^{\iota_{N_\alpha}} && 0\\
0 \ar[rr] && T_{\alpha} \ar[rr]_-{1 \choose -\iota_{N_{\alpha}}\beta} && T_{\alpha} \oplus P' \ar[rr]_-{(\iota_{N_{\alpha}}\beta\, , \,1)} && P' \ar[rr] && 0 \\
}
\]
induces a distinguished triangle in $\underline{\mathsf{GMor}}$
\[
\Sigma^{-1}\underline{\iota _{N_\alpha}}
\to \underline{\alpha}
\to \underline{1_{T_\alpha}}
\to
\]
with $\Sigma^{-1}\underline{\iota _{N_\alpha}}$ in
$\underline{\mathsf{GMor}}^p$ and $\underline{1_{T_\alpha}}$ in
$\underline{\mathsf{GMor}}^1$.
(ii) The commutative diagram with exact rows
\[
\xymatrix{
0 \ar[r] & X_{\alpha} \ar@{=}[r] \ar@{=}[d] & X_{\alpha} \ar[r] \ar[d]_{\alpha} & 0 \ar[r] \ar[d]^{0^{N_\alpha}} & 0\\
0 \ar[r] & X_{\alpha} \ar[r]_{\alpha} & T_\alpha \ar[r]_{\beta} & N_\alpha \ar[r] & 0\\
}
\]
induces a distinguished triangle in $\underline{\mathsf{GMor}}$
\[
\underline{1_{X_\alpha}}
\to \underline{\alpha}
\to \underline{0^{N_\alpha}}
\to
\]
with $\underline{1_{X_\alpha}}$ in $\underline{\mathsf{GMor}}^1$ and
$\underline{0^{N_\alpha}}$ in $\underline{\mathsf{GMor}}^0$.
(iii) The commutative diagram with exact rows
\[
\xymatrix{
0 \ar[r] & X_{\alpha} \ar@{=}[r] \ar[d]_{\alpha} & X_{\alpha} \ar[r] \ar[d]_{\iota _{T_\alpha} \alpha} & 0 \ar[r] \ar[d]^{0^{\Sigma T_\alpha}} & 0\\
0 \ar[r] & T_{\alpha} \ar[r]_{\iota _{T_{\alpha}}} & P \ar[r] & \Sigma T_\alpha \ar[r] & 0\\
}
\]
induces a distinguished triangle in $\underline{\mathsf{GMor}}$
\[
\underline{0^{T_\alpha}}
\to \underline{\alpha}
\to \underline{\iota _{T_\alpha} \alpha}
\to
\]
with $\underline{0^{T_\alpha}}$ in $\underline{\mathsf{GMor}}^0$ and
$\underline{\iota _{T_\alpha} \alpha}$ in $\underline{\mathsf{GMor}}^p$.
\end{proof}
Let $X_{\alpha} \stackrel{\alpha}{\to} T_{\alpha}$ be an object of
$\mathsf{GMor}$ and consider complete projective resolutions $P$ of
$X_{\alpha}$ and $\widetilde{P}$ of $T_{\alpha}$. In particular,
there is a surjection $P^0 \stackrel{\rho}{\rightarrow} X_{\alpha}$
and an injection $T_{\alpha} \stackrel{\iota}{\rightarrow}
\widetilde{P}^1$. Let $P_{\alpha}$ denote the complex
\[
\cdots
\longrightarrow P^{-1}
\longrightarrow P^0
\stackrel{\iota\alpha\rho}{\longrightarrow} \widetilde{P}^1
\longrightarrow \widetilde{P}^2
\longrightarrow \cdots.
\]
\begin{Proposition}
[{\cite[lemmas 4.2 and 4.3 and prop.\ 4.4]{IKM}}]
The operation $\alpha \mapsto P_{\alpha}$ gives a functor $\mathsf{GMor} \to
\mathsf{A}^s$ which induces a triangulated functor
\[
\underline{P} : \underline{\mathsf{GMor}} \to \mathsf{A}^s / \mathsf{K}^b.
\]
\end{Proposition}
\begin{Lemma}
[{\cite[lemmas 4.6 and 4.7]{IKM}}]
\label{lem:IKM_4.5_and_4.6}
\begin{enumerate}
\item $\underline{P}$ sends the triangle of recollements
\[
(\underline{\mathsf{GMor}}^p \, , \, \underline{\mathsf{GMor}}^1 \, , \,
\underline{\mathsf{GMor}}^0)
\]
to the triangle of re\-col\-le\-ments
\[
( \mathsf{A} / \mathsf{K}^b \, , \, \mathsf{K}_{\operatorname{tac}} \, , \, S(\mathsf{B}) / \mathsf{K}^b ).
\]
\smallskip
\item The restriction of $\underline{P}$ to $\underline{\mathsf{GMor}}^1$
is an equivalence of triangulated categories $\underline{\mathsf{GMor}}^1
\to \mathsf{K}_{\operatorname{tac}}$.
\end{enumerate}
\end{Lemma}
\begin{Proposition}
[{\cite[prop.\ 1.18]{IKM}}]
\label{pro:IKM_1.18}
Let $(\mathsf{U}, \mathsf{V}, \mathsf{W})$ and $(\mathsf{U}', \mathsf{V}', \mathsf{W}')$ be triangles of
recollements in $\mathsf{T}$ and $\mathsf{T}'$ respectively. Suppose the
triangulated functor $F: \mathsf{T} \to \mathsf{T}'$ sends $(\mathsf{U}, \mathsf{V}, \mathsf{W})$ to
$(\mathsf{U}', \mathsf{V}', \mathsf{W}')$. If the restriction $F \mid \mathsf{U}$ is an
equivalence of triangulated categories, then so is $F$.
\end{Proposition}
The following main theorem follows immediately by combining Lemma
\ref{lem:IKM_4.5_and_4.6} and Proposition \ref{pro:IKM_1.18}; compare
with \cite[lem.\ 4.7 and thm.\ 4.8]{IKM}.
\begin{Theorem}
\label{thm:main}
The functor $\underline{P}$ is an equivalence of triangulated
categories
\[
\underline{\mathsf{GMor}} \to \mathsf{A}^s / \mathsf{K}^b.
\]
\end{Theorem}
\section{Introduction}
Radio and X-ray observations show that in the majority of regular galaxy
clusters, possessing a cool core, the activity of a central
supermassive black hole mediated by AGN jets
creates bubbles of relativistic plasma in the intra-cluster medium (ICM)
\citep{Boer93, HS98, Chur00, McN00, Bir04}.
X-ray data are consistent with the
assumption that bubbles are completely devoid of thermal gas
\citep{Sand07}
although the limits on the amount of thermal gas in the bubbles are
not very tight. The absence of strong shocks in the ICM surrounding the
bubbles implies that the relativistic plasma is in approximate
pressure equilibrium with the ICM.
These bubbles, inflated by an AGN, are believed to be responsible for
mechanical coupling of the AGN energy release and the thermal state of
the ICM in galaxy clusters, groups and individual elliptical
galaxies. For this reason every aspect of bubble physics receives much
attention.
Given the ubiquity of bubbles in clusters, these objects
should be almost always present in the cluster core.
The lifetime of the bubble in the stratified atmosphere is set by
the ``buoyancy'' time, while the growth rate of the bubble is defined by the
power of the AGN \citep[e.g.,][]{GN73, Chur00}. Pairs of bubbles are
located on both sides of the central supermassive black hole and often
several generations of bubbles differing in their size and distance
from the center are observed.
Bubbles with sizes of order 1-10 kpc are often found in the inner regions of
galaxy clusters, sharing the space with the brightest cluster galaxy
(BCG).
For example, in NGC 1275 (the BCG of the Perseus cluster) within the
effective radius of the galaxy ($R_e\sim$15 kpc) we see two bubbles on
either side of the nucleus with the radius of each bubble $\sim6.3$ kpc
\citep[e.g.,][]{Boer93, Fab03}. In M 87 (Virgo cluster)
the bubble radius is $\sim1.4$ kpc \citep[e.g.,][]{For05, For07}, while
the galaxy effective radius is of order 7.7 kpc. We expect that in both cases
bubbles should contain a significant fraction of the galaxy stars. These
old low mass stars evolve as usual, lose their mass via the wind, and
some of them give birth to type Ia supernovae (SNe\,Ia).
Even if the relativistic bubble sweeps up all the
thermal gas in the process of inflation, a substantial amount of material
in the form of stellar winds can thus be supplied by the stars {\it inside
the bubble} during the bubble lifetime. This wind material could affect
the expansion dynamics of supernovae exploding in the bubble.
The question we address here is what happens to the stellar winds
and SN\,Ia ejecta embedded in the relativistic bubble. The almost weightless
relativistic plasma provides highly unusual conditions for the dynamical
evolution of wind shells and SNe. Two extreme scenarios are conceivable.
In one limit all the wind material and SN ejecta are decelerated and
mixed with relativistic plasma. During subsequent evolution the
buoyantly rising bubbles advect this material with them. In another
limit the matter ejected by evolving stars propagates freely
through the relativistic plasma, reaches the boundary of the bubble
and enriches the ICM just outside the bubble with heavy elements.
We investigate different scenarios of dynamical evolution of wind shells
and SN ejecta and explore observational outcomes of these scenarios.
The structure of the
paper is as follows. We first study the expansion and bulk motion of
the wind envelopes including mass stripping effects. This provides us
with an estimate of the filling factor of the ensemble of wind
shells. We then address the issue of the SN envelope expansion dynamics
and bulk motion,
Rayleigh-Taylor fragmentation of the SN shell and propagation of ejecta
fragments in the relativistic plasma. Finally, we discuss the implications
of the results for the matter content of the relativistic bubble and the
intracluster thermal environment.
\section{Wind material in relativistic bubble}
\label{sec-wind}
\subsection{Stars and mass injection rate}
\label{sec-stars}
We consider, as a fiducial model, a spherical bubble of
radius $R_b=5$ kpc with the bubble center at the distance $R_b$
from the center of the BCG. The characteristic age of this bubble is
$t_b\sim2R_b/v_b\sim3.3\times10^7$ yr, where we adopt the bubble rise
velocity $v_b=300$ km s$^{-1}$ \citep[cf.][]{Chur01}. At the
radii $r<15$ kpc the stellar component in massive elliptical galaxies
dominates \citep[e.g.,][]{John09} and its density distribution
can be approximated by the singular
isothermal sphere with the velocity dispersion $\sigma_v$
\begin{equation}
\rho=\frac{\sigma_v^2}{2\pi Gr^2}=\rho_0\left(\frac{r_0}{r}\right)^2\,,
\end{equation}
where $G$ is the gravitational constant and $r_0$ is a radial
scale\footnote{This approximation breaks at large radii to ensure
convergence of the total stellar mass.}.
Adopting $r_0=10$ kpc and velocity dispersion
$\sigma_v=350$ km s$^{-1}$ \citep{WT06} one gets
the stellar mass
$M(<r_0)=M_0=5.6\times10^{11}~M_{\odot}$. This value is consistent
with estimates
of the total mass of the stellar component of BCGs, $\sim10^{12}~M_{\odot}$.
Inside the bubble of radius $R_b=5$ kpc the stellar mass in this case is
$M_s\approx9\times10^{10}R_{b,5}~M_{\odot}$, where $R_{b,5}=R_b$/(5 kpc).
The stars are presumably old with an age of $t\sim 10^{10}$ yr, which suggests
that the current upper limit of the stellar mass at the AGB stage is $\approx
1~M_{\odot}$ \citep{Scha92}. Assuming
the Salpeter initial mass
function $dN/dm=Cm^{-\alpha}$ within the range $m_1<m<m_2$ one gets
the normalizing factor
\begin{equation}
C=(\alpha-2)M_s[m_1^{-(\alpha-2)}-m_2^{-(\alpha-2)}]^{-1}=0.28M_s\,,
\label{eq-normf}
\end{equation}
where $\alpha=2.35$, $m_1=0.1~M_{\odot}$, and $m_2=1~M_{\odot}$
are used.
A star of $m=1~M_{\odot}$ leaves behind a white dwarf of
$m_{wd}=0.5~M_{\odot}$ \citep{Sala09}, while
$\Delta m=m-m_{wd}=0.5~M_{\odot}$ is lost in the form of slow
($u\approx10-30$ km s$^{-1}$) wind during the thermally pulsing AGB stage
\citep{VW93}. The present day
integrated rate of the wind matter injection into the relativistic bubble is
\begin{equation}
\dot{M}=(m_2-m_{wd})\frac{dN}{dm}\frac{dm_2}{dt}\,,
\label{eq-dotmw}
\end{equation}
where $dm_2/dt$ is the rate at which the upper limit $m_2$
decreases with time. This rate is determined by the relation between
the lifetime and the initial mass of a star, $t=t_2(m_2/m)^{\beta}$, where $\beta=3.2$
in the range of
$1-2~M_{\odot}$ \citep{Scha92}. For these values and
$M_s=9\times10^{10}R_{b,5}~M_{\odot}$ the equation (\ref{eq-dotmw}) yields
$\dot{M}=0.416~M_{\odot}R_{b,5}$ yr$^{-1}$. The corresponding stellar
death rate is $\dot{N}=0.83R_{b,5}$ yr$^{-1}$ which is also
the rate of wind shell formation $\dot{N_w}$.
One thus expects to find $N_w=t_b\dot{N_w}\approx2.7\times10^7R_{b,5}$
newly created wind envelopes in the bubble volume with
the total amount of the wind matter in the bubble of
$0.5N_w\approx 1.4\times10^{7}R_{b,5}~M_{\odot}$, a factor of $\sim10^4$ larger
than the mass in the form of relativistic
particles of the bubble, $3pV/c^2\sim2.4\times10^3~M_{\odot}$.
It should be noted, however, that the $N_w$ estimate so far ignores
a possible escape of wind envelopes from the bubble, which is addressed
below.
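The arithmetic behind these injection estimates is easily reproduced; the
following minimal script (our own illustrative sketch, not part of the
original derivation) evaluates equations (\ref{eq-normf}) and
(\ref{eq-dotmw}) with the fiducial parameters quoted above:
\begin{verbatim}
# Wind injection estimates for the fiducial model (Sect. 2.1).
alpha, m1, m2 = 2.35, 0.1, 1.0     # Salpeter IMF, masses in M_sun
M_s = 9e10                          # stellar mass inside the bubble
# IMF normalization
C = (alpha - 2.0) * M_s / (m1**(-(alpha - 2.0)) - m2**(-(alpha - 2.0)))
print(C / M_s)                      # -> ~0.28
# stellar death rate: (dN/dm)|_{m2} * |dm2/dt|, with t = t2 (m2/m)^beta
beta, t = 3.2, 1.0e10               # yr
Ndot = C * m2**(-alpha) * m2 / (beta * t)
print(Ndot)                         # -> ~0.8 per yr (0.83 quoted)
print(0.5 * Ndot)                   # wind injection, ~0.4 M_sun/yr
print(3.3e7 * Ndot)                 # shells formed over t_b, ~2.7e7
\end{verbatim}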
\subsection{Wind shell dynamics}
\label{sec-windyn}
The dynamical effect of the wind matter in the bubble on a certain
wind shell or SN ejecta
depends on the filling and covering factors of wind shells. To assess the
situation one needs first to find
the average volume and size of a wind envelope at the final stage of its
expansion.
Here we assume that the bubble is static and that a star moves
with the mean velocity $v_s$. Although a singular isothermal
sphere is assumed for the stellar population, it is reasonable to
estimate $v_s$
using the Maxwell velocity distribution truncated at the escape velocity
$v_e$. The truncated Maxwell distribution is taken in the form
proposed by \citet{King66}:
$f(v)\propto[\exp(-v^2/2\sigma_v^2) - \exp(-v_e^2/2\sigma_v^2)]$
for $v<v_e$ and $f=0$ otherwise.
Adopting $\sigma_v=350$ km s$^{-1}$, i.e., $v_e=2\sigma_v=700$ km s$^{-1}$,
we obtain the average velocity $v_s=400$ km s$^{-1}$.
The geometry of the wind shell that forms as a result of
the interaction of the wind with the relativistic medium
depends on the value of the drag force exerted on the wind boundary.
If the drag force
is strong the stripped wind material creates a trailing plume. On the
other hand, if the drag force is very weak, then the wind shell moves with
the star velocity retaining spherically-symmetric shape and eventually
may escape the relativistic bubble. To explore this issue we consider
the major stages of the mass loss at the AGB and post-AGB stages
\citep{Stef98, LZ10}:
(1) the slow wind ($u=10$ km s$^{-1}$, $\dot{M}\sim 10^{-7}~M_{\odot}$ yr$^{-1}$)
on the time scale of the AGB stage, i.e., $10^6$ yr, (2) slow superwind
($\dot{M}\sim 10^{-5}-10^{-4}~M_{\odot}$ yr$^{-1}$)
during the last $\sim10^4$ yr of
the AGB stage, and (3) fast wind at the post-AGB stage
which corresponds to the planetary nebula (PN) stage ($\sim10^4$ yr).
The last stage practically does not contribute to the mass loss,
but turns out to be essential for the acceleration of the slow wind.
We adopt that 60\% of the hydrogen shell is lost at
the first stage and 40\% at the superwind stage
\citep{Stef98}, which implies
the mass-loss rates
$\dot{M}\sim 3\times10^{-7}~M_{\odot}$ yr$^{-1}$ at the slow wind stage and
$\dot{M}\sim 2\times10^{-5}~M_{\odot}$ yr$^{-1}$ at the superwind stage.
For the fast wind stage we adopt parameters derived from
the modelling of the X-ray emission of the PN:
$\dot{M}\sim 2\times10^{-8}~M_{\odot}$ yr$^{-1}$ and $u=1500$ km s$^{-1}$
\citep{LZ10}.
The fast wind parameters suggest that the total kinetic energy
released during this stage ($\sim10^4$ yr) is $E_3\approx4.5\times10^{45}$ erg.
When transferred to the slow wind shell with the mass
of $0.5~M_{\odot}$ this energy accelerates the shell up to
$u\approx30$ km s$^{-1}$,
in accord with the expansion velocities of evolved PN \citep{Rich08}.
The mass stripping rate of the wind shell scales as the shell radius
squared, so at the stage of slow wind the maximal
stripping is attained at the end of this stage. The radius of the
wind shell at this age can be estimated from energy arguments.
The kinetic energy of the wind shell is spent on the
$pV$ work against the external pressure $p$ and on the internal energy,
which results in the stopping radius of the wind shell
\begin{equation}
r_1=\left[\frac{3}{8\pi}\left(\frac{\gamma-1}{\gamma}\right)
\frac{M_1u_1^2}{p}\right]^{1/3}
= 0.19p_{10}^{-1/3}~~\mbox{pc}\,,
\label{eq-rwind}
\end{equation}
where $p_{10}=p/(10^{-10}~\mbox{erg cm}^{-3})$,
$\gamma=5/3$, $u_1=10$ km s$^{-1}$, and $M_1=0.3~M_{\odot}$ are used.
The wind shell moves as a whole together with the
white dwarf unless the bulk of the material is stripped into the
trailing plume.
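As a quick numerical check of equation (\ref{eq-rwind}), the snippet below
(an illustrative sketch with the constants rounded as in the text) evaluates
the stopping radius for the slow-wind shell and for the final
$0.5~M_{\odot}$ shell accelerated to 30 km s$^{-1}$:
\begin{verbatim}
import math
M_sun, pc = 1.989e33, 3.086e18
gamma, p = 5.0/3.0, 1e-10                     # erg cm^-3
def r_stop(M, u):                             # stopping radius, cm
    return (3/(8*math.pi) * (gamma-1)/gamma * M*u**2 / p)**(1.0/3.0)
print(r_stop(0.3*M_sun, 1e6) / pc)            # slow wind: ~0.2 pc
print(r_stop(0.5*M_sun, 3e6) / pc)            # final shell: ~0.5 pc
\end{verbatim}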
The wind shell stripping can be estimated following the \citet{Nu82}
consideration of the gas stripping for a galaxy moving in the ICM.
The turbulent stripping is determined by the combined effect of
the Kelvin-Helmholtz instability (KHI) and the ram pressure drag.
Indeed, the KHI broadens the boundary layer, which results in the
suppression of the KHI. The net stripping rate, therefore, is
controlled by the ram pressure
\begin{equation}
\dot{M}=\pi r_1^2\rho_a v_s=
1.6\times10^{-12}p_{10}^{1/3}~~M_{\odot}~\mbox{yr}^{-1}\,,
\label{eq-dotm}
\end{equation}
where we use $\rho_a=3p/c^2$ for the density of the ambient medium.
The average residence time of a wind shell in the bubble is
$R_b/v_s\sim1.2\times10^7$~yr, so the above mass loss rate implies that
the wind shell loses $\sim2\times10^{-5}~M_{\odot}$ while moving in the
bubble, a negligibly small amount compared to the mass of the wind shell,
$0.3~M_{\odot}$.
Alternatively, the stripping could be caused by the Alfven wave drag.
In this regard we note that the infinite conductivity approximation
for the wind shell is fully applicable.
A conducting body moving with velocity $v$ across the magnetic field $B$
experiences the drag force due to Alfven wave generation
\citep{Dre65}
\begin{equation}
F_d=(B^2/4\pi)(v/v_A)S\,,
\label{eq-dforce}
\end{equation}
where $S$ is the area of lateral surface perpendicular to $\vec{B}$,
$v_A$ is the Alfven velocity, $v_A\approx B/\sqrt{4\pi \rho_a}$ with
$\rho_a=3p/c^2$. Strictly speaking, the Alfven velocity in the
relativisic plasma \citep{Ged93} is smaller compared to this
expression by a factor of 0.7-0.9 depending on
the ratio of magnetic to total pressure; we neglect this difference.
For a sphere of the radius $r$ the lateral area is
$S\approx 2\pi r^2$ and the mass stripping rate caused by the Alfven
wave drag is
\begin{equation}
\dot{M}=\frac{B^2r_w^2}{2v_A}\approx4.1\times10^{-10}p_{10}^{7/6}
B_5~M_{\odot}~\mbox{yr}^{-1}\,,
\label{eq-dotma}
\end{equation}
where $B_5=B/(10^{-5}~\mbox{G})$.
The stripping rate due to the Alfven drag thus turns out to be two orders of
magnitude larger than
the rate according to equation (\ref{eq-dotm}). Yet even for the
Alfven drag the mass lost during the residence time
is only $\sim0.01~M_{\odot}$ which is a small fraction ($\sim3$\%)
of the wind shell. We thus conclude that the wind shell formed at the
slow wind stage
remains almost intact while traveling across the bubble.
A similar result can be obtained for the superwind stage.
The outcome of a combined effect of all three stages of the mass loss
is a spherical wind shell of $0.5~M_{\odot}$ expanding
with the velocity of 30 km s$^{-1}$. Using equation (\ref{eq-rwind})
one finds that the wind shell stopping radius is $r_w=0.5p_{10}^{-1/3}$ pc.
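The two stripping rates, equations (\ref{eq-dotm}) and (\ref{eq-dotma}),
can be compared with a few lines of arithmetic (again an illustrative
sketch of our own; $r_w=0.5$ pc and $B=10^{-5}$ G are assumed):
\begin{verbatim}
import math
M_sun, yr, c = 1.989e33, 3.156e7, 3e10
p, B, v_s = 1e-10, 1e-5, 4e7
r_w = 0.5 * 3.086e18
rho_a = 3*p / c**2                           # ambient density 3p/c^2
print(math.pi*r_w**2*rho_a*v_s * yr/M_sun)   # ram: ~1.6e-12 M_sun/yr
v_A = B / math.sqrt(4*math.pi*rho_a)
print(B**2*r_w**2/(2*v_A) * yr/M_sun)        # Alfven: ~4e-10 M_sun/yr
\end{verbatim}
The Alfv\'en-drag rate indeed exceeds the ram-pressure rate by about two
orders of magnitude, and even the former removes only a few per cent of the
shell mass over the $\sim10^7$ yr residence time.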
The above treatment of the wind dynamics assumes that the
cosmic-ray diffusion into the shell can be neglected. To check whether
this assumption is valid we adopt the Bohm diffusion coefficient $r_gc/3$,
where $r_g$ is the proton gyroradius.
This assumption is standard for the analysis of cosmic ray propagation
and supported by observational data on the cosmic ray acceleration in
supernova remnants \citep{Stage06}, although the concept of
a tangled field might seriously modify the picture
of cosmic-ray diffusion in a magnetic field \citep{Nar01}.
For a spectral index of relativistic protons $>2$ the bulk of the
energy of cosmic rays resides in low-energy protons, which permits us to use
the characteristic energy of relativistic protons $\sim 1$ GeV.
With $B=10^{-5}$ G one gets $r_g\sim3\times10^{11}$ cm.
The diffusion time is then
\begin{equation}
t_{d}\sim\frac{r_w^2}{r_gc}=3\times10^6B_5r^2_{w,18}~~\mbox{yr}\,,
\end{equation}
where $r_{w,18}=r_w/(10^{18}\,\mbox{cm})$. For the slow wind
stage with the final radius of $r_1\sim0.2$ pc the
diffusion time is comparable to the duration of this stage ($\sim10^6$ yr),
while the lifetimes of the superwind and fast wind stages
are significantly smaller than the diffusion time.
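The Bohm diffusion time quoted above follows directly from
$t_d\sim r_w^2/(r_g c)$; an illustrative evaluation:
\begin{verbatim}
r_g, c, yr, pc = 3e11, 3e10, 3.156e7, 3.086e18  # ~1 GeV, B = 1e-5 G
for r in (0.2, 0.5):                            # shell radius in pc
    print((r*pc)**2 / (r_g*c) / yr)             # ~1e6 and ~8e6 yr
\end{verbatim}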
The estimated time scale of the cosmic ray diffusion
suggests that the penetration of cosmic rays into the wind
can affect the wind expansion dynamics at the slow wind stage making the
pressure gradient smoother and the deceleration less pronounced. As a
result, the final radius of
the wind shell at the slow wind stage in fact could be somewhat larger,
$r_1>0.2$ pc. On the other hand, the effect cannot be significant
because the diffusion time increases $\propto r_1^2$, so the
role of the cosmic ray diffusion rapidly drops for larger radius.
We conclude therefore that the stopping
radius of the wind shell ($r_w\sim0.5$ pc), which includes
the combined effect of AGB and post AGB mass loss and
omits the cosmic ray diffusion, is a reasonable estimate.
\subsection{Wind shell escape}
\label{sec-winesc}
Outside the relativistic bubble the wind shell finds itself in
the intracluster thermal gas. For the mass stripping rate
$\dot{M}=\pi r_w^2\rho_av_s$ adopting the wind shell radius
$r_w=1.5\times10^{18}$ cm, number
density of the intracluster gas $n=0.02$ cm$^{-3}$, and $v_s=400$ km s$^{-1}$
one obtains
$\dot{M}\sim1.7\times10^{-7}~M_{\odot}$ yr$^{-1}$.
It takes $3\times10^6$ yr to completely strip
the $0.5~M_{\odot}$ wind shell over the distance of $\sim1$ kpc.
The same estimate can be obtained from the equation of deceleration
of the wind shell as a whole by the drag force $\pi r_w^2\rho_av_s^2$.
The escaping wind shell is thus decelerated in a close vicinity
of the bubble boundary.
The average residence time of the wind shell in the bubble
$R_b/v_s\sim1.2\times10^{7}$ yr is somewhat smaller than the age of
the fiducial
bubble, $3\times10^7$ yr. The total number of wind shells residing
in the bubble is, therefore, $\dot{N}R_b/v_s\sim10^7$, while
the filling factor of the ensemble of wind shells in the bubble is
\begin{equation}
f=N_w(r_w/R_b)^3\sim10^{-5}p^{-1}_{10}\,.
\label{eq-ffac}
\end{equation}
The probability of a collision with a wind shell is determined by the
ratio of the bubble radius and the mean free path. The latter is
\begin{equation}
\lambda=(\pi r_w^2n_w)^{-1}=64p^{2/3}_{10}~~\mbox{kpc}\,,
\label{eq-taug}
\end{equation}
where $n_w=(3/4\pi)N_w/R_b^3=2\times10^{-5}$ pc$^{-3}$
is the number density of wind shells. The probability
of shell collisions is low, because the average number of wind shells
along the bubble radius is only $\tau=R_b/\lambda=0.08$.
The average probability of the collision is approximately
$\approx[1-\exp(-\tau)]\approx\tau=0.08$.
A more accurate estimate can be obtained using the expression for
the escape probability of a photon emitted in a homogeneous sphere
\citep{Ost89}
\begin{equation}
p_{\rm esc}=\frac{3}{4\tau}\left[1-\frac{1}{2\tau^2}+\left(\frac{1}{\tau}+
\frac{1}{2\tau^2}\right)
\exp (-2\tau)\right]\,.
\label{eq-oster}
\end{equation}
For $\tau=0.08$
equation (\ref{eq-oster}) gives $p_{\rm esc}=0.94$. Most of the wind shells
therefore escape the bubble freely and only 6\% of them
collide with another shell.
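Equation (\ref{eq-oster}) is easily evaluated numerically; the sketch below
(our own illustration) also anticipates the value $\tau=5/6$ used for the RT
spikes in Section 3:
\begin{verbatim}
import math
def p_esc(tau):            # escape probability, homogeneous sphere
    return 3/(4*tau) * (1 - 1/(2*tau**2)
                        + (1/tau + 1/(2*tau**2)) * math.exp(-2*tau))
print(p_esc(0.08))         # wind shells: ~0.94
print(p_esc(5.0/6.0))      # RT spikes (Sect. 3.3.3): ~0.6
\end{verbatim}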
The fate of the collided wind shells depends on whether the collision
is adiabatic or radiative. With the average relative velocity of
collision $u\sim560$ km s$^{-1}$ and the wind shell density
$\sim200$ cm$^{-3}$ the estimated cooling
time of the shocked gas turns out to be $\sim3\times10^{10}$ s, comparable
with the hydrodynamic scale $r_w/u\sim3\times10^{10}$ s. This means
that both adiabatic and radiative collision regimes are plausible.
In the adiabatic case wind shells approximately retain their sizes and absolute
velocities, so the adiabatic collision
does not affect their escape. In the radiative case
the collided shells merge and form a thin dense pancake of a thickness
$b\ll r_w$ and density $\rho_c\sim(r_w/b)\rho_w$,
where $\rho_w$ is the density of the wind shell before the collision.
This pancake is liable to fragmentation into clumps of size $a\gtrsim b$
and average velocity $\sim v_s/\sqrt{2}$. It is easy to verify that for
fragments with
sizes $a\gtrsim b$ and density $\rho_c\sim(r_w/b)\rho_w$
the stripping (or deceleration) time is greater
than the stripping time of wind shells.
We thus conclude that most of the wind material
escapes into the hot ICM.
In our analysis of the wind shell dynamics we ignored a possible fragmentation
of the wind shell due to the Rayleigh-Taylor (RT) instability on the
deceleration or acceleration stages. This omission simplifies the
consideration; yet it does not affect the major conclusion that the wind
shell material escapes the relativistic bubble. As we will see below,
the RT fragmentation favours the escape of the shell matter.
\section{Type Ia supernovae in relativistic bubble}
With SN\,Ia production efficiency $\psi=0.008$ per one white dwarf
formed in the stellar population of
E-galaxies \citep{Prit08} and the stellar death
rate in the bubble of fiducial model $\dot{N}=0.83$ yr$^{-1}$
one expects $\psi\dot{N}t_b\sim2\times10^5$ SN\,Ia explosions
during the
relativistic bubble lifetime $t_b=3\times10^7$ yr. We now consider in
detail the SN expansion in the relativistic bubble and analyse the outcome
of the Rayleigh-Taylor fragmentation of the decelerating SN shell.
\begin{figure}
\includegraphics[width=85mm]{fig1.eps}
\caption{
Thin shell model of supernova evolution in relativistic bubble.
Shown are the shell radius ({\bf a}), evolution of velocities
of the shell (S), pre-shock velocity of supernova ejecta (SN),
and reverse shock speed (RS) ({\bf b}),
evolution of the reverse shock temperature ({\bf c}),
X-ray luminosity ({\bf d}).
}
\label{f-tshell}
\end{figure}
\subsection{SN expansion}
Given the small number of wind shells along the bubble radius
(Section \ref{sec-winesc}) the deceleration of the SN expansion
in the relativistic bubble is probably dominated
by the pressure of the relativistic fluid. Indeed,
for SN expanding with the characteristic velocity
$v\approx(2E/M)^{1/2}=10^9$ cm s$^{-1}$
one readily sees that the ram pressure is small compared to
the pressure of relativistic fluid: $\rho v^2=3p(v/c)^2\ll p$.
The crucial role of the
external relativistic pressure in the SN deceleration is a distinguishing
feature compared to the standard case of SN shell in the ordinary
interstellar medium.
In our analysis of the SN expansion we assume isotropic pressure of the
relativistic medium. This is the case if the mean free path of
relativistic protons along the magnetic field is much less than the SN radius.
Since the SN expands subsonically relative
to the external medium, in which the sound speed is $\approx c/\sqrt3$, the
strong forward shock does not form.
The reverse shock obviously forms, because outer layers of ejecta are
decelerated by the external pressure and the velocity jump between
the undisturbed ejecta and swept-up shell exceeds the sound speed
in the unshocked ejecta ($\sim10$ km s$^{-1}$). The SN is fully
decelerated when the
reverse shock crosses the bulk of the ejecta mass.
To estimate the stopping radius
of the SN one can use energy considerations
similar to those applied above
to the wind shell expansion. The initial kinetic energy $E$ of the SN should be
spent
on the $pV$ work against the external pressure and on the internal energy
$pV/(\gamma-1)$, which gives the stopping radius
\begin{equation}
r_{sn}=\left[\left(\frac{\gamma-1}{\gamma}\right)\frac{3E}{4\pi p}
\right]^{1/3}=36p_{10}^{-1/3}~~\mbox{pc}\,.
\label{eq-rstop}
\end{equation}
With a characteristic ejecta velocity $v\approx10^9$ cm s$^{-1}$ it takes
$t_s\sim r_{sn}/v\sim3\times10^3$ yr to reach $r_{sn}$, a rather short
time compared to the bubble lifetime.
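A quick check of equation (\ref{eq-rstop}) and of the stopping time (our own
illustrative arithmetic, with the fiducial parameters):
\begin{verbatim}
import math
pc, yr = 3.086e18, 3.156e7
gamma, p, E = 5.0/3.0, 1e-10, 1.5e51
r_sn = ((gamma-1)/gamma * 3*E/(4*math.pi*p))**(1.0/3.0)
print(r_sn / pc)               # -> ~36 pc
print(r_sn / 1e9 / yr)         # -> ~3.5e3 yr at v ~ 1e9 cm/s
\end{verbatim}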
The dynamics of the swept-up shell and the X-ray emission of the
reverse shock can be illustrated using a model based on
a thin shell approximation.
In this approximation the shell formed by the ejecta material flowing
into the reverse shock is treated as a thin shell whose dynamics
is governed by the dynamical pressure of the SN ejecta and the external
pressure $p=10^{-10}$ erg cm$^{-3}$. The equation of motion for the
thin shell is
\begin{equation}
M\frac{dv}{dt}=4\pi r^2\left[\rho\left(\frac{r}{t}-v\right)^2-p\right]\,,
\label{eq-motion}
\end{equation}
where the shell mass is determined by the mass conservation
\begin{equation}
\frac{dM}{dt}=4\pi r^2\rho\left(\frac{r}{t}-v\right)\,.
\label{eq-mass}
\end{equation}
These equations are solved numerically assuming a freely expanding
SN with the mass
of $1.4~M_{\odot}$, energy of $1.5\times10^{51}$ erg, the density distribution
$\rho\propto \exp{(-v/v_0)}$, boundary velocity of
$4\times10^4$ km s$^{-1}$, and initial outer radius of $10^{18}$ cm.
Results are displayed in Fig. \ref{f-tshell} which
shows the evolution of the shell
radius, velocity of the shell, boundary velocity of SN ejecta and velocity of
the reverse shock, reverse shock temperature assuming full equilibration,
and X-ray luminosity of the reverse shock. The maximal radius of the thin
shell model is 39 pc, slightly larger than the analytical estimate
$r_{sn}=36$ pc. Remarkably, the thin shell shows a
contraction phase (Fig.~\ref{f-tshell}a) which is a direct outcome of the
dynamical role of the external pressure. However, since we neglect
the internal pressure of the shocked envelope, the amplitude of the
contraction phase
in our model is exaggerated, so we stop the computations at this phase.
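For readers wishing to reproduce the behavior in Fig.~\ref{f-tshell}a, a
minimal integration of equations (\ref{eq-motion}) and (\ref{eq-mass}) can be
sketched as follows. This is our own illustrative implementation, not the
authors' code: the exponential-ejecta normalization $v_0=\sqrt{E/6M}$ and the
small seed mass of the initial shell are assumptions of the sketch.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M_sun, pc, yr = 1.989e33, 3.086e18, 3.156e7
p, E, M_ej = 1e-10, 1.5e51, 1.4*M_sun
v0 = np.sqrt(E/(6.0*M_ej))        # rho ~ exp(-v/v0) normalization
v_b, r0 = 4e9, 1e18               # boundary velocity, initial radius
t0 = r0 / v_b                     # start of the integration

def rho(r, t):                    # freely expanding ejecta density
    return M_ej/(8.0*np.pi*(v0*t)**3) * np.exp(-r/(v0*t))

def rhs(t, y):
    r, v, m = y
    flux = max(r/t - v, 0.0)      # ejecta velocity jump at the shell
    dm = 4.0*np.pi*r**2 * rho(r, t) * flux
    dv = 4.0*np.pi*r**2 * (rho(r, t)*flux**2 - p) / m
    return [v, dv, dm]

sol = solve_ivp(rhs, [t0, 6e3*yr], [r0, v_b, 2e-4*M_ej],
                method="LSODA", rtol=1e-8)
print(sol.y[0].max()/pc)          # maximal shell radius, ~(30-40) pc
\end{verbatim}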
To calculate the X-ray emission we assume
that the hot plasma in the shell is distributed homogeneously in the shell
with the thickness $\Delta R/R=0.1$. It is a rather crude approximation
because the density distribution in the reverse shock is expected to be
essentially inhomogeneous with a peak at the contact surface.
Yet it is reasonable enough to get an idea about X-ray luminosity
within a factor of two. The luminosity is maximal at the
contraction phase and reaches $\sim4\times10^{33}$ erg s$^{-1}$
at the shock temperature of $\sim200$ keV. The equilibration of electrons
and ions, however, is an oversimplification, so the shock electron
temperature and the luminosity should be considered approximate.
\begin{figure}
\includegraphics[width=80mm]{fig2.eps}
\caption{
Density distribution for the SN~Ia envelope expanding in the ICM at the ages
$\sim$60, 400, 3000, $2\times10^4$ and $10^5$ years for
the ICM temperature of 0.01 keV ({\bf bottom}) and
of 100 keV ({\bf top}).
The position of the sharp wiggle in the density distribution corresponds
to the contact discontinuity separating SN ejecta and the ICM.
\label{f-snv}}
\end{figure}
Another interesting view on the SN expansion dynamics
in the relativistic bubble
is given by one-dimensional hydrodynamic simulations in which we
assume a hot rarefied non-relativistic plasma to be a proxy
for the relativistic fluid. The thermal pressure is taken to be the same,
$p=10^{-10}$ erg cm$^{-3}$.
The ICM temperature varies in different runs from
$10^{-2}$ up to $10^{4}$ keV\,\footnote{We use nonrelativistic equation
of state in these illustrative runs even for $T_e=10^4$ keV,
although this is not valid for electrons.}. For $T=10^{4}$ keV the situation
is close to the case of the relativistic medium because the
thermal pressure exceeds the dynamical pressure in the upstream flow of
the forward shock. We assume homologous
expansion of the envelope $v\propto r$ and model the initial density
distribution 10 years after the explosion as $\rho\propto e^{-r/r_0}$,
where $r_0=3\times10^{-5}$ pc. The ejecta mass is $1.4~M_{\odot}$ and the kinetic
energy is $1.5\times10^{51}$ erg, while the maximum expansion velocity
is set to $2\times10^{4}~{\rm km~s^{-1}}$.
The dependence of the expansion dynamics on the temperature of the ICM
(at the same pressure) is apparent from Fig.~\ref{f-snv}. In the low
temperature case (bottom panel in Fig.~\ref{f-snv}) most of the
ejecta energy is spent on a forward shock, which is barely
resolved in our simple simulations. By contrast, for the
high temperature ICM almost all the initial kinetic energy is eventually
converted into the enthalpy of the ejecta and only a tiny amount of
energy is deposited in the forward shock. Accordingly the final size of the
envelope at the boundary separating ejecta and ICM is much larger for
the high temperature run. This is further illustrated in
Fig.~\ref{f-rb}, showing the time dependence of the envelope
radius. The simulations show that in the
limit of very hot ICM the final envelope radius converges to the value
given by the thin shell model (Fig. \ref{f-tshell}).
The X-ray luminosity of the reverse shock in the model with
$10^4$ keV medium is shown in Fig.~\ref{f-xlum} together with the
evolution of the radii of the reverse shock and contact surface.
The luminosity evolution is consistent with the prediction of
the thin shell model (Fig. \ref{f-tshell}) at the ages
$\lesssim6\times10^{3}$ yr. However, at the final stage
of the ejecta deceleration
($t\gtrsim10^4$ yr) the luminosity behavior differs from
that of the thin shell model. Indeed, at this phase the shocked gas
cannot be treated as a thin shell.
After about $10^4$ yr the reverse shock reaches the center. This is
accompanied by the overall contraction; as a result the
emission measure increases and the luminosity attains its maximal value
$\sim3\times10^{33}$ erg s$^{-1}$.
This is followed by the expansion which results in the luminosity drop.
At the most luminous phase, $L_{\rm x}\gtrsim10^{33}$ erg s$^{-1}$,
the SN~Ia remnant lives
$\sim4\times10^3$ yr. The temperature of the shocked ejecta at this phase
is in the range of $10^8-10^9$~K, so only a small fraction of the total
luminosity ($10-30$\%) falls into the standard {\em Chandra} band
(0.2-10 keV).
\begin{figure}
\includegraphics[width=80mm]{fig3.eps}
\caption{Radius of the contact discontinuity separating the SN ejecta and
the ICM as a function of time. The four curves shown correspond to
explosions in the ICM with the same pressure, but different
temperatures, 0.01, 1, 100, $10^4$ keV, from bottom to top.
Clearly the final size of
the ejecta is largest in the ICM with the highest temperature.
}
\label{f-rb}
\end{figure}
Generally, the SN expansion dynamics can be affected by the diffusion of
relativistic protons into the ejecta. This process might modify dynamics
by smoothing out the pressure jump at the boundary between ejecta and
relativistic fluid.
The time it takes to fill the SN by cosmic rays with the energy $E$ per
particle can be estimated as the time for the proton to escape from the
relativistic bubble layer adjacent to SN. The volume comparable with SN of
radius $R$
is a spherical layer of a thickness of $\sim0.3R$. Assuming Bohm diffusion
coefficient $D=cr_g/3$ one gets the diffusion time
\begin{equation}
t_{d}\sim\frac{(0.3R)^2}{4D}=1.5\times10^9B_5
\left(\frac{R}{30\,\mbox{pc}}\right)^2~\mbox{yr}\,.
\end{equation}
The time it takes to fill the SN with cosmic rays at the
epoch of essential deceleration turns out to be enormous compared to the SN age
($\sim3\times10^3$ yr). We conclude, therefore, that
the diffusion penetration of relativistic protons into the SN envelope
is unlikely to affect the SN expansion dynamics.
\begin{figure}
\includegraphics[width=80mm]{fig4.eps}
\caption{X-ray luminosity of the reverse shock in the model
with the ICM temperature of $10^4$ keV ({\bf top}) and the radii of
the contact discontinuity and reverse shock ({\bf bottom}).
The luminosity decreases after $\sim10^4$ yr because of
the significant expansion of the postshock layer between
the reverse shock ({\bf bottom}, lower curve) and the contact
surface (upper curve).
}
\label{f-xlum}
\end{figure}
\subsection{SN bulk motion}
After its expansion is halted the SN shell still
retains its bulk motion with the typical velocity
$v_s=400$ km s$^{-1}$. If the deceleration of the
bulk motion were negligible, the SN shell would escape the
relativistic bubble after the average residence
time $R_b/v_s\sim10^7$ yr. We now check whether
the ram pressure and the Alfven wave drag can substantially
decelerate the bulk motion inside the relativistic bubble.
The ram pressure drag force exerted on the SN shell is
$F_d=\pi r_{sn}^2\rho_av^2$,
where the ambient density is $\rho_a=3p/c^2$, assuming particles
dominate in the pressure. Using the equation of motion
\begin{equation}
M\frac{dv}{dt}=-\pi r_{sn}^2\rho_av^2\,,
\label{eq-snram}
\end{equation}
the characteristic deceleration length can be estimated as
\begin{equation}
l_d\approx \frac{Mc^2}{3\pi r_{sn}^2p}\approx 69p_{10}^{-1/3}~~\mbox{kpc}\,.
\label{eq-lramd}
\end{equation}
This shows that the deceleration of the bulk motion of the SN by the ram
pressure can be neglected.
The Alfven wave drag exerted on the spherical SN shell is defined
similarly to the case of the wind shell, i.e.,
$F_d=(1/2)B^2(v/v_A)r_{sn}^2$. The characteristic deceleration time
is $t_d\approx v_sM/F_d$, while the deceleration length,
$l_d\approx v_st_d$, is
\begin{equation}
l_d\approx \frac{Mv_sc}{Br_{sn}^2\sqrt{12\pi p}}\approx
0.3v_{s,400}B_5^{-1}p_{10}^{1/6}~~\mbox{kpc}\,,
\label{eq-ldec}
\end{equation}
where $v_{s,400}=v_s/(400~\mbox{km s}^{-1})$. This shows that
the Alfven wave drag
efficiently brakes the bulk motion of the SN shell at a distance
much smaller than the radius of the relativistic bubble even for a weak field
$B\sim3\times10^{-6}$ G.
We thus conclude that the ejecta of SN~Ia exploded in the
relativistic bubble cannot escape the bubble due to the bulk motion.
Remarkably, the SN material in its slow bulk motion is decelerated
at a distance only ten times larger than the maximum radius attained
during the high-speed envelope expansion.
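The two deceleration lengths, equations (\ref{eq-lramd}) and (\ref{eq-ldec}),
are compared below (an illustrative sketch of our own; the factor-of-order-unity
difference from the quoted 0.3 kpc reflects the order-of-magnitude nature of
the drag estimate):
\begin{verbatim}
import math
M_sun, kpc, c = 1.989e33, 3.086e21, 3e10
M, p, B, v_s = 1.4*M_sun, 1e-10, 1e-5, 4e7
r_sn = 36 * 3.086e18
print(M*c**2/(3*math.pi*r_sn**2*p) / kpc)   # ram: ~70 kpc, negligible
print(M*v_s*c/(B*r_sn**2*math.sqrt(12*math.pi*p)) / kpc)
# Alfven: ~0.1-0.3 kpc, much smaller than R_b
\end{verbatim}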
\subsection{Rayleigh-Taylor instability and spike deceleration}
The swept-up SN shell decelerating in the light relativistic
fluid is liable to the Rayleigh-Taylor instability
(RTI) which generally should result in the fragmentation of
the SN shell close to the stage of significant deceleration, i.e.,
at about the stopping radius $r_{sn}$.
The situation is similar, albeit inverted, with respect to the Crab Nebula.
The Crab shell {\em accelerated} by the shocked relativistic wind
shows long thin RT spikes directed backward the center
\citep{Hester96}. In case of decelerated SN
dense RT spikes protruded forward could travel large distances before
they stop. Yet it should be emphasised the difference with the Crab.
In the latter case the SN material pressurized by
the relativistic plasma is cool with the thermal velocity
$u_{\rm crab}\sim10$~km s$^{-1}$.
The SN~Ia material in the adiabatic reverse shock
is hot with the thermal velocity $u_{\rm sn}\sim10^4$~km s$^{-1}$.
The density contrast in the Crab is therefore larger by a factor of
$(u_{\rm sn}/u_{\rm crab})^2\sim10^6$.
The behavior of an RT spike may be affected by the KHI.
For a cylindrical spike of radius $a$ moving with the velocity $v$
along its own axis the perturbation growth time
for the most destructive wave number $k\sim1/a$
is $t_{\rm KH}\sim(a/v)\chi^{1/2}$, where $\chi$ is the density ratio of
SN and bubble material.
At the SN radius $R=20$ pc the contrast $\chi\sim10^5$. The distance at
which the
most dangerous mode grows is then $\sim vt_{\rm KH}\sim a\sqrt{\chi}\sim 300a$.
We are not aware of any multi-dimensional hydrodynamic simulations
of a dense cloud moving in a rarefied relativistic fluid. A close analogue
is provided by the two-dimensional hydrodynamic simulations
of a dense cloud moving in the
post-shock intercloud rarefied gas \citep{KMC94}.
These simulations
show that the cloud lifetime with respect to the fragmentation and
fragment deceleration is of order $\sim4(a/v)\chi^{1/2}$, roughly four times
larger than the Kelvin-Helmholtz time.
Adopting this characteristic time one finds that the distance at which
the spike will be destroyed and decelerated
is $\sim 4vt_{KH}\sim 4a\chi^{1/2}\sim10^3a$.
Assuming $a/R\sim 10^{-2}$, comparable with the fingers in the case of the
Crab Nebula \citep{Hester96}, one finds that an RT spike can travel
$\sim10R\lesssim0.3$ kpc, a rather small distance compared to the bubble radius.
On the other hand, \citet{Nu82} argues that
the increase of the width of the boundary
layer due to the KHI quenches the instability itself.
As a result the mass loss is suppressed and is eventually defined
by the momentum transfer [cf. equations (\ref{eq-dotm}) and (\ref{eq-dotma})].
In this case
the problem of mass stripping due to the KHI is reduced to the problem of
the deceleration of RT spikes, which is analysed in the next section.
The longitudinal magnetic field can also suppress the KHI. Let $\rho_1$ and
$\rho_2$ be the density of rarefied and dense fluid.
According to the criterion of the KHI in the presence of a magnetic field
\citep{Chan61}, the condition for the magnetic field to switch off
the KHI is
\begin{equation}
B>(2\pi\rho_1)^{1/2}v=(6\pi p)^{1/2}\left(\frac{v}{c}\right)
\approx1.4\times10^{-6}~~\mbox{G}\,,
\end{equation}
where $p=10^{-10}$ erg cm$^{-3}$, $c$ is the light speed, and
$v=10^9$ cm s$^{-1}$ are used.
The required magnetic field $B>1.4\times10^{-6}$~G is within the range
of field strength in the relativistic bubble, which can be as large as
several $10^{-5}$ G. The magnetic stabilization of RT spikes against KHI thus
seems quite plausible.
Hereafter we address the deceleration of an RT spike assuming its stability.
\subsubsection{Drag in collisionless case}
A typical RT spike is modelled as a cylinder with the mass $m$,
radius $a$, and length $b$, which are assumed to remain constant.
The RT spike presumably moves along its axis in the relativistic plasma
dominated by relativistic
particles. With the gyroradius $r_g\sim3\times10^{11}$ cm and the
RT spike radius $a\sim10^{-2}r_{sn}\sim10^{18}$ cm only a motion along
the regular magnetic field can be collisionless.
The mean free path for relativistic protons propagating
along the field can be constrained by scattering on the perpendicular
component of a random field.
The resulting mean free path along the mean field $B$ is
$\lambda_{\parallel}\sim r_g(B/\delta B)^2$ \citep{SMP07},
where $\delta B$ is the amplitude of a random field with resonance wave number
$k_{\rm res}r_g\sim1$.
Following \citet{SMP07} we assume a power-law spectrum of the
random field energy density $W(k)\propto k^{-s}$ with $s=1.67$,
between the maximal length scale $k_{min}\sim 1/R_b$
and the minimal scale $k_{max}\sim 1/r_g$. The
integrated energy density is normalized according to
the suggestion of \citet{SMP07}: $W=B^2/(8\pi)$. With
these prerequisites
one gets $(\delta B/B)^2\sim(k_{min}/k_{\rm res})^{s-1}\sim10^{-7}$ and
$\lambda_{\parallel}\sim 10^7r_g\sim3\times10^{18}$ cm, i.e.,
$\lambda_{\parallel}\gtrsim a$.
The situation is thus marginally collisionless, although uncertainties of
the relevant parameters do not preclude the collisional regime as well.
One therefore needs to consider both collisionless and collisional cases.
For $b\gg a$ the momentum exchange occurs primarily
via cosmic ray collisions with
the lateral surface of the spike. The momentum transferred to a colliding
particle with the energy $E$, assuming diffusive reflection, is $\sim(E/c^2)v$.
For the particle flux per unit surface area $(1/4)nc$,
where $n$ is the cosmic ray number density, the momentum transferred per second by
all the striking protons (the drag force) is
$F_d=(1/2)\pi ab\epsilon(v/c)$, where $\epsilon=3p$ is
the energy density of relativistic particles. Note that the
drag force could also be derived
using the average energy gain of a relativistic particle per collision with the
cloud, $E(v/c)^2$
\citep{F49}. Indeed, for a spherical cloud the total energy loss
per second in that case is
$\pi a^2 ncE(v/c)^2=\pi a^2 \epsilon v^2/c$. This implies the drag force
$\pi a^2\epsilon(v/c)$, which coincides with the above expression for
$F_d$ within the geometrical factor.
The equation of motion of the spike in the collisionless case with the above
value of $F_d$ then reads
\begin{equation}
m\frac{dv}{dt}=-1.5\pi abp\frac{v}{c}\,.
\label{eq-eqmot}
\end{equation}
We neglect here the head-on collisions which would contribute a
term of order $\sim a/b\ll1$ on the right-hand side.
Note that the transition from a spike to a spherical blob corresponds to $b=2a$
in the drag force expression.
The characteristic time of the spike deceleration is thus
\begin{equation}
t_d\sim\frac{2}{3}\frac{mc}{\pi abp}\,.
\label{eq-tdrag}
\end{equation}
The ratio $m/\pi a^2$ can be expressed via the surface density of the SN shell
at the stopping radius
as $m/\pi a^2=\eta M/4\pi r_{sn}^2$, where the parameter $\eta\gg1$
because the spike is formed by a shell patch with the radius $\gg a$.
The deceleration distance for the spike is then
\begin{equation}
l_d=vt_d\sim\frac{10}{9}\eta r_{sn}\frac{c}{v}\frac{a}{b}=
1.2\eta\frac{a}{b}~~\mbox{kpc}\,,
\label{eq-rbdec}
\end{equation}
where we make use of equation (\ref{eq-rstop}) and adopt $v=10^9$ cm s$^{-1}$
and $r_{sn}=36$ pc.
For $\eta\sim10$ and $a/b\sim0.1$ the spike can travel $\sim1$ kpc
before it gets completely decelerated. In the collisionless case the RT
fragments are thus decelerated efficiently inside the relativistic bubble.
It should be stressed, however, that the collisionless regime can be
realized only in the case of motion along the magnetic field,
so only a small fraction of RT spikes experiences this type
of deceleration.
\subsubsection{Drag in collisional case}
If the mean free path of cosmic ray protons is small, $\lambda\ll a$,
one expects that the drag force should be proportional to $v^2$.
The general condition for that is a large Reynolds number, Re$>10^2$.
To estimate Re we adopt the spike radius
$a\sim10^{-2}r_{sn}\sim10^{18}$ cm, the spike velocity $v=10^9$ cm s$^{-1}$, and
$\lambda=r_g$ as the mean free path for relativistic protons.
Assuming $B=10^{-5}$ G, i.e., $r_g\sim3\times10^{11}$ cm one
gets Re$=3av/(cr_g)\sim 3\times10^5$. The condition Re$>10^2$ thus is
fulfilled for $\lambda<3\times10^3r_g\sim 10^{15}$ cm, which is
a rather soft requirement.
In the collisional approximation the drag force is $F_d=3\pi a^2p(v/c)^2$.
Following the recipe of the previous section one obtains the
spike deceleration distance
\begin{equation}
l_d\approx\frac{5}{9}\eta r_{sn}\left(\frac{c}{v}\right)^2\,.
\end{equation}
For $v=10^9$ cm s$^{-1}$ and $r_{sn}=36$ pc one obtains $l_d\sim 17\eta$ kpc,
a rather large value that exceeds the bubble radius (5 kpc) even
for a modest value of $\eta\sim 1$. The ram pressure drag thus
hardly decelerates RT spikes inside the relativistic bubble.
\subsubsection{Alfven wave drag}
The Alfven wave drag can operate for a
large conducting body, $a>(v/c)r_g\sim10^{10}$ cm,
which is easily met for RT spikes.
Using the expression for the power radiated in the form of Alfven waves
\citep{Dre65} one can write the Alfven drag force acting
on the plasma spike with the radius $a$ and length $b$ as
\begin{equation}
F_d=\frac{B^2}{2\pi}\frac{v}{v_{\rm A}}ab\,,
\end{equation}
where $v_{\rm A}=B/\sqrt{4\pi\rho_a}$ is the Alfven velocity. Note
that it is only the
lateral surface area ($\approx2ab$) that matters. Following the arguments of the
previous sections one gets the spike deceleration distance
\begin{equation}
l_d\approx5\eta r_{sn}\left(\frac{\pi}{3}\right)^{3/2}\frac{\sqrt p}{B}
\frac{a}{b}
\frac{c}{v}=6\eta\frac{a}{b}B_5^{-1}p_{10}^{1/6}~~\mbox{kpc}\,.
\end{equation}
For fiducial model $p=10^{-10}$ erg cm$^{-3}$, $B=10^{-5}$ G,
$v=10^9$ cm s$^{-1}$ one
obtains $l_d\approx6\eta(a/b)$ kpc.
This result shows that in the case $a=b$ the blob can
travel a distance exceeding the radius of the relativistic bubble even for
moderate values of $\eta>2$. For a long spike ($b\gg a$) the deceleration
is stronger by a factor of $b/a$
and the deceleration distance is accordingly shorter.
For $b/a\sim10$ and
$\eta\sim 10$ the RT spike can travel $\sim6$ kpc, a distance
comparable with the adopted bubble radius $R_b=5$ kpc.
It takes roughly $R_b/v\sim 5\times10^5$ yr
for the spike to reach the bubble boundary assuming an average spike
velocity of $10^4$ km s$^{-1}$.
The effect of the Alfven wave drag is determined by the magnetic field.
If the field is weak, $B=3\times10^{-6}$ G, the deceleration distance
is $\sim18$ kpc,
substantially larger than the bubble radius. On the other hand, for
$B>10^{-5}$ G the deceleration distance is smaller than
the bubble radius and a significant amount of RT fragments will remain
in the relativistic bubble.
The escape probability for an RT spike in the case of $B=10^{-5}$ G
can be estimated adopting the mean free path $\lambda=vt_d=6$ kpc,
i.e., $\tau=R_b/\lambda=5/6$. Equation (\ref{eq-oster}) gives
in this case the escape probability $p_{\rm esc}\approx0.6$.
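For orientation, the three deceleration lengths derived in
Sections 3.3.1--3.3.3 can be evaluated together (our own illustrative sketch,
for $\eta=10$ and $a/b=0.1$):
\begin{verbatim}
import math
pc, kpc, c = 3.086e18, 3.086e21, 3e10
r_sn, v, p, B = 36*pc, 1e9, 1e-10, 1e-5
eta, a_b = 10.0, 0.1
print((10/9)*eta*r_sn*(c/v)*a_b / kpc)      # collisionless: ~1.2 kpc
print((5/9)*eta*r_sn*(c/v)**2 / kpc)        # collisional: ~180 kpc
print(5*eta*r_sn*(math.pi/3)**1.5
      * math.sqrt(p)/B * a_b * (c/v) / kpc) # Alfven drag: ~6 kpc
\end{verbatim}
Only the Alfv\'en drag yields a length comparable to the bubble radius, which
is why it controls the escape probability estimated above.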
\subsubsection{Spike deceleration by wind material}
The average number of wind shells along the
average distance to the bubble boundary for the fiducial model is
$\sim0.08$ (Section \ref{sec-winesc})
which means that for SN fragments the probability of colliding
with a wind shell is low, $\sim0.06$. A question arises:
what happens if such a collision does take place?
The deceleration is determined by the ratio of column densities ($\mu$) of the
projectile (RT spike) and target (wind shell). For the wind shell
\begin{equation}
\mu_w=\frac{m_w}{\pi r_w^2}=1.4\times10^{-4}~~\mbox{g cm}^{-2}\,,
\end{equation}
where $r_w=1.5\times10^{18}$ cm and $m_w=0.5~M_{\odot}$ are used. The column
density of the
spike $\mu_s=\eta M/(4\pi r_{sn}^2)\sim2\times10^{-8}\eta$ g cm$^{-2}\ll\mu_w$.
This comparison shows that
the spike will be fully decelerated in a single collision with a wind shell.
We conclude therefore that for the fiducial model
there is non-negligible probability, $\sim0.06$, that the spike will be
decelerated in the bubble via collision with the wind shell.
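The column-density comparison is immediate (illustrative arithmetic of our
own):
\begin{verbatim}
import math
M_sun = 1.989e33
mu_w = 0.5*M_sun / (math.pi*(1.5e18)**2)              # wind shell
mu_s = 10 * 1.4*M_sun / (4*math.pi*(36*3.086e18)**2)  # spike, eta = 10
print(mu_w, mu_s)    # ~1.4e-4 vs ~2e-7 g cm^-2: mu_s << mu_w
\end{verbatim}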
\begin{table}[t]
\caption[]{Parameters of fiducial model}
\label{tab-numbers}
\centering
\begin{tabular}{ l l c }
\hline\hline
\noalign{\smallskip}
Parameter & Description & Value \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$R_b$ & Bubble radius & 5 kpc \\
$t_b$ & Age & $3\times10^7$ yr \\
$p $ & Pressure & $10^{-10}$ erg cm$^{-3}$ \\
$B$ & Magnetic field & $10^{-5}$ G \\
$n $ & Number density of ICM & 0.02 cm$^{-3}$ \\
$M_s$ & Stellar mass in the bubble & $9\times10^{10}~M_{\odot}$ \\
$\dot{N}$ & Stellar death rate & 0.83 yr$^{-1}$ \\
$\dot{N}_{sn}$ & SN~Ia rate & 0.0066 yr$^{-1}$ \\
$r_w$ & Wind stopping radius & 0.5 pc \\
$r_{sn}$ & SN stopping radius & 36 pc \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\section{Discussion}
The aim of this paper has been to get an idea of what happens to the
matter ejected
by the stellar population of a BCG embedded into a bubble of
relativistic plasma. We have found that the expansion of a wind envelope
lost by a star is stopped by the pressure of the relativistic fluid
when its radius reaches $\sim0.5$ pc. Because of the small size of the
wind shell its bulk motion is decelerated neither by the ram
pressure nor by the Alfven wave drag, so the shell escapes the bubble
together with the parent star. The SN~Ia exploding in the
relativistic fluid expands up to a much larger radius $\sim30-40$ pc
and its bulk motion can be efficiently decelerated by the Alfven
wave drag. Unless the RT instability operates, the SN material would not
escape the relativistic bubble, but will instead be advected by the
buoyantly rising relativistic fluid.
The RT instability can strongly
modify the behavior of the ejecta. In the framework of our fiducial
model we find that a significant fraction of SN fragments escapes
the relativistic bubble. In
our analysis of this scenario we rely on the fiducial model
parameters outlined in Table 1.
A gas lump crossing the bubble boundary and entering
the ICM is decelerated after sweeping the ICM mass comparable
with its own mass. The escaping wind shell gets
decelerated in the thermal plasma of the ICM over the length of $l_w\sim
1$ kpc (Section \ref{sec-winesc}).
The wind deposits $\sim1.4\times10^7~M_{\odot}$ in this layer.
This value should be compared with the mass of the ICM gas, which is
already there. For $R_b=5$ kpc and hydrogen number density $n=0.02$ cm$^{-3}$
the mass of the ICM in the boundary layer with the thickness
$l_w=1$ kpc is $7\times10^7~M_{\odot}$, i.e.,
a factor of $\sim5$ larger than the mass deposited by the wind shells.
The deposited wind mass scales with the bubble radius as $\propto R_b^4$,
whereas the ICM mass in the layer is $\propto R_b^2$.
We thus conclude that for the fiducial model
the wind material escaping the relativistic bubble does
not change significantly the density of the surrounding ICM; moreover,
the effect is even smaller in the case of M~87, in which the bubble radius is
$R_b\sim1.4$ kpc.
The effect of the energy deposition by the escaping wind shells
is also negligible because the bulk velocities of the wind shells
are subsonic and the deposited mass is low. The chemical composition of the ICM
is not affected by the escaping wind shells either.
Unlike the wind shells, fragments of SN ejecta escaping the relativistic
bubble may have a profound effect on the enrichment of the ICM
by iron-peak elements. The resulting abundance is determined by
the width of the mixing layer. Generally, one should consider a
time-dependent model of the formation of the mixing layer. However, we
assume that the mixing layer in the fiducial model is formed
by the cumulative effect of all SNe.
Employing momentum conservation arguments, the deceleration distance for
an RT spike entering the ICM with the
velocity $v_i$ and decelerated down to $v_f$ turns out to be
\begin{equation}
l_{\rm sn}\sim\eta\ln(v_i/v_f)\frac{M}{4\pi r_{sn}^2\rho}
\approx0.5\eta~~\mbox{pc}\,,
\end{equation}
where we used $v_i/v_f=20$, $n=0.02$ cm$^{-3}$, $M=1.4~M_{\odot}$,
and $r_{sn}=36$ pc. For $\eta=10$ one gets the
deceleration length $l_{\rm sn}\sim 5$ pc. At first glance
the mixing layer could be identified with the deceleration layer.
This layer
contains $M_{icm}\sim7\times10^5~M_{\odot}$ of the ICM gas.
The mass produced by SN~Ia during the life time of the bubble is
$M_{sn}\sim3\times10^5~M_{\odot}$. If most of the SN mass escapes the
bubble, the amount of escaping iron in the mixing layer
turns out to be $\sim10^5~M_\odot$, which corresponds to an
iron abundance of $\sim80\times$(solar).
On the other hand, the mixing layer could be broader
because the total volume of shocked SN fragments in pressure
equilibrium substantially exceeds the volume of the deceleration layer.
A simple estimate based upon the pressure equilibrium suggests the total
volume occupied by the shocked SN ejecta to be $V_t=N_{sn}E/p$, which implies
a layer width of $\sim300$ pc. If this is identified with
the width of the mixing layer, then the iron abundance will be a
factor of two larger compared to the solar abundance of the pre-existing ICM.
The increase of the iron abundance by factor two changes the 0.6-2 keV
emissivity\footnote{ 0.6-2 keV is the energy range where present day
grazing incidence X-ray telescopes are most sensitive} by factors of
1.8, 1.5 and 1.3 for the gas temperatures 1, 2 and 3 keV respectively.
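The two abundance estimates can be summarized as follows (an illustrative
sketch of our own; the iron yield of $0.7~M_{\odot}$ per SN\,Ia and the solar
iron mass fraction of $1.9\times10^{-3}$ are assumed inputs, not quoted in
the text):
\begin{verbatim}
M_Fe  = 0.7 * 2e5            # iron from ~2e5 SNe Ia, in M_sun
Z_sun = 1.9e-3               # solar iron mass fraction (assumed)
for M_icm in (7e5, 4.2e7):   # ICM mass in the 5 pc and ~300 pc layers
    print(M_Fe/M_icm/Z_sun)  # ~100x and ~2x solar
\end{verbatim}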
The latter estimates assume complete mixing of the injected iron with the ICM,
which may not be the case. The point is that the deceleration of SN
fragments in the ICM results in strong
heating of the ejecta material up to a temperature corresponding to its
kinetic energy. Most of the iron therefore ends up
in the high-entropy/low-density gas with very little X-ray emission.
The observational outcome thus critically depends on the degree of mixing
between the hot SN gas and the relatively cool ICM.
The above picture of the mixing layer is very crude and one
cannot rule out the possibility that the expanding
relativistic bubble continuously catches up with the outer boundary of
the mixing layer, so that most of the decelerated SN material injected
into the ICM eventually turns out to be engulfed by the relativistic fluid.
Qualitatively this scenario then becomes
similar to the case when the bulk motion of the SN~Ia ejecta is decelerated well
inside the bubble. In this case no strong X-ray emission is expected
from fully expanded shells (because of the very low gas density). The
fate of the iron generated by SN~Ia in this scenario solely depends on
the evolution of the relativistic plasma, which can escape the central
region of the galaxy, as suggested by observations of, e.g., M87
\citep{Chur01, For05}.
\section{Conclusions}
We consider the outcome of the mass ejection by stars via winds and
supernovae inside a bubble of relativistic plasma inflated by an AGN
in the core of a BCG. Wind shells are likely to escape the
relativistic bubble and deposit their mass in the ICM within
$\sim1$ kpc from the bubble boundary. A SN~Ia exploding inside the
bubble is efficiently decelerated owing to the pressure of the
relativistic fluid. If the SN shells remain spherical until the
expansion of the envelope stops and do not fragment, then they would
not escape the bubble. In this case the iron produced by SN~Ia is
advected by the relativistic plasma and may leave the central region
of the BCG together with buoyantly moving bubbles.
As a possibility we consider a scenario in which the RT instability of
the SN envelope at the deceleration phase breaks the shell into
a multitude of RT spikes. The analysis of the deceleration of RT spikes
in the relativistic fluid shows that the SN fragments are able to
escape the bubble. The fragments are decelerated in the ICM in a close
vicinity of the bubble boundary, thus producing an Fe-rich layer. In the
optimistic scenario this Fe-rich layer can enhance the X-ray emission
around bubbles of relativistic plasma, producing bright rims around
them.
\section{Acknowledgements}
We are grateful to Nail Inogamov and Sergey Sazonov for useful
discussions, and to Ewald M\"uller for sharing hydrocode.
NC thanks Wolfgang Hillebrandt for the invitation to MPA.
The work was partly supported by the Division of Physical Sciences
of the RAS (the program ``Extended objects in the Universe'', OFN-16)
and the project NSH-5069.2010.2.
\section{Introduction}
Let $(X, \omega)$ be a compact Hermitian manifold of dimension $n$. The complex Monge-Amp\`ere equation on these manifolds has been studied extensively in recent years. The classical solution in the smooth case was provided by Tosatti-Weinkove \cite{TW10b} (for $n=2$ the equation was solved earlier by Cherrier \cite{Ch87}). The weak solutions to the equation were studied in \cite{DK12}, \cite{KN1, KN4, KN2, KN7}, \cite{LPT20}, and recent advances for semi-positive Hermitian forms are contained in \cite{GL21b}. The equation has found numerous geometric applications \cite{To15}, \cite{Ng16}, \cite{Di16}, \cite{Ni17}, \cite{T18}, \cite{KT21} and \cite{GL21a}.
To obtain the weak solutions one often uses the stability of potentials of the approximating equations.
It is well-known that the $L^p$-convergence of potentials does not imply the weak convergence of the corresponding Monge-Amp\`ere measures. Furthermore, in the Hermitian setting there are fewer such criteria available as compared to the K\"ahler manifolds. In this note, we prove such a criterion under suitable assumptions: uniform boundedness, and domination by the Monge-Amp\`ere measures of another ``nice'' sequence. Recall the Bedford-Taylor capacity: for a Borel subset $E\subset X$,
$$ cap_\omega(E) := \sup\left\{\int_E (\omega + dd^cw)^n : w\in PSH(X, \omega), 0\leq w\leq 1\right\}.
$$
Here $PSH(X,\omega)$ denotes the set of all $\omega$-plurisubharmonic ($\omega$-psh) functions on $X$.
A sequence $\{\varphi_j\}_{j=1}^\infty \subset PSH(X,\omega)$ is said to converge in capacity to $\varphi\in PSH(X,\omega)$ if for every $\varepsilon>0$,
$$
\lim_{j\to +\infty} cap_\omega(|\varphi_j -\varphi|>\varepsilon) =0.
$$
The main result of this expository note is as follows.
\begin{thm} \label{thm:main} Let $\{u_j\}$ be a uniformly bounded sequence of $\omega$-psh functions. Assume $(\omega +dd^c u_j)^n \leq C (\omega + dd^c \varphi_j)^n$ for some uniformly bounded sequence $\{\varphi_j\}$ such that $\varphi_j \to \varphi \in PSH(X,\omega)$ in capacity. If $u_j \to u \in PSH(X,\omega) \cap L^\infty(X)$ in $L^1(X)$, then a subsequence of $(\omega+ dd^c u_j)^n$ converges weakly to $(\omega +dd^cu)^n$.
\end{thm}
This is a generalization of \cite[Lemma~2.1]{CK06} from the local setting to compact Hermitian manifolds, as pointed out in the proof of \cite[Lemma~2.11]{KN22} (see also \cite{KN21}). On compact K\"ahler manifolds there are stronger results \cite[Theorem~2.1]{Hiep08} and \cite{DH12}. We will see that if the dimension $n=2$, then we have a similar result (Proposition~\ref{prop:cap-convergence}). However, we do not know if it still holds for dimensions $n\geq 3$.
It is also possible to extend the theorem to the case of Hermitian semi-positive $(1,1)$-forms $\alpha$, and we will consider it in a future paper.
An application of the theorem is that it provides a shorter proof of the existence of bounded $\omega$-psh solutions to Monge-Amp\`ere equations with the right hand side in $L^p(X)$, $p>1$.
\begin{cor} Let $0\leq f \in L^p(X)$ and $p>1$. Then, there exists a bounded function $u \in PSH(X,\omega) \cap L^\infty(X)$ and a constant $c>0$ solving $\omega_u^n = c f \omega^n.$
\end{cor}
\begin{proof} Approximate $f$ by smooth positive functions $f_j \to f$ in $L^p(X)$. By the Tosatti-Weinkove theorem \cite{TW10b} there are $u_j \in PSH(X,\omega) \cap C^\infty(X)$ and $c_j>0$ solving
$$
(\omega + dd^c u_j)^n = c_j f_j \omega^n, \quad \sup_X u_j =0.
$$
Using \cite[Eq. (5.12)]{KN1} we have $c_j \leq C_0$ with a uniform $C_0>0$. Therefore, by \cite{DK12} we have $- C_1 \leq u_j \leq 0$. By passing to a subsequence we may assume that $u_j \to u$ in $L^1(X)$ and $c_j \to c \geq 0$. Then,
$$
\int_{X} |u_j-u| (\omega+ dd^c u_j)^n = \int_X |u-u_j| c_j f_j \omega^n \leq C \|u_j -u\|_{L^1(X)}^\frac{1}{q},
$$
where $1/p+1/q =1$.
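The last inequality above is the standard interpolation step: since $\|f_j\|_{L^p(X)}$, $c_j$ and $\|u_j-u\|_{L^\infty(X)}$ are uniformly bounded, H\"older's inequality gives
$$
\int_X |u-u_j| c_j f_j \,\omega^n \leq c_j \|f_j\|_{L^p}\, \|u-u_j\|_{L^q} \leq c_j \|f_j\|_{L^p}\, \|u-u_j\|_{L^\infty}^{1-\frac{1}{q}}\, \|u-u_j\|_{L^1}^{\frac{1}{q}} \leq C \|u_j-u\|_{L^1(X)}^{\frac{1}{q}}.
$$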
Hence, by Lemma~\ref{lem:weak-convergence} (below) there is a subsequence of $\{u_j\}$, for simplicity still denoted by $\{u_j\}$, such that $\omega_{u_j}^n \to \omega_u^n = c f \omega^n$ weakly. Note that since $u$ is bounded, it follows from \cite[Remark~5.7]{KN1} that $c>0$.
\end{proof}
Another application is proving the existence of bounded solutions for a more general class of measures.
A positive Radon measure $\mu$ on $X$ is well dominated by capacity, or belongs to $\mathcal{F}(X,h)$ (see \cite{K05}), if
\[
\label{eq:dominate}
\mu(E) \leq F_h( cap_\omega (E)),
\]
for any Borel set $E \subset X$, where $F_h(x) = x/h(x^{-\frac{1}{n}})$ for some increasing
function $h : \mathbb R_+ \rightarrow (0, \infty ) $ satisfying
\[
\label{eq:admissible}
\int_1^\infty \frac{1}{x [h(x) ]^{\frac{1}{n}} } \, dx < +\infty.
\]
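For instance, $h(x) = (1+\log^+ x)^{n(1+\delta)}$ with $\delta>0$ satisfies \eqref{eq:admissible}: substituting $t = 1+\log x$ gives
$$
\int_1^\infty \frac{dx}{x\, (1+\log x)^{1+\delta}} = \int_1^\infty \frac{dt}{t^{1+\delta}} < +\infty.
$$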
\begin{cor} Let $\mu$ be a positive Radon measure with $\mu(X) >0$. Assume that $\mu$ is well dominated by capacity. Then, there exist a bounded solution $u\in PSH(X,\omega)$ and a constant $c>0$ solving
$
\omega_u^n = c \mu.
$
\end{cor}
\begin{proof} It follows easily from the proof of \cite[Theorem~3.1]{KN21}.
\end{proof}
\begin{remark} The solutions obtained in Corollary~2.7 and Corollary~2.8 are H\"older continuous and continuous, respectively. However, to get these statements, we need a stronger stability of solutions as in \cite{KN1, KN7}.
\end{remark}
\medskip
{\em Acknowledgement.} The first author is partially supported by grant no. \linebreak 2021/41/B/ST1/01632 from the National Science Center, Poland. The second author is partially supported by the start-up grant G04190056 of KAIST and the National Research Foundation of Korea (NRF) grant no. 2021R1F1A1048185.
\section{Proof of Theorem~\ref{thm:main} and convergence in capacity}
This section is devoted to the proof of the main theorem.
Since all functions under consideration are uniformly bounded, by normalizing
\[ \sup_X u_j = \sup_X \varphi_j =0,
\]
they belong to the subset $\mathcal{P}_0$ of $PSH(X,\omega)$ given by
$$\mathcal{P}_0 =\left\{ v\in PSH (X,\omega) \cap L^\infty(X): \sup_X v=0 \right\}.$$
Let us denote $dV = \omega^n$ the volume form of $X$.
We have the following result essentially due to Cegrell \cite{Ce98}; a detailed proof is given in \cite[Lemma~2.1]{KN21}.
\begin{lem} \label{lem:L1-norm-convergence}
Let $d\lambda$ be a finite positive Radon measure on $X$ vanishing on pluripolar sets. Suppose moreover that $\{u_j\} \subset \mathcal{P}_0$ converges $dV$-a.e. to $u \in \mathcal{P}_0$. Then there exists a subsequence $\{u_{j_s}\} \subset \{u_j\}$ such that
$$ \lim_{j_s\to +\infty} \int_X u_{j_s} d\lambda = \int_X u d\lambda.$$
\end{lem}
\begin{comment}
\begin{proof} Since $d\lambda$ is a finite measure it follows that $\sup_{j} \int_{X} |u_j|^2 d\lambda < +\infty$. So there exists a subsequence $\{u_j\}$ weakly converging to $v \in L^2 (d\lambda)$. By the Banach-Saks theorem we can find a subsequence $u_{j_k}$ such that
$$
F_k= \frac{1}{k} (u_{j_1} + \cdots + u_{j_k}) \to v \quad\text{in } L^2(d\lambda)
$$
as $k \to +\infty$. Extracting a subsequence $\{F_{k_s}\}$ of $\{F_k\}$ we get $F_{k_s} \to v$ a.e.\ in $d\lambda$, and also that $F_{k_s}$ converges a.e.\ to $u$ with respect to the Lebesgue measure. Therefore, $(\sup_{s>t} F_{k_s})^* \searrow u$ everywhere as $t\to +\infty$.
It follows that there is a subsequence which we still denote by $\{u_j\}$ such that
$$
\lim_{j\to \infty} \int_X u_j d\lambda = \int_X v d\lambda = \lim_{s\to \infty} \int_X F_{k_s} d\lambda = \lim_{t\to \infty} \int_X \sup_{s>t} F_{k_s} d\lambda = \int_X u d\lambda.
$$
where the first identity used the decreasing convergence property, the second one used the $d\lambda$-a.e.\ convergence, and the last used the fact that $d\lambda$ does not charge pluripolar sets. This completes the proof.
\end{proof}
\end{comment}
An immediate consequence is
\begin{cor}\label{cor:L1-convergence} There exists a subsequence, still denoted by $\{u_j\}$, such that
$$ \lim_{j\to \infty} \int_X |u_j - u| d\lambda =0.
$$
\end{cor}
\begin{proof}Applying Lemma~\ref{lem:L1-norm-convergence} twice to the sequences $\{u_j\}$ and $\max\{u_j, u\}$, we have (still denoting by $\{u_j\}$ the resulting subsequence)
$$\lim_{j\to \infty} \int_X u_j d\lambda = \int_X u d\lambda, \quad \lim_{j\to \infty} \int_X \max\{u_j, u\} d\lambda = \int_X u d\lambda.$$
Since $\max\{u,u_j\} = (u_j+u + |u_j-u|)/2$, integrating both sides and using the previous equations we get
$$
\int_X |u_j - u| \, d\lambda = 2\int_X \max\{u, u_j\}\, d\lambda - \int_X u_j\, d\lambda - \int_X u \,d\lambda \longrightarrow 0,
$$
that is, $u_j \to u$ in $L^1(d\lambda)$.
\end{proof}
The next result is a global analogue of \cite[Lemma~2.3]{KN22}, which says that under the assumptions of Theorem~\ref{thm:main} the sequence converges in $L^1$-norm uniformly with respect to a family of measures. On compact complex manifolds the Cegrell inequality will not be needed. For the reader's convenience, we give the details of the proof.
\begin{lem} \label{lem:uniform-L1-convergence} Let $\{w_j\}_{j=1}^\infty \subset \mathcal{P}_0$ be a uniformly bounded sequence that converges in capacity to $w\in \mathcal{P}_0$. Assume $\sup_j \int_X (\omega+ dd^c w_j)^n \leq C_1$ for some $C_1>0.$ Then,
$$
\lim_{j\to \infty} \int_X |u- u_j| (\omega+ dd^c w_j)^k \wedge \omega^{n-k} = 0 \quad \text{for } k=0,1,...,n.
$$
\end{lem}
\begin{proof} The case $k=0$ is the assumption on $\{u_j\}$, and we only give here the proof of the last inductive step; the other steps are very similar. In the proof that follows, $C>0$ denotes a uniform constant depending only on $X, \omega$ and the uniform bounds for $\|u_j\|_{L^\infty}$ and $\|w_j\|_{L^\infty}$; it may change from line to line.
Note that $|u-u_j| = (\max\{u, u_j\} - u_j) + (\max\{u, u_j\} - u)$. By quasi-continuity of $\omega$-psh functions and Hartogs' lemma, we have $\phi_j := \max\{u, u_j\} \to u$ in capacity. Fix $\varepsilon>0$. Then, when $j $ is large,
$$\begin{aligned}
\int _{X} (\max\{u, u_j\} -u) (\omega+dd^c w_j)^n
&\leq \int_{\{|\phi_j -u|>\varepsilon\}} (\omega+dd^c w_j)^n + \varepsilon \int_{X} (\omega+dd^c w_j)^n \\
&\leq C cap_\omega (|\phi_j -u|>\varepsilon) + C_1 \varepsilon.
\end{aligned}$$
Therefore, $\lim_{j\to \infty} \int_X (\phi_j-u) (\omega+dd^c w_j)^n =0$. Next, we have for $j>k$,
$$
\int_X (\phi_j -u_j) (\omega+dd^c w_j)^n - \int_X (\phi_j - u_j) (\omega+dd^c w_k)^n
= \int_X (\phi_j - u_j) dd^c (w_j - w_k) \wedge T ,
$$
where $T= T(j,k) = \sum_{s=1}^{n-1} \omega_{w_j}^s \wedge \omega_{w_k}^{n-1-s}$. By integration by parts,
$$\begin{aligned}
\int_X (\phi_j - u_j) dd^c (w_j - w_k) \wedge T
&= \int_X (w_j - w_k)\, dd^c \left[(\phi_j -u_j)\, T \right].
\end{aligned}
$$
Note that for $h= \phi_j -u_j$,
$$ \begin{aligned}
dd^c (h T)
&= dd^c h \wedge T + dh \wedge d^c T - d^c h \wedge dT + h dd^c T \\
&= dd^c h \wedge T + dh \wedge d^c \omega \wedge T_1 - d^c h \wedge d\omega \wedge T_2 \\
&=: S_0 + S_1 + S_2.
\end{aligned}$$
Here notice that $T_1$ and $T_2$ are positive currents of a similar type to $T$.
We now estimate each term $S_0, S_1$ and $S_2$ separately. Firstly, for the term $S_0$,
$$
\int_X (w_j - w_k) dd^c (\phi_j-u_j) \wedge T \leq \int_X |w_j -w_k| (\omega_{\phi_j}+ \omega_{u_j}) \wedge T.
$$
Since $\|w_j \|_\infty, \|u_j\|_\infty \leq A$ in $X$, it follows that
$$\begin{aligned}
\int_{X} |w_j - w_k| (\omega_{\phi_j} + \omega_{u_j}) \wedge T
&\leq A \int_{\{|w_j -w_k| > \varepsilon\}} (\omega_{\phi_j} + \omega_{u_j}) \wedge T \\ &\quad+ \varepsilon\int_{\{|w_j -w_k| \leq \varepsilon\}} (\omega_{\phi_j} + \omega_{u_j}) \wedge T \\
&\leq A^{n+1} cap_\omega( |w_j -w_k| > \varepsilon) + C \varepsilon,
\end{aligned}$$
where the uniform bound for the second integral on the right hand side follows from the uniform boundedness of the potentials (see, e.g., \cite[Proposition~2.3]{DK12}).
Since $w_j\to w$ in capacity, it follows that there exists $k_0$ such that for every $j>k\geq k_0$ the left hand side is less than $2C \varepsilon$.
Secondly, for the term $S_1$, by the Cauchy-Schwarz inequality \cite[Proposition~1.4]{Ng16} in the Hermitian setting,
$$ \begin{aligned}
& \left|\int_X (w_j - w_k) dh \wedge d^c \omega \wedge T_1 \right|^2 \\
&\leq C \int_X |w_j-w_k| dh\wedge d^c h \wedge \omega \wedge T_1 \int_X |w_j-w_k| \omega^2 \wedge T_1 \\
&\leq C \int_X |w_j-w_k| \omega^2 \wedge T_1.
\end{aligned}$$
Therefore, by an argument similar to the one in the first case, the integral on the right hand side is bounded by $C\varepsilon$ for every $j> k \geq k_0$ (increasing $k_0$ if necessary).
Lastly, the term $S_2$ is estimated in the same way as $S_1$.
Thus,
$$\begin{aligned}
\int_X (\phi_j -u_j) (\omega+ dd^c w_j)^n
&\leq \int_X (\phi_j -u_j) (\omega+ dd^c w_k)^n \\
&\quad + \left|\int_X (\phi_j -u_j) \omega_{w_j}^n - \int_X (\phi_j - u_j) \omega_{w_k}^n \right| \\
&\leq \int_X (\phi_j -u_j) (\omega+ dd^c w_k)^n + 2C \varepsilon \\
&\leq \int_X |u-u_j| (\omega+ dd^c w_{k})^n + 2 C \varepsilon.
\end{aligned}$$
Fixing $k=k_0$ and applying Corollary~\ref{cor:L1-convergence} for $d\lambda = (\omega+dd^c w_{k_0})^n$, we get that there is $k_1 \geq k_0$ such that
$$
\int_X (\phi_j -u_j) (\omega + dd^c w_j)^n \leq (2C + 1) \varepsilon \quad\text{for } j \geq k_1.
$$
Since $\varepsilon>0$ is arbitrary, the proof of the lemma is completed.
\end{proof}
The proof of Theorem~\ref{thm:main} is an immediate consequence of Lemma~\ref{lem:uniform-L1-convergence} and the following weak convergence lemma.
\begin{lem} \label{lem:weak-convergence}
Suppose that
\[\label{eq:energy-convergence}
\lim_{j\to +\infty} \int_X |u_j -u| \omega_{u_j}^n =0.
\]
Then, there exists a subsequence $\{u_{j_s}\}$ of $\{u_j\}$ such that $\omega_{u_{j_s}}^n$ converges to $\omega_u^n$ weakly.
\end{lem}
\begin{proof}
Let $A>0$ be such that $-A \leq u_j, u \leq 0$.
By passing to a subsequence we may assume further that $u_j \to u$ a.e. in $X$ with respect to $dV$.
Note that
$$u = (\limsup_{j\to \infty} u_j)^* = \lim_{j\to \infty} (\sup_{\ell \geq j} u_\ell)^*.$$
Set
\[\label{eq:hartogs-s}
w_{j} =\max \{u_j, u-1/j\}.
\]
By the Hartogs lemma $w_j$ converges to $u$ in capacity. Therefore, by the convergence theorem in \cite{DK12} (see also \cite{BT82}),
$
\lim_{j\to \infty} \omega_{w_j}^n = \omega_u^n.
$
Thanks to Lemma~\ref{lem:uniform-L1-convergence} and the fact that $w_j \to u$ in capacity, we have
\[\label{eq:uniform-L1}
\int_{X} |u_j -u| (\omega + dd^c w_j)^n \to 0 \quad \text{as } j\to +\infty.
\]
Now we are ready to conclude the proof of the lemma. By \eqref{eq:energy-convergence} and \eqref{eq:uniform-L1} we can choose a subsequence $\{u_{j_s}\} \subset \{u_j\}$ so that $$\int_X |u-u_{j_s}| (\omega + dd^c u_{j_s})^n + \int_X |u-u_{j_s}|(\omega + dd^c w_s)^n < 1/s^2.$$
Recall from \eqref{eq:hartogs-s} (relabeled along the subsequence) that $w_s = \max\{u_{j_s}, u-1/s\}$, which implies
$$
{\bf 1}_{\{u_{j_s} > u-1/s\}} (\omega+ dd^c w_s)^n = {\bf 1}_{\{u_{j_s} > u-1/s\}} (\omega + dd^c u_{j_s})^n.
$$
Therefore, for $\eta \in C^\infty(X)$,
$$\begin{aligned}
\left| \int_X \eta \omega_{u}^n - \int_X \eta \omega_{u_{j_s}}^n\right|
&\leq \left| \int_X \eta \omega_{u}^n - \int_X \eta \omega_{w_s}^n\right| + \left| \int_X \eta \omega_{w_s}^n - \int_X \eta \omega_{u_{j_s}}^n\right| \\
&\leq \left| \int_X \eta \omega_{u}^n - \int_X \eta \omega_{w_s}^n\right| + \left| \int_{\{u_{j_s} \leq u-1/s \}} \eta\, \left(\omega_{w_s}^n - \omega_{u_{j_s}}^n\right)\right|.
\end{aligned}$$
The first term on the right hand side goes to zero as $\omega_{w_s}^n \to \omega_u^n$. It remains to estimate the second term. Firstly, by the choice of $\{u_{j_s}\} $ at the beginning of this proof,
$$\begin{aligned}
\left|\int_{\{u_{j_s} \leq u-1/s \}} \eta \omega_{u_{j_s}}^n \right|
&\leq \|\eta\|_{L^\infty} \int_{\{u_{j_s} \leq u-1/s \}} \omega_{u_{j_s}}^n \\
&\leq s \|\eta\|_{L^\infty} \int_X |u- u_{j_s}| \omega_{u_{j_s}}^n \leq \frac{1}{s}\|\eta\|_{L^\infty} \to 0 \quad\text{as } s\to +\infty.
\end{aligned}$$
Similarly,
$$ \begin{aligned}
\left|\int_{\{u_{j_s} \leq u-1/s\}} \eta \omega_{w_s}^n \right|
&\leq \|\eta\|_{L^\infty} \int_{\{u_{j_s}\leq u-1/s\}} \omega_{w_s}^n \\
&\leq s \|\eta\|_{L^\infty} \int_X |u- u_{j_s}| \omega_{w_s}^n \to 0 \quad\text{as } s\to +\infty.
\end{aligned}
$$
The last two estimates complete the proof of the lemma. This also completes the proof of Theorem~\ref{thm:main}.
\end{proof}
We now give a similar criterion for convergence in capacity \cite[Proposition~2.8]{KN21}. Let us recall \cite[Theorem~3.5]{DK12} and \cite[Lemmas~2.1, 2.2]{KN1}.
Let $B>0$ be a uniform constant satisfying
$$\begin{aligned}
& -B\omega^2 \leq 2n dd^c \omega \leq B \omega^2, \\
& -B \omega^3 \leq 4n^2 d\omega \wedge d^c \omega \leq B\omega^3.
\end{aligned}
$$
Then, for $\varphi, \psi \in PSH(X, \omega) \cap L^\infty(X)$ satisfying $\sup_{\{\varphi<\psi\}} (\psi -\varphi) \leq 1$,
\[ \label{eq:comparison-V1} \begin{aligned}
\int_{\{\varphi<\psi\}} \omega_\psi^n &\leq \int_{\{\varphi < \psi\}} \omega_{\varphi}^n
+ C_n \max\{1,B\}^n \sum_{k=0}^{n} \int_{\{\psi< \varphi\}} \omega_{\varphi}^k \wedge \omega^{n-k},
\end{aligned}
\]
where $C_n$ is a dimensional constant. If the dimension $n=2$, then we have a better estimate, namely,
\[ \label{eq:comparison-V2} \begin{aligned}
\int_{\{\varphi<\psi\}} \omega_\psi^2 &\leq \int_{\{\varphi < \psi\}} \omega_{\varphi}^2
+ C_n \max\{1,B\} \int_{\{\psi< \varphi\}} \omega^2.
\end{aligned}
\]
\begin{prop} \label{prop:cap-convergence} Let $\{u_j\}$ be the sequence in Theorem~\ref{thm:main}. Then, $u_j$ converges to $u$ in capacity if and only if
\[\label{eq:cap-convergence}
\lim_{j\to +\infty} \int_X |u_j-u| \omega_{u_j}^k \wedge \omega^{n-k} = 0, \quad k=0,...,n.
\]
In particular, if $n=2$, then the sequence converges in capacity.
\end{prop}
\begin{proof} Suppose that $u_j \to u$ in capacity. It follows from Lemma~\ref{lem:uniform-L1-convergence} that \eqref{eq:cap-convergence} holds for every $k=0, 1, ...,n$. Conversely, suppose that \eqref{eq:cap-convergence} holds true. Let $\varepsilon>0$ be fixed and without loss of generality we may assume that
$
-1 \leq u_j, u \leq 0.
$
We have
$$
cap_\omega(|u_j-u|>\varepsilon) \leq cap_\omega (u_j-u>\varepsilon) + cap_\omega (u-u_j >\varepsilon).
$$
By Hartogs' lemma, $\max\{u_j, u\} \to u$ in capacity. Hence, $$\lim_{j\to +\infty} cap_\omega (u_j-u >\varepsilon) \leq \lim_{j\to +\infty} cap_\omega(\max\{u_j,u\} -u >\varepsilon) =0.$$
It remains to prove that $cap_\omega(u-u_j>\varepsilon) \to 0$ as $j\to +\infty$. Let $-1\leq \rho \leq 0$ be a function in $PSH(X,\omega)$. Then,
$$
\{u_j < u -\varepsilon\} \subset \left\{u_j < (1-\frac{\varepsilon}{4}) u + \frac{\varepsilon}{4} \rho - \frac{3\varepsilon}{4} \right\} \subset \left \{ u_j < u - \frac{\varepsilon}{2}\right\}.
$$
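Both inclusions follow from the normalizations $-1\leq u,\rho \leq 0$: since $(1-\frac{\varepsilon}{4})u \geq u$ and $\frac{\varepsilon}{4}\rho \geq -\frac{\varepsilon}{4}$, we have
$$
(1-\tfrac{\varepsilon}{4}) u + \tfrac{\varepsilon}{4} \rho - \tfrac{3\varepsilon}{4} \geq u - \tfrac{\varepsilon}{4} - \tfrac{3\varepsilon}{4} = u - \varepsilon,
$$
while $\rho - u\leq 1$ gives
$$
(1-\tfrac{\varepsilon}{4}) u + \tfrac{\varepsilon}{4} \rho - \tfrac{3\varepsilon}{4} = u + \tfrac{\varepsilon}{4}(\rho - u) - \tfrac{3\varepsilon}{4} \leq u - \tfrac{\varepsilon}{2}.
$$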
Applying \eqref{eq:comparison-V1} for $\varphi = u_j $ and $\psi = (1-\varepsilon/4) u + \varepsilon \rho/4 - 3\varepsilon/4$ and then using the previous inclusions for domains of integration on the left hand side and on the right hand side, we have
$$\begin{aligned}
& \int_{\{u_j < u-\varepsilon\}} \left[\omega
+ (1-\frac{\varepsilon}{4}) dd^c u + \frac{\varepsilon}{4} dd^c \rho \right]^n \\
&\leq \int_{\{u_j< u - \varepsilon/2\}} \omega_{u_j}^n
+ C_n \max\{1,B\}^n \sum_{k=0}^{n} \int_{\{u_j< u-\varepsilon/2\}} \omega_{u_j}^k \wedge \omega^{n-k}.
\end{aligned}$$
Since the integrand on the left hand side is larger than $(\varepsilon/4)^n\omega_\rho^n$, it follows that
\[ \label{eq:cap-inequality} \begin{aligned}
\left(\frac{\varepsilon}{4} \right)^n cap_\omega (u_j< u-\varepsilon) &\leq \int_{\{u_j < u -\varepsilon/2\}} \omega_{u_j}^n \\
&\quad + C_n \max\{1,B\}^n \sum_{k=0}^{n} \int_{\{u_j< u-\varepsilon/2\}} \omega_{u_j}^k \wedge \omega^{n-k}.
\end{aligned}
\]
Notice that for $k=0,...,n$ we have
$$
\int_{\{u_j < u -\varepsilon/2\}} \omega_{u_j}^k \wedge \omega^{n-k} \leq \frac{2}{\varepsilon} \int_X |u_j -u| \omega_{u_j}^k \wedge \omega^{n-k} .
$$
Combining this inequality with \eqref{eq:cap-convergence} and \eqref{eq:cap-inequality} we obtain
$\lim_{j\to +\infty} cap_\omega(u_j < u -\varepsilon) =0$. The proof is completed.
Finally let us finish the proof in the case $n=2$. Indeed, by \eqref{eq:comparison-V2} the inequality corresponding to \eqref{eq:cap-inequality} reads
$$
\left(\frac{\varepsilon}{4} \right)^2 cap_\omega(\{u_j < u-\varepsilon\}) \leq \int_{\{u_j< u-\varepsilon/2\}}\omega_{u_j}^2 + C_n \max\{1,B\} \int_{\{u_j < u-\varepsilon/2\}} \omega^2.
$$
The right hand side is bounded by
$$
\frac{2}{\varepsilon}\int_{X} |u_j-u| \omega_{u_j}^2 + \frac{2C_n}{\varepsilon} \max\{1,B\} \int_X |u_j- u| \omega^2,
$$
which tends to zero as $j$ goes to infinity.
Since $\varepsilon>0$ is fixed we conclude the convergence in capacity in dimension 2.
\end{proof}
\section{Introduction}
About ten years ago, Giroux suggested that the notion of overtwistedness could be generalized to higher dimensions using open books: a negative (or left-handed) stabilization should be overtwisted.
Since then, various versions of overtwistedness in higher dimensions have been defined, most notably the plastikstufe, a special kind of family of overtwisted disks defined in \cite{Niederkrueger:Plastikstufe}, and the bLob, the bordered Legendrian open book defined in \cite{Massot:weak_strong_fillability}.
These objects reduce to overtwisted disks in dimension $3$.
Furthermore, existence of plastikstufes or bLobs obstructs fillability, implies algebraic overtwistedness and the existence of a contractible periodic Reeb orbit, see \cite{Albers:Weinstein_PS,Massot:weak_strong_fillability}.
These are all features in common with overtwisted contact $3$-manifolds.
In addition, plastikstufes exhibit some flexibility properties as shown in \cite{Murphy:loose_plastik}, so they seem to be the right generalization.
On the other hand, it was also shown that negative stabilizations share several features of overtwisted manifolds; for instance, negative stabilizations do not admit fillings, are algebraically overtwisted, and also have contractible periodic Reeb orbits, \cite{BvK,Massot:weak_strong_fillability}.
It is therefore reasonable to look for plastikstufes and bLobs in negative stabilizations, but it seems difficult to find these objects in such contact manifolds.
In this paper, we take a different point of view on this question.
We shall show that negative stabilizations are part of a much larger class of ``negatively twisted'' contact manifolds.
In order to define these, consider a Liouville domain $W$ whose boundary is a prequantization bundle $P$ over an integral symplectic manifold $(Q,\omega)$.
A collar neighborhood of the boundary looks like $(P\times I,d(e^t \lambda) \,)$, so define the symplectomorphism $\tau:(p,t)\mapsto (Fl^{R_\lambda}_{f(t)}(p),t)$, where $f: I\to {\mathbb{R}}$ is a smooth function such that $f(0)=2\pi/\ell$ for some positive integer $\ell$ and $f(1)=0$.
In some cases $\tau$ extends to the whole of $W$;
for instance, if $\ell=1$. This gives a so-called fibered Dehn twist, which was already considered by Biran and Giroux, \cite{Biran_Giroux:fibered_Dehn}.
We will explain some sufficient conditions and an explicit procedure to define such an extension in Section~\ref{sec:fractional_twist} by using a covering trick. Assume for now that we can construct such an extension in some definite way and call it a right-handed fractional twist of power $\ell$: this notion generalizes Dehn twists and also fibered Dehn twists.
Consider the contact open book $\OB(W,\tau^{\pm 1})$ with page $W$ and monodromy either a right-handed fractional twist $\tau$ or a left-handed one, $\tau^{-1}$.
We shall show that the contact manifolds constructed this way are principal circle bundles over smooth manifolds, and that the contact structure is invariant under the circle action.
\begin{theorem}
\label{thm:result invariant}
Let $(W,\Omega=d\lambda)$ be a Liouville domain such that $P:=\partial W$ is a prequantization bundle over a symplectic manifold $(Q,k\omega)$, where $\omega$ is a primitive, integral symplectic form, and $k\in {\mathbb{Z}}_{>0}$.
Suppose $pr:\tilde W\to W$ is an adapted $\ell$-fold cover (see Definition~\ref{def:adapted_cover}; such a cover trivially exists for $\ell=1$).
Then one can define a right-handed fractional twist $\tau$ of power $\ell$ on the cover $\tilde W$.
In particular, a fibered Dehn twist exists on $W$.
Furthermore, we have the following results for contact open books with the above monodromies.
\begin{enumerate}
\item The contact open book $\OB(\tilde W,\tau)$ is a prequantization bundle over the symplectic manifold
$$
M_+=\left(
P\times_{S^1,+}D^2
\right)
\cup_\partial W,
$$
where $P\times_{S^1,+}D^2$ denotes the associated disk bundle which is a concave filling for $P$.
In addition, this contact manifold is convex fillable.
\item The contact open book $\OB(\tilde W,\tau^{-1})$ is a principal circle bundle over the smooth manifold
$$
M_-=\left(
P\times_{S^1,-}D^2
\right)
\cup_\partial W,
$$
where $P\times_{S^1,-}D^2$ denotes the associated disk bundle dual to $P\times_{S^1,+}D^2$.
Furthermore, the contact structure on this contact open book is $S^1$-invariant, and the almost dividing set is contactomorphic to the prequantization bundle $P$.
\end{enumerate}
\end{theorem}
Before we continue, let us point out that a particularly nice class of Liouville manifolds $W$ with the above properties can be obtained from the following construction.
Let $(M,\omega_M)$ be an integral symplectic manifold, and suppose that $Q$ is a Donaldson type symplectic hypersurface that is Poincar\'e dual to $k[\omega_M]$.
For $k$ sufficiently large, $W:=M-\nu_{M}(Q)$ carries the structure of a Weinstein domain, and its completion is a Weinstein manifold whose end is the positive part of the symplectization of a prequantization bundle over $Q$.
We shall show that contact open books with left-handed twists as monodromy can be non-fillable, and below we give some criteria for this.
If we replace the word non-fillable by ``overtwisted'', the result becomes somewhat similar to the Giroux criterion for overtwistedness.
In fact, it is even more closely related to another result of Giroux, namely a result on invariant contact structures, \cite{Giroux:circle_bdls}.
Our result is much weaker, but it does hold in higher dimensions.
\begin{theorem}[Rough version of Giroux criterion in higher dimensions]
\label{thm:result}
As in Theorem~\ref{thm:result invariant}, let $(W^{2n-2},\Omega=d\lambda)$ be a Liouville domain with prequantization boundary $P$.
Assume that $\tilde W \to W$ is an adapted $\ell$-fold cover, and let $\tau$ denote a right-handed fractional twist of power $\ell$, defined on $\tilde W$.
Consider the invariant contact structure on $Y=\OB(\tilde W,\tau^{-1})$, and assume that one of the following conditions holds.
\begin{itemize}
\item $\tau$ is a fractional twist of power $\ell>1$.
\item $\tau$ is a fibered twist ($\ell=1$), $k>1$, and the inclusion $i:P=\partial W\to W$ induces an injection on $\pi_1$.
\item $\tau$ is a fibered twist ($\ell=1$) on $\tilde W=W=T^*\H \P^m, ~T^*Ca\P^2$, the cotangent bundles of quaternionic projective space and the Cayley plane.
\item $\tau$ is a fibered twist ($\ell=1$), $n\geq 3$, $\pi_1(Q)=0$, $k=1$, $c_1(W)=0$ and $c_1(Q)=c[\omega]$ with $c\leq n-\frac{\max \ind+3 }{2}$.
Here $\max \ind$ is the maximal index of a Morse function on $W$ that is convex near the boundary.
\end{itemize}
Then $Y$ is not convex semi-positively fillable.
\end{theorem}
To clarify the condition on the maximal index of a Morse function, we point out that if $W^{2n-2}$ is a Weinstein domain, then one can find an $\Omega$-convex Morse function, which satisfies $\max \ind\leq n-1$.
We remark that some condition is necessary for non-fillability in the case of a fibered twist. Indeed, take $W=D^{2n-2}$.
Then $P=\partial W$ is a prequantization bundle over ${\mathbb{C}} \P^{n-2}$, so $c=n-1>0$, and $k=1$.
In this case a fibered Dehn twist is symplectically isotopic to the identity relative to the boundary, so $Y=\OB(D^{2n-2},\tau^{-1})$ is contactomorphic to $(S^{2n-1},\xi_0)$, which is of course fillable.
On the other hand, this condition on the Chern class can certainly be relaxed, as the case $W=T^*\H \P^m, ~T^*Ca\P^2$ shows.
We also want to point the reader's attention to the case of $\dim W=2$, where $P$ is a collection of circles.
In this case, various questions about tightness and overtwistedness have been addressed by Giroux in \cite{Giroux:circle_bdls}.
We just mention one particular case, namely $\OB(T^*S^1,\tau_{Dehn}^2)\cong \OB(T^*S^1,\tau_{fibered})$, which is overtwisted, and in particular not fillable.
The idea of the proof of Theorem~\ref{thm:result} is to construct a $1$-dimensional family of holomorphic planes in an assumed filling with one ``boundary'' component on the boundary of the filling. Such a family arises rather naturally in the case of a left-handed twist.
The conditions we impose prevent breaking of this family, and also exclude this family from having its other boundary component on the boundary of the filling.
Since sphere bubbles are prevented by the semi-positivity assumption, there cannot be another boundary component at all, and we arrive at a contradiction.
The holomorphic curve methods that we use also apply to the $3$-dimensional case (so $n=2$), but the resulting statements are weaker than those of Giroux.
The methods used in the proof of Theorem~\ref{thm:result} also imply the following result.
\begin{corollary}
\label{cor:weinstein_conj}
If the contact manifold $Y$ obtained in Theorem~\ref{thm:result} satisfies the conditions given there, then the Weinstein conjecture holds for that particular contact structure.
\end{corollary}
Finally, these methods can also be used to tackle the symplectic isotopy problem for fibered twists, first considered by Biran and Giroux, \cite{Biran_Giroux:fibered_Dehn}, and also for fractional twists.
\begin{theorem}
\label{thm:symplectic_isotopy}
Let $W$ be a Weinstein domain admitting a right-handed fractional twist $\tau$ of power $\ell$, and suppose that $\OB(W,\tau^{-1})$ has no convex semi-positive filling (shown for instance by Theorem~\ref{thm:result}).
Assume that $Y_+=\OB(W,\tau)$ admits a convex, semi-positive symplectic filling (a sufficient condition is given in Lemma~\ref{lemma:fillability_BW}).
Then for all $N \in {\mathbb{Z}}_{>0}$, the contact manifold $\OB(W,\tau^{-N})$ is not convex semi-positively fillable.
In particular, $\tau^N$ is not symplectically isotopic to the identity relative to the boundary.
\end{theorem}
To make the statement about powers of fractional twists, we use Avdek's cobordism techniques, \cite{Avdek:Liouville}.
The symplectic isotopy problem for fibered twists was also addressed in \cite{CDvK:right-handed}, and the methods in that paper are somewhat simpler than those employed here.
Also, in several of the cases the obtained twists are not even smoothly isotopic to the identity relative to the boundary.
\begin{remark}
If we assume that contact homology algebra can be defined, using for instance the not yet finished polyfold techniques, then one can show that all contact manifolds from Theorem~\ref{thm:result} are actually algebraically overtwisted as in \cite{Bourgeois_Niederkrueger:AOT} and \cite{BvK}, meaning that contact homology algebra vanishes.
We will give arguments for these conjectural statements in Section~\ref{sec:HC=0}.
This should also imply that there are no weak fillings at all, cf.~\cite[Theorem 5]{Latschev_Wendl}, but again this assumes polyfolds or some kind of non-classical transversality argument.
Concerning weak fillings, one has the general fact that weak fillings can be deformed into strong fillings if $H^2(Y;{\mathbb{R}})=0$, see \cite[Remark 2.11]{Massot:weak_strong_fillability}.
\end{remark}
\subsection*{Plan of the paper}
The paper is organized as follows.
In Section~\ref{sec:def} we give definitions and discuss some background on the type of filling obstructions that we shall use.
In Section~\ref{sec:invariant_contact} we discuss and construct the invariant contact structures from Theorem~\ref{thm:result invariant}.
The remainder of the paper is used for the proof of Theorem~\ref{thm:result} and its corollaries.
Section~\ref{sec:indices} contains index computations.
Section~\ref{sec:holomorphic_plane} is the main technical part of the paper: we construct a rigid holomorphic plane, and discuss transversality and uniqueness.
In Section~\ref{sec:other_curves} we look at other holomorphic curves, and show that the assumptions of Theorem~\ref{thm:result} imply that no other rigid curves exist.
Finally, we combine all ingredients in Section~\ref{sec:wrapup} to prove Theorem~\ref{thm:result}, Corollary~\ref{cor:weinstein_conj} and Theorem~\ref{thm:symplectic_isotopy}.
\subsection*{Acknowledgements}
This project grew out of one of the questions asked at the AIM workshop on ``Contact topology in higher dimensions''. OvK would like to thank AIM for their hospitality.
We thank Klaus Niederkr\"uger, Chris Wendl and Urs Frauenfelder for useful comments and discussions.
RC is partially supported by the NSC grant 101-2115-M-006-003 and NCTS(South), Taiwan;
FD is supported by grant no. 10631060 of the National Natural Science Foundation of China;
OvK is supported by the NRF Grant 2012-011755 funded by the Korean government.
\section{Definitions and setup}
\label{sec:def}
Let $(Q,\omega)$ be a symplectic manifold with integral symplectic form, i.e.~$[\omega]\in H^2(Q;{\mathbb{Z}})$.
According to \cite{Kobayashi:circle_bundle}, there is a complex line bundle $L$ over $Q$ with $c_1(L)=[\omega]$. If $H^2(Q;{\mathbb{Z}})$ is torsion free, then this line bundle is unique up to isomorphism.
The associated principal $S^1$-bundle $\Pi: P \to Q$ carries a contact form $\theta$, the so-called {\bf Boothby--Wang form}, which is a connection $1$-form on $P$ whose curvature form equals $-2\pi\omega$,
\begin{equation}
\label{eq:d_connection}
d\theta = -2\pi\Pi^*\omega.
\end{equation}
The vector field $R_\theta$ generating the principal $S^1$-action satisfies the following equations
$$
\iota_{R_\theta} \theta=1,\quad\iota_{R_\theta}d\theta=0,
$$
since $\theta$ is a connection form. On the other hand, these are also the equations defining the Reeb vector field for $\theta$.
The resulting principal circle bundle is called the
\textbf{Boothby--Wang bundle} or {\bf prequantization bundle} associated with $(Q,\omega)$.
It is useful to think of $Q$ as the quotient space of the prequantization bundle $P$ by the $S^1$-action.
\subsection{Fillings}
Let $(C,\Omega)$ be a compact symplectic manifold with boundary.
We call a boundary component $Y$ of $C$ {\bf convex} if there is a Liouville vector field $X$ (so $\mathcal L_X \Omega=\Omega$) on a collar neighborhood of $Y$ that points outward.
We call the boundary component $Y$ {\bf concave} if there is a Liouville vector field pointing inward.
Note that the Liouville vector field induces a contact form on the boundary, given by $\lambda_Y=(i_X \Omega)|_{Y}$.
If $Y$ is a convex or concave boundary component, then a collar neighborhood $\nu_C(Y)$ is symplectomorphic to a piece of a symplectization, namely $( [-\epsilon,\epsilon]\times Y,d(e^t \lambda_Y)\,)$, and the Liouville field $X$ corresponds to the vector field $\partial_t$.
We will often attach a {\bf convex end}, given by $( [\epsilon,\infty[ \times Y,d(e^t \lambda_Y)\,)$, to a convex boundary in order to obtain a symplectic manifold with a Liouville vector field that is forward complete.
Similarly, we can define a {\bf concave end}.
\begin{definition}
A {\bf compact symplectic cobordism} is a compact symplectic manifold $(C,\Omega)$ whose boundary components are all convex or concave.
A {\bf complete symplectic cobordism} is then obtained by attaching convex and concave ends to the boundary components.
\end{definition}
We come now to symplectic fillings, which can be thought of as compact symplectic cobordisms with only a convex, or only a concave boundary component.
\begin{definition}
A {\bf convex symplectic filling} for a contact manifold $(Y,\xi=\ker \lambda_Y)$ is a connected, compact symplectic manifold $(C,\Omega)$ with boundary $Y$ such that
\begin{itemize}
\item $(Y,\lambda_Y)$ is a convex boundary of $(C,\Omega)$ with $\lambda_Y=(i_X \Omega)|_{Y}$.
\item $(Y,\lambda_Y)$ is oriented as the boundary of $C$ using that $X$ points outward.
\end{itemize}
Convex symplectic fillings are also known as {\bf strong fillings}.
A {\bf Liouville filling} or {\bf Liouville domain} is a convex symplectic filling with a globally defined Liouville vector field $X$.
\end{definition}
In particular, Liouville domains are exact symplectic manifolds.
One of the nicest symplectic fillings and a basic building block for many constructions is formed by so-called Weinstein manifolds, a symplectic analogue of Stein manifolds.
\begin{definition}
A {\bf compact Weinstein manifold} or {\bf Weinstein domain} consists of a Liouville domain $(W,\Omega,X)$ together with a Morse function $f:W\to {\mathbb{R}}$ such that $\partial W$ is a regular level set of $f$, and $X$ is gradient-like for $f$, i.e.~$X(f)>0$ except at critical points, where it has a standard form (like a gradient vector field).
\end{definition}
An immediate corollary of the definition is that $\partial W$ is of contact type.
We say that $\partial W$ is {\bf Weinstein fillable}.
By results of Eliashberg, one can deform {\bf Weinstein manifolds}, which are obtained by attaching a symplectization to the boundary, into Stein manifolds~\cite{Cieliebak:Stein_Weinstein}.
Clearly, Weinstein manifolds are special cases of Liouville manifolds which are in turn special cases of convex symplectic manifolds.
\begin{remark}
When talking about symplectic cobordisms and fillings, we will often drop the adjectives \emph{compact} and \emph{complete}, since either case can be converted into the other, by either attaching a symplectization piece, or by restricting to a compact subset.
\end{remark}
\subsubsection{Weak symplectic fillings}
The notion of weak symplectic filling in higher dimensions has recently been defined in a satisfactory way by Massot, Niederkr\"uger and Wendl in \cite{Massot:weak_strong_fillability}, see also \cite{DingGeiges:circle_bdls}.
We take their definition.
\begin{definition}[Massot, Niederkr\"uger, Wendl]
Let $(Y^{2n-1},\xi)$ be a cooriented contact manifold.
A {\bf weak symplectic filling} for $(Y,\xi)$ is a compact symplectic manifold $(F,\Omega)$ with boundary such that
\begin{itemize}
\item $\partial F=Y$ as oriented manifolds,
\item for a positive contact form $\alpha$ with $\xi=\ker \alpha$ we have
$$
\alpha\wedge (t d\alpha+\Omega|_{\xi})^{n-1}>0
$$
for all $t\geq 0$.
\end{itemize}
\end{definition}
We will also need the following definition,
\begin{definition}[Massot, Niederkr\"uger, Wendl]
Let $(F^{2n},J)$ be an almost complex manifold with boundary. We say that a contact manifold $(Y^{2n-1},\xi)$ is the {\bf tamed pseudoconvex boundary} of $(F,J)$ if $Y=\partial F$ where we orient $Y$ as the boundary of $F$, and
\begin{itemize}
\item the contact structure $\xi$ is the field of $J$-complex tangencies to $Y$, that is $\xi=TY \cap J TY$.
\item there is a symplectic form $\Omega$ on $F$ taming $J$, and
\item $Y$ is $J$-convex, meaning that for all positive contact forms $\alpha$ defining $\xi$ as an oriented hyperplane, we have
$$
d\alpha(v,Jv )>0 \text{ for all }v\in \xi-0.
$$
\end{itemize}
\end{definition}
\begin{theorem}[Massot-Niederkr\"uger-Wendl]
A compact symplectic manifold $(F,\Omega)$ is a weak filling of $(Y,\xi)$ if and only if there is an almost complex structure $J$ such that
\begin{itemize}
\item $J$ is tamed by $\Omega$,
\item $(Y,\xi)$ is the tamed pseudoconvex boundary of $(F,J)$.
\end{itemize}
\end{theorem}
These definitions allow one to use holomorphic curve machinery, and with these tools one can show that under certain conditions, such as negative stabilization or the existence of a bLob, weak fillings do not exist.
See \cite{BvK,Massot:weak_strong_fillability} for the definitions of negative stabilizations and bLobs.
As is usual, we will compactify the moduli space of holomorphic curves, \cite{BEHWZ:compactness}. Such a compactification can include nodal curves with multiply covered components. These cause trouble for regularity arguments, and hence it is not clear what structure the moduli space actually has.
To avoid these issues, one should be able to appeal to the polyfold machinery. However, this theory has not yet been completed, so we impose additional assumptions to deal with regularity.
\begin{definition}
A symplectic manifold $(F^{2n},\Omega)$ is called {\bf semi-positive} if every class $A\in \pi_2(F)$ with $\langle [\Omega],A \rangle >0$ and $\langle c_1(F),A \rangle \geq 3-n$ has non-negative Chern number.
\end{definition}
Note that this is trivially satisfied if $n\leq 3$, since then $3-n\geq 0$ and the hypothesis $\langle c_1(F),A \rangle \geq 3-n$ already forces the Chern number to be non-negative.
\subsection{Contact open books}
\label{sec:contact_OB}
We follow Giroux' original construction of contact open books, which slightly modifies an idea due to Thurston and Winkelnkemper, see \cite{Giroux:ICM2002}.
Let $(W,d\lambda)$ be a Liouville domain with contact type boundary $P:=\partial W$, and suppose that $\psi:W \to W$ is a symplectomorphism that is the identity in a neighborhood of $\partial W$.
By a lemma of Giroux, we can assume that $\psi^*\lambda=\lambda-dU$ for a positive function $U$.
The `stretched' mapping torus
$$
Map(W,\psi):=W \times {\mathbb{R}} / \sim
$$
where $(x,\phi)\sim (\psi(x),\phi+U(x)\,)$, carries the contact form $d\phi+\lambda$.
The set $W$ will be called the {\bf page}.
Note that $U$ is constant on each boundary component; we shall assume that this constant equals $U_c$ on every boundary component.
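To see that the form $d\phi+\lambda$ descends to $Map(W,\psi)$, note that the identification $(x,\phi)\sim (\psi(x),\phi+U(x)\,)$ pulls it back to
$$
d\phi + dU + \psi^*\lambda = d\phi + dU + \lambda - dU = d\phi + \lambda.
$$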
\subsubsection{Binding}
Let $(P,\lambda_P)$ denote the contact type boundary of the page $W$.
We construct a closed contact manifold by gluing the set $P\times D^2_{r_0}$ to the mapping torus $Map(W,\psi)$.
On the set $P\times D^2_{r_0}$, we use a standard model for the contact structure. Choose functions $h_1$ and $h_2$ as indicated in Figure~\ref{fig:functions_binding}.
These functions satisfy $h_1(r) h_2'(r)-h_2(r) h_1'(r)>0$ and $h_1(r)>0$ if $r>0$, so the following form is a contact form,
\begin{equation}
\label{eq:form_near_binding}
\alpha=h_1(r) \lambda_P+h_2(r) d\phi.
\end{equation}
\begin{figure}[htp]
\def\svgwidth{0.65\textwidth}%
\begingroup\endlinechar=-1
\resizebox{0.65\textwidth}{!}{%
\input{functions1.pdf_tex}%
}\endgroup
\caption{Functions for the contact form near the binding}
\label{fig:functions_binding}
\end{figure}
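To verify the contact condition, write $\dim P = 2m+1$ and compute $d\alpha = h_1'\, dr\wedge \lambda_P + h_1\, d\lambda_P + h_2'\, dr \wedge d\phi$. Since $(d\lambda_P)^{m+1}=0$ on $P$ for dimensional reasons, and since terms containing $dr$ twice vanish, the only surviving terms give
$$
\alpha \wedge (d\alpha)^{m+1} = (m+1)\, h_1^m \left( h_1 h_2' - h_2 h_1' \right) \lambda_P \wedge (d\lambda_P)^m \wedge dr\wedge d\phi,
$$
which is a volume form away from $r=0$ precisely under the stated conditions on $h_1$ and $h_2$.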
Note that a neighborhood of the boundary of $Map(W,\psi)$ is diffeomorphic to $P\times [-\delta,0] \times S^1$ for some $\delta>0$, because $\psi$ is the identity in a neighborhood of the boundary of $W$.
Let $A_{r_0,r_0+\delta}$ denote an annulus with inner radius $r_0$ and outer radius $r_0+\delta$, and define the gluing map
\[
\begin{split}
\psi_G: P\times A_{r_0,r_0+\delta}\subset P\times D^2_{r_0} &\longrightarrow \nu_{Map(W,\psi)}(\partial Map(W,\psi)\,) \subset Map(W,\psi)\\
(p;r,\phi) & \longmapsto (p,r_0-r,\frac{U_c \phi}{2\pi}).
\end{split}
\]
A {\bf contact open book} is a contact manifold obtained by gluing the above sets together.
We define
$$
\OB(W,\psi^{-1})=P\times D^2_{r_0} \cup_{\psi_G} Map(W,\psi),
$$
and we call $\psi^{-1}$ the {\bf monodromy} of the open book.
The subset $P\times \{ 0 \}$ is called the {\bf binding}.
\begin{remark}
The inverse in the definition of a contact open book is needed due to our conventions for a mapping torus.
\end{remark}
\subsection{Fractional fibered Dehn twists}
\label{sec:fractional_twist}
Let $(P,\lambda_P)$ be a prequantization bundle over a symplectic manifold $(Q,k\omega)$, where $[\omega]$ represents a primitive class in $H^2(Q;{\mathbb{Z}})$, and $k$ is a positive integer.
Suppose furthermore that $(W^{2n},\Omega=d\lambda,X)$ is a Liouville filling for $(\partial W=P,\lambda_P)$.
Denote the inclusion $P\subset W$ by $j$.
A collar neighborhood of the boundary of $W$ looks like a piece of a symplectization
$$
\nu_{W}(P)=(P\times [a,b], d(e^t \lambda_P ) \,).
$$
Here $P\times \{ b\}$ denotes the boundary.
We will usually take $[a,b]=[0,1]=I$, although this is not important.
Define $W_{in}=W-\nu_{W}(P)$; we obtain the decomposition
$$
W=W_{in}\cup_\partial P\times [a,b].
$$
We shall call $P\times [a,b]$ the {\bf margin} of the page: it carries the Liouville form $\lambda=e^t\lambda_P$.
The set $W_{in}$ will be called the {\bf content} of the page.
Fix a smooth function $f_m:[a,b]\to {\mathbb{R}}$ (the $m$ stands for monodromy) such that $f_m(a)=2\pi$ and $f_m(b)=0$.
We shall refer to this function as {\bf twisting profile}.
Define
\begin{eqnarray*}
\tau: P\times [a,b] & \longrightarrow & P\times [a,b] \\
(p,t) & \longmapsto & (Fl^{R_{P}}_{f_m(t)}(p),t),
\end{eqnarray*}
where $R_{P}$ is the Reeb field of $\lambda_P$.
Extend the map $\tau$ to be the identity on the content $W_{in}$.
\begin{lemma}
The map $\tau$ is a symplectomorphism satisfying
$$
\tau^* \lambda=\lambda-dU,
$$
where
\begin{equation}
\label{eq:formula_u}
U(p,t)=U(t)=C-\int_{s=a}^t f_m'(s)e^sds.
\end{equation}
\end{lemma}
\begin{proof}
On a collar neighborhood of the boundary we have $\lambda=e^t\lambda_P$.
By the Cartan formula we compute
\[
\begin{split}
\mathcal L_{f_mR_{P}} \lambda & =d(e^tf_m(t)\,)+i_{f_mR_{P} }\left( e^t dt\wedge \lambda_P +e^t d\lambda_P \right) \\
&=f_m'(t)e^tdt.
\end{split}
\]
Since $\int_{s=0}^1 \frac{d}{ds}{Fl^{f_mR_{P}}_s}^* \lambda \, ds=\tau^*\lambda-\lambda$, we find the above Formula~\eqref{eq:formula_u} for $U$.
\end{proof}
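For instance, with the linear twisting profile $f_m(t) = 2\pi \frac{b-t}{b-a}$, Formula~\eqref{eq:formula_u} gives
$$
U(t) = C + \frac{2\pi}{b-a}\left( e^t - e^a \right),
$$
which is positive (and increasing in $t$) for any choice of constant $C>0$; positivity of $U$ is what is needed for the mapping torus construction of Section~\ref{sec:contact_OB}.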
The following notion was introduced by Biran and Giroux, \cite{Biran_Giroux:fibered_Dehn}.
\begin{definition}
We call the symplectomorphism $\tau$ a {\bf right-handed fibered Dehn twist}.
\end{definition}
Next, take a positive integer $\ell$ dividing $k$ and assume that $P$ is covered by $\tilde P$, a prequantization bundle over $(Q,\frac{k}{\ell}\omega)$ by an $\ell$-fold covering of the form
\begin{equation}
\label{eq:cover_BW_bdl}
\begin{split}
pr_\ell: \tilde P & \longrightarrow P \\
p & \longmapsto p\otimes \ldots \otimes p.
\end{split}
\end{equation}
We remark that this can always be done if $H^2(Q;{\mathbb{Z}})$ is torsion free.
Note that this covering shows that $\pi_1(P)$ has a subgroup of index $\ell$.
We want to extend this covering to the filling, and for that purpose we introduce the following definition,
\begin{definition}
\label{def:adapted_cover}
We call a covering $pr_W:\tilde W \to W$ {\bf adapted} to the covering $pr_\ell: \tilde P \to P$ if $pr_W$ restricts on a collar neighborhood of the boundary $\nu_{\tilde W}( \partial \tilde W)=\tilde P \times I$ to $pr_\ell \times \id$,
$$
pr_W|_{\partial \tilde W\times \{t_0 \}}=pr_\ell \times \{ t_0\}.
$$
\end{definition}
If the Liouville filling $W$ is Weinstein of dimension at least $6$, then we can always find an adapted cover.
\begin{lemma}
If $W$ is Weinstein and $\dim W=2n\geq 6$, then there is an adapted cover $pr_W:\tilde W \to W$.
\end{lemma}
\begin{proof}
The dimension assumption $2n\geq 6$ implies that $\pi_1(P)\cong \pi_1(W)$.
Hence there is also a corresponding subgroup of index $\ell$ in $\pi_1(W)$, so covering theory tells us that there is an $\ell$-fold cover $pr_W:\tilde W \to W$.
The cover $\tilde W$ inherits a Weinstein structure by pulling back the symplectic form, the Liouville vector field and the Morse function.
\end{proof}
On an adapted cover a fibered Dehn twist $\tau: W \to W$ can be lifted to a map $\tilde \tau: \tilde W \to \tilde W$.
Indeed, in a collar neighborhood of the boundary, $P\times I$, we see that $\tau$ lifts to the map
\[
\begin{split}
\tilde P\times I & \longrightarrow \tilde P\times I \\
(p,t) & \longmapsto (Fl^{R_{\tilde P}}_{f_m(t)/\ell}(p),t)
\end{split}
\]
This can be extended to the content of the page by using a deck transformation of the cover $\tilde W \to W$.
We will also call this extension $\tilde \tau$.
\begin{lemma}
The map $\tilde \tau$ is a symplectomorphism satisfying
$$
\tilde \tau^* \tilde \lambda=\tilde \lambda-d \tilde U,
$$
where $\tilde U=pr_W^*U$.
\end{lemma}
\begin{definition}
We call the symplectomorphism $\tilde \tau$ a {\bf right-handed fractional fibered Dehn twist} of power $\ell$, or more briefly a {\bf fractional twist}.
If a Liouville domain $\tilde W$ is an adapted $\ell$-fold cover of $W$, then we say that $\tilde W$ admits a fractional Dehn twist of power $\ell$.
\end{definition}
Note that if a Liouville domain admits a fractional fibered Dehn twist of power $\ell$, then there is an action of ${\mathbb{Z}}_\ell$ by symplectomorphisms which induces rotation by roots of unity in a collar neighborhood of the boundary.
Write $\zeta_\ell$ for the symplectomorphism that generates this ${\mathbb{Z}}_\ell$-action on the content of the page, and which rotates the margin of the page by $2\pi/\ell$ as in the above construction.
\begin{remark}
To remove some unnecessary clutter, we will omit the notation $\tilde{\phantom{W}}$ for the cover $\tilde W$ and $\tilde P$ when we only need the cover itself.
\end{remark}
\begin{example}
\label{ex:std_dehn_twist}
Consider $(W=T^*_{\leq 1} {\mathbb{R}} \P^n,d\lambda_{can})$.
Since ${\mathbb{R}} \P^n$ admits a metric for which all geodesics are periodic with the same period, $W$ admits a fibered Dehn twist $\tau$. Its double cover $T^*S^n$ therefore admits a fractional fibered Dehn twist of power $2$.
This is the generalized Dehn twist that was already considered by Arnold, \cite{Arnold:monodromy_A1}.
\end{example}
\begin{example}
Consider the complex hypersurface of degree $d$ in ${\mathbb{C}} \P^n$ given by
$$
X_d=\{ [z_0:\ldots:z_n]~|~\sum_j z_j^d=0 \}.
$$
Define $W_d={\mathbb{C}} \P^n-\nu_{{\mathbb{C}} \P^n}(X_d)$.
This is a Weinstein domain with fundamental group $\pi_1(W_d)\cong {\mathbb{Z}}_d$.
It admits an adapted $d$-fold cover by the Brieskorn variety
$$
V_d=\{ (z_0,\ldots,z_n)\in {\mathbb{C}}^{n+1}~|~\sum_j z_j^d=1 \},
$$
see \cite[Section 7.3.2]{CDvK:right-handed}.
Hence $V_d$ admits a fractional fibered twist of power $d$.
\end{example}
\begin{remark}
\label{rem:conventions_twisting_profile}
Due to our conventions in the definition of a mapping torus, we will usually work with the inverse of the monodromy. This inverse also has a twisting profile, which we will denote by $f_i$; we have $f_i=-f_m$.
\end{remark}
\subsection{Preparing for filling obstructions}
We will use a symplectic field theory setup to describe filling obstructions.
In this setup one considers holomorphic curves in symplectic cobordisms.
To guarantee that this is a Fredholm problem, we need additional assumptions.
The simplest condition is the following.
\begin{definition}
A periodic Reeb orbit is called {\bf non-degenerate} if the restriction of its linearized return map to the contact structure has no eigenvalues equal to $1$.
We call a contact form {\bf non-degenerate} if all its periodic Reeb orbits are non-degenerate.
\end{definition}
Any contact form can be deformed into a non-degenerate one by a $C^\infty$-small perturbation.
The Reeb dynamics can change dramatically under such a perturbation.
Since we are working with an $S^1$-symmetry, it is useful to also include the Morse-Bott setup.
\begin{definition}
A contact form $\alpha$ on $Y$ is said to be of {\bf
Morse--Bott type} if the following conditions hold.
\begin{itemize}
\item The action spectrum $\Spec (\alpha)$ is discrete.
\item For every $T\in \Spec (\alpha)$, the subset $N_{T}=\{ p\in Y|
Fl^{R_\alpha}_{T}(p)=p\}$ is a smooth, closed submanifold of $Y$ such
that the rank of $d\alpha|_{N_{T}}$ is locally constant and
$T_pN_{T}=\ker (TFl^{R_\alpha}_{T}-id)_p$.
\end{itemize}
\end{definition}
We will also need suitable almost complex structures on the symplectization ${\mathbb{R}} \times Y$ of a contact manifold $Y$.
We first recall the notion of stable Hamiltonian structure, which we need in order to invoke symplectic field theory compactness, \cite{BEHWZ:compactness}.
\begin{definition}
A {\bf stable Hamiltonian structure} on $Y^{2n-1}$ is a pair $(\lambda,\Omega_{sH})$, where $\lambda$ is a $1$-form and $\Omega_{sH}$ a closed $2$-form such that
\begin{itemize}
\item{} $\ker \Omega_{sH} \subset \ker d\lambda$,
\item{} $\lambda \wedge \Omega_{sH}^{n-1}>0$.
\end{itemize}
\end{definition}
A cooriented contact manifold $(Y,\xi=\ker \lambda)$ carries the stable Hamiltonian structure $(\lambda,d\lambda)$.
On the other hand, stable Hamiltonian structures do not necessarily come from contact structures.
Given a stable Hamiltonian structure, one can define a Reeb-like vector field by the equations
$$
i_{R_\lambda} \lambda=1,\quad i_{R_\lambda} \Omega_{sH}=0.
$$
We now discuss the appropriate class of almost complex structures for the symplectic field theory setup.
\begin{definition}
Let $Y$ be an oriented $(2n-1)$-dimensional manifold with stable Hamiltonian structure $(\lambda,\Omega_{sH})$.
An {\bf adjusted almost complex structure} $J$ on ${\mathbb{R}} \times Y$ is an endomorphism $J:T ({\mathbb{R}} \times Y) \to T ({\mathbb{R}} \times Y)$ such that
\begin{itemize}
\item{} $J^2=-\id$,
\item{} $J$ is ${\mathbb{R}}$-invariant,
\item{} $J \partial_t=R_\lambda$,
\item{} $J$ gives $\xi=\ker \lambda$ the structure of a complex vector bundle that is $\Omega_{sH}$-tame.
\end{itemize}
\end{definition}
\subsection{Moduli spaces of holomorphic curves and indices}
Before we give a filling obstruction, we briefly review some notions from holomorphic curve theory.
For simplicity, we describe the case of rational holomorphic curves in a symplectization.
This setup can be generalized to general symplectic cobordisms.
Fix a contact manifold $(Y^{2n-1},\xi=\ker \alpha)$, and let $\Sigma$ be a Riemann surface of genus $0$ with finitely many punctures, denoted by $\{ p_i \}_i\cup \{ q_j \}_j$.
We call $\{ p_i \}_i$ positive punctures and $\{ q_j \}_j$ negative punctures.
Suppose that $u:\Sigma\to {\mathbb{R}} \times Y$ is a holomorphic curve asymptotic to a collection of periodic Reeb orbits $\Gamma^+$ at the positive punctures and asymptotic to a collection of periodic Reeb orbits $\Gamma^-$ at the negative punctures.
Choose a trivialization $\Phi$ of $\xi|_{\Gamma^+\cup \Gamma^-}$. This allows us to define the Conley-Zehnder index of a periodic Reeb orbit.
Define the {\bf total Maslov index} of $u$ as
$$
\mu^{\Phi}(\Gamma^+;\Gamma^-)=
\sum_{\gamma \in \Gamma^+} \mu_{CZ}(\gamma,\Phi)
-
\sum_{\gamma \in \Gamma^-} \mu_{CZ}(\gamma,\Phi).
$$
This number depends on the trivialization $\Phi$, but whenever possible we will restrict ourselves to special trivializations, see Remark~\ref{rem:CZ_via_disk}, making this dependence irrelevant.
The Fredholm index of the linearized Cauchy-Riemann operator, used in Section~\ref{seq:MB_regularity}, is given by
\begin{equation}
\label{eq:ind_moduli}
\ind D_u=n \chi(\Sigma)+2 c_1^\Phi(u^*T({\mathbb{R}} \times Y)\, )+\mu^{\Phi}(\Gamma^+;\Gamma^-)+ \# \Gamma^+ +\# \Gamma^-,
\end{equation}
where $c_1^\Phi(u^*T({\mathbb{R}} \times Y)\, )$ is the relative Chern class of the trivialization $\Phi$, see \cite[Proposition~3.7]{Wendl:transversality}.
Observe that this index does not depend on the chosen trivializations.
Furthermore, this Fredholm index should be thought of as the ``virtual'' dimension of the space of holomorphic maps, as the following theorem, \cite{Dragnev}, shows. Let $\mathcal J$ denote the space of adjusted almost complex structures.
\begin{theorem}[Dragnev]
\label{thm:regularity_somewhere_injective}
There is a Baire set $\mathcal J_{reg}\subset \mathcal J$ such that for $J\in \mathcal J_{reg}$, every moduli space of simple $J$-holomorphic curves is regular.
In particular, the dimension of the space of simple holomorphic maps is given by the Fredholm index.
\end{theorem}
Although we will not use contact homology, it is useful to define the {\bf reduced index} of a non-degenerate periodic Reeb orbit $\gamma$ as
$$
\bar \mu(\gamma;\Phi)=\mu_{CZ}(\gamma;\Phi)+n-3
$$
to see the relation with contact homology: the degree of a generator in contact homology is given by the reduced index.
Fix a relative homology class $A$ in $H_2(Y,\Gamma^+\cup \Gamma^-)$, and define $\mathcal Hol_0^A(\Gamma^+;\Gamma^-)$ as the space of holomorphic maps $u$ from $\Sigma$ to ${\mathbb{R}} \times Y$ that are asymptotic to the collection of periodic Reeb orbits $\Gamma^+$ near the positive punctures $\{ p_i \}_i$, asymptotic to the periodic orbits $\Gamma^-$ near the negative punctures, and such that $[u]=A\in H_2(Y,\Gamma^+\cup \Gamma^-)$.
Define the {\bf moduli space of rational curves} with prescribed asymptotics and homology class by
$$
\mathcal M_0^A(\Gamma^+;\Gamma^-)=
\mathcal Hol_0^A(\Gamma^+;\Gamma^-)/\Aut \Sigma
.
$$
From now on, we will restrict ourselves to the case of a single positive puncture, so $\Gamma^+=\gamma^+$.
Suppose that $u$ represents an element $[u]\in \mathcal M_0^A(\gamma^+;\Gamma^-)$.
If we assume regularity, as for instance obtained by Dragnev's theorem, then the dimension of this moduli space is
\begin{equation}
\label{eq:dimension_formula}
\dim \mathcal M^A_0(\gamma^+;\Gamma^-)
=\bar \mu(\gamma^+, \Phi)-\sum_{\gamma^-_j \in \Gamma^-} \bar \mu(\gamma^-_j, \Phi)
+2 c_1^\Phi(u^*T({\mathbb{R}} \times Y)\, )
.
\end{equation}
We say a curve $u$ representing $[u]$ in $\mathcal M^A_0(\gamma^+;\Gamma^-)$ is {\bf rigid} if $\dim \mathcal M_0(\gamma^+;\Gamma^-)=1$.
Note here that the symplectization has an ${\mathbb{R}}$-action, and modding out this action justifies the notion ``rigid''.
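As a consistency check between \eqref{eq:ind_moduli} and \eqref{eq:dimension_formula}, consider a finite energy plane, so $\Sigma={\mathbb{C}}$, $\Gamma^+=\{\gamma^+\}$ and $\Gamma^-=\emptyset$. Then $\chi(\Sigma)=1$, so \eqref{eq:ind_moduli} gives $\ind D_u = n + 2c_1^\Phi + \mu_{CZ}(\gamma^+,\Phi)+1$. Quotienting by $\Aut({\mathbb{C}})=\{ z\mapsto az+b \}$, which has real dimension $4$, we recover
$$
\dim \mathcal M_0^A(\gamma^+;\emptyset) = \mu_{CZ}(\gamma^+,\Phi)+n-3+2c_1^\Phi = \bar \mu(\gamma^+;\Phi)+2c_1^\Phi,
$$
in agreement with \eqref{eq:dimension_formula}.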
\begin{remark}
\label{rem:CZ_via_disk}
For practical purposes, we will consider special trivializations of the contact structure $(\xi,d\alpha)$.
Namely, if $\gamma$ bounds a disk $D$, then we can trivialize $(\xi,d\alpha)$ over $D$.
Denote this trivialization by $\Phi_D$ and we can define $\mu_{CZ}(\gamma,\Phi_D)$.
If we choose another bounding disk $D'$, formed as the connected sum $D'=D\# A$, where $A$ is a sphere, then the Conley-Zehnder index changes according to the following formula,
$$
\mu_{CZ}(\gamma,\Phi_{D'})=\mu_{CZ}(\gamma,\Phi_D)+2\langle c_1(\xi),[A] \rangle .
$$
In particular, we see that this gives a well-defined total Maslov index if $c_1(\xi)=0$.
In this case, Formula~\eqref{eq:dimension_formula} simplifies since the relative Chern class will be $0$.
\end{remark}
\subsection{A filling obstruction}
We take a lemma from \cite{Massot:weak_strong_fillability} which we have slightly modified for our purposes.
\begin{lemma}[Massot, Niederkr\"uger, Wendl~+$\epsilon$]
\label{lemma:non-fillability}
Let $(Y^{2n-1},\xi=\ker \lambda)$ be a cooriented contact manifold with non-degenerate contact form $\lambda$, and a stable Hamiltonian structure $(\lambda,\Omega_{sH})$.
Suppose that there is an adjusted almost complex structure $J$ on the symplectization ${\mathbb{R}} \times Y$ with the following properties.
\begin{enumerate}
\item There is a Fredholm regular, rigid $J$-holomorphic finite energy plane $u_0$ in ${\mathbb{R}} \times Y$ that is asymptotic to a simply covered Reeb orbit $\gamma_1$.
\item If $u$ is a finite energy $J$-holomorphic curve of genus-$0$ with a single positive puncture, at which it is asymptotic to $\gamma_1$, then $u$ is a translation of $u_0$, or
$[u]\in \mathcal M_0(\gamma_1;\gamma_0)$, where $\gamma_0$ is a periodic Reeb orbit that satisfies $\mathcal A(\gamma_0)<2\min_\gamma \mathcal A(\gamma)$ and $\dim \mathcal M_0(\gamma_1;\gamma_0)>1$.
\end{enumerate}
Then $(Y,\xi)$ does not admit any semi-positive weak filling $(F_0,\Omega)$ for which $\Omega|_{TY}$ is cohomologous to $\Omega_{sH}$.
Furthermore, if $\Omega_{sH}=d\lambda$ then every contact form for $(Y,\xi)$ admits a contractible periodic Reeb orbit.
\end{lemma}
\begin{remark}
The second condition is only slightly more general than the one in \cite{Massot:weak_strong_fillability}. Furthermore, this condition is rather artificial, but we will need it to cover the case of $T^*\H \P^m$ and $T^*Ca\P^2$ in Theorem~\ref{thm:result}.
The statement in \cite{Massot:weak_strong_fillability} suffices for all other cases.
\end{remark}
For completeness, we include a proof, which is almost the same as the one in \cite{Massot:weak_strong_fillability}.
\begin{proof}
Suppose there is a weak filling $(F_0,\Omega)$ for $(Y,\xi)$ with $[\Omega|_{Y}]=[\Omega_{sH}]$.
According to \cite[Lemma 2.10]{Massot:weak_strong_fillability} there is a cylindrical end $([0,\infty[\times Y,\Omega)$ with the properties that
\begin{itemize}
\item there is $T>0$ with $\Omega=\Omega_{sH}+d(e^t\lambda)$ on $[T,\infty[\times Y$.
\item on $[0,\epsilon[\times Y$, $\Omega$ restricts to the given symplectic form on $F_0$.
\end{itemize}
Attach this cylindrical end along a collar neighborhood of the boundary of $F_0$ to form a complete symplectic manifold, which we denote by $F$.
\begin{remark}
For the case of a strong filling (so $\Omega=d\lambda$), which is the only case we really need, the above argument can be simplified: we can attach the positive part of the symplectization as a suitable cylindrical end.
\end{remark}
Extend the adjusted almost complex structure $J$ for $[0,\infty[\times Y\subset {\mathbb{R}} \times Y$ given in the assumptions to an almost complex structure on $F$ taming $\Omega$.
By a result due to Dragnev \cite{Dragnev}, stated in Theorem~\ref{thm:regularity_somewhere_injective}, we can assume that all simple $J$-holomorphic curves in $F$ and in the symplectization ${\mathbb{R}} \times Y$ are regular.
The holomorphic curve $u_0$ has image in $[T,\infty [ \times Y \subset F$ for some $T$, so $u_0$ represents an element $[u_0]\in \mathcal M_0(\gamma_1;\emptyset)$, the moduli space of holomorphic finite energy planes asymptotic to $\gamma_1$.
Furthermore, since all simple holomorphic curves are regular by our choice of $J$, the dimension of the component of $\mathcal M_0(\gamma_1;\emptyset)$ containing $[u_0]$ can be extracted from the Fredholm index.
We denote this component by $\mathcal M$.
It is a smooth manifold of dimension $\dim \mathcal M=1$.
Take a sequence of holomorphic planes $\{ u_k \}$ with $[u_k]\in \mathcal M$.
By SFT compactness, \cite{BEHWZ:compactness}, there is a subsequence converging to a holomorphic building with levels $u_\infty=\{ u_\infty^{L_1},\ldots,u_\infty^{L_m} \}$.
Take the first non-trivial level, say $u_\infty^{L_j}$.
This is a curve that is asymptotic to $\gamma_1$.
By the assumptions we have made, only two cases can occur, namely
\begin{enumerate}
\item $u_\infty^{L_j}$ is a translation of $u_0$.
\item $u_\infty^{L_j}$ is a holomorphic cylinder from $\gamma_1$ to $\gamma_0$.
\end{enumerate}
We first argue that the second case cannot occur.
Indeed, the total building must be a plane with possibly sphere bubbles, so there must be a holomorphic building with the topological type of a plane capping off $\gamma_0$.
The assumptions on the action of $\gamma_0$ tell us that $\gamma_0$ and all possible periodic Reeb orbits appearing in a building capping off $\gamma_0$ must be simple, so all components (not considering sphere bubbles) must be somewhere injective.
Furthermore, the index of $\gamma_0$ is lower than that of $\gamma_1$, so it follows that the plane capping off the final periodic Reeb orbit cannot exist by a regularity argument using Theorem~\ref{thm:regularity_somewhere_injective}: use that the index of such a final plane is negative, and that its asymptote is embedded.
We conclude that the first case occurs, so the building has at most one non-trivial level, the so-called main layer.
In this main layer, besides $u_\infty$ only sphere bubbles can appear, and we can write the limit curve as $u_\infty \cup \bigcup_i B_i$.
\begin{figure}[htp]
\def\svgwidth{0.5\textwidth}%
\begingroup\endlinechar=-1
\resizebox{0.5\textwidth}{!}{%
\input{bubbling2.pdf_tex}%
}\endgroup
\caption{Possible breaking in the filling}
\label{fig:breaking}
\end{figure}
We claim that semi-positivity excludes these sphere bubbles, and we give a brief sketch of the argument.
The Fredholm index is additive, so in a family we get
$$
\ind u_0=\ind u_\infty +\sum_i \ind B_i,
$$
and we have $\ind u_0=\ind u_\infty$ by the above argument.
We hence conclude that $\sum_i \ind B_i=0$.
On the other hand, the index of a holomorphic sphere in $F^{2n}$ is given by
$$
\ind B_i=2n+2\langle c_1(F),B_i \rangle.
$$
Hence, if there is at least one sphere bubble, then $0=\sum_i \ind B_i=2n\cdot \#\{ B_i \}+2\sum_i \langle c_1(F),B_i \rangle$, so $\sum_i \langle c_1(F),B_i \rangle<0$, and there is $i_0$ with $\langle c_1(F),B_{i_0} \rangle <0$.
Furthermore, $B_{i_0}$ can be written as $B_{i_0}=k_{i_0} A_{i_0}$, where $A_{i_0}$ is a simple holomorphic curve.
It follows that $\langle c_1(F),A_{i_0} \rangle <0$ and this contradicts Lemma~6.4.4 from \cite{MS:J_curves}.
The statement concerning the Weinstein conjecture can be found in \cite[Lemma 3.3]{Massot:weak_strong_fillability}.
The basic idea is due to Hofer: stretching a finite energy plane in a symplectization gives rise to a holomorphic building which topologically still forms a plane. Hence the lowest level contains a finite energy plane, which is asymptotic to a contractible periodic Reeb orbit.
\end{proof}
\section{$S^1$-invariant contact structures, and geometric differences between positive and negative twists}
\label{sec:invariant_contact}
Let $Y\to M$ be a principal circle bundle and denote the right principal action of $g\in S^1$ by $RA_g$.
Define the vector field $X_Y$ generating the $S^1$-action by
$$
X_Y:=\frac{d}{dt}|_{t=0} RA_{e^{it}}.
$$
Suppose that $\xi$ is a cooriented, $S^1$-invariant contact structure on $Y$, meaning
$$
(RA_g)_*\xi=\xi.
$$
By averaging the contact form, we obtain an $S^1$-invariant contact form $\alpha$, so $\mathcal L_{X_Y}\alpha=0$.
Recall the following notion introduced by Giroux, \cite{Giroux:convex}.
\begin{definition}
A hypersurface $H$ in a contact manifold $(Y,\xi)$ is called {\bf convex} if there exists a contact vector field $X$ that is transverse to $H$.
The {\bf dividing set} of $H$ with respect to $X$ is the set
$$
\Gamma:=\{ x \in H ~|~X(x)\in \xi \} .
$$
\end{definition}
Let $\pi:Y\to M$ be a principal circle bundle with an invariant contact form $\alpha$.
Define the {\bf almost dividing set} of $(Y,\alpha)$ with respect to $M$ as the set
$$
\Gamma:=\{ x\in M ~|~\alpha_q(X_Y)=0 \text{ for }q\in \pi^{-1}(x) \} .
$$
This is well-defined, since $\alpha$ is $S^1$-invariant.
Now let $Z$ be a subset of $M$ such that $Y|_{M-Z}$ is a trivial bundle, so we can find a section $\sigma:M-Z \to Y$.
Then $\sigma(M-Z)$ is a convex hypersurface with respect to $X_Y$.
Furthermore, $\pi^{-1}(\Gamma)\cap \sigma(M-Z)$ is the dividing set of $\sigma(M-Z)$ with respect to $X_Y$.
According to the following proposition the almost dividing set is a contact manifold.
\begin{proposition}
Let $(Y,\xi=\ker \alpha_{inv})$ be a principal circle bundle over $M$ with an invariant contact structure. Suppose that $\Gamma$ is the almost dividing set of $(Y,\alpha_{inv})$ in $M$.
Then $\alpha_{inv}$ induces a contact structure on $\Gamma$.
\end{proposition}
A proof of this statement can be found in \cite[Lemma~3.3]{DingGeiges:circle_bdls}.
Alternatively, one can prove this statement using contact reduction.
We shall construct invariant contact structures in higher dimensions that should be considered overtwisted in view of the $3$-dimensional analogue.
To compare with dimension $3$, the following result due to Giroux, \cite[Proposition 4.1]{Giroux:circle_bdls}, is closest to what we will obtain.
\begin{proposition}[Giroux]
Suppose $(Y^3,\alpha)$ is a principal circle bundle over a surface $S$ with almost dividing set $\Gamma$.
Then the contact manifold $(Y,\alpha)$ is universally tight if and only if one of the following conditions is satisfied
\begin{itemize}
\item $S\neq S^2$, and no component of $S-\Gamma$ is a disk.
\item $S=S^2$, the Euler class of $Y$ is negative and $\Gamma$ is empty.
\item $S=S^2$, the Euler class of $Y$ is non-negative, and $\Gamma$ consists of a single circle.
\end{itemize}
\end{proposition}
After rescaling the contact form, the second case corresponds to prequantization bundles over $S^2$.
\begin{remark}[Goal of the paper]
We shall construct a principal circle bundle $Y$ over higher-dimensional manifolds $M$ with the property that the almost dividing set bounds disk-bundles over codimension $2$ submanifolds in $M$.
In many cases the almost dividing set is actually connected.
In dimension $3$ such manifolds can be universally tight, tight, or overtwisted depending on more precise data.
Analogously, we shall obtain manifolds that can be symplectically fillable or non-fillable, depending on more precise data.
\end{remark}
\subsection{Circle actions}
Let $(P,\lambda_P)$ be a prequantization bundle over an integral symplectic manifold $(Q,\omega)$.
Denote the Reeb field of $\lambda_P$ by $R_{P}$.
This vector field generates the circle action on $P$.
Fix integers $a,b$ and consider the circle action on $P\times D^2$ given by
\[
\begin{split}
S^1 \times P\times D^2 & \longrightarrow P \times D^2 \\
(g;p,z) & \longmapsto (g^a \cdot p,g^b z).
\end{split}
\]
The vector field generating this action is
$$
X_{ab}=a R_{P} +b\partial_\phi.
$$
As in the usual model for a contact form near the binding of an open book, choose functions $h_1$ and $h_2$ of the radial coordinate $r$.
We shall, however, only impose the contact condition $h_1 h_2'-h_2 h_1'\neq 0$.
\begin{lemma}
\label{lemma:invariant_form_binding}
The contact form $\alpha_{inv}=h_1(r)\lambda_P+h_2(r)d\phi$ is invariant under the $S^1$-action.
\end{lemma}
This can be verified by computing the Lie derivative with the Cartan formula,
$$
\mathcal L_{X_{ab}} \alpha_{inv}=d\left( ah_1+bh_2 \right)-ad h_1-bdh_2=0.
$$
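In more detail, the two terms of the Cartan formula are
$$
d\left( \iota_{X_{ab}}\alpha_{inv} \right)=d\left( ah_1+bh_2 \right),
\qquad
\iota_{X_{ab}}d\alpha_{inv}=\iota_{X_{ab}}\left( h_1'dr\wedge \lambda_P+h_1 d\lambda_P+h_2'dr\wedge d\phi \right)=-ah_1'dr-bh_2'dr,
$$
where we used $\iota_{R_{P}}d\lambda_P=0$, $\lambda_P(R_{P})=1$ and $d\phi(\partial_\phi)=1$.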
Choose a smooth function $f_i$, which is going to serve as a profile function for the inverse of a fractional twist.
We do not yet impose any conditions.
Define an equivalence relation on $P\times I \times {\mathbb{R}}$
$$
(p,t;\phi) \sim (Fl^{R_{P}}_{f_i(t)}(p),t;\phi+U_i(t)\, ),
$$
where $U_i$ is defined via \eqref{eq:formula_u} using $f_i$ instead of $f_m$.
Since we have not imposed any conditions on the profile function $f_i$, the following construction will not be a contact open book in the sense we defined it in Section~\ref{sec:contact_OB}.
However, the resulting contact manifold can be deformed into a contact open book. We will explain this in the proof of Theorem~\ref{thm:left_and_right_twisted}.
Define the ``margins of the pages'' of an ``open book'' by $P\times I\times {\mathbb{R}}/\sim$.
The set $P\times D^2$ serves as a neighborhood of the binding of the ``open book'', which we glue to the ``pages'' using the gluing map
\begin{equation}
\label{eq:gluingmap_binding_to_pages}
\begin{split}
\psi_G: P\times D_{glue} & \longrightarrow P\times I \times {\mathbb{R}} /\sim \\
(p;r,\phi) & \longmapsto ( [\frac{f_i(1-r)\phi}{2\pi}] \cdot p,1-r,\frac{\phi}{2\pi}U_i(1-r)\, ).
\end{split}
\end{equation}
Here $D_{glue}$ is the open annulus $D_{glue}=\{ r e^{i\phi} \in D^2~|~ r_1-\delta_{glue}<r <r_1 +\delta_{glue} \}$, whose inner and outer radii will be specified later.
We call the domain $ P\times D_{glue}$ the {\bf gluing region}.
For now, we will compute with the gluing region $P\times (D^2 -\{ 0 \})$.
\begin{lemma}
The map $\psi_G$ induces a circle action on $P\times I \times {\mathbb{R}} /\sim$ generated by the vector field
\begin{equation}
\label{eq:vf_S^1_pages}
\tilde X_{a,b}=\frac{2\pi a+f_i(t)b}{2\pi} R_{P}+\frac{bU_i(t)}{2\pi}\partial_\phi .
\end{equation}
Furthermore, the contact structure $\ker (d\phi+e^t \lambda_P)$ is $S^1$-invariant.
\end{lemma}
\begin{proof}
Take $g\in S^1$.
On $P\times D^2$, the element $g$ induces the map
\begin{equation}
\label{eq:action_binding}
(p;z) \longmapsto (g^a \cdot p,g^b z).
\end{equation}
With the map $\psi_G$, we get
$$
\psi_G(\, (g^a \cdot p,g^b z) \, )=([ag+\frac{bg}{2\pi}f_i(1-r)+\frac{\phi}{2\pi}f_i(1-r)]\cdot p,1-r,\frac{\phi}{2\pi}U_i(1-r)+ \frac{bg}{2\pi}U_i(1-r)\,).
$$
The circle action on $P\times I \times {\mathbb{R}} /\sim$ is hence given by
\[
\begin{split}
S^1 \times \left( P\times I \times {\mathbb{R}} /\sim \right) & \longrightarrow P\times I \times {\mathbb{R}} /\sim \\
(g;p,t,\phi) & \longmapsto ( [\frac{2\pi a+bf_i(t)}{2\pi}g] \cdot p,t,\phi+\frac{bg}{2\pi}U_i(t) ).
\end{split}
\]
On the right hand side we have used additive notation rather than multiplicative.
If we denote the vector field generating the circle action by $\tilde X_{a,b}$, then we get
$$
\tilde X_{a,b}=\frac{2\pi a+f_i(t)b}{2\pi} R_{P}+\frac{bU_i(t)}{2\pi}\partial_\phi .
$$
To check that the contact form $\left( d\phi+e^t \lambda_P \right)$ is invariant under the circle action, we compute the Lie derivative with respect to $\tilde X_{a,b}$ with the Cartan formula and simplify with Formula~\eqref{eq:formula_u},
$$
\mathcal L_{\tilde X_{a,b}} \left( d\phi+e^t \lambda_P \right) =
d\left( \frac{bU_i(t)}{2\pi} \right) +d\left( \frac{2\pi a+f_i(t)b}{2\pi}e^t \right)
-\frac{2\pi a+f_i(t)b}{2\pi}e^t dt=0.
$$
\end{proof}
Now suppose that $P$ is symplectically filled by a Liouville domain $(W,\Omega=d\lambda)$, where $\lambda=e^t\lambda_P$ in a collar neighborhood of the boundary $P\times I$.
Define the manifold
$$
Y:=P\times D^2 \cup_{\psi_G} W\times {\mathbb{R}} /\sim.
$$
Define the content of the page $W$ by $W_{in}:=W-P\times I$.
In order to obtain an invariant contact structure we need to fulfill the following criteria:
\begin{itemize}
\item the monodromy extends to $W_{in} \times {\mathbb{R}} /\sim$.
This will be done with a fractional twist of power $\ell$ ($\ell$ is some positive integer); let $\zeta_\ell=e^{2\pi i/\ell}$ denote a primitive $\ell$-th root of unity. Define the equivalence relation
$$
(w,\phi) \sim(\zeta_\ell \cdot w,\phi+U_i).
$$
\item the action extends to $W_{in} \times {\mathbb{R}} /\sim$.
As we shall see in the following lemma, a sufficient condition to do this is $2\pi a+f_i(0) b=0$.
\end{itemize}
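To spell out the second condition: with $b=\ell$ it reads
$$
2\pi a+f_i(0)\,\ell=0,
\qquad\text{that is,}\qquad
f_i(0)=-\frac{2\pi a}{\ell},
$$
which is precisely the condition imposed on the profile function in Lemma~\ref{lemma:decomposition} below.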
\subsubsection{Right- and left-handed twists}
\label{sec:right_left_binding_twist}
We briefly describe the \emph{monodromy} that we shall be using.
We use the special twisting profiles $f_m$ as sketched in Figure~\ref{fig:profile_function}.
For a right-handed fractional twist of power $\ell$, we choose $f_m:[0,1]\to {\mathbb{R}}$ to be a positive, smooth function with
\begin{itemize}
\item $f_m\equiv 2\pi/\ell$ on a neighborhood of $0$, say $[0,\delta_c]$.
\item for $t>1-\delta_c$, the function $f_m(t)$ is small, and strictly increasing with small slope $f_m'$, say $|f_m'|<\delta_s$.
\end{itemize}
For a left-handed fractional fibered twist of power $\ell$, we choose $f_m:[0,1]\to {\mathbb{R}}$ to be an increasing, smooth function with
\begin{itemize}
\item $f_m\equiv -2\pi/\ell$ on a neighborhood of $0$, say $[0,\delta_c]$.
\item for $t>\delta_c$, the function $f_m(t)$ is strictly increasing. Furthermore, it has a unique zero at $t=t_1$.
For $t>t_1$, the slope $f_m'$ is small, say $|f_m'|<\delta_s$.
\end{itemize}
\begin{figure}[htp]
\def\svgwidth{0.65\textwidth}%
\begingroup\endlinechar=-1
\resizebox{0.65\textwidth}{!}{%
\input{profile_function.pdf_tex}%
}\endgroup
\caption{Profile functions for a left-handed fractional twist (on the left) and a right-handed fractional twist (on the right)}
\label{fig:profile_function}
\end{figure}
With these choices of twisting profiles, fractional twists are not the identity near the boundary.
However, we shall see that this choice also glues nicely to a neighborhood of the binding.
\subsubsection{Gluing the binding to the pages}
We will make some choices to fix the gluing region. The eventual results do not depend on these choices up to contactomorphism, but a small gluing region will be convenient.
If $a=-1$ (the left-handed twist), then choose $r_1=1-t_1$.
If $a=+1$ (the right-handed twist), then choose $r_1=1-t_m$, where $t_m$ is the point at which $f_m$ attains its minimal value.
In both cases, choose $\delta_{glue}$ small.
\begin{lemma}
\label{lemma:decomposition}
Suppose that $W$ admits a fractional twist of power $\ell$, and denote the boundary by $P$.
Take $a=\pm 1$ and $b=\ell$.
Choose a smooth profile function $f_i$ (for the \emph{inverse} of the monodromy) such that $f_i(0)=-\frac{a 2\pi}{\ell}$, and $f_i(1-r_1+\delta_{glue}+t)=-\left( \delta_1+\delta_2 t \right)$, where $\delta_1,\delta_2>0$ are small.
Then the manifold
$$
Y:=P\times D^2 \cup_{\psi_G} W \times {\mathbb{R}} /\sim
$$
is a principal circle bundle with an $S^1$-invariant contact form $\alpha_{inv}$.
The quotient space $M:=Y/S^1$ can be identified with the smooth manifold,
$$
M\cong P\times_{S^1} D^2 \cup_{\bar \psi_G} W/{\mathbb{Z}}_\ell,
$$
where $P\times_{S^1} D^2$ is the associated disk-bundle for the action given by \eqref{eq:action_binding}, and $\bar \psi_G$ is the map induced by $\psi_G$.
Furthermore, if $a=-1$, the almost dividing set is nonempty and contactomorphic to $P/{\mathbb{Z}}_\ell$.
\end{lemma}
Since the gluing map in our construction comes from the inverse of the monodromy, the profile function $f_i$ in the above lemma is minus the profile function $f_m$ used for a fractional twist.
See Figure~\ref{fig:profile_function} for the twisting profiles for the monodromy we will be using.
\begin{proof}
We only carry out the proof for the case most interesting to us in this paper: $a = -1$. The case $a=1$ and $b=1$ was done in detail in our previous paper \cite[Section 6]{CDvK:right-handed}.
Alternatively, one can adapt the argument here.
We first show that $P\times D^2 \cup_{\psi_G} W \times {\mathbb{R}} /\sim$ admits a free $S^1$-action.
We define the action on subsets and show that it is well-defined.
On $P\times D^2$ we have the action
\[
\begin{split}
S^1 \times P \times D^2 & \longrightarrow P \times D^2\\
g\cdot (p,z) &\longmapsto \left(
g^{-1}\cdot p,g^\ell\cdot z
\right)
.
\end{split}
\]
On $P\times I\times {\mathbb{R}} /\sim$ we have the action
\[
\begin{split}
S^1 \times P \times I \times {\mathbb{R}} /\sim & \longrightarrow P \times I \times {\mathbb{R}} /\sim\\
g\cdot (p,t,\phi) &\longmapsto \left(
\frac{\ell f_i(t)-2\pi}{2\pi}g \cdot p,t,\phi+\frac{\ell g}{2\pi}U_i(t)
\right)
\end{split}
\]
For the action on $W_{in}\times {\mathbb{R}} /\sim$ note first of all that the function $U_i$ is constant on that set. Define the circle action by
\[
\begin{split}
S^1 \times \left( W_{in} \times {\mathbb{R}} /\sim \right) & \longrightarrow W_{in} \times {\mathbb{R}} /\sim\\
g\cdot [w,\phi] &\longmapsto [
w,\phi+\frac{\ell g}{2\pi}U_i
]
\end{split}
\]
Observe that on the last piece we have
$$
(w,\phi+\ell U_i)=(\zeta_\ell^\ell\cdot w,\phi+\ell U_i)\sim(w,\phi),
$$
so this action is an honest $S^1$-action.
By our assumptions on $f_i$, the actions on the overlap of the different pieces coincide, so we have a well-defined action.
We check that the action is free.
\begin{itemize}
\item on the set $P\times D^2$, this is clear since $S^1$ acts freely on $P$.
\item on the set $P\times I \times {\mathbb{R}} /\sim$, it suffices to check for $g\in [0,2\pi[$ that $g\cdot [p,t,\phi]=[p,t,\phi]$ if and only if $g=0$.
To see this holds, take $g$ such that $g\cdot [p,t,\phi]=[p,t,\phi]$, and note that $\frac{\ell g}{2\pi}U_i(t)$ must be an integer multiple of $U_i(t)$, so $g=\frac{2\pi}{\ell}m$ for some $m \in {\mathbb{Z}}$.
For the $P$-factor, we must have
$$
\frac{\ell f_i(t)-2\pi}{2\pi}g \equiv m f_i(t) \mod 2\pi,
$$
so we get the condition
$$
(f_i(t)-\frac{2\pi}{\ell}) m \equiv f_i(t) m \mod 2\pi.
$$
Hence $m\in \ell{\mathbb{Z}}$. This implies $g\in 2\pi {\mathbb{Z}}$.
\item on the set $W_{in}\times {\mathbb{R}} /\sim$, the circle action is given by
$$
g\cdot [x,\phi]=[x,\phi+\frac{\ell g}{2\pi}U_i].
$$
The equivalence class $[x,\phi]$ contains the elements $\{ \zeta_\ell^{m} \cdot x,\phi+m U_i \}_{m \in {\mathbb{Z}}}$, so we see that we can only have $g\cdot [x,\phi]=[x,\phi]$ if $g \in 2\pi {\mathbb{Z}}$.
It follows that the action is free.
\end{itemize}
For the assertion about the quotient space $M$ we check that
$$
(W\times {\mathbb{R}} /\sim)/S^1\cong W /{\mathbb{Z}}_\ell.
$$
To see this, note that we can use the circle action to bring an element into the form $[x,0]$. There are $\ell$ such elements, namely $[\zeta_\ell^{m} \cdot x,0]$ for $m=0,\ldots,\ell-1$. Hence we need to mod out by this ${\mathbb{Z}}_\ell$-action.
To obtain an invariant contact structure, we use the contact form
$$
\alpha=d \phi+\lambda
$$
on the set $W \times {\mathbb{R}} /\sim$.
By the previous lemma, this gives an invariant contact structure.
We pull this form back to a neighborhood of the binding to see what behavior we need to prescribe there.
Write $Inv(r)=1-r$.
By the Cartan formula $\mathcal L_{\frac{f_i\phi}{2\pi} R_{P} } e^t \lambda_P=d(e^t\frac{f_i\phi}{2\pi})-e^t\frac{f_i\phi}{2\pi} dt$, so we find
\begin{equation}
\label{eq:pullback_form}
\begin{split}
\psi_G^* \alpha & =d\left( \frac{\phi U_i\circ Inv(r)}{2\pi} \right) + \psi_G^*e^t \lambda_{P}\\
&=Inv^*
\left(
\frac{U_i}{2\pi}d\phi+\frac{\phi}{2\pi}dU_i+\frac{e^tf_i(t)}{2\pi}d\phi+\frac{e^t df_i}{2\pi} \phi+e^t \lambda_{P}
\right) \\
&=e^{1-r} \lambda_P+ \frac{ \tilde C+\int_0^{1-r} e^s f_i(s) ds}{2\pi}d\phi.
\end{split}
\end{equation}
Put $h_1(r)=e^{1-r}$ and $h_2(r)=\frac{ \tilde C+\int_0^{1-r} e^s f_i(s) ds}{2\pi}$ for $r>\delta$, where $\delta$ is some positive number, to see that this form coincides with the one given in Lemma~\ref{lemma:invariant_form_binding}.
Indeed, observe that $h_2'(r)=\frac{-1}{2\pi}e^{1-r}f_i(1-r)$ is positive for small $r$.
Therefore we can extend it to the whole set $P\times D^2$ as an invariant contact form that is of open book type near $r=0$.
To obtain the claim about the almost dividing set, observe that
$$
i_{X_{-1,\ell}}\alpha_{inv}=\ell h_2(r)-h_1(r).
$$
For $r=r_1+\delta_{glue}$, this is positive, and for $r=0$, it is negative.
Since $\ell h_2'(r)-h_1'(r)>0$ on the interval $[0,r_1+\delta_{glue}[$, it follows that there is a unique $r_0\in [0,r_1+\delta_{glue}[$, where $i_{X_{-1,\ell}}\alpha$ vanishes.
If we go further into the page, we use the page model $W\times {\mathbb{R}} /\sim$ and insert the vector field generating the $S^1$-action, \eqref{eq:vf_S^1_pages}, together with its extension to the content of the pages, into the contact form.
We get a function that is non-decreasing as we go deeper into the page (decreasing $t$-coordinate).
In particular, we see that the $S^1$-action is positively transverse to the contact structure on the content of the pages.
The almost dividing set is hence the set $(P\times \{ r_0 \}\times S^1)/S^1\cong P/{\mathbb{Z}}_\ell$ (see Formula~\eqref{eq:action_binding} with $a=-1,b=\ell$).
\end{proof}
\begin{remark}
Doing the construction of the above proof in the case of $a=1$ will not give the standard Boothby--Wang form.
Instead, we only obtain an invariant contact structure with a positively transverse contact vector field.
By rescaling $\alpha_{inv}$ we obtain the standard Boothby--Wang contact form.
\end{remark}
In Figure~\ref{fig:circle_actions} we have visualized the difference between the circle actions on an open book with a left-handed twist and one with a right-handed twist.
\begin{figure}[htp]
\def\svgwidth{0.85\textwidth}%
\begingroup\endlinechar=-1
\resizebox{0.85\textwidth}{!}{%
\input{open_books_LR.pdf_tex}%
}\endgroup
\caption{Direction and orbits of the circle action on an open book with a left-handed twist (left) and a right-handed twist (right)}
\label{fig:circle_actions}
\end{figure}
\begin{theorem}
\label{thm:left_and_right_twisted}
Let $W$ be a Liouville domain with boundary $P$ admitting a right-handed fractional twist $\tau$ of power $\ell$.
Then we have
\begin{itemize}
\item[(R)] the contact open book $\OB(W,\tau)$ is contactomorphic to a prequantization bundle $(Y,\alpha)$ over a symplectic manifold. In particular, the almost dividing set is empty, and the contact manifold is convex fillable.
\item[(L)] the contact open book $\OB(W,\tau^{-1})$ is diffeomorphic to a principal circle bundle over a smooth manifold $M$, and supports an $S^1$-invariant contact form $\alpha_{inv}$. Furthermore, the almost dividing set of $\alpha_{inv}$ is contactomorphic to $P/{\mathbb{Z}}_\ell$.
\end{itemize}
\end{theorem}
\begin{proof}
We apply Lemma~\ref{lemma:decomposition} which does not directly give a contact open book.
However, we can deform the contact form $\alpha_{inv}$ to a contact form of open book type: choose a $1$-parameter family of functions $h_2^s$ such that $h_2^0=h_2$, $h_2^1$ is constant on some open interval away from $r=0$, and the contact condition, $h_1{h_2^s}'-h_2^sh_1'\neq 0$, holds for all $s$.
By Gray stability, the resulting contact manifolds are contactomorphic.
The open interval contains some point $r_c$, so by taking a suitable neighborhood $P\times D^2_{r_c}$, we find the decomposition
$$
Y=P\times D^2_{r_c} \cup_{\psi_G} W\times {\mathbb{R}} /\sim.
$$
By construction, the monodromy is isotopic to a fractional twist.
This proves most of the assertions, except for the statement about convex fillability. We prove the last claim in Lemma~\ref{lemma:fillability_BW}.
\end{proof}
\subsection{Examples}
The left-handed stabilization of the standard contact sphere $\OB(D^{2n},\id)$ is given by the contact open book
$$
\OB(T^*S^{n},\tau^{-1}),
$$
where $\tau$ is a right-handed Dehn twist. By Example~\ref{ex:std_dehn_twist}, this is a special case of a fractional fibered Dehn twist.
\begin{proposition}
\label{prop:negative_stabilization_as_invariant_contact}
The left-handed stabilization $\OB(T^*S^{n},\tau^{-1})$ is diffeomorphic to the Hopf fibration over ${\mathbb{C}} \P^n$. However, it has an almost dividing set contactomorphic to $(ST^*{\mathbb{R}} \P^{n},\lambda_{can})$.
\end{proposition}
\begin{proof}
According to Lemma \ref{lemma:decomposition}, we have the decompositions
\[
M\cong ST^*S^n\times_{S^1} D^2 \cup T^*S^n/{\mathbb{Z}}_2.
\]
Now note that $ST^*S^n\times_{S^1} D^2 \cong \mathcal{O}_{Q^{n-1}}(-2)$ is the line bundle dual to the neighborhood of the quadric in ${\mathbb{C}} \P^n$ and $T^*S^n/{\mathbb{Z}}_2 \cong T^*{\mathbb{R}} \P^n$. Gluing these two pieces we conclude that $M$ is diffeomorphic to ${\mathbb{C}} \P^n$, and the boundary of the disk bundle, $ST^*{\mathbb{R}} \P^{n}$, is the almost dividing set.
\end{proof}
\begin{remark}
The right-handed stabilization of the standard contact sphere $\OB(D^{2n},\id)$ is given by the contact open book $\OB(T^*S^{n},\tau)$. It is contactomorphic to the Hopf fibration over ${\mathbb{C}} \P^n$. Its almost dividing set is empty.
\end{remark}
\begin{example}
We can also consider a right-handed fibered Dehn twist $\tau$ on $D^{2n}$.
In this case $\tau$ is symplectically isotopic to the identity relative to the boundary.
Hence
$$
\OB(D^{2n},\tau)\cong \OB(D^{2n},\id)\cong (S^{2n+1},\xi_0).
$$
Furthermore, the $S^1$-invariant contact structure from Lemma~\ref{lemma:decomposition} is the standard prequantization structure, so the almost dividing set in $M=(S^{2n-1}\times_{S^1} D^2) \cup D^{2n}\cong {\mathbb{C}} \P^n$ is empty.
On the other hand, we can also consider a left-handed fibered twist, which is also symplectically isotopic to the identity relative to the boundary.
Then
$$
\OB(D^{2n},\tau^{-1})\cong \OB(D^{2n},\id)\cong (S^{2n+1},\xi_0).
$$
In this case, the $S^1$-invariant contact structure from Lemma~\ref{lemma:decomposition} has an almost dividing set contactomorphic to $(S^{2n-1},\xi_0)$ in $\overline {{\mathbb{C}} \P}^n$.
\end{example}
\section{Reeb orbits, Maslov indices and actions}
\label{sec:indices}
Let $(W^{2n-2},d\lambda)$ be a Liouville domain with boundary $P=\partial W$.
Write $\lambda_P:=\lambda|_{P}$ for the contact form on $P$.
Assume that $(P,\lambda_P)$ is a prequantization bundle over an integral symplectic manifold $(Q,k\omega)$, where $\omega$ is primitive and $k\in {\mathbb{Z}}_{\geq 1}$.
Let $\psi$ be a symplectomorphism on $W$ that is the identity near the boundary.
Consider the contact open book $Y=\OB(W,\psi)$.
The contact form in a neighborhood of the binding, $P\times D^2$, is given by
$$
\alpha=h_1(r) \lambda_P+h_2(r)d\phi.
$$
Define the matrix
$$
H=\left(
\begin{array}{cc}
h_1 & h_2 \\
h_1' & h_2'
\end{array}
\right)
.
$$
Then the Reeb vector field is given by
$$
R_\alpha=\frac{1}{\det H}\left( h_2' R_{P} -h_1' \partial_\phi \right).
$$
Reeb orbits hence have the form $t\mapsto (\gamma_P(\frac{h_2'(r)}{\det H(r)}t),r,-\frac{h_1'(r)}{\det H(r)}t)$, where $\gamma_P$ is a Reeb orbit in the binding.
We see that a Reeb orbit in $P\times D^2$ is periodic if and only if $\frac{h_2'(r)}{h_1'(r)}\in {\mathbb{Q}}$, because the flow of $R_P$ is periodic with period $2\pi$.
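Indeed, writing out the closing-up condition: such an orbit closes at time $T>0$ precisely when
$$
\frac{h_2'(r)}{\det H(r)}\,T\in 2\pi {\mathbb{Z}}
\quad\text{and}\quad
-\frac{h_1'(r)}{\det H(r)}\,T\in 2\pi {\mathbb{Z}},
$$
and, assuming $h_1'(r)$ and $h_2'(r)$ are both non-zero, such a $T$ exists if and only if their ratio is rational.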
Hence we can parametrize a simple periodic Reeb orbit by
$$
\gamma_{i,j,\phi_0}(t)=(\gamma_P(jt),r,i t),
$$
where $i,j$ are relatively prime and satisfy $-\frac{h_2'(r)}{h_1'(r)}=\frac{j}{i}$.
The action of these simple orbits is given by
\begin{equation}
\label{eq:action_near_binding}
\mathcal A(\gamma_{i,j,\phi_0})=2\pi h_1(r) j +2\pi h_2(r) i.
\end{equation}
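The formula follows by integrating $\alpha$ over one period: with the above parametrization we have
$$
\mathcal A(\gamma_{i,j,\phi_0})=\int_0^{2\pi} \alpha(\dot \gamma_{i,j,\phi_0}(t)\,)\,dt
=\int_0^{2\pi} \left( h_1(r)\,j+h_2(r)\,i \right) dt,
$$
using $\lambda_P(R_{P})=1$ and $d\phi(\partial_\phi)=1$.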
\subsection{Reeb dynamics away from the binding}
In a general contact open book, one can understand periodic Reeb orbits that lie in the pages in terms of fixed points of iterates of the monodromy.
Indeed, if $\psi^m(x)=x$ for some $x$ in $W$, then there is $T>0$ such that $Fl^{R}_T(x,\phi)=(x,\phi)$: the Reeb field on the pages is given by $R=\partial_\phi$, so each turn around the binding corresponds to an application of $\psi$.
\begin{equation}
\label{eq:identification}
Fl^R_t(x,\phi)=(x,\phi+t)\sim (\psi^{-1}(x),\phi+t+U_i(x)\,).
\end{equation}
Hence we have
\begin{lemma}
Periodic Reeb orbits that lie in the pages are in $1$--$1$ correspondence with fixed points of the monodromy $\psi$ and its iterates.
\end{lemma}
Note that for the class of monodromies we will be considering from now on, namely left-handed fractional twists, the fixed point sets come in families, and most of the interesting behavior happens in a (large) neighborhood of the binding. Topologically, this set has the form $P\times D^2$.
Define $\Sigma_{i,j}$ to be the set of points $x$ in the margin part of the pages of the contact open book $Y=\OB(W,\psi)$ such that $x$ lies on a periodic Reeb orbit $\gamma_x$ with the properties that
\begin{itemize}
\item $\gamma_x$ has linking number $i$ with the binding $P$, and
\item the projection $P\times D^2 \to P$ sends $\gamma_x$ to a $j$-fold cover of a fiber of $P$.
\end{itemize}
We denote the corresponding orbit space by $S_{i,j}=\Sigma_{i,j}/S^1$.
\subsection{Spanning disks and Maslov indices}
\label{sec:spanning_disks}
We now assume in addition to the conditions listed at the beginning of Section~\ref{sec:indices} that
\begin{itemize}
\item $c_1(Q)=c[\omega]$.
\item $n\geq 3$, $k=1$ and $\pi_1(Q)=0$. This guarantees that the fibers of the prequantization bundle are contractible, so we can find disks bounding these fibers.
\end{itemize}
In Lemma~\ref{lemma:CZ_special_orbit} and Lemma~\ref{lemma:index_control_2} we will point out that a small part of the setup here also works out in general, and we will also point out some topological conclusions that can be drawn if $\pi_1(Q)\neq 0$.
Given these assumptions we construct a spanning disk for a periodic Reeb orbit using the binding and page model.
In our setup, the inverse of the monodromy is given by $\psi^{-1}=Fl^{f_iR_{P}}_1$, where $f_i$ is the function defined in Lemma~\ref{lemma:decomposition} for a left-handed twist.
The equivalence relation~\eqref{eq:identification} then motivates the identification map, which we will use to ``straighten the mapping torus''
\[
\begin{split}
\Psi: P\times I \times {\mathbb{R}} & \longrightarrow P\times I \times {\mathbb{R}} \\
(p,t,\phi) & \longmapsto (Fl^{f_i(t)R_{P}}_1(p),t,\phi+U_i(t)\, ).
\end{split}
\]
With $U_i'(t)=-f_i'(t)e^t$ we compute the differential of $\Psi$ as
$$
T\Psi=
\left(
\begin{array}{ccc}
TFl^{f_i(t)R_{P}}_1 & f_i'(t) R_{P} & 0 \\
0 & 1 & 0 \\
0&-f_i'(t)e^t & 1
\end{array}
\right)
.
$$
\subsubsection{Construction of an annulus bounding an orbit}
\label{sec:annulus_trivialization}
We construct an annulus that glues to a disk in the binding model, and bounds a specific Reeb orbit.
For this, we choose $p_0\in P$.
Define $Inv(r)=1-r$, and define a map from an annulus into the pages of the open book.
\begin{eqnarray*}
\psi_A:~S^1 \times I & \longrightarrow & P\times I \times {\mathbb{R}}/\sim \quad\subset W \times {\mathbb{R}}/\sim \\
(\phi,r) & \longmapsto & (Fl^{R_P}_{f_i\circ Inv(r) \frac{\phi}{2\pi} } (p_0),Inv(r);U_i\circ Inv(r) \frac{\phi}{2\pi} ).
\end{eqnarray*}
This map is induced by the gluing map $\psi_G$ defined in Equation~\eqref{eq:gluingmap_binding_to_pages}. It is well-defined as
$$
(\phi+2\pi,r)\mapsto (Fl^{R_P}_{f_i\circ Inv(r) \frac{\phi}{2\pi}+ f_i\circ Inv(r) } (p_0),Inv(r),U_i\circ Inv(r) \frac{\phi}{2\pi} +U_i \circ Inv(r))\sim
(Fl^{R_P}_{f_i\circ Inv(r) \frac{\phi}{2\pi} } (p_0),Inv(r),U_i\circ Inv(r) \frac{\phi}{2\pi} ).
$$
Denote the image of $\psi_A$ by $C$.
Since the contact form on $P\times I \times {\mathbb{R}} /\sim$ is given by $\alpha=d\phi+e^t \lambda_P$, we use the splitting $\ker \alpha=\ker \lambda_P\oplus \Span( \partial_t,\partial_\phi-e^{-t}R_P ) $.
Define $V$ to be the symplectic vector space $(\xi_P,d\lambda_P)|_{p_0}$.
Use the following map to trivialize the contact structure along the annulus $C$,
\begin{eqnarray*}
S^1 \times I \times V \oplus ({{\mathbb{R}}^2},\omega_0) & \longrightarrow & \xi|_C \\
(\phi,r;v,w_1,w_2) & \longmapsto & (\psi_A(\phi, r);
\left(
\begin{array}{ccc}
TFl^{R_P}_{f_i\circ Inv(r)\frac{\phi}{2\pi}}|_{\xi_P} &f_i'\circ Inv(r) \frac{\phi}{2\pi} R_{P} & -e^{-Inv(r)}R_P \\
0 & 1 & 0\\
0 & -f_i'\circ Inv(r) e^{Inv(r)} \frac{\phi}{2\pi} & 1
\end{array}
\right)
\left(
\begin{array}{c}
v \\
w_1\\
w_2
\end{array}
\right)
)
.
\end{eqnarray*}
We claim that this trivialization is well-defined.
Indeed, if we insert $\phi+2\pi$ instead of $\phi$, then we can recognize the image as a composition with $T\Psi$.
With respect to this trivialization, which we denote by $\epsilon=\epsilon_{\xi_P}\oplus \epsilon_w$, the path of symplectic matrices of the linearized flow is given by
$$
s\longmapsto
\left(
\begin{array}{ccc}
TFl^{R_P}_{-f_i(t)s}|_{\xi_P} & 0 & 0 \\
0 & 1 & 0\\
0 & f_i'(t)e^t s & 1
\end{array}
\right)
,
$$
where we have written $t=Inv(r)$ (put $s=\frac{\phi}{2\pi}$).
Now apply the direct sum axiom and a variation of the normalization axiom for the Maslov index of symplectic paths, \cite{Robbin:Maslovindex};
note that the symplectic form for the $\epsilon_w$-part has a minus sign, so we obtain
$$
\mu(S_{i,j};\epsilon)= \mu( TFl^{R_P}_{-f_i(t)s}|_{\xi_P};\epsilon_{\xi_P} )+\mu(
\left(\,
\begin{array}{cc}
1 & 0\\
f_i'(t)e^t s & 1
\end{array}
\right)
\,
;\epsilon_w
)
=+2cj-\frac{1}{2} \sgn f_i'.
$$
Since this trivialization ``winds'' around the binding, we modify the trivialization by composing with a suitable loop of symplectic matrices.
This new trivialization extends over a disk, and the Maslov index gets an additional contribution of $2i$, where $i$ is the number of revolutions around the binding.
We conclude
$$
\mu(S_{i,j})=2i+2cj-\frac{1}{2} \sgn f_i'.
$$
Since the orbit space $S_{i,j}$ has dimension $2n-3$, we obtain a formula for the reduced index with \cite[Lemma~2.4]{Bourgeois:thesis}.
\begin{lemma}[Conley-Zehnder index after perturbation]
Let $f_{Morse}$ be a Morse function on the orbit space $S_{i,j}$. Lift this Morse function to an $S^1$-invariant function $\bar f_{Morse}$, and define the perturbed contact form $\alpha_\epsilon=(1+\epsilon\bar f_{Morse}) \alpha$.
Fix a constant $T_{threshold}$. Then for $\epsilon$ sufficiently small all periodic Reeb orbits of $\alpha_\epsilon$ with action less than $T_{threshold}$ correspond to critical points of $f_{Morse}$.
Furthermore, the reduced index of a periodic Reeb orbit corresponding to the critical point $a$ of $f_{Morse}$ is given by
\begin{equation}
\label{eq:reduced_index_orbitspaces}
\bar \mu(\gamma_a)=2i+2cj-3/2 -\frac{1}{2} \sgn f_i'+\ind_a f_{Morse}.
\end{equation}
\end{lemma}
\subsection{A special orbit}
The periodic orbits in $S_{i,0}$ are special in the sense that they always bound a spanning disk, even without any of the assumptions made at the beginning of Section~\ref{sec:spanning_disks}.
Namely, the ``flat disk'' $\{ p_0 \}\times D^2_{r_1}\subset P\times D^2$ provides a spanning disk, where $D^2_{r_1}$ is a disk of radius $r_1$ in an enlarged neighborhood of the binding; see Section~\ref{sec:fattening_binding}.
The methods from the previous section apply, and we find the following.
\begin{lemma}[Conley-Zehnder index of special orbits]
\label{lemma:CZ_special_orbit}
Let $f_{Morse}$ be a Morse function on the orbit space $S_{i,0}$. Lift this Morse function to an $S^1$-invariant function $\bar f_{Morse}$, and define the perturbed contact form $\alpha_\epsilon=(1+\epsilon\bar f_{Morse}) \alpha$.
Fix a constant $T_{threshold}$. Then for $\epsilon$ sufficiently small all periodic Reeb orbits of $\alpha_\epsilon$ with action less than $T_{threshold}$ correspond to critical points of $f_{Morse}$.
Furthermore, the reduced index of a periodic Reeb orbit corresponding to the critical point $a$ of $f_{Morse}$ is given by
\begin{equation}
\bar \mu(\gamma_a)=2i-1+\ind_a f_{Morse}.
\end{equation}
\end{lemma}
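As a consistency check, Formula~\eqref{eq:reduced_index_orbitspaces} gives the same answer where both apply: for a left-handed twist we have $f_i'<0$, so with $j=0$,
$$
\bar \mu(\gamma_a)=2i-\frac{3}{2}+\frac{1}{2}+\ind_a f_{Morse}=2i-1+\ind_a f_{Morse}.
$$
Note that the term $2cj$ vanishes for $j=0$, so the assumptions on $c_1(Q)$ play no role here.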
\subsection{Orbits through the content of the pages}
A monodromy given by a (left-handed) fibered Dehn twist $\tau^{-1}$ is the identity on the content of the page, so the manifold with boundary given by $W_{in}\times S^1 \subset \OB(W,\tau^{-1})$ consists of periodic Reeb orbits.
In this case we need to be somewhat careful with the contribution of the boundary.
We will choose a perturbation where the boundary of $W_{in}\times S^1$ contributes in a simple way.
Recall that we decomposed $W^{2n-2}=W_{in}\cup P \times [0,1]$.
Now choose a Morse function $f_{convex}$ on $W$ with the following properties.
\begin{enumerate}
\item $f_{convex}$ equals $\delta \cdot e^t$ on a symplectization piece $(P \times [0,1],d(e^t \lambda_P )\, )$ with coordinates $(p,t)$.
\item Periodic orbits of the Hamiltonian vector field $X_{f_{convex}}$ that do not correspond to critical points have large period, say much larger than $2\pi$.
\end{enumerate}
Some words on why this is possible.
For the first point, note that we can realize any contact form on the boundary by attaching a piece of a symplectization.
For the second point, first choose a Morse function $\tilde f_{convex}$ which equals $e^t$ on the symplectization piece.
We can assume that the Hamiltonian vector field $X_{\tilde f_{convex}}$ has only finitely many periodic orbits in (a slightly shrunk copy of) $W_{in}$.
Denote the minimal period of a periodic orbit not corresponding to a critical point by $T_{min}$, and find $\delta>0$ such that $T_{min}/\delta> 2\pi$.
Then $f_{convex}=\delta \tilde f_{convex}$ has the required properties.
\begin{remark}
For later applications, it will be useful to have more control of the maximal index of $f_{convex}$.
If $W$ is a Weinstein manifold, then we can achieve this by choosing an $\Omega$-convex Morse function with the above properties.
\end{remark}
Define a Hamiltonian on $P\times I$ by
$$
F=f(t)\cdot e^t
$$
such that $f(t)+f'(t)=-(f_i(t)-2\pi)$.
Then the Hamiltonian vector field $X_F$ satisfies
$$
X_F=-(f(t)+f'(t)\,)\cdot R_P=(f_i(t)-2\pi)R_P,
$$
so its time $1$-flow generates a \emph{right-handed} fibered twist with profile $f_i$.
Note the signs and the right-left conventions are those from Remark~\ref{rem:conventions_twisting_profile}.
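For the record, here is the computation behind the formula for $X_F$; a sketch, with the sign convention $\iota_{X_F}\,d(e^t\lambda_P)=dF$ implicit in the formula above. For the ansatz $X_F=g(t)R_P$ we get
$$
\iota_{g R_{P}}\left( e^t dt\wedge \lambda_P+e^t d\lambda_P \right)=-g\,e^t\,dt,
\qquad
dF=(f+f')\,e^t\,dt,
$$
so $g=-(f(t)+f'(t)\,)=f_i(t)-2\pi$.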
Furthermore, by choosing $F(0)=0$, we can take $F$ to be $0$ on the content of the pages $W_{in}$.
Now define
$$
H=F+f_{convex},
$$
and let $X_H$ denote the Hamiltonian vector field of $H$.
Write the time $1$-flow of $X_H$ as
$$
\tau_{MB}=Fl_1^{X_H}.
$$
Observe that a right-handed fibered Dehn twist is symplectically isotopic, relative to the boundary, to $\tau_{MB}$ (put a parameter in front of $f_{convex}$ to make this isotopy).
Hence we consider the contact open book
$$
\OB(W,\tau_{MB}^{-1}).
$$
Since $\tau_{MB}$ is the flow of a Hamiltonian vector field, $\tau_{MB}^*\lambda=\lambda-\mu$, where $\mu$ is exact.
In a collar neighborhood of the boundary we can use Formula~\eqref{eq:formula_u} to compute an explicit primitive of $\mu$.
\begin{remark}
The choice of the sign $+$ in front of $f_{convex}$ is a convenient choice for our purposes, as this will prevent the creation of additional orbit spaces.
\end{remark}
Later on, we shall only need the indices of periodic orbits that have linking number $1$ with the binding $P$. We have the following result.
\begin{lemma}
\label{lemma:index_control_left_twist}
Let $\gamma$ be a periodic Reeb orbit in $Map(W^{2n-2},\tau_{MB})\subset \OB(W,\tau_{MB}^{-1})$ such that the linking number of $\gamma$ with the binding equals $1$.
Then one of the following holds.
\begin{itemize}
\item $\gamma$ corresponds to a critical point of $f_{convex}$.
Its reduced index equals
$$
\bar \mu(\gamma)
=2n-2-\ind f_{convex}-2c.
$$
\item $\gamma$ is a periodic orbit in the margin piece $P\times I \times {\mathbb{R}} /\sim$. In this case $\gamma$ comes in the Morse--Bott family $S_{1,0}$, and the reduced index of the periodic Reeb orbit corresponding to a minimum of a Morse function on $S_{1,0}$ equals
$\bar \mu(\gamma)=1$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $x$ be a fixed point of the first iterate of $\tau_{MB}$.
Then $x\in W_{in}$ or $x\in P\times I$.
If $x\in W_{in}$, then we claim that $x$ is a critical point of $f_{convex}$.
Indeed, by property~(2) of $f_{convex}$, other periodic orbits of $X_{f_{convex}}$ have large period, so they cannot give rise to periodic Reeb orbits corresponding to a fixed point of the first iterate of the monodromy.
To determine the index of the periodic Reeb orbit $\gamma$ through $x$, we use a small modification of \cite[Lemma~2.4]{Bourgeois:thesis},
\[
\begin{split}
\bar \mu(\gamma) & =2i+2cj+n-3-\frac{1}{2}\dim W_{in}+(2n-2)-\ind_x H \\
&=-2c+(2n-2)-\ind_x H
=2n-2-\ind_x f_{convex}-2c,
\end{split}
\]
where we have put $i=1$. The twist in the $P$-direction equals $j=-1$ in the case of an orbit in the content of the pages.
On the margin piece $P\times I\times {\mathbb{R}} /\sim$, the Hamiltonian vector field $X_{-H}$, which generates the left-handed Dehn twist, equals $(2\pi-f_i(t)+\delta)R_P$.
So if an orbit $\gamma$ through $x=(p,t)\in P\times I$ has linking number $1$ with the binding, then $x$ lies in the perturbed copy of $S_{1,j}$ for some $j$.
As $2\pi-f_i(t)+\delta$ is an injective function, we only have to check where it takes values in $2\pi {\mathbb{Z}}$.
This is only the case if $f_i(t)=\delta$, so we see that $x$ lies in the perturbed copy of $S_{1,0}$.
Since $f_i'<0$, we obtain the reduced index of $\gamma$ by Formula~\eqref{eq:reduced_index_orbitspaces}.
\end{proof}
\begin{lemma}
\label{lemma:index_control_2}
Let $W$ be a Liouville filling for a prequantization bundle $(P,\lambda_P)$ over $(Q,k\omega)$ where $\omega$ is a primitive symplectic form and $k\in {\mathbb{Z}}_{>1}$.
Assume furthermore that the inclusion map $P\to W$ induces an injection on $\pi_1$.
Let $\gamma$ be a periodic Reeb orbit in $Map(W,\tau_{MB})$ such that the linking number of $\gamma$ with the binding equals $1$.
Then one of the following holds
\begin{itemize}
\item $\gamma$ is a periodic orbit in $S_{1,0}$.
Furthermore, the reduced index of the orbit corresponding to the minimum of a Morse function on $S_{1,0}$ is $\bar \mu(\gamma)=1$.
\item $\gamma$ lies in the content of the pages and corresponds to a critical point of $f_{convex}$.
Furthermore, if $\gamma_1$ is a periodic orbit in the orbit space $S_{1,0}$, then $[\gamma]\neq [\gamma_1]$ as a free homotopy class in $\OB(W,\tau_{MB}^{-1})-P\times \{ 0 \}$.
\end{itemize}
\end{lemma}
\begin{proof}
The first assertion follows from Lemma~\ref{lemma:CZ_special_orbit}.
The first part of the second assertion follows from the proof of Lemma~\ref{lemma:index_control_left_twist}.
For the second part, note that after untwisting the mapping torus $W\times {\mathbb{R}} /\sim$, the curve $\gamma_1$ moves once along an $S^1$-fiber in the $P$-direction, whereas $\gamma$ does not.
If $k>1$, then this fiber is non-contractible, and this implies the claim.
\end{proof}
\section{Holomorphic curves near the binding of an open book}
\label{sec:holomorphic_plane}
In this section we will analyze holomorphic curves in the symplectization of the contact open book $Y:=\OB(W,\tau^{-1})$.
Here $W$ is a Liouville domain with prequantization boundary $P$, and $\tau^{-1}$ is a left-handed fractional twist.
Instead of taking the open book contact form, we start by using the $S^1$-invariant contact form constructed in Section~\ref{sec:invariant_contact}, and we will make additional perturbations in order to obtain non-degeneracy properties.
In a neighborhood of the binding the invariant contact form looks like
\begin{equation}
\label{eq:contact_form_fat_nbhd}
(P\times D^2,\alpha=h_1(r)\lambda_P +h_2(r) d\phi).
\end{equation}
\subsection{Fattening the binding: setup for finding finite energy holomorphic planes}
\label{sec:fattening_binding}
In order to have a single model containing both the binding and an important part of the pages, we ``fatten'' the binding using the following procedure.
In the previous section we perturbed the monodromy to $\tau_{MB}$.
We have $\tau_{MB}^*\lambda=\lambda-\mu$, where $\mu$ is exact.
In a neighborhood of the boundary of $W$, we follow Formula~\eqref{eq:formula_u} and obtain an explicit primitive $U_0$ with $dU_0=\mu$.
For $x=(p,t)\in P\times [0,1]$ we have
$$
U_0(x)=U_0(t)=
C-\int_{s=a}^t f_i'(s)e^sds,
$$
where $f_i$ is the twisting profile for a left-handed twist as defined in Lemma~\ref{lemma:decomposition}.
This primitive $U_0$ extends to $W_{in}$.
Choose $C\in {\mathbb{R}}$ such that $U_0>0$ and
\begin{equation}
\label{eq:condition_C}
\frac{\max_{x\in W} U_0(x)}
{\min_{x\in W} U_0(x)}<2.
\end{equation}
This condition on $U_0$ is not necessary to prove the main result, but it is necessary for Section~\ref{sec:HC=0} which concerns an application to contact homology.
This condition is then used to control the action of periodic Reeb orbits.
Define $\alpha_{FB}:=\psi_G^* (d\phi+ e^t \lambda_P)$.
By Formula \eqref{eq:pullback_form} we find
\[
\begin{split}
\alpha_{FB} & =e^{1-r} \lambda_P+ \frac{ \tilde C+\int_0^{1-r} e^s f_i(s) ds}{2\pi}d\phi.
\end{split}
\]
We are going to extend the coefficient of $\lambda_P$ to a function $h_1$, and the coefficient of $d\phi$ to a function $h_2$.
\begin{lemma}
\label{lemma:orbits_near_binding}
There are functions $h_1:[0,r_1+\delta]\to {\mathbb{R}}$ and $h_2:[0,r_1+\delta]\to {\mathbb{R}}$ such that
\begin{itemize}
\item the form $\alpha=h_1(r)\lambda_P+h_2(r)d\phi$ is a smooth contact form on $P\times D^2$ extending $\alpha_{FB}$.
\item The function $h_2$ has a unique maximum in $r_1$, where $Inv(r_1):=1-r_1=t_1$. Here $t_1$ is the unique zero of the function $f_i=-f_m$, the twisting profile for a left-handed twist as defined in Section~\ref{sec:right_left_binding_twist}.
\item Periodic Reeb orbits $\gamma_{ij}(t)=(\gamma_P(j t);r,it)$ with $r<r_1$ are either binding orbits (orbits with $r=0$), or satisfy $\lk(P\times \{ 0 \},\gamma_{ij})=i\geq 2$.
\item Every binding orbit has a larger action than the action of $\gamma_1(t)=(p_0;r_1,t)$.
\end{itemize}
\end{lemma}
\begin{proof}
We extend $h_2$ such that it has a unique maximum in $r_1$.
It is increasing on $[0,r_1]$, and near $0$, we require $h_2(r)=r^2$.
For $h_1$ we choose a decreasing function with the following properties.
\begin{itemize}
\item $h_1(r)=e^{1-r}$ near $r=r_1$.
\item for $r\in]0,r_1]$ the derivative is sufficiently negative such that $\vert \frac{h_2'}{h_1'}\vert<1$.
\item $h_1(0)>h_2(r_1)$.
\end{itemize}
These choices guarantee that the first two asserted properties hold.
To see that the third property holds, take $r$ with $0<r<r_1$.
We rescale the Reeb vector field to
$$
-\frac{h_2'(r)}{h_1'(r)}R_{P}+\partial_\phi.
$$
Since the coefficient of $R_{P}$ is non-zero, but less than $1$ in absolute value, the slope satisfies $0<\vert j/i \vert<1$, so $j\neq 0$ and $\vert j \vert<i$; in particular, a Reeb orbit cannot close up after a single revolution around the binding, and $\lk(P\times \{ 0 \},\gamma_{ij})=i\geq 2$.
For the last assertion, plug the assumption $h_1(0)>h_2(r_1)$ into Formula~\eqref{eq:action_near_binding}.
\end{proof}
From now on, we will use the functions $h_1$ and $h_2$ provided by this lemma.
Our first goal is
\begin{lemma}
\label{lemma:existence_rigid_regular_plane}
Let $W$ be a Liouville domain whose contact type boundary $\partial W=(P,\lambda_P)$ is a prequantization bundle over a symplectic manifold, and suppose it admits a right-handed fractional twist $\tau$.
Consider the contact open book $Y=\OB(W,\tau^{-1})$, and fatten the binding model such that the function $h_2$ has a unique maximum in $r_1$, as above.
Then the following holds true.
\begin{enumerate}
\item There is a periodic Reeb orbit $\gamma_1(t)=(p_0;r_1,t)$ in the fattened binding $P\times D^2\subset Y$.
\item This orbit $\gamma_1$ bounds a unique, rigid finite energy holomorphic plane in the symplectization ${\mathbb{R}} \times Y$. Furthermore, we can assume that this finite energy plane is regular.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is somewhat long, so we split it into several sections.
Part (1) is clear by the construction in the previous section.
Existence of a finite energy holomorphic plane in the Morse--Bott setup is proved in Lemma~\ref{lemma:existence_finite_energy_plane}.
Uniqueness in the Morse--Bott setup is proved in Section~\ref{sec:uniquess_finite_energy_plane}.
In the sections after that we prove regularity of the finite energy plane and describe how to go from the Morse--Bott setup to the non-degenerate setup.
\end{proof}
\begin{remark}
In any open book, not just those with left-handed twists in the monodromy, one can always perturb $h_2$ to have such a maximum, so this lemma does not have direct geometric meaning.
Instead, in special setups one can show that $\gamma_1$ does not bound any other holomorphic curves.
If this happens, the finite energy plane from the lemma will have a meaning: it obstructs fillability, and guarantees the existence of a contractible periodic Reeb orbit, even after deformations of the contact form by Lemma~\ref{lemma:non-fillability}.
We also want to point out that the maximum of $h_2$ in the case of a left-handed twist arises rather naturally, whereas perturbing $h_2$ will in general create additional periodic Reeb orbits and holomorphic curves.
\end{remark}
\subsection{An adjusted almost complex structure for the symplectization}
The binding $P$ is a prequantization bundle over an integral symplectic manifold $(Q,\omega)$. We denote the projection from $P \to Q$ by $\pi_Q$.
Choose a compatible almost complex structure $J_Q$ for $(Q,\omega)$.
We lift this to a complex structure compatible with $(\xi_P=\ker \lambda_P,d\lambda_P)$ using the formula
$$
J_{\xi_P}:=-\pi_Q^*J_Q:=-Hor \circ J_Q \circ d\pi_Q.
$$
Here the map $Hor_p$ takes horizontal lifts of vectors in $T_{\pi_Q(p)}Q$ to the contact structure $\xi_P|_p$ in $P$.
We need the minus sign because of our convention~\eqref{eq:d_connection}.
A fattened neighborhood of the binding, as defined in Section~\ref{sec:fattening_binding}, has the form $P\times D^2$. Denote the natural projection $P\times D^2 \to P$ by $\pi_P$.
Define
$$
X_1=\partial_r \quad \text{and} \quad X_2=\frac{1}{\det H}\left( -h_2 R_{P}+h_1 \partial_\phi \right)
.
$$
Split the contact structure $\xi=\ker \alpha$ as
$$
\xi=\pi_P^*\xi_P \oplus {\mathbb{R}} X_1 \oplus {\mathbb{R}} X_2.
$$
The vectors $X_1,X_2$ form a symplectic basis of the complement of $\pi_P^*\xi_P$ in $\xi$, so we define a complex structure on this complement by putting $J_{X_1X_2}(X_1)=X_2$ and $J_{X_1X_2}(X_2)=-X_1$, and extending linearly.
Then
$$
J_\xi:=\pi_P^* J_{\xi_P}+J_{X_1X_2}
$$
defines a compatible complex structure on $(\xi,d\alpha)$. More explicitly, we can write
$$
J_{X_1X_2}=X_2\otimes X_1^*-X_1\otimes X_2^*=\frac{1}{\det H}\left( -h_2 R_{P}+h_1 \partial_\phi \right) \otimes dr-\partial_r \otimes \left( h_1' \lambda_P +h_2' d\phi \right).
$$
We extend the complex structure $J_\xi$ to an adjusted almost complex structure for the symplectization using the standard recipe
$$
J\partial_t =R_\alpha.
$$
We obtain
$$
J=R_\alpha \otimes dt - \partial_t \otimes \alpha+J_{X_1X_2}+\pi_P^* J_{\xi_P}.
$$
In matrix notation
$$
J=
\left(
\begin{array}{ccccc}
0 & -h_1 & 0 & 0 & -h_2 \\
\frac{h_2'}{\det H} & 0 & 0 & \frac{-h_2}{\det H} & 0 \\
0 & 0 & J_{\xi_P} & 0 & 0 \\
0 & -h_1'& 0 & 0 & -h_2' \\
\frac{-h_1'}{\det H} & 0 & 0 &\frac{h_1}{\det H} & 0
\end{array}
\right)
,
$$
where we have ordered the coordinates/basis vectors as $(t,R_{P},\xi_P,r,\phi)$.
\subsection{PDE to ODE: constructing a rigid holomorphic plane}
We prove the following lemma.
\begin{lemma}
\label{lemma:existence_finite_energy_plane}
There is a finite energy holomorphic plane $u_0$ in the symplectization of $P\times D^2$ asymptotic to $\gamma_1$.
\end{lemma}
\begin{proof}
We use the Morse-Bott almost complex structure from the previous section.
Take $p_0$ in $P$ and make the following ansatz for the holomorphic plane
\begin{eqnarray*}
u: {\mathbb{C}} & \longrightarrow & {\mathbb{R}} \times P \times D^2 \\
(\rho,\psi) & \longmapsto & (t(\rho);p_0,r(\rho),\psi)
\end{eqnarray*}
With respect to the coordinates/basis vectors $(t,R_{P},\xi_P,r,\phi)$ we have
$$
u_\rho:=\frac{\partial u}{\partial \rho}=
\left(
\begin{array}{c}
t_\rho \\
0 \\
0 \\
r_\rho \\
0
\end{array}
\right)
\text{ and }
u_\psi=
\left(
\begin{array}{c}
0 \\
0 \\
0 \\
0 \\
1
\end{array}
\right)
$$
The Cauchy-Riemann equation in polar coordinates, $u_\rho+\frac{1}{\rho}J u_\psi=0$, hence reduces to a system of ODEs
\[
\begin{split}
t_\rho(\rho) &=\frac{1}{\rho} h_2(r(\rho) \,) \\
r_\rho(\rho) &=\frac{1}{\rho} h_2'(r(\rho) \,)
\end{split}
\]
This system can be solved by first integrating the second equation and substituting the solution into the first: integration then yields a solution, with two integration constants, one shifting $t$ (the symplectization symmetry) and one rescaling $\rho$ (an automorphism of the plane).
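Concretely, the second equation is separable; away from zeros of $h_2'$ we can write
$$
\frac{dr}{h_2'(r)}=\frac{d\rho}{\rho},
\qquad\text{so}\qquad
\int^{r(\rho)} \frac{ds}{h_2'(s)}=\log \rho+C,
$$
and substituting $r(\rho)$ into the first equation yields $t(\rho)=C_t+\int^{\rho} \frac{h_2(r(\sigma)\,)}{\sigma}\,d\sigma$. Here $C$ and $C_t$ are the two integration constants mentioned above.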
We check that this is a smooth solution and that it is asymptotic to $\gamma_1$.
To see that the solution is smooth, we use the standard form $h_2(r)=r^2$ near $r=0$. This is not strictly necessary, but it simplifies the argument.
Near $r=0$, we get
\[
\begin{split}
r_\rho(\rho) &=\frac{1}{\rho} 2 r(\rho)
\end{split},
\]
so $r(\rho)=C_r \rho^2$, where $C_r$ is an integration constant.
The first equation reduces to
\[
\begin{split}
t_\rho(\rho) &=\frac{1}{\rho} C_r^2 \rho^4.
\end{split}
\]
So near $r=0$, the solution looks like $t(\rho)=C_t+\frac{1}{4}C_r^2\rho^4$, where $C_t$ is another integration constant.
We see that the map $u$ is smooth near $r=0$. Away from $r=0$, there are no coordinate singularities, so $u$ is smooth everywhere.
Finally, we check that $u$ is asymptotic to $\gamma_1$.
To see this, note that as $\rho\to \infty$, the radial component $r(\rho)\to r_1$.
Hence $t_\rho(\rho)\sim \frac{h_2(r_1)}{\rho}$, so asymptotically $t(\rho)\sim h_2(r_1) \log(\rho)$.
After going to cylindrical coordinates near $\infty$ (such coordinates are described after Lemma~\ref{lemma:normal_coordinates}) and noting that $2\pi h_2(r_1)=\mathcal A(\gamma_1)$ we see that $u$ converges exponentially to the Reeb orbit $\gamma_1(\psi)=(p_0;r_1,\psi)$ in $P\times D^2$.
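For comparison with the cylindrical normalization used later, here is a sketch of the coordinate change; we use the (convention-dependent) identification $\rho=e^{2\pi s}$ of the punctured plane with a half-cylinder with coordinates $(s,\psi)$:
\[
t\circ u \sim h_2(r_1) \log \rho = 2\pi h_2(r_1)\, s = \mathcal A(\gamma_1)\, s,
\]
which is the linear growth of a trivial cylinder over an orbit of action $\mathcal A(\gamma_1)$, while the remaining components converge to those of $\gamma_1$.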
For later use, we put $u_0:=u$.
\end{proof}
\subsection{Uniqueness of holomorphic planes}
\label{sec:uniquess_finite_energy_plane}
Consider the projection
\begin{eqnarray*}
\pi: {\mathbb{R}} \times P \times D^2 & \longrightarrow & D^2.
\end{eqnarray*}
We shall start by arguing that any finite energy holomorphic plane $u$ asymptotic to $\gamma_1$ has the same projection $\pi(u_0)$ as the plane $u_0$ we found in Lemma \ref{lemma:existence_finite_energy_plane}.
\begin{lemma}
\label{lem:holomorphic_curve_projection}
Let $u:{\mathbb{C}} \to {\mathbb{R}} \times Y$ be a finite energy holomorphic plane asymptotic to $\gamma_1$.
Then $\pi \circ u$ is defined, and $\pi\circ u({\mathbb{C}})=\{ (r,\phi) \in D^2~|~r\leq r_1 \}$.
\end{lemma}
The idea is that a curve escaping this neighborhood would exceed the a priori energy bound; here are the details.
\begin{proof}
Since the asymptote $\gamma_1$ has linking number $1$ with the binding, there is a unique point on $u$ intersecting the binding.
We will assume that this is $u(0)\in P\times \{ 0\}$.
Clearly, there is some connected, open subset $V\subset {\mathbb{C}}$ such that $\pi \circ u|_V$ is defined.
Furthermore, we can find $V$ such that $D^2_{r_1} \subset \pi\circ u(V)$, since otherwise $u$ cannot be asymptotic to $\gamma_1$.
By Stokes' theorem the energy of the holomorphic curve $u$ is given by
$$
E(u)=\mathcal A(\gamma_1)=E(u_0)= 2\pi h_2(r_1)
.
$$
On the other hand, we can also compute the energy directly as
$$
E(u)=\int_{\mathbb{C}} u^* d\alpha \geq \int_{u(V)} h_1'dr \wedge \lambda_P+h_1d\lambda_P+h_2' dr \wedge d\phi.
$$
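Here we used that in this region $\alpha=h_1\lambda_P+h_2\,d\phi$, so that
$$
d\alpha=h_1'\,dr\wedge \lambda_P+h_1\, d\lambda_P+h_2'\,dr\wedge d\phi.
$$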
As $u$ is $J$-holomorphic and $h_1 d\lambda_P(\cdot, J \cdot)=h_1 d\lambda_P(\cdot, J_{\xi_P} \cdot)$ is non-negative, this can be estimated by
\begin{equation}
\label{eq:estimate_energy_curve}
E(u)\geq \int_{u(V)} h_1'dr \wedge \lambda_P+h_2'dr \wedge d\phi.
\end{equation}
We are going to give a lower bound for this quantity by dividing the plane into annuli $A_{r_{s_j},r_{e_j}}$ whose images have inner radius $r_{s_j}$ and outer radius $r_{e_j}$.
For each such annulus the energy contains the term
$$
\int_{A_{r_{s_j},r_{e_j}}} h_1'dr \wedge \lambda_P+\int_{A_{r_{s_j},r_{e_j}}} h_2'dr \wedge d\phi.
$$
The second term is positive for $r<r_1$, but the first term can be negative.
However, we can parametrize the inverse image of a regular value of $r$ as a circle, and this allows us to say more.
We find the following contribution to the energy for an annulus with radii $r_{s_j}$ and $r_{e_j}$,
\begin{eqnarray*}
\int_{A_{r_{s_j},r_{e_j}}} h_1'dr \wedge \lambda_P & = &
\int_{r=r_{s_j}}^{r_{e_j}} \int_{\psi=0}^{2\pi}h_1'(r) \lambda_P(\partial_\psi u) d\psi dr \\
& = & \int_{r=r_{s_j}}^{r_{e_j}} \int_{\psi=0}^{2\pi}\frac{h_1'(r)}{h_1(r)} \left(
\alpha-h_2(r) d\phi
\right)
(\partial_\psi u) d\psi dr \\
& = & \int_{r=r_{s_j}}^{r_{e_j}} -\frac{h_1'(r)}{h_1(r)} \left(
2\pi h_2(r) - \int_{\psi=0}^{2\pi} \alpha(\partial_\psi u) d\psi
\right)
dr. \\
\end{eqnarray*}
Here we used the fact that the asymptotics of $u$ are the same as those of $u_0$, so $\gamma_1$ is covered once and $\int_0^{2\pi} d\phi(\partial_\psi u) d\psi =2\pi$.
We see that this contribution to the energy is non-negative as long as
$$
\int_{\psi=0}^{2\pi} \alpha(\partial_\psi u) d\psi \leq 2\pi h_2(r).
$$
The left-hand side is the action of the loop on which the $r$-component of $u$ equals $r$.
Let us now complete the argument.
Consider an increasing sequence of regular values $\{ r_i \}_i$, where $r_i$ is the outer radius of an annulus, that converges to $r_1$.
There are two cases to consider.
\begin{itemize}
\item{} There is a subsequence, denoted also by $\{ r_i \}_i$, such that
$$
\int_{\psi=0}^{2\pi} \alpha(\partial_\psi u(r_i,\psi)\, ) d\psi > 2\pi h_2(r_i).
$$
If this happens, then the above estimate does not work, but in this case we can directly compute the action by Stokes' theorem to find
$$
\lim_{i\to \infty} \int_{\{u^r<r_i\}} u^* d\alpha \geq E(u_0).
$$
\item{} If there is no such subsequence, then we can apply the above estimate.
In particular, we find that
$$
\int_{A_{r_{s_j},r_{e_j}}} h_1'dr \wedge \lambda_P \geq 0,
$$
so the energy of the annulus $A_{r_{s_j},r_{e_j}}$ is at least
$$
E_{A_{r_{s_j},r_{e_j}}}(u)\geq 2\pi h_2(r_{e_j})- 2\pi h_2(r_{s_j}).
$$
\end{itemize}
Combine this with Equation~\eqref{eq:estimate_energy_curve} to obtain an estimate for the total energy of the curve $u$.
We have
$$
E(u)\geq E(u_0)
$$
if $\pi \circ u$ covers exactly $D^2_{r_1}$.
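Explicitly, choosing consecutive annuli with $r_{s_{j+1}}=r_{e_j}$ and outer radii increasing to $r_1$, these contributions telescope:
$$
E(u)\geq \sum_j \left( 2\pi h_2(r_{e_j})-2\pi h_2(r_{s_j}) \right) \longrightarrow 2\pi h_2(r_1)=E(u_0).
$$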
Arguing by contradiction we can show the claim of the lemma.
If there is a point $x$ such that $\pi \circ u(x)\notin D^2_{r_1}$, then we find an open set $U_x$ not contained in $D^2_{r_1}$ which gives a positive contribution to the energy.
In particular,
$$
E(u)>E(u_0).
$$
But this is impossible by Stokes' theorem, which asserts that we must have equality.
\end{proof}
We shall now reduce the problem to a holomorphic curve in a $3$-dimensional contact manifold.
Consider a holomorphic plane
\begin{eqnarray*}
u: {\mathbb{C}} & \longrightarrow & {\mathbb{R}} \times P\times D^2_{r_1} \\
z & \longmapsto & (f(z),g(z),h(z) )
\end{eqnarray*}
with the same asymptotics as $u_0$.
By Lemma~\ref{lem:holomorphic_curve_projection} this curve $u$ must have the same projection to $D^2_{r_1}$ as $u_0$, and $u$ stays in a set of the form ${\mathbb{R}} \times P \times D^2_{r_1}$.
Consider the projection
\[
\begin{split}
\bar \pi_Q:~{\mathbb{R}} \times P \times D^2_{r_1} & \longrightarrow Q \\
(t,p,z) & \longmapsto \pi_Q(p).
\end{split}
\]
\begin{lemma}
The map $\bar \pi_Q\circ u: {\mathbb{C}} \to Q$ is a $(-J_Q)$-holomorphic curve with vanishing area. Hence $\bar \pi_Q\circ u$ is constant.
\end{lemma}
\begin{proof}
This holds because $J$ is obtained by pulling back $J_Q$. More precisely, if we let $j$ denote the standard complex structure on ${\mathbb{C}}$, then we have
\[
\begin{split}
d(\bar \pi_Q\circ u) \circ j&=d \bar \pi_Q \circ d u \circ j=d \bar \pi_Q \circ J \circ du
=d\bar \pi_Q \circ \left( R_\alpha \otimes dt-\partial_t \otimes \alpha-\pi_P^* \circ \pi_Q^*J_Q +J_{X_1X_2}\right) \circ du\\
&=
-d \bar \pi_Q \circ \left( Hor \circ J_Q \circ d\bar \pi_Q \right)\circ du =-J_Q \circ d(\bar \pi_Q\circ u),
\end{split}
\]
so $\bar \pi_Q\circ u$ is a $(-J_Q)$-holomorphic curve. Furthermore, $\int_{\mathbb{C}} ( \bar \pi_Q\circ u)^*\omega_Q=\int_{\mathbb{C}} \frac{-1}{2\pi} u^* d \lambda_P$ by virtue of $\lambda_P$ being a connection form.
As $\int_{\mathbb{C}} u^*d \lambda_P=0$ by the asymptotic boundary conditions (the $P$-component of $u$ converges to $p_0$ at $\infty$), we conclude that $\bar \pi_Q\circ u$ is a constant map.
\end{proof}
From this lemma we conclude that $u$ must have the following form,
\begin{eqnarray*}
u: {\mathbb{C}} & \longrightarrow & {\mathbb{R}} \times P\times D^2_{r_1} \\
z & \longmapsto & (f(z),Fl^{R_{P}}_{\tilde g(z)}(p_0),h(z) )
\end{eqnarray*}
It follows that we can construct a holomorphic curve $v$ in the symplectization of a $3$-dimensional contact manifold,
\begin{eqnarray*}
v: {\mathbb{C}} & \longrightarrow & {\mathbb{R}} \times S^1 \times D^2_{r_1} \\
z & \longmapsto & (f(z),\tilde g(z),h(z) )
.
\end{eqnarray*}
Similarly, we can also associate a holomorphic curve $v_0$ with $u_0$.
\begin{eqnarray*}
v_0: {\mathbb{C}} & \longrightarrow & {\mathbb{R}} \times S^1 \times D^2 \\
z & \longmapsto & (f_0(z),\theta_0,h_0(z) )
.
\end{eqnarray*}
Let us now argue that $v$ is a translation of $v_0$.
This implies in turn that $u$ is a translation of $u_0$.
There are two cases to consider.
\begin{itemize}
\item{} The function $\tilde g$ is constant, so $\tilde g(z)=\theta_0$.
In this case, the curves $v$ and $v_0$ are both contained in the $3$-dimensional submanifold ${\mathbb{R}} \times \{ \theta_0 \} \times D^2$, and they both converge exponentially to the same periodic orbit $\gamma_1$.
If $v$ is not a translation of $v_0$, then we can apply a translation in the ${\mathbb{R}}$-direction of the symplectization to $v$, and obtain a shifted curve $v_{shift}$ that intersects $v_0$ non-trivially and transversely.
Inside the $3$-dimensional submanifold ${\mathbb{R}} \times \{ \theta_0 \} \times D^2$ such intersections are $1$-dimensional, whereas distinct holomorphic curves in the $4$-dimensional symplectization can only intersect in isolated points of positive index.
By positivity of intersection (cf.~\cite[Theorem~E.1.2]{MS:J_curves}), $v_{shift}$ must therefore coincide with $v_0$, which contradicts our assumption.
\item{} The function $\tilde g$ is not constant.
Then we define a shifted holomorphic curve with different asymptotics by
$$
v_{shift}= (f(z),\tilde g(z)+\theta_{shift},h(z) ).
$$
The holomorphic curve $v_0$ is asymptotic to the periodic Reeb orbit $\gamma_1(\psi)=(\theta_0;r_1,\psi)$, and $v_{shift}$ is asymptotic to the periodic Reeb orbit $\gamma_{shift}(\psi)=(\theta_0+\theta_{shift};r_1,\psi)$.
The obvious Seifert surfaces for $\gamma_1$ and $\gamma_{shift}$ (flat disks in the solid torus $S^1\times D^2$) do not intersect, so $\gamma_1$ and $\gamma_{shift}$ do not link for any shift $\theta_{shift} \notin 2\pi {\mathbb{Z}}$.
On the other hand, the linking number $\lk(\gamma_1,\gamma_{shift})$ can also be computed as a $4$-dimensional intersection number of the Seifert surfaces of $\gamma_1$ and $\gamma_{shift}$.
We shall take the Seifert surfaces provided by the finite energy planes $v_0$ and $v_{shift}$, so
$$
\lk(\gamma_1,\gamma_{shift}) =v_0 \cdot v_{shift}.
$$
Since $\tilde g(z)$ is assumed to be non-constant, we can find $\theta_{shift}$ such that $v_0$ and $v_{shift}$, after possibly translating in the ${\mathbb{R}}$-direction, intersect. By positivity of intersection for holomorphic curves in dimension $4$, it follows that $\lk(\gamma_1,\gamma_{shift})>0$.
This is a contradiction, so we conclude that for finite energy planes asymptotic to $\gamma_1$ the function $\tilde g$ is constant.
\end{itemize}
\subsection{Regularity of the finite energy plane: Banach space for Morse-Bott setup and reduction to $3$ dimensions}
\label{seq:MB_regularity}
We will show that $u_0$ is regular, meaning that the linearized Cauchy-Riemann operator is surjective at $u_0$.
Since the details are rather lengthy, we give a summary first.
\begin{itemize}
\item We show that the linearized operator splits into a $4$-dimensional part, which is a mixture of normal and tangential directions, and a higher-dimensional part, which is purely normal.
\item The kernel of the higher-dimensional part can be explicitly determined.
\item By automatic transversality results of \cite{Wendl:transversality} the $4$-dimensional part is regular. We conclude that $\dim \ker D_{u_0}=\ind D_{u_0}$ for the full problem, so the cokernel is trivial.
\end{itemize}
Let us now give some details.
Since we are looking only at finite energy planes, we shall restrict ourselves to the functional analytic setup for this particular case.
Fix a contact manifold $(Y,\alpha)$, and let $\gamma_1$ be a periodic Reeb orbit of Morse-Bott type in $Y$.
Suppose that the action $\mathcal A(\gamma_1)=T$.
Fix a small number $\delta>0$. This number will serve as an asymptotic weight and is necessary for the linearized operator to be Fredholm. Also choose $p>2$.
We take the following lemma from \cite[Lemma~3.1]{Bourgeois:thesis}.
\begin{lemma}[Normal coordinates for a periodic orbit in a Morse-Bott manifold]
\label{lemma:normal_coordinates}
Suppose that $N_T$ is a $k$-dimensional submanifold in $(Y^{2n-1},\alpha)$ consisting of simple periodic Reeb orbits of period $T$.
Let $\gamma_1\subset N_T$ be a simple periodic orbit.
Then there is a neighborhood $S^1\times D^{k-1}\times D^{2n-1-k}=S^1 \times D^{2n-2}$ near $\gamma_1$ with coordinates $(\phi;z_t,z_n)=(\phi;z)$ such that
$$
\alpha=g\cdot(d\phi+\frac{i}{2}(zd\bar z-\bar z dz)\, ),
$$
where $dg|_{N_T}=0$.
\end{lemma}
We apply this lemma to obtain coordinates $(t;\phi,z)$ near a cylinder over $\gamma_1$ for the symplectization ${\mathbb{R}} \times Y$.
We then say that a map $u:{\mathbb{C}} \P^1-\{ pt \}={\mathbb{C}} \to {\mathbb{R}} \times Y$ is {\bf asymptotically cylindrical} to $\gamma_1$ if there are $t_0,\phi_0$ such that in cylindrical coordinates $(\rho,\psi)\in Z_+={\mathbb{R}}_{\geq 0}\times S^1$ for ${\mathbb{C}}$, the map $u$ satisfies
\[
\begin{split}
t\circ u-T \rho -t_0 & \in W^{1,p}_\delta(Z_+,{\mathbb{R}}) \\
\phi\circ u-\psi-\phi_0 & \in W^{1,p}_\delta(Z_+,{\mathbb{R}}) \\
z\circ u=(z_t,z_n)\circ u & \in W^{1,p}_\delta(Z_+,{\mathbb{R}}^{2n-2}).
\end{split}
\]
Denote the orbit space $N_T/S^1$ by $S_T$.
Define the Banach manifold
$$
\mathcal B^{1,p}_{MB,\delta}(\gamma_1):=\{ u:{\mathbb{C}} \to {\mathbb{R}} \times Y~|~u \text{ is of class } W^{1,p}_{loc}, \text{ and asymptotically cylindrical to }\gamma_1\}
$$
For a map $u\in \mathcal B^{1,p}_{MB,\delta}(\gamma_1)$ with $[\gamma_1]$ in the orbit space $S_T$, we define the finite-dimensional vector spaces
$$
V_{\gamma_1}={\mathbb{R}} \frac{\partial}{\partial t}\oplus {\mathbb{R}} R_\alpha \text{ and }
W_{\gamma_1}=T_{[\gamma_1]} S_T.
$$
Then the tangent space at $u$ can be identified with
\[
\begin{split}
T_u \mathcal B^{1,p}_{MB,\delta}(\gamma_1) &\cong W^{1,p}_\delta({\mathbb{C}},u^* T({\mathbb{R}} \times Y )\, ) \oplus V_{\gamma_1}\oplus W_{\gamma_1}.
\end{split}
\]
This vector space is actually enough for our purposes since we only need to check regularity near one solution, namely $u_0$.
Let $Y=\OB(W,\tau^{-1})$ be the contact manifold constructed in Theorem~\ref{thm:left_and_right_twisted} with the modified contact form from Section~\ref{sec:fattening_binding}.
Consider the holomorphic plane $u_0$ constructed in Lemma~\ref{lemma:existence_finite_energy_plane}.
We linearize the Cauchy-Riemann operator near the solution $u_0$.
Observe that the $P$-component of $u_0$ equals $p_0$.
On a neighborhood of ${\mathbb{R}} \times \{ p_0 \} \times D^2$ we choose the Riemannian metric
$$
g:=dt \otimes dt+g_{flat}^P+dr \otimes dr +r^2 d\phi \otimes d\phi.
$$
Let $\nabla$ denote the Levi-Civita connection for this flat metric.
Then $\nabla \partial_t=0,\nabla \partial_r=0, \nabla \partial_\phi=0$.
The linearized operator at a solution $u_0$ of the Cauchy-Riemann equations, acting on a vector field $X$, is then given by $D_{u_0}X = \nabla X + J \nabla X \circ i +(\nabla_X J)\circ du_0 \circ i$, see \cite[Equation~3.8]{Wendl:transversality}.
By plugging in $\partial_\rho$ we get a PDE with asymptotic boundary conditions given by the functional analytic setup,
$$
D_{u_0}(X)(\partial_\rho)= \nabla_{\partial_\rho} X + \frac{1}{\rho} J \nabla_{\partial_\psi} X +\frac{1}{\rho} (\nabla_X J)\partial_\phi.
$$
The last term can be simplified: $(\nabla_X J)\partial_\phi= \nabla_X (J\partial_\phi)-J\nabla_X \partial_\phi=-\nabla_X \left( h_2 \partial_t +h_2' \partial_r \right)$.
Simplifying notation as well, we arrive at
$$
D_{u_0}(X)(\partial_\rho)= \nabla_\rho X + \frac{1}{\rho} J \nabla_\psi X - \frac{1}{\rho} X^r h_2' \partial_t-\frac{1}{\rho} X^r h_2'' \partial_r.
$$
In order to understand the associated differential operator, consider the vector fields along the solution $u_0$ given by
\[
\begin{split}
X_1^I=\partial_t,\quad X_2^I=R_\alpha, \quad X_3^I=X_1, \quad X_4^I=X_2, \\
\{ X_i^{II} \} \text{ symplectic basis of }{\xi_P}|_{p_0}.
\end{split}
\]
Denote the span of the $X^I$-vectors by $E_{I}$, and the span of the $X^{II}$-vectors by $E_{II}$.
We obtain a splitting $u_0^*T({\mathbb{R}} \times Y)=E_{I}\oplus E_{II}$.
Furthermore, with respect to this decomposition, the ``Morse-Bott'' vector space $W_{\gamma_1}$ decomposes into $W_{\gamma_1}^I\oplus W_{\gamma_1}^{II}$.
This gives rise to bounded linear projection maps
\[
\begin{split}
\pi_{I}:W^{1,p}_\delta(u_0^*T({\mathbb{R}} \times Y)\,)\oplus V_{\gamma_1} \oplus W_{\gamma_1}
& \longrightarrow W^{1,p}_\delta(E_{I})\oplus V_{\gamma_1} \oplus W^I_{\gamma_1} \\
\pi_{II}:W^{1,p}_\delta(u_0^*T({\mathbb{R}} \times Y)\,)\oplus V_{\gamma_1} \oplus W_{\gamma_1}
& \longrightarrow W^{1,p}_\delta(E_{II})\oplus W^{II}_{\gamma_1}
\end{split}
\]
splitting the tangent space $T_{u_0} \mathcal B^{1,p}_{MB,\delta}$.
Similarly, there is a splitting of the target space of the linearized operator as well.
We have
$$
L^p_\delta(\overline{\Hom}(T{\mathbb{C}}, u_0^*T({\mathbb{R}} \times Y)\,)\, )
=
L^p_\delta(\overline{\Hom}(T{\mathbb{C}},E_{I}\, ))
\oplus
L^p_\delta(\overline{\Hom}(T{\mathbb{C}},E_{II}\, ))
,
$$
where $\overline{\Hom}$ denotes complex anti-linear maps.
\begin{lemma}
The vertical differential at $u_0$,
\[
D_{u_0}: T_{u_0} \mathcal B^{1,p}_{MB,\delta} \longrightarrow
L^p_\delta(\overline{\Hom}(T{\mathbb{C}}, u_0^*T({\mathbb{R}} \times Y)\,)\, ),
\]
splits as $D_{u_0}=D_{I}+D_{II}$, where
\[
\begin{split}
D_{I}: W^{1,p}_\delta(E_{I})\oplus V_{\gamma_1} \oplus W^I_{\gamma_1} & \longrightarrow L^p_\delta(\overline{\Hom}(T{\mathbb{C}},E_{I}\, )) \\
D_{II}: W^{1,p}_\delta(E_{II})\oplus W^{II}_{\gamma_1}
& \longrightarrow L^p_\delta(\overline{\Hom}(T{\mathbb{C}},E_{II}\, ))
\end{split}
\]
\end{lemma}
\begin{proof}
For this, we write out the vertical part of the linearized operator.
Note that the pulled back tangent space $u_0^*T({\mathbb{R}} \times P \times D^2)$ is trivialized by the ${X^{I}}$-vectors $\partial_t,R_\alpha,X_1=\partial_r,X_2=J_{X_1X_2} \partial_r$ and the $X^{II}$-vectors which form a symplectic basis of $\xi_P$ at $p_0$.
A dual basis is given by $dt,\alpha, dr, \beta,\{ X^{II,*}_j \}_j$, where
$$
\beta=h_1' \lambda_P+h_2' d\phi.
$$
The linearized Cauchy-Riemann equation $D_{u_0} \bar \partial_{J}(X)(\partial_\rho)=0$ can be written as the system
\begin{alignat*}{4}
\quad\quad
\partial_\rho X^t-\frac{1}{\rho}\alpha( \nabla_{\psi}X )-\frac{1}{\rho}X^rh_2' &= 0
&\quad\quad\quad\quad(dt) \\
\quad\quad
\alpha(\nabla_\rho X)+\frac{1}{\rho}\partial_\psi X^t &=0
&\quad\quad\quad\quad(\alpha) \\
\quad\quad
\nabla_\rho X^{\xi_P}+\frac{1}{\rho}J_\xi \nabla_\psi X^{\xi_P} &=0
&\quad\quad\quad\quad(II\text{-part}) \\
\quad\quad
\partial_\rho X^r-\frac{1}{\rho}\beta(\nabla_\psi X)-\frac{1}{\rho}X^r h_2'' &=0
&\quad\quad\quad\quad(dr) \\
\quad\quad
\beta(\nabla_\rho X)+\frac{1}{\rho}\partial_\psi X^r &=0.
&\quad\quad\quad\quad(\beta)
\end{alignat*}
The third equation $\nabla_\rho X^{\xi_P}+\frac{1}{\rho}J_\xi \nabla_\psi X^{\xi_P} =0$ separates from the others, and this equation gives $D_{II}$.
\end{proof}
To check that the linearized operator is surjective, we show that $\dim \ker D_{u_0}=\ind D_{u_0}$.
We emphasize that we are in a Morse-Bott setup, so the index can be computed with Formula~\eqref{eq:ind_moduli} if we replace the Conley-Zehnder index by a perturbed variant as defined in \cite[Section~3.2]{Wendl:transversality}. Using Lemma~\ref{lemma:CZ_special_orbit} we may write the index as
$$
\ind D_{u_0}=5+\dim S_T.
$$
Now observe that if $X=X^I+X^{II}$ lies in the kernel of $D_{u_0}=D_{I}+D_{II}$ with $X^I\in W^{1,p}_\delta(E_{I})\oplus V_{\gamma_1} \oplus W^I_{\gamma_1}$ and $X^{II}\in W^{1,p}_\delta(E_{II})\oplus W^{II}_{\gamma_1}$, then $D_{II} X^{II}=0$.
Solutions to the equation $D_{II} X^{II}=0$ correspond to some of the Morse-Bott symmetries.
Indeed, $D_{II} X^{II}=0$ is a standard system of Cauchy-Riemann equations on ${\mathbb{C}}$ in polar coordinates, so its solutions are holomorphic functions ${\mathbb{C}} \to {\mathbb{C}}^{\frac{\dim S_T-1}{2}}$.
In the functional analytic setup we have here, only constant solutions are admissible; other solutions do not lie in $ W^{1,p}_\delta(E_{II})\oplus W^{II}_{\gamma_1}$.
There are $\dim S_T-1$ real linearly independent constant solutions, so $\dim \ker D_{II}=\dim S_T-1$.
Since $D_{I}$ and $D_{II}$ decouple, we find also $D_{I} X^{I}=0$.
Now notice that $D_{I}$ corresponds to the linearized Cauchy-Riemann operator for the $3$-dimensional problem: the case that $W$ is a surface, and $P$ a collection of circles.
Denote the finite energy plane for the $3$-dimensional case by $u_{0,\dim=3}$.
For this $3$-dimensional case we use Wendl's automatic transversality result, \cite[Theorem 1]{Wendl:transversality}.
Since the curve $u_{0,\dim=3}$ is embedded, has genus $0$ and only one positive puncture, the conditions of Wendl's theorem hold, and we see that $\dim \ker D_I=\ind D_{ u_{0,\dim=3} }=5+1=6$.
We conclude that for the original problem $\dim \ker D_{u_0}=\dim \ker D_{I} +\dim \ker D_{II}=6+\dim S_T-1=\ind D_{u_0}$, so the cokernel of $D_{u_0}$ is trivial.
\begin{remark}
Alternatively, we can use a more direct argument to compute $\ker D_{I}$.
In \cite{BvK} Fourier analysis was used to directly compute the kernel.
Also note that the index that we use here is not the same as the one in \eqref{eq:ind_moduli}; here we are working with maps, so there are still automorphisms.
\end{remark}
\subsection{From Morse-Bott to non-degenerate}
\label{sec:MB_non_deg}
We follow the procedure from Bourgeois' thesis to get a non-degenerate contact form.
Instead of performing the procedure in general, we will perform this procedure in our particular setup.
In our case, the Morse-Bott manifold is given by $P\times \{ r_1 \} \times S^1\subset P \times D^2$, and the orbit space is $P=(P\times S^1) /\text{Reeb flow}$.
Choose a Morse function $f$ on the orbit space with a unique local minimum in $[\gamma_1]=p_0$ and with value $f(p_0)=0$.
Lift this function to an $S^1$-invariant function $\bar f$ on the Morse-Bott manifold $P\times S^1$.
Define the perturbed contact form
$$
\alpha_\epsilon:=(1+\epsilon \bar f) \alpha.
$$
Denote the Reeb field of $\alpha_\epsilon$ by $R_\epsilon$.
Since $p_0$ is a minimum, $\gamma_1$ is also a periodic orbit of $R_\epsilon$, and as $f(p_0)=0$, it has in fact the same parametrization.
In addition, for $\epsilon>0$ sufficiently small, $\gamma_1$ is a non-degenerate periodic orbit of $R_\epsilon$.
Next we define an adjusted almost complex structure $J_\epsilon$ for the symplectization by putting $J_\epsilon|_{\xi}=J|_{\xi}$, and extending by the usual recipe $J_\epsilon \partial_t =R_\epsilon$.
First apply Lemma~\ref{lemma:normal_coordinates} to obtain coordinates $(\phi;x,y)$ for a neighborhood of $\gamma_1$ of the form $S^1\times U_{P\times I}$ such that
$$
\alpha=g\cdot(d\phi+xdy-ydx).
$$
In these coordinates, we have identified $S^1\cong {\mathbb{R}}/{\mathbb{Z}}$.
For later use, we define the period $T=\int_{S^1}\alpha=g(0;0,0)$.
Take a moving frame $\partial_t,\partial_\phi,\{ U_i,V_j \}_{i,j}$ on this neighborhood of $\gamma_1$, where $\{ U_i,V_j \}$ forms a unitary trivialization of $(\xi,d\alpha,J_\xi)$, meaning in particular that $J_\xi$ is the standard complex structure with respect to this trivialization.
Write $F=(1+\epsilon \bar f)g$, so $\alpha_\epsilon=F\cdot (d\phi+xdy-ydx)$.
With respect to the above moving frame, $J_\epsilon$ is the matrix
\begin{equation}
\label{eq:perturbed_J}
J_\epsilon=
\left(
\begin{array}{cccc}
0 & -F & 0 & 0 \\
\frac{1}{F} & 0 & 0 & 0 \\
\frac{V(F)}{F^2} & \frac{- U(F)}{F} & 0 & -\mathbbm{1} \\
\frac{-U(F)}{F^2} & \frac{- V(F)}{F} & \mathbbm{1} & 0
\end{array}
\right)
.
\end{equation}
Note that the first column is the Reeb vector field for $\alpha_\epsilon$, and that the restriction of $J_\epsilon$ to $\xi$ is equal to the restriction of the unperturbed $J_0$ to $\xi$.
\subsubsection{Asymptotic behavior and Sobolev spaces}
In Section~\ref{seq:MB_regularity} we checked that the operator $D_{u_0}$ is surjective.
As $D_{u_0}$ is a Fredholm operator, we get a bounded right inverse $Q_{u_0}$.
Note that the difference of the endomorphisms
$$
\Delta_\epsilon:=J_\epsilon-J_0
$$
is small by the above formula~\eqref{eq:perturbed_J} for $J_\epsilon$.
Writing out the Cauchy-Riemann equation shows that there is a constant $C(p)$ depending on $p$ such that
$$
\Vert \bar \partial_{J_\epsilon} u_0 \Vert_{L^p_\delta} \leq C(p) \epsilon
$$
provided $\epsilon$ is sufficiently small.
We now investigate the behavior of solutions of the Cauchy-Riemann equation for the perturbed problem.
We need this to ensure that we can use the Sobolev space $W^{1,p}_\delta({\mathbb{C}},u^* T({\mathbb{R}} \times P \times D^2 )\, ) \oplus V_{\gamma_1}$ also for the functional analytic setup in the non-degenerate case.
Near a vertical cylinder consider the coordinates
$$
Z=(Z^t,Z^\phi;Z_{P\times I})=(u^t-T\rho,u^\phi-\psi,u_{P\times I})\in {\mathbb{R}} \times S^1\times P\times I.
$$
With respect to these coordinates the Cauchy-Riemann equations for the cylindrical ends become
\[
0=u_\rho +J_{\epsilon} u_\psi=Z_\rho+T\partial_t+J_\epsilon Z_\psi +J_{\epsilon}\partial_\phi.
\]
We note that the second column of $J_\epsilon$ contains the gradient of $F$ in the direction of the contact structure with respect to the metric $\omega(\cdot,J_\xi\cdot )$, so the Cauchy-Riemann equations near the cylindrical ends reduce to
\[
Z_\rho+J_\epsilon Z_\psi+T(1-F/T) \partial_t
-\frac{1}{F} \grad f=0.
\]
Decompose the coordinates $Z_{P\times I}$ into a part tangential to the Morse-Bott manifold $S_T=P$, denoted by $z_t$, and a normal part corresponding to the $I$-factor, denoted by $z_n$.
Rewrite the above equation into the general form
\begin{equation}
\label{eq:CR_near_vertical_cylinder}
Z_\rho+J_\epsilon Z_\psi+s_\epsilon(z_t,z_n) z_n
-S_\epsilon \grad f=0.
\end{equation}
With this equation and the proof of \cite[Proposition~A.2]{Bourgeois:MB_symplectic_homology} we obtain
\begin{proposition}
Let $N_T$ be the Morse-Bott submanifold consisting of points on simple periodic Reeb orbits with period $T$, and write $S_T=N_T/S^1$ for its orbit space.
Choose a Morse function $f:S_T \to {\mathbb{R}}$ on the orbit space $S_T$ with a single local minimum at $[\gamma_1]$.
Then there are $\delta,\epsilon_0>0$ such that, for $\epsilon<\epsilon_0$, every $J_\epsilon$-holomorphic plane $u:{\mathbb{C}} \to {\mathbb{R}} \times Y$ asymptotic to $\gamma_1$ satisfies
\begin{eqnarray*}
t \circ u(\rho,\psi)-T \rho-t_0 & \in &
W^{1,p}_\delta({\mathbb{C}},{\mathbb{R}}) \\
\phi \circ u(\rho,\psi)- \psi-\phi_0 & \in &
W^{1,p}_\delta({\mathbb{C}},{\mathbb{R}}) \\
z_{t} \circ u(\rho,\psi)- Fl^{S_\epsilon \grad f}_\rho(p_0) & \in &
W^{1,p}_\delta({\mathbb{C}},{\mathbb{R}}^{2n-3}) \\
z_{n} \circ u(\rho,\psi) & \in &
W^{1,p}_\delta({\mathbb{C}},{\mathbb{R}}^{1})
\end{eqnarray*}
for some $t_0\in {\mathbb{R}}$, $\phi_0 \in S^1$ and $p_0\in {\mathbb{R}}^{2n-3}$.
\end{proposition}
\subsubsection{Bounded right inverse for the non-degenerate setup and applying the implicit function theorem to construct a solution to the perturbed problem}
If $\epsilon$ is sufficiently small, then $D_{u_\epsilon}$ is still surjective.
By possibly choosing $\epsilon$ even smaller we can ensure that
$$
\Vert (D_{u_\epsilon}-D_{u_0}) Q_{u_0} \Vert <\frac{1}{2}
$$
holds.
Then we can define a bounded right inverse $Q_{u_\epsilon}$ for $D_{u_\epsilon}$ by putting
\[
\begin{split}
Q_{u_\epsilon}&:=Q_{u_0}(D_{u_\epsilon} Q_{u_0})^{-1}\\
&=Q_{u_0}( (D_{u_\epsilon}-D_{u_0})Q_{u_0} +\id )^{-1}=Q_{u_0} \sum_{k=0}^\infty (-1)^k \left( (D_{u_\epsilon}-D_{u_0})Q_{u_0} \right)^k.
\end{split}
\]
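For completeness: $Q_{u_\epsilon}$ is a right inverse by construction, and the Neumann series also gives a uniform bound,
\[
D_{u_\epsilon}Q_{u_\epsilon}=\left( D_{u_\epsilon}Q_{u_0} \right)\left( D_{u_\epsilon}Q_{u_0} \right)^{-1}=\id,
\qquad
\Vert Q_{u_\epsilon} \Vert \leq \Vert Q_{u_0} \Vert \sum_{k=0}^\infty 2^{-k}=2\, \Vert Q_{u_0} \Vert.
\]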
We now apply \cite[Proposition A.3.4]{MS:J_curves} and a modification of \cite[Theorem 3.5.2]{MS:J_curves} to our problem.
This will yield a solution $u_\epsilon:{\mathbb{C}} \to {\mathbb{R}} \times Y$ to the perturbed equation
$$
\begin{cases}
\bar \partial_{J_\epsilon} u_\epsilon=0,& \\
u_\epsilon \text{ asymptotic to } \gamma_1.
\end{cases}
$$
Moreover, this argument also shows that $u_\epsilon$ is a rigid curve, so a solution of the Cauchy-Riemann equation is unique (up to translation) for curves in a neighborhood of $u_0$ as in \cite[Corollary~3.5.6]{MS:J_curves}.
\subsubsection{Excluding other solutions to the perturbed problem}
We conclude by arguing that there are no finite energy holomorphic planes asymptotic to $\gamma_1$ other than translations of $u_\epsilon$ if $\epsilon$ is sufficiently small.
To show this, we argue by contradiction. Take a sequence $\{ \epsilon_n \}_n$ converging to $0$, and assume that there is a sequence of $J_{\epsilon_n}$-holomorphic curves $u_{n}$ that are not vertical translations of the $u_{\epsilon_n}$ we constructed before.
Since $u_0$ is regular and rigid, these curves $u_n$ cannot lie in a sufficiently small $C^0$-neighborhood $U_{C^0}$ of any unperturbed solution by a modification of \cite[Corollary~3.5.6]{MS:J_curves}.
Note that there is a subsequence of the $u_{n}$ converging to a $J_0$-holomorphic curve on compact subsets of ${\mathbb{C}}$.
The energy bound for $u_n$, namely
$$
E(u_n)\leq \mathcal A_{\alpha_\epsilon}(\gamma_1)=\mathcal A(\gamma_1),
$$
implies that we obtain a $J_0$-holomorphic finite energy plane $u_\infty$ asymptotic to some periodic Reeb orbit $\gamma$.
By energy/action considerations, this Reeb orbit $\gamma$ must give a point $[\gamma]$ in the Morse-Bott orbit space $S_T$.
We claim that $u_\infty$ is asymptotic to $\gamma_1$.
To see why, note that the positive cylindrical end of a holomorphic curve follows the positive gradient flow of $f$ by Equation~\eqref{eq:CR_near_vertical_cylinder}.
Any solution not converging to $\gamma_1$ is therefore pushed away from $\gamma_1$.
Since the total building, of which $u_\infty$ is just one layer, converges to $\gamma_1$, we obtain a contradiction if $u_\infty$ is not asymptotic to $\gamma_1$.
Hence $u_\infty$ is a solution to the unperturbed Cauchy-Riemann equations with asymptote $\gamma_1$, and by our previous uniqueness argument it is a translation of $u_0$.
It follows that the $J_{\epsilon_n}$-holomorphic curves $u_n$ must lie in some $C^0$ neighborhood $U_{C^0}$ of a solution to the unperturbed problem for sufficiently large $n$.
This is a contradiction.
\section{Other holomorphic curves: the situation away from the binding}
\label{sec:other_curves}
In this section we consider more general rational holomorphic curves asymptotic to $\gamma_1$.
As in the previous section~\ref{sec:holomorphic_plane}, $Y^{2n-1}$ denotes the contact manifold $Y=\OB(W,\tau^{-1})$ and we will use the contact form from the perturbation described in that section to make orbits non-degenerate.
We will show that the conditions of Lemma~\ref{lemma:non-fillability} hold given the assumptions of Theorem~\ref{thm:result}. To do so, we use action, index and linking/homotopy arguments, combined with regularity arguments based on Dragnev's Theorem~\ref{thm:regularity_somewhere_injective}.
We start with a well-known observation about the linking number.
\begin{lemma}
\label{lemma:linking}
Let $\gamma^+,\gamma^-_1,\ldots,\gamma^-_m$ be periodic Reeb orbits that do not lie in the binding $P$ of a contact open book $Y$, and consider a holomorphic curve $u$ of genus $0$ with $[u]\in \mathcal M^A_0(\gamma^+;\gamma^-_1,\ldots,\gamma^-_m)$.
Then $\lk(P\times \{ 0 \},\gamma^+)\geq \sum_i \lk(P\times \{ 0 \},\gamma^-_i)$.
\end{lemma}
\begin{proof}
To see this, observe that the symplectization of $P$ is an almost complex submanifold of real codimension $2$ in ${\mathbb{R}} \times Y$.
Furthermore, the holomorphic curve $u$ can be thought of as a Seifert surface for the collection of oriented Reeb orbits $\Gamma$ that $u$ bounds, so $({\mathbb{R}} \times P)\cdot u=\lk(P\times \{ 0 \},\Gamma)$.
By positivity of intersection, $\lk(P\times \{ 0 \},\Gamma)\geq 0$, so the claim follows.
\end{proof}
\begin{lemma}
\label{lemma:no_other_curves_fractional_fibered}
Let $W$ be a Liouville domain admitting a right-handed fractional twist $\tau$ of power $\ell>1$.
Let $Y=\OB(W,\tau^{-1})$.
The only rigid holomorphic curve in the symplectization of $Y$ with positive puncture $\gamma_1$ is the finite energy plane $u_0$ from Lemma~\ref{lemma:existence_finite_energy_plane}.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma:existence_rigid_regular_plane} there is a unique rigid, finite energy holomorphic plane asymptotic to $\gamma_1$.
Therefore we only need to exclude holomorphic curves that also have negative punctures.
Suppose that $u$ is such a holomorphic curve.
By Lemma~\ref{lemma:orbits_near_binding} the action of a binding orbit is larger than the action of $\gamma_1$, so none of the negative punctures can be a binding orbit.
Denote the binding of the contact open book by $P$. Then $\lk(P\times \{ 0 \},\gamma_1)=1$, so by Lemma~\ref{lemma:linking}, the linking number of any negative puncture can be at most $1$.
Since all page orbits intersect the pages in a positively transverse fashion, linking number $0$ is not possible.
So we conclude that there can be only one negative puncture, at which $u$ is asymptotic to $\gamma$.
The monodromy is nontrivial and fixed point free on the content of the page, since it corresponds to a non-trivial deck-transformation by the assumption $\ell>1$.
Therefore any periodic orbit in the content of the page must have linking number at least $2$.
It follows that all candidates for $\gamma$ lie in the fattened binding and in the margins of the pages.
By Lemma~\ref{lemma:orbits_near_binding} we have $\lk(P\times \{ 0 \},\gamma)\geq 2$ if $r<r_1$.
The periodic orbit $\gamma$ must make an integer number of turns in $P$-direction, and the linking condition $\lk(P\times \{ 0 \},\gamma)=1$ tells us then that $f_i\circ Inv(r)$ must be an integer multiple of $2\pi$.
This only happens at $r_1$, so $\gamma$ satisfies $r=r_1$.
Since $\gamma_1$ corresponds to the minimum of a Morse function as described in Section~\ref{sec:MB_non_deg}, it has minimal action among all Reeb orbits corresponding to that orbit space.
Since $\mathcal A(\gamma)>\mathcal A(\gamma_1)$ is impossible for a holomorphic cylinder, we have $\gamma=\gamma_1$.
It follows that $u$ is a vertical cylinder, which is not a rigid curve.
We conclude that the claim holds.
\end{proof}
The next case combines a linking and index argument.
\begin{lemma}
\label{lemma:fibered_twist_neg_c_single_plane}
Let $W^{2n-2}$ be a Liouville domain with prequantization boundary $(P,\lambda_P)$ over $(Q,\omega)$, where $\omega$ is a primitive integral symplectic form, and $\pi_1(Q)=0$.
Assume furthermore that
\begin{itemize}
\item $c_1(W)=0$.
\item $c_1(Q)=c[\omega]$.
\end{itemize}
Let $\tau$ denote a right-handed fibered Dehn twist on $W$.
Define $Y=\OB(W,\tau^{-1})$.
Denote the maximal index of a Morse function on $W$ that is convex near the boundary by $\max \ind$.
If $c\leq n-\frac{\max \ind+3 }{2}$, then the only rigid holomorphic curve in the symplectization of $Y$ with positive puncture $\gamma_1$ is the finite energy plane constructed in the proof of Lemma~\ref{lemma:existence_finite_energy_plane}.
\end{lemma}
\begin{proof}
Let $u:\Sigma \to {\mathbb{R}} \times Y$ be any holomorphic curve with a single positive puncture, at which $u$ is asymptotic to $\gamma_1$.
As in the proof of the previous lemma, we use a linking argument to show that $u$ can only have one negative puncture, at which $u$ is asymptotic to $\gamma$.
Lemma~\ref{lemma:orbits_near_binding} excludes any orbit $\gamma$ that lies in a fattened neighborhood of the binding with $r<r_1$.
We invoke Lemma~\ref{lemma:index_control_left_twist} to conclude that $\gamma$ lies either in the content of the pages or $\gamma$ lies in $S_{1,0}$.
The case that $[\gamma]\in S_{1,0}$ was already excluded in the proof of the previous lemma.
We conclude that $\gamma$ lies in the content of the pages, so it corresponds to a critical point $a$ of $f_{convex}$ by Lemma~\ref{lemma:index_control_left_twist}.
Since $\gamma_1$ is simple, the holomorphic curve $u$ must be somewhere injective.
By Dragnev's theorem, we can perturb $J$ to make all somewhere injective curves regular.
Hence the moduli space $\mathcal M_0(\gamma_1;\gamma)$ is a smooth orbifold of dimension determined by the Fredholm index, which we compute now.
Denote the contact structure on $Y$ by $\xi$.
We first show that $c_1(\xi)=0$.
For this we apply the Mayer-Vietoris sequence in cohomology,
\[
\begin{split}
0 \longrightarrow H^2(Y;{\mathbb{R}}) & \longrightarrow H^2(P\times D^2;{\mathbb{R}}) \oplus H^2(W\times S^1;{\mathbb{R}}) \\
c_1(\xi) & \longmapsto (i_{P\times D^2}^* c_1(\xi),i_{W\times S^1}^* c_1(\xi) \, )=(0,0).
\end{split}
\]
It follows that the Conley-Zehnder/Maslov indices do not depend on the homology class of a curve as explained in Remark~\ref{rem:CZ_via_disk}, so by Lemma~\ref{lemma:index_control_left_twist} and Formula~\eqref{eq:dimension_formula} we find the following dimension for the moduli space.
\[
\begin{split}
\dim \mathcal M_0(\gamma_1;\gamma)&=\bar \mu(\gamma_1)-\bar \mu(\gamma)=
1-(-2c+2n-2-\ind_a f_{convex} )\\
&=1+2c-2n+2+\ind_a f_{convex}.
\end{split}
\]
If $c$ satisfies the above assumptions, then this dimension is non-positive and regularity directly implies that this moduli space is empty.
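Indeed, substituting $2c\leq 2n-\max \ind-3$ and $\ind_a f_{convex}\leq \max \ind$ into the dimension formula gives
\[
\dim \mathcal M_0(\gamma_1;\gamma)=3+2c-2n+\ind_a f_{convex}\leq 3+(2n-\max \ind-3)-2n+\max \ind=0.
\]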
\end{proof}
For the following lemma, a purely homotopical argument suffices to exclude other curves.
\begin{lemma}
\label{lemma:only_plane_with_pi1_cond}
Let $W^{2n-2}$ be a Liouville domain with prequantization boundary $(P,\lambda_P)$ over $(Q,k\omega)$, where $\omega$ is a primitive integral symplectic form, and $k\in {\mathbb{Z}}_{>1}$.
Let $\tau$ denote a right-handed fibered Dehn twist on $W$, and define $Y=\OB(W,\tau^{-1})$.
If the inclusion map $i:P=\partial W\to W$ induces an injection on $\pi_1$, then the only rigid holomorphic curve in the symplectization of $Y$ with positive puncture $\gamma_1$ is the finite energy plane constructed in the proof of Lemma~\ref{lemma:existence_finite_energy_plane}.
\end{lemma}
\begin{proof}
The same arguments as in the proofs of the earlier lemmas show that we only have the following possibilities for a holomorphic curve $u$ whose only positive puncture is asymptotic to $\gamma_1$:
\begin{enumerate}
\item The finite energy plane constructed in the proof of Lemma~\ref{lemma:existence_finite_energy_plane}.
\item A holomorphic cylinder $u$ representing an element in some moduli space ${\mathcal M}_0(\gamma_1;\gamma)$, where $\lk(P\times \{ 0 \},\gamma)=1$.
\end{enumerate}
Suppose that $u$ is a holomorphic cylinder as in the second case.
Since $k>1$ and $i$ induces an injection on $\pi_1$, Lemma~\ref{lemma:index_control_2} applies.
As we have already seen in the proof of the previous two lemmas, the orbit $\gamma$ at the negative puncture of $u$ cannot lie in the orbit space $S_{1,0}$.
This leaves the second case from Lemma~\ref{lemma:index_control_2}, which tells us that the free homotopy classes $[\gamma_1]$ and $[\gamma]$ in $Y-P\times \{ 0\}$ are not equal.
Since the cylinder $u$ cannot intersect the binding, the projection of $u$ to $Y-P\times \{ 0\}$ provides a homotopy from $\gamma_1$ to $\gamma$, which is a contradiction.
We conclude that $u$ cannot be a non-trivial holomorphic cylinder, so any rigid $u$ is a translation of $u_0$.
\end{proof}
Finally, we consider the case analogous to that of Lemma~\ref{lemma:fibered_twist_neg_c_single_plane}, but with $c>0$.
Here we use an index argument, which depends on more delicate details, and only works for the following specific, but important examples.
\begin{lemma}
\label{lemma:TKPn}
Let $W=T^*\H \P^m$ or $W=T^*Ca\P^2$.
These manifolds admit right-handed fibered Dehn twists, which we denote by $\tau$.
Define $Y=\OB(W,\tau^{-1})$.
Then the only rigid holomorphic curve in the symplectization of $Y$ with positive puncture $\gamma_1$ is the finite energy plane constructed in the proof of Lemma~\ref{lemma:existence_finite_energy_plane}.
\end{lemma}
\begin{proof}
Let $u$ be a holomorphic curve of genus $0$ with $\gamma_1$ as its only positive puncture.
The same linking argument we used earlier shows that $u$ can have at most one negative puncture, at which it is asymptotic to $\gamma$.
As before, we can exclude the case that $\gamma$ lies in a fattened neighborhood of the boundary.
As in Lemma~\ref{lemma:fibered_twist_neg_c_single_plane} we will use an index argument.
Denote the contact structure on $Y$ by $\xi$.
Then we can verify that $c_1(\xi)=0$ by using the same method as in the proof of Lemma~\ref{lemma:fibered_twist_neg_c_single_plane}; the new ingredient is $c_1(T^*M,d\lambda_{can})=0$.
By Dragnev's theorem and Lemma~\ref{lemma:index_control_left_twist}, we see that the moduli space $\mathcal M_0(\gamma_1;\gamma)$ is a smooth orbifold of dimension determined by the Fredholm index, and with Formula~\eqref{eq:dimension_formula} we find
$$
\dim \mathcal M_0(\gamma_1;\gamma)=\bar \mu(\gamma_1)-\bar \mu(\gamma)= 2c+1-\dim W+\ind_a f_{convex}.
$$
For $T^*\H \P^m$, we have $c=2m+1$ by \cite[Proposition 2.10]{Frauenfelder:Volume_growth_Dehn}.
Furthermore, $T^*\H \P^m$ admits a plurisubharmonic Morse function with indices $0,4,\ldots,4m$.
It follows that
$$
\dim \mathcal M_0(\gamma_1;\gamma)=2(2m+1)+1-8m+4r=3-4m+4r,
$$
with $r=0,\ldots,m$.
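Spelled out, with $\ind_a f_{convex}=4r$ we have
$$
\dim \mathcal M_0(\gamma_1;\gamma)=3-4m+4r
\begin{cases}
=3, & r=m, \\
\leq -1, & r\leq m-1.
\end{cases}
$$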
By regularity, it follows that the moduli spaces $\mathcal M_0(\gamma_1;\gamma)$ are empty or satisfy $\dim \mathcal M_0(\gamma_1;\gamma)>1$.
It follows that holomorphic cylinders with $\gamma_1$ at their positive puncture cannot be rigid.
For $T^*Ca\P^2$, this argument works as well. Use that $c=11$, and that there is a plurisubharmonic Morse function with indices $0,8,16$.
Then
$$
\dim \mathcal M_0(\gamma_1;\gamma)=2c+1-32+8r=8r -9,
$$
for $r=0,\ldots,2$, so the same argument works.
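Explicitly, with $\ind_a f_{convex}=8r$ for $r=0,1,2$,
$$
\dim \mathcal M_0(\gamma_1;\gamma)=8r-9\in \{ -9,-1,7 \},
$$
so every non-empty moduli space again has dimension greater than $1$.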
\end{proof}
\section{Proof of the main theorem and discussion}
\label{sec:wrapup}
All that is left is combining the pieces we set up earlier.
\begin{proof}[Proof of Theorem~\ref{thm:result}]
We apply Lemma~\ref{lemma:non-fillability} to the contact manifold $Y=\OB(W,\tau^{-1})$.
Let us briefly check the conditions.
By Lemma~\ref{lemma:existence_rigid_regular_plane} there is an adjusted almost complex structure on ${\mathbb{R}} \times Y$ and a rigid finite energy holomorphic plane $u_0$ that is asymptotic to a non-degenerate, simple periodic Reeb orbit $\gamma_1$.
In addition, this plane is unique up to translation and it is Fredholm regular, so the first condition holds.
The second condition of Lemma~\ref{lemma:non-fillability} follows:
\begin{itemize}
\item from Lemma~\ref{lemma:no_other_curves_fractional_fibered} for a fractional twist.
\item from Lemma~\ref{lemma:TKPn} for a fibered twist on $W=T^* \H \P^m$ or $W=T^*Ca \P^2$ and the following observation.
In both cases, the necessary plurisubharmonic Morse function on $W$ comes from a Morse function $f_0$ on the zero-section, and by adding a constant, we can ensure $\max_x(f_0(x) \, )/\min_x(f_0(x) \, )<2$.
This implies the action condition in Lemma~\ref{lemma:non-fillability}.
\item from Lemma~\ref{lemma:only_plane_with_pi1_cond} if $k>1$ and the inclusion $P\to W$ induces an injection on $\pi_1$.
\item from Lemma~\ref{lemma:fibered_twist_neg_c_single_plane} in the remaining case.
\end{itemize}
We conclude that the conditions of Lemma~\ref{lemma:non-fillability} hold for the stable Hamiltonian structure $(\lambda,\Omega_{sH}=d\lambda)$.
This proves the theorem.
\end{proof}
Corollary~\ref{cor:weinstein_conj}, which asserts that the Weinstein conjecture holds for these manifolds, follows from the last statement in Lemma~\ref{lemma:non-fillability}.
We also want to mention another corollary, namely that the negative stabilization is a special case of our construction.
\begin{corollary}[\cite{BvK,Massot:weak_strong_fillability}]
The negative stabilization of the standard contact sphere $(S^{2n+1},\xi_0)$ admits no weak semi-positive symplectic filling.
\end{corollary}
We already described the $S^1$-invariant contact structure on the negative stabilization in Proposition~\ref{prop:negative_stabilization_as_invariant_contact}.
We directly obtain the non-existence of a convex semi-positive symplectic filling from Theorem~\ref{thm:result}.
More work is required to exclude weak fillings: the stable Hamiltonian structure has to be chosen with care. We refer to \cite[Proof of Theorem~3.1]{Massot:weak_strong_fillability}.
\subsection{Negative powers of fractional twists}
To deal with powers of fractional twists we use cobordism techniques due to Avdek.
The following proposition is a special case of \cite[Theorem 1.9]{Avdek:Liouville}.
\begin{proposition}[Avdek]
\label{prop:cobordism_open_book}
Let $W$ be a Weinstein domain, and suppose that $\psi_1$ and $\psi_2$ are symplectomorphisms of $W$ with compact support.
Then there is a Stein cobordism from the disjoint union $\OB(W,\psi_1)\coprod \OB(W,\psi_2)$ at the concave end to $\OB(W,\psi_1 \circ \psi_2)$ at the convex end.
\end{proposition}
The following lemma is essentially contained in \cite{NP:resolving_orbi_singularities}.
Furthermore, that paper also deals with prequantization bundles over symplectic orbifolds.
For us the following suffices.
\begin{lemma}
\label{lemma:convex_concave_filling}
Let $P$ be a smooth prequantization bundle over a symplectic manifold $(Q,k\omega)$ whose associated disk-bundle is a convex or a concave filling. Then $P$ admits both a convex and a concave filling.
\end{lemma}
\begin{proof}
Perform a symplectic cut on $P\times_{S^1}D^2$.
We obtain a new symplectic manifold $E=Q\tilde \times S^2$, which is an $S^2$-bundle over $Q$. This bundle $E$ contains the smaller copy of the original disk bundle $P\times_{S^1}D^2$ with its given symplectic form as a subset.
This smaller copy still has concave or convex boundary, and the complement of this subset forms then a convex (if the original bundle $P\times_{S^1}D^2$ was concave) or concave (if $P\times_{S^1}D^2$ was convex) filling.
\end{proof}
We will now give a criterion to test whether the convex fillings obtained this way are semi-positive.
For this, we first set up some notation.
Let $W^{2n-2}$ be a Liouville domain with boundary $P$, where $\pi:P\to (Q,k\omega)$ is a prequantization bundle over a symplectic manifold, with $\omega$ a primitive symplectic form and $k\in {\mathbb{Z}}_{>1}$.
Suppose that $\tilde P\to P$ is an $\ell$-fold cover of the same form as in Equation~\eqref{eq:cover_BW_bdl}, and assume that $\tilde W$ is an adapted $\ell$-fold cover of $W$.
Denote the right-handed fractional twist of power $\ell$ by $\tilde \tau$.
Consider the contact open book $\tilde Y_+=\OB(\tilde W,\tilde \tau)$.
By Lemma~\ref{lemma:decomposition} $\tilde Y_+$ is contactomorphic to a prequantization bundle over $M=P\times_{S^1,+} D^2 \cup_\partial W$.
Here $(p,z)\in P\times D^2 \sim (pg,gz)\in P\times D^2$ is the equivalence relation for the associated bundle $P\times_{S^1,+} D^2$.
We rescale the symplectic form on $M$ by a positive number in order to obtain a primitive symplectic form $\omega_M$.
\begin{lemma}
\label{lemma:fillability_BW}
The above contact manifold $\tilde Y_{+}$ is convex symplectically fillable by the associated disk bundle $\bar L:=\tilde Y_{+}\times_{S^1,-}D^2$, where $(p,z)\sim_{S^1,-} (pg,g^{-1}z)$. Moreover:
\begin{itemize}
\item if $n=2,3$, then $\bar L$ is trivially semi-positive.
\item if $n\geq 4$, $W$ is a Weinstein manifold, and $c_1(Q)=c[\omega]$, then
\[
\begin{split}
c_1(\bar L)&=-\frac{k}{\ell} \pi_{\bar L}^* [\omega_M]\\
c_1(T\bar L)&=(c+k-\frac{k}{\ell})\pi_{\bar L}^*[\omega_M],
\end{split}
\]
where $\pi_{\bar L}:\bar L\to M$ is the natural projection.
In particular $\bar L$, as a symplectic manifold, is semi-positive if $c+k-\frac{k}{\ell}\geq 0$ or if $c+k-\frac{k}{\ell}<3-n$.
\end{itemize}
\end{lemma}
\begin{proof}
We already know that $\tilde Y_{+}$ admits a concave filling by a disk bundle $L=\tilde Y_{+}\times_{S^1,+}D^2$.
Here the equivalence relation for defining $L$ is $(p,z)\sim_+(pg,gz)$, and the symplectic form is $\omega_{Biran}=-d\left( (1-r^2)\theta+r^2d\phi\right)$ as defined by Biran, \cite{Biran:Lagrangian_barrier}.
We give the line bundle associated with $L$ a complex structure, which we define by $j\cdot [p,z]_{+}=[p,iz]_+$.
By Lemma~\ref{lemma:convex_concave_filling} we obtain a convex filling $\bar L$, which can be identified with the disk-bundle we defined in the claim of the lemma.
Furthermore, $\bar L$ inherits a complex structure from the symplectic cutting construction, which we denote by $\bar j$.
Note that $(\bar L,\bar j)$ is dual to $(L,j)$.
We now check the statement concerning the case $n\geq 4$.
We need to compute $c_1(T\bar L)$. Since $\bar L$ can be equipped with a split complex structure, $c_1(T\bar L)=\pi_{\bar L}^*c_1(\bar L)+\pi_{\bar L}^*c_1(TM)$, where $\pi_{\bar L}:\bar L \to M$ is the natural projection.
As $(\bar L,\bar j)$ is dual to $(L,j)$, we see that $c_1(\bar L)=-c_1(L)$.
By Lemma~\ref{lemma:decomposition} we have the decomposition $M=P\times_{S^1}D^2 \cup_{\partial} W$.
The sequence of the pair gives
$$
H^1(P\times_{S^1,+}D^2) \longrightarrow H^2(M,P\times_{S^1,+}D^2)
\longrightarrow H^2(M) \stackrel{i^*}{\longrightarrow} H^2(P\times_{S^1,+}D^2 ) \longrightarrow H^3(M,P\times_{S^1,+}D^2),
$$
so by excision, the assumption that $W$ is Weinstein, and the dimension restriction, we see that $H^2(M,P\times_{S^1,+}D^2)\cong H^2(W,\partial W)\cong H^{2n-4}(W)=0$, and $i^*$ is an injection.
The same ingredients imply also that $H^3(M,P\times_{S^1,+}D^2)$ has no torsion.
This means that $[\pi^* \omega]$ lies in the image of $i^*$, so we conclude that $[\omega_M]=(i^*)^{-1}[\pi^* \omega]$ as these elements are primitive.
It is therefore enough to compute $i^*c_1(L)$ and $i^*c_1(TM)$.
By the assumption that $P$ is a prequantization bundle over $(Q,k\omega)$, we see that as a line bundle over $Q$, we have $c_1(P\times_{S^1,+}D^2)=k[\omega]$.
Hence we compute
\[
i^*c_1(TM)=c_1(T(P\times_{S^1,+}D^2)\,)=\pi^*c_1(P\times_{S^1,+}D^2)+\pi^*c_1(TQ)=(c+k)\pi^*[\omega].
\]
Since $\tilde P$ is the $\ell$-fold cover of $P$, we have
\[
i^*c_1(L)=\frac{1}{\ell}c_1(P\times_{S^1,+}D^2)=\frac{k}{\ell}\pi^*[\omega].
\]
The claim about the first Chern class follows, and the statement about semi-positivity follows directly from the definition.
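To spell out the last step, recall that a symplectic manifold $(X^{2n},\omega)$ is called semi-positive if every class $A\in \pi_2(X)$ with $\omega(A)>0$ and $c_1(TX)(A)\geq 3-n$ satisfies $c_1(TX)(A)\geq 0$. Here is a sketch, under the assumption that spheres in $\bar L$ pair with the symplectic class as they do with $\pi_{\bar L}^*[\omega_M]$ (reasonable, as $\bar L$ retracts onto $M$), and using that $[\omega_M]$ is integral, so $\omega_M(A)\geq 1$ whenever positive:
\[
c_1(T\bar L)(A)=\left(c+k-\frac{k}{\ell}\right)\omega_M(A)
\begin{cases}
\geq 0, & \text{if } c+k-\frac{k}{\ell}\geq 0,\\
\leq c+k-\frac{k}{\ell}<3-n, & \text{if } c+k-\frac{k}{\ell}<3-n,
\end{cases}
\]
so in either case no sphere violates the defining condition.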
\end{proof}
\begin{corollary}
\label{cor:no_convex_filling}
Let $W^{2n-2}$ be a Weinstein domain admitting a right-handed fractional twist $\tau$ such that $\OB(W,\tau^{-1})$ is not convex semi-positively fillable.
Suppose that the prequantization bundle $\OB(W,\tau)$ is convex semi-positively fillable.
Then for all positive integers $N$, the contact manifold $\OB(W,\tau^{-(N+1)})$ is not convex semi-positively fillable.
\end{corollary}
\begin{proof}
We argue by contradiction to prove the assertion.
Suppose that $\OB(W,\tau^{-(N+1)})$ is convex semi-positively fillable, with filling $F_{-(N+1)}$.
By assumption $\OB(W,\tau)$ is convex semi-positively symplectically fillable, and we call this filling $F_1$.
\begin{figure}[htp]
\def\svgwidth{0.75\textwidth}%
\begingroup\endlinechar=-1
\resizebox{0.75\textwidth}{!}{%
\input{cobordism2.pdf_tex}%
}\endgroup
\caption{Capping off the cobordism}
\label{fig:cobordism}
\end{figure}
By Proposition~\ref{prop:cobordism_open_book}, there is an exact symplectic cobordism $(C_{Avdek},d\lambda_{Avdek})$ whose convex end is $\OB(W,\tau^{-1})$, and whose concave ends are $N$ copies of $\OB(W,\tau)$ and a single copy of $\OB(W,\tau^{-(N+1)})$.
Cap off the concave ends using the convex fillings we obtained, and we find a convex symplectic filling for $\OB(W,\tau^{-1})$.
We can make the almost complex structure standard in the sense that in the gluing regions involving the $N$ copies of $\OB(W,\tau)$ and $\OB(W,\tau^{-(N+1)})$ the almost complex structure sends $\partial_t$ to the Reeb fields of the corresponding ends.
Semi-positivity does not, in general, hold in this glued cobordism, but we shall argue that we still have control over sphere bubbles.
Indeed, let $u$ be a parametrized holomorphic sphere that is not completely contained in one of the $F_1$'s or in $F_{-(N+1)}$.
Then we find a subset $\Sigma\subset S^2$ such that $u(\Sigma)\subset C_{Avdek}$, and by Stokes' theorem we compute the energy as
$$
\int_{\Sigma}u^*d \lambda_{Avdek}=\int_{\partial \Sigma} u^*\lambda_{Avdek}.
$$
For $J$-holomorphic curves, the energy must be non-negative.
However, the $\partial_t$-component of $(Tu) \nu$ (here $\nu$ is an outward normal to $\Sigma$) is negative, so by our choice of almost complex structure we find $\int_{\partial \Sigma} u^*\lambda_{Avdek}<0$.
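In more detail, here is a sketch of the sign computation, using that in the gluing regions the almost complex structure sends $\partial_t$ to the Reeb field, which implies $\lambda_{Avdek}\circ J=dt$ along the relevant ends: for $w=j\nu$ a positively oriented tangent vector to $\partial \Sigma$,
\[
u^*\lambda_{Avdek}(w)=\lambda_{Avdek}\left( (Tu)\, j\nu \right)=\lambda_{Avdek}\left( J\,(Tu)\,\nu \right)=dt\left( (Tu)\,\nu \right)<0 .
\]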
We conclude that any sphere bubble must be contained in one of the $F_1$'s or in $F_{-(N+1)}$.
Since these symplectic manifolds are semi-positive by assumption, we conclude that sphere bubbles cannot occur.
To conclude, in the filling for $\OB(W,\tau^{-1})$ we constructed above, sphere bubbles cannot occur, so the argument in the proof of Lemma~\ref{lemma:non-fillability} will give a contradiction to the existence of a convex, semi-positive filling of $\OB(W,\tau^{-(N+1)})$.
\end{proof}
Note that $\OB(W,\tau^N)$ is a prequantization bundle over a symplectic orbifold by arguments in \cite{CDvK:right-handed}.
By \cite{NP:resolving_orbi_singularities}, this manifold is convex symplectically fillable, so instead of attaching $N$ convex fillings for $\OB(W,\tau)$, we could also have used a single convex filling for $\OB(W,\tau^N)$.
However, if one does so, sphere bubbles are more difficult to control.
In this case, we can also find a convex filling with the cobordism techniques of Avdek.
\begin{lemma}
Let $W$ be a Weinstein domain admitting a right-handed fractional twist $\tau$, and suppose that $\OB(W,\tau)$ admits a convex, semi-positive symplectic filling.
Then for all $N\in {\mathbb{Z}}_{>0}$, the contact manifold $\OB(W,\tau^N)$ admits a convex, semi-positive symplectic filling.
\end{lemma}
\begin{proof}
By Avdek's Proposition~\ref{prop:cobordism_open_book}, there is an exact symplectic cobordism with $\OB(W,\tau^N)$ at the convex end and $N$ copies of $\OB(W,\tau)$ at the concave ends.
Cap off each concave end with a convex semi-positive filling for $\OB(W,\tau)$.
\end{proof}
We collect the above results as Theorem~\ref{thm:symplectic_isotopy} mentioned in the introduction.
\begin{theorem}
\label{thm:powers_of_twists}
Let $W$ be a Weinstein domain admitting a right-handed fractional twist $\tau$ of power $\ell$, and suppose that $\OB(W,\tau^{-1})$ has no convex, semi-positive filling.
Assume that $Y_+=\OB(W,\tau)$ admits a convex, semi-positive symplectic filling.
Then for all $N \in {\mathbb{Z}}_{>0}$, the contact manifold $\OB(W,\tau^{-N})$ is not convex, semi-positively fillable.
In particular, $\tau^N$ is not symplectically isotopic to the identity relative to the boundary.
\end{theorem}
The first claim follows directly from Corollary~\ref{cor:no_convex_filling}.
For the last claim, observe that if $\tau^{-N}$ is symplectically isotopic to the identity relative to the boundary, then $\OB(W,\tau^{-N})$ is contactomorphic to $\OB(W,\id)$, which is fillable by the Weinstein domain $W\times D^2$.
This gives a contradiction, so the assertion holds.
\subsection{Algebraic overtwistedness}
\label{sec:HC=0}
The discussion in this section will be of a conjectural nature, since we will assume that contact homology algebra exists, and has the expected properties. For a more detailed discussion of this point of view, see \cite{BvK}.
With this in mind, we claim that our construction also shows that contact homology algebra will vanish if the fillability obstructions listed in Theorem~\ref{thm:result} hold.
Let $A_*(Y,\alpha;{\mathbb{Q}}[H_2(Y)])$ denote the contact homology algebra chain complex with full coefficients in $H_2(Y;{\mathbb{Z}})$.
We claim that $\gamma_1\in A_*(Y,\alpha;{\mathbb{Q}}[H_2(Y)])$ satisfies $\partial \gamma_1 =\pm 1$.
Indeed, the linking argument from the proof of Lemma~\ref{lemma:no_other_curves_fractional_fibered} shows that the holomorphic curve count needed for $\partial \gamma_1$ only involves cylinders and planes.
By Lemma~\ref{lemma:existence_rigid_regular_plane}, there is a unique plane, so $\partial \gamma_1 =\pm 1 +\sum_i n_i \gamma^-_i$.
Lemmas \ref{lemma:no_other_curves_fractional_fibered}, \ref{lemma:fibered_twist_neg_c_single_plane}, \ref{lemma:only_plane_with_pi1_cond} and \ref{lemma:TKPn} show that there are no rigid holomorphic cylinders.
Note here that the contact manifolds in the lemmas that rely on index arguments, namely Lemmas~\ref{lemma:fibered_twist_neg_c_single_plane} and \ref{lemma:TKPn}, always have $c_1(\xi)=0$.
\subsubsection{Left-handed stabilizations}
We also recover the main result of \cite{BvK}, which asserts that certain left-handed stabilizations have vanishing contact homology, by the following argument.
Let $Y_{-}=\OB(T^*S^n,\tau^{-1}_{Dehn})$, and let $Y'$ be another closed contact manifold of the same dimension.
By the condition~\eqref{eq:condition_C} we imposed on the function $U_0$ and Lemma~\ref{lemma:orbits_near_binding}, the orbit $\gamma_1$ in $Y_-$ has minimal action.
By rescaling the contact form in $Y'$, we can assume that all periodic Reeb orbits in $Y'$ have much larger action than that of $\gamma_1$.
Now take the connected sum $Y_- \# Y'$ along suitable Darboux balls using the Weinstein model.
New orbits are created, but by shrinking the connecting tube of the Weinstein model most of the new periodic Reeb orbits have large action, except possibly for the periodic orbits contained in the connecting tube of the connected sum; these orbits are also known as Lyapunov orbits.
Since these Lyapunov orbits have reduced index $2n-3,2n-1,\ldots$ and $\gamma_1$ has reduced index equal to $1$, it follows that there is no holomorphic curve from $\gamma_1$ to any combination of these Lyapunov orbits. Since all other orbits have larger action than $\gamma_1$, it follows that $\partial \gamma_1=\pm 1$ even after the connected sum.
However, contact homology is still a work in progress, mainly due to transversality problems involving holomorphic curves that are not simple, and we will not pursue this argument.
\subsection{Generalizations and questions}
There are some obvious generalizations which we haven't worked out.
For example, consider a Weinstein domain for which the Reeb flow on the boundary is periodic, say $\partial W=P$.
If we mod out $P$ by its Reeb action, we obtain a symplectic orbifold.
We can still define fractional twists, and many arguments will still go through.
See for instance \cite[Section 4.1.1]{vanKoert:openbooks5} for a variation of the fractional twist in the case that the boundary of $W$ is a Brieskorn manifold, which is a prequantization bundle over a symplectic orbifold.
\subsubsection{$T^*{\mathbb{C}} P^n$ as a page}
The index argument we relied on for $T^*\H \P^m$ and $T^*Ca\P^2$ does not work in this case.
However, on the odd complex projective spaces there is a free involution, namely
\[
\begin{split}
\sigma: {\mathbb{C}} \P^{2n+1} & \longrightarrow {\mathbb{C}} \P^{2n+1} \\
[z_0:z_1:\ldots:z_{2n}:z_{2n+1}] & \longmapsto [-\bar z_1:\bar z_0:\ldots:-\bar z_{2n+1}:\bar z_{2n}].
\end{split}
\]
The Fubini-Study metric on complex projective space has the property that all geodesics are periodic with the same period.
It follows that $M:={\mathbb{C}} \P^{2n+1}/\sigma$ admits a metric for which all geodesics are periodic.
Unfortunately, the periods are not all the same unless we are looking at $T^*{\mathbb{C}} \P^1/\sigma\cong T^*{\mathbb{R}} \P^2$, so this involution does not help us in general.
In the latter case we can find a fractional twist $\tilde \tau$ on $T^*{\mathbb{C}} P^{1}$ of power $2$, which is the usual Dehn twist on $T^*S^2$.
By the first part of Theorem~\ref{thm:result}, it follows that $\OB(T^*{\mathbb{C}} P^{1},\tilde \tau^{-1})$ is not convex semi-positively fillable.
On the other hand, the contact open book $Y_{+}=\OB(T^*{\mathbb{C}} P^{1},\tilde \tau)\cong S^5$ is Liouville fillable.
Theorem~\ref{thm:powers_of_twists} then tells us that $\OB(T^*{\mathbb{C}} \P^{1},\tilde \tau^{-N})$ is not convex semi-positively fillable.
\begin{question}[AIM Workshop]
Let $\tau$ denote a right-handed fibered Dehn twist on $T^*{\mathbb{C}} \P^n$.
Is $\OB(T^*{\mathbb{C}} P^n,\tau^{-1})$ non-fillable for $n\geq 2$?
\end{question}
\section{Introduction}\label{sec:intro}
The partially ordered set $(S, \leq)$ is called a \emph{meet-semilattice} if every two elements $x$ and $y$ of $S$ have a greatest lower bound $x \wedge y \in S$. Equivalently, for a binary operation $\wedge$ on $S$, the structure $(S, \wedge)$ is a \emph{meet-semilattice} if $\wedge$ is associative, commutative, and idempotent (i.e., a commutative idempotent semigroup). A meet-semilattice is called \emph{bounded} if it has a smallest and a largest element; we denote the smallest element by $0$ and the largest element by $1$. A \emph{join-semilattice} is defined dually, and a \emph{bounded semilattice} will be a meet- or join-semilattice with both a $0$ and a $1$.
Given a bounded semilattice $(S, \circ, 0, 1)$, define a graph $G(S)$ as follows:
\begin{enumerate}
\item The set of vertices $V$ of $G(S)$ is the set of all elements of $S$ except $0$ and $1$.
\item The vertices $x,y \in V$ are adjacent, i.e. $\{x,y\}$ belongs to the edges $E$ of $G(S)$, if $x \neq y$ and $x\circ y \neq 0$.
\end{enumerate}
We need two more definitions. Let $(S, \circ, 0, 1)$ be a bounded semilattice; then an \emph{atom} is a minimal element of $S-\{0,1\}$, and $S$ will be called \emph{Artinian} if every decreasing chain of elements becomes stationary.
In this paper, we initiate the study of the graph $G(S)$ for general bounded semilattices $S$, and, for example, we prove the following new results:
\medskip
\begin{itemize}\itemindent=5em
\item[{\bf Theorem \ref{K2K12}}:] If $G(S)$ is a path, then either $G(S)=K_2$ or $G(S)=K_{1,2}$.
\item[{\bf Corollary \ref{Artinianprop}}:] If $S$ is Artinian with more than two elements, then $G(S)$ is a complete graph if and only if $S$ has exactly one atom.
\item[{\bf Theorem \ref{Euler2}}:] If $S$ is a product of three or more finite chains, then $G(S)$ is Eulerian if and only if either the length of every chain is even or all the chains have length one.
\item[{\bf Theorem \ref{TreeStar}}:] If $G(S)$ is a tree, then it is a star graph.
\item[{\bf Proposition \ref{MinDiam}}:] If $S$ has more than three elements and exactly one atom, then $G(S)$ is a complete graph.
\item[{\bf Theorem \ref{girth}}:] If $G(S)$ contains a cycle, then its girth is equal to $3$.
\end{itemize}
\medskip
Intersection graph theory (IGT, for short) is a classical topic in the theory of graphs \cite{ErdosGoodmanPosa1966}. For a good introduction to IGT, one can refer to the book \cite{McKeeMcMorris1999}, where applications of IGT in different fields of science such as biology, psychology, and computing are discussed in detail \cite[\S 2 and \S 3]{McKeeMcMorris1999}. Although all graphs are intersection graphs \cite{Szpilrajn1945}, some classes of intersection graphs are of special interest. For example, the intersection graphs of some classes of geometrical objects, e.g. closed intervals of the real line (see \cite[p. 1]{Cohen1977}, \cite{Cohen1978} and \cite[p. 43]{CohenBriandNewman1990}), chords of a circle \cite[p. 137]{Sherwani2002}, trapezoids between two horizontal lines \cite{LiangLuTang1997}, and unit disks in the plane \cite{LotkerPeleg2010}, have interesting applications in science and industry.
On the other hand, the intersection graphs of substructures of an algebraic structure have been investigated by many authors \cite{AfkhamiKhashyarmanesh2014, Bosak1964, ChakrabartyGhoshMukherjeeSen2009, CsakanyPollak1969, Osba2016, Shen2010, Zelinka1975, Zelinka1973}. Our original motivation for this work was the intersection graphs of submodules of a module \cite{AkbariTavallaeeGhezelahmad2012}, and our discussions on this topic led us to work in a more general context, namely the graphs that we attribute to bounded semilattices.
\section{The Graphs of Bounded Semilattices}\label{sec:bsl}
Let us recall that $(S,\circ)$ is called a semilattice if $(S,\circ)$ is a commutative semigroup and its binary operation $\circ$ is idempotent, i.e. $x \circ x= x$, for all $x\in S$ \cite[Definition 2.1.1]{ChajdaHalasKuhr2007}. A similar definition of semilattices is given in \cite[Section 4.1]{Vickers1989}. It is easy to see that a partial order is induced on the semilattice $S$ by setting $x \leq y$ whenever $x\circ y = x$, for all $x,y \in S$ \cite[Theorem 2.1.2]{ChajdaHalasKuhr2007}. Note that if $1$ is the neutral element of $S$, then $x \leq 1$, for all $x\in S$, and if $0$ is an absorbing element of $S$, that is, $x\circ 0 = 0$, for all $x\in S$, then $0$ is the least element of $S$, i.e. $0 \leq x$, for all $x\in S$. If the semilattice $S$ possesses neutral and absorbing elements, then $S$ is called bounded, since $0 \leq x\leq 1$ for all $x\in S$. One of the simplest examples of a semilattice is $(\mathbb{P}(A), \cap)$, where by $\mathbb{P}(A)$ we mean the set of all subsets of the set $A$.
One can easily check that if $A$ is a set and $\mathcal{A} \subseteq\mathbb{P}(A)$, then $(\mathcal{A}, \cap)$ is a bounded semilattice if and only if the following properties hold:
\begin{enumerate}
\item If $X, Y \in \mathcal{A}$, then $X \cap Y \in \mathcal{A}$,
\item There are two distinct sets $M_1$ and $M_2$ in $\mathcal{A}$ such that $M_1 \subseteq X \subseteq M_2$, for all $X \in \mathcal{A}$.
\end{enumerate}
In this paper, the semilattice $\mathbb{P}(S)$ is of special interest, when $S$ has an algebraic structure since it provides some good examples for our results. Now, we attribute a graph to a bounded semilattice, inspired by the definition of intersection graphs in \cite{McKeeMcMorris1999}.
\begin{dfn}
\label{IntersectionGraphSemiLattice}
Let $(S,\circ,0,1)$ be a bounded semilattice. We attribute a graph $G(S)$ to $S$, whose vertices $V$ and edges $E$ are determined as follows:
\begin{enumerate}
\item The set of vertices $V$ is the set of all elements of $S$ except $0$ and $1$.
\item The vertices $x,y \in V$ are adjacent, i.e. $\{x,y\} \in E$, if $x \neq y$ and $x\circ y \neq 0$.
\hfill $\diamond$
\end{enumerate}
\end{dfn}
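To make Definition \ref{IntersectionGraphSemiLattice} concrete, the following small Python sketch builds $G(S)$ for the bounded meet-semilattice of positive divisors of $36$ under $\gcd$, in which the integer $1$ plays the role of $0$ and $36$ plays the role of $1$. The example and all names in the code are our own illustrative choices and are not part of the results of this paper.
\begin{verbatim}
from math import gcd

# Bounded meet-semilattice: divisors of 36 under gcd.
# gcd is the meet; the bottom element "0" is the integer 1
# and the top element "1" is 36.
N = 36
S = [d for d in range(1, N + 1) if N % d == 0]
bottom, top = 1, N

# Vertices of G(S): all elements except the bottom and the top.
V = [x for x in S if x not in (bottom, top)]

# Edges: distinct x, y whose meet is not the bottom element.
E = [(x, y) for i, x in enumerate(V) for y in V[i + 1:]
     if gcd(x, y) != bottom]

print("vertices:", V)   # [2, 3, 4, 6, 9, 12, 18]
print("edges:", E)
\end{verbatim}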
The following remark justifies why our definition for graphs of semilattices given in Definition \ref{IntersectionGraphSemiLattice} is a generalization of the intersection graphs of substructures of different algebraic structures.
\begin{rmk}[Intersection Graphs of Algebraic Structures]\label{IntersectionGraphEx}
$ $
\begin{enumerate}
\item Let $S$ be a semigroup and $\mathcal{S}$ the set of all subsemigroups of $S$. Clearly, the structure $(\mathcal{S} \cup \{\emptyset\}, \cap)$ is a bounded semilattice and its graph $G(\mathcal{S})$, given in Definition \ref{IntersectionGraphSemiLattice} of the current paper, coincides with the definition of the graphs of semigroups introduced in \cite{Bosak1964}.
\item Let $R$ be a commutative ring with a nonzero identity and $M$ a unitary nonzero $R$-module. It is obvious that the intersection graph of an $R$-module $M$, introduced in \cite{AkbariTavallaeeGhezelahmad2012}, is just the graph $G(\Sub_R(M))$ of the bounded semilattice $\Sub_R(M)$, where by $\Sub_R(M)$, we mean the set of all $R$-submodules of $M$. For more results on the intersection graph of a module, one may also refer to \cite{Yaraneri2013}.
\item Let $S$ be a semiring and $M$ an $S$-semimodule. It is easy to see that $(\Sub_S(M), \cap)$ is a bounded semilattice, where by $\Sub_S(M)$, we mean the set of all $S$-subsemimodules of $M$. In some cases, we will investigate the intersection graph $G(M)$ of the subsemimodules of the $S$-semimodule $M$.
\item Other examples for bounded semilattices and their intersection graphs include subgroups of a group \cite{CsakanyPollak1969,Zelinka1975}, normal subgroups of a nontrivial group, left ideals of a semiring \cite[\S 6]{Golan1999}, left ideals of the ring possessing a nonzero identity \cite{ChakrabartyGhoshMukherjeeSen2009}, subsemirings of the semiring \cite{LinRatti1970}, subsemimodules of a nonzero semimodule \cite[\S 14]{Golan1999}, and clopen sets of a topology, where by a clopen set, it is meant a set that is both closed and open \cite[Definition 3.6.4]{Vickers1989}.
\hfill $\diamond$
\end{enumerate}
\end{rmk}
\begin{dfn}
\label{atomdef}
Let $S$ be a bounded semilattice. An element $a\in S$ is called an atom if $0 < a < 1$ and, whenever $0 \leq y \leq a$, either $y=0$ or $y = a$. We gather the atoms of $S$ in the set $\Atom(S)$. Also, an element $d\in S$ is called a dual atom if $0 < d < 1$ and, whenever $d \leq y \leq 1$, either $y=d$ or $y=1$. We gather the dual atoms of $S$ in $\DAtom(S)$.
\hfill $\diamond$
\end{dfn}
\begin{rmk}
Let $S$ be a bounded semilattice. It is clear that the atoms of $S$ are the minimal elements of the poset $S-\{0,1\}$, and the dual atoms of $S$ are the maximal elements of the poset $S-\{0,1\}$. Note that if $\mathcal{S}$ is the semilattice of the ideals of a commutative semiring $R$, then the dual atoms of $\mathcal{S}$ are nothing but the maximal ideals of $R$.
\hfill $\diamond$ \end{rmk}
Let us recall that the degree of a vertex $v$ in a graph $G$, denoted by $d(v)$, is the number of edges of $G$ incident with $v$ \cite[p. 7]{BondyMurty2008}.
\begin{prop}
\label{deg1}
Let $S$ be a bounded semilattice and the graph $G(S)$ have no cycle of length $3$. If $y\in \Atom(S)$, then $\deg(y) =1$.
\end{prop}
\begin{proof}
Let $y\in \Atom(S)$, but $\deg(y) \geq 2$. So, there exist at least two distinct vertices $y_1$ and $y_2$ of $G(S)$ such that both are adjacent to $y$. Therefore, $y y_1\neq 0$ and $y y_2 \neq 0$. Since $y\in \Atom(S)$, $y y_1= y= y y_2$. Hence, $y \leq y_1$ and $y\leq y_2$. Thus $0 \neq y=y^2 \leq y_1 y_2$ and this implies that $y_1$ and $y_2$ are adjacent. Thus $G(S)$ contains a cycle $y-y_1-y_2-y$, which is a contradiction. Consequently, $\deg(y) =1$.
\end{proof}
\begin{prop}
\label{minimax}
Let $S$ be a bounded semilattice and $y$ a vertex of $G(S)$. If $\deg (y)=1$, then either $y \in \Atom(S)$ or $y \in \DAtom(S)$.
\end{prop}
\begin{proof}
Let $y$ be a vertex of $G(S)$ such that $\deg (y)=1$ and let $z$ be the only vertex of $G(S)$ adjacent to $y$. Clearly, $y z \neq 0$. Our claim is that either $yz=z$ or $yz=y$. Suppose that $yz \neq y$. Then $y\cdot yz = yz \neq 0$, which means that $yz$ is adjacent to $y$, and this implies that $yz = z$. So we have shown that either $y \leq z$ or $z \leq y$. If $y \leq z$, then there is no nonzero element $l\in S$ such that $l < y$, since any such $l$ would be a second vertex adjacent to $y$. So, $y$ is in $\Atom(S)$. If $z \leq y$, then there is no $m \in S-\{1\}$ such that $y < m$, since any such $m$ would be a second vertex adjacent to $y$. So, $y$ is in $\DAtom(S)$ and the proof is complete.
\end{proof}
\begin{rmk} The converse of Proposition \ref{minimax} does not hold. For example, let $(R, \mathfrak{m})$ be a quasi-local semiring, i.e. a semiring with the unique maximal ideal $\mathfrak{m}$. Clearly, if $|\Id(R)| \geq 5$, then $\deg(\mathfrak{m})\geq 2$. Note that any valuation semiring is quasi-local \cite[Theorem 1.8]{Nasehpour2017}.
\hfill $\diamond$
\end{rmk}
Let us recall that a path is a simple graph whose vertices can be arranged in a linear sequence in such a way that two vertices are adjacent if they are consecutive in the sequence, and are nonadjacent otherwise \cite[p. 16]{BondyMurty2008}.
\begin{lem}
\label{MaxK2}
Let $S$ be a bounded semilattice and $G(S)$ a path with vertex sequence $$y_1, y_2,\ldots,y_t,$$ where $t\geq2$. If $y_1\in \DAtom(S)$, then $G(S)= K_2$, where $K_2$ is the complete graph on two vertices.
\end{lem}
\begin{proof}
Let $G(S)$ be a path with vertex sequence $y_1, y_2,\ldots,y_t$, where $t\geq2$ and $y_1\in \DAtom(S)$. Then, either $y_1 y_2=y_1$ or the vertex $y_1 y_2$ is adjacent to $y_1$. If $y_1 y_2=y_1$, then $y_1 \leq y_2$. This implies that $y_1 = y_2$, since $y_1 \in \DAtom(S)$, which is a contradiction, since $y_1$ and $y_2$ are distinct vertices of $G(S)$. Since by assumption the only vertex adjacent to $y_1$ is the vertex $y_2$, we get $y_1 y_2=y_2$, and this means that $y_2 \leq y_1$. Now, we prove that $t$ cannot be greater than 2. Suppose, on the contrary, that $t\geq 3$. Then, either $0\neq y_2 y_3=y_2$ or the vertex $y_2 y_3$ is adjacent to $y_2$. If $0\neq y_2 y_3=y_2$, then $y_2\leq y_3$ and so $y_2\leq y_3 y_1$. This means that $y_3 y_1 \neq 0$, and so the vertices $y_1$ and $y_3$ are adjacent, which is a contradiction. But the only vertices that are adjacent to $y_2$ are $y_1$ and $y_3$. So, either $y_2 y_3= y_1$ or $y_2 y_3= y_3$. If $y_2 y_3= y_1$, then $y_1$ and $y_3$ are adjacent, which is a contradiction. Otherwise, $y_2 y_3= y_3$, and this implies that $y_3\leq y_2$. Now, in view of $y_2\leq y_1$, we get that the vertices $y_1$ and $y_3$ are adjacent, again a contradiction. Hence, $G(S)=K_2$ and the proof is complete.
\end{proof}
\begin{cor}
Let $S$ be a bounded semilattice such that $G(S)$ is a path. Then $G(S)=K_2$ if and only if $|\DAtom(S)|=|\Atom(S)|=1$.
\end{cor}
\begin{proof}
($\Rightarrow$): Let $G(S)=K_2$. So, by definition, $G(S)$ has only two vertices $y_1$ and $y_2$ and they are adjacent, which means that $y_1 y_2 \neq 0$. Obviously, this implies that either $y_1 y_2=y_1$ or $y_1 y_2=y_2$. If $y_1 y_2=y_1$, then $y_1\leq y_2$. This implies that $y_1$ is in $\Atom(S)$ and $y_2$ is in $\DAtom(S)$. Similarly, if $y_1 y_2=y_2$, then $y_2$ is in $\Atom(S)$ and $y_1$ is in $\DAtom(S)$ and therefore, in each case, $|\DAtom(S)|=|\Atom(S)|=1$.
($\Leftarrow$): Straightforward.
\end{proof}
\begin{thm}
\label{K2K12}
Let $S$ be a bounded semilattice and $G(S)$ a path. Then, either $G(S)=K_2$, or $G(S)=K_{1,2}$.
\end{thm}
\begin{proof}
Let $G(S)$ be a path as sequence $y_1,y_2,\ldots,y_n$. By Proposition \ref{minimax}, the element $y_1$ is either in $\Atom(S)$ or in $\DAtom(S)$. If $y_1 \in \DAtom(S)$, then by Lemma \ref{MaxK2}, $G(S) = K_2$.
Suppose now that $y_1 \in \Atom(S)$. Since the vertices $y_1$ and $y_2$ are adjacent, we have $y_1 y_2\neq 0$. But $y_1 y_2 \leq y_1$ and $y_1$ is in $\Atom(S)$. So, $y_1\leq y_2$. Now, we prove that if $n=3$, then $G(S)=K_{1,2}$. So, let $G(S)$ be a path with vertex sequence $y_1,y_2,y_3$ such that $y_1 \in \Atom(S)$. First of all, if $y_2 y_3= y_2$, then $y_2\leq y_3$, and so we conclude that $y_1$ and $y_3$ are adjacent, a contradiction. Hence, either $y_2 y_3= y_1$ or $y_2 y_3= y_3$. If $y_2 y_3= y_1$, then $y_1$ and $y_3$ are adjacent, a contradiction. Thus, $y_2 y_3= y_3$ and so $y_3\leq y_2$.
Now, if we prove that $n$ cannot be greater than 3, we are done. Suppose, on the contrary, that $n>3$. Since $y_3 \leq y_2$, we have $y_2 y_4 \geq y_3 y_4 \neq 0$, so the vertices $y_2$ and $y_4$ are adjacent, which contradicts the path structure. Hence, $n$ cannot be greater than 3, i.e. $G(S) = K_{1,2}$ and the proof is complete.
\end{proof}
\begin{cor}
Let $(L,+,\cdot,0,1)$ be a bounded lattice and $G(L)$ be a path. Then, either $G(L)=K_2$ or $G(L)=K_{1,2}$. Moreover, if $G(L)=K_{1,2}$, then $G(L)$ is of the form $y_1 - y_2 - y_3$ with $y_2=y_1 + y_3$ and $y_1 y_3 = 0$.
\end{cor}
\begin{proof}
As we have seen in the proof of Theorem \ref{K2K12}, we have $y_3 \leq y_2$ and $y_1 \leq y_2$. This implies that $y_3 + y_1 \leq y_2$. So, $y_3 + y_1 = y_t$ for some $1 \leq t \leq 3$. If $y_3 + y_1 = y_1$, then $y_3 \leq y_1$, which implies that $y_3$ is adjacent to $y_1$, a contradiction. In a similar way, one can see that $y_3 + y_1 \neq y_3$. So, $y_3 + y_1 = y_2$. Note that since, $y_3$ is not adjacent to $y_1$, we have $y_1 y_3 = 0$.
\end{proof}
One of the corollaries of Theorem \ref{K2K12} is the following result for semimodules. For the definition of semirings and semimodules, one can refer to the book \cite{Golan1999}.
\begin{cor}
Let $S$ be a semiring and $M$ an $S$-semimodule. If the intersection graph $G(M)$ of the $S$-subsemimodules of $M$ is a path, then either $G(M)=K_2$, or $G(M)=K_{1,2}$. Moreover, if $G(M)=K_{1,2}$, then $G(M)$ is of the form $N_1 - N_2 - N_3$ with $N_2=N_1 + N_3$ and $N_1 \cap N_3 = (0)$, where $N_1, N_2, N_3$ are $S$-subsemimodules of $M$.
\end{cor}
Let us recall that in a commutative semigroup $S$ with zero, $s\in S$ is a zero-divisor if there is a nonzero $t\in S$ such that $st =0$.
\begin{prop}
\label{Complete}
Let $S$ be a bounded semilattice with more than two elements. Then $G(S)$ is complete if and only if $S$ has no zero-divisors other than 0.
\begin{proof}
Straightforward.
\end{proof}
\end{prop}
\begin{dfn}
\label{Artiniandef}
A bounded semilattice $S$ is Artinian, if any decreasing chain $$s_1 \geq s_2 \geq \cdots \geq s_n \geq s_{n+1} \geq \cdots$$ in $S$ is stationary, i.e. there is an $n\in \mathbb N$ such that $s_i = s_{i+1}$, for all $i \geq n$.
\hfill $\diamond$
\end{dfn}
\begin{cor}
\label{Artinianprop}
Let $S$ be an Artinian bounded semilattice with more than two elements. Then, $G(S)$ is complete if and only if $|\Atom(S)| = 1$.
\end{cor}
\begin{proof}
Let $S$ be an Artinian bounded semilattice with more than two elements. Clearly, $\Atom(S) \neq \emptyset$.
($\Leftarrow$): Let $|\Atom(S)| = 1$ and let $m$ be the unique element of $\Atom(S)$. Since $S$ is Artinian, every $x\in S-\{0,1\}$ lies above some atom, so $m \leq x$, for all $x\in S-\{0,1\}$. This implies that if $x,y \in S-\{0,1\}$, then $xy \geq m$. So, $xy\neq 0$, for all $x,y \in S-\{0,1\}$, which means that $S$ has no zero-divisors other than $0$ and therefore, by Proposition \ref{Complete}, $G(S)$ is complete.
($\Rightarrow$): If $m_1$ and $m_2$ are two distinct elements of $\Atom(S)$, then $m_1 m_2 = 0$. So, $S$ has some zero-divisors other than 0. Hence, by Proposition \ref{Complete}, $G(S)$ is not complete.
\end{proof}
Let us recall that if $S$ is a poset, then the length of $S$, denoted by $l(S)$, is defined as $l(S) = \sup \{ |C|-1: C \text{ is a chain of $S$}\}$ \cite[p. 54]{Blyth2005}.
A graph is said to be planar if it can be drawn in the plane so that its edges intersect only at their ends. Kuratowski's Theorem in graph theory states that a graph is planar if and only if it contains no subdivision of either $K_5$ or $K_{3,3}$ \cite[Theorem 10.30]{BondyMurty2008}.
\begin{prop}
\label{planarchain}
Let $S$ be a bounded semilattice. If $G(S)$ is a planar graph, then $l(S) \leq 5$.
\end{prop}
\begin{proof}
Let $l(S) \geq 6$. So there exists a chain $0 < s_1 < s_2 < s_3 < s_4 < s_5 <1$ in $S$ such that $s_i \in S-\{0,1\}$. Clearly, the
vertices $s_i$, where $1\leq i\leq 5$, form $K_5$ as an induced subgraph of $G(S)$, a contradiction. So, $l(S) \leq 5$ and the proof is complete.
\end{proof}
Let us recall that a walk in a graph $G$ is a sequence $v_0 e_1 v_1 \cdots v_{l-1}e_l v_l$, whose terms are alternately vertices and edges of $G$, such that $v_{i-1}$ and $v_i$ are the ends of $e_i$, $1 \leq i \leq l$. A walk in a graph is closed if its initial and terminal vertices are identical. A tour of a connected graph $G$ is a closed walk that traverses each edge of $G$ at
least once, and an Euler tour is one that traverses each edge exactly once. A graph is Eulerian if it admits an Euler tour (see Sections 3.1 and 3.3 in \cite{BondyMurty2008}). A graph in which each vertex has even degree is called an even graph. A connected graph is Eulerian if and only if it is even \cite[Theorem 3.5]{BondyMurty2008}.
\begin{lem}
\label{Euler1}
Let $S$ be a finite bounded chain (semilattice) with more than two elements. Then $G(S)$ is a complete graph. Moreover, $G(S)$ is Eulerian if and only if $l(S)$ is an even number.
\end{lem}
\begin{proof}
Let $l(S) = t+1$ and set $S=\{0,s_1,\ldots,s_t,1\}$ such that $0 < s_1 < \cdots < s_t <1$. It is clear that $s_i s_j = s_{\min\{i,j\}} \neq 0$. So, $G(S)$ is the complete graph $K_t$ and $\deg(s_i) = t-1$, for each $i$. Therefore, $G(S)$ is Eulerian if and only if $l(S)=t+1$ is even.
\end{proof}
It is easy to verify that if $\{S_i\}$ is a family of bounded semilattices, then $S = \prod_i S_i$ is also a bounded semilattice, where its operation is defined componentwise and $1_S = (1_{S_i})$ and $0_S = (0_{S_i})$.
\begin{thm}
\label{Euler2}
Let $n\geq 3$ and $\{S_i\}^n_{i=1}$ be a family of bounded semilattices. If each $S_i$ is a finite chain and $S= \prod^n_{i=1} S_i$, then $G(S)$ is Eulerian if and only if either (a) the length $l(S_i)$ of $S_i$ is even, for all $1 \leq i \leq n$ or (b) each $S_i$ has two elements.
\end{thm}
\begin{proof}
Let $x=(x_1, \ldots, x_n) \in S-\{0_S,1_S\}$. Define $\delta_i : S \longrightarrow \{0,1\}$ as follows:
$$\delta_i(x)=\left\{
\begin{array}{ll}
1, & \hbox{ $x_i= 0$;} \\
0, & \hbox{ $x_i\neq 0$.}
\end{array}
\right.
$$
Therefore, the number of elements $y \in S-\{0_S, 1_S\}$ such that $x y=0$ is $$\prod_{i=1}^n(l_i+1)^{\delta_i(x)}-1,$$ where $l_i = l(S_i)$. Also, the number of vertices of the graph $G(S)$ is $$\prod^n_{i=1}(l_i +1) -2.$$ Now, since the vertex $x$ is not adjacent to itself, $$\deg(x)=\prod_{i=1}^n(l_i+1)-\prod_{i=1}^n(l_i+1)^{\delta_i(x)}-2.$$
$(\Leftarrow)$ Proof of (b): If each $S_i$ has two elements and $x=(x_1, \ldots, x_n) \in S-\{0_S,1_S\}$, then some of the $x_i$s are 0, while the rest of the $x_i$s are 1. Now, if we set $B=\{ i: x_i =0\}$, then $1 \leq |B| < n$, and $\deg(x) = 2^n - 2^{|B|} -2$, which is clearly an even number. Therefore, $G(S)$ is Eulerian.
Proof of (a): Since $x \neq 0_S$, we have $\delta_i(x)=0$ for at least one $1\leq i \leq n$. Now, if each $l_i$ is even, then $\prod_{i=1}^n(l_i+1)$ and $\prod_{i=1}^n(l_i+1)^{\delta_i(x)}$ are both odd numbers and therefore, $\deg(x)$ is even, for each vertex $x$. This implies that $G(S)$ is Eulerian.
$(\Rightarrow)$ Now, suppose that one of the numbers $\{l_i=l(S_i)\}$ is odd and at the same time one of the numbers $\{l_i=l(S_i)\}$ is even. We define $x=(x_1,\ldots,x_n)$, where $x_i = 1$ if and only if $l_i$ is odd, and $x_i = 0$ if and only if $l_i$ is even. Clearly, $\prod_{i=1}^n(l_i+1)$ is an even number, while $\prod_{i=1}^n(l_i+1)^{\delta_i(x)}$ is odd, because it is a product of odd numbers. So, $\deg(x)$ is odd and $G(S)$ cannot be an Eulerian graph.
If all the numbers $l_i$ are odd and $l_i \geq 3$ for some $i$, then we define $y=(y_1,\ldots,y_n)$, where $y_i = 1_{S_i}$ if $l_i = 1$, and $y_i$ is the unique element of $\DAtom(S_i)$ if $l_i \geq 3$. Clearly, $y \neq 1_S$ and $\delta_i(y) = 0$, for each $i$. Also, $\deg(y)= \prod_{i=1}^n(l_i+1)-3$, which is clearly an odd number. Therefore, $G(S)$ again cannot be an Eulerian graph and the proof is complete.
\end{proof}
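As a numerical sanity check of Theorem \ref{Euler2}, the following Python sketch (our own illustration, not part of the proof) computes the vertex degrees of $G(S)$ by brute force for a product of bounded chains and tests the parity condition used above.
\begin{verbatim}
from itertools import product

def degrees(lengths):
    # S_i = {0, 1, ..., l_i} under min; S is the product
    # semilattice with componentwise min as the meet.
    bottom = tuple(0 for _ in lengths)
    top = tuple(lengths)
    verts = [v for v in product(*[range(l + 1) for l in lengths])
             if v not in (bottom, top)]
    meet = lambda x, y: tuple(map(min, x, y))
    return [sum(1 for w in verts if w != v and meet(v, w) != bottom)
            for v in verts]

# Theorem (Euler2): for n >= 3 chains, G(S) is Eulerian iff
# every l_i is even or every l_i equals 1.
for lengths in [(2, 2, 2), (1, 1, 1), (1, 2, 2)]:
    print(lengths, all(d % 2 == 0 for d in degrees(lengths)))
# prints True, True, False, matching the theorem
\end{verbatim}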
In Lemma \ref{Euler1}, we proved that if $S$ is a finite bounded chain (semilattice) with more than two elements, then $G(S)$ is a complete graph. Now, we show that for a direct product of bounded chains, this is not the case.
\begin{prop}
Let $n\geq 2$ and $\{S_i\}^n_{i=1}$ be a family of bounded semilattices. If each $S_i$ is a finite chain and $S= \prod^n_{i=1} S_i$, then $G(S)$ cannot be a complete graph.
\end{prop}
\begin{proof}
Let $x=(x_1, \ldots, x_n) \in S-\{0_S,1_S\}$. Clearly, if $G(S)$ is a complete graph, then the number of elements $y \in S-\{0_S, 1_S\}$ such that $x y=0$ must be zero. Therefore, according to the proof of Theorem \ref{Euler2}, we must have $$\prod_{i=1}^n(l_i+1)^{\delta_i(x)}-1 = 0,$$ that is, $\delta_i(x)=0$ for each $i$, and this must hold for every vertex $x$. But the vertex $x=(1_{S_1},0_{S_2},\ldots,0_{S_n})$ satisfies $\delta_i(x)=1$ for all $i \geq 2$, a contradiction. Therefore, $G(S)$ cannot be a complete graph and the proof is complete.
\end{proof}
Let us recall that a tree is an undirected graph in which any two vertices are connected by exactly one path \cite[Theorem 1.5.1]{Diestel2017}. A complete bipartite graph is a graph whose vertex set can be partitioned into two parts such that every vertex of the first part is adjacent to every vertex of the second part and no two vertices in the same part are adjacent; if one of the parts has exactly one element, the graph is called a star graph \cite[p. 18]{Diestel2017}.
\begin{thm}
\label{TreeStar}
Let $S$ be a bounded semilattice. Then, the graph $G(S)$ is a tree if and only if it is a star graph.
\begin{proof}
We just need to prove that if $G(S)$ is a tree, then $G(S)$ is a star graph. Suppose, on the contrary, that $G(S)$ is a tree which is not a star graph. Then $G(S)$ has a path of length 3, say of the form $y_1-y_2-y_3-y_4$ with $\deg (y_1)=1$. Since $\deg (y_1)=1$, by Proposition \ref{minimax}, either $y_1$ is in $\Atom(S)$ or in $\DAtom(S)$.
Firstly, suppose that $y_1 \in \Atom(S)$. Since the vertices $y_1$ and $y_2$ are adjacent, we have $y_1 y_2\neq 0$. Clearly, $y_1 y_2 \neq1$. Our claim is that $y_1 y_2 = y_1$. If $y_1 y_2 = y_2$, then $y_2 \leq y_1$, and since $y_1$ is an atom and $y_2$ is nonzero, $y_2 = y_1$, a contradiction. Now, suppose $y_1y_2 = s$ for some $s \in S-\{y_1,y_2\}$. In this case, $s$ is adjacent to both of the vertices $y_1$ and $y_2$, and this is impossible, since $G(S)$ is a tree and any two vertices of a tree are connected by exactly one path \cite[Theorem 1.5.1]{Diestel2017}. Therefore, $y_1 y_2 = y_1$ and so, $y_1\leq y_2$.
By assumption, $y_2$ and $y_3$ are adjacent. So, $y_2 y_3 \neq 0$. Also, $y_2 y_3 \neq 1$. Now, we prove that $y_2 y_3 = y_3$.
If $y_2 y_3= y_2$, then $y_2\leq y_3$. So, we obtain that $y_1$ and $y_3$ are adjacent, a contradiction. On the other hand, if there is an element $s\in S-\{0,1\}$ different from $y_2$ and $y_3$ such that $y_2 y_3= s$, then $s$ is adjacent to both of the vertices $y_2$ and $y_3$, again a contradiction. Therefore, $y_2 y_3= y_3$ and this implies that $y_3\leq y_2$. So, $y_2 y_4 \geq y_3y_4 \neq 0$ and this means that $y_2$ and $y_4$ are adjacent, a contradiction.
Now, suppose that $y_1 \in \DAtom(S)$. Similarly to the argument above, we can show that $y_1 y_2 = y_2$ and so $y_2\leq y_1$. Obviously, $y_2 y_3 \neq 0,1$. On the other hand, it can be proved in the same way that if $y_2y_3 = s$ for some $s\in S-\{y_2, y_3\}$, then $G(S)$ cannot be a tree. If $y_2 y_3 = y_2$, then $y_2 \leq y_3$ and this implies that $y_1$ is adjacent to $y_3$, a contradiction. Also, if $y_2 y_3 = y_3$, then $y_3 \leq y_2$ and in this case, $y_2$ is adjacent to $y_4$, again a contradiction. Hence, if $G(S)$ is a tree, then it is a star graph and the proof is complete.
\end{proof}
\end{thm}
We end this section with a criterion for the connectivity of $G(L)$, where $L$ is a modular bounded lattice. Let us recall that a bounded lattice $L$ is modular if $c \leq b$ implies that $(c+a)b = c + ab$, for all $a,b,c\in L$ \cite[p. 10]{Stern1999}.
\begin{prop}
\label{modularlattice}
Let $a$ and $b$ be two distinct elements of a modular bounded lattice $L$. Then there is no path in $G(L)$ between $a$ and $b$ if and only if $ab=0$, $a+b =1$, and $a,b \in \Atom(L)$.
\begin{proof}
$(\Rightarrow)$: Assume that there is no path in $G(L)$ between $a$ and $b$. Clearly, $ab = 0$. Let $c$ be an element of $L$ such that $0 \neq c \leq b$. So, $ca=0$. On the other hand, if $c+a \neq 1$, then $a-(c+a)-b$ is a path of length 2, a contradiction. So, $c+a =1$. In particular, $a+b =1$. Now, we prove that $b\in \Atom(L)$. By the modular law, $(c+a)b=c+ab$. But we have already seen that $ab =0$ and $c+a=1$. Therefore, $b=c$. Since $c$ was an arbitrary nonzero element below $b$, we get $b \in \Atom(L)$, and by symmetry, $a \in \Atom(L)$.
$(\Leftarrow)$: Assume that $ab=0$, $a+b =1$, and $a,b \in \Atom(L)$, and suppose that some vertex $v$ is adjacent to $a$. Since $0 \neq va \leq a$ and $a$ is an atom, we get $a \leq v$. By the modular law, $v = (a+b)v = a + bv$. Since $bv \leq b$ and $b$ is an atom, either $bv=0$ or $bv=b$; the latter would give $v \geq a+b = 1$, which is impossible, so $bv = 0$ and $v = a$, a contradiction. Therefore, $a$ is an isolated vertex and there is no path in $G(L)$ between $a$ and $b$.
\end{proof}
\end{prop}
\begin{rmk}
Proposition \ref{modularlattice} is a special case of Theorem 2.12 in \cite{DevhareJoshiLaGrange2018}.
\hfill $\diamond$ \end{rmk}
\section{On the Diameter and Girth of the Graphs of Bounded Semilattices}\label{sec:diam}
Let us recall that the distance between two vertices in a graph is the number of edges in a shortest path connecting them. The
greatest distance between any two vertices in a graph $G$ is the diameter of $G$, denoted by $\diam(G)$ \cite[p. 8]{Diestel2017}.
\begin{prop}
\label{MinDiam}
Let $S$ be a bounded semilattice with $|S| \geq 4$ and $|\Atom(S)| = 1$. Then $G(S)$ is connected with $\diam(G(S))= 1$.
\begin{proof}
Let $m$ be the unique element of $\Atom(S)$. Therefore, for any nonzero element $s$ in $S$, we have $m \leq s$. This implies that if $s_1$ and $s_2$ are two distinct elements of $S-\{0,1\}$, then $s_1 \cdot s_2 \geq m \cdot m = m \neq 0$. So, any two vertices $s_1$ and $s_2$ of $G(S)$ are adjacent, which means that $G(S)$ is complete (in particular, connected) and $\diam(G(S))= 1$.
\end{proof}
\end{prop}
\begin{prop}
\label{MaxDiam}
Let $S$ be a bounded semilattice with $|S| \geq 3$ and $|\DAtom(S)| = 1$. Then $G(S)$ is connected with $\diam(G(S))\leq 2$.
\end{prop}
\begin{proof}
Let $m\in \DAtom(S)$. Obviously, if $y$ is a vertex of $G(S)$ distinct from $m$, then $y\leq m$, and so $ym = y \neq 0$. Therefore, $y$ and $m$ are adjacent. Clearly, this implies that for all vertices $x\neq m$ and $y\neq m$, we have the path $x - m - y$, which implies that the distance between any pair of vertices of $G(S)$ is at most 2 and the proof is complete.
\end{proof}
\begin{exm}
In this example, we give graphs of bounded semilattices satisfying the conditions of Proposition \ref{MaxDiam}, with diameter 0, 1, and 2. Let $k$ be a field and $X$ an indeterminate over $k$. Set $R=k[[X]]$ to be the formal power series ring over $k$. The set of all ideals of $R$ is the infinite chain $$R \supset (X) \supset (X^2) \supset \cdots \supset (X^n) \supset \cdots \supset (0).$$ Now, we set $S_n = \Id(R/(X^n))$ to be the set of all ideals of the ring $R/(X^n)$. Clearly, $S_n$ has at least three elements for any $n\geq 2$, $\DAtom(S_n) = \{(X)/(X^n)\}$, and $G(S_n) = K_{n-1}$. Therefore, $\diam(G(S_2)) = 0$ and $\diam(G(S_n))=1$, for any $n\geq 3$.
Now, let $T = k \times k[[X]]/(X^2)$. Obviously, the only maximal ideal of $T$ is the ideal $\mathfrak{n} = k \times (X)/(X^2)$. Suppose $\mathfrak{a} = k \times 0$ and $\mathfrak{b} = 0 \times k[[X]]/(X^2)$. We have $\mathfrak{a} \cap \mathfrak{b} = 0$, while $\mathfrak{a} \cap \mathfrak{n} \neq 0$ and $\mathfrak{b} \cap \mathfrak{n} \neq 0$. So, $d(\mathfrak{a}, \mathfrak{b}) = 2$ and this means that $\diam(G(T)) =2$.
\hfill $\diamond$ \end{exm}
Let us recall that a bounded lattice is called dually atomic if for every $x\in S-\{1\}$, there exists a dual atom $m$ such that $x \leq m$ \cite[\S1]{ChajdaHalasKuhr2007}.
\begin{thm}
\label{IntersectionGraphLIdLatticeThm}
Let $(S,+,\cdot,0,1)$ be a dually atomic bounded distributive lattice in which $\DAtom(S)$ is nonempty. If the graph $G(S)$ of $S$ has no isolated vertex, then $G(S)$ is connected with $\diam(G(S))\leq 4$.
\end{thm}
\begin{proof}
Let ${a}_1, {a}_2$ be two distinct vertices of $G(S)$. If ${a}_1 {a}_2 \neq 0$, then $d({a}_1,{a}_2)=1$. Now, suppose that ${a}_1 {a}_2=0$. By assumption, there are two elements ${m}_1$ and ${m}_2$ in $\DAtom(S)$ such that ${a}_1\leq {m}_1$ and ${a}_2\leq {m}_2$.
If ${m}_1={m}_2$, or ${m}_1 {m}_2\neq 0$, or ${a}_1 {m}_2\neq 0$, or
${a}_2 {m}_1\neq 0$, then $d({a}_1,{a}_2)\leq 3$. Therefore, we assume that ${m}_1\neq {m}_2$, ${m}_1 {m}_2= 0$, ${a}_1 {m}_2= 0$ and ${a}_2 {m}_1= 0$. If ${a}_1 + {a}_2 \neq 1$, then the path ${a}_1-({a}_1+{a}_2)-{a}_2$ is of length 2. So, $d({a}_1,{a}_2)\leq 2$.
But if ${a}_1+{a}_2=1$, then ${m}_1={m}_1 {a}_1+{m}_1 {a}_2$. Since ${m}_1 {a}_2=0$, we have ${m}_1={m}_1 {a}_1$, which implies that ${a}_1={m}_1$. By a similar argument, ${a}_2={m}_2$. Also, since $G(S)$ has no isolated vertex, there are two vertices ${b}_1\neq {a}_1$ and ${b}_2\neq {a}_2$ such that ${a}_1$ is adjacent to ${b}_1$ and ${a}_2$ is adjacent to ${b}_2$.
If ${b}_1 {b}_2 \neq 0$, then ${a}_1-{b}_1-{b}_2-{a}_2$ is a path of length 3. So, $d({a}_1,{a}_2)\leq 3$. If ${a}_1 {b}_2\neq 0$ or ${a}_2 {b}_1\neq 0$, then $d({a}_1,{a}_2)\leq 2$.
Finally, let ${b}_1 {b}_2= 0$, ${a}_1 {b}_2= 0$, and ${a}_2 {b}_1= 0$. Our claim is that ${b}_1+{b}_2 \neq 1$. Suppose, on the contrary, that ${b}_1+{b}_2=1$. Then ${a}_1 = {a}_1 {b}_1+{a}_1 {b}_2 = {a}_1 {b}_1$, which implies that ${a}_1 \leq {b}_1$. But ${a}_1 = {m}_1$ is a dual atom and ${b}_1 \neq 1$, so ${a}_1 = {b}_1$, a contradiction. Hence, ${b}_1+{b}_2 \neq 1$ and ${a}_1-{b}_1-({b}_1+{b}_2)-{b}_2-{a}_2$ is a path of length 4. So, $d({a}_1,{a}_2)\leq 4$. Consequently, $G(S)$ is connected with $\diam(G(S))\leq 4$ and the proof is complete.
\end{proof}
Let $R$ be a commutative ring with a nonzero identity and $M$ be a nonzero unital $R$-module. The $R$-module $M$ is called distributive if the lattice of $R$-submodules of $M$ is distributive \cite{Camillo1975}. As in \cite{AkbariTavallaeeGhezelahmad2012}, we denote the intersection graph of submodules of the $R$-module $M$ by $G(M)$.
\begin{cor}
\label{DistributiveModule}
Let $R$ be a commutative ring with a nonzero identity and $M$ be a nonzero unital $R$-module. If $M$ is a distributive $R$-module such that any submodule of $M$ is a subset of a maximal submodule of $M$ and $G(M)$ has no isolated vertex, then $G(M)$ is connected with $\diam(G(M))\leq 4$.
\hfill $\Box$
\end{cor}
Let us recall that a trail in a graph $G$ is a walk in which all edges are distinct. A path in the graph $G$ is a trail in which all vertices (except possibly the first and last) are distinct. If $P = x_0 \cdots x_{k-1}$ is a path in $G$ and $k \geq 3$, then $C = x_0 \cdots x_{k-1} x_0$ is a cycle in $G$. The minimum length of a cycle contained in the graph $G$ is called the girth of $G$, denoted by $\girth(G)$ \cite[p. 8]{Diestel2017}.
\begin{thm}
\label{girth}
Let $S$ be a bounded semilattice. If $G(S)$ contains a cycle, then we have $\girth(G(S)) = 3$.
\begin{proof}
Suppose, on the contrary, that $\girth(G(S)) \geq 4$. This implies that every pair of elements $y$ and $z$ in $S-\{0,1\}$ with $yz \neq 0$ is comparable, because if they are not comparable, then $yz$ is different from $y$ and $z$, and therefore $y - yz - z - y$ is a cycle of length 3, a contradiction. Now, let $z-y-x-t$ be a path of length 3 in $G(S)$. Since any two adjacent elements in this path are comparable and any chain of length 2 in $S-\{0,1\}$ induces a cycle of length 3 in $G(S)$, the only possible cases are: $z \leq y$, $x \leq y$, $x \leq t$, or $y \leq z$, $y \leq x$, and we prove that each case leads us to a contradiction.
Case 1: If $z \leq y$, $x \leq y$, and $x \leq t$, then $x \leq yt$, which implies that $yt \neq 0$. Therefore, $y-x-t-y$ is a cycle of length 3 in $G(S)$, a contradiction.
Case 2: If $y \leq z$ and $y \leq x$, then $y \leq xz$, which implies that $xz \neq 0$. Therefore, $z-y-x-z$ is a cycle of length 3 in $G(S)$, again a contradiction. Hence, $\girth(G(S)) = 3$ and the proof is complete.
\end{proof}
\end{thm}
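The key step of this proof, namely that an edge between two incomparable vertices always closes a triangle through their meet, is easy to verify computationally. A small Python sketch on a divisor semilattice (an illustrative example of our own choosing):
\begin{verbatim}
from math import gcd

# Divisors of 60 under gcd; bottom = 1, top = 60.
N = 60
V = [d for d in range(2, N) if N % d == 0]

for i, x in enumerate(V):
    for y in V[i + 1:]:
        m = gcd(x, y)               # the meet of x and y
        if m != 1 and m not in (x, y):
            # x and y are adjacent and incomparable, so
            # x - m - y - x must be a cycle of length 3:
            assert gcd(x, m) != 1 and gcd(y, m) != 1
print("every incomparable edge closes a triangle")
\end{verbatim}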
\section*{Acknowledgment}
The research of the first author was in part supported by a grant from the Islamic Azad University of Qazvin Branch. The second author is supported by the Department of Engineering Science at the Golpayegan University of Technology, and his special thanks go to the department for providing all the facilities necessary for successfully conducting this research. The authors are grateful to John LaGrange for looking through the paper and making suggestions which improved it.
\section{Introduction}
The Keck telescope on Mauna Kea with the Low-Resolution Imaging Spectrograph (LRIS) couples a very large collecting area (10m) to a moderate-resolution Cassegrain slit spectrograph \citep{Goodrich:2003kv,1995PASP..107..375O}. The polarimetric mode, called LRISp, provides a dual-beam capability with the ability to measure circular and linear polarized spectra using achromatic retarders \citep{Goodrich:1995fg,1991PASP..103.1314G} to modulate the incoming polarized light.
In the field of stellar magnetism, several recent theoretical and observational studies show that resolutions of only a few thousand are required to detect signatures of global fields and star spots. Magnetic fields in TiO bands have been detected in M-dwarfs \citep{2008ASPC..384..175B}. Iron hydride (FeH) and chromium hydride (CrH) bands are also observable and are modeled to be detectable with high polarimetric sensitivity \citep{Afram:2008kt, Afram:2007iq, Kuzmychov:2013hb}. The 3-dimensional structure of star spots can be constrained with observations in multiple molecular bands \citep{Berdyugina:2011wc}. With LRISp, several scientific investigations are possible provided that signal-to-noise ratios (SNRs) of over 1000 can be delivered, as we have been pursuing \citep{Kuzmychov:2014vv, 2013MmSAI..84.1127K}.
Many galactic sources are highly polarized, showing detectable continuum and line polarization effects of up to 20\% \citep{1995AJ....110.2597T,1995ApJ...440..578T,1995ApJ...440..597T, 1995ApJ...440..565T}. LRISp has been used to observe these targets, achieving roughly 1\% polarimetric sensitivities \citep{2011ApJ...726L..21T}.
To achieve these high SNRs, not only are many photons required, but a thorough calibration and correction of many instrumental artifacts must be performed. Most instruments suffer from several problems that create artifacts in polarimetric data. For example, instrument flexure causes wavelength drifts of a fraction of a pixel between exposures or within the two polarized beams of a single exposure. These small instrumental wavelength shifts mimic signatures from stellar magnetic fields, which are also wavelength shifts of spectral lines through the Zeeman effect (cf. \citet{2013A&A...559A.103B}). A typical polarization calculation requires combining spectra from several exposures. Rotating retarders, as in LRISp and other instruments, can introduce several polarimetric artifacts \citep{2008PASP..120...89H}. Instrumental wavelength instabilities can become a serious limitation even with shifts as small as 0.1 pixel. The flexure of LRIS was measured in 2011 by telescope staff to be over a pixel \citep{Keck:2011vc}, similar to previous LRIS flexure measurements \citep{Keck:2014uc}.
Polarimetric instruments typically use retarders to modulate the incoming stellar light and a polarizer to function as an {\it analyzer}. Dual-beam instruments utilize polarizing beamsplitters so that orthogonally analyzed polarization states are recorded on the CCD with high instrument throughput and the ability to remove systematic errors. By differencing and/or ratioing spectra recorded with different retarder orientations through the two orthogonally polarized dual beams, several instrument systematic errors can be removed. This is typically called {\it beam swapping}. Several examples are included in these references: \citep{2003isp..book.....D, Bagnulo:2009bz, Tinbergen:2005up, 2013pss2.book..175S, 1993A&A...278..231S, 1974psns.coll.....G}. Dual-beam instrumentation with {\it beam swapping} modulation techniques can cancel out several effects to first order. Depending on how the polarization spectra are computed, there are still second-order artifacts remaining from instabilities in time (seeing, pointing jitter, sky transparency) and between the beams (transmissions, flat-fielding, differential aberrations, CCD imperfections, etc). Since polarization spectra are computed as differences between measured intensities, instrumental artifacts must be carefully removed or mitigated \citep{1996SoPh..164..243K}.
Several kinds of retarders often have small but detectable interference effects, called fringes in some texts and, more generally, polarized spectral fringes \citep{2005A&A...434..377C, 2003A&A...401....1S}. Many retarders are manufactured as multiple layers of birefringent materials that produce Fabry-Perot type etaloning, which introduces fringes. These fringes are different for the fast and slow axis orientations, creating spurious instrumental polarization. Some of the {\it super-achromatic} type retarders with multiple layers can have fringes producing spectral intensity modulation of well over 1\% amplitude. These fringes are often modeled or removed to some residual error level with various function fits or Fourier filtering techniques \citep{Aitken:2001ih, Harries:1996vf, 1995ApJ...448L..49A, 2005A&A...434..377C, 2003A&A...401....1S}.
Telescope and instrumental polarization is important because knowledge of the continuum polarization can be used in addition to line polarization as a constraint on circumstellar environments. Because this dual-beam spectropolarimeter is mounted at Cassegrain focus in a mostly symmetric optical beam, it has quite minimal instrumental polarization. The instrument is mounted roughly 10 arc minutes off-axis, showing low polarization induced by the telescope. Most telescopes, even in axially symmetric beams, show instrumentally induced polarization at the 0.1\% level from asymmetries in the optical coatings, oxidation, metallic properties, etc. These small but significant telescope polarizations are seen in several imaging instruments such as PlanetPol, POLISH, DiPOL, DiPOL2, and HIPPI \citep{2015MNRAS.449.3064B, Hough:2006iz, Bailey:2008fm, Bailey:2010de, Wiktorowicz:2008fm, Berdyugin:2006gn,Berdyugina:2011ca,Berdyugina:2008dj}. Segmented mirrors, off-axis instrument mounting and data reduction artifacts can all introduce continuum polarization.
Spectrographs generally are limited in their stability by changes in the optical path (pointing, slit tracking jitter, flexure, dispersive optics sensitivities). Spurious instrumental polarization can also be caused by incomplete scattered light compensation, CCD instabilities, polarization induced by reflective optics (e.g. oblique fold mirrors) and imperfect coatings on optics. Thus, measuring the absolute value of the polarization at high accuracy across the entire continuum of a dispersed spectrum presents challenges to both the instrument and the data analysis pipeline. Many spectrographs have much higher polarization sensitivity across individual spectral lines because the continuum variations can be differentially subtracted across a small wavelength region (cf. \citet{Pereyra:2015gt})
In addition to polarization being created by the telescope, the instrument can also scramble or mix incoming polarization states. This mixing, called cross-talk, can be from the unpolarized intensity to detected linear or circular polarization or simply mixing between linear and circular states. The LRISp retarders are highly achromatic, reducing the mixing between linear and circular polarization \citep{Keck:2012ub}.
The LRISp instrument was upgraded to include a second blue camera which also can be used in polarimetric mode \citep{Keck:2012ub,1998SPIE.3355...81M}. In 2009, the LRIS red detector was upgraded to include higher sensitivity at longer wavelengths \citep{Rockosi:2010ez}. The atmospheric dispersion corrector (ADC) was mounted in 2007 and includes transmissive prisms \citep{2008SPIE.7014E..53P}. This ADC was tested in 2007 to only marginally impact the measured degree of polarization and angle of polarization for standard stars \citep{Keck:2007wf}.
\section{Observations}
We observed a range of targets on August 22nd and 23rd 2012. The 831/8200 grating was used for the red channel at an angle of 37.47$^\circ$, giving coverage from 789nm to 1026nm. The blue channel used the 300/5000 grism with the 680 dichroic, with reasonable sensitivity from 380nm to 776nm, though drastic throughput losses were seen longward of the 680nm dichroic cutoff.
The target list included magnetic stars, brown dwarfs and a range of calibration standards. EV Lac and V1054 Oph are magnetic flare stars of roughly M3 to M4 type. These stars have roughly known magnetic field strengths and have been studied extensively in the optical and radio \citep{1984ApJ...282..214P, 1994IAUS..154..493S, 1996ApJ...459L..95J, 2000ASPC..198..371J} . We use them here as stars where we expect relatively large and detectable signals. The star HD20630 is a G5Vv star of BY Dra type. Though this star is magnetic with known variability, it is listed as a bright unpolarized standard star (in continuum filters) on the UKIRT standard star list \footnote{{\it http://www.jach.hawaii.edu/UKIRT/instruments/irpol/irpol\_stds.html}} and in several publications \citep{1974psns.coll.....G}. Ceres is the largest main belt asteroid and has a visual magnitude of V=8.
\begin{table*}[!htb]
\begin{center}
\caption{\label{Table_LRISp_Observations} Observed targets for LRISp August 22nd and 23rd.}
\begin{tabular}{lcllllll}
\hline
\hline
{\bf Name } & {\bf Date} & {\bf Exp (s)} &{\bf $\sqrt{N}$} & {\bf Elevation} &{\bf Defocus?} &{\bf Spec} & {\bf Type} \\
\hline
\hline
EV Lac & 22nd & 20 & 2000 & 64-65 & No & M4.5V & flare star \\
V 1054 Oph & 22nd & 15 & 2000 & 59-60 & No & M3.5Ve & flare star \\
HD 20630 & 22nd & 1.2 & 1000 & 73-74 & Yes+ & G5Vv & star type BY Dra \\
2MASS & 22nd & 600 & 350 & 67-84 & No & L3.5 & Brown Dwarf \\
2MASS & 22nd & 600 & 350 & 88-77 & No & L3.5 & Brown Dwarf \\
2MASS & 22nd & 600 & 350 & 72-59 & No & L3.5 & Brown Dwarf \\
LSRJ & 22nd & 600 & 1000 & 74-77 & No & M8.5V & Brown Dwarf \\
LSRJ & 22nd & 600 & 1000 & 74-64 & No & M8.5V & Brown Dwarf \\
LSRJ & 22nd & 600 & 1000 & 62-49 & No & M8.5V & Brown Dwarf \\
\hline
Ceres & 23rd & 60 & 2000 & 59-54 & Yes- & Asteroid & Solar \\
HD 20630 & 22nd & 2 & 2000 & 73-74 & Yes & G5Vv & star type BY Dra \\
HD 174160 & 22nd & 5 & 1200 & 73-70 & Yes+ & F8V & star \\
2MASS & 22nd & 600 & 200 & 63-77 & No & L3.5 & Brown Dwarf \\
2MASS & 22nd & 600 & 200 & 81-88 & No & L3.5 & Brown Dwarf \\
2MASS & 22nd & 600 & 200 & 79-64 & No & L3.5 & Brown Dwarf \\
LSRJ & 22nd & 600 & 1200 & 77-74 & No & M8.5V & Brown Dwarf \\
LSRJ & 22nd & 600 & 1200 & 73-63 & No & M8.5V & Brown Dwarf \\
LSRJ & 22nd & 600 & 1200 & 59-46 & No & M8.5V & Brown Dwarf \\
\hline
\end{tabular}
\end{center}
This table shows all complete polarimetric data sets used for development of this data reduction pipeline. Note that the star we denote as LSRJ is {\it LSR J18353790+3259545} and the star we denote as 2MASS is an L3.5 brown dwarf: {\it 2MASS J00361617+1821104}. The star HD 20630 is a magnetic star (BY Dra type) but it is also listed on the UKIRT IRPOL list of unpolarized standards {\it http://www.jach.hawaii.edu/UKIRT/instruments/irpol/irpol\_stds.html} \citep{1974psns.coll.....G}. The statistical upper limit to the signal-to-noise ratio of each polarimetric exposure was computed empirically from polarimetric data. We computed the pixel-to-pixel variance in the polarization spectra after applying high-pass filters to isolate the statistical noise. This noise limit represents the $\sqrt{N}$ limitations from the detected photon flux as an upper limit to the data sensitivity. We also show the elevation range for the telescope to illustrate the variation in local gravity during an exposure. The slit de-rotator was used and set to parallactic for each brown dwarf exposure adding to the gravitational orientation changes between exposures. The telescope was defocused for several bright targets to investigate this technique for increasing the exposure time to saturation. The Yes- indicates some defocus while Yes+ indicates substantial defocus for very bright targets. The spectral classification and star type from SIMBAD are shown in the last two columns. See the text for details. \\
\end{table*}
\subsection{Polarimetric Modulation and Demodulation}
In this paper we use the standard Stokes vector formalism to describe polarized spectra. Linear polarization is denoted as $Q$ and $U$ while circular polarization is $V$. When we normalize a spectrum by the total intensity, we use lower-case symbols; for instance, $q = Q/I$.
We use the general framework for measuring polarization as a {\it modulation} and {\it demodulation} process. The Stokes parameters are typically described as differences between intensities measured with retarders at different orientations. In the limit of perfect instrumentation and achromatic optics, a spectropolarimeter can create exposures that mimic the definition of the Stokes parameters. Several modulation strategies are in use in solar, space and night-time applications in order to balance the need for efficiency, redundancy, error checking through {\it null spectra} and simplicity of data analysis \citep{Tinbergen:2005up, 2003isp..book.....D, delToroIniesta:2000cg, 2013pss2.book..175S, Nagaraju:2007tn, Tomczyk:2010wta, Snik:2012jw, Snik:2009va, deWijn:2010fh}. In the typical notation, the instrument modulates the incoming polarization information into a series of measured intensities (${\bf I}_{i}$) for $i$ independent observations via the modulation matrix (${\bf O}_{ij}$) for $j$ input Stokes parameters (${\bf S}_j$):
\begin{equation}
{\bf I}_{i} = {\bf O}_{ij} {\bf S}_{j}
\end{equation}
Most night-time polarimeters and their associated data analysis packages use a modulation matrix that separates and measures individual parameters of the Stokes vector while also providing redundant information for use in characterizing instrument performance \citep{1993A&A...278..231S, Donati:1997wj}. In the {\it Stokes definition} modulation scheme, there are 6 exposures recorded, each pair corresponding to an independent Stokes parameter ($QUV$).
\begin{equation}
\label{normmod}
{\bf O}_{ij} =
\left ( \begin{array}{rrrr}
1 & +1 & 0 & 0 \\
1 & -1 & 0 & 0 \\
1 & 0 & +1 & 0 \\
1 & 0 & -1 & 0 \\
1 & 0 & 0 & +1 \\
1 & 0 & 0 & -1 \\
\end{array} \right )
\end{equation}
In ESPaDOnS, FORS and other instruments, additional redundancy is achieved by making another set of measurements using the same modulation matrix but with all retarders rotated by 180 degrees \citep{Bagnulo:2009bz, Donati:1999dh, 1993A&A...278..231S}. In LRISp, this type of modulation is accomplished in two separate optical configurations (for two separate exposures). A rotating super-achromatic half-wave retarder plate (HWP) is mounted in front of the analyzer. The HWP is rotated through the sequence [0$^\circ$, 45$^\circ$, 22.5$^\circ$, 67.5$^\circ$] in order to accomplish linear polarization modulation in 4 exposures with beam swapping. Circular polarization is measured by rotating a second achromatic quarter-wave retarder plate (QWP) into the optical path using the calibration filter wheel. This QWP is fixed at a single orientation, and modulation is accomplished by rotating the HWP to 0$^\circ$ and 45$^\circ$ behind the QWP.
In the Stokes definition scheme, the calculation of each Stokes parameter from intensity spectra follows Equation \ref{eqn_stokesdef_mod} and is implemented as a series of normalized intensity differences recorded in two exposures, assuming perfect modulation and achromatic optics:
\begin{equation}
\label{eqn_stokesdef_mod}
q = \frac{Q}{I} = q_0 + q_1 = \frac{ I_0 - I_1}{I_0+I_1} - \frac{I_2-I_3}{I_2+I_3}
\end{equation}
We wish to highlight that these normalized intensity differences, when assumed to represent a Stokes parameter, can introduce several types of instrumental error while also ignoring cross-talk. In dual-beam systems, two pairs of spectra are recorded in two exposures. Thus each part of the ratio is subject to instrumental uncertainties that are introduced between exposures.
In many night-time spectropolarimeters, the instrument is designed so that the cross-talk is below some nominal design value. Chromatic effects are often just minimized and subsequently left uncalibrated, but no additional spectra are used to compute a Stokes parameter. By using additional spectra, or additional calibrations, cross-talk can be further minimized. Each source of polarimetric error must be considered when choosing an optimal modulation scheme and associated data processing algorithms.
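To make the bookkeeping of Equation \ref{eqn_stokesdef_mod} explicit, a minimal numpy sketch of the beam-swapped single-difference calculation follows; the function and array names are our own illustrative choices, and a real reduction would first apply the wavelength and flat-field corrections discussed below.
\begin{verbatim}
import numpy as np

def stokes_q(i0, i1, i2, i3):
    # Normalized Stokes q from two dual-beam exposures.
    # i0, i1: orthogonally polarized beams at HWP = 0 deg.
    # i2, i3: the same two beams at HWP = 45 deg.
    return (i0 - i1) / (i0 + i1) - (i2 - i3) / (i2 + i3)

# Example with synthetic, noiseless 2% polarized spectra; the
# beams swap between exposures, so the single differences add:
n = 4
i0, i1 = 0.51 * np.ones(n), 0.49 * np.ones(n)
print(stokes_q(i0, i1, i1, i0))  # 0.04 under this convention
\end{verbatim}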
Other instruments choose to modulate and measure all Stokes parameters using only four measurements or pursue less redundant but more efficient schemes \citep{deWijn:2010fh, delToroIniesta:2000cg, Tomczyk:2010wta, Snik:2012jw, Snik:2009va, Nagaraju:2007tn, Keil:2011wj, Elmore:2010ip, 1992SPIE.1746...22E}. For instance, one can use alternate retardance and fast axis orientations to give a modulation matrix that uses only four exposures to measure a Stokes vector with maximal efficiency.
One recovers the input Stokes vector from the series of intensity measurements by inverting the modulation matrix (${\bf O}$). If the matrix is not square and not unitary (as in the Stokes definition scheme), one can simply solve the over-specified system of equations via the normal least-squares formalism:
\begin{equation}
\label{eqn_demod}
{\bf S} = \left( {\bf O}^T {\bf O} \right)^{-1} {\bf O}^T {\bf I}
\end{equation}
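For concreteness, a minimal numpy sketch of this least-squares demodulation (Equation \ref{eqn_demod}) using the ideal modulation matrix of Equation \ref{normmod} follows; this is an illustration under perfect-modulation assumptions, not the LRISp pipeline itself, and all names are our own.
\begin{verbatim}
import numpy as np

# Stokes definition modulation matrix: six exposures
# measuring I +/- Q, I +/- U and I +/- V.
O = np.array([[1,  1,  0,  0],
              [1, -1,  0,  0],
              [1,  0,  1,  0],
              [1,  0, -1,  0],
              [1,  0,  0,  1],
              [1,  0,  0, -1]], dtype=float)

def demodulate(intensities):
    # Least-squares inversion of I = O S; `intensities` has
    # shape (6, n_wavelengths) and the result holds one Stokes
    # vector (I, Q, U, V) per wavelength bin.
    S, *_ = np.linalg.lstsq(O, intensities, rcond=None)
    return S

# Round-trip check with a synthetic 5% polarized flat spectrum:
S_in = np.outer([1.0, 0.05, 0.0, 0.0], np.ones(8))
print(np.allclose(demodulate(O @ S_in), S_in))  # True
\end{verbatim}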
Other modulation schemes are easily created using tunable liquid crystals, and the modulation matrix does not need to have any particular symmetry. We have implemented this ourselves on other spectropolarimeters \citep{Harrington:2010km,Harrington:2011fz}. In systems with more complex or less redundant modulation schemes, additional calibrations with upstream polarizers and retarders are often used to achieve the highest calibration accuracy and remove residual chromatic effects from the demodulation process. The demodulation process of Equation \ref{eqn_demod} can be used regardless of modulation scheme.
Though often not performed, this same kind of demodulation process can be applied to {\it Stokes definition} type modulation schemes to remove residual chromatic errors if the imperfections are estimated through a calibration procedure. For instance, in LRISp, there are calibration polarizers mounted in the filter wheel ahead of the rotating HWP retarder. This polarizer can be used to measure some of the chromatic properties of the HWP and to modify the modulation matrix of Equation \ref{normmod} to account for imperfections. Polarized standard stars or more elaborate calibration optics can be used to derive the system Mueller matrix and correct for some residual cross-talk. Alternatively, the daytime sky calibrations we outline here and elsewhere can be used to measure the system Mueller matrix and to apply corrections to the demodulated spectra to account for any uncalibrated cross-talk to some residual error level \citep{Harrington:2011fz}, Harrington et al. 2015.
\subsection{Geometric calibrations - wavelength stabilization}
The first steps in spectral extraction are locating the spectral orders and identifying the basic optical configuration. When combining 6 exposures to make a single $quv$ spectral data set, any instrumental instability can produce spurious signals. In general, spectral order curvature, anamorphic magnification, and tilt of the slit image against the CCD pixel grid can all impact polarimetric data. This is especially true given telescope guiding imperfections and instrument flexure. For the LRISp red channel, the basic parameters of our optical extraction are shown in Figure \ref{waveln_sampres}. We derive several parameters from Gaussian fits to arc lamp calibration exposures. The resolving power (R=$\lambda / \delta\lambda$) shown in Figure \ref{waveln_sampres} is R=2500 at 800nm, rising to R=3500 at 1000nm. In this configuration, the spectra are oversampled: from Gaussian fits to arc lamp spectral lines, we find the spectra to be sampled at 4.5 to 5.5 pixels in the Gaussian full width at half maximum (FWHM). With this oversampling of 0.56{\AA} to 0.59{\AA} per pixel, we can test for several instrumental artifacts and apply several types of data post-processing filters to remove noise sources.
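For reference, a short, illustrative scipy sketch of the Gaussian arc-line fit behind the sampling and resolving-power numbers quoted above; the synthetic line parameters are our own and merely chosen to resemble the measured values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

# Synthetic arc line: 5 pixel FWHM at 0.58 Angstrom per pixel.
pix = np.arange(40, dtype=float)
data = gaussian(pix, 1000.0, 20.3, 5.0 / 2.3548, 10.0)
data += np.random.normal(0.0, 5.0, pix.size)

popt, _ = curve_fit(gaussian, pix, data, p0=[900.0, 19.0, 2.0, 0.0])
fwhm_pix = 2.3548 * abs(popt[2])    # FWHM = 2 sqrt(2 ln 2) sigma
print("R =", 9000.0 / (fwhm_pix * 0.58))   # ~3100 at 900 nm
\end{verbatim}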
With the arc line exposures, we measure how the wavelength coordinates change as the HWP rotates through the typical modulation sequence of 0$^\circ$, 45$^\circ$, 22.5$^\circ$, 67.5$^\circ$. The rotating HWP causes a drift of roughly 0.15 pixels between the separate modulation states. For the LRISp HWP as mounted, the offsets average [0, 0.08, 0.13, 0.10] pixels referenced to the first exposure. There is also a mild wavelength dependence across the CCD. Note that the two Stokes $q$ exposures would show a wavelength drift of 0.1 pixels between modulated images, resulting in an imperfect subtraction that introduces artifacts resembling the derivative of the intensity profile with wavelength. The Stokes $u$ exposures would show substantially less wavelength drift between modulation states, but would be offset from the Stokes $q$ spectra by 0.1 pixels in wavelength.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\linewidth, angle=90]{fig1.eps}
\caption{ \label{waveln_sampres} The derived spectral sampling and spectral resolution for the LRISp red channel. The black parabolic curve shows the spectral sampling (picometers per pixel) using the left-hand y-axis. The mapping between the wavelength solution and each spectral pixel is found by comparing the top and bottom x-axes. The spectral resolution is derived from the FWHM of the arc lamp Gaussian fits as defined in the text. The arc lines typically have a FWHM of 4.5 to 5.5 spectral pixels. The spectral resolution is derived as the wavelength divided by the FWHM of the arc line fits and is shown with the symbols using the right-hand y-axis. The resolution was between 2500 and 3500 across the sampled wavelengths. The blue symbols show the spectral resolution of the top polarized beam. The red symbols show the spectral resolution of the bottom polarized beam. The red curve in between these symbols shows the average spectral resolution of both beams. }
\end{center}
\end{figure}
Slit guiding for Keck is software-referenced and the user can vary the stellar location along the length of the slit. In addition, as we describe later, we had substantial guiding drifts while tracking our targets. We observed at high elevations and saw the drifts expected for an altitude-azimuth telescope with a low-bandwidth guider control system. With drift of the optical beam along the slit, uncorrected geometrical tilt of the dispersed spectra will lead to wavelength drifts between exposures.
For the red channel, we find significant tilt of the monochromatic slit images against the pixel rows. There are roughly 2 pixels of wavelength change from the bottom to the top of the imaged slit over the 300 spatial pixels sampled. We observed up to 40 spatial pixels of guiding drift during our observing run. This spatial drift, combined with the spectral tilt, would give wavelength instabilities of up to half a pixel if the geometrical effect were not compensated in the pipeline.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\linewidth, angle=90]{fig2.eps}
\caption{ \label{2d_example} An example of an extracted 2-D spectrum for a typical LSRJ brown dwarf observation. The two beams, observed August 22nd, are shown after geometric extraction and tilt correction. Several night sky glow lines are visible as vertical stripes. The top beam has noticeably higher throughput than the bottom beam, as was typical for this spectrograph configuration. The intensity was linearly scaled, with blue and black colors corresponding to the highest intensities.}
\end{center}
\end{figure}
To remove this geometrical tilt, the data is linearly up-sampled to a 0.01 pixel grid, shifted spectrally to compensate for the tilt, and then averaged back down to the nominal 1-pixel sampling. An example of an extraction after tilt correction is shown in Figure \ref{2d_example}. In this Figure, the stellar spectrum is in the center of the two extracted beams. The 4000 spectral pixels of the detector are shown on the x-axis. We show only a subset of the extracted spatial pixels to clearly display the stellar spectrum.
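A minimal sketch of this up-sample, shift, and rebin step is shown below in Python/NumPy. The function and variable names are hypothetical, and the edge handling is simplified relative to the pipeline.
\begin{verbatim}
import numpy as np

def detilt_row(row, shift_pix, up=100):
    """Shift one spatial row in wavelength by a sub-pixel amount.

    Linearly up-samples to a 1/up pixel grid, rolls by the nearest
    integer number of fine pixels, then block-averages back to the
    native sampling. (np.roll wraps at the array ends; the pipeline
    treats edges more carefully.)
    """
    n = row.size
    fine_x = np.arange(n * up) / up
    fine = np.interp(fine_x, np.arange(n), row)   # 0.01-pixel grid
    fine = np.roll(fine, int(round(shift_pix * up)))
    return fine.reshape(n, up).mean(axis=1)       # back to 1-pixel bins

# Example: remove a 2-pixel tilt across 300 spatial rows.
spec2d = np.random.rand(300, 4000)               # stand-in (spatial, spectral)
tilt = np.linspace(0.0, 2.0, spec2d.shape[0])    # measured tilt per row
detilted = np.array([detilt_row(r, -s) for r, s in zip(spec2d, tilt)])
\end{verbatim}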
\section{Automatically Removing Cosmic Rays With Iterative Filters}
The LRIS red channel pixels are 300 microns deep, giving a fairly high rate of cosmic ray hits in long exposures \citep{Rockosi:2010ez}. An example of the background region in a typical 10 minute brown dwarf exposure is shown in Figure \ref{sky_background2d}. Bright night sky glow lines are seen in addition to many cosmic ray hits. Cosmic ray damage in this CCD often spans tens of pixels, and most spectral pixels are contaminated at some level, making sensitive polarimetry difficult.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\linewidth, angle=90]{fig3.eps}
\caption{ \label{sky_background2d} This Figure shows a small region of the 2-D spectrum used to extract and remove the sky-glow lines in a single polarimetric exposure. The two polarized beams are shown separated by a white line. Typical cosmic ray hit rates are seen. }
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\linewidth, angle=90]{fig4.eps}
\caption{ \label{cosray_rejection_example} An example of the cosmic ray iterative filtering process. The black curve shows the raw stellar spatial profile (a cut through the data orthogonal to the spectral direction, representing the local seeing, telescope jitter and optical imperfections). A very large cosmic ray hit is seen damaging several pixels in the middle of the spatial profile at pixels 5 to 10. The widespread damage requires the iterative solution to reject the damaged data. After iterating, 5 spatial pixels are rejected and ignored in the shift-n-scale profile fit. The blue curve shows the median spatial profile used in the fit. The red curve shows the spatial profile replacements. The right-hand y-axis shows the noise estimates used in the filter. The triangle symbols below show the per-pixel noise estimates used in the rejection. A 2-sigma filter was applied in this case. }
\end{center}
\end{figure}
We define the {\it spatial profile} as a trace through the data that is orthogonal to the wavelength direction, taken after geometric calibration of the extracted spectral data has been performed. This spatial profile contains contributions from the atmospheric seeing, telescope jitter, guiding imperfections, and optical imperfections (ghosts), and can be used to apply data-derived filters for various error sources. An example spatial profile is shown in Figure \ref{cosray_rejection_example}.
An iterative method has been developed to use the spatial profile to filter out these cosmic ray hits, based on {\it optimal extraction} \citep{Horne:1986bg, Marsh:1989jo}. The spatial profile is computed over a range of wavelengths. We find that 100 spectral pixels is a good compromise between increasing the SNR of the median spatial profile and ensuring that the profile shape still matches that at the wavelength being filtered.
The first step in the filter is to shift and scale the median spatial profile to the individual spatial profile of interest. The least squares solution is implemented over the $i$ spatial pixels that independently contribute to the fit.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.65\linewidth, angle=90]{fig5.eps}
\caption{ \label{polarimetry_2mass} This Figure shows 2MASS set 3 from August 23rd. Stokes $quv$ polarization was computed using the standard {\it Stokes definition} modulation assumption with beam swapping as $\frac{ I_0 - I_1}{I_0+I_1} - \frac{I_2-I_3}{I_2+I_3}$. The cosmic ray rejection filter was applied with a noise threshold of 1.8$\sigma$. A 10-count background noise level was set. A 50-pixel smoothing width was used to compute the median spatial profile in the shift-n-scale fitting algorithm. The black lines show the computed polarization spectra before cosmic ray filtering. The blue line shows the resulting polarization after the iterative cosmic ray filtering is applied. Cosmic rays are present in a substantial fraction of each $quv$ spectrum. The lower right panel shows the intensity spectrum with a black line in units of 100s of detected counts per spectrum. The sky glow spectrum extracted from spatial pixels outside the region illuminated by the star is shown in red. For 2MASS, the sky glow lines are often brighter than the star at certain wavelengths. }
\end{center}
\end{figure*}
We model the shift-and-scale problem with a general notation where $D$ denotes the data to be fit, $P$ denotes the profile used to do the fitting, and $\partial/\partial x$ denotes the derivative in the spatial direction. The data ($D$) is modeled as a sum of three terms: a constant ($a$) times the profile ($P$), a constant ($b$) times the derivative of the profile, and an additive constant ($c$):
\begin{equation}
D = a P + b \frac{\partial P}{\partial x} + c
\end{equation}
The total error to be minimized is a sum over all spatial pixels:
\begin{equation}
\label{eqn_rotmat_error}
E = \sum \limits_{i} \epsilon_i^2 = \sum \limits_{i} \left( D_i - a P_i - b \left( \frac{\partial P}{\partial x} \right)_i - c \right)^2
\end{equation}
In order to solve the least squares problem, we take the partial derivatives with respect to the three coefficients (a,b,c) and set them equal to zero:
\begin{equation}
\label{partials}
\frac{ \partial E } {\partial a} = \frac{ \partial E } {\partial b} = \frac{ \partial E } {\partial c} = 0
\end{equation}
After taking the partial derivatives, collecting terms, and setting each expression to zero, we obtain a system of three equations in the three unknowns, where the sums run over the spatial pixels $i$:
\begin{equation}
\left ( \begin{array}{c}
\sum_i P_i D_i \\
\sum_i \left( \frac{\partial P}{\partial x} \right)_i D_i \\
\sum_i D_i \\
\end{array} \right ) =
\left ( \begin{array}{ccc}
\sum_i P_i^2 & \sum_i P_i \left( \frac{\partial P}{\partial x} \right)_i & \sum_i P_i \\
\sum_i P_i \left( \frac{\partial P}{\partial x} \right)_i & \sum_i \left( \frac{\partial P}{\partial x} \right)_i^2 & \sum_i \left( \frac{\partial P}{\partial x} \right)_i \\
\sum_i P_i & \sum_i \left( \frac{\partial P}{\partial x} \right)_i & \sum_i 1 \\
\end{array} \right )
\left ( \begin{array}{c}
a \\
b \\
c \\
\end{array} \right )
\end{equation}
With the least-squares solution, we can compute the difference between the shift-n-scaled median spatial profile and the individual spatial profile to be corrected. The difference between these two spatial profiles is then compared against a measure of the expected noise at every spatial location. The expected noise has two contributing terms. One term is a constant background noise estimate representing the read, dark and scattered light background variation. The second term accounts for the spatially varying flux levels and is proportional to the square root of the number of detected counts. This shot-noise term is computed using the shift-n-scaled median spatial profile.
This process is iterated until convergence is achieved and no spatial pixels show noise levels above a user-determined set of thresholds. At each step of the loop, the single worst offending point above the noise threshold is rejected from consideration by setting the weighting in the fit to zero. Once the iteration has converged, it delivers a shift-n-scaled median profile that fits all non-rejected points below the noise threshold. After convergence, all rejected spatial profile points are replaced with the shift-n-scaled median spatial profile values. This iteration ensures that very large cosmic ray strikes are properly corrected without undue influence in the fitting process.
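A compact sketch of this iterative shift-n-scale rejection is given below in Python/NumPy (the pipeline itself is implemented in IDL). The median profile $P$ is assumed to have been computed over the neighboring 100 spectral pixels; the noise parameters and function names are illustrative only.
\begin{verbatim}
import numpy as np

def shift_n_scale_fit(D, P, w):
    """Weighted least-squares fit D ~ a*P + b*dP/dx + c."""
    dP = np.gradient(P)
    A = np.column_stack([P, dP, np.ones_like(P)])
    coef, *_ = np.linalg.lstsq(A * w[:, None], D * w, rcond=None)
    return coef                      # (a, b, c)

def clean_profile(D, P, bkg_noise=10.0, nsig=2.0):
    """Iteratively reject and replace cosmic-ray pixels."""
    w = np.ones_like(D)
    while True:
        a, b, c = shift_n_scale_fit(D, P, w)
        model = a * P + b * np.gradient(P) + c
        # constant read/background term plus shot noise from model counts
        noise = np.sqrt(bkg_noise**2 + np.clip(model, 0.0, None))
        resid = np.abs(D - model) / noise
        resid[w == 0] = 0.0          # ignore already-rejected pixels
        worst = int(np.argmax(resid))
        if resid[worst] <= nsig:
            break                    # converged: nothing above threshold
        w[worst] = 0.0               # reject single worst pixel, re-fit
    return np.where(w == 0, model, D), w
\end{verbatim}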
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.35\linewidth, angle=90]{fig6a.eps}
\includegraphics[width=0.35\linewidth, angle=90]{fig6b.eps}
\includegraphics[width=0.35\linewidth, angle=90]{fig6c.eps}
\includegraphics[width=0.35\linewidth, angle=90]{fig6d.eps}
\caption{ \label{psfs_polarimetric_manystars} The spatial profile of the extracted spectra after spatial trimming for guider correction and optimal extraction filtering. The unpolarized standard HD 174160 is in the upper left, from August 23rd, with an extraction width of 60 pixels. The unpolarized but magnetic BY Dra-type star HD 20630 is in the upper right, from August 22nd, with an extraction width of 80 pixels. The magnetic flare star EV Lac is in the lower left, from August 22nd, with an extraction width of 30 pixels. The asteroid Ceres is in the lower right, from August 23rd, with an extraction width of 50 pixels. All 12 spatial profiles are shown (6 exposures for $quv$ and 2 polarized beams per exposure). }
\end{center}
\end{figure*}
An example of this process is shown in Figure \ref{cosray_rejection_example}. In this example, the black curve shows a spatial profile for a 2MASS brown dwarf exposure from August 22nd. A very large cosmic ray hit is seen at spatial pixels 5 to 10. This cosmic ray shows a count level an order of magnitude larger than the detected stellar flux. The filter iterates through the shift-n-scale process and identifies spatial pixels with residuals above the user-defined thresholds for read and shot noise. After all offending points have been rejected, the red curve shows the values used to repair the spatial profile. The triangle symbols in Figure \ref{cosray_rejection_example} show the residual noise levels for the points included in the fit. The user-specified threshold was set at 2$\sigma$ for this example.
Typical performance of the cosmic ray filter is shown in Figure \ref{polarimetry_2mass}. The black curve shows the $quv$ spectra with substantial cosmic ray hits contaminating a large fraction of the wavelengths covered. The blue curve shows the corresponding filtered $quv$ spectra with effective removal of the cosmic ray hits.
\section{Spatial profiles and slit guider tracking}
The Keck slit guider was used to acquire and track our targets. In some cases the guiding delivered spatial profiles which remained centered to within 5 spatial pixels for the duration of a 6-exposure polarimetric data set. For other exposures, there was drift of over 50 spatial pixels. This corresponds to roughly 20\% of the slit length (290 spatial pixels in our extractions). Thus our pipeline was configured to compute the center of light for each exposure of a polarimetric data set. The data was extracted around this detected center-of-light for each exposure to compensate for the guiding drift.
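A sketch of this re-centering step (Python/NumPy, hypothetical names):
\begin{verbatim}
import numpy as np

def center_of_light(spec2d):
    """Flux-weighted spatial centroid of a (spatial, spectral) frame."""
    profile = spec2d.sum(axis=1)          # collapse in wavelength
    x = np.arange(profile.size)
    return np.sum(x * profile) / np.sum(profile)

def extract_window(spec2d, half_width):
    """Cut a fixed window about the detected center of light."""
    c = int(round(center_of_light(spec2d)))
    return spec2d[c - half_width:c + half_width + 1, :]
\end{verbatim}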
A common technique to increase the exposure time until saturation and increase the duty-cycle of measurements is to defocus the telescope. We tested the effects of defocus on achieving high SNRs without saturation on very bright targets. During spectral extraction, the spatial profile width was roughly 10 pixels (full width at 10\% of maximum) for in-focus brown dwarf targets. For the bright unpolarized standards, this spatial width increased to over 40 pixels at 10\% max. This focus shift changes the optical beam footprint through both QWP and HWP retarders, the polarizing beam splitter and all downstream spectrograph optics, so care must be taken with calibration (cf. \citet{Tinbergen:2007fd}).
Figure \ref{psfs_polarimetric_manystars} shows the spatial profiles for four stars to illustrate the change in telescope focus. The top panels show highly defocused observations of bright stars. The bottom panels show an in-focus brown dwarf on the bottom left and a mildly defocused Ceres exposure on the bottom right. The data reduction pipeline allows for a variable spatial extraction width, which changed between 30 and 80 pixels to accommodate this wide-ranging defocus. In addition, by including only the minimal number of spatial pixels needed to capture the delivered target flux, we minimize the impact of cosmic rays and other detector noise contributions (cosmetics, bad columns, etc).
\section{Flexure Compensation}
There is substantial wavelength drift even within a polarimetric data set due to instrument flexure. These wavelength instabilities are a major source of spurious instrumental polarization. Measurement, compensation and accurate error budgeting of these types of systematic effects are critical to the interpretation of spectropolarimetric results (cf. \citet{2013A&A...559A.103B}). Some instruments use a more redundant modulation scheme with 4 exposures per Stokes parameter (using the Stokes definition modulation scheme). These redundant schemes do provide for error assessment through the so-called {\it null-spectrum} in addition to further removal of some instrumental error sources. However, requiring 4 exposures also introduces the possibility of wavelength jitter over the longer exposure sequences as gravity changes (for Cassegrain instruments like LRISp). \citet{2013A&A...559A.103B} also found that using 4 dual-beam exposures (instead of 2 in our scheme) introduced more systematic errors through flexure instabilities. Additionally, some of our science targets have fast rotation periods, and there is noticeable change over even the 2 polarimetric exposures of a single Stokes parameter spectrum. Thus any science campaign must balance the error budget between the different types of systematic errors based on the specific use case (exposure time, flexure predictions, null spectrum requirements, overly redundant modulation for error suppression, etc).
In the brown dwarf data sets, a polarimetric observation can last over an hour. For faint targets with long exposures, enough sky glow lines are present to align the wavelength solution of each exposure. However, for brighter targets with shorter exposure times, we must use the telluric absorption lines as an absolute wavelength reference. The 930nm to 940nm bandpass has many absorption lines that can easily be used for correlation. Figure \ref{lsrj_telluric_spectrum} shows the telluric wavelength region for the first data set recorded on August 22nd. The variation in detected intensity is due mainly to telescope guiding drifts. The two orthogonally polarized beams produced by the polarizing beam splitter have noticeable throughput variations. The red curves from polarization state 0 carry roughly half the flux of the polarization state 1 beams.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig7.eps}
\caption{ \label{lsrj_telluric_spectrum} The intensity spectra for the LSRJ data set 1 from August 22nd. The detected flux is shown for each of the 6 exposures and 2 polarization states (dual beam). The polarization state 0 beam is red while polarization state 1 is blue.}
\end{center}
\end{figure}
We derive a wavelength correction for each exposure by running a cross correlation analysis. All spectra are up-sampled by a factor of 100 to derive correlations in 0.01 pixel bins. The polarization state 0 beam of the first exposure is used as a reference for all subsequent exposures and polarization states in the 6 exposure set. From these correlation functions we find the peak and derive a wavelength shift to align all exposures. These telluric corrections were typically less than a pixel, but varied between exposures and between the dual orthogonally polarized beams.
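A minimal sketch of this alignment step is shown below (a Python/NumPy stand-in for the IDL pipeline; the names, lag range, and sign convention are illustrative). In practice the correlation is restricted to a telluric band such as 930nm to 940nm.
\begin{verbatim}
import numpy as np

def telluric_shift(ref, spec, max_shift=3.0, up=100):
    """Sub-pixel shift of `spec` relative to `ref`, in native pixels."""
    n = ref.size
    fine_x = np.arange(n * up) / up
    r = np.interp(fine_x, np.arange(n), ref)    # up-sample x100
    s = np.interp(fine_x, np.arange(n), spec)   # -> 0.01 pixel bins
    r = (r - r.mean()) / r.std()
    s = (s - s.mean()) / s.std()
    lags = np.arange(-int(max_shift * up), int(max_shift * up) + 1)
    cc = [np.dot(r, np.roll(s, -k)) for k in lags]
    return lags[int(np.argmax(cc))] / up        # peak of the correlation
\end{verbatim}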
Flexure that is common to both dual orthogonally polarized beams will be removed to first order by the beam swapping applied in the nominal {\it Stokes definition} modulation scheme. For the Ceres observations we use as a fringe standard, we find a 0.5 pixel drift in wavelength, but this shift is consistent between the two polarization states. Provided the wavelength drift between the two polarized beams is consistent, the primary error proportional to intensity derivatives will be canceled when calculating the polarization spectra using the difference method (a-b)/(a+b). Additional errors proportional to the second derivative of the intensity will become significant if the drift between exposures is large and uncompensated.
Even though these wavelength shifts are small, they can have a large impact on the computed polarization if differential effects occur. For some of our data sets, we measure wavelength offsets of roughly 0.2 pixels between the dual orthogonally polarized beams. As an example, the derived spectral pixel shifts for the second August 22nd LSRJ data set are shown in Figure \ref{lsrj_telluric_pixel_shifts_set2}. There is a noticeable difference in behavior between the polarized beams for the final exposure 5.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig8.eps}
\caption{ \label{lsrj_telluric_pixel_shifts_set2} The pixel shifts for the LSRJ data set 2 from August 22nd. The polarization state 0 beam is red while polarization state 1 is blue. }
\end{center}
\end{figure}
This difference of 0.2 pixels in wavelength for one exposure, though small, introduces a very large systematic error in polarization. Since the computed $quv$ profiles are differences of intensities, any wavelength drift imprints a $quv$ signature that is proportional to the derivative of the intensity with wavelength. This small 0.2 pixel wavelength drift is enough to imprint a 0.5\% Stokes $v$ signature change when the corresponding intensity spectrum has a strong absorption line. Applying a wavelength drift correction is critical to deriving accurate polarization spectra with LRISp.
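The scale of this effect follows from a first-order Taylor expansion. For two otherwise identical beams offset in wavelength by $\Delta\lambda$, the measured polarization acquires a spurious term
\begin{equation}
q(\lambda) \approx \frac{I(\lambda) - I(\lambda + \Delta\lambda)}{I(\lambda) + I(\lambda + \Delta\lambda)} \approx -\frac{\Delta\lambda}{2\,I} \frac{\partial I}{\partial \lambda} .
\end{equation}
For a 0.2 pixel offset on a line wing where the intensity changes by several percent per pixel, this expression gives spurious signals of a few tenths of a percent, consistent with the Stokes $v$ changes described above.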
Another added benefit of including cross-correlation analysis in our LRISp pipeline is that all spectra can be easily referenced to a common wavelength. Figure \ref{ripple_spectral_drift_all_stars} shows the wavelength shift (in pixels) derived from correlating the Ceres telluric spectra with all exposures for all August 22nd and 23rd data sets. The derived flexure was roughly $\pm$2 pixels within a night and a fraction of a pixel within a data set.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig9.eps}
\caption{ \label{ripple_spectral_drift_all_stars} The wavelength drift (in pixels) for all $quv$ data sets derived through cross correlation of telluric lines with the nominal Ceres data set on August 23rd. The symbol for $q$ is the triangle, $u$ is the diamond and $v$ is the cross. The left side shows August 22nd data while the right side shows August 23rd. The wavelength drift is roughly $\pm$ 2 pixels.}
\end{center}
\end{figure}
Additionally, we have tested the wavelength stability across the detector. In principle, the flexure could induce wavelength changes that are not completely removed by a single shift for all wavelengths. Any optical distortion or second order effects could create a more complex functional dependence of the wavelength solution on flexure. We ran a cross-correlation of the intensity spectra within a complete polarimetric data set. We find that the wavelength solution is well corrected by a single shift of all spectral pixels to within 0.05 pixels. We find perturbations in wavelength regions where we detect spectropolarimetric signatures, as we would expect from magnetic field effects. As expected, for the highly defocused standard stars observed without the ADC, there are shifts in the wavelength solution of up to 1 pixel across the detector. However, this drift only influences the unpolarized standard star observations.
\section{Spectral Fringes}
Another feature of LRISp data at high SNR is spectral fringing caused by interference within the achromatic retarders. Fringes such as these are common in night-time spectropolarimeters such as the Intermediate dispersion Spectrograph and Imaging System (ISIS) on the William Herschel Telescope (WHT) or the Anglo-Australian Telescope (AAT) with the Royal Greenwich Observatory (RGO) spectrograph \citep{Aitken:2001ih, Harries:1996vf, Donati:1999dh}. For most of these spectrographs, over relatively limited wavelength regions, the spectropolarimetric fringe follows a simple functional form. Chirp functions or simple harmonic filters in Fourier space are enough to suppress the fringes below the statistical noise limits.
For LRISp, the fringe depends on the Stokes parameter being measured. Stokes $qu$ measurements use only a single rotating half-wave retarder (HWP), giving a fringe of 0.2\% amplitude at 850nm. Measuring Stokes $v$ with LRISp involves adding a second achromatic quarter wave plate (QWP) fixed in front of the HWP. This second optic complicates the fringe and increases the amplitude to roughly 0.5\% at 850nm, as shown in Figure \ref{evlac_spectral_ripple}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\linewidth, angle=90]{fig10.eps}
\caption{ \label{evlac_spectral_ripple} This Figure shows the $quv$ spectra in the 850nm region from EV Lac. The SNR is estimated to be about 1200 to 1800 in each Stokes parameter. A strong spectral fringe is seen dominating the $quv$ spectra. The fringe amplitude is well above the statistical noise limits and is well sampled with many spectral pixels.}
\end{center}
\end{figure}
The exact form of the fringe power spectrum depends on several factors. First, the telescope was defocused for the brighter targets. During spectral extraction, the spatial profile width varied from roughly 10 pixels full width at 10\% max to over 40 pixels. This means the beam footprint as the light passed through both QWP and HWP retarders was substantially different. Figure \ref{psfs_polarimetric_manystars} shows the spatial profiles for four stars to illustrate the change in telescope focus. This defocus changes the size of the beam and also where the beam from each field angle passes through the optic.
There are two common methods to subtract this fringing. First, an unpolarized standard star is observed under the same conditions and the corresponding spectral fringe is subtracted from the science target spectra. Second, Fourier filters can be applied to the science target spectra without any need for a calibration target provided the frequencies to be filtered are known for the specific instrument configuration.
In the first method, the calibration standard can be observed at very high SNRs and simply subtracted without degrading the SNR of the target. This assumes that the spectral fringe is stable between the calibration standard and the science target. One potential error is that telescope guiding drifts can change the angle and location of the beam through the optic. Another is that gravity- or time-dependent instrument changes (flexure, temperature, focus, etc) can substantially alter interference fringes.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig11.eps}
\caption{ \label{ripple_manystars_CrH_short} The spectral fringes present in 4 observed calibrator targets for the 855nm to 865nm wavelength region. The unpolarized standard HD 174160 from August 23rd had an SNR of 1200; the unpolarized but magnetic star HD 20630 had an SNR of 1000 on August 22nd and 2000 on August 23rd. Ceres, though presenting continuum polarization, matches the unpolarized standards from August 23rd with an SNR of 2000. The blue curve shows the average of all 4 individual spectra.}
\end{center}
\end{figure}
Figure \ref{ripple_manystars_CrH_short} shows the $quv$ spectra for the four possible calibration standards we observed on August 22nd and 23rd. A clear and repeatable spectral fringe signature is present. Ceres had an SNR of 2000. The two unpolarized but magnetic HD 20630 observations had SNRs of 1000 on August 22nd and 2000 on August 23rd. The unpolarized standard HD 174160 had an SNR of 1200. Though Ceres is an asteroid with continuum polarization, it is not expected to show significant spectropolarimetric signatures in this wavelength region. Since the telescope focus changed substantially between the unpolarized standards and Ceres, some systematic variation is expected between the observations. There is some small but statistically significant variation between the fringes detected in Figure \ref{ripple_manystars_CrH_short}. We note that the Ceres spectra are very similar to all the unpolarized standard observations. Additionally, the Ceres spectra were only mildly defocused and match the brown dwarf focus position much more closely than the unpolarized standards.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.45\linewidth, angle=0]{fig12a.eps}
\includegraphics[width=0.45\linewidth, angle=0]{fig12b.eps}
\includegraphics[width=0.45\linewidth, angle=0]{fig12c.eps}
\includegraphics[width=0.45\linewidth, angle=0]{fig12d.eps}
\caption{ \label{ev_lac_fft_power_chromatic} The EV Lac power spectra for the $quv$ fringe in select spectral bandpasses. The central wavelengths are 817nm, 875nm, 934nm and 992nm with 1000 spectral pixels per computation. Blue shows Stokes $q$. Green shows Stokes $u$. Red shows Stokes $v$. The Stokes $v$ power spectra in red show multiple peaks that have significant frequency width. The dashed lines show a simple model with two cosine functions multiplied together to give a measure of the single-frequency peak width. The cosine functions have frequencies of 0.33 nm$^{-1}$ and 1.23 nm$^{-1}$, which give rise to the sum frequency of 1.56 nm$^{-1}$ and the difference frequency of 0.90 nm$^{-1}$ in the power spectrum. }
\end{center}
\end{figure*}
The second method for removing these spectral fringes involves computing the power spectrum of each individual Stokes parameter and then filtering unwanted frequencies. We compute the power spectrum conventionally as $|\mathrm{FFT}(quv)|^2$.
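A minimal sketch of this power-spectrum computation and a simple notch filter is given below (Python/NumPy; the frequency list, notch width, and names are assumptions for illustration).
\begin{verbatim}
import numpy as np

def fringe_notch(quv, fringe_freqs, dlam_nm, width=0.05):
    """Power spectrum |FFT|^2 and a notch filter at given frequencies.

    quv          : one Stokes spectrum on a uniform wavelength grid
    fringe_freqs : fringe frequencies to suppress, in 1/nm
    dlam_nm      : wavelength sampling in nm per pixel
    """
    n = quv.size
    F = np.fft.rfft(quv)
    f = np.fft.rfftfreq(n, d=dlam_nm)      # frequency axis in 1/nm
    power = np.abs(F)**2                   # conventional power spectrum
    for f0 in fringe_freqs:
        F[np.abs(f - f0) < width] = 0.0    # zero the fringe peak
    return np.fft.irfft(F, n), f, power
\end{verbatim}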
The $quv$ fringe has significant wavelength dependence in the 800 to 1000nm region. We choose 817nm, 875nm, 934nm and 992nm to illustrate the wavelength dependence. We use small wavelength intervals of 1000 spectral pixels centered on the bandpasses to illustrate the changes with wavelength. Figure \ref{ev_lac_fft_power_chromatic} shows the power spectra for Stokes $q$, $u$, and $v$ in all four wavelength bandpasses.
There are typically multiple frequency components, particularly in the Stokes $v$ measurements. For instance, the Stokes $qu$ spectra have substantial power at a frequency of 1.56 nm$^{-1}$ in the 875nm bandpass. The Stokes $v$ spectra are more complex in the same bandpass, with power at both 1.56 nm$^{-1}$ and 0.90 nm$^{-1}$. Modeling this $v$ power spectrum as the product of two cosine functions with frequencies of 0.33 nm$^{-1}$ and 1.23 nm$^{-1}$ reproduces the sum frequency of 1.56 nm$^{-1}$ and the difference frequency of 0.90 nm$^{-1}$. This simple model is overplotted in every panel of Figure \ref{ev_lac_fft_power_chromatic}. Since the Stokes $v$ data has two retarders mounted in the beam, complex interactions are expected.
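This structure is simply the product-to-sum identity: for two cosines with frequencies $f_1$ and $f_2$,
\begin{equation}
\cos(2\pi f_1 \lambda)\,\cos(2\pi f_2 \lambda) = \tfrac{1}{2}\left[\cos\!\big(2\pi (f_1 + f_2)\lambda\big) + \cos\!\big(2\pi (f_2 - f_1)\lambda\big)\right] ,
\end{equation}
so $f_1 = 0.33$ nm$^{-1}$ and $f_2 = 1.23$ nm$^{-1}$ place the two power-spectrum peaks at $f_1 + f_2 = 1.56$ nm$^{-1}$ and $f_2 - f_1 = 0.90$ nm$^{-1}$.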
The wavelength dependence of the fringe is most readily seen as the change in period of the oscillations. The relative strength of the fringe at various frequencies also changes. At 817nm, the fringe for Stokes $v$ is largely contained at frequencies of 1 nm$^{-1}$ and 2 nm$^{-1}$, but with two separate frequency peaks seen at each. By 875nm, the Stokes $v$ power has shifted to lower frequencies, and there are two separate but identifiable peaks within each main frequency band. At the longer wavelength of 934nm, there are two very clear independent frequencies below 1 nm$^{-1}$ and two much smaller peaks around 1.5 nm$^{-1}$. The fringe variation in $q$ and $u$ in the 934nm bandpass is barely detectable. However, in the 992nm bandpass, the $q$ and $u$ variability is clearly detectable again.
We compared the Fourier filter method with a direct subtraction of a stellar calibration observation and find good agreement where we have sufficient SNR in the Ceres spectrum. However, the method of directly subtracting standard star observations taken in the same optical configuration is robust and delivers higher SNRs when the calibration standards are observed at very high SNRs.
Figure \ref{evlac_spectral_ripple_comparison} shows the EV Lac $quv$ spectra after subtraction of the spectral fringe using two different sets of calibrators. We tested the fringe subtraction methodology by using different groups of observations to create an average spectrum at much higher SNRs. In one group, we simply averaged all observations from Table \ref{Table_LRISp_Observations} after applying the various flexure and wavelength drift corrections to create an average fringe spectrum. For a second fringe spectrum, we averaged only the non-brown-dwarf targets at high SNR observations from Table \ref{Table_LRISp_Observations} which excludes the LSRJ and 2MASS stars. These two spectra allow us to verify consistent results in effectively subtracting this spectral fringe.
Figure \ref{evlac_spectral_ripple_comparison} shows that the resulting fringe-subtracted EV Lac $quv$ spectra are essentially identical. The SNR is estimated using the standard deviation of each $quv$ spectrum in both the blue and red continuum. The blue continuum runs from 812.2nm to 816.3nm while the red continuum runs from 820.9nm to 824.9nm, with each band covering 70 spectral pixels, outlined by the dashed blue vertical lines in Figure \ref{evlac_spectral_ripple_comparison}. This method gives SNRs from 950 up to 1640 for the red curve and 1310 up to 1800 for the black curve. As an independent test of the fringe subtraction, we apply a high-pass filter to the data by subtracting a 9-pixel boxcar-smoothed spectrum from the fringe-subtracted data sets. This 9-pixel smoothing width represents the full width at 10\% of maximum of the delivered optical instrument profile measured from arc line exposures. The residual variations after filtering are dominated by shot noise and any residual cosmic ray damage. The SNRs of these high-pass filtered data sets are in the range of 1700 to 2200, showing effective removal of the fringes down to typical statistical limits of 0.05\% in any individual $quv$ spectrum.
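The high-pass filter used for this test is straightforward; a sketch (Python/NumPy, hypothetical names) is:
\begin{verbatim}
import numpy as np

def highpass_boxcar(spec, width=9):
    """Subtract a `width`-pixel boxcar-smoothed copy of the spectrum.

    With width ~ the full width at 10% of maximum of the instrument
    profile, real spectral features are largely removed and the
    residuals are dominated by noise.
    """
    kernel = np.ones(width) / width
    return spec - np.convolve(spec, kernel, mode='same')

# SNR estimate from the residuals in a continuum band (indices `band`):
# snr = 1.0 / np.std(highpass_boxcar(q_spec)[band])
\end{verbatim}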
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.99\linewidth, angle=0]{fig13.eps}
\caption{ \label{evlac_spectral_ripple_comparison} A comparison of the EV Lac $quv$ spectra after fringe subtraction using different combinations of stellar and standard observations. The red curve shows the EV Lac spectra after subtraction of all flexure-corrected stellar observations (all stars and calibrators). The black curves show the EV Lac spectra when using only bright star and calibrator observations (V 2054 Oph, Ceres and unpolarized stars HD 20630 and HD 174160, no brown dwarfs). The dashed blue lines define two regions used for computing SNR statistics as well as continuum normalization for the intensity profile. The blue continuum runs from 812.2nm to 816.3nm while the red continuum runs from 820.9nm to 824.9nm covering 70 spectral pixels in each band.}
\end{center}
\end{figure}
\subsection{Slit stepping for High SNR}
An additional method for increasing the SNR is to step the target along the slit length during an exposure. An example of this procedure is shown in Figure \ref{slit_stepping_psfs}. The guider offset script was set to move the target along the slit in 1 arc second steps during an exposure. A mode like this allows the user to achieve near-saturation brightness levels on the detector over an order of magnitude more spatial pixels without requiring readout of the detector. For LRISp on bright targets, the CCD readout time is a major limitation. Readout can take up to a minute while the integration time to saturation is a small fraction of this time. By implementing this slit-stepping mode, the duty-cycle and efficiency of bright target observing campaigns can be kept high with integration times substantially longer than the readout time.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.75\linewidth, angle=90]{fig14.eps}
\caption{ \label{slit_stepping_psfs} An example of the slit-stepping script for achieving high SNR measurements. The star was moved in 1 arc second increments along the length of the slit 10 times for a total of 11 samples of the local seeing and telescope jitter. Each color represents a different exposure. One of the two polarized beams had substantially higher throughput. Variations along the spatial pixels represent the changing flux on 1 second timescales. }
\end{center}
\end{figure}
This slit-stepping mode resulted in a substantial change in the telescope beam footprint as the light passed through the retarders, which changes the spectral fringe. A simple Fourier filter, as shown above, is sufficient to remove the spectral fringes. An example of a high SNR Stokes $q$ and intensity spectrum for the bright field star HD345495 is shown in Figure \ref{slit_stepping_spectra}. After fringe removal, the spectra were averaged spectrally by 4 pixels to show SNRs above 3000 at 1 point per resolution element spectral sampling.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=1.05\linewidth, angle=0]{fig15.eps}
\caption{ \label{slit_stepping_spectra} An example of the $q$ and intensity spectra for a field star using the new slit-stepping script. This script allowed us to achieve high SNR measurements ($>$3000). The star was moved in 1 arc second increments along the length of the slit following Figure \ref{slit_stepping_psfs} before readout, allowing for high duty-cycle observations of bright targets without saturation. The blue curve shows an average of 5 repeated exposures on this target with SNRs ranging from 1200 to 1500 each.}
\end{center}
\end{figure}
We acquired 5 separate spectra of this field star using this slit-stepping mode. The guider performance did result in some small variation between steps, but the SNRs for each individual polarization measurement were between 1200 and 1500 at full spectral sampling. After combining and spectrally averaging by a factor of 4, we achieve a final SNR of 4500 for a polarimetric sensitivity of 0.022\%, as seen in the blue curve of Figure \ref{slit_stepping_spectra}. The higher SNRs achieved by spectral and temporal averaging demonstrate that the SNRs are statistically limited and not dominated by systematic errors.
\subsection{Internal Calibrations}
The coordinate reference frame for linear polarization can be verified with optics mounted in the internal filter wheel \citep{Goodrich:2003kv}. The wheel includes two calibration polarizers in addition to the QWP used for circular polarization. The infrared polarizer is said to have a wavelength range of 750nm to 1050nm \citep{Goodrich:2003kv}. We used this calibration optic to perform an independent assessment of the cross-talk between linear polarization states introduced by chromatic variations of the HWP fast axis orientation.
We follow a standard method to identify the HWP fast axis orientation as well as the overall degree of polarization delivered by the calibration optics. The flat field lamps are used to illuminate the fixed calibration IR polarizer. We then take a series of exposures while rotating the HWP. In our case, we rotated the HWP from -7$^\circ$ to 89$^\circ$ in steps of 2$^\circ$. This allows us to achieve high angular sampling as well as to record both the minimum and maximum intensity transmitted through both beams of the polarizing beam splitter. With 49 independent measurements at a range of HWP orientations, we can model the transmitted intensity as a simple function and fit for the calibration parameters.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig18.eps}
\caption{ \label{polcal_polarizer_hwp_test} The degree of polarization for the IR calibration polarizer and the half-wave plate fast axis orientation. The measured degree of polarization (DoP) for the polarizer is shown against the right-hand y-axis. The DoP is above 95\% for wavelengths shorter than 900nm. At longer wavelengths the DoP falls to below 20\% at 1000nm. The fast axis orientation for the HWP is shown as the colored curves with the left-hand y-axis. The colors correspond to the two orthogonally polarized beams recorded on the detector, produced by the polarizing beam splitter. Each colored fast axis location has two separate curves, corresponding to fitting a function with either a fixed angular modulation period ($P=90^\circ$) or a variable period ($P$). The fast axis orientation changes by about 2.5$^\circ$ from 780nm to 950nm. Beyond 950nm, the calibration polarizer has low DoP, and the fast axis orientations measured in the two beams start to diverge. Additionally, the computed modulation period diverges from the nominal angle of $P=90^\circ$. Likely the low DoP and other polarization artifacts from the polarizer itself cause the measurement technique to give errors. See text for details.}
\end{center}
\end{figure}
At every wavelength, we model the system as a combination of an imperfect polarizer and a HWP that rotates the polarization by some angle. We also include a fit for the period of the modulation function with HWP angle as a test of the accuracy of the model. Imperfections in the assumptions about the polarizer behavior, as well as errors in the rotation stage encoder values, can manifest as deviations. The functional form of the intensity modulation with HWP angle is represented as an unmodulated intensity constant ($I_0$) plus a modulated intensity amplitude ($I_m$) times the modulation function itself. For an ideal HWP with a varying fast axis orientation, the modulation is simply a cosine function with an unknown orientation ($\theta_0$). The angular period of the polarization modulation ($P$) should be 90$^\circ$ for a perfect HWP. The functional form is thus:
\begin{equation}
I = I_0 + I_m \cos\left( 2\pi \, \frac{\theta - \theta_0}{P} \right)
\end{equation}
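A per-wavelength sketch of this fit using SciPy is shown below; the injected example values are illustrative only. For the fixed-period test, $P$ is held at 90$^\circ$ rather than fit.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def modulation(theta, I0, Im, theta0, P):
    # I = I0 + Im * cos(2 pi (theta - theta0) / P)
    return I0 + Im * np.cos(2 * np.pi * (theta - theta0) / P)

theta = np.arange(-7.0, 90.0, 2.0)       # HWP angles: 49 exposures
# Stand-in data at one wavelength (illustrative values plus noise):
I_obs = modulation(theta, 1.0, 0.95, 12.0, 90.0) \
        + 0.01 * np.random.randn(theta.size)

p0 = [I_obs.mean(), np.ptp(I_obs) / 2, 0.0, 90.0]
popt, pcov = curve_fit(modulation, theta, I_obs, p0=p0)
I0_fit, Im_fit, fast_axis, period = popt
\end{verbatim}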
As a test of the method, we perform fits both with a fixed $P=90^\circ$ and with a variable period. The results of the functional fit at every wavelength are shown in Figure \ref{polcal_polarizer_hwp_test}. The fast axis orientation variation is seen in color and varies by about 2.5$^\circ$ from 780nm to 950nm. In the 950nm to 1050nm range, the fits begin to diverge. The angular modulation period deviates from 90$^\circ$, and the two orthogonally polarized beams do not give the same location for the HWP fast axis. The assumption that the calibration polarizer can be simply represented as a fractional polarization at a fixed angle, along with the simple model of the HWP behavior, appears to break down.
However, this testing does show that the expected rotation of the linear polarization reference frame caused by deviations in HWP properties with wavelength is well controlled. The rotation of the plane of polarization for a science target should be easily compensated by use of a linear polarization standard star as is common for calibrations of LRISp \citep{Goodrich:2003kv,Goodrich:1995fg}.
\section{Daytime Sky Calibration Testing: Cross-talk}
As an independent test of the instrument calibration, we use the daytime sky polarization. The quarter wave plate used to measure circular polarization is mounted in the calibration filter wheel. As such, there are no calibration optics mounted in front of the instrument in the 2-retarder configuration. This presents a calibration challenge. Recently, we have been developing methods to use the daytime sky polarization as a bright, highly polarized calibration source with a well known angle of polarization (AOP) to derive telescope properties \citep{Harrington:2011fz, Harrington:2010km}. We have developed algorithms that use this daytime sky polarization to compute the cross-talk introduced in the instrument while illuminating the entire optical train in a manner much more similar to starlight. This method was successful in calibrating the AEOS telescope and the HiVIS spectropolarimeter, where the linear to circular cross-talk was 100\% at some wavelengths and telescope pointings. This method provides an independent check of the internal calibration optics and can be used without relying on standard stars.
During our observing run, we obtained permission to open the Keck dome before sunset. We observed the Zenith with LRISp before and during twilight on both August 22nd and 23rd. During this time, the sky degree of polarization changes quickly and takes on spectral features in atmospheric absorption bands. The shadow from the earth propagates upward through the atmospheric column during twilight. On August 22nd we observed the Zenith sky polarization with the telescope pointed at an azimuth of 270$^\circ$ and then 180$^\circ$. On August 23rd, we observed the Zenith at azimuths of 360$^\circ$ and then 090$^\circ$. Figure \ref{sky_flux_vs_time} shows the detected brightness for two complete polarimetric data sets on August 22nd.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig16.eps}
\caption{ \label{sky_flux_vs_time} Daytime and twilight sky intensity measured with LRISp as the sun set on August 22nd. The first data set was recorded with the telescope at an azimuth of 270 shown in solid lines. The second data set was recorded at an azimuth of 180 shown in dashed lines. Each color shows a different modulation state. Note there is at least an order of magnitude change in brightness between spectra just due to the high degree of polarization in the daytime sky at the Zenith. The brightness dropped by 3 orders of magnitude between the start and end of the test as the sun set. }
\end{center}
\end{figure}
At the Zenith on a mountain site with the sun near the horizon, we expect to measure degrees of polarization above 60\%, as measured by all-sky imaging polarimeters common in the atmospheric sciences \citep{Swindle:2014ue,Swindle:2014wc, Dahlberg:2009jh, Dahlberg:2011wk, 2010SPIE.7672E..0AS, Pust:2006gc, Pust:2007fl, Pust:2009fq}. Figure \ref{sky_swp} shows the Stokes $quv$ spectra and the associated degree of polarization (DoP) measured with LRISp on August 22nd.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig18.eps}
\caption{ \label{sky_swp} The Stokes $quv$ spectra and the corresponding degree of polarization (DoP) measured with LRISp on August 22nd. The solid lines show the first data set with the sun above the horizon and the telescope at an azimuth of 270. The dashed lines show the second data set with the telescope at an azimuth of 180 as the sun was setting. The Stokes $v$ measurement (dashed red line) has the lowest flux levels and the corresponding highest noise at longer wavelengths. Clear linear-to-circular cross talk is seen as the nonzero Stokes $v$ component of roughly 5\% changing sign as the instrument rotates.}
\end{center}
\end{figure}
We measured 65\% to 70\% DoP in the first data set and 80\% in the second set. This slight increase in DoP as the sun reaches the horizon is expected from standard Rayleigh scattering theory on a clear day.
We can see that the Stokes $v$ measurements are small but nonzero. This is expected, as there is a known misalignment between the retarders and the polarizing beam splitter orientation, in addition to the expected chromatic change in the QWP fast axis orientation \citep{Goodrich:1995fg}. Note that the QWP is fixed in the calibration wheel mount and optimization must be done manually in the present instrument configuration. The daytime sky has no circular polarization measured to limits better than 1\% \citep{Swindle:2014ue,Swindle:2014wc}. Our measurements show that the linear to circular cross-talk is roughly a few percent of the incoming linear polarization signal.
We also note that the angular relationship was verified to be preserved following our algorithms outlined in \cite{Harrington:2011fz}. On both August 22nd and 23rd we observed the Zenith with the telescope at cardinal pointings of North, East, South and West. The angle between the measured Stokes vector and the theoretical Rayleigh sky vector was preserved to better than 5$^\circ$ as the instrument rotated on both days.
The high degree of polarization measured for the daytime sky at long wavelengths also suggests efficient modulation and proper polarizing beamsplitter function, even though the degraded performance of the internal calibration polarizer prevented an accurate verification at these long wavelengths.
\subsection{Intensity to Polarization Cross-talk}
Another critical test for any spectropolarimeter is to measure the level of cross-talk between the intensity spectrum and the polarization spectra. There are two separate types of intensity to polarization artifacts. The first type is instrumentally induced polarization: a real polarization signal generally caused by the optics. This is often called the {\it zero point} calibration and is typically measured with unpolarized standard stars. This polarization should not reflect the incoming stellar line spectrum and is usually a smooth continuum function. As with most spectrographs, our unpolarized standard measurements show $\sim$1\% instrumental polarization as relatively smooth functions of wavelength.
A problem with defocusing the telescope is that the defocused beam, as masked by the slit, creates an unstable zero point instrumental polarization when combined with the poor guiding performance of the Keck slit guider. Defocusing as a means of increasing the dynamic range has several advantages for line polarization studies where the continuum is simply fit and subtracted. However, the relatively unstable masking of the incoming beam breaks the circular symmetry of the optical path and reduces the ability of the instrument to measure continuum polarization. Care must be taken when defocusing the telescope, as the continuum stability is reduced. In addition, the slit guider performance degrades when tracking targets at high elevations, and the continuum polarization is correspondingly less stable.
A second kind of artifact is caused by instrument and data reduction imperfections. This kind of intensity to polarization cross-talk involves having the incoming stellar spectrum spuriously present in the polarization spectrum. Imperfections in detector linearity, background subtraction and other effects begin to cause limitations at high sensitivity levels (cf. \cite{1996SoPh..164..243K}). In unpolarized standard stars, in addition to the instrumental continuum polarization, the stellar line spectrum along with its first and second wavelength derivatives can be spuriously present. Careful correction for scattered light, ghost images, wavelength drifts, etc will only reduce this kind of $I$ to $QUV$ cross-talk below some detection threshold. The dual-beam configuration, in addition to beam-swapping during data reduction, does minimize several artifacts to first order. However, several second order effects are not completely removed and can easily dominate the $quv$ spectra at high SNR levels.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig17.eps}
\caption{ \label{i_to_quv} The cross correlation functions computed with the IDL C\_CORRELATE routine. The black lines show the correlation functions between intensity and the $qu$ spectra for the 5 slit-stepped spectra of HD 354495 described above. No substantial peaks are seen above the typical fluctuation levels of $\sim$0.1. The blue lines show the correlation functions when 0.2\% of the continuum normalized intensity spectrum was added to the individual polarization spectra. These blue correlation curves show that a substantial correlation of 0.2 to 0.3 is present at zero lag, giving rise to our upper limits for intensity to polarization cross-talk.}
\end{center}
\end{figure}
For LRISp, we took the five high SNR measurements in slit-stepped mode used to create Figure \ref{slit_stepping_spectra}. By running intensity-to-polarization correlations across many spectral lines, a clear signature of intensity to $quv$ cross-talk can be assessed. Each of the five spectra had an SNR of 1200 to 1500 in each $q$ and $u$ Stokes parameter. A correlation analysis of 1000 spectral pixels between 856nm and 896nm yielded no significant $I$ to $qu$ cross-talk at levels above the $qu$ SNR. The same test was run on the continuum normalized polarization profiles $Q$ = $I*q$ (sometimes differentiated as $q_c$ as opposed to $q$). A test of the routines showed that 0.2\% of the intensity spectrum added into either the $qu$ or $QU$ spectra was easily detectable with our correlation analysis. Figure \ref{i_to_quv} shows these normalized correlation curves computed with the IDL C\_CORRELATE function. A correlation of 1.0 would describe perfect correlation. The black curves show the correlation between the continuum normalized intensity spectra and the $qu$ spectra. The correlation curves fluctuate around zero with low amplitudes of 0.1 or less, showing no substantial correlation. The blue curves of Figure \ref{i_to_quv} show the correlation functions with the intensity added to the $qu$ spectra with a 0.2\% multiplier. Clear peaks are seen with correlations above 0.2 when we simulate intensity to polarization cross-talk at twice the SNR levels (0.2\%). This test sets our upper limit on the $I$ to $qu$ cross-talk for our instrument configuration at $<$0.2\%.
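A sketch of this injection test, using a NumPy analog of IDL's C\_CORRELATE with stand-in spectra (the 0.2\% multiplier matches the test described above):
\begin{verbatim}
import numpy as np

def c_correlate(x, y, lags):
    """Normalized cross-correlation, analogous to IDL C_CORRELATE."""
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt(np.sum(x**2) * np.sum(y**2))
    return np.array([np.sum(x * np.roll(y, -k)) for k in lags]) / denom

rng = np.random.default_rng(0)
intensity = 1.0 + 0.1 * np.sin(np.linspace(0.0, 40.0, 1000))  # stand-in
q_spec = 0.0008 * rng.standard_normal(1000)   # noise at SNR ~ 1250

lags = np.arange(-50, 51)
cc_clean = c_correlate(intensity, q_spec, lags)            # no peak
cc_inject = c_correlate(intensity,
                        q_spec + 0.002 * intensity, lags)  # clear peak
\end{verbatim}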
\section{Comparison: LRISp Blue Channel}
The blue channel of the polarimeter has some substantial differences in sensitivity to polarimetric errors when using common spectral extraction algorithms. We used the 680 dichroic and the 300/5000 grism during this campaign. The full-width half-max of arc line fits is between 6.0 and 7.5 spectral pixels, showing substantial oversampling. There are slight differences of roughly 0.1 to 0.3 pixels FWHM between the two orthogonally polarized beams, showing a small but detectable difference in optical resolution between them. The delivered spectral resolution is R$\sim$450 with 6 points per resolution element at 360nm. The resolution rises to R$\sim$790 at 7 points per FWHM sampling at 775nm.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\linewidth, angle=0]{fig20.eps}
\caption{ \label{spectral_tilt_comparison} A comparison between the pixel coordinate grid tilt geometrical calibrations derived using the arc lamp fits. Black symbols show the computed wavelength center of several arc lines at selected spatial pixels on the CCD. The red symbols show the median spatial location of all arc line wavelength centers for the two orthogonally polarized beams of the red channel. Because an average wavelength solution in fixed CCD pixel coordinates was used, tilts are seen between the fixed pixel grid and the tilted arc lines of the orthogonally polarized beams. This coordinate geometry must be corrected in the red channel by associating each spatial pixel with the correct wavelength derived from the arc lines, as described above. The blue channel has a greatly reduced tilt, as shown by the blue symbols for the median arc line location across the blue channel CCD. }
\end{center}
\end{figure}
Unlike the red channel, the tilt of the monochromatic slit image is less than 0.1 pixels across the 240 spatial pixels extracted. Tilted coordinate geometries can be overcome using up-sampling and instrument calibrations, or even more complex instrument profile deconvolutions. However, with such small spectral tilt in this blue channel, the wavelength errors and polarimetric artifacts are greatly reduced even when using simple extraction algorithms. The red and blue channel spectral tilts are shown in Figure \ref{spectral_tilt_comparison}. The red tilt is over 2 pixels across the slit image while the blue channel tilt is more than an order of magnitude less. Arc line exposures are used to determine how each wavelength falls across the CCD pixel grid. The arc line central wavelength is mapped at every spatial and spectral pixel for every arc lamp line, shown as the black symbols in Figure \ref{spectral_tilt_comparison} for the red channel only. The median spatial locations of all arc lamp lines along the CCD pixel grid are shown in red for the LRISp red channel. The same median location for the blue channel is shown in blue.
The blue channel CCD is much thinner than the red channel's deep depletion CCD. The corresponding cosmic ray hit rate is greatly reduced. The optimal extraction algorithms outlined for the red channel remain effective for rejecting cosmic ray hits as well as filtering some detector cosmetic issues. However, there is less need for this algorithm at modest exposure times.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=1.05\linewidth, angle=0]{fig21.eps}
\caption{ \label{evlac_psf_log} The spatial profiles computed for EV Lac. The average spatial profile over all spectral pixels is shown for all 6 polarimetric exposures for the red channel (plotted in red) and the blue channel (plotted in blue). The blue channel has a ghost image and substantially wider scattered light in the wings of the spatial profile.}
\end{center}
\end{figure}
Careful scattered light and background subtraction is more important in the blue channel. There is a ghost image and substantially more scattered light width to the blue channel spatial profile. A comparison between the blue and red channels is shown in Figure \ref{evlac_psf_log}. The blue channel takes roughly double the number of spatial pixels to drop below the 1\% core flux level. Typical background subtraction algorithms will include some fraction of the stellar flux depending on the reduction choices made. This is easily subtracted along with night sky backgrounds provided the scattered light profile is known and properly accounted for in the reduction parameter choices. Night sky lines must be assumed to sit on top of a scattered light profile that extends substantially away from the core region, and scattered light backgrounds must be measured at an appropriate spatial distance away from the stellar core.
The curve of growth is used to determine the optimal spatial extraction width when considering different noise sources. We define the curve of growth as the computation of how much flux is included in the spectrum when setting progressively wider spatial profiles, divided by the total detected flux. For instance, if the user picks a blue channel half width of 10, for a total of 21 spatial pixels in the blue extracted spectrum, the user is only including 90\% of the total flux. If the user selects a half width of 39 spatial pixels, 99.9\% of the light is included. In the red channel, care must be taken, as cosmic ray damaged pixels are incompletely corrected and this noise source can dominate errors from incomplete flux inclusion. Thankfully the scattered light is reduced in the red channel and correspondingly fewer spatial pixels are required to gather the majority of the detected stellar flux.
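A minimal sketch of this curve-of-growth computation could read (Python/NumPy; the background-subtracted, core-centered profile is an assumed input):
\begin{verbatim}
import numpy as np

def curve_of_growth(profile, center):
    """Enclosed-flux fraction as a function of extraction half width.

    profile : 1D mean spatial profile (background subtracted)
    center  : index of the stellar core in the profile
    Returns cog where cog[h] is the flux fraction enclosed by a half
    width of h pixels (2*h + 1 spatial pixels total)."""
    total = profile.sum()
    max_h = min(center, len(profile) - 1 - center)
    return np.array([profile[center - h:center + h + 1].sum() / total
                     for h in range(max_h + 1)])

# e.g. the smallest half width enclosing 99.9% of the flux:
# h_opt = np.argmax(curve_of_growth(profile, center) >= 0.999)
\end{verbatim}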
Spectral fringing is not seen in the blue channel at these low spectral resolutions. An example high sensitivity polarimetric data set for EV Lac is shown in Figure \ref{evlac_signal_blue}. The signal to noise levels achieved are above 2000 per spectral pixel at full spectral sampling, as estimated from the $quv$ spectra. The $quv$ spectra are dominated by photon statistics across the entire 400nm to 700nm range. The 680nm dichroic cutoff used did have some leakage beyond the cutoff wavelength, and spectra were extracted out to 775nm. In the EV Lac spectra, a small spectral region was saturated and was therefore not plotted.
\begin{figure} [!htb]
\begin{center}
\includegraphics[width=0.99\linewidth, angle=0]{fig22.eps}
\caption{ \label{evlac_signal_blue} The $quv$ and intensity spectra for EV Lac recorded on August 22nd 2012. The signal to noise ratios are estimated to peak above 2000 as seen in the $quv$ noise levels.}
\end{center}
\end{figure}
The blue channel showed cross-talk levels similar to the red channel given the standard retarder alignment and operating angles. The daytime sky polarization observations were performed on both the red and blue channels. The measured sky degree of polarization is between 50\% and 80\% for the entire 360nm to 770nm range. Figure \ref{evlac_sky_dop} shows the daytime sky polarization measurements. Cross-talk between linear and circular states is seen at levels similar to the red channel.
\begin{figure} [!htb]
\begin{center}
\includegraphics[width=0.99\linewidth, angle=0]{fig23.eps}
\caption{ \label{evlac_sky_dop} The daytime sky polarization measured roughly 30 minutes before sundown on August 22nd 2012. Blue shows $q$, yellow shows $u$ and red shows $v$. The total degree of polarization (DoP) is shown in black. The 680nm dichroic cutoff greatly reduces the detected sky brightness for longer wavelengths. Poor background subtraction and the blue channel scattered light profiles contribute to the change in polarization behavior given standard reduction algorithms.}
\end{center}
\end{figure}
\section{Summary}
We have developed a data reduction pipeline that can calibrate the LRISp spectropolarimeter and achieve signal-to-noise ratios above 2000, limited by photon statistics, at full resolution and sampling. Spectral binning and temporal averaging achieved SNRs of 4500 without obvious visible instrumental artifacts. We have demonstrated algorithms for overcoming several instrument artifacts and tested them on both red and blue channels. The major polarimetric limitations proved to be wavelength drifts from flexure, spectral fringes from the retarders and effective removal of cosmic ray contamination in the red channel deep-depletion CCDs. With these calibrations, we have successfully reproduced magnetic field measurements in atomic lines of M dwarfs such as EV Lac and found new magnetic signatures in molecular bands of M dwarfs and brown dwarfs. Sensitive comparisons of low resolution spectropolarimetric data with new magnetic field models such as those of Figure \ref{magnetic_models} can now begin. With magnetic fields producing signals at the $>$0.2\% level, we must achieve shot-noise limited performance at sensitivity levels substantially below this, as demonstrated here.
Instrument flexure, in addition to beam wobble from the rotating retarders, introduces wavelength instabilities. The drifts are a substantial fraction of a pixel within and between modulated polarimetric exposures. These systematic errors produce spurious instrumental artifacts that are proportional to the first and second derivatives of the spectral intensity with wavelength, even if the tilted coordinate geometry is calibrated using standard arc lamp exposures. These artifacts exactly resemble signatures from stellar magnetic fields and must be effectively suppressed to achieve accurate science results. Using correlation techniques with telluric lines and atmospheric sky-glow lines we can effectively track and remove these wavelength drifts.
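As an illustration, a cross-correlation drift estimator over a telluric band could be sketched as follows (Python/NumPy; the window choice, lag range and sign convention are illustrative assumptions, not the pipeline's exact implementation):
\begin{verbatim}
import numpy as np

def telluric_shift(spec, ref, window, max_lag=3):
    """Sub-pixel drift of `spec` relative to `ref`, measured by
    cross-correlation over a strong telluric band.

    spec, ref : 1D intensity spectra on the same pixel grid
    window    : slice selecting the telluric band (e.g. the O2 A band)
    Returns the drift in pixels."""
    a = spec[window] - spec[window].mean()
    b = ref[window] - ref[window].mean()
    lags = np.arange(-max_lag, max_lag + 1)
    # circular shifts are adequate for small lags inside a broad band
    cc = np.array([np.sum(a * np.roll(b, lag)) for lag in lags])
    i = np.argmax(cc)
    if 0 < i < len(cc) - 1:  # parabolic interpolation around the peak
        denom = cc[i - 1] - 2 * cc[i] + cc[i + 1]
        return lags[i] + 0.5 * (cc[i - 1] - cc[i + 1]) / denom
    return float(lags[i])
\end{verbatim}
Each exposure can then be re-interpolated onto a common wavelength grid by its measured sub-pixel shift before demodulation.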
The spectral fringes introduced by the retarders have amplitudes of 0.2\% in Stokes $qu$ and over 0.5\% in Stokes $v$ in the red channel. These fringes display wavelength dependent behavior in both amplitude and frequency for individual Stokes parameters. Simple Fourier filters can remove the fringes provided a bandpass of 50nm or less is used. The Fourier filtering method gives results very similar to subtracting a calibration standard star observation, although subtraction of standard star observations does give somewhat better results.
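A minimal sketch of such a Fourier notch filter, applied per bandpass segment of 50nm or less, could read (Python/NumPy; the notch band limits are assumptions to be tuned to the fringe period observed in each segment):
\begin{verbatim}
import numpy as np

def fourier_defringe(stokes, fringe_lo, fringe_hi):
    """Suppress retarder fringes in one <=50nm Stokes segment.

    stokes               : 1D Stokes q, u or v segment
    fringe_lo, fringe_hi : fringe-frequency band to notch out,
                           in cycles per pixel
    Returns the filtered segment."""
    n = len(stokes)
    spec = np.fft.rfft(stokes)
    freq = np.fft.rfftfreq(n)          # cycles per pixel
    spec[(freq >= fringe_lo) & (freq <= fringe_hi)] = 0.0
    return np.fft.irfft(spec, n)
\end{verbatim}
Because the fringe amplitude and frequency vary with wavelength, the notch band would be chosen separately for each segment and each Stokes parameter.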
Cosmic ray hits are present in more than half the spectral pixels for each Stokes parameter in typical long exposures of the red channel on a faint target. Optimal spectral extraction techniques are an effective filter of this noise source. An estimate of the local spatial profile computed using adjacent wavelengths gives information about the expected profile at any individual wavelength. This local spatial profile is used in an iterative loop to identify and reject cosmic ray hits and to correct the damaged pixels using the best estimate of the spatial profile shifted and scaled to each individual extracted wavelength.
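A schematic version of such an iterative rejection loop could look as follows (Python/NumPy; the per-column noise estimate and threshold are simplified assumptions, and the per-wavelength profile shifting described above is omitted for brevity):
\begin{verbatim}
import numpy as np

def reject_cosmics(data, profile, nsigma=5.0, niter=3):
    """Iteratively reject cosmic-ray hits against a local profile.

    data    : 2D (n_spatial, n_spectral) sky-subtracted spectrum
    profile : 2D local spatial profile, unit-normalized along the
              spatial axis, estimated from adjacent wavelengths
    Returns (clean, mask): data with damaged pixels replaced by the
    scaled profile prediction, and a mask of rejected pixels."""
    clean = data.astype(float).copy()
    mask = np.zeros(data.shape, dtype=bool)
    for _ in range(niter):
        flux = clean.sum(axis=0)           # flux scale per wavelength
        model = profile * flux[None, :]    # expected counts
        resid = clean - model
        sigma = resid.std(axis=0)          # crude per-column scatter
        hit = np.abs(resid) > nsigma * sigma[None, :]
        mask |= hit
        clean[hit] = model[hit]            # correct damaged pixels
    return clean, mask
\end{verbatim}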
\begin{figure} [!htb]
\begin{center}
\includegraphics[width=0.99\linewidth, angle=0]{fig24.eps}
\caption{ \label{magnetic_models} The model Stokes $quv$ signatures induced by a stellar magnetic field. Our brown dwarf target LSRJ was modeled under a range of magnetic field orientations, strengths and filling factors to show examples of the $quv$ morphology variation with magnetic field properties. Typical kilogauss fields as inferred from radio and optical measurements produce detectable signatures in the CrH band here with amplitudes of less than 0.4\%. }
\end{center}
\end{figure}
We have applied our daytime sky based polarization calibration techniques to LRISp to derive estimates of the instrument $quv$ cross-talk and polarization reference frame. There is a few percent linear to circular cross-talk present, as seen in the observed Stokes $v$ when looking at a $\sim$100\% linearly polarized daytime sky in both blue and red channels. This is expected given the wavelength dependent properties of the retarders and small misalignments in the retarder rotation stages. We also assessed the intensity to $qu$ cross-talk on high SNR observations and derived upper limits. The red channel $I$ to $qu$ and $QU$ cross-talk is below 0.2\% and limited by shot noise.
This pipeline will allow high SNR use of LRISp and Keck on faint targets now that the major sources of instrumental error have been identified and suppressed.
\section{Acknowledgements}
We thank the Keck staff, support astronomers and in particular Dr. Bob Goodrich and Dr. Hien Tran for their support in operating the telescope during daylight hours and in developing scripts for new slit stepping modes. Dr. Harrington and Dr. Berdyugina acknowledge support from the InnoPol grant: SAW-2011-KIS-7 from Leibniz Association, Germany, and by the European Research Council Advanced Grant HotMol (ERC-2011-AdG 291659). Dr Berdyugina acknowledges the support from the NASA Astrobiology Institute and the Institute for Astronomy, University of Hawaii, for the hospitality and allocation of observing time at the Keck telescope. Dr. Kuhn acknowledges the NSF-AST DKIST/CryoNIRSP program. This program was partially supported by the Air Force Research Labs (AFRL) through salary support for Dr. Harrington. This work made use of the Dave Fanning and Markwardt IDL libraries.
\bibliographystyle{aa}
\section{Introduction}
\noindent
Let $G = (\mathbb{V}(G), \mathbb{E}(G))$ be the graph with set of vertices $\mathbb{V} = \mathbb{Z}^d$ and set of (unoriented) bonds $\mathbb{E} = \{\langle\vec{x},\vec{x} + i\cdot \vec{e}_m\rangle: \vec{x}\in\mathbb{Z}^d,\; i \in \mathbb{Z},\; m \in \{1,\ldots, d\}\}$, where $\vec{e}_1,\ldots,\vec{e}_d$ denote the vectors in the canonical basis of $\mathbb{Z}^d$. Let $(p_i)_{i=1}^\infty$ be a sequence in the interval $[0,1]$ and consider a Bernoulli bond percolation model where each bond $e \in \mathbb{E}$ is open with probability $p_{\|e\|}$, where $\|e\|$ denotes the $l_\infty$ distance between the two endpoints of $e$. That is, take $(\Omega, \, \mathcal{A}, \, P)$, where $\Omega = \{0,1\}^{\mathbb{E}}$, $\mathcal{A}$ is the canonical product $\sigma$-algebra, and $P = \prod_{e \in \mathbb{E}} \mu_e$, where $\mu_e({\omega}_e = 1) = p_{\|e\|} = 1- \mu_e({\omega}_e = 0)$. An element $\omega \in \Omega$ is called a percolation configuration.
As usual, the set $\{0\leftrightarrow \infty\}$ denotes the set of configurations such that the origin is connected to infinitely many vertices by paths of open bonds (bonds where $\omega_e=1$). Our principal assumption concerning the sequence $(p_i)_i$ will be
\begin{equation}\label{eq:summability}\sum_{i=1}^\infty p_i =\infty,\end{equation} so that, by the Borel--Cantelli Lemma, we have $P\{0\leftrightarrow \infty\}=1$.
We now consider a truncation of the sequence $(p_i)_i$ at some finite range $k$. More precisely, for each $k> 0$ consider the truncated sequence $(p_i^k)_{i=1}^\infty$, defined by
\begin{equation}
p_i^k=\left\{
\begin{array}
[c]{l}%
p_i,\mbox{ if } i\leq k,\\
0,\ \mbox{ if } i>k.
\end{array}\right.\label{eq:truncation}
\end{equation}
and the measure $P^k = \prod_{e \in \mathbb{E}} \mu^k_e$, where
$\mu^k_e({\omega}_e = 1) = p^k_{\|e\|} = 1- \mu^k_e({\omega}_e = 0)$. Then, the \textbf{truncation question} is: does $P^k\{0\leftrightarrow \infty\}>0$ hold for $k$ large enough?
The works \cite{SSV}, \cite{Be}, \cite{FLS}, \cite{FL} and \cite{LS} (in chronological order) give an affirmative answer to the truncation question under different sets of assumptions on the dimension $d$ and the sequence $(p_i)$. In particular, \cite{FL} gives an affirmative answer for $d\geq 3$ and no assumption on $(p_i)$ other than \eqref{eq:summability}; moreover, this work shows how the analogous question for the long-range Potts model can be studied via a long range percolation model. We would like to mention that the general truncation question for $d=2$ is still open and it is not difficult to see that for $d=1$ the answer is negative.
In the nonsummable situation, the positive answer to the truncation question (in dimensions more than 1) appears to be more robust than in the summable case. Indeed, the presence of first-order transitions in the occupation density, or in a temperature-like parameter for summable infinite-range models, causes the truncation question to have a negative answer, as observed in \cite{FL}. Although continuity of the transition is known for Ising models, and their associated random-cluster models, in considerable generality (see for example the recent work \cite{ADCS}), this is not the case for independent percolation, where even in $d=3$ it is a famous open question in the nearest-neighbor model, while for $q$-state Potts models first-order transitions are quite common for $q \geq 3$ (see the references \cite{BCC}, \cite{C} and \cite{GoMe}).
In this paper, we consider the truncation question in an oriented graph. Let ${\cal G}= (\mathbb{V}({\cal G}),\mathbb{E}({\cal G}))$ be the oriented graph defined as follows. The vertex set is $\mathbb{V}({\cal G})=\mathbb{Z}^d\times\mathbb{Z}_+$, where $\mathbb{Z}_+ = \{0,1,\ldots\}$; elements of $\mathbb{V}({\cal G})$ will be denoted $(\vec{x},n)$, where $\vec{x} \in \mathbb{Z}^d$ and $n \in \mathbb{Z}_+$. The set $\mathbb{E}(\mathcal{G})$ of oriented bonds is
\begin{equation}\{\langle (\vec{x},n),(\vec{x}+i\cdot \vec{e}_m,n+1)\rangle: \vec{x} \in\mathbb{Z}^d,\;n\in\mathbb{Z}_+,\;m\in\{1,\ldots, d\},\; i\in\mathbb{Z}\}.\label{eq:bonds}\end{equation} Again we are given a sequence $(p_i)_{i=1}^\infty$ satisfying \eqref{eq:summability}
and we assume each bond $\langle (\vec{x},n),(\vec{x}+ i\cdot \vec{e}_m,n+1)\rangle$ is open with probability $p_{i}$ independently of each other. Again denoting by $P$ the probability measure corresponding to this percolation configuration and by $\{(\vec{0}, 0)\leftrightarrow \infty\}$ the event that there exists an infinite open oriented path starting from $(\vec{0},0)$, Borel-Cantelli gives $P\{(\vec{0},0)\leftrightarrow \infty\} = 1$. For each $k> 0$, we then consider the truncated sequence given in \eqref{eq:truncation} and the corresponding measure $P^k$ and ask the truncation question, that is, whether $P^k\{(\vec{0},0)\leftrightarrow \infty\} > 0$. We prove:
\begin{theorem}\label{thm:main_cor} For any $d \geq 2$, if the sequence $(p_i)_i$ satisfies \eqref{eq:summability}, the truncation question has an affirmative answer for the graph ${\cal G}$. Moreover, $${\displaystyle \lim_{k\to\infty}P^k\{(\vec{0},0) \leftrightarrow \infty \} = 1}.$$
\end{theorem}
This theorem is proved in the next section. In Section \ref{s:other}, we will treat a related question for the contact process and also for a different oriented graph.
\section{Proof of Proposition \ref{prop:anis}}\label{s:proof_main}
We obtain Theorem \ref{thm:main_cor} as an immediate consequence of a stronger result, which we now describe. We fix $d = 2$ and consider $\mathcal{G}$ defined as above, with vertex set $\mathbb{Z}^2\times \mathbb{Z}_+$ and set of oriented bonds given in \eqref{eq:bonds}. We take two sequences $(p_i)$, $(q_i)$ and now prescribe that bonds of the form $\langle (\vec{x}, n), (\vec{x} + i\cdot \vec{e}_1,n+1) \rangle$ are open with probability $p_i$ and bonds of the form $\langle (\vec{x}, n), (\vec{x} + i\cdot \vec{e}_2,n+1) \rangle$ are open with probability $q_i$. The truncated measure $P^k$ is obtained by truncating both sequences $(p_i)_i$ and $(q_i)_i$ at range $k$.
\begin{proposition}
\label{prop:anis}
If $(p_i)_{i=1}^\infty$ satisfies \eqref{eq:summability} and $(q_i)_{i=1}^\infty$ is not identically zero, then ${\displaystyle \lim_{k\rightarrow\infty}P^k\{(\vec{0},0) \leftrightarrow \infty\}=1}$.
\end{proposition}
\begin{proof}
By assumption, we can fix $\beta\in\mathbb{N}$ such that $q_\beta > 0$.
We will define certain \textit{bifurcation events} which will imply that a point $(\vec{x},n)$ is connected to two new points $(\vec{y},n+2)$ and $(\vec{z},n+2)$. For each $(\vec{x},n) \in {\cal G}$, define the \textit{bifurcation event}
$$E_{(\vec{x},n)} = \bigcup_{a,a'\in \mathbb{Z}}\left\{\begin{array}{l}\omega_{\langle (\vec{x},n), (\vec{x} + a\vec{e}_1, n+1) \rangle} \\=\omega_{\langle (\vec{x} + a\vec{e}_1, n+1),(\vec{x}+a\vec{e}_1 + \beta \vec{e}_2, n+2) \rangle}\\=\omega_{\langle (\vec{x} + a\vec{e}_1, n+1),(\vec{x}+a\vec{e}_1 + a'\vec{e}_1, n+2) \rangle}=1 \end{array} \right\}.$$
We have
\begin{equation*}
\label{eq:probE}
P^k(E_{(\vec{x},n)}) = 1 - \prod_{a: |a|\leq k}\left( 1 - p_{|a|} \cdot q_{\beta} \cdot \left( 1- \prod_{a': |a'|\leq k} (1-p_{|a'|})\right)\right)=\gamma_k,
\end{equation*}
which can be made arbitrarily close to 1 by increasing $k$, by \eqref{eq:summability}.
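For illustration only, $\gamma_k$ is easily evaluated numerically; the following minimal Python sketch uses the example sequence $p_i=1/(i+1)$ and excludes the $a=0$ term (which we read as referring to actual bonds), both of which are our own illustrative choices:
\begin{verbatim}
import numpy as np

def gamma_k(p, q_beta, k):
    """gamma_k = 1 - prod_{0<|a|<=k}(1 - p_{|a|} q_beta
                     (1 - prod_{0<|a'|<=k}(1 - p_{|a'|})))."""
    probs = np.array([p(abs(a)) for a in range(-k, k + 1) if a != 0])
    inner = 1.0 - np.prod(1.0 - probs)   # some first-step bond is open
    return 1.0 - np.prod(1.0 - probs * q_beta * inner)

# e.g. gamma_k(lambda i: 1.0/(i + 1), 0.01, k) for k = 10, 100, 1000
# increases towards 1, reflecting the divergence of sum p_i.
\end{verbatim}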
Also note that
\begin{equation}
\label{eq:inclusion_E}
\{(\vec{0},0) \leftrightarrow (\vec{x},n)\} \cap E_{(\vec{x},n)} \subseteq \bigcup_{a, a'\in \mathbb{Z}} \left\{\begin{array}{l}(\vec{0},0) \leftrightarrow (\vec{x} + a \vec{e}_1 + a'\vec{e}_1, n+2),\\ (\vec{0},0) \leftrightarrow (\vec{x} + a \vec{e}_1 + \beta\vec{e}_2, n+2) \end{array}\right\}.
\end{equation}
Finally, under $P$ and $P^k$, $E_{((a,b),m)}$ and $E_{((a',b'),n)}$ are independent and identically distributed as soon as either $b\neq b'$ or $|m-n| \geq 2$.
The next step is to prove that, if $k$ is large enough, a certain projection of the $k$-truncated process dominates an oriented supercritical Bernoulli percolation on $\mathbb{Z}^2_+$. Define the following order in $\mathbb{Z}^2_+$: given $(m_1,n_1),(m_2,n_2)\in\mathbb{Z}^2_+$ we say that $(m_1,n_1)\prec(m_2,n_2)$ if and only if $n_1<n_2$ or $(n_1=n_2\mbox{ and }m_1<m_2)$. Given $X\subset\mathbb{Z}^2_+$, we define the exterior boundary of $X$ as the set $$\partial_e X=\{(m,n)\in \mathbb{Z}^2_+\backslash X: (m,n-1)\in X\mbox{ or }(m-1,n-1)\in X\}.$$
We define the vertex $(m,n)\in\mathbb{Z}^2_+$ as {\em red} if and only if the following event occurs: $\bigcup_{a\in\mathbb{Z}}\left(\{(\vec{0},0) \leftrightarrow((a,m\beta),2n) \}\cap E_{((a,m\beta),2n)}\right)$.
\begin{figure}[htb]
\begin{center}
\setlength\fboxsep{0pt}
\setlength\fboxrule{0pt}
\fbox{\includegraphics[width = 0.9\textwidth]{drawing.pdf}}
\caption{The occurrence of each bifurcation event is represented by a triple of arrows with the same color. On the left side of the picture, we represent a certain projection which will be defined from these events: red vertices will appear at the (projected) starting points of bifurcations. With the information available in the picture, it is impossible to tell whether or not the three vertices on top are red.}
\end{center}
\end{figure}
To avoid confusion, let us emphasize that, if a vertex in $\mathbb{Z}^2_+$ has coordinates $(m,n)$, then this vertex is defined as red through an event in the original lattice $\mathbb{Z}^2 \times \mathbb{Z}_+$; this event involves a bifurcation with some starting point in the line $\{((a, m\beta), 2n): a \in \mathbb{Z} \}$. In particular, in Figure 1, one horizontal unit and one vertical unit in the lattice depicted on the left correspond respectively to $\beta$ units and $2$ units in the lattice on the right.
We will construct a red cluster dynamically, defining inductively two sequences $(A_i)_i$ and $(B_i)_i$ of subsets of $\mathbb{Z}^2_+$. Set $A_0=B_0=\emptyset$ and $x_0=(0,0)$. Assuming $A_j,B_j$ and $x_j$ have been defined for $j=0,\dots,i$, we let
\begin{align*}
&A_{i+1} =
\begin{cases}
A_i\cup\{x_i\},&\mbox{ if } x_i\mbox{ is red},\\
A_i, &\mbox{ otherwise},
\end{cases}\qquad B_{i+1}=\begin{cases}
B_i,&\mbox{ if } x_i\mbox{ is red},\\
B_i\cup\{x_i\},& \mbox{ otherwise.}
\end{cases}
\end{align*}
Now, if $(\partial_e A_{i+1})\backslash B_{i+1} = \emptyset,$ we stop our recursive definition. Otherwise we let $x_{i+1}$ be the minimal point of $(\partial_e A_{i+1})\backslash B_{i+1}$ with respect to the order $\prec$ defined above, and continue the recursion. Regardless of whether or not the recursion ever ends, we let ${\cal C}$ be the union of all sets $A_i$ that have been defined. It follows from \eqref{eq:inclusion_E} that $\{|{\cal C}| =\infty\} \subseteq \{(\vec{0},0) \leftrightarrow \infty\}$.
Now, observe that
$$P^k(x_{i}\mbox{ is red} \mid (A_j,B_j): 0 \leq j \leq i)\geq\gamma_k.$$
This implies that ${\cal C}$ stochastically dominates the cluster of the origin in Bernoulli oriented site percolation on $\mathbb{Z}^2_+$ with parameter $\gamma_k$ (see Lemma 1 of \cite{GM}). As noted earlier, $\gamma_k$ can be made arbitrarily close to 1; this proves that ${\displaystyle \lim_{k\to\infty} P^k(|{\cal C}|=\infty) = 1}$.
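Purely as an illustration of this comparison, the exploration can be simulated directly; the following minimal Python sketch takes the site variables i.i.d.\ Bernoulli($\gamma$), i.e.\ it simulates the dominating oriented site percolation rather than the dependent field above:
\begin{verbatim}
import random

def explore(gamma, max_steps=10**5):
    """Dynamic exploration of the cluster of the origin in Bernoulli
    oriented site percolation on Z^2_+; returns True if it survives
    max_steps examined sites (a finite-size proxy for |C| = infinity)."""
    A, B = set(), set()
    frontier = [(0, 0)]
    for _ in range(max_steps):
        if not frontier:
            return False                # (partial_e A) \ B empty: C finite
        frontier.sort(key=lambda mn: (mn[1], mn[0]))   # the order 'prec'
        m, n = frontier.pop(0)          # minimal unexplored boundary site
        if random.random() < gamma:     # the site is 'red'
            A.add((m, n))
            for y in ((m, n + 1), (m + 1, n + 1)):     # exterior boundary
                if y not in A and y not in B and y not in frontier:
                    frontier.append(y)
        else:
            B.add((m, n))
    return True
\end{verbatim}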
\end{proof}
\section{Contact process and oriented percolation on other graphs}
\label{s:other}
\subsection{The Contact Process}\label{contato}
Here we will give a counterpart of Theorem \ref{thm:main_cor} for the contact process obtained from truncating an infinite set of rates. Let us define precisely the model that we have in mind. We are given a sequence of non-negative real numbers, $(\lambda_i)_{i=1}^\infty$. We take a family of independent Poisson point processes on $[0,\infty)$:
\begin{itemize}
\item a process $D^{\vec{x}}$ of rate 1 for each $\vec{x} \in \mathbb{Z}^d$;
\item a process $B^{(\vec{x},\vec{y})}$ of rate $\lambda_{|i|}$ for each ordered pair $(\vec{x}, \vec{y})$ with $\vec{x} \in \mathbb{Z}^d$ and $\vec{y} = \vec{x} + i\cdot \vec{e}_m$ with $i \in \mathbb{Z}$ and $m \in \{1,\ldots, d\}$.
\end{itemize}
We view each of these processes as a random discrete subset of $[0,\infty)$ and write, for $0\leq a < b$, $D^{\vec{x}}_{[a,b]} = D^{\vec{x}} \cap [a, b]$ and $B^{(\vec{x},\vec{y})}_{[a,b]} = B^{(\vec{x},\vec{y})} \cap [a,b]$.
Fix $k \in \mathbb{N}$. Given $\vec{x}, \vec{y} \in \mathbb{Z}^d$ and $0 \leq s \leq t$, we say $(\vec{x},s)$ and $(\vec{y},t)$ are $k$-connected, and write $(\vec{x},s) \stackrel{k}{\leftrightarrow} (\vec{y},t)$, if there exists a function $\gamma:[s,t] \to \mathbb{Z}^d$ that is right-continuous, constant between jumps and satisfies:
\begin{align*}\begin{array}{ll}\gamma(s) = \vec{x},\;\gamma(t) = \vec{y} \;\text{ and, for all } r \in [s,t], & r \notin D^{\gamma(r)},\\ & r \in B^{(\gamma(r-),\gamma(r))} \text{ if } \gamma(r) \neq \gamma(r-),\\
&|\gamma(r) - \gamma(r-)| \leq k .\end{array} \end{align*}
We then define
$$\xi_{t,k}(\vec{x}) = I\{(\vec{0},0) \stackrel{k}{\leftrightarrow} (\vec{x},t)\},\qquad \vec{x} \in \mathbb{Z}^d,\; t \geq 0.$$ $(\xi_{t,k})_{t \geq 0}$ is then a Markov process on the space $\{0,1\}^{\mathbb{Z}^d}$ for which the configuration that is identically equal to 0 (denoted here by $\underline{0}$) is absorbing. In case $\lambda_i > 0$ only for $i =1$, $(\xi_{t,1})$ is the contact process of Harris (\cite{harris}).
\begin{theorem}
For all $d\geq 2$, if $\sum_{i=1}^\infty \lambda_i = \infty$, then $$ \lim_{k \to \infty} P\left(\xi_{t,k} \neq \underline{0} \text{ for all } t \right) = 1.$$
\end{theorem}
\begin{proof}
It is enough to prove the case $d=2$. Fix $\delta > 0$ and $k \in \mathbb{Z}_+$. Let $t_n = n\delta$, for $n \in \{0,1,\ldots\}$. Fix $b$ such that $\lambda_{b} > 0$.
For $\vec{x}\in \mathbb{Z}^d$ and $n \in \mathbb{Z}_+$, let $F_{(\vec{x}, n)}$ be the event
$$\{D^{\vec{x}}_{[t_n,t_{n+1}]} = \varnothing\}\cap \bigcup_{a \in \mathbb{Z}} \left\{\begin{array}{l} D^{\vec{x} + a \vec{e}_1}_{[t_n,t_{n+1}]} = D^{\vec{x} + a \vec{e}_1 + b \vec{e}_2}_{[t_n,t_{n+1}]} = \varnothing,\\[.4cm] B^{(\vec{x}, \vec{x} + a \vec{e}_1)}_{[t_n, t_n + \delta/2]}\neq \varnothing,\; B^{(\vec{x}+a\vec{e}_1, \vec{x} + a \vec{e}_1 + b\vec{e}_2)}_{[t_n+\delta/2, t_{n+1}]} \neq \varnothing \end{array} \right\}.$$
Then,
$$P^k(F_{(\vec{x},n)}) = e^{-\delta} \left(1- \prod_{a=-k}^k \left(1- e^{-2\delta}\cdot (1-e^{-\frac{\lambda_{|a|} \delta}{2}}) \cdot (1-e^{-\frac{\lambda_{|b|}\delta}{2}})\right)\right).$$
By first taking $\delta$ small and then taking $k$ large, the probability of these events can be made arbitrarily close to 1.
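For illustration only, this probability can be evaluated numerically; in the following minimal Python sketch the rate sequence is an arbitrary illustrative choice and the $a=0$ term is excluded (which we read as referring to actual bonds):
\begin{verbatim}
import numpy as np

def prob_F(lam, b, delta, k):
    """P^k(F) = e^{-delta} (1 - prod_{0<|a|<=k}(1 - e^{-2 delta}
               (1 - e^{-lambda_|a| delta/2})(1 - e^{-lambda_b delta/2})))."""
    lams = np.array([lam(abs(a)) for a in range(-k, k + 1) if a != 0])
    term = (np.exp(-2.0 * delta)
            * (1.0 - np.exp(-lams * delta / 2.0))
            * (1.0 - np.exp(-lam(b) * delta / 2.0)))
    return np.exp(-delta) * (1.0 - np.prod(1.0 - term))

# e.g. prob_F(lambda i: 1.0/(i + 1), b=1, delta=0.05, k=2000)
# approaches e^{-delta} as k grows, hence 1 as delta shrinks.
\end{verbatim}
Moreover,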
$$\{\xi_{t_n,k}(\vec{x}) = 1\} \cap F_{(\vec{x},n)} \subseteq \bigcup_{a \in \mathbb{Z}}\left\{\begin{array}{l}\xi_{t_{n+1},k}(\vec{x}+a\vec{e}_1) \\=\xi_{t_{n+1},k}(\vec{x} + a \vec{e}_1 + b \vec{e}_2) =1\end{array} \right\}.$$
The proof is then completed with a comparison with oriented percolation almost identical to the one that established Proposition \ref{prop:anis}.
\end{proof}
\subsection{Other Oriented Graphs}
In this section we consider a graph ${\cal G^*}=(\mathbb{V}({\cal G^*}),\mathbb{E}({\cal G^*}))$. Once more, the vertex set is $\mathbb{V}({\cal G^*})=\mathbb{Z}^d\times\mathbb{Z}_+,\ d\geq 1$. The set of bonds $\mathbb{E}({\cal G^*})$ consists of two disjoint subsets; one of them, denoted $\mathbb{E}_v$, only contains oriented bonds, and the other, $\mathbb{E}_h$, only unoriented bonds. These subsets are given by
\begin{align*}&\mathbb{E}_v=\{\langle (\vec{x},n),(\vec{x},n+1)\rangle: \vec{x}\in\mathbb{Z}^d,n\in\mathbb{Z}_+\},\\&\mathbb{E}_h=\{\langle (\vec{x},n),(\vec{x}+i\cdot \vec{e}_m,n)\rangle: \vec{x}\in\mathbb{Z}^d,\;n\in\mathbb{Z}_+,\;i\in\mathbb{Z},\;m\in\{1,\dots,d\}\}.\end{align*}
That is, we are considering the hypercubic lattice where there are nearest neighbour, oriented bonds along the vertical direction and long range, unoriented bonds parallel to all other coordinate axes.
We consider an anisotropic oriented Bernoulli percolation on this graph. Given $\epsilon\in(0,1)$ and a sequence $(p_i)_{i=1}^\infty$ in the interval $[0,1]$, each bond $e\in\mathbb{E}$ is open with probability $\epsilon$ or $p_{\|e\|}$, if $e\in\mathbb{E}_v$ or $e\in\mathbb{E}_h$, respectively.
Given two vertices $(\vec{x},m)$ and $(\vec{y},n)$ with $m < n$, we say that $(\vec{x},m)$ and $(\vec{y},n)$ are connected if there exists a path $$\langle (\vec{x},m)=(\vec{x}_0,n_0),(\vec{x}_1,n_1),\dots,(\vec{x}_s,n_s)=(\vec{y},n)\rangle$$ such that $\langle (\vec{x}_i,n_i),(\vec{x}_{i+1},n_{i+1})\rangle\in\mathbb{E}_h$ or ($\vec{x}_i=\vec{x}_{i+1}$ and $n_{i+1}=n_i+1$) for all $i=0,\dots,s-1$, and the bonds $\langle (\vec{x}_i,n_i),(\vec{x}_{i+1},n_{i+1})\rangle$ are open for all $i=0,\dots,s-1$. That is, all allowed paths use vertical bonds only in the upward direction. We use the notation $\{(\vec{0},0)\stackrel{*}{\leftrightarrow}\infty\}$ to denote the set of configurations in which there is an infinite open path starting at $(\vec{0},0)$. We also use the notations $P$ and $P^k$ to denote the non-truncated and the truncated (at range $k$) probability measures, respectively.
\begin{theorem}\label{hex} For any $d\geq 2$, any $\epsilon >0$ and any sequence $(p_i)_{i=1}^\infty$ such that $\sum_{i\in\mathbb{N}} p_i =\infty$, we have ${\displaystyle \lim_{k\rightarrow\infty}P^k\{(\vec{0},0)
\stackrel{*}{\leftrightarrow} \infty\}=1}$.
\end{theorem}
A weaker result was proven in \cite{FL} (see Theorem 6 therein) in the context of non-oriented and isotropic percolation. The proof of Theorem \ref{hex} is inspired by the proof given there (\cite{FL}).
\begin{proof}
It is sufficient to prove the theorem for $d = 2$.
Let $\upgamma: \mathbb{Z} \to \mathbb{Z}^2$ be the function satisfying
$$\upgamma(0) = \vec{0}, \qquad \upgamma(m+1)- \upgamma(m) = \begin{cases} \vec{e}_1&\text{if $m$ is even,}\\-\vec{e}_2&\text{if $m$ is odd.}\end{cases}$$
Define the events
$$H_{m,n} = \left\{\begin{array}{l}\text{$(\upgamma(m),n)$ and $(\upgamma(m+1),n)$ are connected }\\
\text{by a path of open bonds of $\mathbb{E}_h$ that is }\\\text{entirely contained in the line that contains}\\\text{$(\upgamma(m),n)$ and $(\upgamma(m+1),n)$}\end{array}\right\},\; m \in \mathbb{Z},\;n\in \mathbb{Z}_+.$$
Clearly, $P^k(H_{m,n}) = P^k(H_{0,0})$ for all $m,n$. Also note that, if $(m_1, n_1) \neq (m_2,n_2)$, then the line that contains $(\upgamma(m_1),n_1)$ and $(\upgamma(m_1 + 1), n_1)$ does not share any bonds of $\mathbb{E}_h$ with the line that contains $(\upgamma(m_2),n_2)$ and $(\upgamma(m_2+1), n_2)$. Hence, the events $H_{m,n}$ defined above are independent. Moreover, we have
\begin{equation}\label{eq:one_dimension}
\lim_{k \to \infty} P^k(H_{m,n}) = 1
\end{equation}
(a proof of this can be found in the first few lines of the proof of Theorem 6 in \cite{FL}).
Now, fix $\epsilon > 0$ and $\delta > 0$. Let $N$ be an integer satisfying $(1-(1-\epsilon)^{N})^2 > 1-\delta/2$. Then, using \eqref{eq:one_dimension}, choose $k > 0$ such that $(P^k(H_{0,0}))^{2N} > 1 - \delta/2$. Then let
$$\Lambda_0 = \left\{(a,n)\in \mathbb{Z} \times \mathbb{Z}_+: a + n \text{ is even} \right\}.$$
For each $(a,n) \in \Lambda_0$, let $\zeta(a,n)$ be the indicator function of the event
\[\left(\bigcap_{m=aN}^{aN + 2N -1}H_{m,n} \right) \cap \left( \bigcup_{m=aN}^{aN + N -1} \left\{\langle (\upgamma(m),n),(\upgamma(m),n+1)\rangle \text{ is open}\right\} \right)\cap\]
\[\left( \bigcup_{m=aN+N}^{aN + 2N -1} \left\{\langle (\upgamma(m),n),(\upgamma(m),n+1)\rangle \text{ is open}\right\} \right).\]
Then, the elements of the sequence of random variables $(\zeta(a,n))_{(a,n) \in \Lambda_0}$ are independent and, by the choice of $N$ and $k$, each of them is equal to 1 with probability at least $1-\delta$. Now note that an infinite sequence $(a_i)_{i =0}^\infty$ such that $a_0=0$, $|a_{i+1} - a_{i}| = 1$ and $\zeta(a_i,i) = 1$ for each $i$ necessarily corresponds to an infinite open path in ${\cal G^*}$. Since the field $(\zeta(a,n))$ dominates an oriented site percolation on $\Lambda_0$ with parameter $1-\delta$, and $\delta$ is arbitrary, the probability of existence of such a sequence can be taken arbitrarily close to 1.
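For illustration, the required $N$ is elementary to compute; a minimal Python sketch (the example values of $\epsilon$ and $\delta$ are arbitrary):
\begin{verbatim}
import math

def choose_N(eps, delta):
    """Smallest integer N with (1 - (1 - eps)^N)^2 > 1 - delta/2."""
    target = math.sqrt(1.0 - delta / 2.0)   # need 1-(1-eps)^N > target
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - eps))

# e.g. eps = 0.1, delta = 0.01 gives N = choose_N(0.1, 0.01) = 57;
# one then picks k so that P^k(H_00)^(2N) > 1 - delta/2.
\end{verbatim}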
\end{proof}
\section*{Acknowledgements}
This work was done during B.N.B.L.'s sabbatical stay at IMPA; he would like to thank Rijksuniversiteit Groningen and IMPA for their hospitality. The research of B.N.B.L. was supported in part by CNPq and FAPEMIG (Programa Pesquisador Mineiro).
\section{Introduction}
\label{sec:introduction}
In many body quantum systems, mean-field theory focuses on the
average behavior of single constituents (see e.g. \cite{RaggWer89,RaggWer91,Werner92}). Large atomic ensembles are
systems of this kind. Typical global states of these systems have the property
that the state restricted to a single atom is independent of the individual
atom. If we average the state restricted to a single atom over all atoms
of the ensemble, we obtain the same state as the restriction to each individual
atom. For asymptotically large systems, this mean-field average corresponds to
an effective classical system. Since one is only concerned with expectation
values of observables of single atoms, correlations between different atoms are
irrelevant for the mean-field limit.
A kind of ``first-order correction'' to the classical mean-field limit are
``mean-field fluctuations''. Observables for testing mean-field
fluctuations are built as follows: one looks at the deviation of a single atom
observable from its mean-field expectation value. Whereas the mean-field
expectation value is the same for each single atom, the expectation value of
the deviation may depend on the individual atom. A ``mean-field fluctuation
observable'' (fluctuation operator) is an appropriate average of the
individual mean-field deviations over all atoms.
It is well known that, if a large atomic ensemble is prepared in a homogeneous
product state, i.e. each single atom is individually prepared in the same
state, the mean-field fluctuations effectively behave like a system of
non-interacting bosonic modes. In other words, a homogeneous product
state of a large atomic ensemble induces (via mean-field fluctuations) a ``Gaussian state'' on a system of bosonic
modes. This statement has to be interpreted in the limit of infinitely large
systems. It is also related to the well known ``Holstein-Primakoff transformation'' \cite{PhysRev.58.1098}
which relates large spin systems to bosonic systems.
The ``bosonic nature'' of mean-field fluctuations for a large atomic
ensemble can also be interpreted as ``simulating'' bosonic systems by large
atomic ensembles.
It can be observed in experiments that mean-field fluctuations can have a ``bosonic behavior''
by building interfaces between atomic ensembles and light
\cite{RevModPhys.82.1041}.
Here, a laser is appropriately interacting with a gas
of atoms confined to a glass box at room temperature. The state of the laser
field can be stored into the atomic ensemble by using the effective degrees of
freedom of the mean-field fluctuations. Conversely, one can also perform an
inverse process, by transferring the state of the mean-field fluctuations of
an atomic ensemble to bosonic modes of a light field.
One can also imagine using a similar
technique to store the state of a laser field in an ensemble of
atoms trapped by the periodic
potential of an optical lattice. The interesting point here is that the implementation of
quantum cellular automata for optical lattice systems is a natural task. In an
optical lattice, each atom occupies a lattice site. A quantum cellular
automaton is a global (mostly reversible) quantum operation whose local action
on a single site subsystem only affects the neighboring sites \cite{SchuWer04}.
A very interesting issue combines the simulation of bosonic modes by atomic
ensembles, on one hand, with the implementation of quantum cellular automata acting on
atomic ensembles, on the other hand: first, store the state of a laser field
into an ensemble of atoms (arranged within an optical lattice), then implement
a quantum cellular automaton acting on the atomic ensemble, and finally
``release the light'' from the atomic ensemble. By the overall process, we
obtain an incoming light field and an outgoing scattered light field. The
question is now:
\begin{quote}
What kind of effective operation describes this ``scattering process''?
\end{quote}
We expect to obtain operations that go beyond the ``Gaussian world''
which can be used as ``non-Gaussian add-ons''. Since Gaussian operations are
limited in their ability to perform quantum information tasks, this may open a
door to perform new tasks.
Whether the mean-field fluctuations of a large atomic ensemble behave like
bosonic modes depends on the state of the atomic ensemble. If we want to find
an answer to the question given above, we will answer the following
question first:
\begin{quote}
For which states of large atomic ensembles do the mean-field fluctuations
behave like bosonic modes, and is there a set of states with bosonic mean-field fluctuations which is invariant under application of quantum cellular automata?
\end{quote}
We shall see that Theorem~\ref{thm-main} provides an answer to this question.
For states of large atomic ensembles whose correlations decay exponentially
with the distance between the single atoms (exponential clustering), the mean-field fluctuations
behave like bosonic modes. In particular, the set of states with exponential clustering
is invariant under quantum cellular automata, since the action of a quantum
cellular automaton on a single site system only affects a finite set of neighbors.
As a consequence, the following process is possible: a laser field
interacts with a large atomic ensemble such that the Gaussian state of the laser field
is encoded into a homogeneous product state of the atoms. A quantum cellular
automaton acting on the atomic ensemble is implemented. The resulting state of
the atomic ensemble again possesses bosonic mean-field fluctuations. Finally,
we can again use the interaction between the laser field and the atomic
ensemble to transfer the state of the atomic ensemble (almost faithfully) to
the bosonic modes of the laser.
The total process induces an operation on bosonic modes which maps an initial
Gaussian state to some bosonic state, which can be non-Gaussian. With help of
Theorem~\ref{prop-expansion}, the correlation functions of the resulting state
can be written as a perturbation of a Gaussian state. This may be helpful in
order to decide which states of atomic ensembles correspond to Gaussian states.
\subsection*{Outline of the paper}
In Section~\ref{sec:univ-descr-cont} we provide the appropriate mathematical
tools for describing mean-field fluctuations. Fixing a given atomic ensemble,
which kind of system of bosonic modes (if there is
one) corresponds to the mean-field fluctuations depends on the state. This requires comparing
different bosonic systems even if they differ in their canonical commutation
relations. The tensor algebra provides a universal description that covers all
different bosonic systems at once. Which bosonic system is realized is part of
the state of this ``universal continuous variable system''.
How to describe and analyze mean-field fluctuations by using the framework of
universal continuous variable systems (tensor algebras) is discussed in
Section~\ref{sec:mean-field-fluct}. Here, we also present the main results
(Theorem~\ref{prop-expansion}, Theorem~\ref{thm-main}).
Technical supplements in order to give self-contained proofs are postponed to
the sections in the appendix.
\subsection*{Acknowledgment}
This work was supported by the EU FP7 FET-Open project COQUIT (contract
number 233747).
\section{Universal description of continuous variable systems}
\label{sec:univ-descr-cont}
Our goal is to relate systems of atomic ensembles with
bosonic systems. In this section, we explain this relation in mathematical
detail and generality, by using the algebraic approach to quantum mechanics.
Here systems are given in terms of their observable algebras, which, in our
case, are C*-algebras or more general *-algebras when unbounded operators are
included.
\subsection{The tensor algebra as a universal playground}
We now introduce a formalism for describing general continuous variable
systems in a uniform manner, which is not so frequently used, but which has the advantage that the
Holstein-Primakoff transformation can be implemented easily and naturally.
Let $V$ be a complex vector space with a complex conjugation $J$. The tensor
algebra over $(V,J)$ is the unital associative *-algebra that is given by the complex
vector space
\begin{equation}
\9T(V,J)=\bigoplus_{n\in\7N} V^{\otimes n} \; .
\end{equation}
The product, which is just given by the tensor product, is determined by
\begin{equation}
(v_1\otimes v_2\otimes \cdots\otimes v_n)(w_1\otimes w_2\otimes\cdots\otimes
w_m)= v_1\otimes v_2\otimes \cdots\otimes v_n\otimes w_1\otimes w_2\otimes\cdots\otimes
w_m
\end{equation}
and the adjoint is determined by
\begin{equation}
(v_1\otimes v_2\otimes \cdots\otimes v_n)^*=Jv_{n}\otimes
Jv_{n-1}\otimes\cdots\otimes Jv_1 \; ,
\end{equation}
where $v_1,\cdots, v_n,w_1,\cdots, w_m\in V$. Note that $V^{\otimes 0}\cong
\7C$ corresponds to the multiples of the unit operator $\11$. Obviously, there
is a linear embedding $\Phi$ of $V$ into the tensor algebra $\9T(V,J)$ such
that $\Phi(v)^*=\Phi(Jv)$. In the following, we call the operators $\Phi(v)$
``generalized field operators''.
The tensor algebra $\9T(V,J)$ represents the
observable algebra for a wider class of continuous variable systems. The detailed type of the
system, e.g. fermionic or bosonic, is encoded in the states under
consideration.
A state $\omega$ is described by normalized positive linear functional on the
tensor algebra, i.e. $\omega(A^*A)\geq 0$ and
$\omega(\11)=1$. Each state is determined by the $n$-point correlation
functions
\begin{equation}
\omega_n(v_1,\cdots ,v_n)=\omega(v_1\otimes\cdots\otimes v_n)
\end{equation}
where $\omega_n$ is a $n$-multi-linear functional on $V$.
How is all this related to the
ordinary Hilbert space formalism of quantum mechanics? Well, with help of the
so called GNS representation we obtain a Hilbert space $\2H_\omega$, a vector $\Omega_\omega$
and a *-representation $\pi_\omega$ by linear (but
not necessarily bounded) operators on $\2H_\omega$ with
$\omega(A)=\SPROD{\Omega_\omega}{\pi_\omega(A)\Omega_\omega}$. This is just a
consequence of the positivity of the functional $\omega$.
\subsection{Realizing bosonic systems}
As a first example, let us have a look at the bosonic systems. For this purpose we
construct a ``quasi-free bosonic state'' from a bilinear form, called the covariance
$W$, on $V$. In order to obtain a positive functional, the positivity
condition $W(Jv,v)\geq 0$ has to be fulfilled. The corresponding quasi-free
state is determined according to the following conditions: For $n>0$ we put
\begin{equation}
\label{eq:2}
\omega_W(v_1\otimes\cdots\otimes v_n):=\sum_{P\in \Pi_2(n)} \prod_{(i,j)\in P} W(v_i,v_j)
\end{equation}
and $\omega_W(\11)=1$, where $\Pi_2(n)$ is the set of ordered partitions of
$\{1,\cdots ,n\}$ into two-elementary subsets. Note that the sum is empty for
odd $n$.
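For concreteness, the pairing sum \eqref{eq:2} can be evaluated directly; the following is a minimal Python sketch in which the covariance $W$ is an abstract callable (an illustrative assumption):
\begin{verbatim}
def pair_partitions(indices):
    """All partitions of a sorted index list into pairs (i, j), i < j."""
    if not indices:
        yield []
        return
    i = indices[0]
    for j in indices[1:]:
        rest = [m for m in indices[1:] if m != j]
        for part in pair_partitions(rest):
            yield [(i, j)] + part

def quasi_free_moment(W, vs):
    """omega_W(v_1 ... v_n) = sum over pair partitions of
    prod W(v_i, v_j); the empty sum gives 0 for odd n."""
    n = len(vs)
    if n % 2:
        return 0.0
    total = 0.0
    for part in pair_partitions(list(range(n))):
        term = 1.0
        for i, j in part:
            term *= W(vs[i], vs[j])
        total += term
    return total

# for n = 4 this yields the three Wick pairings
# W12 W34 + W13 W24 + W14 W23.
\end{verbatim}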
We associate to $W$ the hermitian form $\gamma$ which is given by
$\gamma(v_1,v_2)=W(Jv_1,v_2)-W(v_2,Jv_1)$. Using Araki's self-dual
formalism, the self-dual CCR algebra $\8{CCR}(V,J,\gamma)$ is the *-algebra
that is constructed as follows: Let $\9J_\gamma$ be the two sided ideal in
$\9T(V,J)$ that is generated by the operators
$\Phi(v)^*\Phi(v')-\Phi(v')\Phi(v)^*-\gamma(v,v')\11$. Then the corresponding
self-dual CCR algebra is given by the quotient *-algebra
\begin{equation}
\8{CCR}(V,J,\gamma):=\9T(V,J)/\9J_\gamma \; .
\end{equation}
As we will briefly sketch below, the state $\omega_W$ annihilates the ideal
$\9J_\gamma$, which implies that $\omega_W$ induces a unique state on
$\8{CCR}(V,J,\gamma)$. Two quasi-free states $\omega_W$ and $\omega_{W'}$ on
the tensor algebra belong to the same bosonic system if $W(Jv,v')-W(v',Jv)=
W'(Jv,v')-W'(v',Jv)=\gamma(v,v')$. In this case, both states annihilate the
ideal $\9J_\gamma$ and can be lifted to the same CCR algebra.
\begin{rem}\em
The quasi-free states $\omega_W$ have the special property of being ``even'',
i.e. the expectation value of a single generalized field operator vanishes:
$\omega_W(\Phi(v))=0$. To obtain all quasi-free states, we take advantage of
the following fact: Let $V_{\7R}^*$ be the real vector space of real continuous linear functionals on
$V$. Note that a functional $u\in V^*$ is real if it fulfills the condition
$u(Jv)=\overline{u(v)}$ for all $v\in V$. If we regard $V_{\7R}^*$ with its addition as an Abelian group, then $V^*_{\7R}$
acts by *-automorphisms on the tensor algebra $\9T(V,J)$. For each $u\in
V_{\7R}^*$, we define the *-automorphism $\alpha_u\in\8{Aut}(\9T(V,J))$ according to
\begin{equation}
\alpha_u\Phi(v):=\Phi(v)+u(v)\11 \; .
\end{equation}
By construction, the group law is fulfilled, i.e. $\alpha_{u_1}
\alpha_{u_2}=\alpha_{u_1+u_2}$ is valid for all $u_1,u_2\in V^*_{\7R}$. To
obtain a quasi-free state with a non-vanishing one-point function, we just
``shift'' an even quasi-free state $\omega_W$ by an appropriate automorphism
$\alpha_u$ yielding the quasi-free state $\omega_{W,u}=\omega_W\circ \alpha_u$
which has the one-point function
$\omega_{W,u}(\Phi(v))=\omega_W(\Phi(v))+u(v)=u(v)$.
\end{rem}
\subsection{Ideals to specify more detailed systems}
The discussion of the previous subsection shows that the tensor algebra can
indeed be used to describe various
systems by one unified object. To specify a more particular sub-class of
systems, additional algebraic relations have to be respected. This corresponds
to a proper two-sided ideal $\9J\subset\9T(V,J)$. By inclusion, the set of
two-sided ideals is partially ordered. The larger the ideal, the more specific
is the class of systems under consideration. For instance, if the hermitian form
$\gamma$ is non-degenerate, then the ideal $\9J_\gamma$ which describes the
corresponding CCR-relations is maximal: this can be interpreted as the most
specific description of a system, here for a set of bosonic modes. Each state
$\omega$ is accompanied by a natural system that is
given by the quotient algebra $\9A_\omega:=\9T(V,J)/\9J_\omega$, where
$\9J_\omega$ is the two-sided ideal $\9J_\omega:=\{A|\forall B,C:\omega(B^*AC)=0
\}$. Note that, by construction,
$\9J_\omega$ does not contain the identity operator and is therefore a proper
ideal.
\subsection{Comparison of states}
But what does it mean that two states on the tensor algebra are close to each
other? To give a precise answer to this question, we need to compare states
quantitatively. For this purpose, we assume that $V$ is a Banach space.
The dual space of the tensor algebra $\9T(V,J)$ is denoted by $\9T(V,J)^*$. It
consists of all linear functionals $F:\9T(V,J)\to \7C$ for which for all $n\in
\7N$ the semi-norms
\begin{equation}
\nu_n(F):=\sup_{(v_1,\cdots,v_n)\in V_1^n}|F(v_1\otimes\cdots\otimes v_n)| <\infty
\end{equation}
are finite, where $V_1=\{v\in V|\|v\|=1\}$ is the unit sphere. Now, $\9T(V,J)^*$ is closed in the following topologies:
\begin{itemize}
\item
The
{\em strong topology} is the locally convex topology that is induced by the
family of semi-norms $\nu_n$, $n\in \7N$.
\item
The {\em weak topology} is the locally convex topology that is induced by the
family of semi-norms
$\nu_{(v_1,\cdots,v_n)}(F):=|F(v_1\otimes\cdots\otimes v_n)|$ with $(v_1,\cdots,v_n)\in\bigcup_{k}V^k$.
\end{itemize}
From an experimental perspective, the strong topology is related to the
comparison of two states $\omega$ and
$\omega'$. Suppose we estimate for a finite family of vectors $v_1,\cdots,
v_n\in V$
the correlation functions
$\omega(v_1\otimes\cdots\otimes v_n)$ and
$\omega'(v_1\otimes\cdots\otimes v_n)$. Then the modulus of the difference
of the correlation functions can be used as a measure of how
``close'' $\omega$ and $\omega'$ are to each other.
To give an example, we consider the correlation functions of two
quasi-free states $\omega_W$ and $\omega_{W'}$, where the covariances $W,W'$
have a norm difference that is given by $\|W-W'\|=\sup_{v_1,v_2\in V_1}|W(v_1,v_2)-W'(v_1,v_2)|$.
\begin{prop}
\label{prop:quasi-free-contin}
Let $\omega_W$ and $\omega_{W'}$ be quasi-free states with covariances $W$
and $W'$ respectively, then for each $n\in\7N$ the semi-norm difference of the
quasi-free states satisfies the bound
\begin{equation}
\nu_n(\omega_W-\omega_{W'})\leq\|W-W'\| \
|\Pi_2(n)|\sum_{k=1}^{n/2} \ \|W\|^{k-1}\|W'\|^{n/2-k} \; .
\end{equation}
\end{prop}
A direct consequence of the proposition (which we prove in the appendix) is that, if $W\to W'$
in norm, then $\omega_W\to\omega_{W'}$ in the strong topology. In
other words, the mapping $W\mapsto\omega_W$ is continuous in the respective
topologies.
\section{Mean-field fluctuations}
\label{sec:mean-field-fluct}
Many systems under consideration possess a large number of independent degrees of
freedom, so that they can be idealized by infinite systems in the
thermodynamic limit. Here we model this situation by an infinite (countable)
lattice $\Lambda$ that possesses a distance function
$d:\Lambda^2\to\7R_+$. The observable algebra of the global system is the
so called {\em quasi-local algebra} that is constructed by the infinite tensor
product
\begin{equation}
\6A(\Lambda)=\bigotimes_{x\in\Lambda}\6A(x)
\end{equation}
of single cell C*-algebras $\6A\cong\6A(x)$. For a given lattice point $x\in \Lambda$, the natural embedding
of the single site algebra which identifies $\6A$ with $\6A(x)\subset\6A(\Lambda)$
is denoted by $\iota_x$. We are going to use this mapping later on quite often.
However, in view of non-equilibrium thermodynamics, the nature of global states of an
infinite system can be very different from that of equilibrium states, and
the calculation of expectation values for such states may be a hard
computational task. To analyze global states, one looks at the
asymptotic behavior of certain properties within the mesoscopic range. For
this purpose, one restricts the global state to the local observable algebras
that correspond to finite subsets $X\subset\Lambda$; the algebra of such a subset
is given by the finite tensor product
\begin{equation}
\6A(X)=\bigotimes_{x\in X}\6A(x) \, .
\end{equation}
Note that for an inclusion $X\subset Y\subset\Lambda$, it follows immediately
that $\6A(X)\subset \6A(Y)$.
Taking a global state $\omega_\Lambda$, we obtain for each finite subset
$X\subset \Lambda$ a restricted state $\omega_X:=\omega_\Lambda|_{\6A(X)}$
which lives on finitely many degrees of freedom. This yields a net of states
$(\omega_X)_{X\subset\Lambda}$ that is indexed by the partially ordered set of
finite subsets of the lattice $\Lambda$. Roughly speaking, the basic idea behind the
Holstein-Primakoff transformation is to analyze the behavior of
each of the states $\omega_X$ concerning their ``bosonic nature'', i.e. to
what extent they ``simulate'' continuous variable systems. We shall see that
each restriction $\omega_X$ induces a state $\hat\omega_X$ on the tensor
algebra $\9T(\6A,*)$, where $\6A$ is the observable algebra of a single cell
system. To be of ``bosonic nature'', the induced state $\hat\omega_X$ has to
fulfill ``almost'' the canonical commutation relations. This means that there
is an antisymmetric hermitian form $\gamma$ such that the induced state
$\hat\omega_X$ is ``almost'' annihilating the ideal $\9J_\gamma$: A typical
behavior is $\hat\omega_X(A)=O(|X|^{-1/2})$ for each operator $A$ that belongs
to the ideal $\9J_\gamma$.
\subsection{Inducing states and $\sqrt{\rm n}$-fluctuations}
Let $\omega_\Lambda$ be a state of the global system. Then we obtain the net
of restricted states $(\omega_X)_{X\subset\Lambda}$ that are indexed by the
partially ordered set of finite subsets $X\subset \Lambda$. The induction of
states works by using ``fluctuation operators''
associated with the restricted state $\omega_X$ and an operator $a\in\6A$:
\begin{equation}
\Phi_{\omega_X}(a):=\frac{1}{|X|^{1/2}}\sum_{x\in X}
[\iota_{x}a-\omega_X(\iota_{x}a)\11] \; .
\end{equation}
This yields a representation $\Phi(a)\mapsto\Phi_{\omega_X}(a)$ of the tensor
algebra and the induced state
$\hat\omega_{X}$ is determined by its $n$-point functions according to
\begin{equation}
\hat\omega_{X}(a_1\otimes\cdots\otimes a_n):=\omega_X(\Phi_{\omega_X}(a_1)\cdots\Phi_{\omega_X}(a_n))
\; .
\end{equation}
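A finite-size numerical illustration may be helpful here; the following minimal Python sketch (the spin-$\frac12$ single-site algebra and the homogeneous product state are our own illustrative choices) builds $\Phi_{\omega_X}(a)$ for a chain of $n$ sites and checks that the expectation of the commutator of two fluctuation operators already reproduces $\gamma(a,b)=\omega([a^*,b])$ at finite $n$:
\begin{verbatim}
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(a, x, n):
    """iota_x(a): the single-site operator a acting on site x of n."""
    return reduce(np.kron, [a if y == x else I2 for y in range(n)])

def fluctuation(a, n, rho1):
    """Phi_X(a) = |X|^{-1/2} sum_x (iota_x a - omega(a) 1), |X| = n."""
    mean = np.trace(rho1 @ a)
    F = sum(embed(a, x, n) for x in range(n)) - n * mean * np.eye(2 ** n)
    return F / np.sqrt(n)

n = 8
rho1 = np.array([[1, 0], [0, 0]], dtype=complex)  # single-site state
rho = reduce(np.kron, [rho1] * n)                 # homogeneous product state
Fx, Fy = fluctuation(sx, n, rho1), fluctuation(sy, n, rho1)
comm = Fx @ Fy - Fy @ Fx
print(np.trace(rho @ comm))  # = omega([sx, sy]) = 2i at every finite n
\end{verbatim}
The deviation of order $O(|X|^{-1/2})$ mentioned above concerns higher correlation functions involving operators of the ideal, not this lowest-order expectation value.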
The main goal is now to study the asymptotic limit of large systems. For this
purpose, let $\Lambda$ be a countable lattice. Let $(\omega_X)_{X\subset
\Lambda}$ be a net of states that is indexed by finite subsets of
$\Lambda$, where $\omega_X$ is a state on $\6A(X)$. The asymptotic properties
in the limit $X\to\Lambda$ can be investigated by looking at the net of
induced states $(\hat\omega_{X})_{X\subset\Lambda}$ according to the classification:
\begin{itemize}
\item
The state $\omega_\Lambda$ has
$\sqrt{\rm n}$-fluctuations if the induced net
$(\hat\omega_{X})_{X\subset\Lambda}$ converges
$w-\lim_{X\to\Lambda}\hat\omega_{X}=\hat\omega_{\Lambda}$ in the weak
topology on $\9T(\6A,*)$.
\item
The state $\omega_\Lambda$ has
strongly $\sqrt{\rm n}$-fluctuations if the induced net
$(\hat\omega_{X})_{X\subset\Lambda}$ converges
$s-\lim_{X\to\Lambda}\hat\omega_{X}=\hat\omega_{\Lambda}$ in the strong
topology on $\9T(\6A,*)$.
\item
The state $\omega_\Lambda$ has
weakly $\sqrt{\rm n}$-fluctuations if for the induced net
$(\hat\omega_{X})_{X\subset\Lambda}$ each semi-norm $\nu_n$, $n\in \7N$, is uniformly bounded:
$\sup_{X\subset\Lambda}\nu_n(\hat\omega_{X})<\infty$.
\end{itemize}
Obviously, strongly $\sqrt{\rm n}$-fluctuations imply $\sqrt{\rm n}$-fluctuations, which in turn imply weakly $\sqrt{\rm n}$-fluctuations.
We now consider states that are ``single site homogeneous''.
These states are defined by the property that their restriction
to a single site is independent of the
lattice point.
\subsection{Induced states for asymptotically large systems}
Asymptotically large atomic ensembles can be described by an infinite lattice
system which is in some state $\omega_\Lambda$. Suppose that the
corresponding induced net of states $(\hat\omega_{X})_{X\subset\Lambda}$ has
weakly $\sqrt{\rm n}$-fluctuations. What conclusions can we draw from this
property? What do we know about the asymptotic behavior of the correlation
functions of the induced states $\hat\omega_{X}$?
Since each semi-norm $\nu_n(\hat\omega_X)$ is uniformly bounded in the size
of the subset $X$, we know that there are weak limit points. In order to
analyze the properties of these limit points more systematically, we will give
here an ``operational'' description of what limit points are.
Within a concrete experimental realization, the atomic ensemble under consideration
will always be finite. If the setup is scalable, then, at least in principle,
the same experiment can be performed for various sizes of the system, i.e. the
subset $X$ can be regarded as a ``classical configuration''. Here one can also
think of a situation, where atoms occupy only finitely many sites of a lattice
randomly. Thus we are dealing with a preparation device that prepares for each
finite atomic ensemble $X\subset \Lambda$ a state $\omega_X$ with a certain
probability $\mu(X)$. The probability distribution $\mu:X\mapsto \mu(X)$ is nothing
else but a classical state on the system of finite subsets
$X\subset \Lambda$. The corresponding observable algebra consists of all bounded
complex valued functions $f:\Lambda\supset X\mapsto f(X)$. The expectation
value of an observable $f$ for the state $\mu$ is then
given by $\mu(f)=\sum_{X\subset\Lambda}\mu(X) f(X)$.
A general classical state
on the system of finite subsets $X\subset\Lambda$ is a complex valued linear
functional on the algebra of bounded complex valued functions
$f:X\mapsto f(X)$ such that the following holds:
\begin{itemize}
\item
Positivity: $\eta(f)\geq 0$ for each $f\geq 0$.
\item
Normalization: $\eta(\11)=1$.
\end{itemize}
A preparation device that produces asymptotically large atomic ensembles has
the property that, in the limit $X\to\Lambda$, the probability that only a
finite number of lattice sites are occupied is vanishing. This corresponds to
classical states $\eta$ with the following property:
\begin{itemize}
\item
$\eta(f)=0$ if $\lim_{X\to\Lambda}f(X)=0$.
\end{itemize}
A state $\eta$ with this property is called a ``limit point''. To justify this notion,
suppose that the limit $\lim_X f(X)=c$ exists. In this case
$\lim_X(f(X)-c)=0$, and $\eta(f-c\11)=\eta(f)-c=0$
follows, which means that the expectation value $\eta(f)=c$ coincides for
all limit points $\eta$.
What can we say about the limit points of the induced net
$(\hat\omega_X)_{X\subset\Lambda}$ for a state $\omega_\Lambda$
that has weakly $\sqrt{\rm n}$-fluctuations? To each operator
$A\in\9T(\6A,*)$ of the tensor algebra, we assign a bounded function
which is given by $X\mapsto \hat\omega_X(A)$ \footnote{This function is
indeed bounded, which can be verified as follows: The operator $A$ can be
written as a finite direct sum $\bigoplus_{k=0}^nA_k$ with $A_k\in\6A^{\otimes
k}$. Since $\omega_\Lambda$ has weakly $\sqrt{\rm n}$-fluctuations, we obtain
that $|\hat\omega_X(A)|\leq \sum_{k=0}^n|\hat\omega_X(A_k)|\leq \sum_{k=0}^n
C_k \|A_k\|<\infty$ with $C_k=\sup_{X\subset\Lambda}\nu_k(\hat\omega_X)$.}.
Now, each limit point $\eta$ induces a state on the tensor
algebra by $\hat\omega_\eta(A)=\eta(X\mapsto \hat\omega_X(A))$. Here we use the
suggestive notation $\eta(X\mapsto f(X)):=\eta(f)$ to represent an expectation value.
The states $\hat\omega_\eta$ describe the mean-field fluctuations
of asymptotically large
atomic ensembles. The next proposition states that these mean-field
fluctuations behave like bosonic modes. Consider a state $\omega_\Lambda$ that
is single site homogeneous with single site restriction
$\omega=\omega_\Lambda\circ \iota_x$.
Then there is a natural antisymmetric
hermitian form $\gamma(a,b):=\omega([a^*,b])$ on the observable algebra $\6A$ of the
single site system. The ideal $\9J_{\gamma}$, which represents the canonical
commutation relations, is generated by the operators
$I_\gamma(a,b):=[\Phi(a),\Phi(b)]-\gamma(a^*,b)\openone$.
\begin{prop}
\label{prop-bosonic}
Let $\omega_\Lambda$ be a single site homogeneous state having weakly $\sqrt{\rm n}$-fluctuations.
Then for each limit point $\eta$, the state $\hat\omega_\eta$ annihilates the
ideal $\9J_{\gamma}$ and can uniquely be lifted to a state on the
corresponding CCR algebra.
\end{prop}
\begin{proof}
By Lemma~\ref{lem-00} of the appendix, we conclude that $\lim_X\hat\omega_X(A)=0$ for each
operator in the ideal $\9J_{\gamma}$. This implies
$\hat\omega_\eta(A)=\eta(X\mapsto\hat\omega_X(A))=0$, so that
$\hat\omega_\eta$ annihilates $\9J_\gamma$.
\end{proof}
\begin{rem}\em
Proposition~\ref{prop-bosonic} can be interpreted, at least to a certain extent,
by saying that a state $\omega_\Lambda$ of a large atomic ensemble
with weakly $\sqrt{n}$-fluctuations possesses bosonic mean-field
fluctuations. This is justified by the fact that each limit point
$\hat\omega_\eta$ is a state on the CCR algebra $\8{CCR}(\6A,*,\gamma)$ which
describes a bosonic system. On the other hand, the CCR algebra is an algebra of
unbounded operators and it might happen that the GNS representation associated
to a limit state $\hat\omega_\eta$ has ``exotic'' properties. Recall that the GNS
representation is given by a Hilbert space $\2H$ an algebra homomorphism $\pi$
that assigns to each operator $A$ in the CCR algebra a linear (unbounded)
operator on $\2H$ as well as a normalized vector $\Omega\in\2H$ such that
$\hat\omega_\eta(A)=\langle \Omega, \pi(A)\Omega\rangle$. The question that arises
here is whether it is possible to build the exponential
$\exp(\8i\pi(\Phi(a)))$ of a field operator $\pi(\Phi(a))$ in the
representation $\pi$, where $a=a^*$ is selfadjoint. If we can do this, then we
obtain a representation of the Weyl algebra by bounded operators. If it is not
possible to build the exponential we are dealing with an
``exotic'' case (see e.g. \cite{RESI1}). In order to exclude this kind of
pathologies, we need to consider more specific examples of states.
\end{rem}
\subsection{States with exponential clustering}
A state $\omega_\Lambda$ on $\6A(\Lambda)$ has exponential clustering (with
respect to $d$) if for local operators $A\in\6A(X)$ and $B\in\6A(Y)$ the identity
\begin{equation}
\omega_\Lambda(AB)=\omega_\Lambda(A)\omega_\Lambda(B)+G_{(X,Y)}(A,B)\8e^{-d(X,Y)}
\end{equation}
is valid for a bounded bilinear function $G_{(X,Y)}:\6A(X)\times\6A(Y)\to\7C$ such that
$|G_{(X,Y)}(A,B)|\leq G_0 \|A\|\|B\|$ for all $A,B$, for all finite regions $X,Y$. Here $G_0$ is
a constant that is independent of the localization regions. The bilinear forms
$G_{(X,Y)}$ express locally the deviation of the state from a product state,
scaled with the exponential of the distance. Therefore $G_{(X,Y)}$
indicates the presence of correlations that are exponentially decreasing with
the distance. To give a name, we call the family of bilinear maps
$G=(G_{(X,Y)})_{X,Y\subset \Lambda}$ the {\em correlators}.
Note that, equivalently, exponential clustering is given by the condition
\begin{equation}
|\omega_\Lambda(AB)-\omega_\Lambda(A)\omega_\Lambda(B)|\leq G_0 \|A\| \|B\| \ \8e^{-d(X,Y)}
\end{equation}
for all $A\in\6A(X),B\in\6A(Y)$. Here $d(X,Y)=\min_{x\in X,y\in Y}d(x,y)$ is
the distance between the finite subsets $X,Y\subset\Lambda$. We always require
here that the distance $d$ is regular, i.e., the maximal number $N(r)$ of lattice sites
within a ball of radius $r$ is bounded by a polynomial in $r$.
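For instance (an illustration of our own, not taken from the text): for a $D$-dimensional cubic lattice equipped with the Euclidean distance, one has $N(r)\leq C(1+r)^D$ for a suitable constant $C$, so this distance is regular.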
The exponential clustering property can be used to derive a useful cluster expansion
in terms of expectation values of the single site restriction $\omega$ and the correlators
$G$. In order to write down this expansion, we introduce the following
objects:
\begin{itemize}
\item
For each finite subset $Y\subset \Lambda$ we introduce the
``spread'' $\Delta(Y):=\max_{y\in Y}d(y,Y\setminus y)$ which measures the
maximal distance of a point to its relative complement in $Y$ (see the worked example after this list).
\item
A $k$-element subset
$\{y_1,\cdots,y_k\}\subset X$ is called ``spread optimally enumerated'' if the enumeration
fulfills the condition
$d(y_l,\{y_{l+1},\cdots,y_k\})=\Delta(y_l,\cdots,y_k)$ for all
$l=1,\cdots,k-1$. Note that each subset can be spread optimally
enumerated.
\item
Given a tuple $x\in X^n$, we choose a spread optimal enumeration
of the range $\8{Ran}(x)=\{y_1,\cdots,y_{|\8{Ran}(x)|}\}$ and we consider the
correlators $G^x_k:=G_{(y_k,\{y_{k+1},\cdots,y_{|\8{Ran}(x)|}\})}$ which test
the correlations for splitting the site $y_k$ from the remaining points
$\{y_{k+1},\cdots,y_{|\8{Ran}(x)|}\}$, where $k=1,\cdots,|\8{Ran}(x)|-1$.
\item
For a family of operators $a_1,\cdots,a_n\in\6A$ and a tuple $x\in X^n$ whose
range $\{y_{1},\cdots,y_{|\8{Ran}(x)|}\}$ is spread optimally enumerated, we
introduce the single site ``cluster operators'' $a^x_{k}\in\6A$ which are given by the ordered
product $a_k^x:=\prod_{j\in x^{-1}(y_k)}a_j$, where the ordering is according
to the value of the index in $x^{-1}(y_k)=\{j=1,\cdots,n|x_j=y_k\}$.
\end{itemize}
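As a worked example of the first two notions (the concrete numbers are our own illustration): suppose $Y=\{y_1,y_2,y_3\}$ consists of three sites with mutual distances $d(y_1,y_2)=4$, $d(y_1,y_3)=5$, and $d(y_2,y_3)=1$. Then $d(y_1,Y\setminus y_1)=4$, $d(y_2,Y\setminus y_2)=1$, and $d(y_3,Y\setminus y_3)=1$, so the spread is $\Delta(Y)=4$. A spread optimal enumeration must begin with $y_1$, since $y_1$ realizes the spread; both $(y_1,y_2,y_3)$ and $(y_1,y_3,y_2)$ are spread optimal, because for the remaining two-element set the defining condition is automatic.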
The following theorem, whose proof is given in the appendix, states that
correlation functions of the induced states $\hat\omega_X$ admit a cluster
expansion in terms of the single site restriction $\omega$, the correlators
$G$ and cluster operators $a_k^x$:
\begin{thm}[Cluster expansion]
\label{prop-expansion}
Let $\omega_\Lambda$ be a single site homogeneous state with single site
restriction $\omega$ and exponential
clustering with respect to $d$. For each $a_1,\cdots,a_n\in\8{ker}(\omega)$
and for each finite subset $X\subset \Lambda$ the $n$-point correlation
function of the induced state $\hat\omega_X$ can be written as
\begin{equation}
\label{equ-expansion}
\begin{split}
&\hat\omega_X(a_1\otimes\cdots\otimes a_n)=\hat\omega^{\otimes
X}(a_1\otimes\cdots\otimes a_n)+F_X(a_1\otimes\cdots\otimes a_n) \; ,
\end{split}
\end{equation}
where the correlation function of the induced homogeneous product state
$\hat\omega^{\otimes X}$ and the functional $F_X$ are given by
\begin{equation}
\label{eq:1}
\begin{split}
\hat\omega^{\otimes X}(a_1\otimes\cdots\otimes a_n)
&=|X|^{-\frac{n}{2}}\sum_{x\in X^n}\omega(a_{1}^x)\cdots\omega(a_{|\8{Ran}(x)|}^x)
\\
F_X(a_1\otimes\cdots\otimes a_n)
&=|X|^{-\frac{n}{2}}\sum_{x\in X^n}\sum_{k=1}^{|\8{Ran}(x)|-1} \omega(a_{1}^x)\cdots\omega(a_{k-1}^x)
\\
&\times \ \ G^x_k(a^x_k,a^x_{k+1}\cdots
a^x_{|\8{Ran}(x)|})\8e^{-\Delta(y_k,y_{k+1},\cdots,y_{|\8{Ran}(x)|})} \; ,
\end{split}
\end{equation}
where for each $x\in X^n$ the range $\8{Ran}(x)$ is spread optimally enumerated.
\end{thm}
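To make the mechanism of the expansion transparent, it is instructive to spell out the lowest nontrivial case $n=2$ (the following display is our own specialization of (\ref{eq:1}), not a statement from the text). For $a_1,a_2\in\8{ker}(\omega)$, the diagonal tuples $x=(y,y)$ produce the single cluster operator $a^x_1=a_1a_2$, whereas for off-diagonal tuples the product of single site expectation values vanishes and only the correlator term survives:
\begin{equation}
\hat\omega_X(a_1\otimes a_2)
=\omega(a_1a_2)
+|X|^{-1}\sum_{x\in X^2,\, x_1\neq x_2}
G^x_1(a^x_1,a^x_2)\,\8e^{-d(x_1,x_2)} \; .
\end{equation}
The first term is the truncated two-point function $\omega(a_1a_2)-\omega(a_1)\omega(a_2)$ (recall $\omega(a_i)=0$), and the second term is bounded by $G_0\|a_1\|\|a_2\|\sup_{x_1\in X}\sum_{x_2\neq x_1}\8e^{-d(x_1,x_2)}$ uniformly in $X$, since the distance $d$ is regular.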
As the cluster expansion is stated above, it holds for all correlation
functions for which the operators $a_1,\cdots,a_n$ are chosen in the kernel
$\8{ker}(\omega)$ of the single site restriction. If this is not the case, we
can express the correlation function in terms of $a_i=a_i'+\omega(a_i)\11$ where
$a_i'\in\8{ker}(\omega)$. The tensor product $a_1\otimes\cdots \otimes a_n$
can be expanded in terms of the operators $a_i'\in \8{ker}(\omega)$ according to
\begin{equation}
\label{eq:3}
a_1\otimes\cdots \otimes a_n=\sum_{J\subset \{1,\cdots, n\}} \bigotimes_{i\in J}
a'_{i} \prod_{j\in \{1,\cdots, n\}\setminus J} \omega(a_j)
\end{equation}
where the sum runs over all ordered subsets. To get the general
cluster expansion for the full tensor algebra, one only has
to apply Theorem~\ref{prop-expansion} to (\ref{eq:3}) for each summand.
It is known that the induced net $(\hat\omega^{\otimes X})_{X\subset\Lambda}$
converges weakly to a quasi-free state.
We show here a slightly stronger result:
\begin{prop}
\label{prop-prod}
A homogeneous product state $\omega^{\otimes\Lambda}$ has strongly
$\sqrt{\rm n}$-fluctuations. In particular, the induced net
$(\hat\omega^{\otimes X})_{X\subset\Lambda}$ converges strongly to the quasi-free state
$\hat\omega_{\8{qf}}$ whose covariance is given by the truncated two-point
function $W(a,b)=\omega(ab)-\omega(a)\omega(b)$.
\end{prop}
Homogeneous product states are the simplest among states that have exponential
clustering. For the general case, the following is true:
\begin{thm}
\label{thm-main}
Each single site homogeneous state with exponential clustering has weakly
$\sqrt{\rm n}$-fluctuations.
\end{thm}
The proof of the theorem is quite technical and therefore postponed to the
appendix. It takes advantage of the cluster expansion of $F_X$ into single
site expectation values and correlators. The basic idea to get a uniform bound
for the semi norms $\nu_n(F_X)$ is to count the number of terms that are
contributing to the cluster expansion. In total, we sum over all tuples in $X^n$ which
gives $|X|^n$ terms. Since we normalize by multiplying $|X|^{-n/2}$, a naive
counting would give the non-uniform bound $\nu_n(F_X)\leq \8O(|X|^{n/2})$. By a
more careful analysis, it turns out that effectively only $|X|^{n/2}$ terms
are contributing. By choosing $a_1,\cdots,a_n\in\8{ker}(\omega)$,
the single site expectation value of a cluster operator
$\omega(a_k^x)$ is vanishing if $x^{-1}(y_k)=\{j\}$ contains only a single
element. Note that in this case we just have
$\omega(a^x_k)=\omega(a_j)=0$. This directly reduces the
number of terms contributing to the cluster expansion (\ref{eq:1}). A large
number of contributions also comes from tuples $x$ with range
$\{y_1,\cdots,y_{|\8{Ran}(x)|}\}$ for which the
spreads $\Delta(y_k,\cdots, y_{|\8{Ran}(x)|})$ are large. These
contributions are also of order $|X|^{n/2}$, since they are suppressed by the
exponential damping $\exp(-\Delta(y_k,\cdots, y_{|\8{Ran}(x)|}))$.
\begin{rem}\em
We can derive from the cluster expansion that
for asymptotically large atomic ensembles there is a quasi-free part from the
product state contribution and a perturbation which comes from the correlators.
Namely, for each weak limit point $\eta$ the state
$\hat\omega_\eta$ can be written as
\begin{equation}
\hat\omega_\eta=\hat\omega_{\8{qf}}+F_\eta
\end{equation}
with $F_\eta(A)=\eta(X\mapsto F_X(A))$. The functional $F_\eta$ is a perturbation of the quasi-free
limit state $\hat\omega_{\8{qf}}$ which may depend on the limit functional
$\eta$. Note that Theorem~\ref{thm-main} guarantees the existence of weak
limit points $F_\eta$,
since $\sup_{X\subset\Lambda}\nu_n(F_{X})<\infty$.
\end{rem}
\section{Conclusion}
We have shown that states of large atomic ensembles whose correlations are exponentially
decaying with the distance between atoms (exponential clustering) possess
bosonic mean-field fluctuations. In addition to that, these states
are invariant under applications of quantum cellular automata.
This enables the implementation of the following type of process: The bosonic
modes of a light field are coupled to a large atomic ensemble such that the
Gaussian state of the laser field is transferred almost perfectly
(with a precision here of order
$\8O(1/\sqrt{\mbox{number of single atom systems}})$) to a homogeneous product
state of the atoms. A quantum cellular
automaton acting on the atomic ensemble is then implemented. The resulting state of
the atomic ensemble again possesses bosonic mean-field fluctuations and
can be transferred back almost perfectly to
the bosonic modes of the light field.
The total process induces an operation on bosonic modes which maps an initial
Gaussian state to some bosonic state, which can be non-Gaussian. With help of
Theorem~\ref{prop-expansion}, the correlation functions of the resulting state
can be written as a perturbation of a Gaussian state. This may be helpful in
order to decide which states of atomic ensembles correspond to Gaussian states.
It is still an open problem to decide in general from the state of the atomic ensemble whether the
resulting induced state is Gaussian or not. Concerning states with
exponential clustering, the cluster
expansion (Theorem~\ref{prop-expansion}) appears to be a reasonable technique in order to address this problem.
Here the correlation functions of the fluctuation operators can be expanded
into the correlation functions of the homogeneous product state
$\omega^{\otimes X}$ (here $\omega$ is the restriction of the global
state to a single atom) and some correction $F_X$. For large atomic ensembles,
the correlation functions of the homogeneous product state $\omega^{\otimes X}$
correspond to a Gaussian state, whereas $F_X$ can be regarded as a
``perturbation''.
Furthermore, it would be desirable to construct new examples of atomic
ensemble states (in particular beyond homogeneous product states) whose
induced net has strongly $\sqrt{\rm n}$-fluctuations or $\sqrt{\rm
n}$-fluctuations.
In order to achieve more concrete results in this direction, one has to
consider more concrete examples. One suggestion is to consider ensembles
of two-level atoms arranged in a one-dimensional lattice. A natural class of
states for which mean-field fluctuations can be investigated are stabilizer
states which are invariant under the action of so-called Clifford quantum
cellular automata (see \cite{SchlVogtWer08,Schl09,Gutschow:1184639} and references given therein).
\section{Introduction}
Low-energy dynamical supersymmetry (SUSY) breaking with gauge mediation
is extremely attractive,
since it may not only solve various phenomenological problems
but also its dynamical nature may provide a
natural explanation of the large hierarchy between the electroweak and some
higher (say the Planck) scales \cite{Dine_review}. Several mechanisms
\cite{ADS,ISS,IY,Randall}
for dynamical SUSY breaking have been discovered
and their applications to realistic models have been also proposed
\cite{DNS,HIY, MH}.
Structures of the proposed models
\cite{DNS,HIY, MH}
predict a relatively large
SUSY-breaking scale $\Lambda > 10^6~ {\rm GeV} $
to provide sufficiently large soft
masses in the SUSY standard-model sector.
On the other hand, the overclosure condition of our universe yields a constraint
on the gravitino mass as $m_{3/2} \mathop{}_{\textstyle \sim}^{\textstyle <} 1 ~ {\rm keV} $
\cite{gravitino_mass},
which corresponds
to the SUSY-breaking scale $\Lambda \mathop{}_{\textstyle \sim}^{\textstyle <} 10^6 ~ {\rm GeV} $.
This is not achieved in the models referred to above.
In fact, a detailed analysis
\cite{MM}
on the models in Ref.~\cite{DNS}
has shown that the gravitino is likely to be heavier
than $1~ {\rm keV} $,
which necessitates a late-time entropy production
\cite{MM,MMY}
to dilute the gravitino energy density in the universe.
In this paper, we systematically construct gauge-mediated models
of low-energy SUSY breaking
with the structure of direct transmission
(that is, without messenger gauge interactions).
We obtain models in which
the gravitino mass can be set smaller than $1~ {\rm keV} $.
The existence of such models suggests that low-energy dynamical SUSY breaking
with gauge mediation does not necessarily require
complicated non-standard cosmology.
\section{Dynamical scale generation}
We first discuss a dynamics for scale generation since it is crucial
for the dynamical SUSY breaking in our models. We adopt a SUSY SU(2)
gauge theory with four doublet chiral superfields $Q_i$,
where $i$ is a flavor index ($i=1,\cdots,4$).
Without a superpotential, this theory has a flavor SU(4)$_F$ symmetry.
This SU(4)$_F$ symmetry is explicitly broken down to a global SP(4)$_F$
by a superpotential in our models.
We add gauge singlets $Y^a$ ($a=1, \cdots, 5$) which constitute
a five-dimensional representation of SP(4)$_F$
to obtain a tree-level superpotential
\begin{eqnarray}
W_Y = \lambda_Y Y^a (QQ)_a,
\end{eqnarray}
where $(QQ)_a$ denote a five-dimensional representation
of SP(4)$_F$ given by a suitable combination of gauge invariants $Q_iQ_j$.
An effective superpotential
\cite{IS}
which describes the dynamics of the SU(2) gauge interaction
may be given by
\begin{eqnarray}
W_{eff}=S(V^2 + V_a^2 - \Lambda^4) + \lambda_Y Y^a V_a
\label{dynamical_potential}
\end{eqnarray}
in terms of low-energy degrees of freedom
\begin{eqnarray}
V \sim (QQ), \quad V_a \sim (QQ)_a,
\end{eqnarray}
where $S$ is an additional chiral superfield, $\Lambda$ is a dynamically
generated scale, and a gauge invariant ($QQ$) denotes a singlet of
SP(4)$_F$ defined by
\begin{eqnarray}
(QQ)=\frac{1}{2} (Q_1 Q_2 + Q_3 Q_4).
\end{eqnarray}
The effective superpotential Eq.(\ref{dynamical_potential}) implies that
the singlet $V \sim (QQ)$
condenses as
\begin{eqnarray}
\label{VEV}
\langle V \rangle = \Lambda^2,
\end{eqnarray}
and SUSY is kept unbroken in this unique vacuum.
Since the vacuum preserves the flavor SP(4)$_F$ symmetry, we have no
massless Nambu-Goldstone boson. The absence of flat directions at this stage
is crucial for causing dynamical SUSY breaking as seen in the next section.
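These statements can be checked directly from Eq.(\ref{dynamical_potential}); the following $F$-term conditions are our own spelling-out of the standard argument:
\begin{eqnarray}
\frac{\partial W_{eff}}{\partial S} &=& V^2+V_a^2-\Lambda^4=0,
\nonumber \\
\frac{\partial W_{eff}}{\partial Y^a} &=& \lambda_Y V_a=0,
\nonumber \\
\frac{\partial W_{eff}}{\partial V} &=& 2SV=0,
\nonumber \\
\frac{\partial W_{eff}}{\partial V_a} &=& 2SV_a+\lambda_Y Y^a=0.
\end{eqnarray}
These conditions yield $V_a=Y^a=S=0$ and $V^2=\Lambda^4$, so that all $F$-terms vanish; fixing the phase of $V$ as in Eq.(\ref{VEV}) gives $\langle V \rangle = \Lambda^2$.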
\section{Dynamical SUSY breaking}
Let us further introduce a singlet chiral superfield $Z$
to consider a superpotential for dynamical SUSY breaking
\cite{IY}:
\begin{eqnarray}
W_0 = W_Y + \lambda Z (QQ).
\end{eqnarray}
For a relatively large value of the coupling $\lambda_Y$,
we again obtain the condensation Eq.(\ref{VEV})
with the low-energy effective superpotential approximated by
\begin{eqnarray}
W_{eff} \simeq \lambda \Lambda^2 Z.
\end{eqnarray}
On the other hand,
the effective K\"ahler potential is expected to take a form
\begin{eqnarray}
K = |Z|^2 - \frac{\eta}{4 \Lambda^2}|\lambda Z|^4 + \cdots,
\end{eqnarray}
where $\eta$ is a real constant of order one.
The effective potential for the scalar $Z$ (with the same notation as
the superfield) is given by
\begin{eqnarray}
V_{eff} \simeq |\lambda|^2 \Lambda^4 (1 +
\frac{\eta}{\Lambda^2} |\lambda|^4 |Z|^2).
\end{eqnarray}
If $\eta > 0$, this implies $\langle Z \rangle = 0$.
Otherwise we expect $|\lambda \langle Z \rangle| \sim \Lambda$,
since the effective potential is
lifted in the large $|Z|$ ($> \Lambda$) region \cite{IY,HIY,Shirman}.
In either case, the $F$-component of the $Z$ superfield has a nonvanishing
vacuum-expectation
value, $\langle F_Z \rangle \simeq \lambda \Lambda^2$, and thus SUSY is
dynamically broken in this model.
In the following analyses, we assume the latter case
$|\lambda \langle Z \rangle| \sim \Lambda$,
which results in the breakdown of $R$ symmetry.%
\footnote{The spontaneous breakdown of the $R$ symmetry produces a
Nambu-Goldstone $R$-axion. This $R$-axion is, however, cosmologically
harmless, since it acquires a mass from the $R$-breaking constant term in the
superpotential which is necessary to set the cosmological constant to
zero\cite{Bag}.
Modifications for the case $\langle Z \rangle = 0$
are touched upon in the final section.}
\section{One-singlet model}
\label{one_singlet}
Let us first consider a realistic model with one singlet
$Z$ for SUSY breaking which couples directly to $(QQ)$. It is referred to as
a `multiplier' singlet hereafter.
We introduce four pairs of
massive chiral superfields $d$, $\bar{d}$, $l$, $\bar{l}$, $d'$, $\bar{d}'$,
and $l'$, $\bar{l}'$ which are all singlets under the strong SU(2).
We assume that the $d$, $d'$ and $\bar{d}$, $\bar{d}'$
transform as the down quark and its antiparticle, respectively, under the
standard-model gauge group. The $l$, $l'$ and $\bar{l}$, $\bar{l}'$
are assumed to transform as the lepton doublet and its antiparticle,
respectively. These fields are referred to as messenger quarks and leptons.
The superpotential of the one-singlet model is given by
\begin{eqnarray}
W_1 = W_Y + Z(\lambda (QQ) + k_d d {\bar d} + k_l l {\bar l})
+ m_d d {\bar d}'+ m_{\bar d} d' {\bar d} + m_l l {\bar l}'
+ m_{\bar l} l' {\bar l},
\end{eqnarray}
where the $m$'s denote mass parameters.%
\footnote{Dynamical generation of these mass terms will be discussed in
the following sections.
Mass terms for SUSY-breaking transmission were considered
in Ref.\cite{HIY,Ran}.
In the course of writing this paper, we received a paper \cite{recent}
which also treated similar mass terms in SUSY-breaking models.}
For relatively small values of the couplings $k_d$ and $k_l$,
we have a SUSY-breaking vacuum with the vacuum-expectation values
of the messenger quarks and leptons vanishing.
Then the soft SUSY-breaking masses of the messenger quarks and leptons
are directly generated by $\langle F_Z \rangle =\lambda \Lambda^2\neq 0$
through the couplings $Z(k_d d {\bar d} + k_l l {\bar l})$.
The above SUSY-breaking vacuum is the
true vacuum as long as the mass parameters $m_\psi$ are much larger
than $\sqrt{k_\psi F_Z} \simeq \sqrt{k_\psi \lambda} \Lambda$
for $\psi=d,l$.
To find the stability condition of our vacuum, we examine the scalar potential
\begin{eqnarray}
V&=&| \lambda \Lambda^2 + k_d d \bar{d}+ k_l l \bar{l} |^2
+ |m_d d|^2 + |m_l l|^2 + |m_{\bar{d}}\bar{d}|^2 + |m_{\bar{l}}\bar{l}|^2
\nonumber \\
&& + |k_d Z \bar{d} + m_d \bar{d}'|^2
+|k_d Z d + m_{\bar{d}} d'|^2
+ |k_l Z \bar{l} + m_l \bar{l}'|^2
+|k_l Z l + m_{\bar{l}} l'|^2.
\end{eqnarray}
The vacuum
\begin{eqnarray}
\langle F_Z \rangle \simeq \lambda \Lambda^2, \quad
\langle d \rangle = \langle \bar{d} \rangle=\langle l \rangle
=\langle \bar{l} \rangle=\langle d' \rangle=\langle \bar{d}' \rangle
=\langle l' \rangle=\langle \bar{l}'\rangle=0
\end{eqnarray}
is stable when
\begin{eqnarray}
|m_d m_{\bar{d}}|^2 &>& |k_d \langle F_Z \rangle|^2 ,
\nonumber \\
|m_l m_{\bar{l}}|^2 &>& |k_l \langle F_Z \rangle|^2.
\label{stable_cond}
\end{eqnarray}
In the following analysis, we restrict ourselves to
the parameter region Eq.(\ref{stable_cond}).
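To see where Eq.(\ref{stable_cond}) comes from, it suffices to expand the scalar potential to quadratic order in the messenger scalars around this vacuum (a sketch using only the terms already displayed in $V$). For the messenger quarks, for instance,
\begin{eqnarray}
V \supset |m_d d|^2 + |m_{\bar{d}}\bar{d}|^2
+ \left( k_d \langle F_Z \rangle^* d \bar{d} + {\rm h.c.} \right),
\end{eqnarray}
and this quadratic form is positive definite if and only if
$|m_d m_{\bar{d}}|^2 > |k_d \langle F_Z \rangle|^2$;
the condition for the messenger leptons follows in the same way.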
The standard-model gauginos acquire their masses through loops of the messenger
quarks and leptons when $\langle Z \rangle \neq 0$
(see Figs.\ref{gaugino_mass}-\ref{3F_gaugino_mass} and the Appendix).
The gaugino masses are obtained as
\begin{eqnarray}
m_{\tilde{g}_1} &=& \frac{\alpha_1}{4 \pi} \left\{
\frac{2}{5} \left|\frac{k_d \langle F_Z \rangle}{m_d m_{\bar{d}}}
\right|^2
\frac{k_d \langle F_Z \rangle}{\sqrt{m_d m_{\bar{d}}}}
{\cal F}_d
\right.
\nonumber \\
&&\left. ~~+ \frac{3}{5}
\left|\frac{k_l \langle F_Z \rangle}{m_l m_{\bar{l}}}
\right|^2
\frac{k_l \langle F_Z \rangle}{\sqrt{m_l m_{\bar{l}}}}
{\cal F}_l
\right\}\left(1+O((k_\psi \langle F_{Z}\rangle
/m_\psi m_{\bar{\psi}})^2)\right),
\label{bino_mass}
\\
m_{\tilde{g}_2} &=& \frac{\alpha_2}{4 \pi}
\left| \frac{k_l \langle F_Z \rangle}{m_l m_{\bar{l}}} \right|^2
\frac{k_l \langle F_Z \rangle}{\sqrt{m_l m_{\bar{l}}}}
{\cal F}_l
\left(1+O((k_l\langle F_{Z}\rangle /m_l m_{\bar{l}})^2)\right),
\label{wino_mass}
\\
m_{\tilde{g}_3} &=& \frac{\alpha_3}{4 \pi}
\left| \frac{k_d \langle F_Z \rangle}{m_d m_{\bar{d}}} \right|^2
\frac{k_d \langle F_Z \rangle}{\sqrt{m_d m_{\bar{d}}}}
{\cal F}_d
\left(1+O((k_d \langle F_{Z}\rangle /m_d m_{\bar{d}})^2)\right),
\label{gluino_mass}
\end{eqnarray}
where we have adopted SU(5) GUT normalization of U(1)$_Y$ gauge coupling,
$\alpha_1 \equiv \frac{5}{3}\alpha_Y$, and $\tilde{g}_3, \tilde{g}_2$, and
$\tilde{g}_1$ are gauginos of the
standard-model gauge groups SU(3)$_C$, SU(2)$_L$, and U(1)$_Y$, respectively.
The ${\cal F}_\psi$ for $\psi=d,l$ are defined in the Appendix.
Here, we have assumed
$(k_\psi \langle F_Z \rangle/m_\psi m_{\bar{\psi}})^2 \ll 1$.
Notice that
the leading term of $(k_\psi\langle F_Z \rangle/m_\psi m_{\bar{\psi}})$
in Fig.\ref{gaugino_mass} vanishes.
Hence the GUT relation among gaugino masses,
$m_{\tilde{g}_1}/\alpha_1=m_{\tilde{g}_2}/\alpha_2=m_{\tilde{g}_3}/\alpha_3$,
does not hold even when
all the couplings and mass parameters for messenger quarks and
leptons satisfy the GUT relation at the GUT scale.
The soft SUSY-breaking masses for squarks and sleptons $\tilde{f}$ in the
standard-model sector are generated
by two-loop diagrams shown in Fig.\ref{two_loop_sfermion_mass}.
We obtain them as
\begin{eqnarray}
m^2_{\tilde{f}}=2
\left[ C_3^{\tilde{f}} \left(\frac{\alpha_3}{4 \pi} \right)^2
\Lambda^{(d)2}
+ C_2^{\tilde{f}} \left(\frac{\alpha_2}{4 \pi} \right)^2
\Lambda^{(l)2}
+ \frac{3}{5} Y^2 \left(\frac{\alpha_1}{4 \pi} \right)^2
\left(\frac{2}{5} \Lambda^{(d)2}
+ \frac{3}{5} \Lambda^{(l)2} \right) \right],
\end{eqnarray}
where $C_3^{\tilde{f}}=\frac{4}{3}$ and $C_2^{\tilde{f}}=\frac{3}{4}$
when $\tilde{f}$ is in the fundamental representation of SU(3)$_C$
and SU(2)$_L$, and $C_i^{\tilde{f}}=0$ for the gauge singlets,
and $Y$ denotes the U(1)$_Y$ hypercharge
($Y \equiv Q-T_3$). Here the effective scales $\Lambda^{(\psi)}$
are of order $k_\psi \langle F_Z \rangle/m_\psi$.
For example, the effective scales $\Lambda^{(\psi)}$
are given by
\begin{eqnarray}
\Lambda^{(\psi)2}=\frac{|k_\psi \langle F_Z \rangle|^2}{\bar{m}^2_\psi}
\end{eqnarray}
if the messenger quarks and leptons have a degenerate SUSY-invariant mass
$\bar{m}_\psi$,%
\footnote{
In the present analysis, we only discuss
the sfermion masses qualitatively. A more detailed analysis
will be given in Ref.\cite{NT}.}
which is an eigenvalue of the mass matrix
\begin{eqnarray}
\left(
\begin{array}{cc}
k_\psi \langle Z \rangle & m_{\bar{\psi}}\\
m_{\psi} & 0
\end{array}
\right).
\end{eqnarray}
The SUSY-breaking squark
and slepton masses are proportional to
$(k_\psi \langle F_Z \rangle /m_\psi m_{\bar{\psi}})$.
On the other hand, the gaugino masses have an extra suppression
$(k_\psi \langle F_Z \rangle /m_\psi m_{\bar{\psi}})^2$ as shown in
Eqs.(\ref{bino_mass})-(\ref{gluino_mass})
since the leading term of $(k_\psi\langle F_Z \rangle/m_\psi m_{\bar{\psi}})$
vanishes. Thus, to avoid
too low masses for the gauginos, we must take
$(k_\psi \langle F_Z \rangle /m_\psi m_{\bar{\psi}})^2 > 0.1$.
It is interesting that this condition is necessary to have a
light gravitino with mass less than $1~ {\rm keV} $ as shown below.
We are now at a point to derive a constraint on the gravitino mass.
The conservative constraint comes from the experimental lower
bounds\footnote{
These bounds are derived assuming the GUT relation of the gaugino
masses. The bound on the gluino mass assumes that the gluino is
heavier than all squarks. A more detailed phenomenological analysis on the
models in this paper will be given in Ref.\cite{NT}.}
on the masses of wino and gluino\cite{LEP,PDG}\footnote{
We find in Ref.\cite{NT} that even when
$(k \langle F_Z \rangle /m^2)^2 \simeq 1$, the
constraint from the right-handed slepton mass is weaker than those
from the gaugino masses.}
\begin{eqnarray}
m_{\tilde{g}_2} \mathop{}_{\textstyle \sim}^{\textstyle >} 50~ {\rm GeV} ,~~~m_{\tilde{g}_3} \mathop{}_{\textstyle \sim}^{\textstyle >} 220~ {\rm GeV} ,
\end{eqnarray}
which yield
\begin{eqnarray}
\left|\frac{k_l \langle F_Z \rangle}{m_l m_{\bar{l}}} \right|^2
\frac{k_l \langle F_Z \rangle}{\sqrt{m_l m_{\bar{l}}}}
{\cal F}_l & \mathop{}_{\textstyle \sim}^{\textstyle >} & 1.9 \times10^4~ {\rm GeV} ,
\\
\left|\frac{k_d \langle F_Z \rangle}{m_d m_{\bar{d}}} \right|^2
\frac{k_d \langle F_Z \rangle}{\sqrt{m_d m_{\bar{d}}}}
{\cal F}_d & \mathop{}_{\textstyle \sim}^{\textstyle >} & 2.3 \times10^4~ {\rm GeV} .
\end{eqnarray}
We obtain
\begin{eqnarray}
\langle F_Z \rangle & \mathop{}_{\textstyle \sim}^{\textstyle >} & \frac{3 \times 10^8}{k_l {\cal F}_l^2}
\left( \frac{m_l m_{\bar{l}}}{k_l \langle F_Z \rangle} \right)^5~ {\rm GeV} ^2,
\\
\langle F_Z \rangle & \mathop{}_{\textstyle \sim}^{\textstyle >} & \frac{5 \times 10^8}{k_d {\cal F}_d^2}
\left( \frac{m_d m_{\bar{d}}}{k_d \langle F_Z \rangle} \right)^5~ {\rm GeV} ^2.
\end{eqnarray}
The gravitino mass is given by
\begin{eqnarray}
m_{3/2}&=&\frac{\langle F_Z \rangle}{\sqrt{3}M} \mathop{}_{\textstyle \sim}^{\textstyle >}
\frac{0.8}{k_l}\left(\frac{0.1}{{\cal F}_l}\right)^2
\left( \frac{m_l m_{\bar{l}}}{k_l \langle F_Z \rangle} \right)^5
\times 10^{-2}~ {\rm keV} .
\\
m_{3/2}&=&\frac{\langle F_Z \rangle}{\sqrt{3}M} \mathop{}_{\textstyle \sim}^{\textstyle >}
\frac{1}{k_d}\left(\frac{0.1}{{\cal F}_d}\right)^2
\left( \frac{m_d m_{\bar{d}}}{k_d \langle F_Z \rangle} \right)^5
\times 10^{-2}~ {\rm keV} .
\end{eqnarray}
Since $|{\cal F}_\psi|$ has the maximal value $0.1$
(see the Appendix), we see that in the region of
$0.2 \mathop{}_{\textstyle \sim}^{\textstyle <} (\frac{k_\psi \langle F_Z \rangle}{m_\psi m_{\bar{\psi}}})^2
\mathop{}_{\textstyle \sim}^{\textstyle <} 1$ and $k_\psi \simeq 1$ for $\psi=d,l$,
the gravitino can be lighter than $1~ {\rm keV} $, which is required from the standard
cosmology.
We have found that the gravitino mass can be set smaller than
$1~ {\rm keV} $ if $m_\psi$
are of order the SUSY-breaking scale $\Lambda$. In principle, the masses
$m_\psi$ of the messenger quarks and leptons might be considered
to arise from dynamics of another strong
interaction. In that case, however, it seems accidental to have
$m_\psi \sim \Lambda$. Thus it is natural to consider a model in which the
SUSY-breaking dynamics produces simultaneously the mass terms for the
messenger quarks and leptons. This possibility will be
discussed in section~\ref{three-singlet}.
We note that there is no CP violation in this model. All
the coupling constants $k_d,~k_l$ and the mass parameters $m_\psi$
($\psi=d,l,\bar{d},\bar{l}$) can be taken real without loss of
generality. The vacuum-expectation values $\langle QQ \rangle$ and
$\langle Z \rangle$ are also taken real by
phase rotations of the corresponding superfields.
Thus only the $\langle F_Z \rangle$ is a complex quantity
and then all the gaugino masses have
a common phase coming from the phase of $\langle F_Z \rangle$. However,
this phase can be eliminated by a common rotation
of the gauginos.\footnote{
The rotation of the gauginos induces a complex phase in the Yukawa-type
gauge couplings of the gauginos. However, such a complex phase is eliminated
by a rotation of the sfermions and Higgs fields $H$ and $\bar{H}$,
since we have no SUSY-breaking trilinear couplings and no SUSY-breaking
$B$ term $B\mu H {\bar{H}}$ at the tree-level.}
\section{Two-singlet model}
Next we consider a realistic model with two `multiplier' singlets
$Z_1$ and $Z_2$ for SUSY breaking.
We introduce two pairs of chiral superfields $d$, $\bar{d}$
and $l$, $\bar{l}$ which are all singlets under the strong SU(2).
We also introduce an additional singlet $X$ to obtain
a superpotential
\footnote{We could construct a model without the additional singlet
superfield \cite{HIY} at the sacrifice of complete naturalness.
It may manage to accommodate a light gravitino with
$m_{3/2}\sim 1~ {\rm keV} $ in a strong-coupling regime.}
\begin{eqnarray}
W_2 = W_Y + Z_1(\lambda_1(QQ) - f_1X^2) + Z_2(\lambda_2(QQ) - f_2X^2)
+ X(f_dd{\bar d} + f_ll{\bar l}).
\end{eqnarray}
Without loss of generality, we may set $f_2 = 0$ by an appropriate redefinition
of $Z_1$ and $Z_2$.
Then the superpotential
yields a vacuum with $\langle X \rangle = \sqrt{f_1^{-1}\lambda_1} \Lambda$.
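As a quick check (our own spelling-out; it is not written in this form above), this vacuum expectation value of $X$ follows from the $F$-flatness condition for $Z_1$ combined with the condensation Eq.(\ref{VEV}):
\begin{eqnarray}
\frac{\partial W_2}{\partial Z_1}
= \lambda_1 \langle (QQ) \rangle - f_1 \langle X \rangle^2
= \lambda_1 \Lambda^2 - f_1 \langle X \rangle^2 = 0,
\end{eqnarray}
up to an irrelevant phase of $X$.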
The masses of messenger quarks and leptons are given by
\begin{eqnarray}
m_\psi=f_\psi \langle X \rangle
\end{eqnarray}
for $\psi=d,l$.
Since $F_{Z_2}=\lambda_2 \Lambda^2$ is nonvanishing, SUSY is broken.
The soft masses of the messenger quarks and leptons
stem from radiative corrections.
For example, the diagrams shown in Fig.\ref{Kahler_corr} generate an effective
K\"ahler potential of the form
\begin{eqnarray}
\frac{\delta}{(16 \pi^2)^2} |\lambda_1|^2 |\lambda_2|^2
\lambda_1 |f_1|^2 f_1^*
\frac{Z_2^*Z_2(\lambda_1^* Z_1^* + \lambda_2^* Z_2^*) X^{*2} X}{\Lambda^6}
(|f_d|^2 f_d d{\bar d} + |f_l|^2 f_l l{\bar l}),
\end{eqnarray}
which gives soft mass terms of the form
\begin{eqnarray}
\frac{\delta}{(16 \pi^2)^2}|\lambda_1|^2 |\lambda_2|^2
\lambda_1 \lambda_2^* |f_1|^2 f_1^*
\frac{|F_{Z_2}|^2 \langle Z_2 \rangle \langle X \rangle^3}{\Lambda^6}
(|f_d|^2 f_d d{\bar d} + |f_l|^2 f_l l{\bar l}),
\end{eqnarray}
when $\langle Z_2 \rangle \neq 0$.
Since the induced soft masses for messenger squarks and sleptons are
suppressed by loop factors, the gravitino mass
is expected to be much larger than $1~ {\rm keV} $ in this model.
\section{Three-singlet model}
\label{three-singlet}
We finally obtain a realistic model with three `multiplier' singlets
$Z_1$, $Z_2$, and $Z_3$ for SUSY breaking.
The model is a combination of the one- and the two-singlet models discussed
in the previous sections.
The masses $m_\psi$ of messenger quarks and leptons in the one-singlet
model are generated by Yukawa couplings of $X$ introduced in the two-singlet
model.
The superpotential in this three-singlet model is given by
\begin{eqnarray}
W_3 &=& W_Y + Z_1(\lambda_1(QQ) + k_{d1}d{\bar d} + k_{l1}l{\bar l} - f_1X^2)
+ Z_2(\lambda_2(QQ) + k_{d2}d{\bar d} + k_{l2}l{\bar l} - f_2X^2)
\nonumber \\
&&+ Z_3(\lambda_3(QQ) + k_{d3}d{\bar d} + k_{l3}l{\bar l} - f_3X^2)
+ X(f_d d {\bar d}' + f_{\bar d} d' {\bar d} + f_l l {\bar l}'
+ f_{\bar l} l' {\bar l}).
\label{superpotential_model3}
\end{eqnarray}
Without loss of generality, we may set
$k_{d1} = k_{l1} = f_2 = 0$
by an appropriate redefinition of $Z_1$, $Z_2$, and $Z_3$.
For relatively small values of the couplings
$k_{d2}$, $k_{l2}$, $\lambda_3$, $k_{d3}$, $k_{l3}$, and $f_3$,
the superpotential yields a vacuum with
$\langle X \rangle = \sqrt{f_1^{-1}\lambda_1} \Lambda$
and the vacuum expectation values of the messenger quarks and leptons
vanishing.
The masses $m_\psi$ of messenger quarks and leptons in the one-singlet model
are given by
\begin{eqnarray}
m_\psi=f_\psi \langle X \rangle
\end{eqnarray}
for $\psi=d,l,\bar{d}, \bar{l}$.
In this vacuum, the $F$-components of $Z_i$ are given by
\begin{eqnarray}
F_{Z_1} \simeq 0, \quad
F_{Z_2} \simeq \lambda_2 \Lambda^2, \quad
F_{Z_3} \simeq \lambda_3 \Lambda^2-f_3 \langle X \rangle^2,
\end{eqnarray}
and thus SUSY is broken.
The masses of gauginos, squarks, and sleptons are generated
as in the one-singlet model in section~\ref{one_singlet}. We should replace
$k_\psi \langle F_Z \rangle $ in Eqs.(\ref{bino_mass})-(\ref{gluino_mass}) by
$k_{\psi 2} \langle F_{Z_2} \rangle+ k_{\psi 3} \langle F_{Z_3} \rangle$.
If $k_{d2}/k_{d3} \neq k_{l2}/k_{l3}$, the phases of the three
gauginos' masses are different from one another. Then, the phases of the
gauginos' masses cannot be eliminated by a common rotation of the
gaugino fields and thus CP is broken. However, there is no such problem
in the GUT models since $k_{d2}/k_{d3} \simeq k_{l2}/k_{l3}$ holds even at
low energies.
We comment on the $\mu$-problem\cite{DNS,DGP}. If the superfield $X$ couples
to $H \bar{H}$ where $H$ and $\bar{H}$ are Higgs fields in the standard model,
the SUSY-invariant mass $\mu$ for Higgs $H$ and $\bar{H}$ is generated. To
have the desired mass $\mu \simeq (10^2-10^3)~ {\rm GeV} $, we must choose a small
coupling constant $\lambda_h \simeq 10^{-3}$, where $\lambda_h$ is defined
by $W=\lambda_h X H \bar{H}$. This is natural in the sense of 't Hooft.
We note that no large $B$ term ($B\mu {H} {\bar{H}}$)
is induced since the $F$-component of $X$ is very small.
Hence the scale $\mu$ may originate from the SUSY-breaking scale
in the present model.%
\footnote{
There has been also proposed an interesting solution to the $\mu$-problem
in Ref.\cite{Yana}.}
Finally, we should stress that the superpotential
Eq.(\ref{superpotential_model3}) is natural, since it has a global symmetry
U(1)$_R \times$U(1)$_\chi$, where U(1)$_R$ is an $R$ symmetry. That is, the
superpotential Eq.(\ref{superpotential_model3}) is a general one allowed
by the global U(1)$_R \times$U(1)$_\chi$.\footnote{
This global symmetry may forbid mixings between the messenger quarks and
the down-type quarks in the standard-model sector. This avoids naturally
the flavor-changing neutral current problem\cite{DNS2}. Then
there exists the lightest stable particle in the messenger sector\cite{DGP2}.}
The charges for chiral superfields
are given in Table~\ref{table_charge}.
\section{Conclusion}
We have constructed gauge-mediated
SUSY-breaking models
with direct transmission of SUSY-breaking effects
to the standard-model sector. In our three-singlet model,
the gravitino mass $m_{3/2}$ is expected to be
smaller than $1~ {\rm keV} $ naturally as required from the standard cosmology:
If all the Yukawa coupling constants are of order one,
the SUSY-breaking scale $m_{SUSY}$
transmitted into the standard-model sector is given by $m_{SUSY} \simeq
0.1 \frac{\alpha_i}{4 \pi} \Lambda$.
Imposing $m_{SUSY} \simeq (10^2-10^3)$ GeV, we get
$\Lambda \simeq (10^5-10^6)$ GeV, which yields the gravitino mass $m_{3/2}
\simeq (10^{-2}-1)$ keV.
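For orientation, the last step can be made explicit (our own arithmetic, assuming $\lambda$ and the other Yukawa couplings of order one so that $\langle F_Z \rangle \simeq \Lambda^2$, and a gravitational scale $M \sim 10^{18}~ {\rm GeV} $):
\begin{eqnarray}
m_{3/2}=\frac{\langle F_Z \rangle}{\sqrt{3}M}
\simeq \frac{\Lambda^2}{\sqrt{3}M}
\simeq (10^{-2}-1)~ {\rm keV} 
\qquad \mbox{for} \quad \Lambda \simeq (10^5-10^6)~ {\rm GeV} .
\end{eqnarray}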
In the present models, we have four gauge groups
SU(3)$_C \times$ SU(2)$_L \times$U(1)$_Y \times$SU(2). It is well known
that the three gauge coupling constants of the SUSY standard-model gauge groups
meet at the GUT scale $\sim 10^{16}~ {\rm GeV} $. It is remarkable
that in the three-singlet model, all the four gauge coupling constants
meet at the scale $\sim 10^{16}~ {\rm GeV} $ as shown in Fig.\ref{gauge_coupling}.
Here, we have assumed that the gauge coupling constant $\tilde{\alpha}_2$
of the strong SU(2) becomes strong ($\tilde{\alpha}_2/\pi \simeq 1$) at
the scale $\Lambda \simeq (10^5-10^6)~ {\rm GeV} $.
So far we have assumed spontaneous breakdown of $R$ symmetry in the models.
If $\langle Z \rangle = 0$, we need to introduce $R$-breaking mass terms
such as $md{\bar d} + m'l{\bar l}$ to generate
the standard-model gaugino masses.
These mass terms might be induced through the $R$ symmetry breaking
which is necessary for the cosmological constant to be vanishing
\cite{Bag}.
\newpage
\section*{Appendix}
In this Appendix, we evaluate the standard-model gaugino masses in
our SUSY-breaking models.
The superpotential which relates to the mass terms of messenger
fields $\psi$, $\bar{\psi}$, $\psi'$, and $\bar{\psi}'$ for $\psi=d,l$ is
represented as
\begin{eqnarray}
W=\sum_{\psi=d,l}(\bar{\psi}, \bar{\psi}')
M^{(\psi)}
\left(
\begin{array}{c}
\psi \\
\psi'
\end{array}
\right),
\end{eqnarray}
where the mass matrix $M^{(\psi)}$ is given by
\begin{eqnarray}
M^{(\psi)}=
\left(
\begin{array}{cc}
m^{(\psi)}_1 & m^{(\psi)}_3\\
m^{(\psi)}_2 & 0
\end{array}
\right).
\label{mass_matrix}
\end{eqnarray}
In the one-singlet model, the mass parameters $m^{(\psi)}_i$
are given by
\begin{eqnarray}
m^{(\psi)}_1 &=& k_\psi \langle Z \rangle,
\\
m^{(\psi)}_2 &=& m_\psi,
\\
m^{(\psi)}_3 &=& m_{\bar{\psi}},
\end{eqnarray}
and in the three-singlet model, they are given by
\begin{eqnarray}
m^{(\psi)}_1 &=& k_{\psi 2} \langle Z_2 \rangle
+ k_{\psi 3} \langle Z_3 \rangle,
\\
m^{(\psi)}_2 &=& f_{\psi} \langle X \rangle ,
\\
m^{(\psi)}_3 &=& f_{\bar{\psi}} \langle X \rangle.
\end{eqnarray}
The soft SUSY-breaking mass terms of the messenger fields are given by
\begin{eqnarray}
{\cal{L}}_{soft}&=&\sum_{\psi=d,l} F^{(\psi)}\tilde{\psi} \tilde{\bar{\psi}},
\end{eqnarray}
where
\begin{eqnarray}
F^{(\psi)}=k_\psi \langle F_Z \rangle
\end{eqnarray}
in the one-singlet model and
\begin{eqnarray}
F^{(\psi)}=k_{\psi 2} \langle F_{Z_2} \rangle
+ k_{\psi 3} \langle F_{Z_3} \rangle
\end{eqnarray}
in the three-singlet model.
Then the standard-model gauginos acquire their masses through loops of the
messenger
quarks and leptons. Their masses of order $F^{(\psi)}/m^{(\psi)}$
are given by (see Fig.\ref{gaugino_mass})
\begin{eqnarray}
m_{\tilde{g}_3} &=& \frac{\alpha_3}{4\pi} F^{(d)}
\left( M^{(d)^{-1}} \right)_{11},
\\
m_{\tilde{g}_2} &=& \frac{\alpha_2}{4\pi} F^{(l)}
\left( M^{(l)^{-1}} \right)_{11},
\\
m_{\tilde{g}_1} &=& \frac{\alpha_1}{4\pi} \left\{
\frac{2}{5} F^{(d)} \left( M^{(d)^{-1}} \right)_{11}
+\frac{3}{5} F^{(l)} \left( M^{(l)^{-1}} \right)_{11}
\right\},
\end{eqnarray}
where the masses $m_{\tilde{g}_i}$ ($i=1,2,3$) denote the bino, wino, and
gluino masses, respectively, and we have adopted the SU(5) GUT normalization
of the U(1)$_Y$ gauge coupling ($\alpha_1 \equiv \frac{5}{3} \alpha_Y$).
Because of $\left( M^{(\psi)^{-1}} \right)_{11} =0$,
the above contributions vanish.
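This vanishing can be verified by a one-line computation: inverting Eq.(\ref{mass_matrix}) gives
\begin{eqnarray}
M^{(\psi)^{-1}}=
\frac{-1}{m^{(\psi)}_2 m^{(\psi)}_3}
\left(
\begin{array}{cc}
0 & -m^{(\psi)}_3\\
-m^{(\psi)}_2 & m^{(\psi)}_1
\end{array}
\right),
\end{eqnarray}
whose $(1,1)$ entry is identically zero for any values of the mass parameters.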
However, the contributions of higher powers of
$F^{(\psi)}/m^{(\psi)2}$ do not vanish in general:
We now work in a basis where the supersymmetric masses $M^{(\psi)}$ are
diagonalized as
\begin{eqnarray}
O_{\phi_\psi} M^{(\psi)} O_{\theta_\psi}^\dagger =
\left(
\begin{array}{cc}
m_{\psi 1} & 0\\
0 & m_{\psi 2}
\end{array}
\right).
\end{eqnarray}
Here the mass eigenstates are given by
\begin{eqnarray}
\left(
\begin{array}{c}
\psi_1 \\
\psi_2
\end{array}
\right)
=O_{\theta_\psi}
\left(
\begin{array}{c}
\psi \\
\psi'
\end{array}
\right)
=
\left(
\begin{array}{cc}
\cos \theta_\psi & -\sin \theta_\psi \\
\sin \theta_\psi & \cos \theta_\psi
\end{array}
\right)
\left(
\begin{array}{c}
\psi \\
\psi'
\end{array}
\right),
\\
\left(
\begin{array}{c}
\bar{\psi}_1 \\
\bar{\psi}_2
\end{array}
\right)
=O_{\phi_\psi}
\left(
\begin{array}{c}
\bar{\psi} \\
\bar{\psi}'
\end{array}
\right)
=
\left(
\begin{array}{cc}
\cos \phi_\psi & -\sin \phi_\psi \\
\sin \phi_\psi & \cos \phi_\psi
\end{array}
\right)
\left(
\begin{array}{c}
\bar{\psi} \\
\bar{\psi}'
\end{array}
\right),
\end{eqnarray}
where we have taken the mass matrices $M^{(\psi)}$ to be real,
which is always possible.
Then, for example, the contribution of order
$(F^{(\psi)}/m^{(\psi)})(F^{(\psi)}/m^{(\psi)2})^2$ to the gaugino
masses, which is shown in Fig.\ref{3F_gaugino_mass}, is represented by
\begin{eqnarray}
\label{g1_mass}
m_{\tilde{g}_3}&=&\frac{\alpha_3}{4 \pi} \left|
\frac{F^{(d)}}{m^{(d)}_2 m^{(d)}_3} \right|^2 \frac{F^{(d)}}
{\sqrt{m^{(d)}_2 m^{(d)}_3}}
{\cal F}_d, \\
\label{g2_mass}
m_{\tilde{g}_2}&=&\frac{\alpha_2}{4 \pi} \left|
\frac{F^{(l)}}{m^{(l)}_2 m^{(l)}_3} \right|^2 \frac{F^{(l)}}
{\sqrt{m^{(l)}_2 m^{(l)}_3}}
{\cal F}_l, \\
\label{g3_mass}
m_{\tilde{g}_1}&=&\frac{\alpha_1}{4 \pi} \left\{ \frac{2}{5}\left|
\frac{F^{(d)}}{m^{(d)}_2 m^{(d)}_3 } \right|^2 \frac{F^{(d)}}
{\sqrt{m^{(d)}_2 m^{(d)}_3}}
{\cal F}_d
+ \frac{3}{5}\left|
\frac{F^{(l)}}{m^{(l)}_2 m^{(l)}_3} \right|^2 \frac{F^{(l)}}
{\sqrt{m^{(l)}_2 m^{(l)}_3}}
{\cal F}_l \right\}.
\end{eqnarray}
Here, the ${\cal F}_\psi$ for $\psi=d,l$ are defined by
\begin{eqnarray}
{\cal{F}_\psi} \equiv {\cal{F}}(\tan^2 \theta_\psi, \tan^2\phi_\psi),
\end{eqnarray}
where
\begin{eqnarray}
{\cal F}(a , b)&=&\frac{(ab)^{\frac{1}{4}}}{6(1-ab)^4(1+a)^{\frac{3}{2}}
(1+b)^{\frac{3}{2}}}
\left\{ 2(a+b)(-1+8ab-8 a^3 b^3 + a^4 b^4 +12 a^2 b^2 \ln (ab))
\right.
\nonumber \\
&& \left. -1-ab -64 a^2 b^2 +64 a^3 b^3 + a^4 b^4 + a^5 b^5
-36 a^2 b^2 (1+ab) \ln (ab)
\right\}.
\end{eqnarray}
This function ${\cal F}(a , b)$ has the maximal value 0.1 at $a\simeq3$ and
$b\simeq3$. Eqs.(\ref{g1_mass})-(\ref{g3_mass}) imply that
the so-called GUT relation of the gaugino masses does not
hold in general.
\section{Introduction and main results}
\label{sec:Intro}
In this article,
we determine the discrete spectra
of the restriction $\Pi|_{G'}$
of an irreducible unitary representation of $G$
to a subgroup $G'$,
where
\begin{enumerate}
\item[$\bullet$]
$\Pi$ is \lq\lq{attached to}\rq\rq\
a minimal elliptic coadjoint orbit
(Section \ref{sec:elliptic}),
\item[$\bullet$]
$(G,G')=(O(p,q), O(p',q') \times O(p'',q''))$
with $p=p'+p''$ and $q=q'+q''$.
\end{enumerate}
We denote by $\widehat {G'}$ the set of equivalence classes
of irreducible unitary representations of $G'$
({\it{unitary dual}}).
In Theorem \ref{thm:2002}
we prove a {\it{multiplicity-free theorem}} asserting
\[
\dim_{\mathbb{C}} \Hom_{G'}(\pi,\Pi|_{G'}) \le 1
\quad
\text{for all $\pi \in \widehat {G'}$},
\]
and give a complete description of the {\it{discrete spectra}}
for the branching:
\[
\operatorname{Disc}(\Pi|_{G'})
:=
\{ \pi \in \widehat {G'}: \Hom_{G'}(\pi,\Pi|_{G'}) \ne \{0\}
\},
\]
where $\Hom_{G'}(\,,\,)$ denotes the space
of {\it{continuous}} $G'$-homomorphisms.
The irreducible unitary representations $\Pi$
in consideration are of various aspects such as
\begin{enumerate}
\item[$\bullet$]
they are \lq\lq{geometric quantization}\rq\rq\
of indefinite K{\"a}hler manifolds
(Section \ref{subsec:Dolbeault});
\item[$\bullet$]
they are \lq\lq{discrete series representations}\rq\rq\
for pseudo-Riemannian space forms
(Section \ref{subsec:Xpq}),
\cite{xfar, xstri};
\item[$\bullet$]
they are \lq\lq{unitarization}\rq\rq\
of the Zuckerman derived functor modules
that are cohomological induction from a maximal $\theta$-stable
parabolic subalgebra
${\mathfrak {q}}$
(Section \ref{subsec:Aq}),
\cite{xvr, xvz}.
\end{enumerate}
The representations $\Pi$ of $G=O(p,q)$
are parametrized by $\varepsilon \in \{\pm\}$
and $\lambda \in A_{\varepsilon}(p,q)$,
see Definition-Theorem \ref{def:pilmd},
and will be denoted by
$
\pi_{\varepsilon, \lambda}^{p,q}.
$
Our first main result gives a description of the discrete part
({\it{cf}}. Section \ref{subsec:Pidisc})
of the restriction $\Pi|_{G'}$.
Without loss of generality,
we assume $\varepsilon=+$.
\begin{theorem}
\label{thm:2002}
For $\lambda \in A_+(p,q)$,
we set $\Pi=\pi_{+,\lambda}^{p,q}$,
the irreducible unitary representation
of $G=O(p,q)$,
as in Definition-Theorem \ref{def:pilmd}.
Then the discrete part
of the restriction $\Pi|_{G'}$ is a multiplicity-free
direct sum of irreducible unitary representations
of the subgroup $G'=O(p', q') \times O(p'',q'')$
as follows:
\begin{equation}
\label{eqn:2.1.1}
\bigoplus_{(\delta,\varepsilon) \in \{\amp, \app, \apm\}}
\Hsum{(\lambda', \lambda'') \in \Lambda_{\delta \varepsilon}(\lambda)}
\pi_{\delta, \lambda'}^{p',q'} \boxtimes
\pi_{\varepsilon, \lambda''}^{p'',q''}
\quad
\text{{\rm{(Hilbert direct sum)}}.}
\end{equation}
\end{theorem}
Here the parameter set $\Lambda_{\delta,\varepsilon}(\lambda)$ is defined
for $\lambda \in A_+(p,q)$ by
\begin{align*}
\Lambda_\amp(\lambda)
&:=\set{(\lambda', \lambda'') \in A_-(p',q') \times A_+(p'',q'')}{
\lambda'' - \lambda - \lambda' - 1 \in 2 \Bbb N},
\\
\Lambda_\app(\lambda)
&:=\set{(\lambda', \lambda'') \in A_+(p',q') \times A_+(p'',q'')}{
\lambda - \lambda' - \lambda'' - 1 \in 2 \Bbb N},
\\
\Lambda_\apm(\lambda)
&:=\set{(\lambda', \lambda'') \in A_+(p',q') \times A_-(p'',q'')}{
\lambda' - \lambda'' - \lambda - 1 \in 2 \Bbb N}.
\end{align*}
We note that $\Lambda_{++}(\lambda)$ is a finite set,
whereas $\Lambda_{\apm}(\lambda)$ (also $\Lambda_{\amp}(\lambda)$)
is an infinite set
unless it is empty.
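As a concrete illustration of this dichotomy (the numerics are our own example):
take $(p',q')=(p'',q'')=(2,2)$,
so that $(p,q)=(4,4)$
and $A_+(2,2)=A_-(2,2)=A_+(4,4)=\{1,2,3,\dots\}$.
For $\lambda=4$ one finds
\[
\Lambda_\app(4)=\{(1,2),(2,1)\},
\qquad
\Lambda_\apm(4)=\{(\lambda''+5+2m,\lambda''):
\lambda'' \in \{1,2,\dots\},\ m \in {\mathbb{N}}\},
\]
so the former is finite while the latter is infinite.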
Our proof is geometric and constructive.
It is outlined as follows.
First,
we divide the pseudo-Riemannian space form
$G/H =O(p,q)/O(p-1,q)$
into three regions (up to conull set)
according to orbit types
labeled by $\amp$, $\app$, $\apm$
of the subgroup $G'$.
Second,
we introduce $G'$-intertwining operators
({\it{holographic operators}}) from each irreducible summand
of \eqref{eqn:2.1.1}
to the original representation $\pi_{+,\lambda}^{p,q}$
by realizing these representations
in the space of eigenfunctions of the Laplacian
on pseudo-Riemannian space forms
(Theorem \ref{thm:holographic}).
The final step is to prove the exhaustion
of \eqref{eqn:2.1.1},
which is carried out
by a careful estimate
of the boundary behaviours
of solutions
that \lq\lq{holographic operators}\rq\rq\
must satisfy (Section \ref{sec:5}).
Here is an example of Theorem \ref{thm:2002}
when $(p'', q'')=(1,0)$ and $(0,1)$.
\begin{example}
\label{ex:GP}
Suppose $p \ge 2$ and $q\ge 1$.
Let $\Pi := \pi_{+,\lambda}^{p,q} \in \widehat G$
for $\lambda \in A_+(p,q)$.
\begin{enumerate}
\item[{\rm{(1)}}]
{\rm{(\cite{xk:1})}}\enspace
If $(p'', q'')=(0,1)$,
then $\Lambda_{\amp}(\lambda)=\Lambda_{\app}(\lambda)=\emptyset$
and
\[
\Pi|_{G'}
=
\Hsum{n \in {\mathbb{N}}}
\pi_{+,\lambda+n+\frac 1 2}^{p,q-1}
\boxtimes
(\operatorname{sgn})^{n},
\]
where $\operatorname{sgn}$ stands for the nontrivial character
of $O(1) \simeq O(1,0)$.
\item[{\rm{(2)}}]
If $(p'', q'')=(1,0)$,
then $\Lambda_{\amp}(\lambda)=\Lambda_{\apm}(\lambda)=\emptyset$.
Moreover,
$\Hom_{G'}(\pi,\Pi|_{G'}) \ne \{0\}$
if and only if $\pi \in \widehat {G'}$ is of the form
\[
\pi=
\pi_{+,\lambda-n-\frac 1 2}^{p-1,q}
\boxtimes
(\operatorname{sgn})^{n}
\qquad
\text{for some $0 \le n < \lambda-\frac 1 2$}.
\]
\end{enumerate}
\end{example}
In the general case where $p',p'',q',q'' \ge 2$
and $\lambda > 2$,
all the three parameter sets $\Lambda_{\amp}(\lambda)$,
$\Lambda_{\app}(\lambda)$,
and $\Lambda_{\apm}(\lambda)$
are nonempty
(Section \ref{sec:comments}).
As a corollary of Theorem \ref{thm:2002} and its proof,
we find a necessary and sufficient condition
on the quadruple $(p',p'',q',q'')$
for the restriction $\Pi|_{G'}$
to have the following properties:
\begin{enumerate}
\item[$\bullet$]
$\Pi|_{G'}$ is discretely decomposable
(Theorem \ref{thm:discdeco}),
\item[$\bullet$]
the discrete part \eqref{eqn:2.1.1} is at most a finite sum
(Theorem \ref{thm:191419}),
\item[$\bullet$]
$\Pi|_{G'}$ contains only continuous spectrum
(Theorem \ref{thm:conti}).
\end{enumerate}
Our results can be also applied
to the existence problem
of symmetry breaking operators
between {\it{smooth representations}} of $G$ and its subgroup $G'$.
Let $\Pi^{\infty}$ be the Fr{\'e}chet space of smooth vectors
of the unitary representation $\Pi$ of $G$,
and $\pi^{\infty}$ that of a unitary representation
$\pi$ of the subgroup $G'$.
\begin{corollary}
\label{cor:SBO}
Let $\Pi=\pi_{+,\lambda}^{p,q} \in \widehat G$
for $\lambda \in A_+(p,q)$
and $\pi=\pi_{\delta,\lambda'}^{p',q'} \boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''}\in \widehat{G'}$
for some $(\delta, \varepsilon)=(-,+)$, $(+,+)$, or $(+,-)$.
Then we have:
\begin{equation}
\label{eqn:SBO}
\Hom_{G'}(\Pi^{\infty}|_{G'}, \pi^{\infty})\ne\{0\}
\quad
\text{if $(\lambda', \lambda'') \in \Lambda_{\delta,\varepsilon} (\lambda)$}.
\end{equation}
\end{corollary}
The second main theorem in this article
is a quantitative result:
for every $(\lambda', \lambda'') \in \Lambda_{\delta,\varepsilon} (\lambda)$,
we construct explicitly
in a geometric model of representations
a holographic operator
(an injective $G'$-intertwining operator)
\[
T_{\delta\varepsilon,\lambda}^{\lambda',\lambda''}
\colon
\pi_{\delta,\lambda'}^{p',q'} \boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''}
\to
\pi_{+,\lambda}^{p,q},
\]
and find a closed formula
of its operator norm
(Theorem \ref{thm:holographic}).
\vskip 0.8pc
Branching laws in the same setting with specific choices
of $p'$, $p''$, $q'$, $q''$
have been studied over 25 years:
\begin{enumerate}
\item[$\bullet$]
When $(p'',q')=(0,0)$,
Theorem \ref{thm:2002} is nothing but the $K$-type formula,
and can be computed by a generalized Blattner formula
of the Zuckerman derived functor modules
\cite{xvr, xk92},
see also Faraut \cite{xfar}, Howe--Tan \cite{xhowetan}.
\item[$\bullet$]
When $p''=0$,
the restriction $\Pi|_{G'}$ is discretely decomposable
(Theorem \ref{thm:discdeco}).
In this case,
Theorem \ref{thm:2002} gives the whole branching law
of the restriction $\Pi|_{G'}$,
which was determined in \cite[Thm.~3.3]{xk:1}.
The special case $(p,q)=(3,3)$ with $(p'', q'')=(0,1)$ was also studied in \cite{xspeh}.
\item[$\bullet$]
When $(q',q'')=(1,0)$ (hence $q=1$),
the branching law of $\Pi|_{G'}$ was obtained in \cite{xmo}.
In this case,
$\Pi|_{G'}$ contains also continuous spectrum.
\item[$\bullet$]
In the case $p''=q=1$,
an analogous result to \eqref{eqn:SBO} was studied
in \cite[Thms.~4.1 and 4.2]{xksbonvec}
when $\Pi^{\infty}$ and $\pi^{\infty}$ are cohomologically induced
representations from more general parabolic subalgebras.
\item[$\bullet$]
If $(p'',q'')=(1,0)$ or $(0,1)$,
then $\Hom_{G'}(\Pi^{\infty}|_{G'}, \pi^{\infty})$ is at most
one-dimensional by the general result of Sun and Zhu \cite{xsunzhu}.
In this case,
the discrete spectra \eqref{eqn:2.1.1} are stated
in Example \ref{ex:GP},
and some part of them
have been obtained recently in {\O}rsted and Speh \cite{xso}
by a different approach
under the constraint
that $b(\lambda) \ge 0$
(see \eqref{eqn:b} for notation).
\end{enumerate}
For general $p'$, $q'$, $p''$, $q''$,
the complete classification of discrete spectra
(Theorem \ref{thm:2002}),
and the construction of all holographic operators
with a Parseval-type theorem
(Theorems \ref{thm:holographic} and \ref{thm:4.2})
were presented at the conference
\lq\lq{Analyse harmonique sur les groupes de Lie
et les espaces sym\'etriques}\rq\rq\
en l'honneur de Jacques Faraut held in Nancy-Strasbourg
in June, 2005,
however, the manuscript \cite{xkmin}
has not been published.
Because of growing interest in branching problems
for reductive groups in recent years,
I have come to think
that the results and the methods here might be of some help
for further perspectives
such as a possible generalization
of the Gross--Prasad conjecture
for nontempered representations
({\it{e.g.}} \cite{xgp, xksbonvec, xso})
as well as analytic representation theory.
\vskip 1pc
{\bf{$\langle$Acknowledgements$\rangle$}}\enspace
The author was partially supported
by Grant-in-Aid for Scientific Research (A) (18H03669),
Japan Society for the Promotion of Science.
\vskip 1pc
{\bf{Notation:}}\enspace
${\mathbb{N}}=\{0,1,2,\dots\}$ and
${\mathbb{N}}_+=\{1,2,\dots\}$.
\section{Irreducible unitary representations
attached to minimal elliptic orbits}
\label{sec:elliptic}
In this section,
we discuss a certain family of irreducible unitary representations
of $G=O(p,q)$,
denoted by $\pi_{\varepsilon,\lambda}^{p,q}$
with parameter $\varepsilon=\pm$
and $\lambda \in A_{\varepsilon}(p,q)$
defined as below:
\begin{align}
\label{eqn:A+}
A_+(p,q):=&
\begin{cases}
\{\lambda \in {\mathbb{Z}}+\frac{p+q}2: \lambda >0\}
&(p \ge 2,q\ge 1),
\\
\{\lambda \in {\mathbb{Z}}+\frac{p}2: \lambda \ge \frac p 2-1\}
&(p\ge 2,q=0),
\\
\emptyset
&(p=1,q\ge 1)\,\, \text{ or }\,\, (p=0),
\\
\{-\frac 1 2, \frac 1 2\}
&(p=1,q=0).
\end{cases}
\\
\label{eqn:A-}
A_-(p,q):=&A_+(q,p).
\end{align}
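For orientation (the following instances are our own direct evaluation of \eqref{eqn:A+} and \eqref{eqn:A-}):
$A_+(4,4)=\{1,2,3,\dots\}$,
$A_+(3,2)=\{\tfrac 1 2, \tfrac 3 2, \tfrac 5 2,\dots\}$,
and $A_-(2,3)=A_+(3,2)$.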
The representations $\pi_{\varepsilon,\lambda}^{p,q}$ are a generalization
of the finite-dimensional representations
of the compact group $O(p)$
on the space ${\mathcal{H}}^m({\mathbb{R}}^p)$
of spherical harmonics
(see Remark \ref{rem:2.2} (1)).
These unitary representations $\pi_{\varepsilon,\lambda}^{p,q}$
have been treated from various aspects
in scattered literatures
(\cite{xfar, xhowetan, xk92, xk:1, opq2, xspeh, xso, xstri}).
For the convenience of the reader,
we summarize a number of realizations
of the representations $\pi_{\varepsilon,\lambda}^{p,q}$
when $\varepsilon=+$
in Section \ref{subsec:pilmd}.
Throughout this section,
we adopt the same notation as in \cite{opq2}.
\subsection{Summary: four realizations of $\pi_{\varepsilon,\lambda}^{p,q}$}
\label{subsec:pilmd}
We use the German lower case letter ${\mathfrak{g}}$,
${\mathfrak{k}}$, $\cdots$,
to denote the Lie algebras
of $G$, $K$, $\cdots$,
and write ${\mathfrak{Z}}({\mathfrak{g}})$
for the center of the enveloping algebra
of the complexified Lie algebra
${\mathfrak{g}}_{\mathbb{C}}
={\mathfrak{g}} \otimes_{\mathbb{R}} {\mathbb{C}}$.
For ${\mathfrak{g}}={\mathfrak{o}}(p,q)$,
we set
\begin{equation}
\label{eqn:rho}
\rho:=\frac 1 2 (p+q-2).
\end{equation}
For $\lambda \in A_+(p,q)$,
we put
\begin{align}
b\equiv\, & b_+(\lambda,p,q):=\lambda-\frac p 2 + \frac q 2 + 1 \in {\mathbb{Z}},
\label{eqn:b}
\\
\label{eqn:e}
\delta \equiv\, & \delta_+ (\lambda,p,q)
:=(-1)^b.
\end{align}
\begin{definition-theorem}
\label{def:pilmd}
Let $p \ge 2$ and $q \ge 0$.
For any $\lambda \in A_+(p,q)$,
there exists a unique irreducible unitary representation
of $G=O(p,q)$,
to be denoted by $\pi_{+, \lambda}^{p,q}$,
whose underlying \gk-module is given by
one of (therefore, any of) the following \gk-modules
that are isomorphic to each other:
\begin{enumerate}
\item[{\rm{(i)}}]
The Zuckerman derived functor module
$A_{\mathfrak{q}}(\lambda-\rho)$
(see Section \ref{subsec:Aq});
\item[{\rm{(ii)}}]
(geometric quantization of coadjoint orbits)\enspace
the underlying \gk-module of the Dolbeault cohomology
$H_{\overline \partial}^{p-2}({\mathcal{O}}_{\lambda}, {\mathcal{L}}_{\lambda+\rho})$
(see Section \ref{subsec:Dolbeault});
\item[{\rm{(iii)}}]
the underlying \gk-module of the subrepresentation
of the parabolic induction $I_{\delta}(\lambda+\rho)$
with $K$-types $\Xi(K;b)$
(see Section \ref{subsec:ps});
\item[{\rm{(iii)$'$}}]
the underlying \gk-module of the quotient
of the parabolic induction $I_{\delta}(-\lambda+\rho)$
with $K$-types $\Xi(K;b)$;
\item[{\rm{(iv)}}]
the underlying \gk-module of the discrete series representation
$L^2(X(p,q))_{\lambda}$
(see Section \ref{subsec:Xpq})
for the symmetric space
$X(p,q)=O(p,q)/O(p-1,q)$.
\end{enumerate}
The ${\mathfrak{Z}}({\mathfrak{g}})$-infinitesimal character
of $\pi_{+, \lambda}^{p,q}$ is given by
\begin{equation}
\label{eqn:Zginf}
(\lambda,\frac{p+q}2-2,\frac{p+q}2-3,\cdots, \frac{p+q}2-[\frac{p+q}2])
\end{equation}
in the Harish-Chandra parametrization
for the standard basis,
and the minimal $K$-type of $\pi_{+, \lambda}^{p,q}$
is given by
\[
\begin{cases}
{\mathcal{H}}^b({\mathbb{R}}^p) \boxtimes {\bf{1}}
\quad
&\text{if $b \ge 0$},
\\
{\bf{1}} \boxtimes {\bf{1}}
\quad
&\text{if $b \le 0$}.
\end{cases}
\]
\end{definition-theorem}
The proof of the equivalence is given in \cite[Thm.~3]{xk92}
and \cite[Sect.~5.4]{opq2},
see also references therein.
Since these rich aspects
of the representations $\pi_{\varepsilon, \lambda}^{p,q}$
are the heart of our main results
in both the proof and perspectives,
we give a brief account
on each of these aspects
in Sections \ref{subsec:Aq}--\ref{subsec:Xpq} below.
\begin{remark}
\label{rem:2.2}
\begin{enumerate}
\item[{\rm{(1)}}]
When $q=0$,
$\pi_{+, \lambda}^{p,0}$ is an irreducible finite-dimensional representation
of the compact group $O(p,0) \simeq O(p)$
on the space ${\mathcal{H}}^m({\mathbb{R}}^p)$
of spherical harmonics of degree $m=\lambda-\frac p 2+1$.
\item[{\rm{(2)}}]
The conditions {\rm{(iii)}} and {\rm{(iii)$'$}}
in Definition-Theorem \ref{def:pilmd}
make sense for $q >0$;
the other conditions for $q \ge 0$.
\end{enumerate}
\end{remark}
For $(p,q)=(1,0)$,
$O(p,q) \simeq O(1)$.
It is convenient to set
\[
\text{
$A_+(p,q)=\{\tfrac 1 2, -\tfrac 1 2\}\,\,$
and $\,\,\pi_{+, \lambda}^{1,0}
:=
\begin{cases}
{\bf{1}}\quad&\text{if $\lambda=-\tfrac 1 2$, }
\\
{\operatorname{sgn}}\quad&\text{if $\lambda=\tfrac 1 2$.}
\end{cases}$
}
\]
Via the isomorphism of Lie groups $O(p,q) \simeq O(q,p)$,
we define an irreducible unitary representation
$\pi_{-, \lambda}^{p,q}$
for $\lambda \in A_-(p,q)$
to be the one $\pi_{+, \lambda}^{q,p}$ of $O(q,p)$,
where we recall from \eqref{eqn:A-}
that $A_-(p,q)=A_+(q,p)$.
By the $K$-type formula (see the condition (iii)
in Definition-Theorem \ref{def:pilmd})
and by the formula \eqref{eqn:Zginf}
of the ${\mathfrak{Z}}({\mathfrak{g}})$-infinitesimal character,
the following proposition holds.
\begin{proposition}
\label{prop:pilmd}
Irreducible unitary representations of $G=O(p,q)$
in the following set are not isomorphic to each other:
\[
\{\pi_{+, \lambda}^{p,q}
: \lambda \in A_+(p,q)
\}
\cup
\{\pi_{-, \lambda}^{p,q}
: \lambda \in A_-(p,q)
\}.
\]
\end{proposition}
\subsection{Zuckerman derived functor modules $A_{\mathfrak {q}}(\lambda)$}
\label{subsec:Aq}
Let $G=O(p,q)$,
and $\theta$ the Cartan involution
corresponding to a maximal compact subgroup $K=O(p) \times O(q)$.
We take a Cartan subalgebra ${\mathfrak{t}}$ of ${\mathfrak{k}}$,
and extend it to that of ${\mathfrak{g}}$,
to be denoted by ${\mathfrak{j}}$.
Take the standard basis
$\{f_i: 1 \le i \le [\frac{p+q}2]\}$
of ${\mathfrak{j}}_{\mathbb{C}}^{\ast}$
such that the root system
$\Delta({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{j}}_{\mathbb{C}})$
is given by
\[
\{\pm f_i \pm f_j
: 1 \le i < j \le [\frac{p+q}{2}]\}
\,\,
(\cup \{\pm f_i: 1 \le i \le [\frac{p+q}{2}]\}
\,\,\,
(\text{$p+q$: odd})).
\]
Let ${\mathfrak{q}}={\mathfrak{l}}_{\mathbb{C}} + {\mathfrak{u}}$
be a $\theta$-stable parabolic subalgebra
of ${\mathfrak{g}}_{\mathbb{C}}$
with Levi part ${\mathfrak{l}}_{\mathbb{C}}$ containing ${\mathfrak{j}}_{\mathbb{C}}$
and nilpotent radical ${\mathfrak{u}}$ defined by
\[
\Delta({\mathfrak{u}}, {\mathfrak{j}}_{\mathbb{C}})
=\{ f_1 \pm f_j
: 2 \le j \le [\frac{p+q}{2}]\}
\,\,
(\cup \{f_1\}
\,\,\,
(\text{$p+q$: odd})).
\]
Then the normalizer $L$ of ${\mathfrak{u}}$ in $G$ is given by
\begin{equation}
\label{eqn:Levi}
L \simeq SO(2) \times O(p-2,q).
\end{equation}
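To make the combinatorics concrete, here is a small worked example
(an illustration, not needed in the sequel).
For $(p,q)=(3,2)$ we have $[\frac{p+q}{2}]=2$ and $p+q$ odd, so
\[
\Delta({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{j}}_{\mathbb{C}})
=\{\pm f_1 \pm f_2\} \cup \{\pm f_1, \pm f_2\}
\quad
\text{(type $B_2$)},
\qquad
\Delta({\mathfrak{u}}, {\mathfrak{j}}_{\mathbb{C}})
=\{f_1+f_2,\, f_1-f_2,\, f_1\},
\]
and \eqref{eqn:Levi} reads $L \simeq SO(2) \times O(1,2)$.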
For $\nu \in {\mathbb{Z}}$,
we write ${\mathbb{C}}_{\nu f_1}$
for the one-dimensional representation of the Levi subgroup $L$
on which the first factor $SO(2)$ acts by the character $\nu f_1$
and the second factor acts trivially.
The same letter ${\mathbb{C}}_{\nu f_1}$ is used
to denote a character of the Lie algebra ${\mathfrak{l}}$
for $\nu \in {\mathbb{C}}$.
Zuckerman introduced cohomological parabolic induction
${\mathcal{R}}_{\mathfrak{q}}^j$
($j \in {\mathbb{N}}$)
which is a covariant functor from the category
of $({\mathfrak{l}}, L \cap K)$-modules
(or that of metaplectic $({\mathfrak{l}}, L \cap K)\tilde{}$-modules)
to that of \gk-modules.
We note that ${\mathbb{C}}_{\lambda f_1}$ lifts
to the metaplectic $({\mathfrak{l}}, L \cap K)\tilde{}$-module
if and only if ${\mathbb{C}}_{(\lambda+\rho) f_1}$ lifts to $L$,
namely,
$\lambda \in {\mathbb{Z}}+\frac 1 2(p+q)$.
In particular,
for $\lambda \in A_+(p,q)$ ($\subset {\mathbb{Z}}+\frac 1 2 (p+q)$),
we obtain \gk-modules ${\mathcal{R}}_{\mathfrak{q}}^j({\mathbb{C}}_{\lambda f_1})$ for $j \in {\mathbb{N}}$,
which vanish except for $j =p-2$,
and the resulting \gk-module is
\[
{\mathcal{R}}_{\mathfrak{q}}^{p-2}({\mathbb{C}}_{\lambda f_1})
\simeq
A_{\mathfrak{q}}(\lambda-\rho).
\]
Here we have adopted the convention and normalization
in \cite[Def.~6.20]{xvr} for ${\mathcal{R}}_{\mathfrak{q}}^j$
and in \cite{xvz} for $A_{\mathfrak{q}}(\cdot)$.
This normalization means
that $A_{\mathfrak{q}}(\nu)$ has nonzero \gk-cohomologies
when $\nu=0$,
whereas ${\mathcal{R}}_{\mathfrak{q}}^j$ preserves
the ${\mathfrak{Z}}({\mathfrak{l}})$- and ${\mathfrak{Z}}({\mathfrak{g}})$-infinitesimal characters
in the Harish-Chandra parametrization modulo
the Weyl groups $W_L$ and $W_G$.
The general theory of the Zuckerman cohomological parabolic induction
(see \cite{xvr} for instance)
assures
that the \gk-module
${\mathcal{R}}_{\mathfrak{q}}^{p-2}({\mathbb{C}}_{\lambda f_1})$
is nonzero and irreducible
if $\lambda$ is in the \lq\lq{good range}\rq\rq\
({\it{i.e.}} if $\lambda > \frac 1 2 (p+q)-2$),
whereas the same condition may fail
if the parameter $\lambda$ wanders outside the \lq\lq{good range}\rq\rq.
Although our parameter set $A_+(p,q)$ contains
finitely many $\lambda$
that are outside the good range,
the \gk-module
${\mathcal{R}}_{\mathfrak{q}}^{p-2}({\mathbb{C}}_{\lambda f_1})$
is nonzero and irreducible
for all $\lambda \in A_+(p,q)$,
see \cite[Thm.~3]{xk92} applied to $r=1$
with the notation therein.
\subsection{Geometric quantization of elliptic orbits}
\label{subsec:Dolbeault}
Any coadjoint orbit of a Lie group carries
a natural symplectic structure.
We shall see
that the irreducible unitary representation $\pi_{+, \lambda}^{p,q}$ of $G$
may be regarded as a \lq\lq{geometric quantization}\rq\rq\
of the minimal elliptic coadjoint orbit
\[
{\mathcal{O}}_{\nu}\equiv {\mathcal{O}}_{+, \nu} := \Ad^{\ast}(G)(\nu f_1)
\,\,
(\subset \sqrt{-1} {\mathfrak{g}}^{\ast}),
\]
where $\lambda = \nu-\rho$
if we adopt the normalization
of the parameter for \lq\lq{quantization}\rq\rq\
as in \cite{xrons}, see below.
As a homogeneous space,
${\mathcal{O}}_{\nu}$ ($\nu \ne 0$) is identified with $G/L$,
where $L$ is the subgroup defined in \eqref{eqn:Levi}.
Since the same homogeneous space $G/L$ arises as an open $G$-orbit
of the complex flag variety $G_{\mathbb{C}}/Q$
where $Q$ is the complex parabolic subgroup
with Lie algebra ${\mathfrak{q}}$
(Section \ref{subsec:Aq})
of the complexified Lie group $G_{\mathbb{C}}$,
it carries a $G$-invariant complex structure.
Moreover,
it admits a $G$-invariant indefinite K{\"a}hler metric
such that its imaginary part yields the Kostant--Kirillov--Souriau symplectic form.
For $\nu \in {\mathbb{Z}}$,
we form a homogeneous line bundle
${\mathcal{L}}_\nu:=G \times_L {\mathbb{C}}_{\nu f_1}$ over $G/L$.
For instance,
the canonical bundle of $G/L$ is expressed as
${\mathcal{L}}_{2\rho} = {\mathcal{L}}_{p+q-2}$.
For $\lambda \in {\mathbb{Z}} + \rho$ with $\lambda \ne 0$,
we take the Dolbeault cohomologies
for the $G$-equivariant holomorphic line bundle
\[
{\mathcal{L}}_{\lambda+\rho} \to {\mathcal{O}}_{\lambda} \simeq G/L,
\]
which carry a natural Fr{\'e}chet topology
by the closed range theorem
of the $\overline \partial$-operator
due to Schmid and Wong
\cite{xwong},
and the Fr{\'e}chet $G$-module
\[
H_{\overline \partial}^{j}(G/L, {\mathcal{L}}_{\lambda+\rho})
\]
is a maximal globalization of the \gk-module
${\mathcal{R}}_{\mathfrak{q}}^{j}({\mathbb{C}}_{\lambda f_1})$.
This shows that the \gk-modules
in (i) and (ii) of Definition-Theorem \ref{def:pilmd}
are isomorphic to each other.
If $\lambda \in A_+(p,q)$,
then the Dolbeault cohomology for $j=p-2$ contains a Hilbert space
on which $G$ acts as the unitary representation $\pi_{+,\lambda}^{p,q}$.
For $q \ge 2$,
we can consider a similar family
of minimal elliptic coadjoint orbits
${\mathcal{O}}_{-,\lambda} \simeq G/L_-$
with $L_-:=O(p,q-2) \times SO(2)$
by switching the role of $p$ and $q$,
and we obtain irreducible unitary representations
$\pi_{-,\lambda}^{p,q}$ for $\lambda \in A_-(p,q)$
($=A_+(q,p)$).
The irreducible unitary representations $\pi_{\varepsilon,\lambda}^{p,q}$
of $G$
may be interpreted
as geometric quantization
of the coadjoint orbits ${\mathcal{O}}_{\varepsilon,\lambda}$,
and the Gelfand--Kirillov dimension is given by
\[
\operatorname{DIM} \pi_{\varepsilon, \lambda}^{p,q}
=
\frac 1 2 \dim {\mathcal{O}}_{\varepsilon, \lambda}
=
p+q-2
\quad
\text{for $\varepsilon=\pm$}.
\]
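The middle equality can be checked by a direct dimension count,
which we include for the reader's convenience:
since ${\mathcal{O}}_{\varepsilon,\lambda} \simeq G/L$
with $L \simeq SO(2) \times O(p-2,q)$
(up to switching the roles of $p$ and $q$),
\[
\dim {\mathcal{O}}_{\varepsilon, \lambda}
=\binom{p+q}{2}-\binom{p+q-2}{2}-1
=2(p+q)-4,
\]
whence $\frac 1 2 \dim {\mathcal{O}}_{\varepsilon, \lambda}=p+q-2$.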
\subsection{Degenerate principal series representations}
\label{subsec:ps}
The indefinite orthogonal group $G=O(p,q)$ has
a maximal (real) parabolic subgroup
$P=M A N$,
unique up to conjugation,
with Levi factor
\[
M A \simeq GL (1,{\mathbb{R}}) \times O(p-1,q-1).
\]
Any one-dimensional representation of the first factor $GL (1,{\mathbb{R}})$
is parametrized by
$(\varepsilon,\nu) \in \{\pm \} \times {\mathbb{C}}$,
which extends to a character $\chi_{\varepsilon,\nu}$ of $M A$
by letting the second factor act trivially.
We denote by $I_{\varepsilon}(\nu)$
the $G$-module
obtained as unnormalized parabolic induction
$\operatorname{Ind}_P^G(\chi_{\varepsilon,\nu})$.
Our parameter $\nu$ is chosen in a way
that the trivial one-dimensional representation ${\bf{1}}$ of $G$
occurs as the subrepresentation of $I_+(0)$,
and as the quotient of $I_+(2\rho)=I_+(p+q-2)$.
Geometrically,
the real flag variety $G/P$ has a $G$-equivariant double covering
\begin{equation}
\label{eqn:SSGP}
S^{p-1} \times S^{q-1} \simeq G/ P_+ \to G/P
\end{equation}
where $P_+ = (G L (1,{\mathbb{R}})_+ \times O(p-1,q-1))N$ is a normal subgroup
of $P$ of index two,
and the group $G$ acts conformally on $S^{p-1} \times S^{q-1}$
endowed with the pseudo-Riemannian metric
$g_{S^{p-1}} \oplus (-g_{S^{q-1}})$.
We recall that ${\mathcal{H}}^m({\mathbb{R}}^p)$ denotes
the space of spherical harmonics
of degree $m$.
For $p=1$,
we consider only $m=0$ and $1$.
The orthogonal group $O(p)$ acts irreducibly on ${\mathcal{H}}^m({\mathbb{R}}^p)$,
and we shall use the same letter
to denote the resulting representation.
For $b \in {\mathbb{Z}}$,
we define the following infinite-dimensional $K$-module:
\begin{equation}
\label{eqn:Xib}
\Xi(K,b):=
\bigoplus_{\substack{m,n \in {\mathbb{N}} \\[1pt] m-n \in 2{\mathbb{N}}+b}}
{\mathcal{H}}^m({\mathbb{R}}^p) \boxtimes {\mathcal{H}}^n({\mathbb{R}}^q)
\quad
\text{(algebraic direct sum).}
\end{equation}
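For instance, if $b \ge 0$,
then the summand of \eqref{eqn:Xib} of lowest total degree is
\[
{\mathcal{H}}^b({\mathbb{R}}^p) \boxtimes {\mathcal{H}}^0({\mathbb{R}}^q)
={\mathcal{H}}^b({\mathbb{R}}^p) \boxtimes {\bf{1}}
\qquad
(\text{take } (m,n)=(b,0)),
\]
which is consistent with the minimal $K$-type of $\pi_{+,\lambda}^{p,q}$
given in Definition-Theorem \ref{def:pilmd}.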
We recall from Howe--Tan \cite{xhowetan}:
\begin{proposition}
Suppose $\lambda \in A_+(p,q)$.
Let $b$ and $\varepsilon$ be as in \eqref{eqn:b} and \eqref{eqn:e}.
\begin{enumerate}
\item[{\rm{(1)}}]
There is a unique irreducible submodule of $I_{\varepsilon}(\lambda+\rho)$
with $K$-types $\Xi(K,b)$.
\item[{\rm{(2)}}]
There is a unique irreducible quotient of $I_{\varepsilon}(-\lambda+\rho)$
with $K$-types $\Xi(K,b)$.
\item[{\rm{(3)}}]
These two modules are isomorphic to each other.
\end{enumerate}
\end{proposition}
\subsection{Discrete series for semisimple symmetric spaces}
\label{subsec:Xpq}
We equip ${\mathbb{R}}^{p+q}$ with the standard pseudo-Riemannian structure
\[
g_{{\mathbb{R}}^{p,q}}
:=
dx_1^2 + \cdots + dx_p^2- dy_1^2-\cdots -dy_q^2.
\]
Then $g_{{\mathbb{R}}^{p,q}}$ is nondegenerate
on the following hypersurface
\[
X(p,q) \equiv X(p,q)_+
:= \{(x,y) \in {\mathbb{R}}^{p+q}:
|x|^2-|y|^2=1\},
\]
yielding a pseudo-Riemannian structure $g_{X(p,q)}$
of signature $(p-1,q)$
with constant sectional curvature $+1$,
sometimes referred to as a {\it{pseudo-Riemannian space form}}
of positive curvature.
We also set
\[
X(p,q)_-
:= \{(x,y) \in {\mathbb{R}}^{p+q}:
|x|^2-|y|^2=-1\}.
\]
Then $X(p,q)_-$ has a pseudo-Riemannian structure of signature $(p,q-1)$.
There is a natural isomorphism
(reversing the signature of the pseudo-Riemannian metric):
\[X(p,q)_- \simeq X(q,p)_+. \]
Thus $X(p,q)$ is a sphere $S^{p-1}$ if $q=0$,
a hyperbolic space if $p=1$,
a de Sitter manifold if $p=2$,
and an anti-de Sitter manifold if $q=1$.
We note $X(0,q) = \emptyset$.
The group $G=O(p,q)$ acts isometrically
and transitively on $X(p,q)_{\pm}$,
and we have $G$-diffeomorphisms:
\[
X(p,q)_+ \simeq O(p,q)/O(p-1,q),
\quad
X(p,q)_- \simeq O(p,q)/O(p,q-1).
\]
The pseudo-Riemannian metric $g_{X(p,q)}$ induces a Radon measure
and the Laplace--Beltrami operator $\Delta \equiv \Delta_{X(p,q)}$
on $X(p,q)$.
For $\lambda \in {\mathbb{C}}$,
we consider a differential equation on $X(p,q)$:
\begin{equation}
\label{eqn:Laplmd}
\Delta_{X(p,q)} f =(-\lambda^2+\rho^2)f
\end{equation}
where $\rho=\frac 1 2(p+q-2)$,
and set
\begin{align*}
C^{\infty}(X(p,q))_{\lambda}
:=&
\{f \in C^{\infty}(X(p,q))
:
\text{$f$ satisfies \eqref{eqn:Laplmd} in the usual sense}\},
\\
L^2(X(p,q))_{\lambda}
:=&
\{f \in L^2(X(p,q))
:
\text{$f$ satisfies \eqref{eqn:Laplmd} in the distribution sense}\}.
\end{align*}
\begin{proposition}
[Faraut \cite{xfar}, Strichartz \cite{xstri}]
\label{prop:discX}
$L^2(X(p,q))_{\lambda} \ne \{0\}$
if and only if $\lambda \in A_+(p,q)$.
\end{proposition}
The group $G=O(p,q)$ acts on $L^2(X(p,q))_{\lambda}$
as an irreducible unitary representation.
Moreover,
if $f \in L^2(X(p,q))_{\lambda}$ is $K$-finite,
then there is an analytic function $a \in C^{\infty}(S^{p-1} \times S^{q-1})$
such that
\begin{equation}
\label{eqn:L2asym}
f(\omega \cosh s, \eta \sinh s)
=
a (\omega, \eta) e^{-(\lambda+\rho)s}
\,
(1+s e^{-2s}O(1))
\quad
\text{as $s \to \infty$.}
\end{equation}
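The asymptotics \eqref{eqn:L2asym} make the condition in Proposition \ref{prop:discX}
plausible, at least heuristically:
in the coordinates $(\omega,\eta,s) \mapsto (\omega \cosh s, \eta \sinh s)$
the Radon measure on $X(p,q)$ has density
(a positive multiple of) $(\cosh s)^{p-1} (\sinh s)^{q-1}$,
which grows like $e^{2\rho s}$ as $s \to \infty$,
so \eqref{eqn:L2asym} gives
\[
|f(\omega \cosh s, \eta \sinh s)|^2 \,
(\cosh s)^{p-1} (\sinh s)^{q-1}
= O(e^{-2 \lambda s})
\quad
\text{as $s \to \infty$},
\]
which is integrable at infinity precisely when $\lambda>0$.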
\section{General scheme}
Our approach to the branching laws (Theorem \ref{thm:2002})
is to use analysis on $G'$-orbits
in the reductive symmetric space $G/H$,
as developed in \cite{xkInvent94, xkdisc} among others.
In our setting,
$G/H\simeq X(p,q)$ admits principal orbits of the subgroup $G'$
(see \cite[Sect.~8.2]{xkdisc}),
hence all the discrete spectrum
in the branching law $\Pi|_{G'}$ can be captured
through the analysis
on principal $G'$-orbits,
as formulated in Proposition \ref{prop:3.1.1} below.
\subsection{Principal $G'$-orbits in $X(p,q)$}
\label{subsec:orbits}
We introduce a $G'$-invariant function in the ambient space
$\Bbb R^{p+q} =\Bbb R^{p'+p'' + q'+q''}$ by
\begin{equation}
\label{eqn:level}
\mu \colon \Bbb R^{p'+p'' + q'+q''} \to \Bbb R,
\ (u', u'', v', v'') \mapsto |u'|^2-|v'|^2.
\end{equation}
If $(u', u'', v', v'') \in X(p,q)$,
then
$$
\mu(u', u'', v', v'') = |u'|^2-|v'|^2 = -|u''|^2 + |v''|^2 +1.
$$
We define three $G'$-invariant open sets
$X(p,q)_{\delta\varepsilon}$ of $X(p,q)$ by
\begin{align*}
X(p,q)_\amp &:=
X(p,q) \cap \mu^{-1}(\set{s\in \Bbb R}{s<0}),
\\
X(p,q)_\app &:=
X(p,q) \cap \mu^{-1}(\set{s\in \Bbb R}{0<s<1}),
\\
X(p,q)_\apm &:=
X(p,q) \cap \mu^{-1}(\set{s\in \Bbb R}{1<s}).
\end{align*}
Then the disjoint union
\begin{equation}
\label{eqn:3union}
X(p,q)_\amp \amalg X(p,q)_\app \amalg X(p,q)_\apm
\end{equation}
is conull in $X(p,q)$.
Accordingly,
we have a direct sum decomposition of the Hilbert space:
\begin{equation}
\label{eqn:2.2.1}
L^2(X(p,q)) = L^2(X(p,q)_\amp)
\oplus L^2(X(p,q)_\app) \oplus L^2(X(p,q)_\apm),
\end{equation}
which is stable under the action of $G'$.
We shall see in \eqref{eqn:mapmp}--\eqref{eqn:mappm}
that the isomorphism classes
of the isotropy subgroups
of the subgroup $G'$ at points
in $X(p,q)_{\delta\varepsilon}$ are determined uniquely
by $(\delta, \varepsilon)$.
\subsection{A priori estimate of $\operatorname{Disc}(\Pi|_{G'})$}
By using the general theory \cite{xkdisc},
we explain how the three families
of irreducible representations of $G'$ occurring
in the branching law $\Pi|_{G'}$
(Theorem \ref{thm:2002})
arise from the decomposition \eqref{eqn:3union}.
\begin{proposition}
\label{prop:3.1.1}
For $\lambda \in A_+(p,q)$,
we set $\Pi:=\pip{p}{q}{\lambda} \in \widehat G$
as in Definition-Theorem \ref{def:pilmd}.
If
$\pi \in \widehat{G'}$ satisfies
$\Hom_{G'}(\pi, \Pi|_{G'}) \neq \{0\}$,
then there exist unique
$(\delta', \delta'') \in \{\amp, \app, \apm \}$
and
$(\lambda', \lambda'') \in A_{\delta'}(p', q')\times A_{\delta''}(p'', q'')$
such that
\begin{equation}
\label{eqn:pikappa}
\pi \simeq
\pia{\delta'}{p'}{q'}{\lambda'} \boxtimes
\pia{\delta''}{p''}{q''}{\lambda''}.
\end{equation}
Moreover the following parity condition holds:
\begin{equation}
\label{eqn:parity}
\delta' \lambda' + \delta'' \lambda''- \lambda
\in
2 {\mathbb{Z}}+1.
\end{equation}
\end{proposition}
\begin{proof}
The existence follows from the general results
proved in \cite[Thm.~8.6]{xkdisc}.
The uniqueness is clear
because these irreducible $G'$-modules are mutually inequivalent.
To show the parity condition \eqref{eqn:parity},
we observe
that the central element $-I_{p,q}$ of $G$
acts on $\pi_{\varepsilon,\lambda}^{p,q}$ as a scalar
$
(-1)^{\lambda-\frac{p-q}{2}\varepsilon+1},
$
as one sees from the equivalent condition (iii)
in Definition-Theorem \ref{def:pilmd}.
Since $(-I_{p',q'}) \times (-I_{p'',q''}) \in G'$ is identified
with $-I_{p,q} \in G$,
it follows from the assumption $\Hom_{G'}(\pi, \Pi|_{G'})\ne \{0\}$ that
\[
(-1)^{\lambda'-\frac{p'-q'}{2}\delta'+1}
(-1)^{\lambda''-\frac{p''-q''}{2}\delta''+1}
=
(-1)^{\lambda-\frac{p-q}{2}+1}.
\]
Then one obtains \eqref{eqn:parity}
in view of
$
\lambda' \in {\mathbb{Z}}+\frac{p'+q'}{2},
\,\,
\lambda'' \in {\mathbb{Z}}+\frac{p''+q''}{2}
\,\,
\text{ and }
\,\,
\lambda \in {\mathbb{Z}}+\frac{p+q}{2}.
$
\end{proof}
The above proof gives useful geometric information
on functions that belong to irreducible components
of the branching law:
\begin{proposition}
\label{prop:3.1.2}
In the setting of Proposition \ref{prop:3.1.1},
suppose $\pi \in \widehat{G'}$ satisfies
$\Hom_{G'}(\pi, L^2(X(p,q))_\lambda) \ne \{0\}$.
We set $\kappa:=(\delta',\delta'') \in \{\amp, \app, \apm\}$
according to \eqref{eqn:pikappa}
in Proposition \ref{prop:3.1.1}.
Then we have
$$
\operatorname{Supp} F \subset \overline{X(p,q)_{\kappa}}
$$
for any function $F$
in the image of $\Hom_{G'}(\pi, L^2(X(p,q))_\lambda)$.
\end{proposition}
\section{Construction of holographic operators}
In this section we construct explicit intertwining operators
({\it{holographic operators}}) from
irreducible $G'$-modules
to irreducible $G$-modules:
\[
T_{\delta\varepsilon,\lambda}^{\lambda',\lambda''}
\colon
\pi_{\delta, \lambda'}^{p',q'} \boxtimes \pi_{\varepsilon, \lambda''}^{p'',q''}
\to
\pi_{+, \lambda}^{p,q}|_{G'},
\]
by using a geometric realization
of these representations in the $L^2$-spaces
of pseudo-Riemannian space forms
$X(p',q')_{\delta}$, $X(p'',q'')_{\varepsilon}$ and $X(p,q)$,
as described in Section \ref{subsec:Xpq}.
Moreover,
we find a closed formula for the operator norm
of $T_{\delta\varepsilon,\lambda}^{\lambda',\lambda''}$.
The main results of this section are stated in Theorem \ref{thm:holographic}.
\subsection{Preliminaries}
To state the quantitative results (Theorem \ref{thm:holographic}),
we set
\begin{align*}
\V{\apm}{\lambda'}{\lambda''}{\lambda}
&:=
\frac{
(\Gamma(\lambda''+1))^2
\
\Gamma(\frac{\lambda' - \lambda''+ \lambda+1}{2})
\
\Gamma(\frac{\lambda' - \lambda'' - \lambda+1}{2})
}{
2 \lambda
\
\Gamma(\frac{\lambda' + \lambda''+ \lambda+1}{2})
\
\Gamma(\frac{\lambda' + \lambda'' - \lambda+1}{2})
},
\\
\V{\app}{\lambda'}{\lambda''}{\lambda}
&:=
\frac{
(\Gamma(\lambda''+1))^2
\
\Gamma(\frac{-\lambda' - \lambda''+ \lambda+1}{2})
\
\Gamma(\frac{\lambda' - \lambda'' + \lambda+1}{2})
}{
2 \lambda
\
\Gamma(\frac{-\lambda' + \lambda''+ \lambda+1}{2})
\
\Gamma(\frac{\lambda' + \lambda'' + \lambda+1}{2})
},
\\
\V{\amp}{\lambda'}{\lambda''}{\lambda}
&:=\V{\apm}{\lambda''}{\lambda'}{\lambda}.
\end{align*}
\begin{lemma}
\label{lem:V}
\begin{enumerate}
\item[{\rm{(1)}}]
$\V {\delta\varepsilon}{\lambda'}{\lambda''}{\lambda}>0$
if $\lambda>0$,
$\lambda', \lambda'' \ge -\frac 1 2$,
and $\delta \varepsilon \lambda -\varepsilon\lambda'-\delta\lambda'' >0$.
Here $\delta \varepsilon \lambda:=\lambda$
when $\delta=\varepsilon$
and $-\lambda$ when $\delta \ne \varepsilon$.
\item[{\rm{(2)}}]
$\V {\delta\varepsilon}{\lambda'}{\lambda''}{\lambda}>0$
if $(\lambda', \lambda'') \in \Lambda_{\delta \varepsilon}(\lambda)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1)\enspace
Clear from the definition.
(2)\enspace
The second statement is a special case of the first one.
See also Lemma \ref{lem:2.4} for an alternative proof.
\end{proof}
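For the reader's convenience,
we spell out (1) in the representative case $(\delta,\varepsilon)=(+,-)$,
where the last hypothesis reads $\lambda'-\lambda''-\lambda>0$.
Each Gamma factor appearing in $\V{\apm}{\lambda'}{\lambda''}{\lambda}$
then has a positive argument:
\[
\lambda''+1 \ge \tfrac 1 2,
\quad
\tfrac{\lambda'-\lambda''+\lambda+1}{2} > \lambda+\tfrac 1 2,
\quad
\tfrac{\lambda'-\lambda''-\lambda+1}{2} > \tfrac 1 2,
\quad
\tfrac{\lambda'+\lambda''+\lambda+1}{2} \ge \tfrac{\lambda}{2},
\quad
\tfrac{\lambda'+\lambda''-\lambda+1}{2} > \lambda''+\tfrac 1 2 \ge 0.
\]
Since the Gamma function is positive on $(0,\infty)$ and $2\lambda>0$,
the positivity of $\V{\apm}{\lambda'}{\lambda''}{\lambda}$ follows;
the other cases are analogous.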
\subsection{Jacobi functions and Jacobi polynomials}
Let us consider the differential operator
\begin{equation}
\label{eqn:JacobiL}
L_{\apm}:=
\frac{d^2}{d t^2}
+((2 \lambda'+1) \tanh t+(2 \lambda''+1) \coth t)\frac{d}{d t}.
\end{equation}
We recall
that for $\lambda$, $\lambda'$, $\lambda'' \in {\mathbb{C}}$
with $\lambda'' \ne -1, -2, \cdots$,
the {\it{Jacobi function}} $\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)$
is the unique even solution
to the following differential equation
\begin{equation}
\label{eqn:JacobiODE}
(L_{\apm}+((\lambda'+\lambda''+1)^2-\lambda^2)) \varphi=0
\end{equation}
such that $\varphi(0)=1$,
see Koornwinder \cite{xkoorn},
for instance.
We note that
$
\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)
=
\varphi_{-i \lambda}^{(\lambda'', \lambda')}(t).
$
By the change of variables
$z=-\sinh^2 t$,
$g(z):=\varphi(t)$ satisfies
the hypergeometric differential equation
\begin{equation}
\label{eqn:4.4.6}
\left(
z(1-z)\frac{\partial^2}{\partial z^2}
+ (c-(a+b+1)z) \frac{\partial}{\partial z}
- a b \right)
g(z) = 0
\end{equation}
with
$$
a= \frac{\lambda'+\lambda''+1-\lambda}2,\quad
b= \frac{\lambda'+\lambda''+1+\lambda}2,\quad
c=\lambda''+1.
$$
The hypergeometric differential equation \eqref{eqn:4.4.6} has
a regular singularity at $z=0$,
and its exponents are $0$, $-\lambda''$.
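Indeed, substituting $g(z)=z^s(1+O(z))$ into \eqref{eqn:4.4.6}
yields the indicial equation at $z=0$:
\[
s(s-1)+c s = s(s+\lambda'')=0,
\]
whose roots are $s=0$ and $s=-\lambda''$.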
For $\lambda''\ne 0$,
we denote by $g_{1(0)}(z)$ and $g_{2(0)}(z)$
the unique solutions
to \eqref{eqn:4.4.6}
such that
\begin{equation}
\label{eqn:g0}
g_{1(0)}(0)=1
\quad
\text{and}
\quad
\lim_{z \to 0}z^{\lambda''} g_{2(0)}(z)=1.
\end{equation}
We set
\[
u_{j(0)}(t):=g_{j(0)}(-\sinh^2 t)
\qquad{\text{for $j=1,2$.}}
\]
If $\lambda'' \ne 0, -1,-2, \cdots$,
then $u_{1(0)}(t)$ is the Jacobi function
$\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)$
(see \eqref{eqn:JacobiF}),
and thus we have
\begin{equation}
\label{eqn:JacobiF}
\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)
=
{}_2F_1\left(
\tfrac{\lambda'+\lambda''+1-\lambda}2,
\tfrac{\lambda'+\lambda''+1+\lambda}2;
\lambda''+1;-\sinh^2 t
\right),
\end{equation}
where ${}_2F_1$ is the Gauss hypergeometric function.
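Two basic properties recalled above are immediate from \eqref{eqn:JacobiF}:
the normalization $\varphi_{i \lambda}^{(\lambda'', \lambda')}(0)=1$
follows from ${}_2F_1(a,b;c;0)=1$,
and the symmetry
\[
\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)
=
\varphi_{-i \lambda}^{(\lambda'', \lambda')}(t)
\]
holds because the substitution $\lambda \mapsto -\lambda$ merely
interchanges the first two parameters of ${}_2F_1$,
in which the function is symmetric.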
We need the following formul{\ae}
for the $L^2$-norms
of the Jacobi functions.
\begin{lemma}
[{\cite[Lem.~8.2]{opq2}}]
\label{lem:2.4}
Suppose $\lambda >0$.
\begin{alignat*}{2}
&\int_0^\infty |\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)|^2
(\cosh t)^{2 \lambda'+1}
(\sinh t)^{2 \lambda''+1}
\ d t
= V_{\apm,\lambda}^{(\lambda', \lambda'')}
\quad
&&\text{if $(\lambda', \lambda'') \in \Lambda_{\apm}(\lambda)$.}
\\
&\int_0^\frac{\pi}2 |\varphi_{i \lambda}^{(\lambda'', \lambda')}(i \theta)|^2
(\cos \theta)^{2 \lambda'+1}
(\sin \theta)^{2 \lambda''+1}
\ d \theta
= V_{\app,\lambda}^{(\lambda', \lambda'')}
\quad
&&\text{if $(\lambda', \lambda'') \in \Lambda_{\app}(\lambda)$.}
\end{alignat*}
\end{lemma}
\subsection{Construction of holographic operators}
\label{subsec:ho}
We define the following diffeomorphisms $\Phi_{\delta\varepsilon}$
onto the open subsets $X(p,q)_{\delta\varepsilon}$ by
\begin{alignat}{2}
\label{eqn:mapmp}
\Phi_\amp \colon
&X(q', p') \times X(p'', q'') \times (0,\infty)
&&\rarrowsim X(p,q)_\amp
\\
&((y', x'), (x'', y''), t)
&&\mapsto (x' \sinh t, x'' \cosh t, y' \sinh t, y'' \cosh t),
\notag
\\
\label{eqn:mappp}
\Phi_\app \colon
&X(p',q') \times X(p'', q'') \times (0,\frac \pi 2)
&&\rarrowsim X(p,q)_\app
\\
&((x', y'), (x'', y''), \theta)
&&\mapsto (x' \cos \theta, x'' \sin \theta, y' \cos \theta, y'' \sin \theta),
\notag
\\
\label{eqn:mappm}
\Phi_\apm \colon
&X(p',q') \times X(q'', p'') \times (0,\infty)
&&\rarrowsim X(p,q)_\apm
\\
&((x', y'), (y'', x''), t)
&&\mapsto (x' \cosh t, x'' \sinh t, y' \cosh t, y'' \sinh t).
\notag
\end{alignat}
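As a quick check that $\Phi_{\apm}$ indeed maps into $X(p,q)_{\apm}$
(the other two cases are similar):
for $(x',y') \in X(p',q')$ and $(y'',x'') \in X(q'',p'')$
we have $|x'|^2-|y'|^2=1$ and $|x''|^2-|y''|^2=-1$,
so the image point satisfies
\[
\cosh^2 t \,(|x'|^2-|y'|^2)+\sinh^2 t\,(|x''|^2-|y''|^2)
=\cosh^2 t-\sinh^2 t =1,
\]
while $\mu=\cosh^2 t\,(|x'|^2-|y'|^2)=\cosh^2 t>1$ for $t>0$,
in accordance with the definition of $X(p,q)_{\apm}$.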
By using the following coordinates:
\begin{alignat*}{2}
(z',z'', t) &= \Phi_{\delta \varepsilon}^{-1}(x) \ && \text{ for $x \in X(p,q)_{\delta \varepsilon}$ with $(\delta,\varepsilon)=(-,+)$ or $(+,-)$},
\\
(z',z'', \theta) &= \Phi_\app^{-1}(x) \ && \text{ for $x \in X(p,q)_\app$},
\end{alignat*}
we introduce linear operators
\begin{equation}
\label{eqn:T}
T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''} \colon
L^2(X(p',q')_{\delta}) \widehat\otimes L^2(X(p'',q'')_{\varepsilon})
\to L^2(X(p,q)),
\end{equation}
as follows:
\begin{align*}
T_{\amp, \lambda}^{\lambda', \lambda''} h (x)
&:=
\begin{cases}
h(z',z'') \ \varphi_{i \lambda}^{(\lambda', \lambda'')}(t)
\ (\cosh t)^{\lambda''-\rho''} (\sinh t)^{\lambda'-\rho'}
& \text{if } x \in X(p,q)_\amp,
\\
0 & \text{otherwise,}
\end{cases}
\\
T_{\app, \lambda}^{\lambda', \lambda''} h (x)
&:=
\begin{cases}
h(z',z'') \
\varphi_{i \lambda}^{(\lambda'', \lambda')}(i \theta)
(\cos \theta)^{\lambda'-\rho'} (\sin \theta)^{\lambda''-\rho''}
\,\,\,
& \text{if } x \in X(p,q)_\app,
\\
0 & \text{otherwise,}
\end{cases}
\\
T_{\apm, \lambda}^{\lambda', \lambda''} h (x)
&:=
\begin{cases} h(z',z'') \ \varphi_{i \lambda}^{(\lambda'', \lambda')}(t)
\ (\cosh t)^{\lambda'-\rho'} (\sinh t)^{\lambda''-\rho''}
& \text{if } x \in X(p,q)_\apm,
\\
0 & \text{otherwise.}
\end{cases}
\end{align*}
\begin{theorem}
\label{thm:holographic}
Suppose $(\delta,\varepsilon)=(-,+)$, $(+,+)$ or $(+,-)$.
Let $\lambda \in A_+(p,q)$
and $(\lambda',\lambda'') \in \Lambda_{\delta\varepsilon}(\lambda)$.
Then $T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''}$ induces
an injective $G'$-intertwining operator:
\[
T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''} \colon
L^2(X(p',q')_{\delta})_{\lambda'}
\widehat\otimes
L^2(X(p'',q'')_{\varepsilon})_{\lambda''}
\to L^2(X(p,q))_{\lambda}.
\]
Moreover,
$(\V {\delta\varepsilon}{\lambda}{\lambda'}{\lambda''})^{-\frac 1 2}
T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''}$ is an isometry.
\end{theorem}
The proof of Theorem \ref{thm:holographic} is divided into two parts:
\begin{enumerate}
\item[$\bullet$]
to compute the operator norm of $T_{\delta \varepsilon,\lambda}^{\lambda', \lambda''}$,
see Proposition \ref{prop:opnorm};
\item[$\bullet$]
to show that $T_{\delta \varepsilon,\lambda}^{\lambda', \lambda''}h$ is
a weak solution
to \eqref{eqn:Laplmd},
see Proposition \ref{prop:weakSol}.
\end{enumerate}
\subsection{Operator norms of the holographic operators}
We prove
that the linear operator $T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''}$ is a scalar multiple
of an isometric operator,
and compute its operator norm.
Note that $h$ need not satisfy any differential equation
in the proposition below.
\begin{proposition}
\label{prop:opnorm}
Suppose $(\delta, \varepsilon)=(-,+)$, $(+,+)$, or $(+,-)$.
If $\lambda>0$ and $(\lambda', \lambda'') \in \Lambda_{\delta \varepsilon}(\lambda)$,
then $T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''}$ is an isometry
up to scaling:
\[
\| T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''} h \|_{L^2(X(p,q))}^2
=
V_{\delta\varepsilon, \lambda}^{(\lambda', \lambda'')}
\|h\|_{L^2(X(p',q')_{\delta} \times X(p'',q'')_{\varepsilon})}^2
\]
for all $h \in L^2(X(p',q')_{\delta} \times X(p'',q'')_{\varepsilon})$.
\end{proposition}
\begin{proof}
With respect to the diffeomorphisms \eqref{eqn:mapmp}--\eqref{eqn:mappm},
the invariant measure $d \mu$
on $X(p,q)$ is expressed as
\begin{equation}
\label{eqn:measure}
d \mu_{X(p,q)} = d \mu_{X(p',q')_{\delta}} d \mu_{X(p'',q'')_{\varepsilon}}
d \mu_{\delta\varepsilon}(t)
\quad
\text{on $X(p,q)_{\delta\varepsilon}$},
\end{equation}
where
\begin{align}
d \mu_{\amp}(t) :=& (\cosh t)^{2 \rho''+1}(\sinh t)^{2\rho'+1}d t,
\notag
\\
d \mu_{\app}(\theta) :=& (\cos \theta)^{2 \rho'+1}(\sin \theta)^{2\rho''+1}d \theta,
\label{eqn:dmu++}
\\
\label{eqn:dmu+-}
d\mu_{+-}(t):=&
(\cosh t)^{2 \rho' +1} (\sinh t)^{2 \rho'' + 1} d t.
\end{align}
Hence the proof of Proposition \ref{prop:opnorm} is reduced
to Lemma \ref{lem:2.4}.
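Explicitly, for the representative case $(\delta,\varepsilon)=(+,-)$,
squaring the factor $(\cosh t)^{\lambda'-\rho'}(\sinh t)^{\lambda''-\rho''}$
against the density of $d \mu_{\apm}(t)$ gives
\[
\| T_{\apm, \lambda}^{\lambda', \lambda''} h \|_{L^2(X(p,q))}^2
=
\|h\|_{L^2}^2
\int_0^\infty |\varphi_{i \lambda}^{(\lambda'', \lambda')}(t)|^2
(\cosh t)^{2 \lambda'+1}
(\sinh t)^{2 \lambda''+1}
\, d t,
\]
which equals $V_{\apm,\lambda}^{(\lambda', \lambda'')} \, \|h\|_{L^2}^2$
by the first formula of Lemma \ref{lem:2.4};
the other two cases are parallel.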
\end{proof}
\subsection{Construction of smooth solutions on open sets}
Since the Laplacian $\Delta_{X(p,q)}$ is not
an elliptic differential operator
unless the signature of $g_{X(p,q)}$ is definite
({\it{i.e., }} $p=1$ or $q=0$),
eigenfunctions (in the distribution sense)
of the Laplacian are not necessarily real analytic on $X(p,q)$.
In fact,
when $p \ge 2$ and $q \ge 1$,
one sees from the proof of Corollary \ref{cor:discdeco}
that $T_{\delta\varepsilon,\lambda}^{\lambda',\lambda''} h$ is never
real analytic
on the whole space $X(p,q)$
if $h \not \equiv 0$ and $p' p'' \ne 0$.
We begin by considering the restriction
of $T_{\delta\varepsilon,\lambda}^{\lambda',\lambda''} h$
to the open set $X(p,q)_{\delta\varepsilon}$
(Section \ref{subsec:orbits})
for each $(\delta, \varepsilon)=(-,+)$, $(+,+)$, or $(+,-)$.
\begin{proposition}
\label{prop:smoothSol}
Suppose that $\lambda, \lambda', \lambda'' \in {\mathbb{C}}$
satisfy $\lambda', \lambda'' \ne -1,-2,\cdots$.
Then for any $h \in C^{\infty}(X(p',q')_{\delta})_{\lambda'} \otimes C^{\infty}(X(p'',q'')_{\varepsilon})_{\lambda''}$,
$F(x):=T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''} h(x)$
satisfies the differential equation \eqref{eqn:Laplmd}
on the open set $X(p,q)_{\delta \varepsilon}$.
\end{proposition}
\begin{proof}
Suppose $(\delta, \varepsilon)=(+,-)$.
We set
\begin{align*}
D_{\apm} =&
\frac{\partial^2}{\partial t^2}
+ ( (2 \rho'+1) \tanh t + (2\rho''+1) \coth t) \frac{\partial}{\partial t},
\\
L_{\apm} =&
\frac{\partial^2}{\partial t^2}
+ ((2 \lambda'+1) \tanh t + (2 \lambda'' + 1) \coth t) \der{t},
\end{align*}
where we set
$
\rho' = \frac{p'+q'-2}2,
\
\rho'' = \frac{p''+q''-2}2.
$
We note that
$
\rho=\rho'+\rho''+1.
$
A short computation shows that
\[
S_{\lambda', \lambda''}^{-1} \circ D_{\apm} \circ S_{\lambda', \lambda''}
=
L_{\apm} + ((\lambda' + \lambda''+1)^2-\rho^2
- \frac{(\lambda')^2-(\rho')^2}{(\cosh t)^2} + \frac{(\lambda'')^2-(\rho'')^2}{(\sinh t)^2} ),
\]
under the transform $S_{\lambda',\lambda''}$ defined by
\begin{equation}
\label{eqn:S}
(S_{\lambda', \lambda''} \varphi)(t) := (\cosh t)^{\lambda'-\rho'}(\sinh t)^{\lambda''-\rho''} \varphi(t).
\end{equation}
Via the diffeomorphism
$\Phi_{+-}$ \eqref{eqn:mappm},
the Laplacian $\Delta_{X(p,q)}$ takes the form:
\begin{equation}
\label{eqn:4.4.2}
\Delta_{X(p,q)}
= - D_{\apm} + \frac{1}{\cosh^2 t} \Delta_{X(p', q')}
- \frac{1}{\sinh^2 t} \Delta_{X(q'', p'')}
\end{equation}
in $X(p,q)_{\apm}$.
Therefore,
for nonzero $h' \in C^{\infty}(X(p',q'))_{\lambda'}$
and $h'' \in C^{\infty}(X(q'',p''))_{\lambda''}$,
$F_{\apm}(z',z'',t):=h'(z') h''(z'') (S_{\lambda', \lambda''} \varphi)(t)$
satisfies
\[
(\Delta_{X(p,q)} + \lambda^2-\rho^2) (F_{\apm} \circ \Phi_{\apm}^{-1})
= 0
\quad
\text{on $X(p,q)_{\apm}$}
\]
if and only if $\varphi$ satisfies the Jacobi differential equation
\eqref{eqn:JacobiODE}.
Thus Proposition \ref{prop:smoothSol} is shown
for $(\delta,\varepsilon)=(+,-)$.
The proof for $(\delta,\varepsilon)=(-,+)$ is essentially the same,
and that for $(\delta,\varepsilon)=(+,+)$ goes similarly.
In this case,
the Laplacian takes the form:
\begin{equation*}
\Delta_{X(p,q)}
= D_{\app}+ \frac{1}{\cos^2 \theta} \Delta_{X(p', q')}
+ \frac{1}{\sin^2 \theta} \Delta_{X(p'', q'')}
\end{equation*}
on $X(p,q)_{\app}$ in the coordinates via $\Phi_{\app}$,
where we set
\[
D_{++}:=\frac{\partial^2}{\partial \theta^2}
- ((2 \rho'+1) \tan \theta - (2\rho''+1) \cot \theta)
\frac{\partial}{\partial \theta}.
\]
By the change of variables $z=\sin^2 \theta$,
the function
$$
g(z', z'', z)
:= (\cos \theta)^{-\lambda' + \rho'}
(\sin \theta)^{-\lambda'' + \rho''}
F \circ \Phi_\app(z', z'', \theta)
$$
satisfies
the same hypergeometric equation \eqref{eqn:4.4.6},
with regular singularities:
the exponents at $z=0$ are $0$, $-\lambda''$;
and those at $z=1$ are $0$, $-\lambda'$.
\end{proof}
\subsection{Boundary $\partial X(p,q)_{\delta\varepsilon}$}
\label{subsec:bdry}
By definition \eqref{eqn:T},
$T_{\delta \varepsilon, \lambda}^{\lambda', \lambda''}h$ is
the extension of a solution
to the differential equation \eqref{eqn:Laplmd}
in the open domain $X(p,q)_{\delta \varepsilon}$
(see Proposition \ref{prop:smoothSol})
to the whole manifold $X(p,q)$ by zero
outside the domain.
In order to determine precisely when such an extension
gives a weak solution to \eqref{eqn:Laplmd}
in $L^2(X(p,q))$,
we need an estimate of the solution near the boundary.
In this section
we study the boundary $\partial X(p,q)_{\delta\varepsilon}$.
We observe that
\[
\partial X(p,q)_{++}= \partial X(p,q)_{-+} \cup \partial X(p,q)_{+-}.
\]
Since $\partial X(p,q)_{-+}$ is similar
to $\partial X(p,q)_{+-}$,
we take a closer look at
\begin{align*}
\partial X(p,q)_{+-}
=&\{(u',u'',v',v'')\in X(p,q): |u''|=|v''|\},
\intertext{which is a union of the following two submanifolds:}
\partial X(p,q)_{+-}^{\operatorname{sing}}
:=&\{(u',0,v',0): (u', v') \in X(p',q')\},
\\
\partial X(p,q)_{+-}^{\operatorname{reg}}
:=&\{(u',u'',v',v'')\in X(p,q) : |u''|=|v''| \ne 0 \}.
\end{align*}
We note that the singular part
$\partial X(p,q)_{+-}^{\operatorname{sing}}$
is diffeomorphic to $X(p',q')$
and that the map $\Phi_{\apm}$ extended to $t=0$
in \eqref{eqn:mappm}
surjects onto $\partial X(p,q)_{\apm}^{\operatorname{sing}}$:
\[
\Phi_{\apm} (X(p',q') \times X(q'',p'') \times \{0\})
=
\partial X(p,q)_{\apm}^{\operatorname{sing}}.
\]
On the other hand,
the regular part $\partial X(p,q)_{+-}^{\operatorname{reg}}$
is a hypersurface in $X(p,q)$.
In a neighbourhood $U$ of a point at $\partial X(p,q)_{\apm}^{\operatorname{reg}}$,
we set
\[
\xi_1:=|v''|-|u''|, \,\,
\xi_2:=|v''|+|u''| \, (>0),
\]
and take coordinates on $U$ $(\subset X(p,q))$ by
\begin{equation}
\label{eqn:coordxi}
(u',u'',v',v'')
=
((1+\xi_1 \xi_2)^{\frac 1 2}x',
\frac 1 2 (\xi_2- \xi_1)\omega'',
(1+\xi_1 \xi_2)^{\frac 1 2}y',
\frac 1 2 (\xi_1 + \xi_2)\eta''),
\end{equation}
where $z'=(x',y') \in X(p',q')$,
$\omega'' \in S^{p''-1}$, and $\eta'' \in S^{q''-1}$.
Then $U \cap X(p,q)_{\apm}$ is given by $\xi_1 >0$,
whereas $U \cap X(p,q)_{\app}$ is given by $\xi_1<0$.
\begin{lemma}
\label{lem:1914194}
In the coordinates \eqref{eqn:coordxi},
the Laplacian $\Delta_{X(p,q)}$ takes the form
\begin{equation}
\label{eqn:Lapxi}
\Delta_{X(p,q)}
=
\xi_1^2 \frac{\partial^2}{\partial \xi_1^2}
+
4 \frac{\partial^2}{\partial \xi_1 \partial \xi_2}
+\xi_1 P \frac{\partial}{\partial \xi_1}
+Q,
\end{equation}
where $P$ and $Q$ are differential operators
in the variables $\xi_2$, $x'$, $y'$, $\omega''$ and $\eta''$
with smooth coefficients.
\end{lemma}
\begin{proof}
The coordinates \eqref{eqn:coordxi} are obtained from
$\Phi_{\apm}(z',z'',t)$,
see \eqref{eqn:mappm},
successively by the following two steps:
\begin{align}
\label{eqn:coo1}
&\bullet \,\,\,
z''=(\omega''\sinh s, \eta''\cosh s) \in X(p'',q'')_-,
\\
&\label{eqn:coo2}
\bullet \,\,\,
\xi_1=e^{-s}\sinh t,
\,\,
\xi_2=e^s \sinh t.
\end{align}
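Indeed, composing the two steps recovers \eqref{eqn:coordxi}:
from \eqref{eqn:coo1}--\eqref{eqn:coo2} one computes
\[
\tfrac 1 2(\xi_2-\xi_1)=\sinh s \sinh t,
\qquad
\tfrac 1 2(\xi_1+\xi_2)=\cosh s \sinh t,
\qquad
1+\xi_1 \xi_2 = \cosh^2 t,
\]
which match the components
$u''=\omega'' \sinh s \sinh t$, $v''=\eta'' \cosh s \sinh t$,
$u'=x' \cosh t$ and $v'=y' \cosh t$ of $\Phi_{\apm}(z',z'',t)$.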
By the change of coordinates in the first step,
the Laplacian $\Delta_{X(p,q)}$ takes the form \eqref{eqn:4.4.2}
with the second term replaced by
\[
\frac 1 {\cosh^2 t}
(-D^s +\frac 1 {\cosh^2 s} \Delta_{S^{q''-1}}-\frac 1 {\sinh^2 s}\Delta_{S^{p''-1}})
\]
where we set
\[
D^s := \frac{\partial^2}{\partial s^2}
+
((q''-1) \tanh s+(p''-1)\coth s)
\frac {\partial}{\partial s}.
\]
Then the change of variables $(t,s) \mapsto (\xi_1, \xi_2)$
in the second step yields
\begin{equation*}
\frac{\partial}{\partial s}
=
- \xi_1 \frac {\partial}{\partial \xi_1} + \xi_2 \frac {\partial}{\partial \xi_2},
\quad
\frac{\partial}{\partial t}
=
\left(\frac{1+\xi_1 \xi_2}{\xi_1 \xi_2}\right)^{\frac 1 2}
\left(\xi_1\frac{\partial}{\partial \xi_1}
+ \xi_2 \frac {\partial}{\partial \xi_2}\right),
\end{equation*}
whence the lemma by short computations.
\end{proof}
\subsection{Extension as a weak solution in $L^2(X(p,q))$}
The proof of Theorem \ref{thm:holographic} will be completed
once we show that the image of $T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''}$
consists of weak solutions
to the differential equation \eqref{eqn:Laplmd}.
\begin{proposition}
\label{prop:weakSol}
Suppose $(\delta, \varepsilon)=(-,+)$, $(+,+)$, or $(+,-)$.
Assume $(\lambda', \lambda'') \in \Lambda_{\delta\varepsilon}(\lambda)$.
Then for any $h \in L^2(X(p',q')_{\delta})_{\lambda'} \widehat \otimes L^2(X(p'',q'')_{\varepsilon})_{\lambda''}$,
$F:=T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''} h$ is a weak solution
to the differential equation \eqref{eqn:Laplmd} on $X(p,q)$.
\end{proposition}
\begin{proof}
Since the Laplacian $\Delta$ is a closed operator on $L^2(X(p,q))$,
and since $T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''}$ is
a bounded operator by Proposition \ref{prop:opnorm},
it suffices to prove the assertion
for a dense subspace of the Hilbert space.
Thus we may and do assume
that $h$ is a $K'$-finite function.
Then $F$ is real analytic on $X(p,q)_{\delta \varepsilon}$
and satisfies \eqref{eqn:Laplmd} in $X(p,q)_{\delta \varepsilon}$
in the usual sense
by Proposition \ref{prop:smoothSol}.
In order to prove
that $F$ is a weak solution to \eqref{eqn:Laplmd}
in the whole manifold $X(p,q)$,
we consider the boundary $\partial X(p,q)_{\delta \varepsilon}$,
and explain the case $(\delta,\varepsilon)=(+,-)$.
We may and do assume that $p''>0$.
In fact,
if $p''=0$,
then $X(p,q)_{\app} = X(p,q)_{\apm} =\emptyset$
and $T_{\apm, \lambda}^{\lambda',\lambda''}h |_{X(p,q)_{\apm}}$ extends
to a smooth function on $X(p,q)$.
Suppose $p''>0$.
Then $\lambda'' \in A_-(p'',q'')$ satisfies $\lambda''>0$.
In order to prove that $F$ is a weak solution to \eqref{eqn:Laplmd},
it suffices to verify it near the boundary
$\partial X(p,q)_{\apm}=\partial X(p,q)_{\apm}^{\operatorname{reg}}
\cup \partial X(p,q)_{\apm}^{\operatorname{sing}}$.
\vskip 1pc
{\bf{Case I.}}\enspace
First,
we deal with a neighbourhood $U$
of a point at $\partial X(p,q)_{\apm}^{\operatorname{reg}}$.
We take coordinates of $U$
as in \eqref{eqn:coordxi}.
We recall
that the boundary $U \cap \partial X(p,q)_{\apm}$
is given by $\xi_1=0$
where $\xi_2 >0$.
Then $\Phi_{\apm}(z',z'',t)$
with $z''=(\omega'' \sinh s, \eta'' \cosh s)$,
see \eqref{eqn:coo1},
approaches boundary points
in $\partial X(p,q)_{\apm}^{\operatorname{reg}}$,
as $t \to 0$ and $s \to \infty$
with constraints
\[
C_1< e^{s} \sinh t< C_2
\qquad
\text{for some $0< C_1 < C_2$},
\]
because
\[
\xi_1 =e^{-s} \sinh t, \quad \xi_2 = e^s \sinh t.
\]
Then it follows from \eqref{eqn:L2asym}
that the $K'$-finite function $h$ has an asymptotic behavior
\begin{equation}
\label{eqn:hasym}
h(z',z'')=a(z',\omega'',\eta'') e^{-(\lambda''+\rho'')s}(1+se^{-2s}O(1))
\end{equation}
as $s \to \infty$
for some analytic function $a(z',\omega'',\eta'')$,
and therefore
$F=T_{\apm,\lambda}^{\lambda',\lambda''}h$ in $U \cap X(p,q)_{\apm}$
behaves as
\[
O(e^{-(\lambda''+\rho'')s}(\sinh t)^{\lambda''-\rho''})
=O(\xi_1^{\lambda''}\xi_2^{-\rho''})
\]
near the boundary $\xi_1 \downarrow 0$,
whereas $F \equiv 0$ for $\xi_1 <0$.
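Here the second expression follows from the first by substituting
$\xi_1=e^{-s}\sinh t$ and $\xi_2=e^{s}\sinh t$:
\[
\xi_1^{\lambda''}\xi_2^{-\rho''}
=(e^{-s}\sinh t)^{\lambda''} (e^{s}\sinh t)^{-\rho''}
=e^{-(\lambda''+\rho'')s}(\sinh t)^{\lambda''-\rho''}.
\]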
Since $\lambda''>0$ and since $\Delta_{X(p,q)}$ takes
the form \eqref{eqn:Lapxi},
the distribution $\Delta_{X(p,q)}F$
is actually a locally integrable function on $U$.
Since $F$ solves \eqref{eqn:Laplmd}
in $U \setminus \partial X(p,q)_{\apm}$
in the usual sense,
so does $F$ in $U$ in the distribution sense.
\vskip 1pc
{\bf{Case II.}}\enspace
Next,
we deal with a neighbourhood $U$
of a point at $\partial X(p,q)_{\apm}^{\operatorname{sing}}$.
In this case,
we use
$
(z',z'',t) \in X(p',q') \times X(q'', p'') \times [0, \infty)
$
as coordinates of $U \cap \overline{X(p,q)_{\apm}}$
via $\Phi_{\apm}$.
Since $F$ behaves as $O(t^{\lambda''-\rho''})$
when $t$ tends to zero,
so does $Y_1 F$ as $O(t^{\lambda''-\rho''-1})$
and $Y_1 Y_2 F$ as $O(t^{\lambda''-\rho''-2})$
for any vector fields $Y_1$, $Y_2$ on $X(p,q)$.
In view of the formula \eqref{eqn:dmu+-} of the measure $d \mu_{+-}(t)$,
these functions belong to $L_{\operatorname{loc}}^1({\mathbb{R}}, d \mu_{+-}(t))$
if
\[
(\lambda''-\rho''-2)+ (2\rho''+1)>-1,
\]
which is automatically satisfied
because $\lambda'' >0$.
Thus $F$ is a weak solution to \eqref{eqn:Laplmd}
near the boundary $\partial X(p,q)_{\delta\varepsilon}$
when $(\delta, \varepsilon)=(+,-)$.
The other cases $(\delta, \varepsilon)=(+,+)$ and $(-,+)$ are similar.
Thus Proposition \ref{prop:weakSol} is proved.
\end{proof}
\section{Exhaustion of holographic operators}
\label{sec:5}
Let $\Pi \in \widehat G$ be any discrete series representation
for the pseudo-Riemannian space form
$G/H\simeq X(p,q)$.
In this section we prove
that discrete spectra of the restriction
$\Pi|_{G'}$ are exhausted by \eqref{eqn:2.1.1}
counted with multiplicities,
hence completing the proof of Theorem \ref{thm:2002}.
To be precise,
we recall from Proposition \ref{prop:discX}
that any $\Pi \in \Disc{G/H}$ is of the form
$\Pi = \pi_{+,\lambda}^{p,q}$
for some $\lambda \in A_+(p,q)$,
and from Proposition \ref{prop:3.1.1}
that $\pi \in \widehat{G'}$ satisfying $\Hom_{G'}(\pi, \Pi|_{G'}) \ne \{0\}$
must be of the form
$\pi=\pi_{\delta,\lambda'}^{p',q'} \boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''}$
for some $(\lambda',\lambda'')
\in A_{\delta}(p',q') \times A_{\varepsilon}(p'',q'')$
with $(\delta, \varepsilon) \in \{(-, +), (+, +), (+, -)\}$.
We show
that $(\lambda',\lambda'')$ is actually
an element of $\Lambda_{\delta\varepsilon}(\lambda)$.
More strongly,
we prove:
\begin{theorem}
\label{thm:4.2}
Suppose that $\lambda \in A_+(p,q)$
and
$(\lambda', \lambda'')
\in A_{\delta}(p',q') \times A_{\varepsilon}(p'',q'')$.
Then, we have
$$
\Hom_{G'}(\pi_{\delta, \lambda'}^{p',q'} \boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''},
\pip{p}{q}{\lambda}|_{G'})
\simeq \begin{cases} \Bbb C T_{\delta\varepsilon, \lambda}^{\lambda', \lambda''} &
\text{ if } (\lambda', \lambda'') \in \Lambda_{\delta\varepsilon}(\lambda),
\\
0 & \text{ otherwise}.
\end{cases}
$$
\end{theorem}
We already know from \cite{xk:1}
that the direct sum \eqref{eqn:2.1.1} equals the whole restriction $\Pi|_{G'}$
if $p'=0$ or $p''=0$.
In this case,
$\Pi = \pi_{+, \lambda}^{p,q}$ is $K'$-{\it{admissible}}
({\it{cf}}. Section \ref{subsec:algdeco}),
and the multiplicity of each $K'$-type occurring in $\Pi$
coincides with
that in \eqref{eqn:2.1.1}.
Hence the restriction $\Pi|_{G'}$ is discretely decomposable
and is isomorphic to the direct sum \eqref{eqn:2.1.1}.
Thus,
we shall assume $p' p''>0$ from now on.
The rest of this section is devoted to the proof
of Theorem \ref{thm:4.2}
in the case $p' p'' >0$
and $(\delta, \varepsilon)=(+,-)$.
The other cases
where $(\delta, \varepsilon)=(-,+)$ or $(+,+)$
are similar.
\subsection{Kummer's relation}
The hypergeometric differential equation \eqref{eqn:4.4.6}
has a regular singularity
also at $z=\infty$,
and its exponents
are $\frac1 2 (\lambda'+\lambda''+1-\lambda)$
and $\frac1 2 (\lambda'+\lambda''+1+\lambda)$.
Suppose $\lambda \ne 0$.
We write $g_{(\infty)}^+(z)$ and $g_{(\infty)}^-(z)$
for the unique solutions to \eqref{eqn:4.4.6}
such that
\begin{equation}
\label{eqn:ginfty}
\lim_{z \to \infty} (-z)^{\frac{\lambda'+\lambda''+1 \mp \lambda}{2}}
g_{(\infty)}^{\pm}(z)=1,
\end{equation}
and set
\begin{equation}
\label{eqn:uinfty}
u_{(\infty)}^{\pm}(t):=g_{(\infty)}^{\pm}(-\sinh^2 t).
\end{equation}
\begin{lemma}
[Kummer's relation]
\label{lem:Kummer}
Suppose $\lambda \ne 0, -1, -2, \dots$
and $\lambda'' \ne 0$.
\begin{enumerate}
\item[{\rm{(1)}}]
There exist unique
$a(\lambda', \lambda'', \lambda)$,
$b(\lambda', \lambda'', \lambda) \in {\mathbb{C}}$
such that
\begin{equation}
\label{eqn:gKummer}
g_{(\infty)}^-(z)
=
a(\lambda', \lambda'', \lambda)g_{1(0)}(z)
+
b(\lambda', \lambda'', \lambda)e^{i \pi \lambda''}
g_{2(0)}(z).
\end{equation}
\item[{\rm{(2)}}]
If $\lambda'' \ne 0,-1,-2,\dots$,
then
\begin{equation}
\label{eqn:1914110}
b(\lambda', \lambda'', \lambda)
=
\frac{\Gamma(\lambda'') \Gamma(1+\lambda)}
{\Gamma(\frac{-\lambda'+ \lambda''+ \lambda+1}2)
\Gamma(\frac{\lambda'+ \lambda''+ \lambda+1}2)}.
\end{equation}
Moreover,
if $\lambda'' \not\in{\mathbb{Z}}$,
then $a(\lambda', \lambda'', \lambda)=b(\lambda', -\lambda'', \lambda)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first statement is clear
because $g_{1(0)}(z)$ and $g_{2(0)}(z)$ are linearly independent
solutions to \eqref{eqn:4.4.6}.
To see the second statement,
we begin with the generic case
where $\lambda \not \in \{0,-1,-2,\cdots\}$
and $\lambda'' \not \in {\mathbb{Z}}$.
Then we have
\begin{align}
g_{(\infty)}^-(z)
=&(-z)^{\frac{\lambda'+ \lambda''+\lambda+1}2}
{}_2 F_1(\tfrac{\lambda'+ \lambda''+ \lambda+1}2,
\tfrac{\lambda'- \lambda''+ \lambda+1}2;
1+\lambda;z^{-1}),
\notag
\\
g_{1(0)}(z)
=&{}_2 F_1(\tfrac{\lambda'+ \lambda''- \lambda+1}2,
\tfrac{\lambda'+ \lambda''+ \lambda+1}2;
1+\lambda'';z),
\label{eqn:g10}
\\
g_{2(0)}(z)
=&z^{-\lambda''} {}_2 F_1(\tfrac{\lambda'- \lambda''- \lambda+1}2,
\tfrac{\lambda'- \lambda''+ \lambda+1}2;
1-\lambda'';z),
\label{eqn:g20}
\end{align}
and Kummer's relation \cite[2.9 (39)]{xerd}
shows
$
a(\lambda',\lambda'',\lambda)=b(\lambda',-\lambda'',\lambda)
$
with the formula \eqref{eqn:1914110}
for $b(\lambda',\lambda'',\lambda)$.
When $\lambda'' = m \in {\mathbb{N}}_+$,
$g_{1(0)}(z)$ remains the same as in \eqref{eqn:g10}
but $g_{2(0)}(z)$ does not take the form \eqref{eqn:g20}.
In fact,
$g_{2(0)}(z)$ contains a logarithmic term,
and is given by the analytic continuation:
\[
\lim_{\lambda'' \to m}
(g_{2(0)}(z)-\frac{P_m}{\lambda''-m} g_{1(0)}(z))
\]
where $P_m \equiv P_m(\lambda',\lambda) \in {\mathbb{C}}$
is determined by
\[
\lim_{\lambda'' \to m}
(\lambda''-m)g_{2(0)}(z)
=
P_m g_{1(0)}(z).
\]
Then the change of basis may alter
the coefficient $a(\lambda',\lambda'',\lambda)$
in \eqref{eqn:gKummer}
but leaves $b(\lambda',\lambda'',\lambda)$ invariant.
Thus the lemma is proved.
\end{proof}
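For later reference we note when the coefficient \eqref{eqn:1914110} vanishes.
Since the Gamma function is zero-free,
$b(\lambda', \lambda'', \lambda)=0$ exactly
when one of the Gamma factors in the denominator has a pole;
in the parameter range relevant below
(e.g.\ $\lambda>0$ and $\lambda', \lambda'' \ge -\frac 1 2$),
the factor $\Gamma(\frac{\lambda'+ \lambda''+ \lambda+1}2)$ is finite and nonzero,
so
\[
b(\lambda', \lambda'', \lambda)=0
\iff
\frac{-\lambda'+ \lambda''+ \lambda+1}2 \in -{\mathbb{N}}
\iff
\lambda'-\lambda''-\lambda-1 \in 2{\mathbb{N}}.
\]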
For $\lambda',\lambda'' \in {\mathbb{R}}$,
we set a measure $d \mu^{\lambda',\lambda''}$ on ${\mathbb{R}}$
by
\[
d \mu^{\lambda',\lambda''}(t)
:=
(\cosh t)^{2\lambda'+1}(\sinh t)^{2\lambda''+1} d t.
\]
We note that $d \mu_{\apm}(t)=d \mu^{\rho',\rho''}(t)$,
see \eqref{eqn:dmu+-},
and
\begin{equation}
\label{eqn:SL2}
u \in L^2((0,\infty), d \mu^{\lambda',\lambda''}(t))
\Leftrightarrow
S_{\lambda',\lambda''}(u) \in L^2((0,\infty), d \mu_{\apm}(t))
\end{equation}
by the definition of the transform \eqref{eqn:S}
of $S_{\lambda',\lambda''}$.
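The equivalence \eqref{eqn:SL2} is a direct computation:
by \eqref{eqn:S} and $d \mu_{\apm}(t)=d \mu^{\rho',\rho''}(t)$,
\[
\int_0^\infty |(S_{\lambda',\lambda''}u)(t)|^2 \, d \mu_{\apm}(t)
=\int_0^\infty |u(t)|^2 (\cosh t)^{2\lambda'+1}(\sinh t)^{2\lambda''+1} \, d t
=\int_0^\infty |u(t)|^2 \, d \mu^{\lambda',\lambda''}(t).
\]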
We need the following:
\begin{lemma}
\label{lem:KummerL2}
Suppose $\lambda>0$, $\lambda'>-1$, $\lambda''>-1$.
Then
$
u_{(\infty)}^-(t) \in L^2((0, \infty), d \mu^{\lambda', \lambda''}(t))
$
if and only if $-1 < \lambda''<1$
or $\lambda'-\lambda''-\lambda-1 \in 2 {\mathbb{N}}$.
\end{lemma}
\begin{proof}
By the asymptotic behavior \eqref{eqn:ginfty} of
$g_{(\infty)}^-(z)$ as $z \to \infty$,
we have
\[
u_{(\infty)}^-(t)
=g_{(\infty)}^-(-\sinh^2 t)
\in
L^2([1, \infty), d \mu^{\lambda', \lambda''}(t))
\]
because $\lambda >0$.
Likewise,
by the asymptotic behavior \eqref{eqn:g0} of $g_{1(0)}(z)$
and $g_{2(0)}(z)$
as $z \to 0$,
\begin{align*}
u_{1(0)}\in L^2((0, 1], d \mu^{\lambda', \lambda''}(t))
&\Leftrightarrow
\operatorname{Re}\lambda'' >-1,
\\
u_{2(0)}\in L^2((0, 1], d \mu^{\lambda', \lambda''}(t))
&\Leftrightarrow
\operatorname{Re}\lambda'' <1.
\end{align*}
In view of Kummer's relation \eqref{eqn:gKummer},
\[
u_{(\infty)}^-(t)
=a(\lambda', \lambda'', \lambda) u_{1(0)}(t)
+
b(\lambda', \lambda'', \lambda) u_{2(0)}(t)
\]
belongs to $L^2((0, \infty), d \mu^{\lambda', \lambda''}(t))$
if and only if
$-1< \lambda''<1$
or $b(\lambda', \lambda'', \lambda)=0$.
The latter condition amounts to
$\lambda'-\lambda''-\lambda-1 \in 2{\mathbb{N}}$
by Lemma \ref{lem:Kummer} (2).
Thus the lemma is proved.
\end{proof}
\subsection{Possible form of holographic operators}
\label{subsec:4.4}
In this section
we examine a possible form for a holographic operator
$\pi \to \Pi|_{G'}$,
and find a necessary condition
on the parameter for $\Hom_{G'}(\pi,\Pi|_{G'})$
to be nonzero.
We begin with the following:
\begin{lemma}
\label{lem:psi+-}
Let $\lambda \in A_+(p,q)$
and $(\lambda',\lambda'')\in A_+(p',q') \times A_-(p'',q'')$.
Suppose
$
T \in \Hom_{G'}(\pi_{+,\lambda'}^{p',q'}
\boxtimes \pi_{-,\lambda''}^{p'',q''},
\pi_{+,\lambda}^{p,q}|_{G'})
$.
Then in the geometric realizations
of these representations
on pseudo-Riemannian space forms
(Section \ref{subsec:Xpq}),
$T$ must be of the following form:
there exists $c \in {\mathbb{C}}$
such that
\[
T h =
\begin{cases}
c(h S_{\lambda',\lambda''}(u_{(\infty)}^-)) \circ \Phi_{\apm}^{-1}
\quad
&\text{on $X(p,q)_{\apm}$},
\\
0
&\text{otherwise},
\end{cases}
\]
for all $h \in L^2(X(p',q'))_{\lambda'} \widehat{\otimes} L^2(X(q'',p''))_{\lambda''}$. \end{lemma}
\begin{remark}
\label{rem:uinfty}
We have used the Jacobi function
$u_{1(0)}(t)=\varphi_{i \lambda}^{(\lambda'',\lambda')}(t)$
\eqref{eqn:JacobiF}
for the definition
of the holographic operator
$T_{\apm, \lambda}^{\lambda',\lambda''}$ in \eqref{eqn:T}
instead of $u_{(\infty)}^-(t)$
as in Lemma \ref{lem:psi+-}.
It is a part of Theorem \ref{thm:4.2}
to show that $u_{1(0)}(t)$ is proportional
to $u_{(\infty)}^-(t)$
if $(\lambda',\lambda'') \in \Lambda_{\apm}(\lambda)$.
\end{remark}
\begin{proof}
[Proof of Lemma \ref{lem:psi+-}]
For any $h$ in $\pip{p'}{q'}{\lambda'} \boxtimes \pim{p''}{q''}{\lambda''}$,
we have $\operatorname{Supp} T h \subset \overline {X(p,q)_{+-}}$
by Proposition \ref{prop:3.1.2}.
Suppose that $h$ is $K'$-finite.
We set
\begin{equation}
\label{eqn:psi+-}
\psi_{\apm} := S_{\lambda',\lambda''}^{-1} \circ T h \circ \Phi_{\apm},
\end{equation}
where $S_{\lambda',\lambda''}^{-1}$
(see \eqref{eqn:S}) is applied to the last variable $t$.
Then the following differential equations are satisfied:
\begin{multline*}
\Delta_{X(p',q')} \psi_\apm
=(-(\lambda')^2 + (\rho')^2) \psi_\apm,\quad
\Delta_{X(q'',p'')} \psi_\apm
=(-(\lambda'')^2 + (\rho'')^2) \psi_\apm,
\end{multline*}
where
$\Delta_{X(p',q')}$ acts on the $z'$-variables,
and
$\Delta_{X(q'',p'')}$ on the $z''$-variables.
As in the proof of Proposition \ref{prop:smoothSol},
the differential equation \eqref{eqn:Laplmd} yields
the following differential equation
(in the sense of distribution):
\begin{equation}
\label{eqn:4.4.3}
(L_\apm - (\lambda^2 - (\lambda' + \lambda'' + 1)^2))
\psi_\apm(z', z'', t) = 0,
\end{equation}
where $L_{\apm}$ is defined in \eqref{eqn:JacobiL}.
Since $\lambda \ne 0$,
the solution $\psi_\apm(z', z'', t)$ is, in the $t$-variable, a linear combination
of the basis solutions $u_{(\infty)}^+(t)$ and $u_{(\infty)}^-(t)$.
Hence $\psi_{\apm}$ is of the form
\[
\psi_{\apm}(z',z'',t)
=
h_+(z',z'') u_{(\infty)}^+(t)
+
h_-(z',z'') u_{(\infty)}^-(t)
\]
for some real analytic functions
$h_+(z',z'')$ and $h_-(z',z'')$
on $X(p',q') \times X(q'',p'')$.
We observe that under the assumption $\lambda > 0$
we have
\begin{equation}
u_{(\infty)}^+(t) \not\in L^2([1, \infty); d \mu^{\lambda', \lambda''}(t)),
\quad
u_{(\infty)}^-(t) \in L^2([1, \infty); d \mu^{\lambda', \lambda''}(t) ).
\end{equation}
Since $\operatorname{Supp} T h \subset \overline{X(p,q)_{\apm}}$,
the formula \eqref{eqn:measure}
of the invariant measure on $X(p,q)$
and the definition \eqref{eqn:S}
of $S_{\lambda',\lambda''}$ imply
\[
\|T h\|_{L^2(X(p,q))}^2
=
\int_{X(p',q') \times X(q'',p'')}
\int_{0}^{\infty}
|\psi_{\apm}(z',z'',t)|^2
d z' d z''
d \mu^{\lambda', \lambda''}(t).
\]
Thus
we conclude from $T h \in L^2(X(p,q))$
that $h_+(z',z'')=0$.
In turn,
we have
\[
\|T h\|_{L^2(X(p,q))}
=
\|h_-\|_{L^2 (X(p',q') \times X(q'',p''))}
\|u_{(\infty)}^-\|_{L^2((0,\infty), d\mu^{\lambda',\lambda''}(t))}.
\]
Since $T$ is a continuous map
between the Hilbert spaces,
we have
\begin{equation}
\label{eqn:u-}
u_{(\infty)}^- (t) \in L^2((0,\infty), d\mu^{\lambda',\lambda''}(t))
\end{equation}
if $T \ne 0$.
Moreover,
$h \mapsto h_-$ is a $({\mathfrak{g}}',K')$-endomorphism
of the irreducible $({\mathfrak{g}}',K')$-module
$(\pi_{+,\lambda'}^{p',q'} \boxtimes \pi_{-,\lambda''}^{p'',q''})_{K'}$,
whence there exists $c \in {\mathbb{C}}$
such that $h_- = c h$
for all $K'$-finite vectors $h$
by Schur's lemma.
Since $T$ is a continuous map,
we obtain Lemma \ref{lem:psi+-}.
\end{proof}
Next,
we show
that the condition $T h \in L^2(X(p,q))$ leads us to the following:
\begin{proposition}
\label{prop:nec}
Retain $(\delta,\varepsilon)=(+,-)$.
Suppose $\lambda \in A_+(p,q)$
and $(\lambda',\lambda'')
\in A_\delta(p',q') \times A_\varepsilon(p'',q'')$.
If
$
\Hom_{G'}(\pi_{\delta,\lambda'}^{p',q'} \boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''}, \pi_{+,\lambda}^{p,q}|_{G'}) \ne \{0\}$,
then $\lambda'' =\frac 1 2$
or $(\lambda', \lambda'') \in \Lambda_{\delta\varepsilon}(\lambda)$.
\end{proposition}
In Section \ref{subsec:Heviside},
we treat the case $\lambda''=\frac 1 2$.
\begin{proof}
As we have seen \eqref{eqn:u-}
in the proof of Lemma \ref{lem:psi+-},
$u_{(\infty)}^-(t) \in L^2((0,\infty), d \mu^{\lambda',\lambda''}(t))$.
Hence $-1 < \lambda'' < 1$
or $\lambda'-\lambda''-\lambda-1 \in 2 {\mathbb{N}}$
by Lemma \ref{lem:KummerL2}.
Since $\lambda'' \in A_-(p'',q'')$ with $p''>0$
(see \eqref{eqn:A-}),
the only possible $\lambda''$ with $\lambda''<1$
is $\lambda''=\frac 1 2$.
(We note that $\lambda''=-\frac 1 2$ occurs only
when $(p'',q'')=(0,1)$.)
Thus Proposition \ref{prop:nec} is proved.
\end{proof}
\subsection{The case $\lambda''=\frac 1 2$}
\label{subsec:Heviside}
The case $\lambda''=\frac 1 2$ is delicate
because there exists a continuous $G'$-homomorphism
\[
T \colon \pi_{+,\lambda'}^{p',q'} \boxtimes \pi_{-,\lambda''}^{p'',q''}
\to
L^2(X(p,q)_{\apm})
\]
such that the image of $T$ consists of weak solutions
to \eqref{eqn:Laplmd}
in $L^2(X(p,q)_{\apm})$
without the assumption
$(\lambda',\lambda'') \in \Lambda_{\apm}(\lambda)$.
However,
we shall see
that $T h$ {\it{cannot}} be a weak solution
to \eqref{eqn:Laplmd} in $L^2(X(p,q))$
unless $(\lambda', \lambda'') \in \Lambda_{\apm}(\lambda)$.
For this,
it suffices to show the following:
\begin{lemma}
\label{lem:Heviside}
In the setting of Lemma \ref{lem:psi+-},
suppose $\lambda''=\frac 1 2$
and $(\lambda', \lambda'') \not \in \Lambda_{\apm}(\lambda)$.
Then the distribution $\Delta_{X(p,q)}(T h)$
is not a locally integrable function on $X(p,q)$
for any nonzero $K'$-finite function $h$.
\end{lemma}
\begin{proof}
We consider a neighbourhood $U$
of a point of $\partial X(p,q)_{\apm}^{\operatorname{reg}}$,
and use the coordinates \eqref{eqn:coordxi}
as in Section \ref{subsec:bdry}.
Then $T h=0$
if $\xi_1<0$.
Let us examine the behavior of $T h$
in $U \cap \overline{X(p,q)_{\apm}}$
near the boundary as $\xi_1 \downarrow 0$.
Let $\psi_{\apm}$ be as in \eqref{eqn:psi+-}.
Since $(\lambda',\lambda'') \not \in \Lambda_{\apm}(\lambda)$,
the coefficient $b(\lambda',\lambda'',\lambda)$
in \eqref{eqn:gKummer} does not vanish.
Hence there exist $A \in {\mathbb{C}}$
and $B \ne 0$
such that
\begin{align*}
\psi_{\apm}(z',z'',t)
=& h(z',z'') (A u_{1(0)}(t) + B u_{2(0)}(t))
\\
=& h(z',z'') (A -B t^{-1})(1+O(t^2)).
\end{align*}
We recall from \eqref{eqn:hasym}
that $h(z',z'')$ has an asymptotic behavior
\[
h(z',z'')=a(z',\omega'',\eta'') e^{-(\lambda''+\rho'')s}(1+se^{-2s}O(1))
\]
for some real analytic function
of $(z',\omega'',\eta'') \in X(p',q') \times S^{p''-1} \times S^{q''-1}$
as $s \to \infty$
in the coordinates $z''=(\omega'' \sinh s, \eta'' \cosh s)$.
Combining these two asymptotic behaviours
as $s \to \infty$ and $t \to 0$
with $\xi_2 =e^s \sinh t$
away from 0 and infinity,
we obtain the asymptotic behavior
of $T h$
near the boundary $\partial X(p,q)_{\apm}^{\operatorname{reg}}$:
\[
T h \sim \sum_{k=0}^{\infty}\xi_1^{\lambda''-\frac 1 2+\frac k 2} g_k(\xi_2, z', \omega'',\eta'')
\]
where the first term is given by
\[
g_0=-B \xi_2^{-\frac 1 2 - \rho''}a(z',\omega'',\eta'').
\]
In view of $\lambda''=\frac 1 2$,
the proof is reduced to the following lemma.
\end{proof}
\begin{lemma}
\label{lem:191496}
Let $U$ be an open subset of ${\mathbb{R}}^n$,
and $P$ a differential operator on $U$ of the form
\[
P=\xi_1^2 \frac{\partial^2}{\partial \xi_1^2} + \frac{\partial}{\partial \xi_1} P_1+ P_2
\]
such that $P_1$ and $P_2$ are differential operators
in the variables $\xi'=(\xi_2, \cdots, \xi_n)$
with smooth coefficients
in $\xi=(\xi_1, \xi')$.
Suppose that $f(\xi)$ is a locally integrable function on $U$
of the form
\[
f(\xi)=
\begin{cases}
F(\xi_1^{\frac 1 2}, \xi')
\quad
&\text{for $\xi_1>0$,}
\\
0
\quad
&\text{for $\xi_1\le 0$, }
\end{cases}
\]
for some smooth function $F$.
Then the distribution $P(\xi_1 f)$ is a continuous function in $U$.
Furthermore,
$f$ is a weak solution to $P f=0$
only when $P(\xi_1 f)|_{\xi_1=0} \equiv 0$.
\end{lemma}
\begin{proof}
The first assertion is clear.
Moreover we have
$
P(\xi_1 f)|_{\xi_1=0}= P_1 F(0,\xi').
$
For the second assertion,
we observe
that $f$ is a smooth function
on $U^{\operatorname{reg}}:= U \setminus \{\xi_1 = 0\}$.
Hence,
in order to show $P f \ne 0$
in the distribution sense,
it suffices to show
that $P f$ does not belong to $L_{\operatorname{loc}}^1(U)$
when $P_1 F(0,\xi') \not \equiv 0$.
We introduce a locally integrable function $\widetilde f$ on $U$
by
\[
\widetilde f(\xi):=
\begin{cases}
F(0, \xi')
\quad
&\text{for $\xi_1>0$,}
\\
0
\quad
&\text{for $\xi_1\le 0$.}
\end{cases}
\]
Clearly,
the distribution
\[
\frac{\partial}{\partial \xi_1} P_1 \widetilde f
=
\delta(\xi_1) P_1 F(0,\xi')
\]
is not locally integrable
unless $P_1 F (0,\xi') \equiv 0$.
Since $(P-\frac{\partial}{\partial \xi_1} P_1)f \in L_{\operatorname{loc}}^1(U)$
and $\frac{\partial}{\partial \xi_1} P_1 (f-\widetilde f)\in L_{\operatorname{loc}}^1(U)$,
we conclude
that $P f \not \in L_{\operatorname{loc}}^1(U)$.
Thus the lemma is proved.
\end{proof}
\section{Further analysis of the branching laws}
\label{sec:comments}
In this section we discuss further analytic aspects
of the branching laws of the restriction $\Pi|_{G'}$
of a discrete series representation $\Pi \in \operatorname{Disc}(G/H)$
($\subset \widehat G$),
see Section \ref{subsec:Pidisc} for notation.
\subsection{Generalities: discrete part of unitary representations}
\label{subsec:Pidisc}
Any unitary representation $\pi$
of a reductive Lie group $L$
has a unique irreducible decomposition:
\begin{equation}
\label{eqn:directint}
\pi \simeq \int_{\widehat L} n_{\pi}(\sigma) \sigma \, d \mu (\sigma)
\qquad
\text{(direct integral)},
\end{equation}
where $d \mu$ is a Borel measure
on the unitary dual $\widehat L$,
and $n_{\pi} \colon \widehat L \to {\mathbb{N}} \cup \{\infty\}$
is a measurable function
({\it{multiplicity}}).
In what follows,
we use the same letter
to denote a representation space
with the representation.
Then the Hilbert direct sum
\[
\pi_{\operatorname{disc}} := \Hsum{\sigma \in \widehat L} \Hom_L(\sigma, \pi) \otimes \sigma
\]
is identified with the maximal closed $L$-submodule of $\pi$
which is discretely decomposable.
We say that the unitary representation $\pi_{\operatorname{disc}}$
is the {\it{discrete part}}
of the unitary representation $\pi$,
and its orthogonal complement $\pi_{\operatorname{cont}}$ in $\pi$
is the {\it{continuous part}} of $\pi$.
The unitary representation $\pi$ is discretely decomposable
if $\pi = \pi_{\operatorname{disc}}$,
whereas $\pi=\pi_{\operatorname{cont}}$
({\it{i.e.}},
$\pi_{\operatorname{disc}}=\{0\}$)
means
that the irreducible decomposition \eqref{eqn:directint} does not
contain any discrete spectrum.
The irreducible decomposition \eqref{eqn:directint} is called
the {\it{Plancherel formula}}
when $\pi$ is the regular representation on $L^2(X)$
where $X$ is an $L$-space
with invariant measure;
it is called the
{\it{branching law}}
when $\pi$ is the restriction $\Pi|_L$
of a unitary representation $\Pi$ of a group $G$
containing $L$
as a subgroup.
The support
$
\{\sigma \in \widehat L: \Hom_L(\sigma, \pi) \ne \{0\}\}
$
will be denoted by
\begin{alignat*}{3}
&\operatorname{Disc}(X)\,\,
&&(\subset \widehat G)
\quad
&&\text{when $L=G$
and $\pi$ is the regular representation $L^2(X)$;}
\\
&\operatorname{Disc}(\Pi|_{G'})\,\,
&&(\subset \widehat {G'})
\quad
&&\text{when $L=G'$ and $\pi$ is the restriction $\Pi|_{G'}$. }
\end{alignat*}
We consider the restriction of $\Pi \in \operatorname{Disc}(G/H)$
$(\subset \widehat G)$
to the subgroup $G'$.
The unitary representation $\Pi|_{G'}$ of the subgroup $G'$ splits
into the discrete and continuous parts:
\[
\Pi|_{G'} = (\Pi|_{G'})_{\operatorname{disc}} \oplus (\Pi|_{G'})_{\operatorname{cont}}.
\]
We ask
\begin{question}
\label{q:Disc}
Let $H$, $G'$ be reductive subgroups of $G$
and $\Pi \in \Disc{G/H}$.
\begin{enumerate}
\item[{\rm{(1)}}]
When does $(\Pi|_{G'})_{\operatorname{disc}} =\{0\}$ hold?
\item[{\rm{(2)}}]
When does $\#(\operatorname{Disc}(\Pi|_{G'}))<\infty$ hold?
\item[{\rm{(3)}}]
When does $(\Pi|_{G'})_{\operatorname{cont}} =\{0\}$ hold?
\end{enumerate}
\end{question}
We note that if $G'=H$
and if $\Pi \in \operatorname{Disc}(G/H)$
then the underlying $({\mathfrak{g}}, K)$-module $\Pi_K$
is never discretely decomposable
as a $({\mathfrak{g}}', K')$-module,
see \cite[Thm.~6.2]{xkInvent98}.
\subsection{Criteria
for $(\Pi|_{G'})_{\operatorname{disc}}=\{0\}$
and $(\Pi|_{G'})_{\operatorname{cont}}=\{0\}$
}
We retain the previous setting
where
\[\text{$G/H=O(p,q)/O(p-1,q) =X(p,q)$
and
$
G'=O(p',q') \times O(p'',q'').
$}
\]
{}From now on,
we assume
\begin{equation}
\label{eqn:pqbasic}
p=p'+p'' \ge 2, \, \, q=q'+q'' \ge 1, \,\,
\text{$(p',q') \ne (0,0)$ and $(p'',q'') \ne (0,0)$.}
\end{equation}
Then Proposition \ref{prop:discX} and Theorem \ref{thm:2002}
may be restated as:
\begin{align*}
\operatorname{Disc}(G/H)
=&
\{\pi_{+,\lambda}^{p,q}:
\lambda \in A_+(p,q)\},
\\
\operatorname{Disc}(\pi_{+,\lambda}^{p,q}|_{G'})
=&
\bigcup_{\delta, \varepsilon}
\{\pi_{\delta,\lambda'}^{p',q'} \boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''}:
(\lambda', \lambda'') \in \Lambda_{\delta\varepsilon}(\lambda)\}.
\end{align*}
In particular
$\operatorname{Disc}(G/H) \ne \emptyset$.
Here are answers to Question \ref{q:Disc} (1)--(3):
\begin{theorem}
[purely continuous spectrum]
\label{thm:conti}
The following two conditions on $(p',p'',q',q'')$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
$\operatorname{Disc}(\Pi|_{G'}) = \emptyset$
for any $\Pi \in \operatorname{Disc}(G/H)$;
\item[{\rm{(ii)}}]
$(p',p'')=(1,1)$,
$(p',q')=(1,1)$, or $(p'',q'')=(1,1)$.
\end{enumerate}
\end{theorem}
As a weaker property than Theorem \ref{thm:conti},
we have:
\begin{theorem}
[at most finitely many discrete summands]
\label{thm:191419}
The following three conditions on $(p',p'',q',q'')$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
$\# \operatorname{Disc}(\Pi|_{G'}) < \infty$
for any $\Pi \in \operatorname{Disc}(G/H)$;
\item[{\rm{(ii)}}]
$\# \operatorname{Disc}(\Pi|_{G'}) < \infty$
for some $\Pi \in \operatorname{Disc}(G/H)$;
\item[{\rm{(iii)}}]
$p' p''> 0$,
$\operatorname{min}(p'',q') \le 1$
and $\operatorname{min}(p',q'') \le 1$.
\end{enumerate}
\end{theorem}
As an opposite extremal case to Theorem \ref{thm:conti},
we have:
\begin{theorem}
[discretely decomposable restriction]
\label{thm:discdeco}
The following three conditions on $(p',p'',q',q'')$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
The restriction $\Pi|_{G'}$ is discretely decomposable
for any $\Pi \in \operatorname{Disc}(G/H)$;
\item[{\rm{(ii)}}]
The restriction $\Pi|_{G'}$ is discretely decomposable
for some $\Pi \in \operatorname{Disc}(G/H)$;
\item[{\rm{(iii)}}]
$p'=0$ or $p''=0$.
\end{enumerate}
\end{theorem}
For a unitary representation $\Pi$ of $G$,
the space $\Pi^{\infty}$ of smooth vectors
(as a representation of $G$)
is smaller in general than the space
$(\Pi|_{G'})^{\infty}$ of smooth vectors
as a representation of the subgroup $G'$.
This difference detects discrete decomposability of the restriction $\Pi|_{G'}$ as follows.
\begin{corollary}
\label{cor:discdeco}
Let $\Pi \in \Disc{G/H}$.
Then the following two conditions are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
The restriction $\Pi|_{G'}$ contains continuous spectrum
in the branching law;
\item[{\rm{(ii)}}]
There does not exist a closed $G'$-irreducible submodule $W$ in $\Pi$
such that $W \cap \Pi^{\infty} \ne \{0\}$.
\end{enumerate}
\end{corollary}
\subsection{Proof of Theorem \ref{thm:191419}: finitely many summands}
We begin with the proof of Theorem \ref{thm:191419}.
\begin{lemma}
\label{lem:191414}
In the setting \eqref{eqn:pqbasic},
the following three conditions on $(p',p'',q',q'')$
and $\lambda \in A_+(p,q)$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
$\Lambda_{\apm} (\lambda) \ne \emptyset$;
\item[{\rm{(ii)}}]
$\# \Lambda_{\apm} (\lambda) = \infty$;
\item[{\rm{(iii)}}]
$p'' =0$ or \lq\lq{$p' \ge 2$ and $q'' \ge 2$}\rq\rq.
\end{enumerate}
\end{lemma}
\begin{proof}
Direct from the definition of
$\Lambda_{\apm} (\lambda)$ in Section \ref{sec:Intro}.
\end{proof}
We note that the conditions (i) and (ii) in Lemma \ref{lem:191414}
do not depend on the choice of $\lambda \in A_+(p,q)$.
An analogous result holds for $\Lambda_{\amp}(\lambda)$
by switching the role of $(p',q')$ and $(p'',q'')$.
Hence we have:
\begin{lemma}
\label{lem:191417}
The following three conditions on $(p',p'', q',q'')$
and $\lambda \in A_+(p,q)$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
$\Lambda_{\amp} (\lambda) \cup \Lambda_{\apm} (\lambda) \ne \emptyset$;
\item[{\rm{(ii)}}]
$\# (\Lambda_{\amp} (\lambda) \cup \Lambda_{\apm} (\lambda))= \infty$;
\item[{\rm{(iii)}}]
$p' p'' =0$, $\operatorname{min}(p'',q')\ge 2$,
or $\operatorname{min}(p',q'')\ge 2$.
\end{enumerate}
\end{lemma}
Since $\# \Lambda_{\app} (\lambda)< \infty$ for any $\lambda$,
Theorem \ref{thm:191419} follows immediately from Lemma \ref{lem:191417}.
\subsection{Nonexistence condition of discrete spectrum: proof of Theorem \ref{thm:conti}}
In this section,
we discuss when the restriction $\Pi|_{G'}$ decomposes into
continuous spectrum,
and give a proof of Theorem \ref{thm:conti}.
We begin with the following observation
on elementary combinatorics:
\begin{lemma}
\label{lem:Aempty}
The condition (ii) in Theorem \ref{thm:conti} is equivalent to the condition:
\[
A_{\delta}(p',q') \times A_{\varepsilon}(p'',q'') =\emptyset
\quad
\text{for $(\delta, \varepsilon)=(-,+)$, $(+,+)$ and $(+,-)$}.
\]
\end{lemma}
\begin{proof}
Clear from the definitions \eqref{eqn:A+} and \eqref{eqn:A-}
of $A_{\pm}(p,q)$.
\end{proof}
Thus the implication (ii) $\Rightarrow$ (i) in Theorem \ref{thm:conti}
follows readily from Theorem \ref{thm:2002}
and Lemma \ref{lem:Aempty}.
In order to prove the opposite implication,
we need another piece of elementary combinatorics, stated below;
its proof is direct from the definition of $\Lambda_{\app}(\lambda)$.
\begin{lemma}
\label{lem:191411}
In the setting \eqref{eqn:pqbasic},
assume further that
$p', p'' \ge 2$.
Then for $\lambda \in A_+(p,q)$,
we have the following:
\begin{enumerate}
\item[{\rm{(1)}}]
$\Lambda_{\app} (\lambda)=\emptyset$
\quad
if $\lambda < 2$ or if \lq\lq{$\lambda =2$ and $p' \equiv q' \mod 2$}\rq\rq;
\item[{\rm{(2)}}]
$\Lambda_{\app}(\lambda) \ne \emptyset$
\quad
if $\lambda>2$
or if \lq\lq{$\lambda=2$ and $p' \not \equiv q' \mod 2$}\rq\rq.
\end{enumerate}
\end{lemma}
We are ready to complete the proof
of Theorem \ref{thm:conti}.
\begin{proof}
[Proof of the implication (i) $\Rightarrow$ (ii) in Theorem \ref{thm:conti}]
Suppose that
$
\operatorname{Disc}(\Pi|_{G'}) =\emptyset
$
for any $\Pi \in \operatorname{Disc}(G/H)$.
Then Theorem \ref{thm:191419} tells us
\begin{equation}
\label{eqn:thm72}
p' p'' >0, \,\,
\operatorname{min}(p'',q')\le 1, \,\,
\text{and $\operatorname{min}(p',q'')\le 1$}.
\end{equation}
On the other hand,
it follows from Lemma \ref{lem:191411} (2)
that $\Lambda_{\app}(\lambda) \ne \emptyset$
for $\lambda >2$
if $\operatorname{min}(p',p'')\ge 2$.
Hence we get $\min(p',p'') \le 1$.
Without loss of generality,
we may and do assume $p'=1$.
In turn,
the condition \eqref{eqn:thm72} implies
\[
(p', p'') = (1,1), \,
(p',q') =(1,0), \,
\text{ or }
(p',q') =(1,1).
\]
As we saw in Example \ref{ex:GP},
$\operatorname{Disc}(\pi_{+,\lambda}^{p,q}|_{G'}) \ne \emptyset$
for any $\lambda \in A_+(p,q)$ with $\lambda \ge 1$
if $(p', q')=(1,0)$.
Hence $(p', q')\ne(1,0)$.
Thus the implication (i) $\Rightarrow$ (ii)
in Theorem \ref{thm:conti} is proved.
\end{proof}
\subsection{Proof of Theorem \ref{thm:discdeco}
and Corollary \ref{cor:discdeco}}
\label{subsec:algdeco}
In the category of \gk-modules,
analogous results to Theorem \ref{thm:discdeco}
and Corollary \ref{cor:discdeco} are known in a general setting,
which we now recall:
\begin{proposition}
\label{prop:discdeco}
Let $(G,G')$ be a reductive symmetric pair.
Let $\Pi \in \widehat G$
be such that the underlying \gk-module $\Pi_K$
is a Zuckerman derived functor module
$A_{\mathfrak{q}}(\lambda)$.
Then the following four conditions are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
$\Pi_K$ is discretely decomposable as a $({\mathfrak{g}}',K')$-module
(\cite[Def.~1.1]{xkInvent98}).
\item[{\rm{(ii)}}]
$\Pi_K$ is $K'$-admissible,
namely,
$\dim_{\mathbb{C}} \Hom_{K'}(\tau, \Pi_K)<\infty$
for any $\tau \in \widehat{K'}$.
\item[{\rm{(iii)}}]
There exists a $G'$-irreducible closed subspace $\pi$ of $\Pi$
such that $\pi \cap \Pi_K \ne \{0\}$.
\item[{\rm{(iv)}}]
There exists a $G'$-irreducible closed subspace $\pi$ of $\Pi$
such that $\pi \cap \Pi_K$ is dense in the Hilbert space $\pi$.
\end{enumerate}
\end{proposition}
\begin{proof}
The equivalence (i) $\Leftrightarrow$ (ii) is proved
in \cite[Thm.~4.2]{xkInvent98}.
The equivalence (i) $\Leftrightarrow$ (iii) $\Leftrightarrow$ (iv)
follows from \cite[Lem.~1.5]{xkInvent98}.
\end{proof}
The equivalences (i) $\Leftrightarrow$ (iii) $\Leftrightarrow$ (iv) hold
without the assumption
$\Pi_K \simeq A_{\mathfrak{q}}(\lambda)$.
See also \cite{KOY15, K19}.
Back to our setting,
we know from the classification theory \cite{decoAq}:
\begin{lemma}
\label{lem:K'adm}
The following three conditions on $(p', p'', q',q'')$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
$\Pi_K$ is discretely decomposable as a $({\mathfrak{g}}',K')$-module
for any $\Pi \in \Disc{G/H}$;
\item[{\rm{(ii)}}]
$\Pi_K$ is discretely decomposable as a $({\mathfrak{g}}',K')$-module
for some $\Pi \in \Disc{G/H}$;
\item[{\rm{(iii)}}]
$p'=0$ or $p''=0$.
\end{enumerate}
\end{lemma}
Since discrete decomposability in the category of \gk-modules
implies the discrete decomposability
of the unitary representation,
the implication (iii) $\Rightarrow$ (i) ($\Rightarrow$ (ii))
in Theorem \ref{thm:discdeco} follows from Lemma \ref{lem:K'adm}.
To prove the converse implication (ii) $\Rightarrow$ (iii) in Theorem \ref{thm:discdeco},
the following lemma is crucial.
\begin{lemma}
\label{lem:Xiadm}
Let $G/H=O(p,q)/O(p-1,q)$ $(=X(p,q))$.
Then the direct sum $\bigoplus_{\Pi \in \Disc{G/H}} \Pi$
is $K$-admissible.
\end{lemma}
\begin{proof}
This follows from the classification of $\Disc{G/H}$
in Proposition \ref{prop:discX}
and from the $K$-type formula of $\Pi$
as seen in the condition (iii) of Definition-Theorem \ref{def:pilmd}.
\end{proof}
Combining Lemma \ref{lem:Xiadm} with Theorem \ref{thm:2002},
we have
\begin{proposition}
\label{prop:Discadm}
For any $\Pi \in \Disc{G/H}$,
$(\Pi|_{G'})_{\operatorname{disc}}$ is $K'$-admissible.
\end{proposition}
We are ready to complete the proof of Theorem \ref{thm:discdeco}.
\begin{proof}
[Proof of the implication (ii) $\Rightarrow$ (iii) in Theorem \ref{thm:discdeco}]
Suppose that the restriction $\Pi|_{G'}$ is discretely decomposable
as a unitary representation of the subgroup $G'$,
{\it{i.e.,}}
$\Pi|_{G'}=(\Pi|_{G'})_{\operatorname{disc}}$.
Then $\Pi$ is $K'$-admissible by Proposition \ref{prop:Discadm},
and so is the underlying \gk-module $\Pi_K$.
Hence $p'=0$ or $p''=0$ by Lemma \ref{lem:K'adm}.
Thus Theorem \ref{thm:discdeco} is proved.
\end{proof}
\begin{proof}
[Proof of Corollary \ref{cor:discdeco}]
By Theorem \ref{thm:discdeco},
the condition (i) in Corollary \ref{cor:discdeco}
is equivalent to the following:
\newline
(i)$'$\enspace
$p' p'' \ne 0$,
\newline
whereas the condition (ii) is clearly equivalent to
\newline
(ii)$'$\enspace
For any $\pi \in \widehat {G'}$
and any $\iota \in \Hom_{G'}(\pi, \Pi|_{G'})$,
$\iota(\pi) \cap \Pi^{\infty} = \{0\}$.
\newline
Let us prove the equivalence (i)$'$ $\Leftrightarrow$ (ii)$'$.
\newline
{(ii)$'$ $\Rightarrow$ (i)$'$:}\enspace
Suppose $p'p''=0$.
Then $\Pi_K$ is discretely decomposable as a $({\mathfrak{g}}',K')$-module
by Lemma \ref{lem:K'adm},
and thus $\iota(\pi) \cap \Pi_K \ne \{0\}$
for some $\pi \in \widehat{G'}$ and $\iota \in \Hom_{G'}(\pi, \Pi|_{G'})$
by Proposition \ref{prop:discdeco},
whence $\iota(\pi) \cap \Pi^{\infty}\ne \{0\}$
because $\Pi_K \subset \Pi^{\infty}$.
\newline
{(i)$'$ $\Rightarrow$ (ii)$'$}:\enspace
Conversely,
suppose $\iota\colon \pi \to \Pi|_{G'}$ is a nonzero continuous $G'$-homomorphism
for some $\pi \in \widehat{G'}$.
Then $\pi$ must be of the form
$\pi_{\delta,\lambda'}^{p',q'}
\boxtimes \pi_{\varepsilon,\lambda''}^{p'',q''}$
for some $(\delta,\varepsilon)$ and $(\lambda',\lambda'')$,
and $\iota$ must be a scalar multiple
of $T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''}$
by Theorems \ref{thm:2002} and \ref{thm:holographic}.
If $p' p'' \ne 0$,
then it follows from the definition of $X(p,q)_{\delta\varepsilon}$
in Section \ref{subsec:orbits} that at least two of the open sets
$X(p,q)_{\amp}$, $X(p,q)_{\app}$,
$X(p,q)_{\apm}$ are nonempty,
and thus
$
\operatorname{Image}
T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''}
\cap C^{\infty}(X(p,q)) =\{0\}
$
by the definition of $T_{\delta\varepsilon, \lambda}^{\lambda',\lambda''}$
in Section \ref{subsec:ho}.
Since $\Pi^{\infty} \subset C^{\infty}(X(p,q))$,
this shows that $\iota(\pi) \cap \Pi^{\infty} =\{0\}$.
Therefore,
we have shown the implication (i)$'$ $\Rightarrow$ (ii)$'$.
\end{proof}
\section{Appendix --- Multiplicity in branching laws}
As discussed in \cite{xKVogan2015},
we divide branching problems into the following three stages:
\par\indent
\text{Stage A}:\enspace
Abstract features of the restriction;
\par\indent
\text{Stage B}:\enspace
Branching laws
(irreducible decomposition of restrictions);
\par\indent
\text{Stage C}:\enspace
Construction of symmetry breaking/holographic operators.
\par
The role of Stage A is to develop
an abstract theory on the restriction
of representations
as generally as possible.
In turn,
we could expect a detailed study of the restriction
in Stages B and C
in the specific settings
that are {\it{a priori}} guaranteed
to be \lq\lq{nice}\rq\rq\
in Stage A.
Conversely,
new results and methods in Stage C
may indicate a further fruitful direction
of branching problems
including Stage A.
The present article has focused on analytic problems
in Stages B and C
in the setting where the triple $H \subset G \supset G'$ is given by
\begin{equation}
\label{eqn:Opq3}
(G,H,G')=(O(p,q), O(p-1,q), O(p',q')\times O(p-p',q-q')).
\end{equation}
Then one might wonder what abstract features (Stage A)
have arisen from this article,
and also might be curious about a possible generalization
beyond the setting \eqref{eqn:Opq3}.
The spectral property
of the branching laws is such an aspect,
which we discussed in Section \ref{sec:comments}.
Another aspect of Theorem \ref{thm:2002} is
the multiplicity-free property:
\begin{equation}
\label{eqn:mult-one}
m_{\Pi}(\pi) \le 1
\qquad
\text{
${}^{\forall} \pi \in \widehat {G'}$
and
${}^{\forall} \Pi \in \operatorname{Disc}(G/H)$.
}
\end{equation}
Here,
for $\Pi \in \widehat G$,
the multiplicity $m_{\Pi}(\pi)$ of $\pi \in \widehat {G'}$
as the {\it{discrete spectrum}}
of the (unitary) restriction $\Pi|_{G'}$
is defined by
\[
m_{\Pi}(\pi):= \dim_{\mathbb{C}} \invHom {G'}{\pi}{\Pi|_{G'}}
=\dim_{\mathbb{C}}\invHom {G'}{\Pi|_{G'}}{\pi}
\in {\mathbb{N}} \cup \{\infty\}.
\]
In this Appendix,
we give a flavor of some multiplicity estimates (Stage A)
in a broader setting
than \eqref{eqn:Opq3},
for instance,
when
\begin{equation}
\label{eqn:triplesymm}
\text{both $(G,H)$ and $(G,G')$ are reductive symmetric pairs.}
\end{equation}
In what follows,
we treat not only discrete series representations
$\Pi \in \operatorname{Disc}(G/H)$
but also non-unitary representations
that have a non-trivial
$H$-period ({\it{i.e.}}, are $H$-distinguished).
We recall that
there is a canonical equivalence
of categories
between the category ${\mathcal{H C}}$
of $({\mathfrak{g}}, K)$-modules
of finite length and the category ${\mathcal{M}}$
of smooth admissible representations
of moderate growth
by the Casselman--Wallach globalization theory
\cite[Chap.~11]{WaI}.
Denote by $\operatorname{Irr}(G)$
the set of irreducible objects
in ${\mathcal{M}}$.
The unitary dual $\widehat G$ may be thought
of as a subset of $\operatorname{Irr}(G)$
by taking smooth vectors:
\begin{equation}
\label{eqn:Piinfty}
\widehat G \hookrightarrow \operatorname{Irr}(G),
\qquad
\Pi \mapsto \Pi^{\infty}.
\end{equation}
For $\Pi^{\infty} \in \operatorname{Irr}(G)$
and $\pi^{\infty} \in \operatorname{Irr}(G')$,
we set
\[
m_{\Pi^{\infty}}(\pi^{\infty})
:= \dim_{\mathbb{C}} \invHom {G'}{\Pi^{\infty}|_{G'}}{\pi^{\infty}}.
\]
In general,
for any $\Pi \in \widehat G$,
one has
$m_{\Pi}(\pi) \le m_{\Pi^{\infty}}(\pi^{\infty})$
for all $\pi \in \widehat {G'}$,
and
$m_{\Pi}(\pi) \le n_{\Pi}(\pi) \le m_{\Pi^{\infty}}(\pi^{\infty})$
a.e.~$\pi \in \widehat {G'}$
with respect to the measure
for the disintegration \eqref{eqn:directint}
of the (unitary) restriction $\Pi|_{G'}$,
where we recall
$n_{\Pi} \colon \widehat {G'} \to {\mathbb{N}} \cup \{\infty\}$
is the measurable function
which gives the multiplicity
in \eqref{eqn:directint}.
For a closed subgroup $H$ of $G$,
we define
\begin{equation*}
\operatorname{Irr}(G)_H
:=\{\Pi^{\infty} \in \operatorname{Irr}(G)
:(\Pi^{-\infty})^H \ne \{0\}
\},
\end{equation*}
where $\Pi^{-\infty}$ denotes the representation
on the space
of distribution vectors.
Then $\operatorname{Disc}(G/H)$ may be thought of
as a subset of $\operatorname{Irr}(G)_H$
via \eqref{eqn:Piinfty}.
Now we address the following:
\begin{prob}
\label{q:bdd}
Find a criterion for a triple $H \subset G \supset G'$
with the bounded multiplicity property
for the restriction:
there exists $C>0$
such that
\begin{equation}
\label{eqn:BBH}
m_{\Pi^{\infty}}(\pi^{\infty}) \le C
\qquad
\text{${}^{\forall} \pi^{\infty} \in \operatorname{Irr}(G')$
and ${}^{\forall} \Pi^{\infty} \in \operatorname{Irr}(G)_H$}.
\end{equation}
\end{prob}
Note that the condition \eqref{eqn:BBH} immediately implies
\begin{equation}
\label{eqn:BBDisc}
m_{\Pi}(\pi) \le C
\qquad
\text{
${}^{\forall}\pi \in \widehat {G'}$
and
${}^{\forall}\Pi \in \operatorname{Disc}(G/H)$.
}
\end{equation}
We also note that \eqref{eqn:mult-one} is nothing but \eqref{eqn:BBDisc}
with $C=1$.
We recall some general results in the setting
where $H=\{e\}$ from \cite[Thms.~C and D]{xktoshima}
and \cite[Thm.~4.2]{xkInvent98}
(see also Proposition \ref{prop:discdeco}):
{\bf{Bounded multiplicity:}}\enspace
$(G_{\mathbb{C}} \times G_{\mathbb{C}}')/\operatorname{diag} G_{\mathbb{C}}'$ is spherical
iff
\begin{equation}
\label{eqn:BB}
\text{${}^{\exists}C>0
\quad
m_{\Pi^{\infty}}(\pi^{\infty}) \le C
$
\quad
${}^{\forall}\pi^{\infty} \in \operatorname{Irr}(G')$
and ${}^{\forall}\Pi^{\infty} \in \operatorname{Irr}(G)$. }
\end{equation}
{\bf{Finite multiplicity:}}\enspace
$(G \times G')/\operatorname{diag} G'$ is real spherical
iff
\begin{equation}
\label{eqn:PP}
\text{
$m_{\Pi^{\infty}}(\pi^{\infty}) < \infty$
\quad
${}^{\forall} \pi^{\infty} \in \operatorname{Irr}(G')$
and
${}^{\forall}\Pi^{\infty} \in \operatorname{Irr}(G)$.
}
\end{equation}
{\bf{Admissible restriction:}}\enspace
If $\Pi_K$ is discretely decomposable
as a $({\mathfrak{g}}',K')$-module
and if $(G,G')$ is a symmetric pair,
then
\begin{equation}
\label{eqn:Wconj}
\text{$m_{\Pi}(\pi)=m_{\Pi^{\infty}}(\pi^{\infty})<\infty$
for all $\pi \in \widehat{G'}$. }
\end{equation}
(This generalizes Harish-Chandra's admissibility theorem
for compact $G'$.)
In these cases,
explicit criteria lead us to the classification theory.
The criterion \cite{xktoshima} for \eqref{eqn:BB} depends
only on the complexification $({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')$,
hence the classification for \eqref{eqn:BB}
with ${\mathfrak{g}}_{\mathbb{C}}$ simple
reduces to a classical result \cite{xkramer}:
\begin{equation}
\label{eqn:BBlist}
({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')
=
({\mathfrak{sl}}_n, {\mathfrak{gl}}_{n-1}),
({\mathfrak{so}}_{n}, {\mathfrak{so}}_{n-1}),
\text{ or }
({\mathfrak{so}}_8, {\mathfrak{spin}}_7).
\end{equation}
In this case,
one can take $C=1$
for most of the real forms
\cite{xsunzhu}.
On the other hand,
irreducible symmetric pairs
$({\mathfrak{g}}, {\mathfrak{g}}')$ satisfying \eqref{eqn:PP}
were classified in \cite{xKMt}.
The triples $(A_{\mathfrak{q}}(\lambda), {\mathfrak{g}}, {\mathfrak{g}}')$
having discretely decomposable restrictions
$A_{\mathfrak{q}}(\lambda)|_{{\mathfrak{g}}'}$ were classified
in \cite{decoAq}.
We now consider the setting \eqref{eqn:triplesymm}.
In this generality,
\eqref{eqn:BBDisc} may fail.
The following example is a reformulation
of \cite[Ex.~5.5]{xkAdv00}
(cf. \cite[Sect.~6.3]{mf-korea}).
\begin{example}
\label{ex:sp2C}
$(G,H,G')=(SO(5,{\mathbb{C}}),SO(3,2), SO(3,2))$.
Then for any $\Pi \in \operatorname{Disc}(G/H)$
there exists $\pi \in \widehat {G'}$
such that $m_{\Pi}(\pi)=\infty$.
(In this case,
the disintegration of $\Pi|_{G'}$ contains
continuous spectrum,
see \eqref{eqn:Wconj}.)
\end{example}
As we shall see in Observation \ref{obs:0.6} (1) below,
the bounded multiplicity property \eqref{eqn:BBH} often holds
if $\operatorname{rank}G/H=1$,
but not always:
\begin{example}
\label{ex:SU3}
Let $(G,H,G')=(SU(3),U(2), SO(3))$.
Then \eqref{eqn:BBDisc} fails
because $m_{\Pi_n}(\pi_n)=[\frac n 2]+1$
where $\Pi_n \in \operatorname{Disc}(G/H)$
and $\pi_n \in \widehat{G'}$
are of dimensions $(n+1)^3$ and $2n+1$,
respectively.
\end{example}
\begin{example}
\label{ex:SL3}
Let $(G,H,G')=(SL(3,{\mathbb{R}}),GL(2,{\mathbb{R}}), SO(3))$.
Then \eqref{eqn:BBDisc} fails
because $\sup_{\pi \in \widehat {G'}} m_{\Pi}(\pi)=\infty$
for any $\Pi \in \operatorname{Disc}(G/H)$.
\end{example}
The last example may be compared with the following:
\begin{example}
Let $(G,H,G')=(S L(4,{\mathbb{R}}), S p(2,{\mathbb{R}}), S O(4))$.
Then \eqref{eqn:BBDisc} holds
because $\sup_{\pi \in \widehat{G'}} m_{\Pi}(\pi) =1$
for any $\Pi \in \operatorname{Disc}(G/H)$.
\end{example}
To describe an answer to Problem \ref{q:bdd} (Stage A)
which covers not only discrete series representations
$\Pi \in \operatorname{Disc}(G/H)$
but also any irreducible representations $\Pi^{\infty}$
realized in $C^{\infty}(G/H)$,
we fix some notation.
Denote by $\sigma$ the involution of $G$
that defines a symmetric pair $(G,H)$.
We use the same letter $\sigma$
to denote the complex linear extension of its differential.
We write $G_{\mathbb{C}}$
for a complexification of $G$,
and $G_U$ for a compact real form of $G_{\mathbb{C}}$.
Let ${\mathfrak{j}}_{\mathbb{C}}$ be a maximal semisimple abelian
subspace
in ${\mathfrak{g}}_{\mathbb{C}}^{-\sigma}=\{X \in {\mathfrak{g}}_{\mathbb{C}}:\sigma X=-X\}$,
and $Q_{\mathbb{C}}$ a parabolic subgroup
of $G_{\mathbb{C}}$
with Levi part $Z_{G_{\mathbb{C}}}({\mathfrak{j}}_{\mathbb{C}})$.
\begin{theorem}
\label{thm:bdd}
Suppose that $(G,H)$ is a reductive symmetric pair,
and $G'$ an (algebraic) reductive subgroup of $G$.
Then the following three conditions on the triple $(G,H,G')$
are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
${}^{\exists}C>0$,
$m_{\Pi^{\infty}}(\pi^{\infty}) \le C$
\quad
${}^{\forall}\Pi^{\infty} \in \operatorname{Irr}(G)_H$
and ${}^{\forall}\pi^{\infty} \in \operatorname{Irr}(G')$.
\item[{\rm{(ii)}}]
$G_{\mathbb{C}}/Q_{\mathbb{C}}$ is $G_{\mathbb{C}}'$-spherical.
\item[{\rm{(iii)}}]
$G_{\mathbb{C}}/Q_{\mathbb{C}}$ is $G_U'$-strongly visible.
\end{enumerate}
\end{theorem}
See \cite{tanaka}
(see also \cite[Cor.~15]{xrims40})
for the equivalence (ii) $\Leftrightarrow$ (iii).
\begin{remark}
The multiplicity-freeness
\eqref{eqn:mult-one} holds for compact forms.
\end{remark}
It should be mentioned that the bounded multiplicity property (i)
depends {\it{a priori}}
on the real form $(G,H,G')$;
however,
Theorem \ref{thm:bdd} tells us that its criterion (ii)
(or equivalently (iii))
can be stated purely in terms of the complexification
of the Lie algebras $({\mathfrak{g}}, {\mathfrak{h}}, {\mathfrak{g}}')$.
Here is a complete classification
of such triples $({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{h}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')$
when ${\mathfrak{g}}_{\mathbb{C}}$ is simple:
\begin{corollary}
[classification]
\label{cor:bdd}
Assume ${\mathfrak{g}}_{\mathbb{C}}$ is simple in the setting \eqref{eqn:triplesymm}.
Then the bounded multiplicity property \eqref{eqn:BBH}
holds for the triple $(G,H,G')$
iff the complexified Lie algebras
$({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{h}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')$
are in Table \ref{tab:0.1}
up to automorphisms.
In the table,
$p$, $q$ are arbitrary subject to $n=p+q$.
\begin{table}[H]
\begin{minipage}[t]{.45\textwidth}
\begin{tabular}[t]{ccc}
${\mathfrak{g}}_{\mathbb{C}}$
&${\mathfrak{h}}_{\mathbb{C}}$
&${\mathfrak{g}}_{\mathbb{C}}'$
\\
\hline
${\mathfrak{sl}}_n$
&${\mathfrak{gl}}_{n-1}$
&${\mathfrak{sl}}_p \oplus {\mathfrak{sl}}_q \oplus {\mathbb{C}}$
\\
${\mathfrak{sl}}_{2m}$
&${\mathfrak{gl}}_{2m-1}$
&${\mathfrak{sp}}_m$
\\
${\mathfrak{sl}}_{6}$
&${\mathfrak{sp}}_{3}$
&${\mathfrak{sl}}_4 \oplus {\mathfrak{sl}}_2 \oplus {\mathbb{C}}$
\\
${\mathfrak{so}}_n$
&${\mathfrak{so}}_{n-1}$
&${\mathfrak{so}}_p \oplus {\mathfrak{so}}_q$
\\
${\mathfrak{so}}_{2m}$
&${\mathfrak{so}}_{2m-1}$
&${\mathfrak{gl}}_m$
\\
${\mathfrak{so}}_{2m}$
&${\mathfrak{so}}_{2m-2} \oplus {\mathbb{C}}$
&${\mathfrak{gl}}_m$
\\
${\mathfrak{sp}}_n$
&${\mathfrak{sp}}_{n-1} \oplus {\mathfrak{sp}}_1$
&${\mathfrak{sp}}_p \oplus {\mathfrak{sp}}_q$
\\
${\mathfrak{sp}}_n$
&${\mathfrak{sp}}_{n-2} \oplus {\mathfrak{sp}}_2$
&${\mathfrak{sp}}_{n-1} \oplus {\mathfrak{sp}}_1$
\\
${\mathfrak{e}}_6$
&${\mathfrak{f}}_4$
&${\mathfrak{so}}_{10} \oplus {\mathbb{C}}$
\\
${\mathfrak{f}}_4$
&${\mathfrak{so}}_{9}$
&${\mathfrak{so}}_9$
\\
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}[t]{.45\textwidth}
\begin{tabular}[t]{ccc}
${\mathfrak{g}}_{\mathbb{C}}$
&${\mathfrak{h}}_{\mathbb{C}}$
&${\mathfrak{g}}_{\mathbb{C}}'$
\\
\hline
${\mathfrak{sl}}_n$
&${\mathfrak{so}}_{n}$
&${\mathfrak{gl}}_{n-1}$
\\
${\mathfrak{sl}}_{2m}$
&${\mathfrak{sp}}_{m}$
&${\mathfrak{gl}}_{2m-1}$
\\
${\mathfrak{sl}}_n$
&${\mathfrak{sl}}_p \oplus {\mathfrak{sl}}_q \oplus {\mathbb{C}}$
&${\mathfrak{gl}}_{n-1}$
\\
${\mathfrak{so}}_{n}$
&${\mathfrak{so}}_{p} \oplus {\mathfrak{so}}_q$
&${\mathfrak{so}}_{n-1}$
\\
${\mathfrak{so}}_{2m}$
&${\mathfrak{gl}}_{m}$
&${\mathfrak{so}}_{2m-1}$
\\
\end{tabular}
\end{minipage}
\caption{Triples $({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{h}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')$ with ${\mathfrak{g}}_{\mathbb{C}}$ simple in Theorem \ref{thm:bdd}}
\label{tab:0.1}
\hfil
\end{table}
\end{corollary}
Here by \lq\lq{automorphisms}\rq\rq\
we mean inner automorphisms
for $({\mathfrak{g}}, {\mathfrak{h}})$
and $({\mathfrak{g}}, {\mathfrak{g}}')$
separately
and outer automorphisms for $({\mathfrak{g}}, {\mathfrak{h}}, {\mathfrak{g}}')$
simultaneously.
Thus in Table \ref{tab:0.1},
we have omitted some cases such as
$({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')
=
({\mathfrak{s o}}_8, {\mathfrak{spin}}_7)$,
$({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{h}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')=({\mathfrak{s o}}_8, {\mathfrak{g l}}_4, {\mathfrak{s o}}_6 \oplus {\mathfrak{s o}}_2)$
or
$({\mathfrak{s l}}_4, {\mathfrak{s p}}_2, {\mathfrak{s l}}_2 \oplus {\mathfrak{s l}}_2 \oplus {\mathbb{C}})$.
The right-hand side of Table \ref{tab:0.1} collects
the case \eqref{eqn:BBlist},
where a stronger bounded multiplicity theorem \eqref{eqn:BB} holds.
The left-hand side includes:
\begin{example}
\label{ex:opqA}
The setting \eqref{eqn:Opq3} for Theorem \ref{thm:2002} is
a real form of
$
({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{h}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')
=({\mathfrak{s o}}_n, {\mathfrak{s o}}_{n-1}, {\mathfrak{s o}}_p \oplus {\mathfrak{s o}}_q)
$
in the fourth row of the left-hand side in Table \ref{tab:0.1}.
\end{example}
{}From Corollary \ref{cor:bdd},
one sees the following:
\begin{observation}
\label{obs:0.6}
{\rm{(1)}}\enspace
The bounded multiplicity \eqref{eqn:BBH} holds
for any triple
$(G,H,G')$ with $\operatorname{rank}G/H=1$
except for the following two cases:
$
({\mathfrak{g}}_{\mathbb{C}}, {\mathfrak{h}}_{\mathbb{C}}, {\mathfrak{g}}_{\mathbb{C}}')
=({\mathfrak{s l}}_n, {\mathfrak{g l}}_{n-1}, {\mathfrak{s o}}_n)
\text{ or }
({\mathfrak{f}}_4, {\mathfrak{so}}_9,
{\mathfrak{sp}}_3 \oplus {\mathfrak{sl}}_2).
$
\newline
{\rm{(2)}}\enspace
The bounded multiplicity \eqref{eqn:BBH} may hold even
when $\operatorname{rank}G/H >1$ and $\operatorname{rank}G/G' >1$.
\end{observation}
Theorem \ref{thm:bdd} also gives a criterion for two reductive symmetric pairs $(G,H_1)$ and $(G, H_2)$
to satisfy the following bounded multiplicity property
for tensor product representations.
\begin{theorem}
[tensor product]
\label{thm:tensor}
Suppose that $(G,H_j)$ $(j=1,2)$ are reductive symmetric pairs,
and that ${Q_{j}}_{\mathbb{C}}$ are parabolic subgroups
of $G_{\mathbb{C}}$
as in Theorem \ref{thm:bdd}.
Then the following three conditions on the triple
$(G,H_1, H_2)$ are equivalent:
\begin{enumerate}
\item[{\rm{(i)}}]
There exists $C>0$
such that
\begin{equation}
\label{eqn:bddt}
\dim_{\mathbb{C}} \operatorname{Hom}_G(\Pi_1 \otimes \Pi_2, \Pi) \le C
\quad
{}^{\forall}\Pi_j \in \operatorname{Irr}(G)_{H_j}\,(j=1,2)
\text{ and }
{}^{\forall}\Pi \in \operatorname{Irr}(G).
\end{equation}
\item[{\rm{(ii)}}]
$(G_{\mathbb{C}} \times G_{\mathbb{C}})/({Q_{1}}_{\mathbb{C}} \times {Q_{2}}_{\mathbb{C}})$ is $G_{\mathbb{C}}$-spherical via the diagonal action.
\item[{\rm{(iii)}}]
$(G_{\mathbb{C}} \times G_{\mathbb{C}})/({Q_{1}}_{\mathbb{C}} \times {Q_{2}}_{\mathbb{C}})$ is $G_U$-strongly visible via the diagonal action.
\end{enumerate}
\end{theorem}
By the classification of strongly visible actions
\cite{K2007b},
one concludes from Theorem \ref{thm:bdd}
that such examples for groups of type A
are rare:
\begin{example}
[tensor product]
\label{ex:tensorA}
Suppose ${\mathfrak{g}}_{\mathbb{C}}={\mathfrak{sl}}(n,{\mathbb{C}})$.
Then \eqref{eqn:bddt} holds
iff
$({\mathfrak{g}}_{\mathbb{C}}, {{\mathfrak{h}}_1}_{\mathbb{C}},
{{\mathfrak{h}}_2}_{\mathbb{C}})$ is isomorphic
to
$
({\mathfrak{sl}}_2, {\mathfrak{so}}_{2}, {\mathfrak{so}}_{2})
$
or
$
({\mathfrak{sl}}_4, {\mathfrak{sp}}_{2}, {\mathfrak{sp}}_{2}).
$
\end{example}
For groups of type BD, one has:
\begin{example}
[tensor product]
\label{ex:tensorOpq}
Let $G=O(p,q)$,
and $H_1$, $H_2$ be $O(p-1,q)$ or $O(p,q-1)$.
Then \eqref{eqn:bddt} holds.
In particular,
the tensor product $\Pi_{\delta,\lambda}^{p,q} \otimes \Pi_{\varepsilon,\nu}^{p,q}$
decomposes into irreducible unitary representations
with uniformly bounded multiplicities
for any $\delta, \varepsilon \in \{+,-\}$,
$\lambda \in A_{\delta}(p,q)$,
$\nu \in A_{\varepsilon}(p,q)$.
\end{example}
\begin{example}
[tensor product]
\label{ex:SO8}
Let $G=O(2p,2q)$ with $p+q=4$,
$H=O(2p-1,2q)$,
and $G'=U(p,q)$.
Then \eqref{eqn:bddt} holds.
\end{example}
Proofs of the assertions in this Appendix
will be given in another paper.
\section{Introduction}\label{sec:intro}
The Planck mission has established that the cosmic microwave background
anisotropies on small angular scales are well described by the standard
$\Lambda$CDM cosmology with a nearly scale-invariant power-law spectrum
(\citealt{planck2013}). At large scales ($\ell < 40$), however, there appears
an overall deficit of power compared to what is expected in the benchmark
$\Lambda$CDM model (\citealt{planck2013c}). The unusual shape and amplitude of
the power spectrum at low multipoles was first observed by the WMAP mission
(\citealt{spergel2003}; \citealt{hinshaw2003}) and remains unexplained,
although hypotheses include a running spectral index (\citealt{feng2003};
\citealt{bastero2003}; \citealt{chung2003}; \citealt{kawasaki2003};
\citealt{hunt2007}), a breakdown of slow-roll inflation or pre-slow roll phase
(\citealt{peiris2003}; \citealt{contaldi2003}; \citealt{mortonson2009};
\citealt{hazra2014}; \citealt{lello2014}), a contracting pre-inflation phase
(\citealt{piao2004}), open inflation (\citealt{white2014}), a non-Bunch-Davies
initial vacuum state (\citealt{ashoorioon2014}), or the presence of an extra
neutrino species (\citealt{dvorkin2014}; \citealt{anchordoqui2014}).
The tension at low $\ell$ is exacerbated significantly if there exists a
stochastic gravitational wave background with tensor-to-scalar ratio $r \gtrsim
0.1$, since tensor perturbations add to the expected CMB temperature
anisotropies at low multipoles (\citealt{smith2014}). Such a large $r$ has been
suggested by the BICEP2 experiment in its detection of B-mode polarization in
the sky at degree-angular scales (\citealt{bicep2014}). Under the assumption
that the observed B-mode anisotropies are sourced by primordial gravitational
waves, they infer $r_{0.05}=0.2_{-0.05}^{+0.07}$. By comparison, from the $TT$
spectrum alone, \cite{planck2013} inferred $r_{0.05} < 0.135$ at the 95\%
confidence level, in tension with the BICEP2 result. At present, the BICEP2
result is highly uncertain due to the likely presence of contamination by
foreground dust polarization in the observed field-of-view
(\citealt{flauger2014}; \citealt{mortonson2014}; \citealt{planck2014}).
Nevertheless, large-field inflation models---including the simplest chaotic
models---predict at minimum $r \gtrsim 0.01$ (\citealt{lyth1997}) and thus
would increase the apparent tension with $\Lambda$CDM.
The simplest way to accommodate the deficit of power at low $\ell$ is to allow
for a running spectral index, thus departing from a power-law spectrum. When a
constant running of the spectral index $\alpha =dn_s/d\ln k$ is allowed,
\cite{planck2013b} finds $\alpha = -0.011\pm 0.008$ in the case $r=0$, where
the preference for negative running is driven largely by the temperature
likelihood at low multipoles. More significantly, running also allows for a
higher tensor contribution, leading to $r_{0.05} < 0.275$ when both running and
tensors are allowed.
Such a large constant running is difficult to implement in the underlying
inflation model, since it yields an insufficient number of e-foldings to solve
the horizon problem (\citealt{easther2006}). Within the context of single-field
slow-roll inflation, the only way to achieve the 50-60 remaining e-foldings
necessary after the mode $k=0.05$ Mpc$^{-1}$ leaves the horizon is if the
running diminishes or turns positive at larger $k$. Plausible mechanisms exist
for the running to diminish to zero at larger $k$, for example through
radiative corrections (\citealt{ballesteros2008}; \citealt{ballesteros2014}) or
GUT symmetry breaking (\citealt{hazra2014a}); this would, however, imply a
special scale at which the running ``turns over'' and becomes small, a scale
comparable to or just smaller than scales observable in the CMB, which would
seem a remarkable coincidence. Another, perhaps more natural possibility is
that the power spectrum oscillates, implying that the inflaton potential may
contain an oscillatory component. Since it would seem
unnatural for only one such oscillation to occur during the course of
inflation, the intriguing possibility arises that the inflaton potential
contains a gentle oscillation which may occur all the way to the end of
inflation. In fact, many large-field models that include oscillations in a
natural way have been investigated (e.g. \citealt{ashoorioon2006};
\citealt{silverstein2008}; \citealt{kaloper2009}; \citealt{mcallister2010};
\citealt{kaloper2011}; \citealt{kobayashi2011}).
In this article we will test axion monodromy models against the CMB temperature
anisotropy spectrum, but our results are broadly applicable to inflationary
potentials with gentle oscillations. We will assume that the oscillation scale
(in $\log k$) is ``long'', i.e.~comparable to the range of multipoles observed
in the CMB, naturally leading to a running spectral index as described above.
We will show that the best-fit model gives three correlated predictions: 1) a
significant gravitational wave amplitude of order $r \sim 0.1$; 2) a reduction
of power at large scales (low $\ell$) despite the tensor contribution, thus
mitigating the existing tension at large scales; 3) a corresponding
significant suppression of power at small scales, particularly at the scales
relevant to dwarf galaxy formation, which will alleviate some of the
small-scale structure problems of $\Lambda$CDM. Finally, although axion
monodromy allows for a gravitational wave background, we will find that the
e-folding requirement surprisingly places an upper bound on the
tensor-to-scalar ratio ($r \lesssim 0.2$), which is in tension with the BICEP2
measurement unless a significant portion of the observed B-mode signal is due
to foreground contamination rather than primordial gravitational waves.
The paper is outlined as follows. In Section \ref{sec:background} we discuss
the theoretical motivation for axion monodromy models and derive formulae for
the power spectra for scalar and tensor perturbations. In Section
\ref{sec:priors} we discuss our choice of parameters and the priors for each.
The resulting constraints are shown in Section \ref{sec:results}; the
constraints on the oscillation parameters are discussed in Section
\ref{sec:axion_constraints}, the best-fit model is discussed in Section
\ref{sec:bestfit}, and the constraint on the tensor-to-scalar ratio $r$ is
discussed in Section \ref{sec:nr_constraints}. In Sections \ref{sec:lowl} and
\ref{sec:highl} we compare our model to the usual constant-running model at low
and high multipoles, respectively. In Section \ref{sec:amending_efoldings} we
discuss whether our model can be amended to allow for a higher tensor-to-scalar
ratio. Section \ref{sec:smallscale_probs} investigates the implications of an
oscillating power spectrum for the small-scale problems of $\Lambda$CDM: the
effect on dwarf galaxy formation is discussed in Section \ref{sec:dwarfs},
while in Section \ref{sec:lyman_alpha} our results are compared to recent
Lyman-$\alpha$ forest data and the prospects for other small-scale probes of
the matter power spectrum are discussed. We conclude with a summary of our
main points in Section \ref{sec:conclusions}.
\section{Theoretical Background}\label{sec:background}
\subsection{Motivation for large-field inflation models}
During slow roll inflation, it is easily shown that a relation exists between
the tensor-to-scalar ratio $r$ and the overall shift $\Delta\phi$ in the scalar
field from CMB scales to the end of inflation:
\begin{equation}
\frac{\Delta\phi}{M_p} = \mathcal{O}(1)\times\left(\frac{r}{0.01}\right)^{1/2}
\label{eq:lyth_bound}
\end{equation}
where $M_p=\left(8\pi G\right)^{-1/2}$ is the reduced Planck mass. The
importance of this relation, known as the Lyth bound (\citealt{lyth1997}), is
that a significant gravitational wave contribution $r \gtrsim 0.01$ implies
that the scalar field value changes by more than the Planck energy $M_p$. These
so-called large-field models imply that any fields coupled to the inflaton with
at least gravitational strength will receive corrections to the coupling
strengths, resulting in an infinite series of Planck-suppressed terms
contributing to the scalar field potential. Such terms spoil the flatness of
the potential, inhibiting inflation, unless there is an exquisite degree of
fine-tuning in the coefficients. The only way to avoid fine-tuning is if a
symmetry ``protects'' the potential from large contributions---in particular,
an approximate shift symmetry $\phi \rightarrow \phi + a$ in the corresponding
Lagrangian ensures that the correction to the potential is \emph{at worst}
periodic, and thus its magnitude can in principle be controlled.
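As a numerical illustration of equation \ref{eq:lyth_bound}, $r \simeq 0.1$
corresponds to $\Delta\phi \sim \sqrt{10}\, M_p \approx 3\, M_p$, already well
into the super-Planckian regime.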
While many different potentials can approximately satisfy the above mentioned
shift symmetry, there is no guarantee that such a potential admits a UV
completion in quantum gravity; indeed, it is far more likely that the class of
potentials which can be derived from a corresponding UV-complete theory obeying
the same approximate shift symmetry is relatively restricted. For this reason,
merely considering generic renormalizable effective field theories is not
sufficient for large-field models. Instead, one should consider inflation
models that can be derived from quantum gravity, with string theory being the
best developed to date. Although a few alternatives exist
(\citealt{freese1990}; \citealt{dimopoulos2008}; \citealt{wan2014};
\citealt{neupane2014a}), one of the best-motivated of these models is axion
monodromy (\citealt{silverstein2008}; \citealt{mcallister2010};
\citealt{flauger2010}; \citealt{aich2013}). Axion fields $a$ in string theory
naturally obey a discrete symmetry $a \rightarrow a + 2\pi$. The phenomenon of
monodromy appears when axions are coupled to fluxes in a compactified
higher-dimensional space (in the string theoretic description, this occurs in
the presence of a wrapped brane in a Calabi-Yau manifold). The shift symmetry
is slightly broken, allowing the field potential energy to change by a large
amount as one traverses many cycles in the compactified space, while all the
remaining microphysics is periodic in field space; this is analogous to a
spiral staircase where the overall height can change without bound through many
cycles.
The resulting potential for axion monodromy models consists of a monomial
term plus a sinusoidal term:
\begin{equation}
V(\phi) = \lambda \phi^p + A\sin\left(\frac{\phi}{f} + \psi\right)
\label{potential_original}
\end{equation}
where we now work in Planck units, $M_p=\left(8\pi
G\right)^{-1/2}=1$, for the remainder of this paper. The parameter $f$
corresponds to the period of oscillation and is known as the \emph{axion decay
constant}. Although in its original incarnation the monodromy potential has
$p=1$, depending on the flux coupling and brane configuration one can also
achieve other discrete values such as $p=2/3$, $p=2$, $p=3$ and so on
(\citealt{mcallister2014}). Since we are interested here in what $p$-value(s)
are preferred by CMB data, we will vary $p$ as a free parameter, with the
ansatz that $p$ can be any positive real number.
\subsection{Axion monodromy power spectrum}\label{sec:power_spectrum}
For fitting purposes, we find it useful to make the transformation $\phi
\rightarrow \phi_{min} - \phi$, where $\phi_{min}$ will be chosen so that $\phi
= 0$ corresponds to the CMB pivot scale $k_* = 0.05$ Mpc$^{-1}$. With a
suitable relabeling of parameters, we can recast the potential in the following
form,
\begin{equation}
\frac{V}{V_*} = \left(1-a\sin\delta\right)\left(1-\phi/\phi_{min}\right)^p + a \sin\left(\frac{\phi}{f} + \delta\right)
\label{potential}
\end{equation}
which is written so that $V(\phi=0) = V_*$. We choose $\phi_{min}$ to be
positive so the field rolls in the positive direction, i.e.~$\phi$ increases as
it rolls.
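Since the remaining formulae are built on equation \ref{potential}, it is
convenient to record a minimal numerical transcription of it. The following
Python sketch is ours (the function name and parameter values are purely
illustrative, not taken from the fits below); it also verifies the
normalization $V(\phi=0)=V_*$:
\begin{verbatim}
import numpy as np

def V_over_Vstar(phi, p, phi_min, a, f, delta):
    # recast monodromy potential, normalized so that V(phi=0) = V_*
    monomial = (1.0 - a*np.sin(delta)) * (1.0 - phi/phi_min)**p
    return monomial + a*np.sin(phi/f + delta)

# sanity check at the pivot scale (phi = 0): V/V_* should equal 1
assert np.isclose(V_over_Vstar(0.0, p=1.0, phi_min=10.5,
                               a=0.005, f=0.3, delta=0.1), 1.0)
\end{verbatim}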
To find the primordial power spectra for scalar and tensor perturbations, one
typically uses the slow roll approximation, which assumes the Hubble parameter
varies slowly enough that the quasi-de Sitter solution of the equation for
quantum perturbations can be used (\citealt{mukhanov1992}; for a review see
\citealt{baumann2009}). Careful consideration is required here, however,
because slow roll can break down if oscillations in the potential are
sufficiently rapid, in which case the relevant equation must be solved
directly. This occurs if the amplitude is sufficiently large, or if the axion
decay constant $f$ is small (\citealt{flauger2010}). Since in this paper we
focus only on long-wavelength oscillations in the power spectrum, the slow roll
approximation holds very well in the parameter region of interest. The proof of
this is given in appendix \ref{sec:slowroll} for the interested reader.
The power spectrum for curvature perturbations in the slow-roll approximation
is given by $\Delta_{\mathcal{R}}^2 \sim \frac{H^2}{\epsilon} \sim
\frac{V^3}{V_{,\phi}^{2}}$. Normalizing the power spectrum by the scalar
amplitude at the pivot scale $A_s$, we find
\begin{eqnarray}
\label{powerspec}
&&\Delta_{\mathcal{R}}^2 = A_s \mathcal{N}^2 \times \nonumber \\
&&\frac{\left[(1-a\sin\delta)(1-\phi_k/\phi_{min})^p + a \sin(\frac{\phi_k}{f} + \delta)\right]^3}{\left[(1-a\sin\delta)(1-\phi_k/\phi_{min})^{p-1} - \left(\frac{\phi_{min}}{p f}\right) a \cos(\frac{\phi_k}{f} + \delta)\right]^2}, \nonumber \\
\end{eqnarray}
where $\phi_k$ corresponds to the scalar field value at the time when the mode
with wavenumber $k$ left the horizon (to be determined shortly), and
\begin{equation}
\label{nfactor}
\mathcal{N} = 1 - a\sin\delta - \left(\frac{\phi_{min}}{pf}\right) a\cos\delta.
\end{equation}
Note that at the pivot scale ($\phi_k = 0$), this reduces to
$\Delta_{\mathcal{R}}^2 = A_s$ as it should.
Meanwhile, the primordial tensor power spectrum is given by $\Delta_t^2 \sim
H^2 \sim V$, and since it is normalized by $A_t = r A_s$ where $r$ is the
tensor-to-scalar ratio at the pivot scale $k_*$, we have
\begin{equation}
\label{tensor_powerspec}
\Delta_t^2 = r A_s \frac{V(\phi_k)}{V_*}
\end{equation}
where $V/V_*$ is given in equation \ref{potential}.\footnote{Note that the
scale of the potential $V_*$ will not enter into our equations directly, since
we only encounter the combination $V/V_*$. Nevertheless, $V_*$ is not an
independent parameter but is rather determined by the normalization of
$\Delta_t^2$, with the result $V_* = \frac{3\pi^2}{2}rA_s$ in the slow roll
approximation.}
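The spectra of eqs.~\ref{powerspec} and \ref{tensor_powerspec} translate
directly into code. The sketch below (again with illustrative rather than
fitted parameter values) also verifies the pivot-scale normalization
$\Delta_{\mathcal{R}}^2(\phi_k=0)=A_s$ noted above:
\begin{verbatim}
import numpy as np

def scalar_power(phi_k, A_s, p, phi_min, a, f, delta):
    # Delta_R^2 of equation (powerspec); reduces to A_s at phi_k = 0
    N = 1.0 - a*np.sin(delta) - (phi_min/(p*f))*a*np.cos(delta)
    num = ((1.0 - a*np.sin(delta))*(1.0 - phi_k/phi_min)**p
           + a*np.sin(phi_k/f + delta))**3
    den = ((1.0 - a*np.sin(delta))*(1.0 - phi_k/phi_min)**(p - 1)
           - (phi_min/(p*f))*a*np.cos(phi_k/f + delta))**2
    return A_s * N**2 * num/den

def tensor_power(phi_k, A_s, r, p, phi_min, a, f, delta):
    # Delta_t^2 = r A_s V(phi_k)/V_* of equation (tensor_powerspec)
    V = ((1.0 - a*np.sin(delta))*(1.0 - phi_k/phi_min)**p
         + a*np.sin(phi_k/f + delta))
    return r * A_s * V

assert np.isclose(scalar_power(0.0, 2.2e-9, 1.0, 10.5,
                               0.005, 0.3, 0.1), 2.2e-9)
\end{verbatim}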
Finally, we still need the mapping $\ln\left(\frac{k}{k_*}\right) \rightarrow
\phi_k$. In the slow roll approximation, this is given by
\begin{equation}
\frac{k}{k_*} = \sqrt{\frac{V}{V_*}} \exp\left\{\int_0^\phi\left|\frac{V}{V_{,\phi}}\right|d\phi\right\}.
\label{kmapping}
\end{equation}
For a given set of parameter values, this integral can be calculated and
inverted numerically; however, this would be computationally expensive to
perform while the parameters are being varied during the MCMC procedure
described in Section \ref{sec:priors}. Thus, an analytic approximation is
desirable here, which can be obtained as follows. Taking the log of equation
\ref{kmapping}, the formula is easily integrated and inverted if we first
consider the no-oscillation case where $a=0$. This yields
\begin{equation}
\phi_{k,0} \equiv \frac{p}{\phi_{min}}\ln\left(\frac{k}{k_*}\right).
\label{phik0}
\end{equation}
For the case where $a \neq 0$, an approximate solution is found by expanding in
$a \ll 1$ and keeping terms to first order in $\frac{a\phi_{min}}{pf}$. Upon
integrating, the formula can be inverted approximately by substituting
equation \ref{phik0} into the sine term, producing the formula
\begin{equation}
\label{phik_approx}
\phi_k \approx \phi_{k,0} - \frac{a \phi_{min}}{p} \left\{\sin\left(\frac{\phi_{k,0}}{f} + \delta\right) - \sin\delta\right\}.
\end{equation}
Using this approximation in eqs.~\ref{powerspec} and \ref{tensor_powerspec}, we
find the power spectrum differs from the exact numerical solution by less than
1\% over the range $2 < \ell < 2500$ for all the test cases considered (using
the approximate formula $\ell \approx k x_c$ where $x_c \approx 14100$ Mpc is
the comoving angular diameter distance to last scattering).
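The accuracy quoted above can be reproduced along the following lines (a
sketch only; the grid sizes and parameter values are illustrative, and the
function names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import interp1d

def phi_k_exact(lnk, p, phi_min, a, f, delta, npts=20000):
    # tabulate ln(k/k_*) on a phi grid via equation (kmapping), then invert
    phi = np.linspace(0.0, 0.5*phi_min, npts)
    V  = (1 - a*np.sin(delta))*(1 - phi/phi_min)**p + a*np.sin(phi/f + delta)
    dV = (-(p/phi_min)*(1 - a*np.sin(delta))*(1 - phi/phi_min)**(p - 1)
          + (a/f)*np.cos(phi/f + delta))
    lnk_grid = 0.5*np.log(V/V[0]) + cumulative_trapezoid(np.abs(V/dV),
                                                         phi, initial=0.0)
    return interp1d(lnk_grid, phi)(lnk)

def phi_k_approx(lnk, p, phi_min, a, f, delta):
    # first-order inversion, equation (phik_approx)
    phi0 = (p/phi_min)*lnk
    return phi0 - (a*phi_min/p)*(np.sin(phi0/f + delta) - np.sin(delta))

lnk = np.linspace(0.0, 4.0, 200)   # modes leaving the horizon after k_*
exact  = phi_k_exact(lnk, 1.0, 10.5, 0.005, 0.3, 0.1)
approx = phi_k_approx(lnk, 1.0, 10.5, 0.005, 0.3, 0.1)
print(np.max(np.abs(exact - approx)))
\end{verbatim}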
\section{Sampling and priors}\label{sec:priors}
We sample the model parameter space with a Markov Chain Monte Carlo (MCMC)
method using the CosmoMC software package (\citealt{lewis2002}), which has been
modified to incorporate the power spectra in eqs.~\ref{powerspec} and
\ref{tensor_powerspec} along with the model parameters. For the sampling we use
the Metropolis-Hastings algorithm extended by a ``fast-slow'' algorithm for
efficiently sampling nuisance parameters (\citealt{lewis2013}). In addition to
Planck data (\citealt{plancklike}), the likelihood includes WMAP 9-year
polarization data (\citealt{hinshaw2013}) as well as small-scale CMB data from
the ACT (\citealt{sievers2013}) and SPT surveys (\citealt{story2013}).
\subsection{Choice of model parameters}\label{sec:modelparams}
To sample the parameter space, at first it would seem most straightforward to
vary the five model parameters in equation \ref{potential} ($\phi_{min},p,f,a,$
and $\delta$). However, if this is done then the tensor-to-scalar ratio $r$
\emph{cannot} be varied freely, but must be set according to the relation
$r=16\epsilon_V$, where $\epsilon_V = \frac{1}{2}
\left(\frac{V_{,\phi}}{V}\right)^2$ is the first potential slow-roll parameter.
Since $r$ is the more interesting observable, we prefer to vary $r$ freely
while using this constraint to fix the value of $\phi_{min}$ as a function of
the other parameters. Substituting the potential from equation \ref{potential},
we find our constraint equation,
\begin{equation}
\frac{p}{\phi_{min}}(1-a\sin\delta) - \frac{a}{f}\cos\delta = \sqrt{\frac{r}{8}}.
\label{r_constraint}
\end{equation}
While $a$ gives the amplitude of the oscillation in the potential, the
corresponding oscillation amplitude in the power spectrum is more directly
observable in the CMB. By expanding equation \ref{powerspec} in the amplitude
$a$, it can be seen that the oscillation amplitude in the scalar power
spectrum is $\approx 2\frac{a\phi_{min}}{pf}$ (this dominates over $a$ since
$\phi_{min}$ is typically of order 10 while $f$ will be of order $0.1$).
\ref{r_constraint} that $\frac{p}{\phi_{min}} = \sqrt{\frac{r}{8}}$. In
practice, we will find that $a$ must be small, so this will still hold to a
reasonably good approximation for realistic values of $a$. With this in mind,
we will define the (approximate) power spectrum oscillation amplitude
\begin{equation}
b \equiv \frac{2a}{f}\sqrt{\frac{8}{r}}.
\label{bdef}
\end{equation}
where, again, typically $b \gg a$. It should be emphasized that our analysis
will not assume that $b \ll 1$. While it is true that the power spectrum
amplitude may differ somewhat from $b$ unless $b \ll 1$, this does not preclude
our using it as a parameter since the amplitude is still proportional to $b$.
In practice, our inferred $b$ values will satisfy $b \lesssim 1$, but $b$ will
not necessarily be very small.
Our power spectrum parameters to vary, then, are $b$, $f$, $\delta$ and $p$, in
addition to $r$ and the scalar amplitude $A_s$. Expressing equation
\ref{r_constraint} in terms of $b$, we find
\begin{equation}
\frac{p}{\phi_{min}} = \sqrt{\frac{r}{8}}\left(\frac{1+\frac{1}{2}b\cos\delta}{1-\frac{f}{2}\sqrt{\frac{r}{8}}b\sin\delta}\right).
\label{phimin}
\end{equation}
From the above formula it is obvious that as $b \rightarrow 0$, we recover
$\frac{p}{\phi_{min}} \approx \sqrt{\frac{r}{8}}$. Equation \ref{phimin} will
be used to eliminate $\phi_{min}$ in the power spectrum formulae
(eqs.~\ref{powerspec}, \ref{tensor_powerspec}).
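In code, eliminating $\phi_{min}$ amounts to the following small helpers (a
sketch; the function names are ours):
\begin{verbatim}
import numpy as np

def phi_min_of(p, r, f, b, delta):
    # equation (phimin), solved for phi_min via p/phi_min = slope
    s = np.sqrt(r/8.0)
    slope = s*(1.0 + 0.5*b*np.cos(delta))/(1.0 - 0.5*f*s*b*np.sin(delta))
    return p/slope

def a_of(b, f, r):
    # invert the definition b = (2a/f) sqrt(8/r), equation (bdef)
    return 0.5*b*f*np.sqrt(r/8.0)
\end{verbatim}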
\subsection{e-folding prior}
The number of e-foldings from the time the mode $k_*$ exits the horizon to the
end of inflation is constrained theoretically to lie within the approximate
range 50-60. We will therefore impose a corresponding prior on the number of
e-foldings, which is given by the integral
\begin{equation}
N = \int_0^{\phi_{e}} \left|\frac{V}{V_{,\phi}}\right| d\phi,
\label{efoldings_exact}
\end{equation}
where $\phi_e$ denotes the scalar field value at the end of inflation
determined by the solution to the equation $\epsilon_V(\phi) \approx 1$. In the
case with zero amplitude ($a \approx 0$), using the expression for the
potential in equation \ref{potential} one finds an expression for the number of
e-foldings $N_0$,
\begin{equation}
\phi_{min}^2 = \frac{p}{2}\left(4 N_0 + p\right).
\label{phimin_efoldings}
\end{equation}
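For completeness, we record the short derivation: for the pure monomial
$V \propto (1-\phi/\phi_{min})^p$ one has
$\epsilon_V = \frac{p^2}{2(\phi_{min}-\phi)^2}$, so inflation ends at
$\phi_{min}-\phi_e = p/\sqrt{2}$, and
\begin{equation*}
N_0 = \int_0^{\phi_e} \left|\frac{V}{V_{,\phi}}\right| d\phi
= \frac{1}{2p}\left(\phi_{min}^2 - \frac{p^2}{2}\right),
\end{equation*}
which rearranges to equation \ref{phimin_efoldings}.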
Even for nonzero amplitude, equation \ref{phimin_efoldings} holds approximately
true, so long as $\phi_{min}$ is defined in terms of the amplitude according to
equation \ref{phimin}. Combining eqs.~\ref{phimin} and \ref{phimin_efoldings},
we find an expression for the approximate number of e-foldings $N_0$:
\begin{equation}
N_0 = p\left\{\frac{4}{r}\left(\frac{1-\frac{f}{2}\sqrt{\frac{r}{8}}b\sin\delta}{1+\frac{1}{2}b\cos\delta}\right)^2 - \frac{1}{4}\right\}.
\label{efoldings_formula}
\end{equation}
To determine the exact number of e-foldings $N$, the value for $\phi_{e}$ and
the integral in equation \ref{efoldings_exact} must be calculated numerically.
In practice, the presence of the oscillations causes $N$ to differ from that
determined by equation \ref{phimin_efoldings} only by a small amount (typically
$N>N_0$ by less than 3), although it depends on the oscillation amplitude and
period. Since the integral in equation \ref{efoldings_exact} is
computationally expensive to calculate while varying parameters, we will
instead use equation \ref{efoldings_formula} for the approximate number of
e-foldings to enforce a prior in $N_0$ during the MCMC routine.
While equation \ref{efoldings_formula} typically gives a reasonable
approximation to the number of e-foldings, a catastrophe can occur near the end
of inflation if the oscillations dominate over the monomial term in the
potential---in this case the potential may cease to become monotonic and a
local minimum (false vacuum) can occur, rendering the number of e-foldings
effectively infinite.\footnote{To be precise, a very large number of e-foldings
would occur until the field tunnels out of the local minimum via bubble
nucleation. Given that relatively few e-foldings would follow this before
inflation ends, this scenario would produce an open universe with a very
sub-critical energy density, in gross violation of cosmological constraints
(see for example \citealt{bucher1995}).} This occurs if either the amplitude
$b$ or the exponent $p$ are too large; in the latter case, the slope of the
monomial term becomes very shallow before inflation ends, allowing the
oscillations to dominate. To deal with this issue, we will refine the
e-folding prior during post-processing by performing the numerical integral
(equation \ref{efoldings_exact}) to find $N$ for each point in the MCMC chain.
This allows us to eliminate regions of parameter space where the number of
e-foldings $N$ becomes large or infinite.
We therefore obtain a flat prior in the number of e-foldings as follows. During
the MCMC routine, we sample the parameter space with a flat prior in the
approximate number of e-foldings $N_0$ over a liberal range from 40 to 70.
During post-processing, the exact number of e-foldings $N$ for each point in
parameter space is calculated by first finding the field value at the end of
inflation $\phi_e$ via a grid search. If a local stationary point is
encountered before inflation ends, then the number of e-foldings is effectively
infinite and thus the point is discarded. For the remaining points, we compute
$N$ by performing the integral in equation \ref{efoldings_exact} numerically;
points with $N$ outside the canonical range from 50 to 60 are then discarded.
By this method, we obtain a flat prior in the number of e-foldings $N$ to good
approximation (this will be verified in Section \ref{sec:results}).
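Schematically, this post-processing step can be implemented as follows (a
minimal sketch rather than our actual pipeline code; the parameter values are
purely illustrative):
\begin{verbatim}
import numpy as np

def efolds_exact(p, phi_min, a, f, delta, npts=200000):
    # grid search for phi_e (eps_V = 1), then integrate equation
    # (efoldings_exact); return inf if a false vacuum is encountered
    phi = np.linspace(0.0, phi_min*(1.0 - 1e-6), npts)
    V  = (1 - a*np.sin(delta))*(1 - phi/phi_min)**p + a*np.sin(phi/f + delta)
    dV = (-(p/phi_min)*(1 - a*np.sin(delta))*(1 - phi/phi_min)**(p - 1)
          + (a/f)*np.cos(phi/f + delta))
    eps = 0.5*(dV/V)**2
    end = np.argmax(eps >= 1.0)        # first grid point where inflation ends
    if end == 0:
        return np.inf                  # eps_V never reaches 1 on the grid
    if np.any(dV[:end] >= 0.0):        # stationary point: false vacuum
        return np.inf
    y = np.abs(V[:end]/dV[:end])       # trapezoidal rule for the N integral
    return np.sum(0.5*(y[1:] + y[:-1])*np.diff(phi[:end]))

N = efolds_exact(p=1.0, phi_min=10.5, a=0.005, f=0.3, delta=0.1)
keep = (50.0 <= N <= 60.0)             # flat prior cut in N
\end{verbatim}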
One last subtlety remains in implementing the e-folding prior: the parameter
$N_0$ is entirely determined by the model parameters discussed in Section
\ref{sec:modelparams} and thus cannot be included as a separate parameter. We
therefore impose the $N_0$ prior by making a transformation of variables. In
Section \ref{sec:results} we will show that $\delta$ prefers to be small, and
since $f\sqrt{\frac{r}{8}} \ll 1$, we can approximate equation
\ref{efoldings_formula} as
\begin{equation}
N_0 \approx \frac{4p}{r}\left(1+\frac{1}{2}b\right)^{-2}.
\label{efoldings_formula_approx}
\end{equation}
Thus, to a good approximation the number of e-foldings depends only on the
parameters $p$, $r$, and $b$, with very little dependence on $f$ and $\delta$.
Since $N_0$ will not be one of our primary model parameters, we enforce the
e-folding prior by starting with $N_0$ as a parameter (with a given prior),
then making the transformation from $N_0$ to $b$. We then derive our prior in
$b$ from the prior in $N_0$ and the resulting Jacobian $|\partial N_0/\partial
b|$ using equation \ref{efoldings_formula}. As we will see in Section
\ref{sec:results}, $b$ is fairly well-constrained (apart from a small
non-Gaussian tail) and the Jacobian has only a minor effect on the inferred
parameters.
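As a concrete illustration, if one uses the approximate relation (equation
\ref{efoldings_formula_approx}) in place of the full equation
\ref{efoldings_formula}, the transformation and Jacobian take closed forms:
\begin{equation}
b \approx 4\sqrt{\frac{p}{rN_0}}-2, \qquad
\left|\frac{\partial N_0}{\partial b}\right| \approx
\frac{4p}{r}\left(1+\frac{1}{2}b\right)^{-3} = \frac{N_0}{1+\frac{1}{2}b},
\end{equation}
so a flat prior in $N_0$ maps to a prior $\pi(b)\propto N_0/(1+b/2)$ at fixed
$p$ and $r$.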
\subsection{Priors in the model parameters}
Apart from the amplitude $b$ whose prior is defined by the e-folding constraint
(equation \ref{efoldings_formula}), we choose a flat prior in the remaining
model parameters ($p$, $r$, $f$, $\delta$). Here we discuss the preferred range
of each parameter.
In order to sample the full range of possible phases and amplitudes,
the most straightforward approach would be to vary $\delta$ over the range
($-\pi,\pi$) and allow $b$ to vary from 0 to some large amplitude $b_{max}$.
However, the same can be achieved by varying $\delta$ in the range
($-\frac{\pi}{2},\frac{\pi}{2}$) and allowing $b$ to have a \emph{negative}
amplitude, since the point ($b$, $\delta=\pi$) is equivalent to
($-b$, $\delta=0$) because $\cos(x+\pi)=-\cos x$. We favor the latter approach, since it avoids having a
bimodal posterior distribution in the phase shift $\delta$. A negative
amplitude corresponds to having a \emph{positive} running of the spectral index
near the pivot scale, and the posterior will run continuously from positive to
negative $b$-values. We choose a liberal $b_{max}=2$ so our allowed range in
$b$ will therefore be $(-2,2)$.
We can get a sense of the desired range in $f$ by considering the oscillation
period in $\log \ell \sim \log k + const$. Combining eqs.~\ref{powerspec} and
\ref{phik0}, we find that the axion decay constant $f$ is related to the period
in $\log k$ by
\begin{equation}
f = \frac{\ln 10}{2\pi}\left(\frac{p}{\phi_{min}}\right)P_{\log k}.
\label{Plogk_exact}
\end{equation}
Using equation \ref{phimin_efoldings}, this becomes
\begin{equation}
f \approx 0.26 P_{\log k} \sqrt{\frac{p}{N_0}}.
\label{Plogk}
\end{equation}
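The numerical coefficient can be recovered by assuming the standard slow-roll
relation for a monomial potential, $\phi_{min}\approx\sqrt{2pN_0}$ in reduced
Planck units (an assumption here, used as a stand-in for equation
\ref{phimin_efoldings}):
\begin{equation}
f \approx \frac{\ln 10}{2\pi}\frac{p}{\sqrt{2pN_0}}\,P_{\log k}
= \frac{\ln 10}{2\sqrt{2}\,\pi}\sqrt{\frac{p}{N_0}}\,P_{\log k}
\approx 0.26\,P_{\log k}\sqrt{\frac{p}{N_0}}.
\end{equation}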
From the largest scales down to the Silk damping tail probed by ACT/SPT,
observable primary anisotropies in the CMB span a range of $\ell \approx
2-3500$, corresponding to $\Delta \log \ell \approx 3$. At higher multipoles,
there is no strong preference for negative running in either the Planck or ACT
data (although SPT does prefer negative running at high $\ell$ to some
extent---see \citealt{valentino2010}). We therefore expect at worst a mild
suppression of power at high $\ell$. In order to have suppression at low $\ell$
without an equally large suppression at higher $\ell$, the entire range of
multipoles $\Delta\log \ell\sim 3$ should span less than roughly three-quarters
of a full period; in other words, the period $P_{\log \ell} = P_{\log k}$
should be greater than roughly $\sim 4$. For $p$ running from $0.5$ to $4$ and
50-60 e-foldings, we find from equation \ref{Plogk} that $f$ should be larger
than $\approx 0.1$.
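(As a worked example, $p=0.5$, $N_0=60$, and $P_{\log k}=4$ give
$f \approx 0.26\times4\times\sqrt{0.5/60}\approx 0.1$, which sets the quoted
lower bound.)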
On the other hand, a large period of oscillation would require a large
amplitude to achieve the desired suppression of power at low $\ell$, and this
would approach the constant running limit (over CMB scales) that has been
considered before. For $f\approx 1$, the oscillation period is at least
several times larger than the relevant $\ell$-range (unless $p \ll 1$) such
that the running is effectively constant in this regime. As we will see in
Section \ref{sec:results} however, regions of parameter space with a large
period $P_{\log k}$ are forbidden by the e-folding constraint unless the
amplitude $b \lesssim 1$, and hence the running of the spectral index is small
(see also appendix \ref{sec:3dposts}). Thus, allowed solutions with $f \gtrsim
1$ tend to have small running and cannot fit the low-$\ell$ power spectrum
significantly better than the usual power-law power spectrum model. In Section
\ref{sec:results} we will show that this expectation is correct and thus the
constraints do not change significantly when $f$ is extended up to 2 (see
Figure \ref{fig:fpriors}). We therefore choose the fiducial range to be $f \in
(0.1,1)$.
Given that the period $P_{\log k}$ is more directly observable in the power
spectrum than $f$ is, it might be tempting to use $P_{\log k}$ as our model
parameter instead of $f$. We choose not to do this, for two reasons. On the
theory side, a super-Planckian axion decay constant $f \gtrsim 1$ is difficult
to implement in the underlying string theory; in fact $f \ll 1$ seems necessary
to embed the corresponding model, although it may be possible for multiple
axions to combine to produce a larger effective axion decay constant
(\citealt{kim2005}; \citealt{kappl2014}). Given these difficulties, we prefer
to impose an upper limit $f \leq f_{max}$, with $f_{max}=1$ being the fiducial
value. This does \emph{not} translate to a clean upper bound in $P_{\log k}$,
however, since from equation \ref{Plogk} it is possible to have $f \gtrsim 1$
even if $P_{\log k}$ is relatively small provided $p$ is large enough.
The second reason for choosing $f$ as our parameter instead of $P_{\log k}$ is
more subtle. As we will see in Section \ref{sec:results}, the posterior
distribution is multi-modal and there exist regions of parameter space which
fit the high-$\ell$ likelihood at the expense of lower $\ell$. For example, if
$p$ is made small, by equation \ref{Plogk} the period becomes large unless $f$
is also small. However in the latter case, we find another mode emerges which
has negligible running and only fits the high-$\ell$ likelihood (for discussion
see appendix \ref{sec:3dposts}). Since we are primarily interested in
improving the fit at low as well as high multipoles, by imposing a lower bound
$f_{min}=0.1$ we avoid this spurious mode entirely. Thus, our fiducial range in
$f$ is ($0.1,1$). In Section \ref{sec:nr_constraints} we will consider the
effect of varying the prior of $f$, including the allowed range (see Figure
\ref{fig:fpriors}).
\begin{figure*}[t]
\centering
\includegraphics[height=1.0\hsize,width=1.0\hsize]{triangle_plot.eps}
\caption{Posteriors in the inflation model parameters ($p$, $b$, $\delta$, $f$)
discussed in Section \ref{sec:modelparams} and the tensor-to-scalar ratio $r$
at the pivot scale $k_*=0.05$ Mpc$^{-1}$. Contours enclose 68\% and 95\% of
the total probability in each joint posterior. Note the small probability for
$r=0.2$, which lies just outside the 99\% confidence region. The correlation
between the exponent $p$ and $r$ is a consequence of the e-folding constraint
(eq.~\ref{efoldings_formula}). In contrast to the amplitude $b$ and phase shift
$\delta$ which have peaked distributions, the axion decay constant $f$ is
poorly constrained although it exhibits a mild correlation with $b$. Finally,
note that the distributions in $r$ and $p$ exhibit very little dependence on
$f$, and are thus fairly insensitive to the assumed prior in $f$.}
\label{fig:triangle_plot}
\end{figure*}
\section{Results}\label{sec:results}
\subsection{Constraints on the oscillation parameters $b$, $\delta$,
$f$}\label{sec:axion_constraints}
After sampling the parameter space with the data and priors discussed in
Section \ref{sec:priors}, we display marginal posterior probability
distributions in the model parameters in a ``triangle plot'' in Figure
\ref{fig:triangle_plot}. Starting with the one-dimensional posteriors, it is
clear that the oscillation amplitude $b$ and phase shift $\delta$ have
distributions that are well peaked, albeit with significant non-Gaussian tails.
By contrast, the axion decay constant $f$ is very poorly constrained, with a
mild preference for $f \approx 0.5$ but largely prior-dominated.
\begin{figure*}[t]
\centering
\subfigure[spectral index]
{
\includegraphics[height=0.30\hsize,width=0.38\hsize]{post_ns.eps}
\label{post_ns}
}
\subfigure[running of the spectral index $\alpha_*$]
{
\includegraphics[height=0.30\hsize,width=0.38\hsize]{post_alpha.eps}
\label{post_alpha}
}
\subfigure[number of e-foldings $N$]
{
\includegraphics[height=0.30\hsize,width=0.38\hsize]{post_N.eps}
\label{post_N}
}
\subfigure[oscillation period $P_{\log k}$]
{
\includegraphics[height=0.30\hsize,width=0.38\hsize]{post_Plogk.eps}
\label{post_Plogk}
}
\caption{Marginal posterior probability in four derived parameters: (a) The
spectral index at the pivot scale $k_* = 0.05$ Mpc$^{-1}$. The total spectral
index $n_s$ (solid line) is plotted together with the unmodulated spectral
index $n_{s,0}$ (dashed line) which excludes the oscillatory part. (b) Running
of the spectral index $\alpha_*$ at the pivot scale. (c) The number of
e-foldings $N$ from the time that mode $k_*$ left the horizon, to the end of
inflation. (d) Period of oscillation in the power spectrum in terms of
$\log_{10}k$.}
\vspace{10pt}
\label{fig:post_derivedparams}
\end{figure*}
Before proceeding, it is important to verify that the constraints on the spectral
index $n_s$ and running $\alpha_*$ at the pivot scale $k_*=0.05$ Mpc$^{-1}$ are
consistent with those obtained from the usual power-law spectrum and constant
running models. To this end, analytic expressions for $n_s$ and $\alpha_*$ are
derived in Appendix \ref{sec:ns_alpha_formulas} and given in equations
\ref{ns_formula} and \ref{alpha_formula}, respectively. Using these formulas we
plot the corresponding derived posteriors in Figures \ref{post_ns} and
\ref{post_alpha}. The probability is peaked around $n_s\approx 0.96$, entirely
consistent with the base Planck model; likewise, the allowed running $\alpha_*$
lies in the approximate range $(-0.03,0)$, consistent with the usual constant
running model. In Figure \ref{post_N} we plot a posterior in the number of
e-foldings $N$ (calculated from eq.~\ref{efoldings_exact}), which shows $N$ to
be entirely dominated by the chosen flat prior in the fiducial range 50-60.
Finally, in Figure \ref{post_Plogk} we plot a posterior in the oscillation
period $P_{\log k}$ (using eq.~\ref{Plogk_exact}), which is well-peaked in
contrast to the constraint on $f$.
Returning to the model parameter constraints, the structure of the posterior
distribution can be better understood from the joint 2-dimensional posteriors
in Figure \ref{fig:triangle_plot}. In the $f$ vs. $b$ plot, one sees a mild
correlation in $f$ and $b$, particularly for $f \lesssim 0.6$. This can be
understood in terms of the running of the spectral index: if the period
(corresponding to $f$) is increased, one must also increase the amplitude $b$
to keep the running $\alpha_*$ constant (this can also be seen in the formula
for $\alpha_*$ in equation \ref{alpha_formula}). The correlation is not very
tight, however, because the required running depends on $r$: the greater the
tensor contribution at low $\ell$, the greater the (negative) running must be
to adequately suppress a corresponding amount of low-$\ell$ scalar power.
In Figure \ref{fig:triangle_plot}, we can see in the $b$ vs. $\delta$ plot that the
distribution is clearly multi-modal: outside the maximum probability
region there are three separate ``wings''. Two of the wings run off to large or
small $\delta$, while the amplitude $b$ is very small or even negative. The
third wing runs off to high amplitudes $b$. The multi-modal structure is
discussed in further detail in appendix \ref{sec:3dposts}; here we will simply
note that in each of these wings, $f$ takes on either very small or large
values (as expected since $b$ and $f$ are correlated). If one stays within the
high probability region, $f$ is somewhat better constrained than the
$f$-posterior in Figure \ref{fig:triangle_plot} would suggest.
\subsection{Best-fit model}\label{sec:bestfit}
Next, finding the best-fit point by minimizing $\chi^2$ requires some
care, because the exact number of e-foldings $N$ is too computationally
expensive to calculate during the minimization procedure. We therefore obtain the
best-fit point by factoring in a prior in $N_0$ (given by
eq.~\ref{efoldings_formula}), where in this case our prior is chosen to be
strongly peaked (a Gaussian with dispersion $\sigma_{N_0}$=0.5) about a
particular value $N_{0,i}$. This procedure is performed over a grid of 16
values $N_{0,i}$ regularly spaced from 45 to 65. After finding each best-fit
point, the exact number of e-foldings $N$ is calculated for each; the best-fit
point with the highest likelihood value that remains within the range $N \in
[50,60]$ is chosen as the global best-fit point.
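A minimal sketch of this grid-of-priors search, assuming user-supplied
callables \texttt{chi2}, \texttt{N0\_of} (equation \ref{efoldings_formula}),
and \texttt{N\_exact} (equation \ref{efoldings_exact}); the names and interface
are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def penalized_chi2(theta, N0_i, chi2, N0_of, sigma=0.5):
    """chi^2 plus a Gaussian prior pinning the approximate e-folding
    number N_0(theta) near the grid value N0_i (dispersion sigma=0.5)."""
    return chi2(theta) + (N0_of(theta) - N0_i)**2 / sigma**2

def grid_best_fit(theta0, chi2, N0_of, N_exact):
    candidates = []
    for N0_i in np.linspace(45.0, 65.0, 16):   # 16 regularly spaced values
        res = minimize(penalized_chi2, theta0,
                       args=(N0_i, chi2, N0_of), method="Nelder-Mead")
        if 50.0 <= N_exact(res.x) <= 60.0:     # keep only valid exact N
            candidates.append((chi2(res.x), res.x))
    return min(candidates, key=lambda t: t[0]) # lowest chi^2 wins
\end{verbatim}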
The resulting best-fit parameter values are given in the first row of Table
\ref{tab:bestfit}. Errors are given for the inflation parameters $r$, $p$, $b$
and $\delta$, determined by the 16th and 84th percentiles of the posterior
probability distribution in each parameter.
In Figure \ref{fig:allcls} we plot the CMB angular power spectrum over the
multipole range $2 \leq \ell \leq 2500$ for the best-fit axion monodromy model
(dark line) compared to the base Planck model (red line; defined as the
$\Lambda$CDM model with a power-law spectrum, i.e.~zero running). Note that
for $\ell \gtrsim 30$, the two models are indistinguishable, while at lower
multipoles the axion monodromy power is suppressed by up to $\approx$ 20\%
compared to the base Planck model. This suppression is a consequence of the
running spectral index coming from the sinusoidal term in the potential
(eq.~\ref{potential}).
\begin{table*}
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& $r$ & $p$ & $b$ & $\delta$ & $f$ & $10^9 A_s$ & $H_0$ & $\Omega_m$ & $\Omega_b$ & $\tau$ & $N$ & $\alpha_*$ & $n_s$ & $n_{s,0}$\\
\hline
Best-fit & $0.07^{+0.05}_{-0.04}$ & $1.55^{+0.56}_{-0.92}$ & $0.44^{+0.24}_{-0.45}$ & $0.30^{+0.32}_{-0.31}$ & 0.53 & 2.209 & 67.9 & 0.307 & 0.048 & 0.093 & 58.5 & -0.014 & 0.959 & 0.979\\
\hline
$r=0.13$ & 0.13 & 2.37 & 0.41 & 0.20 & 0.58 & 2.216 & 68.2 & 0.304 & 0.048 & 0.094 & 59.9 & -0.020 & 0.960 & 0.970\\
$r=0.19$ & 0.19 & 3.08 & 0.31 & 0.12 & 0.54 & 2.210 & 68.3 & 0.303 & 0.048 & 0.093 & 51.9 & -0.025 & 0.960 & 0.961\\
\hline
\end{tabular}
\caption{Best-fit axion monodromy models}
\label{tab:bestfit}
\end{table*}
\subsection{Constraints on the tensor-to-scalar ratio $r$ and potential
parameter $p$}\label{sec:nr_constraints}
Marginal posteriors in $p$ and $r$ are shown in Figure \ref{fig:triangle_plot}.
Strikingly, the BICEP2 result $r = 0.2$ lies just outside the 99\% confidence
region, while the highest probability $r$-value is $\approx 0.06$, similar to
the best-fit value $r \approx 0.07$ (Table \ref{tab:bestfit}). Likewise, $p >
3$ lies outside the 95\% confidence region.
From the e-folding constraint given by equation \ref{efoldings_formula} it is
apparent that the exponent $p$ and the tensor-to-scalar ratio $r$ should be
strongly correlated, provided the amplitude $b$ does not get too large. The
joint posterior in $p$ and $r$ shown in Figure \ref{fig:triangle_plot} shows
that this is indeed the case; a high $r$-value is correlated with a large
exponent $p$. The width of the posterior around this correlation is determined
by the allowed number of e-foldings, and also the range of $b$-values.
Since the axion decay constant $f$ is poorly constrained, it is important to
check whether these results are sensitive to the chosen prior on $f$. From the
$r$ vs. $f$ and $p$ vs. $f$ plot in Figure \ref{fig:triangle_plot}, one can
anticipate that the inferred probabilities of $r$ and $p$ are only weakly dependent on
$f$, since essentially no correlation with $f$ is seen for either parameter. To
verify this, in Figure \ref{fig:fpriors} we plot posterior distributions in $r$
for four different assumed priors in $f$: a flat prior in the fiducial range $f
\in (0.1,1)$ (solid line), a flat prior in the ranges $f \in (0.05,0.25)$ (dashed
line) and $f \in (0.1,2)$ (dot-dashed line), and finally a log prior in $f$
over the fiducial range. As can be seen in the figure, the inferred probability
in $r$ (and likewise $p$) is quite robust to changes in the assumed $f$ prior.
Since the other model parameters ($b$,$\delta$) are fairly well-constrained,
our result for $r$ is thus quite insensitive to the priors in the axion model
parameters.
\begin{figure}
\includegraphics[height=0.7\hsize,width=0.92\hsize]{allcl_models.eps}
\caption{Best-fit TT angular power spectrum for the base Planck model with zero
running (dark line), and axion monodromy model (red line). Error bars are shown
for $2 \leq \ell < 50$. These best-fit spectra are determined using a
combination of Planck+lensing+WP+high $\ell$ data.}
\label{fig:allcls}
\end{figure}
Why does the data prefer a small (but nonzero) $r$ value? Generally, higher $r$
means a larger tensor contribution to $TT$ anisotropies at low $\ell$. This
exacerbates the mismatch with the observed low-$\ell$ power deficit, and thus requires a higher
negative running to make up for it. However, the likelihood at high $\ell$
shows no strong preference for negative running (\citealt{planck2013}), and the
fit worsens at high $\ell$ as the running increases. Thus, the fit generally
worsens as $r$ is increased toward large values. This is not the whole story
however, because $r$ is significantly more constrained in the axion monodromy
model compared to the usual constant running model, for which $r < 0.26$ at the
95\% confidence level (\citealt{planck2013}). This can be seen in Figure
\ref{fig:r_vs_ns}, where we plot a joint posterior in $r$ vs. $n_s$ for the
constant running model (red) and the axion monodromy model (blue). While both
models prefer a similar $n_s$, $r$ is more constrained in axion monodromy: the
point $r=0.2$, $n_s\approx 0.96$ lies outside the 95\% CL contour for axion
monodromy, while it is well within the same contour for the constant running
model.
Additionally, note from Figure \ref{fig:r_vs_ns} that $r$ prefers to be zero in
the constant running model, whereas the axion monodromy model has its peak
probability near the best-fit $r\approx 0.07$. The reason why axion monodromy
does not prefer $r=0$ is simple: one can see from equations \ref{ns_formula}
and \ref{alpha_formula} that $n_s$ and $\alpha_*$ are dependent on $r$, whereas
in the constant running model, these parameters can be chosen independently of
$r$. The best-fit constant running model has a zero tensor contribution
($r=0$) and running $\alpha \approx -0.012$. This shows that having a nonzero
tensor contribution cannot be entirely made up for by negative running---even
if running is allowed, the likelihood itself still prefers $r=0$. In the axion
monodromy model however, this solution is impossible, since the running is
proportional to $r$ according to equation \ref{alpha_approx}. Thus, $r\approx
0$ would necessarily imply negligible running, and likewise the spectral index
would be a poor fit. Instead, the best-fit axion monodromy model settles for an
$r$-value ($\approx 0.07$) which is high enough to give the necessary running
and spectral index, but low enough that the tensor contribution doesn't spoil
the fit too much. This compromise necessarily results in a slightly worse fit
at low $\ell$ compared to the constant running model.
We still need to understand why axion monodromy is more restrictive at the
large $r$ end compared to constant running. To investigate this, Table
\ref{tab:delta_chisq} shows the change in $\chi^2$ for four models compared to
the base Planck model with zero running ($\alpha=0$). The four models are: the
best-fit axion monodromy model (for which $r \approx 0.07$), the best-fit
constant running model (for which $r=0$), and the same two best-fit models when
$r$ is fixed to 0.19 rather than varied. Note that while the total $\chi^2$ is
decreased by a similar amount in both best-fit models, $\chi^2$ is increased in
the $r=0.19$ models, with the axion monodromy $r=0.19$ model giving the worst fit. To
understand why this is the case, we must break this down into the various
likelihoods representing different multipole ranges.
Starting with the high-$\ell$ likelihoods, note that while the fit to the
CAMspec likelihood (comprising the majority of multipoles in the Planck data)
is barely affected for the general best-fit models, the fit is dramatically
worsened in the models for which $r=0.19$. This is a consequence of the
aforementioned high running required to fit the low-$\ell$ likelihood well for
large $r$. The same is true (though not as dramatically) for the ACT/SPT
likelihoods. For the Commander likelihood (low $\ell$), on the other hand, all
models show an improved fit, but the constant running model gives a
significantly better fit; this disparity is greater for the $r=0.19$ models. In
the following section we investigate the nature of this disparity more closely
and why it worsens with large $r$.
\begin{figure}
\includegraphics[height=0.7\hsize,width=0.92\hsize]{rpost_fpriors.eps}
\caption{Posterior distribution in the tensor-to-scalar ratio $r$ for four
different priors in $f$: three uniform priors covering different ranges in $f$
(solid, dashed, dot-dashed lines) and a log prior in $f$ (dotted line). The
inferred probability in $r$ (and likewise $p$) is quite robust since it is
insensitive to the assumed prior in $f$, while the other model parameters
($b$,$\delta$) are fairly well-constrained.}
\label{fig:fpriors}
\end{figure}
\subsection{Comparison to constant running model at low multipoles}\label{sec:lowl}
In Figure \ref{fig:lowcls} we plot the best-fit angular $TT$ power spectrum for
$2 \lesssim \ell \lesssim 50$ for the base Planck model with zero running (red
solid line), axion monodromy (blue dashed line), and constant running (magenta
dotted line) models. The black error bars give the errors in each individual
multipole. Note that while both models achieve similar reduction in power at
very low $\ell$, the axion monodromy model achieves very little reduction in
power for $\ell \gtrsim 25$. The deficit in power in the data (points with
error bars) is most noticeable over the range $20 \lesssim \ell \lesssim 30$,
and in this range the constant running model is a significantly better fit.
Why does the constant running model fit the low-$\ell$ likelihood better? There
are a few reasons. As discussed in the previous section, axion monodromy does
not have the freedom to choose $r$, $\alpha$, and $n_s$ independently. In
particular, $r=0$ is disfavored because it produces a negligibly small running
and incorrect spectral index. Unfortunately however, a nonzero $r$ slightly
worsens the fit at low $\ell$ by including a tensor contribution.
\begin{figure}
\includegraphics[height=0.9\hsize,width=0.92\hsize]{r_vs_ns.eps}
\caption{Joint posteriors in the spectral index $n_s$ and the tensor-to-scalar
ratio $r$, both evaluated at the pivot scale $k=0.05$ Mpc$^{-1}$. Constraints
(68\% and 95\%) are shown for the constant running model (red) and axion
monodromy model (blue). While the constant running model prefers $r=0$, in
axion monodromy the highest probability occurs for $r\approx 0.07$ because a
nonzero $r$-value is required to fit the running and spectral index well. Note
also the constant running model allows for a higher $r$, primarily because it
is not subject to the e-folding constraint.}
\label{fig:r_vs_ns}
\end{figure}
Even if $r$ is fixed to the same value in both models, however, the constant
running model achieves a better fit at low $\ell$. In Figure
\ref{fig:axion_vs_run} we plot the primordial scalar power spectrum for axion
monodromy models (solid lines) and constant running models (dashed lines), with
$r$ fixed to 0.07 (dark lines) and 0.19 (red lines) in each case. For either
$r$-value, we find that the constant running model achieves greater suppression
of power at low $k$ (and hence, low $\ell$). For the $r=0.19$ case we need a
much greater suppression compared to $r=0.07$ to make up for the additional
tensor power. However, it is evident that the axion monodromy $r=0.19$ model
achieves much less suppression at low $k$ compared to the corresponding
constant running model. This occurs because the $r=0.19$ model has a
relatively high exponent $p \approx 3$, as required by the e-folding constraint
(eq.~\ref{efoldings_formula}). As a result, the relatively large monomial term
``softens'' the magnitude of the running at low $k$. The higher the $p$ (and
hence, $r$), the more the running is mitigated at low $k$. This is the
principle reason why axion monodromy provides a worse fit compared to constant
running when $r$ is large.
Even for relatively small amplitudes, as $r$ is increased to ever higher
values, $p$ must also be high (eq.~\ref{efoldings_formula}). However,
sufficiently high $p$ solutions lead to a very large or infinite number of
e-foldings due to the appearance of a local minimum in the potential from the
oscillation (unless the amplitude $b$ is very close to zero), as discussed in
Section \ref{sec:priors}. This effect excludes nearly all regions of parameter
space where $p > 4$, and a large fraction of parameter space where $p > 3$.
Thus, the direct effect of the oscillations on the total number of e-foldings
$N$ restricts the allowed parameter space for large $r$ even further.
\begin{figure}
\includegraphics[height=0.7\hsize,width=0.92\hsize]{lowcl_models.eps}
\caption{Best-fit CMB $TT$ angular power spectrum at low multipoles for the
base Planck model with zero running (red solid line), the axion monodromy model
(blue dashed line), and the constant running model (magenta dotted line).
Although axion monodromy and the constant running model exhibit a similar
suppression of power at the lowest multipoles, the latter has a greater
suppression of power in the range $10 < \ell < 50$, partly because there is no
tensor contribution ($r=0$) and because it is not subject to the e-folding
constraint.}
\label{fig:lowcls}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
& Best-fit model & Best-fit constant $\alpha$ & Best-fit ($r=0.19$) & Best-fit constant $\alpha$ ($r=0.19$)\\
\hline
Lowlike (WMAP Pol.) & -0.5 & 0.1 & -1.1 & -1.0\\
Lensing & 0.8 & 0.9 & 1.0 & 1.2\\
Commander ($2 \leq \ell < 50$) & -1.8 & -2.5 & -0.6 & -1.7\\
CAMspec ($50 \lesssim \ell \lesssim 2500$) & 0.2 & 0.1 & 1.5 & 2.2\\
ACT/SPT ($600 \lesssim \ell \lesssim 3500$) & -0.5 & -0.2 & 0.2 & -0.1\\
\hline
Total & -1.7 & -1.6 & 1.0 & 0.6\\
\hline
\end{tabular}
\caption{$\Delta \chi^2$ compared to best-fit base Planck model (with $\alpha=0$)}
\label{tab:delta_chisq}
\end{table*}
\subsection{Comparison to constant running model at high multipoles}\label{sec:highl}
As Table \ref{tab:delta_chisq} shows, axion monodromy actually improves the fit
for the ACT/SPT likelihood (high $\ell$) compared to the constant running
model. We can understand why this is the case from the analytic expression for
$n_s$ at the pivot scale $k=k_*$ (derived in appendix
\ref{sec:ns_alpha_formulas}). The spectral index $n_s$ has two contributions,
\begin{equation}
n_s \approx n_{s,0} + \Delta n_s,
\label{ns_total}
\end{equation}
where $n_{s,0}$ is the contribution from the monomial term in the potential
(i.e.~the unmodulated spectral index), while $\Delta n_s$ is the contribution
from the sinusoidal term. By comparing to equation \ref{ns_formula} and
substituting the approximate e-folding constraint (equation
\ref{efoldings_formula_approx}), we find that
\begin{eqnarray}
n_{s,0} & = & 1 - \frac{r}{8}\left(1 + \frac{2}{p}\right) \label{ns0_exact} \\
& \approx & 1 - \frac{r}{8} - \frac{1}{N_0} \label{ns0_approx}
\end{eqnarray}
while for the oscillating term we have
\begin{equation}
\Delta n_s \approx -\frac{b}{f}\sqrt{\frac{r}{8}}\sin\delta + \frac{r}{4}\left(1-\frac{1}{p}\right)b\cos\delta.
\label{delta_ns}
\end{equation}
Generally, the first term in equation \ref{delta_ns} dominates over the second
term provided that the phase shift $\delta$ is not very small or negative,
since $b$ and $f$ are of a similar order. Given that the data prefers a
positive $\delta$, we may therefore expect $\Delta n_s < 0$, so that
$n_{s,0}$ should be at least slightly higher than $n_s$.
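(As a check, inserting the best-fit values from Table \ref{tab:bestfit},
namely $r\approx0.07$, $p\approx1.55$, $b\approx0.44$, $\delta\approx0.30$,
and $f\approx0.53$, into equations \ref{ns0_exact} and \ref{delta_ns} gives
$n_{s,0}\approx0.980$ and $\Delta n_s\approx-0.023+0.003\approx-0.020$, hence
$n_s\approx0.96$; the first term indeed dominates, and both values match the
table.)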
In Figure \ref{fig:post_derivedparams} we plot derived posteriors in $n_s$ and
$n_{s,0}$ using equations \ref{ns_total}, \ref{ns0_exact}, and \ref{delta_ns}.
The constraint in $n_s$ ($n_s \approx 0.96$) is quite similar to that obtained
in the base Planck model (\citealt{planck2013}), which is expected since it is
well constrained by the data. By contrast, the best-fit value for $n_{s,0}$ is
$\approx 0.98$, which is strikingly high. This follows from equation
\ref{ns0_approx}: since the number of e-foldings is restricted to the
approximate range 50-60, the lower the $r$-value (and the corresponding
$p$-value), the higher $n_{s,0}$ must be. Since the best-fit model has
$r\approx 0.07$, this relatively low $r$ accounts for why $n_{s,0}$ is so high.
\begin{figure}
\includegraphics[height=0.7\hsize,width=0.92\hsize]{pk_axion_vs_run.eps}
\caption{Best-fit scalar power spectrum for axion monodromy (solid lines) vs.
constant running models (dashed lines), all normalized to 1 at $k=0.05$
Mpc$^{-1}$ for the sake of comparison. For $r=0.07$ (dark lines), axion
monodromy achieves only slightly less suppression at low $k$ (corresponding to
low $\ell$) compared to the constant running model. For $r=0.19$ (red lines)
however, axion monodromy has milder negative running at low $k$ and thus cannot
achieve enough suppression to make up for the tensor contribution at low
$\ell$. This is a consequence of the e-folding constraint, which enforces a
large exponent $p$ and limits the oscillation amplitude $b$ of the power
spectrum.}
\label{fig:axion_vs_run}
\end{figure}
The question remains, why does the data at high $\ell$ prefer such a high
$n_{s,0}$? According to Table \ref{tab:delta_chisq}, the best-fit axion monodromy
model shows the greatest improvement in $\chi^2$ for the ACT/SPT likelihood. We
have seen that at high multipoles, the data shows no strong preference for a
running spectral index, with $n_s\approx 0.96$ preferred even at higher $\ell$.
This is also largely true when the high-$\ell$ data from ACT/SPT is factored
in, although it should be noted that ACT and SPT are in tension here; SPT
prefers negative running while ACT does not (\citealt{valentino2010}). Thus,
while negative running improves the fit at low $\ell$, it worsens the fit at
the high-$\ell$ end unless the running is relatively small. In the best-fit
axion monodromy model however, the running diminishes in magnitude as one
proceeds to smaller scales (higher $\ell$), and hence the effective spectral index
is slightly higher at high $\ell$, approaching $n_{s,0}$. This is why the relatively
high value of $n_{s,0}$ is preferred, since it allows a smaller running at high
$\ell$.
A similar conclusion regarding a high $n_{s,0}$ was reached by
\cite{meerburg2014}, whose best-fit solution has $n_{s,0} \approx 1$
for the case where they incorporate the BICEP2 constraint $r \sim 0.2$.
However, in their work the full inflation model is not used; the spectral index
$n_{s,0}$ is chosen completely independently of $r$, and no e-folding
constraint is applied. Equation \ref{ns0_approx} shows that for $r=0.2$,
$n_{s,0}$ cannot be larger than 0.975, and this upper limit is only possible in
the limit $N_0 \rightarrow \infty$. From this it can be concluded that while
the high-$\ell$ data prefers a high $n_{s,0}$, this is forbidden by the
e-folding constraint unless $r$ is relatively small. Indeed, the $r=0.19$
axion monodromy model in Table \ref{tab:bestfit} has $n_s \approx n_{s,0}$ and
the fit is actually slightly worsened at the high-$\ell$ end.
\subsection{Can the number of e-foldings be amended to allow a higher
tensor-to-scalar ratio?}\label{sec:amending_efoldings}
While the e-folding constraint disfavors high $r$-values, one might question
whether our model might be amended so that the number of e-foldings satisfies
the constraints even for high $r$. In particular, our model assumes that the
oscillation amplitude of the potential $a$ remains constant all the way to the
end of inflation. For this reason, solutions with sufficiently large $r$ (and
hence $p$) tend to dramatically inflate the number of e-foldings near the end
of inflation, excluding these solutions. However, if the oscillation amplitude
were to diminish before the end of inflation, these high-$r$ solutions could in
principle still satisfy the e-folding constraint.
Furthermore, from the point of view of the underlying microphysics in the axion
monodromy model, there is no necessary reason to expect the modulation
parameters to remain constant during inflation. On the contrary, these
parameters are determined by the values of dynamical moduli fields (e.g.
related to the size and shape of the compactified extra-dimensional manifold)
which might well evolve with time as inflation progresses. Of course, if the
oscillation amplitude were to increase with time, the resulting constraint on
$r$ would be even stricter. However it is reasonable to consider the case
where the oscillation amplitude dies out well before the end of inflation. To
consider this, we note that the approximate e-folding parameter $N_0$
(eq.~\ref{efoldings_formula}) gives the number of e-foldings without the direct
contribution of the sinusoidal oscillation; further, by far the largest
contribution of the oscillation to the number of e-foldings $N$ occurs at
scales smaller than that probed by the CMB. This is particularly the case for
solutions with high $p$ (and hence, high $r$) values. Therefore, we can use
$N_0$ as a conservative estimate of the number of e-foldings in the case where
the oscillations die out at scales smaller than that of the CMB.
With this in mind, we apply a flat prior in the number of e-foldings $N_0 \in
(50,60)$ to approximate the case where the amplitude dies out at smaller
scales. The resulting constraint on $r$ is shown in Figure
\ref{fig:post_r_N0}, plotted next to the fiducial result which uses a flat
prior in $N$. As can be seen, there is greater probability for large $r$, but
even in this case $r > 0.2$ is disfavored at the 95\% CL, for the same reason
outlined in the previous section: the large amplitude (and low $p$) required to
achieve enough suppression of power at low $\ell$ is forbidden by the e-folding
constraint (eq.~\ref{efoldings_formula}).
In spite of the above difficulties, there are certainly ways to decrease the
number of e-foldings to allow for higher $r$. For example, if the exponent $p$
switches to a lower value at small scales, inflation would end more quickly and
the number of e-foldings would be decreased. From the standpoint of
single-field inflation this seems highly unnatural, although such a transition
might naturally occur if multiple fields contribute to the effective inflaton
potential (for an example of this scenario see \citealt{kobayashi2014}). From
the Occam's razor point of view however, it seems more likely that the
tensor-to-scalar ratio is simply smaller than the BICEP2 result suggests,
particularly given the concerns about contamination by dust polarization.
\begin{figure}
\includegraphics[height=0.7\hsize,width=0.9\hsize]{post_r_N0.eps}
\caption{Posterior in $r$ assuming two different e-folding priors. Solid line
corresponds to a flat prior in $N$, which gives the number of e-foldings under
the assumption the oscillation in the potential continued with undiminished
amplitude to the end of inflation. Dotted line corresponds to a flat prior in
$N_0$, which approximates the number of e-foldings in the scenario that the
oscillation in the potential died out rapidly after the modes observed in the
CMB exited the horizon. The former scenario gives smaller probability for large
$r$-values, since it excludes regions of parameter space for which oscillations
dominate near the end of inflation, resulting in a large or infinite number of
e-foldings. However, $r=0.2$ is located outside the 99\% confidence region for
the flat $N$ prior, and outside the 95\% confidence region for the flat $N_0$
prior and thus is disfavored in either scenario.}
\label{fig:post_r_N0}
\end{figure}
\section{Can axion monodromy alleviate the small-scale problems of
$\Lambda$CDM?}\label{sec:smallscale_probs}
\subsection{Implications for dwarf galaxy formation}\label{sec:dwarfs}
We have shown that oscillation in the power spectrum can allow for a
significant tensor-to-scalar ratio, alleviating the tension with the standard
$\Lambda$CDM + power-law spectrum model at very large scales, while still
satisfying the e-folding requirement. However, there are also apparent
departures from $\Lambda$CDM for small-scale structure which can be mitigated
by the same mechanism. First, there is the ``missing satellites'' problem,
which refers to the fact that cosmological dissipationless N-body simulations
predict a much larger number of dwarf satellite galaxies around the Milky Way
than are actually observed (\citealt{klypin1999}).
The second problem is that in the centers of baryon-poor galaxies the measured
dark matter density is systematically lower than predicted by dark matter-only
$\Lambda$CDM simulations. In many cases, such as in low surface brightness
galaxies and field dwarfs with rotation (\citealt{kuzio2006};
\citealt{simon2005}; \citealt{adams2014}; \citealt{gentile2004};
\citealt{salucci2012}; \citealt{oh2011}), this lower density is due to the
presence of a constant dark matter density core (the so-called ``core-cusp''
problem). The problem seems to extend to the lowest masses, and recent work
with dwarfs in the Local Group (\citealt{boylan2012};
\citealt{tollerud2014}; \citealt{kirby2014}) and further away
(\citealt{ferrero2012}) show that they are also systematically under-dense
compared to simple expectations from $\Lambda$CDM, the so-called ``too big to
fail'' problem (\citealt{boylan2011}).
It is possible that the simple $\Lambda$CDM expectations are incorrect and
feedback from supernovae (\citealt{governato2012}), reionization
(\citealt{bullock2000}), and other effects of star formation
(\citealt{brooks2012}) all change these expectations dramatically. There is no
consensus regarding either of these two problems. Reionization will prevent
small satellites from being bright enough to be observed but it is unclear
whether this by itself explains the luminosity distribution of the known
satellites. There are $\Lambda$CDM-based solutions for the second problem but
none that alleviate the problem for all the different types of galaxies.
\begin{figure}
\includegraphics[height=0.7\hsize,width=0.92\hsize]{pk_rel.eps}
\caption{The ratio of the matter power spectrum of axion monodromy compared to
that of the base Planck model. Curves shown correspond to the best-fit model
with the tensor-to-scalar ratio $r$ fixed to 0.07 (dark line), 0.13 (red dashed
line), and 0.19 (blue dotted line). Note there is a reduction in power of
$\sim$ 20-35\% at $k=10$ Mpc$^{-1}$, roughly the scale relevant to dwarf galaxy
formation.}
\label{fig:relative_power}
\end{figure}
Here we ask whether the reduced power on small scales alleviates some of these
issues. If the matter power spectrum is suppressed at the scales $k \sim 10$
Mpc$^{-1}$, then this would change the formation of dwarf galaxies. Halos would
collapse at lower redshift, resulting in lower central dark matter densities.
This reduced power may also change the way baryonic feedback operates in
low-luminosity systems. This reduction in power has been considered in light of
the BICEP2 result: if one allows a constant negative running $\alpha \approx
-0.02$ in the power spectrum to allow for the high tensor-to-scalar ratio
$r=0.2$ reported by BICEP2, \cite{garrison2014} found using N-body simulations
that the too-big-to-fail problem is significantly alleviated, although not
eliminated entirely (it should be noted that baryonic effects are not included
in that work). The reduction in power at dwarf scales in this ``BICEP2
cosmology'' (\citealt{abazajian2014}) is approximately 40\% compared to vanilla
$\Lambda$CDM. However, as mentioned before, a constant running violates the
e-folding constraint. Since the model considered in this work achieves
significant running with the requisite amount of e-foldings, we consider here
the suppression of power at small scales from our model. Again, we do not
assume the BICEP2 result, but rather consider three different $r$-values 0.07,
0.13, and 0.19.
In Figure \ref{fig:relative_power} we plot the relative power obtained by
dividing the matter power spectrum in our model by the best-fit Planck power
spectrum with zero running. In this plot we use the fitting functions of
\cite{eisenstein1998} for the transfer function. For each $r$-value considered,
we find the best-fit solution using the same method outlined in Section
\ref{sec:results}, except with $r$ fixed to the given value. We plot the
best-fit cases where $N$ lies in the fiducial range $50-60$. Note that for each
case, the difference in power compared to the base Planck model comes not only
from the primordial power spectrum parameters, but also because of differences
in the transfer function which is sensitive to cosmological parameters,
particularly $\Omega_m$ and $H_0$. For $r=0.19$ we found that most of our
solutions had $p>3$ and thus in many cases, a local minimum formed in the
potential rendering the number of e-foldings infinite. We did find one solution
in the desired range, which is shown in the figure.
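As a rough cross-check of these numbers, the following sketch evaluates the
primordial spectrum ratio in the constant-running approximation; it ignores
both the oscillatory softening and the transfer-function differences discussed
above, so it only approximately reproduces the quoted suppression. The
parameter values are taken from the best-fit entries of Table
\ref{tab:bestfit}:
\begin{verbatim}
import numpy as np

def primordial_ratio(k, ns=0.959, alpha=-0.014, ns_base=0.96, kstar=0.05):
    """Ratio of a constant-running primordial spectrum to a pure
    power law with index ns_base; k and kstar in Mpc^-1."""
    lnk = np.log(k / kstar)
    return (k / kstar)**((ns - ns_base) + 0.5 * alpha * lnk)

print(primordial_ratio(10.0))  # ~0.82, i.e. roughly 20% suppression
\end{verbatim}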
At dwarf scales ($k\sim 10$ Mpc$^{-1}$), we find a 22\% reduction in power for
$r=0.07$, 28\% reduction for $r=0.13$, and 33\% reduction for $r=0.19$. This is
not as drastic a reduction as in the BICEP2 cosmology ($\approx$ 40\%), and
recall in addition that $r=0.19$ is disfavored by the Planck data (Figure
\ref{fig:fpriors}). Nevertheless, a reduction in power of $\sim$ 20-30\% is
entirely consistent with the data and e-folding constraint, and may be expected
to alleviate the too-big-to-fail problem, particularly when considered in
combination with baryonic feedback effects. One can also see from Figure
\ref{fig:relative_power} that even greater suppression occurs at smaller
scales, and this has a bearing on the missing satellites problem. In addition,
the corresponding suppression of substructure would substantially reduce the
expected dark matter annihilation signal (\citealt{garrison2014}).
\begin{figure}
\includegraphics[height=0.85\hsize,width=0.92\hsize]{lya_2D.eps}
\caption{Constraints on the amplitude $\Delta_L^2$ and slope $n_{eff}$ of the
linear matter power spectrum at $z=3$ and $k=0.009$ (km/s)$^{-1}$ where $k$ is
given in velocity space. Grey contours are from Lyman-$\alpha$ forest data
analyzed in \cite{mcdonald2005}; green contours are from Lyman-$\alpha$ forest
data from the BOSS survey (\citealt{boss2014a}); red and blue contours are from
the marginalized posterior probability for the axion monodromy model and the
base Planck model (with zero running), respectively. Note that the more recent
BOSS data are in tension with the base Planck model at greater than 2-sigma,
whereas they are consistent with axion monodromy to within 1-sigma.}
\label{fig:lyalpha}
\end{figure}
\subsection{Comparison to power spectrum constraints from Lyman-$\alpha$ forest
data}\label{sec:lyman_alpha}
Aside from dwarf galaxies, another approach to detect oscillations in the power
spectrum is to observe at high redshift where the matter power spectrum is
close to linear at small scales. The Lyman-$\alpha$ forest observed in quasar
spectra can reveal structure down to $\sim$ 100 kpc scales in the approximate
redshift range $z=2$ to $z=4$, which is in the quasi-linear regime. Past
studies have found constraints on the running of the spectral index, though not
with the accuracy required for a positive detection of running of order $\alpha
\approx -0.01$ (\citealt{seljak2006}; \citealt{viel2004}). However, more recent
quasar observations by the Baryon Oscillation Spectroscopic Survey (BOSS)
collaboration (\citealt{dawson2013}; \citealt{boss2014a}) have dramatically
enlarged the available dataset of quasars with measured Ly$\alpha$ forest power
spectra. In Figure \ref{fig:lyalpha} we plot inferred probability contours in
the amplitude $\Delta_L^2$ and slope $n_{eff}$ of the linear matter power
spectrum at $z=3$ and $k=0.009$ (km/s)$^{-1}$ (corresponding to roughly 1 Mpc scales).
The grey contours are from \cite{mcdonald2005}, while the green contours are
from the more recent data from \cite{boss2014b}. The red and blue contours are
from derived posterior probability distributions for axion monodromy and the
base Planck model respectively. From this figure it is evident that while the
older data cannot distinguish between the two models, the new BOSS data favors
axion monodromy: the base Planck model prediction for $n_{eff}$ is more than
2-sigma away from that of \cite{boss2014b}, while the axion monodromy
prediction is consistent to within 1-sigma. More generally, this comparison
indicates that a negative running in the power spectrum may now be favored by
recent Ly$\alpha$ forest data, at the level required to accommodate a
substantial tensor-to-scalar ratio.
There are a number of other approaches to probing the small-scale power
spectrum besides Lyman-$\alpha$ forest data. Down the road, 21-cm observations
are a promising approach to constraining the power spectrum at even smaller
scales (\citealt{shimabukuro2014}) since, unlike the Lyman-$\alpha$ forest,
much higher redshifts can be probed where the data is not as limited at large
$k$ by thermal broadening. In the meantime, resolving the discrepancy between
SPT and ACT on whether negative running is preferred in the CMB at high $\ell$
will be an important step. Within the coming decade, advanced redshift surveys
such as WFIRST (\citealt{spergel2013}) and LSST may offer the best window onto
the power spectrum at scales smaller than those probed by the CMB (\citealt{meerburg2014}), with
the caveat that galaxy bias must be properly taken into account. In any event,
a non-trivial primordial power spectrum remains an intriguing possibility for
alleviating small-scale problems. This will be true particularly if alternate
forms of dark matter (WDM, self-interacting) are ruled out as solutions to the
too-big-to-fail and/or missing satellites problems.
\section{Conclusions}\label{sec:conclusions}
In this paper we have investigated whether inflation models can allow for a
large gravitational wave background while remaining consistent with the
CMB power spectrum at horizon scales (low $\ell$). We have shown that axion
monodromy inflation accomplishes this naturally through Planck-scale
corrections, producing a gentle oscillation in the inflaton potential
(eq.~\ref{potential_original}). This generates a running spectral index in the power
spectrum while still achieving enough e-foldings to solve the horizon problem,
in stark contrast to the usual constant running model. We have fit our model to a
combination of Planck, ACT, SPT, and WMAP low-$\ell$ polarization CMB data
together with a prior on the number of e-foldings. The best-fit model
parameters are given in Table \ref{tab:bestfit}, while inferred probability
distributions in the inflation model parameters are shown in Figure
\ref{fig:triangle_plot}.
We find a best-fit tensor-to-scalar ratio $r = 0.07^{+0.05}_{-0.04}$ and thus
the predicted imprint of gravitational waves on the CMB is within reach of
B-mode polarization experiments. It is possible that these
primordial B-modes have already been observed by the BICEP2 experiment; however
if one assumes negligible foreground contamination, the BICEP2 result
$r\approx0.2$ is disfavored at the 99\% confidence level. This is primarily a
consequence of the e-folding constraint and the requirement to fit the spectral
index at high $\ell$ in addition to the low-$\ell$ power spectrum. Since a
running spectral index is the most straightforward way to reconcile the BICEP2
result with Planck, it is significant that attempting to implement running in
the underlying inflation theory disfavors such a large $r$, as we have shown
here. While it is possible that a dramatic change in the inflaton potential at
small scales (high $k$) can alter the e-folding constraint to allow for a
higher $r$, we find it more likely that the tensor-to-scalar ratio is simply
smaller than the best-fit BICEP2 result suggests, particularly in light of the
uncertainties about foreground contamination by dust polarization in the BICEP2
field.
In addition to the large gravitational wave amplitude, the (best-fit) axion
monodromy model makes two corresponding predictions: first, despite the
additional tensor power on horizon scales, the overall power at low
multipoles is reduced as a consequence of the running spectral index, providing
a better fit to the CMB power spectrum at large scales. The second prediction
is that the matter power spectrum is suppressed at the scale of dwarf galaxies,
and thus axion monodromy can alleviate some of the small-scale problems of
$\Lambda$CDM---in particular, the too-big-to-fail and missing satellites
problems. We find that our best-fit models reduce the power at scales relevant
to dwarf galaxy formation ($k \sim 10$ Mpc$^{-1}$) by as much as $\sim$35\%
depending on the assumed $r$-value, with the greatest reduction achieved at
large $r$. However, 20-30\% suppression is more likely, which will alleviate
the too-big-to-fail problem and may solve it entirely when combined with
baryonic feedback effects, as discussed in \cite{garrison2014}. Additionally,
we find that axion monodromy is preferred by recent Ly$\alpha$ forest data over
the base Planck model without running (Figure \ref{fig:lyalpha}).
If axion monodromy (or a similar oscillating large-field model) accounts for
the reduced power at large and small scales, then the tensor-to-scalar ratio is
likely to lie in the range $r\approx 0.03-0.12$ (68\% CL) and hence the imprint
of gravitational waves on the CMB will be observable by B-mode experiments in
the future (\citealt{abazajian2013}). With this comes the tantalizing prospect
of constraining physics at the Planck scale through sky surveys, as we have
demonstrated here. Thus, future CMB experiments, in combination with probes of
the power spectrum at small scales, may settle the issue of whether
Planck-scale physics manifest in inflation can reconcile the standard
$\Lambda$CDM cosmology with data at all observable scales of the Universe.
\section*{Acknowledgements}
QM would like to thank James Bullock, Shahab Joudaki, Jose Ceja and Shea
Garrison-Kimmel for their encouragement and feedback at the
beginning stages of this project.
We gratefully acknowledge a grant of computer time from XSEDE allocation
TG-AST130007. MK was supported in part by NSF grant PHY-1214648.
This research was also supported, in part, by a grant of computer time from the
City University of New York High Performance Computing Center under NSF Grants
CNS-0855217, CNS-0958379 and ACI-1126113.
\section{Introduction}
Galactic nuclear outflows and winds, which are powered by energetic processes
in the central regions of galaxies,
have dominated the transfer of mass, energy,
momentum, and metals from the disk to the halo and even the
intergalactic medium (IGM). The feedback from the nuclear activity is also crucial for
the formation and evolution of galaxies and the IGM
\citep[e.g.,][]{2005ARA&A..43..769V,2008MNRAS.387..639H,2018MNRAS.474.3673R,2021AN....342.1135M}.
The Galactic center (GC) is the nearest laboratory to study the details
of the feedback effects from the nuclear region of a galaxy.
Over the past several decades, a large number of exciting findings
on the multiphase outflows from the GC have been revealed based on the compelling
multiwavelength observations of the ground-based telescopes
\citep[e.g.,][]{1984Natur.310..568S,2013Natur.493...66C,2016ApJ...831...72H,
2019Natur.573..235H,2020ApJ...899L..11K} and the space observatories
\citep[e.g.,][]{2000ApJ...540..224S,2003ApJ...582..246B,2006ApJ...646..951K,
2010ApJ...724.1044S,2013ApJ...779...57K,2014ApJ...793...64A,2015ApJ...799L...7F,
2017ApJ...834..191B,2020Natur.588..227P}.
The various studies show that past energetic events
(e.g., AGN-driven and/or starburst-driven winds) occurring several to tens of Myr ago
may be responsible for the nuclear outflow/wind phenomena and
the related large-scale structures \citep[see, e.g., the recent reviews by][]
{2020A&ARv..28....2V,2021A&A...646A..66P}.
The above studies mainly focus on the radio continuum, IR dust, optical,
X-ray, $\gamma$-ray emission, and some UV absorption lines on
the outflow structures of the Milky Way.
On the other hand, the traditional tracers of the neutral atomic and molecular gas
(i.e., 21~cm \HI\ and 2.6~mm CO emission) should also be very useful for revealing
the spatial and dynamical features of the Galactic nuclear wind on a large scale.
In fact, the absence of high-$z$ atomic gas within the inner disk
(i.e., Galactocentric distance of $R_{\rm GC}\lsim$~3~kpc) has been found
by \citet{1984ApJ...283...90L} from the observed atomic gas distribution.
The molecular gas in the inner 3~kpc of the Galactic disk is deficient
with respect to that in the $R_{\rm GC}\gsim$~3~kpc region based on the
early CO observations and the MWISP data
\citep[e.g.,][]{1975ApJ...202...30B,1977ASSL...70..165B,2001ApJ...547..792D,2021ApJ...910..131S}.
These results are consistent with the results from the large-scale
\HI\ and CO surveys \citep[see, e.g., reviews of][]{
1976ARA&A..14..275B,1990ARA&A..28..215D,1991ARA&A..29..195C,
1992Burton,2001ApJ...547..792D,2009ARA&A..47...27K,2015ARA&A..53..583H}.
Moreover, \HI\ study displays a good correlation between
the \HI\ voids and the Fermi bubbles, indicating the physical
connection between them \citep[][]{2016ApJ...826..215L}.
Benefiting from the large-scale \HI\ surveys, further studies show that the kinematic
features of the atomic gas are very likely related to the Milky Way nuclear wind
\citep[e.g.,][]{2009ApJS..181..398M,2013ApJ...770L...4M,2016ApJ...826..215L,
2018ApJ...855...33D,2020ApJ...888...51L,2020Natur.584..364D}.
Some authors also suggest that the large-scale multiwavelength features
can be explained by the interaction between the Galactic nuclear wind
(i.e., the GC superbubbles) and the Galactic gaseous disk
\citep[e.g.,][]{2017PASJ...69L...8S,2021MNRAS.506.2170S}.
In this paper, we present the results of high-dynamic-range CO observations
providing a large-scale view of the inner Galaxy. The kinematic information
of the gas allows us to construct the needed molecular cloud (MC) samples
from the CO emission, which is very helpful for tracing the structure of
the gaseous disk of the Milky Way.
At $\gsim$260~pc (and even $\gsim$600~pc) above and below the Galactic plane,
the enhanced CO emission is discovered to be concentrated in a narrow range of
$l\sim18^{\circ}-22^{\circ}$ or $R_{\rm GC}\sim$~2.6--3.1~kpc
for the Sun's distance to the GC of $R_{0}$=8.15~kpc
\citep[e.g.,][]{Reid19},
indicating the interface between the Milky Way nuclear wind and the gaseous disk.
Furthermore, some cometary CO structures exhibiting the head
toward the Galactic plane and the tail away from the plane show
that the cold molecular gas is entrained in multiphase flows.
The multiphase outflows are probably driven by the warm/hot gas
from the Milky Way nuclear wind.
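As a quick geometric check, at the tangent points $R_{\rm GC}=R_{0}\sin l$ and
the distance is $d=R_{0}\cos l$; for $l=20^{\circ}$ this gives
$R_{\rm GC}\approx2.8$~kpc and $d\approx7.7$~kpc, so a latitude of
$|b|=2^{\circ}$ corresponds to $z=d\tan b\approx270$~pc, consistent with the
$\gsim$260~pc heights quoted above.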
In Section 2, we describe the CO, \HI, and IR data used in the paper.
In Section 3, we discuss the results and then give a simplified picture
to explain the multiwavelength observations.
Finally, Section 4 gives a brief summary based on our new findings.
\section{CO, \HI, and IR Data}
The CO data are from the Milky Way Imaging Scroll Painting survey
\citep[i.e., the MWISP project; see details in][]{2019ApJS..240....9S}.
Briefly, we employed the CO emission in the region of
$l=12^{\circ}$--$26^{\circ}$ and $|b|\lsim5\fdg1$ to investigate the
molecular gas distribution near the tangent points,
where the gas's distance is well determined.
The spatial and spectral
resolutions of the CO data are $\sim50''$ and $\sim$~0.2~$\km\ps$,
respectively. After fitting the baseline and calibrating the main-beam efficiency,
the reduced 3D data cubes (i.e., the position-position-velocity space, hereafter PPV)
with a grid spacing of 30$''$ have a typical rms
noise level of $\sim$~0.5~K for
\twCO\ ($J$=1--0) and $\sim$~0.3~K for
\thCO/C$^{18}$O ($J$=1--0) at a channel width of 0.16~$\km\ps$.
We mainly focus on the \twCO\ ($J$=1--0) emission in this paper.
In this work, the \HI\ data from the all-sky \HI\ survey
\citep[][]{2016A&A...594A.116H} are used for large-scale comparisons
with the MWISP CO data.
The angular and velocity resolutions of the \HI\ data are
16\farcm2 and 1.5~$\km\ps$, respectively. The typical RMS sensitivity
of the HI4PI data is $\sim$43~mK.
We also use the data from the survey of the Wide-field Infrared
Survey Explorer \citep[WISE;][]{2010AJ....140.1868W} to investigate
the correlation between the molecular gas distribution and the dust emission.
The 12~$\mu$m and 22~$\mu$m WISE data used here have
a spatial resolution of 6\farcs5 and 12\farcs0, respectively.
\section{Results and Discussions}
\subsection{Thinner Gaseous Disk within $R_{\rm GC}<$3~kpc}
The \HI\ data have shown that the atomic gas layer is noticeably thin in the
region of $R_{\rm GC}\lsim$~3~kpc, probably indicating the interaction
between the Milky Way nuclear wind and the gaseous disk
\citep[][]{1984ApJ...283...90L,2016ApJ...826..215L}.
Early CO observations also showed that
the cold compressed gas is confined to a thinner layer within $R_{\rm GC}<$3~kpc
\citep[e.g.,][]{1975ApJ...202...30B,1976ARA&A..14..275B,1977ASSL...70..165B}.
Recently, a large number of small, high-velocity \HI\ clouds have been found
far above and below the disk toward the GC, suggesting that the large-scale atomic gas
is physically associated with the Fermi bubbles and hence the Milky Way nuclear wind
\citep[][]{2013ApJ...770L...4M,2018ApJ...855...33D,2020ApJ...888...51L}.
Furthermore, \cite{2020Natur.584..364D} have revealed that the cold and dense
molecular gas far from the plane can even survive in the hot and shocked nuclear wind.
These results show that energetic processes in the GC have profound effects on
the distribution and evolution of the interstellar medium (ISM) on a large scale,
which is consistent with the observational and theoretical studies on
other galaxies \citep[see the summary in][]{2020A&ARv..28....2V}.
As a widely used tracer of the MCs, CO data can provide
the large-scale spatial and kinematic information of the H$_2$ gas.
In this study, we explore the distribution and properties
of the molecular gas far from the Galactic plane based on the MWISP CO data.
The study is limited to the CO emission near the tangent points, which avoids the
distance ambiguity and thus reduces foreground confusion from unrelated structures
\citep[see, e.g., the schematic view of Figure~1 in][]{2021ApJ...910..131S}.
We have confirmed that the inner molecular disk consists of two components:
the thin CO layer with a thickness of $\sim$85~pc and the thick layer
with $\sim$280~pc \citep[][]{2021ApJ...910..131S}.
The well-known thin CO disk harbors the majority of the molecular gas
in the Milky Way, while the thick CO disk is composed of many
small clouds in relatively high-$z$ regions.
Some MCs in the thick CO layer are probably related to the feedback of
the energetic star-forming activities near the Galactic plane
\citep[see, e.g., the case of microquasar SS~433 in][]{2018ApJ...863..103S}.
Interestingly, the two molecular gas layers are both thinner
within the $R_{\rm GC}\lsim$~3~kpc region \citep[i.e., an FWHM of $\sim$60~pc
and $\sim$150~pc for the thin and thick CO layers, respectively;
see Figures 3-5 in][]{2021ApJ...910..131S}.
Figure~\ref{tangent} shows a large-scale distribution of the atomic and
molecular gas toward the tangent points. That is, we integrated only the gas
emission with velocities greater than the terminal velocities.
Here the terminal velocities can be determined from
the most positive velocity of the CO emission in the first quadrant
of the Milky Way \citep[see details in][]{2021ApJ...910..131S}.
Obviously, the disk traced by the atomic and molecular gas is indeed
thinner in regions of $l\lsim 22^{\circ}$ (or $R_{\rm GC}\lsim$~3~kpc).
Is the thinner gaseous disk within $R_{\rm GC}\lsim$~3~kpc related to
the large-scale \HI\ voids or the Milky Way nuclear wind?
Is a substantial amount of cold neutral gas entrained by the Milky Way nuclear wind?
If so, why is the neutral gas not destroyed by the high-velocity wind
in such harsh environments?
And what is the relation between the cool outflows and the hot wind from the
nuclear region of the Milky Way?
The MWISP CO data with a wide spatial dynamic range, in combination with
\HI\ and other multiwavelength observations, can give some hints on these topics.
\subsection{Enhanced high-$z$ CO Emission toward $l\sim$19\fdg1--22\fdg5}
MCs far from the Galactic plane may reveal some features of the large-scale
structures related to the Milky Way nuclear wind.
To search for the high-$z$ MCs in the MWISP 3D datacube,
we use the density-based spatial clustering of applications with noise
(DBSCAN\footnote{https://scikit-learn.org/stable/auto\_examples/cluster/plot\_dbscan.html})
clustering algorithm.
The algorithm is useful for identifying CO clouds in a large, noisy dataset,
which is crucial for revealing possible unusual features traced by weak emission.
Full details of the algorithm can be found in our previous studies
\citep[e.g.,][]{2020ApJ...898...80Y,2021ApJ...910..131S},
and a brief description of the method is shown below.
In order to further improve the signal-to-noise ratio, the MWISP raw data were resampled to
0.5$\km\ps$, corresponding to a typical rms noise level of $\sim$~0.3~K per channel
for the \twCO\ emission. For the PPV space, the minimum cutoff is 2$\times$rms and
MinPts is set to 4 in the DBSCAN algorithm.
Then, post-selection criteria (i.e., $T_{\rm peak}\geq 4\times$rms,
$\Delta l\times\Delta b\geq$~1~beam, $\Delta v\geq$~3~channels, and
the minimum number of neighborhood voxels $\geq$ 16)
are used on the samples to remove the noise contamination.
Here $T_{\rm peak}$ is the peak intensity per channel in units of K,
$\Delta l$ ($\Delta b$) is the spatial size in units of arcmin,
and $\Delta v$ is the spectral extent in units of $\km\ps$.
Note that we use $T_{\rm peak}\geq 4\times$rms as the post-selection
criterion, which allows us to detect weaker CO emission across
the whole dataset compared to our previous studies
\citep[e.g., $T_{\rm peak}\geq 5\times$rms in][]{2021ApJ...910..131S}.
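For illustration, a minimal Python sketch of this extraction step is given
below; the use of scikit-learn's DBSCAN on thresholded voxel indices and all
variable names are our own simplifications, not the exact MWISP pipeline
(e.g., the beam-area and channel-extent checks are only indicated).
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def extract_clouds(cube, rms, cutoff=2.0, min_pts=4):
    """Cluster PPV voxels above cutoff*rms into cloud candidates."""
    voxels = np.argwhere(cube >= cutoff * rms)   # (N, 3) voxel indices
    labels = DBSCAN(eps=1, min_samples=min_pts,
                    metric='chebyshev').fit_predict(voxels)
    clouds = []
    for k in set(labels) - {-1}:                 # label -1 marks noise
        idx = voxels[labels == k]
        peak = cube[tuple(idx.T)].max()
        # post-selection: T_peak >= 4*rms and >= 16 voxels; the
        # beam-size and >= 3-channel checks would use the grid info
        if peak >= 4.0 * rms and len(idx) >= 16:
            clouds.append(idx)
    return clouds
\end{verbatim}
With eps=1 and the Chebyshev metric, two voxels are connected if they touch
in any of the 26 neighboring directions of the PPV grid, mimicking MinPts=4
connectivity clustering on the resampled cube.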
Furthermore, we identified the high-$z$ MCs toward the tangent points
according to the selection criteria of $v_{\rm MC}\gsim v_{\rm tan}(l)-7$~km~s$^{-1}$
and $|z_{\rm MC}(l,b,v)| \geq 110$~pc
\citep[i.e., $\geq3\times\sigma_z$ of the thin CO disk; see Table 3 in][]{2021ApJ...910..131S}.
Here $v_{\rm MC}$ and $v_{\rm tan}(l)$ are the velocity of the MC and
the corresponding tangent velocity at a certain longitude, respectively.
And $\sigma_z$ is the standard deviation of the vertical distribution of
the CO emission. By considering the cloud-cloud velocity dispersion of MCs
\citep[][]{Malhotra94,2021ApJ...910..131S},
we adopted the value of $v_{\rm tan}(l)-7$~km~s$^{-1}$ to build
the high-$z$ MC samples near the tangent points.
To reduce striping artifacts and other uncertainties in the whole data,
all selected MC samples are manually checked based on the cloud's spatial and
velocity features in the PPV space. We find that the procedure is efficient
and valid. The method with the improved criteria substantially increases
the number of MC samples with weak CO emission, which is important for revealing
the molecular gas distribution traced by small and faint clouds
at the marginal signal-to-noise ratio of the MWISP survey.
Some interesting structures with weaker CO emission are indeed revealed
in our subsequent analysis (e.g., small MCs with $T_{\rm peak}\sim$1~K; see Section 3.3).
In total, 321 MCs near the tangent points were identified as the high-$z$
MC samples (i.e., $|z|\gsim$~110~pc) in the region of $l=12^{\circ}$--$26^{\circ}$
and $|b|\lsim5\fdg1$ (see purple circles in Figure~\ref{tangent}).
These discrete high-$z$ MCs usually have small sizes
($\sim$0\farcm7--2\farcm2 or $\sim$1--5 pc in radius),
weak emission ($\sim$1--3~K in the peak temperature), and
high virial parameters ($\sim$5--30), which are consistent with
our previous studies \citep[e.g.,][]{2021ApJ...910..131S}.
Obviously, the high-$z$ MCs are well coincident with the distribution of
the \HI\ emission near the tangent points, indicating the physical association
between them (Figure~\ref{tangent}).
We also check the spectral properties of the high-$z$ CO clouds and
the \HI\ gas at the same locations.
Both the CO clouds and the \HI\ gas have comparable LSR velocities near
the tangent points, confirming their association.
Figure~\ref{lb} displays the 321 high-$z$ MCs in the $R_{\rm GC}$--$z$ space.
Besides the thinner gas layers discussed in Section 3.1, interestingly enough,
we also found that the extreme high-$z$ MCs (hereafter EHMCs)
are located in two narrow regions of [$l\sim$19\fdg1 to 20\fdg5, $b\sim$2\fdg0 to 5\fdg1]
and [$l\sim$20\fdg5 to 22\fdg1, $b\sim-$2\fdg0 to $-5$\fdg1]
(see Figures~\ref{tangent} and \ref{lb}).
These EHMCs (i.e., $|z|\gsim$260~pc or $\sim3\times$FWHM
of the thin CO layer) are unusual in such a narrow region
far above and below the Galactic plane.
The molecular gas mass of the EHMCs is also concentrated in the narrow range of
$R_{\rm GC}$=2.6--3.1~kpc (i.e., $\gsim9.2\times10^{3}\Msun$ in regions of
$\sim 2\times \Delta R_{\rm GC} \times \Delta z$=$2\times 220$~pc$\times 410$~pc,
see the histogram in Figure~\ref{lb} and Section 3.3).
These EHMCs, together with other high-$z$ MCs at $R_{\rm GC}\lsim$~2.3--2.6~kpc,
constitute molecular crater-wall structures lying along the edges of the
\HI\ voids (i.e., from $l\sim17^{\circ}$ and $|b|\gsim1^{\circ}$
to $l\sim22^{\circ}$ and $|b|\gsim5^{\circ}$, see Figure~\ref{tangent}).
We suggest that the molecular crater walls, in combination with the \HI\ voids
above and below the Galactic plane, are related to the GC superbubbles
seen in radio, X-ray, and $\gamma$-ray emission
\citep[e.g., Fermi bubbles;][]{2010ApJ...724.1044S}.
\subsection{EHMCs and Cometary Structures}
Table~1 lists the parameters of each EHMC:
(1) the ID of the EHMCs, arranged in order of increasing Galactic longitude;
(2) and (3) the EHMC's Galactic coordinates ($l$ and $b$);
(4) the EHMC's LSR velocity ($v_{\rm LSR}$);
(5) the EHMC's one-dimensional velocity dispersion ($\sigma_v$);
(6) the EHMC's peak emission ($T_{\rm peak}$);
(7) the EHMC's effective radius (i.e., $d\times\sqrt{(\theta_{\rm MC}^2-\theta_{\rm beam}^2)/\pi}$,
where $\theta_{\rm MC}$ and $\theta_{\rm beam}$, in units of arcmin, are the angular
size of the CO emission and the beam size, respectively);
(8) the EHMC's distance obtained from the tangent points, i.e., $d=$8.15~kpc$\times$cos($l$)/cos($b$);
(9) the EHMC's $z$ height defined as $z=d\times$sin($b$);
(10) the EHMC's mass estimated from the CO-to-H$_2$ conversion factor method,
$X_{\rm CO}=2\E{20}$~cm$^{-2}$(K~km~s$^{-1})^{-1}$ \citep[e.g.,]
[]{2001ApJ...547..792D,2013ARA&A..51..207B};
and (11) the EHMC's virial parameter $\alpha=\frac{M_{\rm virial}}{M_{\rm Xfactor}}
=\frac{5\sigma_{v}^2R}{GM_{\rm Xfactor}}$,
where $R$ is the effective radius of the EHMCs and $G$ the gravitational constant.
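As a cross-check of these conversions, the sketch below reproduces
columns (7)--(11) in Python; the constants and function names are ours,
and the mean molecular mass of 2.8$m_{\rm H}$ per H$_2$ molecule
(including helium) is our assumption.
\begin{verbatim}
import numpy as np

PC_CM  = 3.0857e18            # pc in cm
MSUN_G = 1.989e33             # solar mass in g
G_CGS  = 6.674e-8             # cm^3 g^-1 s^-2
R0_KPC = 8.15                 # Sun-GC distance (Reid et al. 2019)

def tangent_point(l_deg, b_deg):
    """Distance d (kpc) and height z (pc) at the tangent point."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    d = R0_KPC * np.cos(l) / np.cos(b)       # d = R0 cos(l)/cos(b)
    return d, d * 1e3 * np.sin(b)            # z = d sin(b)

def xfactor_mass(W_Kkms, area_pc2, Xco=2e20):
    """H2 mass (Msun) via N(H2) = Xco * W, with 2.8 m_H per H2."""
    sigma = Xco * W_Kkms * 2.8 * 1.674e-24   # g cm^-2
    return sigma * area_pc2 * PC_CM**2 / MSUN_G

def virial_alpha(sigma_v_kms, R_pc, M_xf):
    """alpha = M_virial/M_Xfactor = 5 sigma_v^2 R / (G M_Xfactor)."""
    M_vir = 5 * (sigma_v_kms*1e5)**2 * R_pc*PC_CM / G_CGS / MSUN_G
    return M_vir / M_xf

# e.g., EHMC G021.548-03.414: d ~ 7.6 kpc, z ~ -450 pc
print(tangent_point(21.548, -3.414))
\end{verbatim}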
Generally, the 47 EHMCs also have small sizes of several arcminutes
(the mean value of 3\farcm1 and the median value of 2\farcm4),
weak emission (the mean value of 1.6~K and the median value of 1.8~K),
and molecular masses spanning 40--3100~$\Msun$
(the mean value of 340~$\Msun$ and the median value of 110~$\Msun$).
Based on the new criteria (Section 3.2), about half of the EHMCs
are small clouds with a mass of $\lsim 100\ \Msun$.
As the location anchors, the EHMCs are essential to trace the molecular
crater-wall structures located along the edges of the \HI\ voids
at $\sim18^{\circ}$--$22^{\circ}$ (Figure~\ref{tangent}).
Moreover, some large EHMCs display spatially resolved structures.
Two zoomed-in views of the EHMCs
are shown in Figure~\ref{twosamples} for EHMC G019.957$+$02.863 and
EHMC G021.548$-$03.414 (also see regions of the red boxes in Figure~\ref{tangent}).
Based on the new MWISP CO data
at moderately high angular resolution and fairly high sensitivity,
we find that the two EHMCs traced by CO emission (black contours)
display the elongated head-to-tail structures pointing away from the
Galactic plane (see blue arrows in Figure~\ref{twosamples}).
We also find that the elongation of the molecular gas coincides exactly with
the atomic gas ridges revealed by \HI\ emission (see purple contours in Figure~\ref{twosamples})
and the dust filaments traced by IR emission.
Remarkably, the cometary structure of the EHMC G021.548$-$03.414
can be clearly seen from the CO emission (i.e., see CO black contours
of the $\sim$~20~pc long structure in the bottom panel of Figure~\ref{twosamples}).
The CO emission of the head of the EHMC is brightest toward the Galactic plane,
while the cometary tail is faint away from the plane.
At the head of the EHMC, the \twCO\ peak temperature is $\sim$~4.4~K,
which is about four times the \thCO\ emission detected at the brightest part of
the \twCO\ emission (i.e., $T_{\rm peak13}\sim4\times$rms13=1.1~K, see Table~1).
We also show the peak temperature of the detected \thCO\ emission for the EHMCs in Table~1.
No C$^{18}$O emission is detected for any identified EHMCs.
Contrary to the bright CO emission at the head of the cometary cloud,
the IR dust emission is bright in the tail regions where the atomic gas is also
enhanced based on the \HI\ data with a grid of $5'$ (Figure~\ref{twosamples}).
The feature probably indicates that the entrained molecular gas (and dust)
in the tail is heated by the surrounding warm/hot gas, leading to the multiphase flows
from the cold head to the warm tail in the crater-wall region.
Based on the WISE data, the dust temperature at the tail of
the EHMC G021.548$-$03.414 is probably $\sim$100--500~K, which is larger
than the molecular gas temperature at the head of the cloud
(e.g., the estimated molecular gas temperature of $\lsim$~10~K based on
the optically thick \twCO\ emission and the beam filling factor of 1).
We must stress that the derived dust temperature is dependent on
the accurate IR flux from the thermal emission at several bands.
More observations are necessary to draw a solid conclusion.
On the other hand, there are at least three MC structures
(i.e., EHMC ID 13, 14, and 15 in Table~1)
concentrated in the region toward EHMC G019.957$+$02.863.
Figure~\ref{pv} shows the position--velocity (PV) diagrams of the two
EHMCs from the head to the tail (see blue arrows in Figure~\ref{twosamples}).
For EHMC G021.548$-$03.414, the velocity gradient of $\sim -0.23\km\ps$pc$^{-1}$
is found to be from the denser head to the faint tail.
Considering a small inclination angle of $i\sim10^{\circ}-20^{\circ}$
for the outflows on the sky, the true velocity gradient could be larger
(e.g., $\sim -1\km\ps$pc$^{-1}$ for the moving flows roughly perpendicular
to the line of sight).
The entrained molecular gas of the EHMC is moving toward us
based on the observed blueshifted CO emission.
These new findings indicate that the head of the high-$z$ MCs mainly
contains cold and dense molecular gas, while the extended filamentary tail
incorporates a substantial amount of atomic gas, molecular gas,
and dust stripping from the MC's head.
That is, the molecular gas is changing phase to become the atomic gas
(and very likely the warm ionized gas) as it moves to the high-$z$ regions.
The material in the tail is comoving with the surrounding
warm/hot ionized gas from the low-$z$ regions to the high-$z$ regions,
providing a supply of fresh gas to the Milky Way halo and eventually
falling back onto the plane (e.g., the velocity of cool outflows at
$\sim140-330\km\ps$ is less than the escape velocity of the Milky Way;
see Section 3.4.2).
The association of the bright dust emission with the cool gas in the tail
shows that the dust can survive long in the crater walls with
the local high density (e.g., at least several Myr; see Section 3.4.3).
The survival of dust is very important
for efficient formation of molecules in the interface between
the cool outflows and the warm/hot wind.
The observed features are also consistent with recent simulation results
showing that molecular and dusty clouds are likely to survive long enough
in warm/hot winds \citep[e.g.,][]{2018MNRAS.480L.111G,2020MNRAS.492.1970G,
2019MNRAS.486.4526B,2022MNRAS.510..551F}.
Some other elongated high-$z$ CO clouds lying in the edges of the
\HI\ voids are also shown in Figure~\ref{lb4s}.
We note that the MCs in panels (c) and (d) of Figure~\ref{lb4s} form
a $\sim$~140~pc long structure from $b\sim-$2\fdg6 to $b\sim-$3\fdg5,
supporting the existence of the large-scale molecular crater walls
along the edges of the \HI\ voids (and the boundary of the GC superbubbles;
see Figure~\ref{tri}).
\subsection{Impact of Galactic Winds on the Gaseous Disk}
\subsubsection{Origin of the EHMCs}
There are probably two scenarios that can explain the observed EHMCs far from the Galactic
plane, i.e., (1) the local star-formation-feedback scenario
and (2) the Milky Way nuclear wind scenario.
We discuss the two scenarios below.
In the local star-formation-feedback scenario,
feedback from star-forming activities near the Galactic plane has profound
effects on the surrounding ISM, changing the physical properties and
distribution of the gas.
The energetic processes from massive stellar winds and/or supernova explosions can
produce large-scale structures such as supershells, superbubbles, and large-scale chimneys.
In particular, the region between $l\sim20^{\circ}$ and $30^{\circ}$ is close to the near tip
of the Galactic bar and hosts intense star-forming regions in the Galactic plane.
The region also shows a rich population of \HI\
extraplanar clouds that are not seen in other regions of the Galaxy
\citep[the \HI\ scale height in the region of $l\sim20^{\circ}$
is twice the scale height of the corresponding region in $l\sim -20^{\circ}$, see
][]{2008ApJ...688..290F,Ford10}.
Therefore, the high-$z$ MCs ($|z|\gsim$~100~pc) in the thick CO disk may be the debris of
the disk gas that was blown away by local star-formation feedback near the Galactic plane
\citep[see, e.g., discussions in][]{2021ApJ...910..131S}. For example,
the identified high-$z$ MCs at ($l\sim$40\fdg3, $b\sim -$4\fdg3) are probably associated with
the nearby SS~433/W50 system \citep[e.g., $z_{\rm MC}\sim-$400~pc at a distance of 5~kpc; see]
[]{2018ApJ...863..103S}.
Is it possible that the EHMCs at $l\sim20^{\circ}$ are from the energetic processes
of stellar feedback near the Galactic plane, for example, the Ophiuchus superbubble
\citep[see details in][]{2007ApJ...656..928P}?
Based on results from the MWISP data, we find that the high-$z$ MCs with very faint
CO emission are highly turbulent, indicating that
the cold gas is mixing with the warmer gas and the MCs are either dispersing or being
assembled by external dynamical processes. These small-sized (1--4~pc) high-$z$ MCs
are probably short-lived, e.g., less than their typical internal crossing time of several Myr.
The clouds thus cannot move far from the disk if they come directly from the
Galactic midplane, consistent with the observed MC distribution of the thick CO
disk \citep[e.g., $\sigma_z\sim$110--120~pc; see][]{2021ApJ...910..131S}.
Here we ignore the possibility that the high-$z$ molecular gas may condense in situ.
Indeed, we find no detailed correlation between the Ophiuchus superbubble in \HI\
emission and the high-$z$ MCs near the tangent point in a region of
$l\sim20^{\circ}$--$40^{\circ}$ and $z$=100--600~pc.
On the other hand, the Ophiuchus superbubble appears to be one-sided and does not
extend below the Galactic plane \citep[e.g.,][]{2007ApJ...656..928P},
although the HI4PI data show a lot of anomalous structures below
the Galactic plane \citep[see also][]{Ford10,2021ApJ...910..131S}. We
argue that the enhanced EHMCs at $l\sim18^{\circ}$--$22^{\circ}$
(or $R_{\rm GC}\sim$2.6--3.1~kpc) are not associated with the old Ophiuchus superbubble
\citep[e.g., an age of about 30~Myr; see][]{2007ApJ...656..928P}.
In principle, some EHMCs are probably related to the young star-formation feedback
(e.g., several Myr) near the Galactic plane. However, the extended direction of the
cometary EHMCs (see Figure~\ref{twosamples}) and the coherent EHMC samples in Figure~\ref{tri}
show that the large and cometary EHMCs are likely related to dynamical processes toward
regions of $l\lsim18^{\circ}$--$22^{\circ}$ (or toward the GC direction),
in which the star-forming activities near the Galactic plane
are not very intense except for the Central Molecular Zone (CMZ) region.
According to the above discussions, we propose that most of the EHMCs
in $R_{\rm GC}\sim$2.6--3.1~kpc are associated with the Milky Way nuclear wind.
We summarize the observational results as follows:\\
(1) The dominant EHMCs are just located near the edges of the large-scale \HI\
voids toward $l\sim18^{\circ}-22^{\circ}$ or $R_{\rm GC}\sim$~2.6--3.1~kpc
(see Figures~\ref{tangent}, \ref{lb}, and \ref{tri}), indicating the association
between them.\\
(2) The large EHMCs along the edges of the \HI\ voids display
unusual head-to-tail structures pointing away from the Galactic plane
(or the direction away from the GC; see Figures~\ref{twosamples} and \ref{lb4s}).\\
(3) The observed velocity gradient of EHMC G021.548$-$03.414, together with its
large velocity width (Figure~\ref{pv}), supports the idea that the MC is unstable
and is partially destroyed by ambient dynamical processes.
That is, the molecular gas is accelerated from the head to the tail,
as indicated by the blueshifted CO profile of the tail structure (see the bottom
panel of Figure~\ref{pv}).
The feature supports the entrainment scenario that the material is moving from the
Galactic plane to the high-$z$ regions.\\
(4) The IR emission of EHMC G021.548$-$03.414 (Figure~\ref{twosamples}) is bright at
its long tail but faint at the dense head, favoring the scenario of the molecular gas
being ablated and heated from the cloud edges by a warm/hot wind.
These observational features can be naturally explained by the entrainment scenario
that the cold gas is moving with the multiphase medium at the interface between
the warm/hot nuclear wind of the Milky Way and the gaseous disk.
The process also leads to the concentrated molecular gas (and mass) in narrow
regions of $l\sim18^{\circ}-22^{\circ}$ (or $R_{\rm GC}\sim$2.6--3.1~kpc) and
$|z|\gsim 260$~pc (see the cometary EHMCs along the edges of the \HI\
voids in Figures~\ref{twosamples} and \ref{tri}, $|z|\sim 400-450$~pc$\gsim 3 \sigma_z$
of the thick CO disk).
The \HI\ voids, which are physically associated with the Fermi bubbles,
are thus surrounded by enhanced CO emission at low latitudes of
$|b|\gsim2^{\circ}$--$5^{\circ}$ and $R_{\rm GC}\lsim$2.6--3.1~kpc.
This scenario also agrees well with the recent observation that the Milky Way nuclear wind
can remove gas from the disk to the halo \citep[e.g.,][]{2021ApJ...923L..11C}
and can accelerate MC fragments into the high-$z$ regions
\citep[e.g.,][]{2020Natur.584..364D}.
Finally, note that not all EHMCs in Table~1 are related to the Milky Way nuclear wind.
Only samples of ID 05--31 in Table~1 are used to calculate the mass of the
crater walls (see Figure~\ref{lb}).
Among the EHMCs in the crater walls, over 60\% of samples are resolved and
nearly 30\% of EHMCs (ID 13, 14, 15, 19, 23, 24, and 27) display cometary structures.
These features are unusual in the narrow region of $R_{\rm GC}\sim$2.6--3.1~kpc.
And the dominant molecular mass in the CO crater walls is from the large-size
and cometary EHMCs that are located along the edges of the \HI\
voids (e.g., $R_{\rm GC}\sim$2.6--3.1~kpc; see EHMCs in Figure~\ref{tri}
and the caption).
Therefore, the possible contamination from the local star-formation feedback
near the Galactic plane has little effect on the subsequent estimation assuming
that the cometary and large EHMCs along the crater walls are (very likely)
of Milky Way nuclear wind origin.
Of course, observing a similar region at the other side of the crater
(i.e., $l\lsim -20^{\circ}$), where star-formation activity is not very intense,
would be a neat confirmation.
However, the MWISP survey cannot cover that longitude range.
\subsubsection{Cool Outflows in the Milky Way}
Multiphase outflows are a common feature in galaxies \citep[e.g.,][]{2020A&ARv..28....2V}.
For the Milky Way, multiwavelength observations have revealed many interesting
large-scale structures related to the Galactic nuclear wind,
i.e., the GC Lobes in radio \citep[e.g.,][]{1984Natur.310..568S,2013Natur.493...66C},
IR \citep[e.g.,][]{2003ApJ...582..246B}, X-ray \citep[e.g.,][]{2020Natur.588..227P},
$\gamma$-ray \citep[i.e., the Fermi bubbles in][]{2010ApJ...724.1044S} emission
and UV absorption \citep[e.g.,][]{2008ApJ...679..460Z,2015ApJ...799L...7F,
2017ApJS..232...25S,2018ApJ...860...98K,2020ApJ...898..128A},
showing the bipolar outflows on scales of a few degrees to tens of degrees.
New multiwavelength analyses also found chimney-like structures that
are physically related to the intermittent activity near the GC
\citep[][]{2019Natur.567..347P,2021A&A...646A..66P}.
Recently, the \HI\ studies have revealed spatial
\citep[e.g., \HI\ holes in][]{2016ApJ...826..215L} and kinematic atomic gas features
\citep[e.g., anomalous high-velocity clouds extending up to the high-$z$ regions;]
[]{2013ApJ...770L...4M,2018ApJ...855...33D,2020ApJ...888...51L}
associated with the Galactic wind.
These studies support the existence of large-scale multiphase outflows in our Galaxy
(i.e., neutral gas at $T\lsim 10^{2}$~K, warm ionized gas at $T\sim 10^{3}-10^{4}$~K,
and hot gas at $T\gsim 10^{6}$~K).
Based on the discussions in Section 3.4.1, we suggest that large
amounts of molecular gas, concentrated at the edges of the \HI\
voids associated with the Fermi bubbles, are entrained in large-scale
multiphase outflows from the Galactic gaseous disk to the high-$z$ regions.
The detected EHMCs could be an important mass reservoir of the cool outflows
in the Milky Way. The molecular mass in the crater-wall structures
of $l\sim$19\fdg1--22\fdg5 and $|z|\gsim$~260~pc
(i.e., Area$\sim 2\times \Delta R_{\rm GC} \times \Delta z$=
$2\times 220$~pc width $\times410$~pc height;
see Figure~\ref{lb} and ID 05--31 in Table~1) is estimated to be
$\gsim9.2\times10^{3}\Msun$ by adopting the CO-to-H$_2$ conversion factor of
$X_{\rm CO}=2\E{20}$~cm$^{-2}$(K~km~s$^{-1})^{-1}$ \citep[e.g.,]
[]{2001ApJ...547..792D,2013ARA&A..51..207B}.
Meanwhile, many small clouds with weak CO emission (e.g., $T_{\rm peak}\lsim$~1~K)
may exist in the walls of the crater-like structures, but we cannot pick them
out owing to the beam dilution and the limited sensitivity of the survey data.
For a conservative estimate, the mean volume density of the molecular gas
in the crater walls is $\gsim3\times10^{-4}\ \Msun$~pc$^{-3}$
(or $\gsim0.01$~H~cm$^{-3}$)
assuming a thickness of $\sim$200~pc along the line of sight (LOS).
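For reference, this conversion can be written out in a few lines
(a minimal sketch; the unit constants are standard values, the variable
names are ours, and the atomic hydrogen mass is used for the conversion
to H~cm$^{-3}$):
\begin{verbatim}
PC_CM, MSUN_G, M_H = 3.0857e18, 1.989e33, 1.674e-24

vol_pc3 = 2 * 220 * 410 * 200          # two walls, ~200 pc LOS depth
rho = 9.2e3 / vol_pc3                  # ~3e-4 Msun pc^-3
n_H = rho * MSUN_G / PC_CM**3 / M_H    # ~0.01 H cm^-3
\end{verbatim}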
Due to confusion with the unrelated \HI\ emission near the Galactic plane,
the total mass of the atomic gas cannot be precisely measured in the same
region of the EHMC concentrations.
Based on the \HI\ emission near the tangent points, however,
the upper limit of the total atomic gas mass
can be roughly estimated to be $\lsim1.3\times10^{5}\ \Msun$
from $I_{{\rm H\textsc{i}}}(v_{\rm LSR}>v_{\rm tan})$,
leading to an atomic gas density of $\lsim4\times10^{-3}\ \Msun$~pc$^{-3}$
(or $\lsim0.1$~H~cm$^{-3}$). Note that the estimated \HI\ mass contains
contributions from some unrelated gas structures with $v_{\rm LSR}<v_{\rm tan}$
because of the broad line width of the \HI\ emission.
Therefore, the derived atomic gas mass (and density) toward
the tangent points is probably an upper limit.
The mean density of the cold gas estimated above is at least one order of
magnitude larger than the hot gas density in the bubbles
\citep[e.g., the often-used value of $\sim10^{-3}$~H~cm$^{-3}$ in]
[]{2003ApJ...582..246B,2015ApJ...808..107C,2015MNRAS.453.3827S,
2016ApJ...829....9M}, suggesting that the hot gas in the nuclear wind
is surrounded by dense and cold shells at the boundaries of
the Fermi bubbles near the Galactic gaseous disk.
Therefore, the cold gas at $R_{\rm GC}\sim$3~kpc likely confines
the hot wind near the gaseous disk, at least up to heights of
$|z|\sim$600~pc (see Figures~\ref{tangent} and \ref{lb}).
This is indeed what we observe from the MWISP CO survey
in combination with the \HI\ data.
Considering the bipolar-outflow structures for the Milky Way nuclear wind,
we can estimate that the total mass of the molecular crater walls
should be $\gsim1\times10^{6}\ \Msun$ for regions of
$\sim 2\times 2\pi R_{\rm GC} \times \Delta R_{\rm GC} \times \Delta z$
=$3.4 \times 10^{9}$~pc$^{3}$
(i.e., the two bowl-like structures above and below the Galactic plane
at $R_{\rm GC}\sim$3~kpc and $|z|\sim$260--670~pc).
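This extrapolation simply scales the mean wall density estimated above to
the full bowl-like volume (a sketch under the stated geometry):
\begin{verbatim}
import numpy as np

# two bowls at R_GC ~ 3 kpc, DR ~ 220 pc width, Dz ~ 410 pc height
vol_pc3 = 2 * 2*np.pi*3000 * 220 * 410   # ~3.4e9 pc^3
M_walls = 3e-4 * vol_pc3                 # ~1e6 Msun at the wall density
\end{verbatim}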
It is interesting to note that the total molecular mass in the crater walls
is comparable to that of the \HI\ gas in the Fermi bubbles
\citep[i.e., $\sim10^{6}\ \Msun$ in][]{2018ApJ...855...33D,2020ApJ...888...51L}.
Additionally, some molecular gas could still survive in the interior of the
Fermi bubbles \citep[e.g.,][]{2020Natur.584..364D},
which will increase the total molecular mass of the cool outflows
associated with the Milky Way nuclear wind.
Our results show that a large amount of neutral gas
(the total atomic and molecular gas mass of $\sim 10^{7}\ \Msun$ at $|z|\gsim$~260~pc)
is located in the crater walls,
which surround the base of GC superbubbles at low latitudes
above and below the Galactic plane.
The high-$z$ molecular gas, together with the related atomic gas and dust,
constitutes the cool outflows associated with the Milky Way nuclear wind.
Figure~\ref{tri} shows large-scale velocity distributions of the gas
along the crater-wall structures (black dashed lines in the left panel).
We find that the crater-wall structures display a systematic
velocity gradient of $\sim -0.03\km\ps$pc$^{-1}$ along the LOS.
That is, the observed LOS velocity decreases with increasing
height $|z|$, for regions both above (e.g., EHMC IDs~5--13 in Table~1)
and below (EHMC IDs~19--28 in Table~1) the Galactic plane.
The lag is probably the result of the interaction between the entrained gas
from the disk and the slowly rotating gas in the halo
\citep[e.g.,][]{2008MNRAS.386..935F,2009MNRAS.399.1089M,2015MNRAS.451.4223M,2016ApJ...826..215L}.
Additionally, two substructures, which are identified from the
coherent EHMCs with similar spatial and velocity features in the crater walls,
display a positive velocity gradient with increasing height $|z|$
(see the black solid lines in the right panels of Figure~\ref{tri}).
By coherent EHMCs, we mean that (1) the clouds have similar LSR
velocities in a small region and (2) they have similar elongations along the
crater walls (or the edges of the \HI\ voids).
Both substructures traced by CO emission are located $\sim$400--450~pc
from the Galactic plane (see, e.g., black contours in Figure~\ref{twosamples}).
The velocity gradients of the two substructures are
$\sim 0.15\km\ps$pc$^{-1}$ for the $\sim$~20~pc long structure above the plane
(see the black solid line in the top right panel of Figure~\ref{tri}
for EHMCs IDs 13, 14, 15, and 17 from Table~1) and
$\sim 0.16\km\ps$pc$^{-1}$ for the $\sim$~40~pc long one below the plane
(see the black solid line in the bottom right panel of Figure~\ref{tri}
for EHMCs IDs 23, 24, 26, and 27 from Table~1), respectively.
The true velocity gradient is likely 3--6 times larger than the observed gradient
along the LOS by considering the projection correction of
the small inclination angle
(e.g., $\nabla v_{\rm true}=\nabla v_{\rm obs}$/sin($i$)
for $i\sim10^{\circ}-20^{\circ}$; see Section 3.3).
We argue that the velocity gradient of the large-scale coherent EHMCs probably
results from cool outflows associated with the Milky Way nuclear wind.
The velocity of the cool outflows is roughly $\sim 140-330\km\ps$
(i.e., $v_{\rm w}\sim {\rm path}\times \nabla v_{\rm true}$)
assuming that the CO gas is launched from locations 110--130~pc from
the Galactic plane with a constant acceleration
(e.g., ${\rm path}\sim$~280--330~pc and $\nabla v_{\rm true}\sim0.5-1\km\ps$pc$^{-1}$).
Here we tentatively assume that the entrained high-$z$ gas is launched from the
boundary of the thin CO disk to its current location (see, e.g., the cometary CO
structures at $b\sim 1^{\circ}$ or $|z|\sim$120~pc in the top panels of Figure~\ref{lb4s}).
The estimated velocity of the cool outflows is roughly
comparable to the value of $\sim$200--300$\km\ps$ from the \HI\ kinematic models
\citep[e.g.,][]{2013ApJ...770L...4M,2018ApJ...855...33D,2020ApJ...888...51L}.
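The deprojection and the resulting wind-velocity range can be checked as
follows (a sketch; the inclination and path length are the assumed values
quoted above):
\begin{verbatim}
import numpy as np

def deproject(grad_obs_kms_pc, i_deg):
    """nabla v_true = nabla v_obs / sin(i) for inclination i."""
    return grad_obs_kms_pc / np.sin(np.radians(i_deg))

grad_true = deproject(0.16, 10.0)   # ~0.9 km/s/pc, coherent EHMCs
# v_w ~ path * nabla v_true under constant acceleration
v_w = 300.0 * grad_true             # path ~ 280-330 pc -> ~140-330 km/s
\end{verbatim}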
The cold gas in the multiphase outflows is mainly entrained
along the walls of the hot gas cavity blown by the Milky Way nuclear wind.
The scenario is also similar to the famous examples
of nearby starburst galaxy M82 \citep[e.g.,][]{2015ApJ...814...83L,2019PASJ...71...87Y,
2021ApJ...915L...3K} and NGC~253 \citep[e.g.,][]{2013Natur.499..450B,2015ApJ...801...63M,
2017ApJ...835..265W,2019ApJ...881...43K}.
Further observations and simulations are helpful in understanding
the whole picture of the Galactic multi-phase nuclear outflows/winds
\citep[e.g.,][]{2020ApJ...894....1F,2020ApJ...894..117Z,2021MNRAS.506.5658B,
2021ApJ...922..254C,2021MNRAS.508.4667P,2021arXiv210903834M,2022ApJ...924...82F,
2022AJ....163..134T,2022NatAs.tmp...52Y}.
\subsubsection{Survival of Molecular Gas in the High-$z$ Regions}
In principle, the isolated and cool clouds
(e.g., $n\sim$10--1000~cm$^{-3}$ and $T\lsim 10^{2}$~K) will be eventually
destroyed in the harsh environment (e.g., the high-velocity shock and/or
the surrounding warm/hot winds, $n\sim 10^{-3}$--1~cm$^{-3}$ and $T\gsim10^{4}$~K).
The cloud-crushing time \citep[e.g.,][]{1994ApJ...420..213K} can be defined as
$t_{\rm cc}=\chi^{\frac{1}{2}}\frac{2r_{\rm cloud}}{v_{\rm w}}$,
where $\chi=\frac{\rho_{\rm cloud}}{\rho_{\rm wind}}$ is the density contrast
between the cloud and the wind,
$r_{\rm cloud}$ is the cloud radius, and $v_{\rm w}$ is the wind velocity.
By adopting $\chi=1000$ and $v_{\rm w} =200\km\ps=v_{\rm w200}$ (see Section 3.4.1),
we find that the crushing time of the high-$z$ MCs is $t_{\rm cc}\sim0.9v_{\rm w200}^{-1}$~Myr
for the typical EHMC radius of $r_{\rm cloud}$=3~pc.
The crushing time of the EHMCs is slightly shorter than the dynamical time of the
gas flows (i.e., $t_{\rm dyn}=\frac{l}{v_{\rm w}}\gsim1.3v_{\rm w200}^{-1}$~Myr,
where the traveled distance of the gas flows is $l\gsim$260~pc for the EHMCs).
Note that changing $\chi=1000$ to $\chi=100$ will decrease the crushing time
by a factor of $\sim$3.
Smaller clouds (e.g., $r_{\rm cloud}\ll$~1~pc) thus cannot
survive long in the high-velocity wind.
For a more realistic case, however, EHMCs with parsec scales likely survive for a long
period of time (e.g., a few Myr for several times of $t_{\rm cc}$) in the gas-rich
environment by considering the radiative cooling, condensation of gas from warm clouds,
and some other effects
\citep[see, e.g.,][]{2015MNRAS.449....2M,2017MNRAS.470..114A,2018MNRAS.480L.111G,
2020MNRAS.492.1970G,2022MNRAS.511..859G,2019MNRAS.486.4526B,2020ApJ...895...43S,
2020MNRAS.499.4261S,2021MNRAS.505.1083G,2021MNRAS.501.1143K,2022MNRAS.510..551F}.
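A quick numerical check of these timescales is given below (a sketch;
the unit constants and function names are ours):
\begin{verbatim}
PC_KM, MYR_S = 3.0857e13, 3.156e13

def t_cc_myr(chi, r_pc, v_w_kms):
    """Cloud-crushing time t_cc = sqrt(chi) * 2 r_cloud / v_w."""
    return chi**0.5 * 2 * r_pc * PC_KM / v_w_kms / MYR_S

def t_dyn_myr(l_pc, v_w_kms):
    """Dynamical time t_dyn = l / v_w."""
    return l_pc * PC_KM / v_w_kms / MYR_S

print(t_cc_myr(1000, 3.0, 200))   # ~0.9 Myr, typical EHMC
print(t_dyn_myr(260, 200))        # ~1.3 Myr to reach |z| ~ 260 pc
print(t_cc_myr(1000, 7.2, 200))   # ~2.2 Myr, EHMC G021.548-03.414
\end{verbatim}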
As an interesting example, adopting the velocity gradient of $\sim 1\km\ps$pc$^{-1}$
for EHMC G021.548$-$03.414 with the projection correction (Section 3.3),
its dynamical time is estimated to be
$t_{\rm dyn}=t_{\rm acc}=2\times\frac{\Delta length}{\Delta v}\sim2.0$~Myr,
which is comparable to its crushing time of $t_{\rm cc}\sim2.2v_{\rm w200}^{-1}$~Myr
for its effective radius of 7.2~pc (see Table~1).
The long tail of the cloud, together with the revealed velocity gradient,
indicates that the molecular gas is ablated by the surrounding high-velocity wind.
The molecular gas of the cloud is entrained in the multiphase flows,
in which the molecular gas will be transformed to the neutral atomic
and/or warm ionized gas moving toward the high-$z$ regions.
We suggest that the EHMC is being crushed and will be destroyed within the next several Myr.
Thus, the lifetime of the EHMC at $|z|\sim$450~pc from the Galactic plane
is probably $\sim$5--10~Myr, much longer than its crushing time.
Our CO observations reveal large reservoirs of cool gas surrounding the boundaries of
the large-scale nuclear wind near the gaseous disk.
That is, abundant atomic gas is concentrated in the crater-wall structures
in which the dense EHMCs are embedded (Sections 3.2 and 3.3).
The EHMCs are thus not isolated objects situated in empty space.
The gas-rich environment with the local high density increases the survival ability
of MCs in the crater walls.
For example, mass loading from the gaseous disk to the crater walls
can flatten the density and temperature profiles on the boundary of
the wind bubbles, creating multiphase flows in such regions.
The multiphase gas-rich environment may cool fast to replenish the
cold gas reservoir and then extend the lifetime of the high-$z$ MCs.
Additionally, the newly cooled gas from the surrounding high-velocity flows
can carry momentum of the hot gas, leading to the
observed entrainment scenario in the porous and mixed multiphase medium.
Almost no EHMCs are observed in regions of $R_{\rm GC}\lsim$2.6~kpc (Figure~\ref{lb}).
There are two plausible explanations for this feature.
One is that high-$z$ clouds are indeed destroyed by the hot winds,
in which the wind velocity in the nuclear wind bubble is higher
than that near the boundary of the bubble.
The molecular gas of the clouds may be rapidly transformed to
the warm/hot ionized gas (e.g., $t_{\rm cc}\lsim\ 0.2$~Myr for
the high-velocity wind with $v_{\rm w}=1000\km\ps$ and $T\gsim10^{6}$~K).
The ionized gas moves fast toward the high-$z$ regions,
in which little cool gas can survive in the hot flows.
On the other hand, our EHMC samples are drawn from the CO emission toward the tangent points
and thus occupy only a small volume of the bubble along a given LOS.
This decreases the detection rate of surviving high-$z$ MCs
with original sizes of tens of parsecs in the hot wind bubble.
Finally, we emphasize that clouds, and indeed the ISM, are
inherently complex in structure rather than isolated and homogeneous
distributions in temperature, velocity, and density.
The mean volume density of the EHMCs is $\sim$20~H$_2$~cm$^{-3}$,
much below the CO critical density of 3000/$\tau_{\rm 12CO}$~H$_2$~cm$^{-3}$
\citep[see, e.g.,][]{2013seg..book..491S,2015PASP..127..299S}.
This probably shows that the EHMCs consist of clumpy and multiphase medium,
in which the highly structured molecular gas with a low volume filling factor
is mixed with more diffuse gas \citep[e.g.,][]{1991ApJ...378..186F,1992A&A...257..715F,
1996ApJ...472..191F,2006ARA&A..44..367S,2016A&A...591A.104H}.
The original material in the MCs is heated into multiphase gas (i.e., ionized/atomic/molecular)
and entrained as it mixes with the warm/hot wind. Once the cloud material is entrained,
it may quickly cool back down to the molecular phase in the enhanced gas+dust environment.
In such a scenario, many effects (e.g., conduction and cooling,
magnetic field and cosmic rays, and various instabilities) should be carefully
taken into account for the interaction between the fractal MCs and the warm/hot wind.
Detailed analysis of these effects is beyond the scope
of this paper.
More multi-wavelength observations and simulations will be very helpful
to clarify these issues.
\subsubsection{Milky Way Nuclear Wind and the Gaseous Disk}
In the crater-wall regions (i.e., $|z|\sim260$--670~pc and $R_{\rm GC}\sim$3~kpc),
the total mass of the molecular gas is $\gsim1\times10^{6}\ \Msun$.
For the highest EHMC at $|z|\sim$620~pc, its dynamical time is estimated to be
$t_{\rm dyn}=\frac{620\ {\rm pc}}{v_{\rm w200}}\lsim3.0v_{\rm w200}^{-1}$~Myr.
The mass-loading rate from the inner gaseous disk of $R_{\rm GC}\lsim$3~kpc
to the high-$z$ regions is thus $\gsim0.3v_{\rm w200}\ \Msun$~yr$^{-1}$.
If we take into account the disturbed MCs in the region close
to the Galactic plane (e.g., 110~pc$\lsim|z|\lsim$260~pc),
the true mass-loading rate may be increased by a factor of $\sim10$
by assuming a Gaussian distribution
\citep[e.g., the total mass is $\sim10$ times of that in $|z|\gsim$260~pc regions
for the thick CO disk of $\sigma_z\sim$110--120~pc; see][]{2021ApJ...910..131S}.
By considering the neutral atomic gas coexisting with the molecular gas,
the mass-loading rate of the cool outflows may be slightly larger than the
estimated value of $\sim3v_{\rm w200}\ \Msun$~yr$^{-1}$.
Although large uncertainty remains and more observations are needed,
the rough estimate shows that an outflow rate of the order of
$\sim2-4\ \Msun$~yr$^{-1}$ is possible, given the large-scale enhanced
CO emission at $R_{\rm GC}\sim$3~kpc.
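These rates follow from simple timescale arithmetic (a sketch; the factor
of 10 is the Gaussian-disk correction quoted above, and the constants are
ours):
\begin{verbatim}
PC_KM, YR_S = 3.0857e13, 3.156e7

def mdot_msun_yr(M_msun, z_pc, v_w_kms):
    """Mass-loading rate ~ M / (z / v_w)."""
    return M_msun / (z_pc * PC_KM / v_w_kms / YR_S)

base = mdot_msun_yr(1e6, 620, 200)   # ~0.3 Msun/yr, |z| >~ 260 pc gas
total = 10 * base                    # ~3 Msun/yr with 110-260 pc gas
\end{verbatim}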
The energy source of the Milky Way nuclear wind is still being debated,
i.e., intermittent activity from Sgr A* in the AGN-like model versus the integrated
effects of stellar feedback from the CMZ in the starburst model.
The total kinetic energy of the cool-gas outflows can be estimated
as $E_{\rm K}=0.5M_{\rm gas}v_{\rm w}^2\gsim4\times10^{54}v_{\rm w200}^{2}$~erg
for the total molecular gas mass of $\gsim1\times10^{7}\ \Msun$ at $|z|\gsim$110~pc
(e.g., about $\sim10$ times of that in $|z|\gsim$260~pc regions).
The kinetic power required to push the molecular gas is then
$\sim4\times10^{40}v_{\rm w200}^{3}$~erg~s$^{-1}$ for the assumed
dynamical time of $\sim3.0v_{\rm w200}^{-1}$~Myr.
Whatever the exact origin of the Milky Way nuclear wind, our estimates show that
the cool-gas outflows can be easily powered by the energetic processes
near the GC (e.g., the total energy of $10^{56}-10^{57}$~erg or
the total power of $10^{42}-10^{44}$~erg~s$^{-1}$ for the Fermi bubbles).
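The corresponding energy budget can be checked in cgs units (a sketch under
the stated assumptions):
\begin{verbatim}
MSUN_G, MYR_S = 1.989e33, 3.156e13

M_gas = 1e7 * MSUN_G              # molecular gas at |z| >~ 110 pc
v_w   = 200e5                     # 200 km/s in cm/s
E_K   = 0.5 * M_gas * v_w**2      # ~4e54 erg
P_K   = E_K / (3.0 * MYR_S)       # ~4e40 erg/s over ~3 Myr
# cf. 1e56-1e57 erg and 1e42-1e44 erg/s for the Fermi bubbles
\end{verbatim}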
On the other hand, low-density gas can be easily accelerated in the hot wind environment.
And the hot ionized gas is probably the dominant reservoir of energy of the Fermi bubbles
\citep[e.g., the inferred temperature of $\gsim2\times10^{6}$~K,
the low density of $\sim10^{-3}$~cm$^{-3}$, and the typical velocity of
$\sim500-1000\km\ps$ for the hot gas;][]{2016ApJ...829....9M,2017ApJ...834..191B}.
The warm and hot ionized gas of the nuclear wind plays an important role
in driving the cool-gas outflows near the gaseous disk.
Accordingly, the ablated gas from the highly structured MCs
in the gaseous disk joins the moving flow, modifying the velocity,
temperature, and density distribution of the multiphase medium.
The thinner disk of atomic and molecular gas within the region of
$R_{\rm GC}\lsim$3~kpc can be explained by the effect of
the large-scale Milky Way nuclear wind.
Assuming a hot wind velocity of $\sim500-1000\km\ps$
and a mass-loss rate of $\sim10-20\ \Msun$~yr$^{-1}$
near the Galactic inner gaseous disk,
the disk will lose $\sim6\times10^{7}\ \Msun$ in a period of $\sim$3--6~Myr
(e.g., from the CMZ to the $R_{\rm GC}\sim$3~kpc regions),
which is about 30\% of the molecular gas within the $R_{\rm GC}\lsim$3~kpc region
\citep[e.g., the total molecular mass of $\sim2\times10^{8}\ \Msun$, see][]
{2015ARA&A..53..583H,2016PASJ...68....5N}.
Therefore, a bulk of the molecular gas can be removed from the gaseous disk
within $R_{\rm GC}\lsim$3~kpc, especially for the region of $|z|\gsim$~50--100~pc.
The total mass of the removed molecular gas in the inner disk
is roughly comparable to the value of $\sim7\times10^{7}\ \Msun$
for the whole 3-kpc arm \citep[e.g., refer to H$_2$ masses per unit length of
$\sim4\times10^{6}\ \Msun$~kpc$^{-1}$ in][]{2008Dame}.
We propose that the 3-kpc arm is probably related to the Milky Way nuclear wind.
It is true that only a small fraction of molecular gas ($\gsim1\times10^{6}\ \Msun$)
is accelerated to the $|z|\gsim$260~pc region, while the dominant molecular gas seems to be accumulated
at the disk \citep[e.g., the 3-kpc arm; see][]{2008Dame,Reid19,2021MNRAS.506.2170S}.
Both the crater walls traced by EHMCs and the 3-kpc arm
are naturally located at a similar Galactocentric distance of
$R_{\rm GC}\sim$~3~kpc, where the high-velocity hot wind is almost stopped
and/or confined by the cold gas near the gaseous disk.
Due to angular momentum conservation, gas inflows generally accompany
gas outflows. The current star formation rate in the CMZ is
$\lsim 0.1\Msun$~yr$^{-1}$ \citep[e.g.,][]{2009ApJ...702..178Y,2013MNRAS.429..987L},
which would exhaust the CMZ gas in a few $\times 10^{8}$~yr without any other gas supply.
The continuous gas inflow from the inner disk may provide additional
gas supply for the star formation in the CMZ \citep[e.g., the inflow rate
at the order of $\sim1-4\ \Msun$~yr$^{-1}$; see][]{2019MNRAS.490.4401A,
2019MNRAS.484.1213S,2020MNRAS.499.4455T,2021ApJ...922...79H}.
The estimated mass outflow rate based on the CO data is roughly comparable to
the gas inflow rate from the simulations at the same order of magnitude of
$\sim2-4\ \Msun$~yr$^{-1}$. Generally, the gas is transported from the inner
gaseous disk at $R_{\rm GC}\sim$3~kpc
to the CMZ by inflows, while the angular momentum of gas is taken away by outflows
from the inner disk. The removed gas in the inner disk may fall back to
the disk \citep[e.g., the fountain models;][]{1976ApJ...205..762S,
1980ApJ...236..577B,1990ApJ...352..506H,2008A&A...484..743S,
2009MNRAS.399.1089M,2015MNRAS.451.4223M,2017ASSL..430..323F}
and/or may be accumulated in the 3-kpc arm and the high-$z$ crater walls.
In this regard, the total molecular gas mass in the CMZ should be roughly
comparable to that in the 3-kpc arm.
Briefly, the existence of the outflows and the inflows is probably the common feature
in the inner region of the Milky Way (or other barred spiral galaxies).
The dynamical processes indicate that there is a delicate
balance between gas outflows and inflows toward the inner region of the
Milky Way.
A highly variable inflow rate in recent epochs may lead to episodic accretion
onto the CMZ and intermittent activity from Sgr A*
\citep[see, e.g., recent observations in][]{2019Natur.567..347P,2021A&A...646A..66P}.
The subsequent enhanced outflows would then suppress star formation near the GC
and/or restrain nuclear activity by decreasing the gas supply.
Finally, Figure~\ref{cartoon} shows a schematic view of the observation results,
i.e., the large-scale \HI\ voids related to the Fermi bubbles/X-ray bubbles
in the inner Galaxy, the CO crater walls surrounding the edges of the \HI\ voids
above and below the Galactic plane, the thinner gaseous disk within
$R_{\rm GC}\lsim$3~kpc, the expanding 3-kpc arm at the base of the enhanced EHMCs,
and the entrained MCs with cometary structures pointing away from the Galactic plane.
\section{Summary}
Based on the MWISP CO data and the improved criteria of the DBSCAN algorithm,
we construct high-$z$ MC samples near the tangent points,
in which the distances of the MCs are well determined.
In the region of $l=12^{\circ}$--$26^{\circ}$ and $|b|\lsim5\fdg1$,
a total of 321 high-$z$ MCs (i.e., MCs at $|z|\gsim$110~pc) are identified, of which 47 MCs
lie in the extreme high-$z$ regions (i.e., EHMCs at $|z|\gsim$260~pc).
Besides the weak CO emission and small sizes, these high-$z$ MCs
also display some unusual properties:
1. The number of high-$z$ MCs in the $R_{\rm GC}\lsim$3~kpc region is significantly
smaller than in the outer region, which is consistent with the
deficiency of atomic and molecular gas in the inner Galactic disk
at $R_{\rm GC}\lsim$3~kpc.
2. The EHMCs (i.e., IDs 05--31 in Table~1) are mainly concentrated in
narrow regions of [$l\sim$19\fdg1 to 20\fdg5,
$b\sim$2\fdg0 to 5\fdg1] and [$l\sim$20\fdg5 to 22\fdg1, $b\sim-$2\fdg0 to $-5$\fdg1].
Some EHMCs are even located at $|z|\gsim$~600~pc far above and below the Galactic plane.
The EHMC concentrations, together with other high-$z$ MCs at
$l\lsim18^{\circ}$, constitute molecular crater-walls with a
measured thickness of $\sim$~220~pc.
The molecular crater wall structures lie along the edges of the $\HI$ voids
(Figure~\ref{tangent}) that are associated with the Milky Way nuclear wind
\citep[e.g.,][]{2016ApJ...826..215L,2021MNRAS.506.2170S}.
3. Some large high-$z$ MCs, which lie in the crater walls,
display intriguing elongated head-to-tail structures pointing away from the Galactic disk,
favoring the scenario of the entrained molecular gas moving with the multiphase outflows.
Especially, the $\sim20$~pc long tail of the EHMC G021.548$-$03.414 (Figure~\ref{twosamples})
is physically associated with a filamentary structure of the IR dust emission,
which is exactly located in the enhanced $\HI$ ridge.
The observed velocity gradient of the EHMC (Figure~\ref{pv}), together with its
cometary head toward the Galactic plane, shows that the cold molecular gas
is indeed entrained by the multiphase outflows from the Galactic plane
to the high-$z$ regions.
Based on the above results, we suggest that the powerful nuclear wind of
the Milky Way has a profound impact on the large-scale distribution of
the gaseous disk (Figure~\ref{cartoon}).
The \HI\ voids above and below the Galactic plane, the CO crater walls
at the edges of the \HI\ voids, and the expanding 3-kpc arms at
the base of the molecular crater walls are probably the natural result
of the intermittent nuclear activity of the Milky Way in the past 3--6~Myr.
The cometary MCs lying in the crater walls show that the nuclear wind
removes gas from the inner Galaxy to the high-$z$ regions.
The cold gas in the crater-wall structures at $R_{\rm GC}\sim$3~kpc plays
a crucial role in confining the Milky Way nuclear wind.
The cool outflows may be an important mass reservoir
for supplying halo material that has been pushed up from the interface
between the nuclear wind and the gaseous disk within $R_{\rm GC}\lsim$3~kpc.
In this scenario, some interesting estimates can be summarized as follows:
1. The gas-rich environment increases the survival ability of the EHMCs
in the crater walls with the local high density of $\sim$1--10~cm$^{-3}$
for \HI\ clouds and $\sim 10^2-10^3$~cm$^{-3}$ for CO clouds.
The estimated lifetime of EHMCs is several Myr for clouds on the parsec scales,
which is comparable to the dynamical time of the cool gas flows according to
the observed velocity gradient of the CO gas
(i.e., $\sim0.5-1\km\ps$pc$^{-1}$ after the projection correction
with a small inclination angle of $i\sim10^{\circ}-20^{\circ}$).
Basically, the observed EHMCs in the walls will be destroyed within the next several Myr.
The MCs thus cannot move too far away from the Galactic plane
(e.g., $|z|\gsim$~1~kpc) before the molecular gas becomes
the neutral atomic gas and/or ionized gas.
2. The velocity of the cool outflows is estimated to be $\sim140-330\km\ps$
assuming that the gas is launched from the boundary of the thin CO disk
\citep[i.e., from $|z|=3\times\sigma_z \sim$110-120~pc;][]{2021ApJ...910..131S}.
The hypothesis is supported by the observed cometary high-$z$ MCs pointing away
from the Galactic plane at locations of $|z|\sim$110--130~pc
(top panels of Figure~\ref{lb4s}) and the large-scale velocity gradient of
the coherent EHMCs at $|z|\sim$400--450~pc far from the plane (see Figure~\ref{tri}).
3. The molecular gas in the EHMC concentrations of
$\sim2\times220$~pc$\times410$~pc
has a total mass of $\gsim1\times10^{4}\ \Msun$ (Figure~\ref{lb}).
If the extreme high-$z$ MCs are more broadly distributed in
the whole regions of $R_{\rm GC}\sim$~3~kpc, $|z|\sim260$--670~pc, and
the wall's thickness of $\sim$~220~pc,
the total molecular mass is estimated to be $\gsim1\times10^{6}\ \Msun$,
which is comparable to the total \HI\ mass in the Fermi bubbles
\citep[][]{2018ApJ...855...33D,2020ApJ...888...51L}.
4. Assuming a Gaussian distribution for the thick CO disk \citep[i.e., $\sigma_z=$120~pc
and the thickness FWHM=2.355$\sigma_z$ in][]{2021ApJ...910..131S},
a significant amount of molecular gas (e.g., the order of $10^{7}\ \Msun$)
may accumulate at the low latitudes of the gaseous disk of $|z|\sim$110--260~pc.
The mass-loading rate of the cool outflows at $R_{\rm GC}\sim$~3~kpc
(i.e., outflows to the crater-wall structures) is roughly comparable to the mass inflow rate
(i.e., inflows to the CMZ) at the same order of $\sim2-4\ \Msun$~yr$^{-1}$.
5. The thinner gas disk within $R_{\rm GC}\lsim$3~kpc may be the joint result of
(1) inflows from the inner gaseous disk to the CMZ and
(2) outflows from the gaseous disk to the 3-kpc arm and the high-$z$ region.
The 3-kpc arm at the base of the EHMC concentration, together with the thinner
gaseous disk within $R_{\rm GC}\lsim$3~kpc, can be naturally
explained by the interaction between the Milky Way nuclear
wind and the Galactic gaseous disk.
Considering the large uncertainties in the discussions,
the above estimates should be used with caution.
Nevertheless, we think that the results are useful for further studies.
For example, multiwavelength observations (e.g., radio continuum,
millimeter and submillimeter molecular lines, optical/near-IR
emission lines, and UV absorption) are encouraged to
investigate the physical properties of the cometary high-$z$ MCs. Large-scale
surveys with high sensitivity and high resolution, together with improved simulations,
will also be very helpful for revealing the nature of the Galactic nuclear winds/outflows.
\acknowledgments
This research made use of the data from the Milky Way Imaging Scroll Painting
(MWISP) project, which is a multiline survey in \twCO/\thCO/C$^{18}$O along the
northern Galactic plane with the PMO 13.7m telescope. We are grateful to all the members
of the MWISP working group, particularly the staff members at the PMO 13.7m telescope,
for their long-term support. MWISP was sponsored by the National Key R\&D Program of
China with grant 2017YFA0402700 and the CAS Key Research Program of Frontier Sciences
with grant QYZDJ-SSW-SLH047.
We acknowledge support from the National Natural Science Foundation of China
through grants 12173090 and 12041305.
X.C. acknowledges support by the CAS International Cooperation Program
(grant No. 114332KYSB20190009).
We also thank the anonymous referee for many useful and
constructive comments that largely improved the quality of the paper.
The work makes use of publicly released data from the HI4PI survey, which combines
the EBHIS in the Northern hemisphere with the GASS in the Southern Hemisphere.
The Parkes Radio Telescope is part of the Australia Telescope National Facility,
which is funded by the Australian Government for operation as a National Facility
managed by CSIRO. The EBHIS data are based on observations performed with the
100 m telescope of the MPIfR at Effelsberg. EBHIS was funded by the Deutsche
Forschungsgemeinschaft (DFG) under the grants KE757/7-1 to 7-3.
This publication makes use of data products from the Wide-field Infrared Survey
Explorer, which is a joint project of the University of California, Los Angeles,
and the Jet Propulsion Laboratory/California Institute of Technology,
funded by the National Aeronautics and Space Administration.
\facility{PMO 13.7m}
\software{GILDAS/CLASS \citep{2005sf2a.conf..721P}}
\bibliographystyle{aasjournal}
\section{Introduction}
\label{sec:introduction}
Signal processing techniques for radar systems have to a great extent focused on narrowband signals. Today, signal generators are able to synthesize arbitrary signals with a bandwidth of the order of GHz~\cite{Han02,Zhu09,Wentzloff06}. On one hand, this provides interesting new opportunities; e.g., a wideband signal achieves an increased range resolution compared to its narrowband counterpart\cite{taylor94,Khan05,Hussain98,Weiss94}. On the other hand, the simplifying narrowband assumption, where velocity is approximated as a frequency shift, is not valid\cite{Weiss94}. This not only complicates the design, but also disqualifies traditional detection techniques, as the estimation of time-delay and Doppler-shift cannot be separated in time and frequency. It should also be considered that, for some applications, maintaining a low computational complexity is crucial, which naturally conflicts with the desire for super-resolution. Accordingly, this work is devoted to providing an adaptive detection scheme that establishes an arbitrary trade-off between complexity and resolution.
The above concern relates to other contributions in the area of waveform design. For example, the design of wideband ambiguity functions with narrow peaks for Orthogonal-Frequency-Division-Multiplexing signals is considered in~\cite{Sen10,Sen09}. In~\cite{he12}, various techniques for designing narrowband or wideband waveforms are discussed. Interesting analyses of wideband radar systems from various perspectives are also found in~\cite{Lush91,Yazici06,Antonio05}. However, these studies mostly focus on designing ambiguity functions with a narrow peak, neglecting complexity. It is easy to see that applying these methods to the problem discussed here leads to a large number of transmissions, due to the lack of shift-invariance properties, as well as a large set of detection filters.
To combat complexity, this work proposes a waveform design method that maintains a necessary mainlobe width in the ambiguity function, thus decreasing the overall number of pulse transmissions and receive filters. Although this leads to detections of a restricted resolution, it is easy to adapt a secondary super-resolution detection procedure~\cite{borison92,cuomo99,moore95} based on the original estimates. This provides a highly flexible design with low complexity.
Here, the focus is on obtaining initial estimates of a restricted resolution. This is carried out by considering relatively wide parameter ranges and designing corresponding waveforms, for which reliable detection over the entire grid is ensured after filtering matched to a nominal value. This is not straightforward, as wideband signals may lead to a focused ambiguity function, where off-grid targets are easily missed. A matched filter design is selected to ensure a good performance in the presence of noise, assuming nominal parameter values. The main contribution of this work is to formulate this idea, and to provide waveforms that guarantee robust detection in a desired interval of target parameters.
In what follows, a statistical framework for detecting a single target is developed, whose expressions are simplified by approximation to obtain a tractable design.
\section{Problem Formulation}
Consider a bistatic radar system that, on the transmitter side, employs $M$ waveform generators each connected to an antenna element. The receiver side comprises one antenna element connected to a filter bank. Each generator samples a baseband signal composed of a set of $N$ basis functions $\psi_{m,n}(t)$, where $m$ and $n$ are the antenna and basis label, respectively. In other words,
\begin{equation}
\label{eq:basisexpansion}
\tilde{x}_m(t) = \sum_{n=1}^N s_{m,n}\psi_{m,n}(t),
\end{equation}
where $\tilde{x}_m$ is the waveform at the $m$th signal generator, and $s_{m,n}$ is a complex scalar coefficient. The received signal is a mixture of the reflected transmitted waveforms, and can be expressed, for a point target, as
\begin{equation}
\label{eq:signal}
y(t) = \sigma_t \sum_{m=1}^M x_m(\mu(t-\tau_{m}(\phi)-\tau)) + n(t),
\end{equation}
where $\sigma_t$ is the object's reflection coefficient, $\tau$ denotes the time-delay from the zero-phase sensor to the receiver, and $\mu$ is the time-scaling related to the velocity of an object\cite{Hussain98,Weiss94}. Furthermore, $\tau_{m}(\phi)$ is given by the inter-element spacing and the spatial direction, $\phi$, towards the object. This direction, azimuth and/or elevation, is assumed to be known. If this is not the case, a beamforming technique, see, e.g., \cite{gershman1999,li2006,vorobyov2003}, is necessary. In~\eqref{eq:signal}, $x_m(t) = \tilde{x}_m(t)e^{j\omega_ct}$ is centered around the system's carrier frequency $f_c=\omega_c/2\pi$ and $n(t)$ is white Gaussian noise.
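As an illustration, the following Python sketch synthesizes the baseband waveforms in \eqref{eq:basisexpansion} from Gaussian basis kernels. The kernel form $\psi_{m,n}(t)=e^{-(t-\mu_{m,n})^2/(2\sigma_{m,n}^2)}$, the pulse duration, and the sampling grid are assumptions made only for this example; the model above fixes merely that the kernels are Gaussian.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

M, N = 3, 30      # generators and basis functions (values used in Sec. IV)
T = 1.0           # pulse duration (hypothetical)
t = np.linspace(0.0, T, 2000)

# Gaussian kernels: means spread over the pulse; widths chosen so that
# mu_n +/- 3 sigma_n stays inside the pulse, as in the experiments.
mu = np.linspace(0.05 * T, 0.95 * T, N)
sigma = rng.uniform(0.005 * T, np.minimum(mu, T - mu) / 3.0)

def waveform(s_m):
    """Baseband waveform of one generator from coefficients s_m, shape (N,)."""
    psi = np.exp(-(t[None, :] - mu[:, None])**2 / (2.0 * sigma[:, None]**2))
    return s_m @ psi

s = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x = np.stack([waveform(s[m]) for m in range(M)])   # shape (M, len(t))
\end{verbatim}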
At the receiver side the down-converted signal, $ \tilde{y}(t) = y(t)e^{-j\omega_ct}$, is passed through a filter bank, i.e.,
\begin{equation}
\label{eq:correlation}
r(\tau,\tau',\mu,\mu') = \int h^\ast (t;\tau',\mu') \tilde{y}(t)dt.
\end{equation}
Here, $(\cdot)^\ast$ denotes the complex conjugate. If a so-called matched filter structure\cite{Skolnik,Skolnik81} is employed, the correlating filters, $h(t;\tau',\mu')$, are equal to the received signal calculated from their corresponding transmission model.
To ease calculations, assume that the basis kernels, $\psi_{m,n}(t)$, are Gaussian. With this particular choice of functions, \eqref{eq:correlation} can be calculated analytically as
\begin{equation}
\begin{aligned}
\label{eq:output_matchedfilter_2}
&r(\tau_0,\mu,\mu') = \sum_{\substack{m,m'\in \mathcal{M} \\ n,n'\in \mathcal{N} }} \frac{\sigma_t\sqrt{\mu\mu'} s_{m,n}^{\ast}s_{m',n'}e^{-j\omega_c\mu'(\tau_0 + \tau_{m,m'})}}{\sqrt{2\pi(\mu^2\sigma_{m',n'}^2 +\mu'^2\sigma_{m,n}^2)}} \cdot\\
&e^{-\frac{(\mu'\mu_{m,n}- \mu \mu_{m',n'} + \mu \mu'(\tau_0 + \tau_{m,m'}))^2}{2(\mu^2\sigma_{m',n'}^2 +\mu'^2\sigma_{m,n}^2)}}e^{\omega_c(\mu-\mu')(j\mu_G -\frac{\omega_c}{2}(\mu-\mu')\sigma_G^2)},
\end{aligned}
\end{equation}
where $\mathcal{M} = \{1\dots M\}$, $\mathcal{N} = \{1\dots N\}$, $\sigma_{m,n}^2$ and $\mu_{m,n}$ correspond to the variance and the mean of the $n$th basis kernel sampled by the $m$th signal generator, respectively, $\tau_0 = \tau-\tau'$, $\tau_{m,m'} =\tau_{m}(\phi)-\tau_{m'}(\phi)$, and
\begin{equation}
\begin{aligned}
& \mu_G = \frac{\mu\mu_{m,n}\sigma_{m',n'}^2+\mu'( \mu_{m',n'} - \mu'(\tau_0 + \tau_{m,m'}) )\sigma_{m,n}^2}{\mu^2\sigma_{m',n'}^2 +\mu'^2\sigma_{m,n}^2} \notag \\
& \sigma_G^2 = \frac{\sigma_{m,n}^2\sigma_{m',n'}^2}{\mu^2\sigma_{m',n'}^2 +\mu'^2\sigma_{m,n}^2}. \notag
\end{aligned}
\end{equation}
The expression in~\eqref{eq:output_matchedfilter_2} is equivalently written in a matrix form as
\begin{equation}
r(\tau_0,\mu,\mu') = \sigma_t\mathbf{s}^H\mathbf{R}(\tau_0,\mu,\mu')\mathbf{s},
\end{equation}
where $\mathbf{s} = \left [s_{1,1} \dots s_{1,N} , s_{2,1} \dots s_{2,N} \dots s_{M,N} \right]^T$ and $\mathbf{R}(\tau_0,\mu,\mu')$ contains the entries
\begin{equation}
\begin{aligned}
&R_{m,m',n,n'}(\tau_0,\mu,\mu') = \frac{\sqrt{\mu\mu'}e^{-j\omega_c\mu'(\tau_0 + \tau_{m,m'})}}{\sqrt{2\pi(\mu^2\sigma_{m',n'}^2 +\mu'^2\sigma_{m,n}^2)}}\cdot \\
& e^{-\frac{(\mu'\mu_{m,n}- \mu \mu_{m',n'} + \mu \mu'(\tau_0 + \tau_{m,m'}))^2}{2(\mu^2\sigma_{m',n'}^2 +\mu'^2\sigma_{m,n}^2)}} e^{\omega_c(\mu-\mu')(j\mu_G -\frac{\omega_c}{2}(\mu-\mu')\sigma_G^2)},
\end{aligned}
\end{equation}
in the proper order.
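For concreteness, the following Python sketch assembles $\mathbf{R}(\tau_0,\mu,\mu')$ from the closed-form entries above. The stacking index $mN+n$ matches the ordering of $\mathbf{s}$; the array delays $\tau_m(\phi)$ are assumed to be precomputed from the (known) direction and array geometry.
\begin{verbatim}
import numpy as np

def R_matrix(tau0, mu, mup, means, sigmas, tau_ant, omega_c):
    """Matrix R(tau0, mu, mu') with the entries of eq. (6).

    means, sigmas : (M, N) arrays holding mu_{m,n} and sigma_{m,n}
    tau_ant       : (M,) array holding the array delays tau_m(phi)
    """
    M, N = means.shape
    R = np.zeros((M * N, M * N), dtype=complex)
    for m in range(M):
        for n in range(N):
            for mp in range(M):
                for q in range(N):   # q plays the role of n'
                    tmm = tau_ant[m] - tau_ant[mp]
                    s2 = mu**2 * sigmas[mp, q]**2 + mup**2 * sigmas[m, n]**2
                    muG = (mu * means[m, n] * sigmas[mp, q]**2
                           + mup * (means[mp, q] - mup * (tau0 + tmm))
                           * sigmas[m, n]**2) / s2
                    sG2 = sigmas[m, n]**2 * sigmas[mp, q]**2 / s2
                    num = (mup * means[m, n] - mu * means[mp, q]
                           + mu * mup * (tau0 + tmm))
                    R[m * N + n, mp * N + q] = (
                        np.sqrt(mu * mup) / np.sqrt(2 * np.pi * s2)
                        * np.exp(-1j * omega_c * mup * (tau0 + tmm))
                        * np.exp(-num**2 / (2 * s2))
                        * np.exp(omega_c * (mu - mup)
                                 * (1j * muG - 0.5 * omega_c * (mu - mup) * sG2)))
    return R
\end{verbatim}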
\subsection{Robust Waveform Design Based on Statistical Performance}
Target detection is formulated as a statistical problem as follows. Take a region $\mathcal{S}$ of parameter pairs $(\mu,\tau)$ in which a good detection performance is desired. The detection is based on the output $r$ of a filter matched to a nominal point $(\tau',\mu')\in \mathcal{S}$. A matched filter\cite{Skolnik,Skolnik81} ensures maximal detection performance in the presence of white noise when the true parameters equal the nominal ones.
The idea is to develop a simple decision rule for $r$, which identifies whether a target is present in $\mathcal{S}$ or not. The problem can be formulated as a composite hypothesis test, where $\mathcal{H}_0$ denotes the hypothesis that no source is present, in which case $r$ is generated by a white Gaussian noise process, and $\mathcal{H}_1$ denotes the composite hypothesis of source existence. Then, the likelihood functions under the different hypotheses are
\begin{equation}
\begin{aligned}
&\mathcal{H}_0:\ r\sim\mathcal{N}(0,\sigma^2\mathbf{s}^H\mathbf{R}_0\mathbf{s})\\
&\mathcal{H}_1:\ r\sim\mathcal{N}(\sigma_t\mathbf{s}^H\mathbf{R}(\tau_0,\mu,\mu')\mathbf{s},\sigma^2\mathbf{s}^H\mathbf{R}_0\mathbf{s}),
\end{aligned}
\end{equation}
where $\mathbf{R}_0=\mathbf{R}(\tau_0=0,\mu',\mu')$ and $\sigma^2$ denotes the noise power. The pair $(\tau_0,\mu)$ denotes the unknown true parameters. Later, we simply refer to it as $\theta$. Let us consider the Generalized Likelihood Ratio Test (GLRT)\cite{kay1998fundamentals}, which can be written as
\begin{equation}
\min\limits_{\theta,\sigma_t}|r-\sigma_t\mathbf{s}^H\mathbf{R}(\theta)\mathbf{s}|^2+\gamma\gtrless|r|^2,
\end{equation}
where the argument $\mu'$ is dropped as its value is assumed to be fixed, and $\gamma$ is a threshold. As $\sigma_t$ is free, the minimization on the left-hand side gives zero. Thus, the GLRT simplifies to
\begin{equation}
\label{eq:estimator}
\gamma\gtrless|r|,
\end{equation}
which is a simple power thresholding scheme.
For this detector, the probabilities of detection, $P_D$, and false alarm, $P_{FA}$, can be calculated as
\begin{equation}
\begin{aligned}
&P_D(\theta,\sigma_t,\gamma)=\frac{1}{\pi\mathbf{s}^H\mathbf{R}_0\mathbf{s}\sigma^2}
\int_{|r|>\gamma}e^{-\frac{|r-\sigma_t\mathbf{s}^H\mathbf{R}(\theta)\mathbf{s}|^2}{\mathbf{s}^H\mathbf{R}_0\mathbf{s}\sigma^2}}\text{d}\nu(r)\\
&P_{FA}=\frac{1}{\pi\mathbf{s}^H\mathbf{R}_0\mathbf{s}\sigma^2}
\int_{|r|>\gamma}e^{-\frac{|r|^2}{\mathbf{s}^H\mathbf{R}_0\mathbf{s}\sigma^2}}\text{d}\nu(r),
\end{aligned}
\end{equation}
where $\nu(\cdot)$ denotes the Lebesgue measure on the complex plane of $r$.
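Both probabilities are straightforward to estimate by Monte-Carlo simulation, since under either hypothesis $r$ is a complex Gaussian variable. A minimal sketch, assuming a circularly-symmetric noise model:
\begin{verbatim}
import numpy as np

def roc_point(s, R_theta, R0, sigma_t, sigma2, gamma, trials=10**6, rng=None):
    """Monte-Carlo estimate of (P_FA, P_D) for the detector |r| >< gamma."""
    rng = rng or np.random.default_rng()
    var = sigma2 * np.real(s.conj() @ R0 @ s)   # noise variance of r
    mean = sigma_t * (s.conj() @ R_theta @ s)   # signal component under H_1
    noise = np.sqrt(var / 2) * (rng.standard_normal(trials)
                                + 1j * rng.standard_normal(trials))
    p_fa = np.mean(np.abs(noise) > gamma)
    p_d = np.mean(np.abs(mean + noise) > gamma)
    return p_fa, p_d
\end{verbatim}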
To optimize the waveform, consider the worst detection performance $P_D(\theta,\sigma_t,\gamma)$ over all scenarios defined by $(\theta,\sigma_t)$, where $P_{FA}=\alpha$ is fixed. One may define the best design as the one maximizing the worst detection.
Unfortunately, direct calculation shows that this approach fails in the present setting, as the worst detection performance is independent of the choice of waveform. To overcome this, denote by $P_{D,\text{worst}}(\alpha)$, or simply $P_{D,\text{worst}}$, the worst detection performance. Then, define an optimal design as follows:
\newtheorem{Definition}{Definition}
\newtheorem{theorem}{Theorem}
\begin{Definition}
\label{def}
A design $\mathbf{s}$ is optimal for a given value of $\alpha$ if, for sufficiently small values of $\epsilon$, the set $\mathcal{S}_\epsilon$ of $\epsilon$-worse scenarios, defined by
\begin{equation}
\mathcal{S}_\epsilon=\{(\theta,\sigma_t)\mid P_D(\theta,\sigma_t)<P_{D,\text{worst}}+\epsilon\}
\end{equation}
has minimal area (Lebesgue measure) in any compact region.
\end{Definition}
Clearly, this gives a design where encountering low performance is least likely, although not impossible. The following theorem provides a practical method to realize this design.
\begin{theorem}
The optimal design in the sense of \textit{Definition~\ref{def}} is a solution to the following optimization
\begin{equation}\label{eq:opt}
\begin{aligned}
&\max_{\mathbf{s}} \min_{\theta} &&|\mathbf{s}^H\mathbf{R}(\theta)\mathbf{s}| \\
&\text {s.t.} &&\mathbf{s}^H\mathbf{R}_0\mathbf{s} = 1.
\end{aligned}
\end{equation}
\end{theorem}
The reader may notice that~\eqref{eq:opt} could be introduced directly and validated intuitively. It is simple to verify that $|\mathbf{s}^H\mathbf{R}(\theta)\mathbf{s}|$ and $\mathbf{s}^H\mathbf{R}_0\mathbf{s}$ are the signal and noise shares, respectively, of the filter output energy over $\mathcal{S}$. Thus, \eqref{eq:opt} promotes a uniformly high signal-output energy. However, the above calculations tie \eqref{eq:opt} to a statistically sound detection approach.
Further, $\mathbf{R}_0$ is positive semi-definite, and the term $\mathbf{s}^H\mathbf{R}_0\mathbf{s}$ characterizes the energy of the signal, which is typical for a matched filter design. Thus, no separate transmit-energy constraint is needed. Clearly, \eqref{eq:opt} guarantees a high detection rate, but it does not consider false detections due to coupling between the filter and out-of-region sources. However, it is expected that the finite energy constraint automatically enforces low sidelobe energy. We later examine the validity of this assumption by analyzing numerical results.
\section{Proposed Method for Solving \eqref{eq:opt}}
It is difficult to exactly solve \eqref{eq:opt}, as $\mathbf{R}(\theta)$ is in general not Hermitian. One approximate solution is to consider the inner optimization over a finite number of grid points, $\theta_1,\theta_2,\ldots,\theta_l$. Then, \eqref{eq:opt} can be written as
\begin{equation}\label{eq:opt2}
\begin{aligned}
&\max_{\mathbf{s}} \min_{\lambda_1,\lambda_2,\ldots,\lambda_l} &&\sum_k|\mathbf{s}^H\mathbf{R}(\theta_k)\mathbf{s}|\lambda_k \\
&\text {s.t.} &&\mathbf{s}^H\mathbf{R}_0\mathbf{s} = 1, \quad\sum_k\lambda_k=1,
\end{aligned}
\end{equation}
where the $\lambda_k$ are positive weights. By changing the order of minimization and maximization in \eqref{eq:opt2}, we obtain the following suboptimal design, which is simpler to solve \cite{bazaraa2013nonlinear}.
\begin{equation}\label{eq:opt_dual}
\begin{aligned}
& \min_{\lambda_1,\lambda_2,\ldots,\lambda_l} \max_{\mathbf{s}}&&\sum_k|\mathbf{s}^H\mathbf{R}(\theta_k)\mathbf{s}|\lambda_k \\
&\text {s.t.} &&\mathbf{s}^H\mathbf{R}_0\mathbf{s} = 1, \quad\sum_k\lambda_k=1.
\end{aligned}
\end{equation}
As opposed to the uniformly optimal design obtained with the original order, the change of order results in an optimization of the average performance weighted by $\{\lambda_k\}$. However, as our simulation results indicate, the latter average design also leads to a remarkably good detection performance.
Now, the inner optimization in \eqref{eq:opt_dual} is simplified as follows. Consider the eigenvalue decomposition of $\mathbf{R}_0$, i.e.,
\begin{equation}
\mathbf{R}_0=\mathbf{U}\Sigma\mathbf{U}^H=\mathbf{U}_0\Sigma_0\mathbf{U}_0^H,
\end{equation}
where $\Sigma_0$ and $\mathbf{U}_0$ are the nonzero blocks of $\Sigma$ and its corresponding columns of $\mathbf{U}$, respectively. Let $\mathbf{u}=\mathbf{\Sigma_0}^\frac{1}{2}\mathbf{U_0}^H\mathbf{s}$, which implies that any vector $\mathbf{s}$ is uniquely decomposed in terms of its corresponding $\mathbf{u}$ as
\begin{equation}
\mathbf{s}=\mathbf{U_0}\mathbf{\Sigma_0}^{-\frac{1}{2}}\mathbf{u}+\mathbf{U_1}\mathbf{p},
\end{equation}
where $\mathbf{p}$ is a suitable vector and $\mathbf{U}_1$ spans the null space of $\mathbf{R}_0$. Note that any vector $\mathbf{s}$ with $\mathbf{s}^H\mathbf{R}_0\mathbf{s}=0$ corresponds to zero energy, which leads to a zero output signal. This clearly means that $\mathbf{R}(\theta)\mathbf{U}_1=0$ for every $\theta$. Thus, the term $\mathbf{U}_1$ does not have any effect on the waveform design, and the inner optimization in \eqref{eq:opt_dual} can be expressed as
\begin{equation}\label{eq:opt_dual_3}
\begin{aligned}
& \max_{\mathbf{u}} &&\sum_k\lambda_k |\mathbf{u}^H\tilde{\mathbf{R}}(\theta_k)\mathbf{u}| \\
&\text {s.t.} &&\|\mathbf{u}\|^2_2= 1,
\end{aligned}
\end{equation}
where $\tilde{\mathbf{R}}(\theta_k)=\mathbf{\Sigma}_0^{-\frac{1}{2}}\mathbf{U}_0^H\mathbf{R}(\theta_k)\mathbf{U}_0\mathbf{\Sigma}_0^{-\frac{1}{2}}$.
To solve \eqref{eq:opt_dual_3}, we propose the following efficient scheme. First, note that $|\alpha|=\max\limits_{\phi}\Re(e^{-j\phi}\alpha)$. Thus, \eqref{eq:opt_dual_3} is equivalently written as
\begin{equation}\label{eq:cycle}
\begin{aligned}
&\max_{\mathbf{u},\phi_1,\phi_2,\ldots,\phi_l}\sum_k\lambda_k\Re(e^{-j\phi_k}\mathbf{u}^H\tilde{\mathbf{R}}(\theta_k)\mathbf{u})\\
&=\max_{\mathbf{u},\phi_1,\phi_2,\ldots,\phi_l}\mathbf{u}^H\mathbf{M}(\phi_1,\phi_2,\ldots,\phi_l)\mathbf{u},
\end{aligned}
\end{equation}
where
\small
\begin{equation}
\begin{aligned}
\mathbf{M}(\phi_1,\phi_2,\ldots,\phi_l)=\sum_k\lambda_k ( e^{-j\phi_k}\tilde{\mathbf{R}}(\theta_k) + e^{j\phi_k}\tilde{\mathbf{R}}^H(\theta_k)).
\end{aligned}
\end{equation}
\normalsize
As the optimization is performed over all unit vectors $\mathbf{u}$, the solution for a fixed choice of $\phi_1,\ldots,\phi_l$ is the eigenvector of $\mathbf{M}$ corresponding to the largest eigenvalue, which we denote by $\mathbf{u}_m(\mathbf{M})$ and $\lambda_m(\mathbf{M})$, respectively. Accordingly, we propose the following cyclic solution of the inner optimization.
\begin{enumerate}
\item Start from an arbitrary choice of $\phi_k^{0}$ and set $r=1$.
\item Compute $\mathbf{M}^{r-1}=\mathbf{M}(\phi_1^{r-1},\ldots,\phi_l^{r-1})$ and set $\mathbf{u}^{r}=\mathbf{u}_m(\mathbf{M}^{r-1})$
by calculating $\lambda_m(\mathbf{M}^{r-1})$.
\item Evaluate $\phi_k^{r}$ as the argument of the complex number $(\mathbf{u}^{r})^H\tilde{\mathbf{R}}(\theta_k)\mathbf{u}^{r}$, update $r$ to $r+1$ and go to step 2.
\end{enumerate}
Steps 2 and 3 increase the cost in \eqref{eq:cycle} with respect to $\mathbf{u}$ and the $\phi_k$, respectively. Thus, the cost increases monotonically, which guarantees convergence.
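A compact Python sketch of this cyclic procedure is given below. It assumes the whitened matrices $\tilde{\mathbf{R}}(\theta_k)$ have been precomputed, and it adds a stopping tolerance on the cost increase, which is a practical detail not specified above.
\begin{verbatim}
import numpy as np

def inner_max(R_list, lam, iters=100, tol=1e-10, rng=None):
    """Cyclic maximisation of sum_k lam_k |u^H R~_k u| over unit vectors u."""
    rng = rng or np.random.default_rng()
    lam = np.asarray(lam, dtype=float)
    phi = rng.uniform(0.0, 2.0 * np.pi, len(R_list))
    prev = -np.inf
    for _ in range(iters):
        # Step 2: leading eigenvector of the Hermitian matrix M(phi).
        Mmat = sum(l * (np.exp(-1j * p) * R + np.exp(1j * p) * R.conj().T)
                   for l, p, R in zip(lam, phi, R_list))
        w, V = np.linalg.eigh(Mmat)
        u = V[:, -1]
        # Step 3: optimal phases for the fixed vector u.
        vals = np.array([u.conj() @ R @ u for R in R_list])
        phi = np.angle(vals)
        cost = float(np.sum(lam * np.abs(vals)))
        if cost - prev < tol:
            break
        prev = cost
    return u, cost
\end{verbatim}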
Once a solution, say $\bar{\mathbf{u}}$, of the inner optimization for given values of $\lambda_k$ is obtained, a local optimization technique such as steepest descent is performed to update $\lambda_k$ in the outer minimization. Although the gradient, $\nabla F$, of the cost $F=F(\lambda_1,\ldots,\lambda_l)$ at a given point is simple to compute, applying a steepest descent technique is generally difficult.
However, the number of grid points, $l$, in \eqref{eq:opt2} may be kept small as a result of the high correlation between conceivable return signals. In fact, we employ only a pair of properly selected grid points, typically corner points of a parameter box, for which the outer optimization simplifies substantially to
\begin{equation}
\begin{aligned}
&\min_{0\leq \lambda\leq 1}F(\lambda,1-\lambda),\\
\end{aligned}
\end{equation}
and the resulting one-dimensional optimization is carried out either by a grid search or by a bisection method\cite{fletcher13}.
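Building on the sketch of the inner optimization given above, the outer minimization over the single weight $\lambda$ can, for example, be realized by a simple grid search:
\begin{verbatim}
import numpy as np

def outer_min(R1, R2, grid=np.linspace(0.0, 1.0, 21)):
    """Grid search over lambda for F(lambda, 1 - lambda), two grid points."""
    best_cost, best_lam, best_u = np.inf, None, None
    for lam in grid:
        u, cost = inner_max([R1, R2], [lam, 1.0 - lam])
        if cost < best_cost:
            best_cost, best_lam, best_u = cost, lam, u
    return best_lam, best_u, best_cost
\end{verbatim}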
\section{Numerical Validation}
For different choices of regions in which a robust performance is desired, the efficiency of the proposed algorithm is presented in terms of average and minimum correlation as well as Receiver Operating Characteristic (ROC)\cite{richards05} curves. To examine the method, we calculate its efficiency by Monte-Carlo simulation in selected scenarios. The number of waveform generators is $M=3$, and for each generator $N=30$ basis functions are generated. The system has a bandwidth-time product of $200$, the nominal value of the time-scaling is $\mu' = 0.94$, and the target's reflection coefficient is set to $\sigma_t =1$.
The Gaussian basis functions are generated with mean values, $\mu_k$, uniformly located over the pulse duration, and their standard deviations are selected randomly within an interval $[\sigma_{\min,k},\ \sigma_{\max,k}]$, where $\sigma_{\max,k}$ is chosen such that the effective support $\mu_k\pm 3\sigma_k$ lies entirely within the pulse duration, and $\sigma_{\min,k}$ restricts the highest effective frequency component.
The region in which the performance is evaluated is chosen as a box, which varies in size. The smallest box is within the confines of $\mu \in [\mu_0-\epsilon_{\mu},\mu_0+\epsilon_{\mu}]$ and $\tau \in [-\epsilon_{\tau},\epsilon_{\tau}]$, where $\epsilon_{\mu}$ and $\epsilon_{\tau}$ are determined with respect to twice the system's resolution limits. In order to investigate the effect of varying uncertainty, the box is enlarged by a factor $1/\beta$ to $\epsilon_{\mu}/\beta$ and $\epsilon_{\tau}/\beta$, where $\beta = 0.9,\dots,0.1$.
The minimum and average in-box correlation, $|\mathbf{s}^H\mathbf{R}(\theta)\mathbf{s}|$, are presented in Table~\ref{tab:correlation_avg} and Table~\ref{tab:correlation_min}, in which the results are averaged over $100$ random choices of basis functions. The correlation properties are evaluated over a dense grid, after which the average or minimum, respectively, is taken. The outcome is compared with the cases where a Linear Frequency Modulated (LFM) pulse or a single Gaussian pulse is transmitted from each signal generator, with the same bandwidth-time product. It should be noted that the LFM pulse provides a high resolution, and is not expected to give a robust performance.
\begin{table}[h!]
\caption{Average correlation properties of the designed waveforms.}
\label{tab:correlation_avg}
\begin{tabular}{l c c c c c}
Algorithm & $\beta =1$ & $\beta =0.8$& $\beta =0.6$ & $\beta =0.4$& $\beta =0.2$\\
\hline
\textit{Proposed alg.} & $0.99$ & $0.98$ & $0.97$ & $0.94$ & $0.65$ \\
\textit{Gaussian} & $0.97$ & $0.93$ & $0.88$ & $0.80$ & $0.60$ \\
\textit{LFM} & $0.30$ & $0.26$ & $0.21$ & $0.15$ & $0.09$
\end{tabular}
\end{table}
\begin{table}[h!]
\caption{Minimum correlation properties of the designed waveforms.}
\label{tab:correlation_min}
\begin{tabular}{l c c c c c}
Algorithm & $\beta =1$ & $\beta =0.8$& $\beta =0.6$ & $\beta =0.4$& $\beta =0.2$\\
\hline
\textit{Proposed alg.} & $0.98$ & $0.96$ & $0.93$ & $0.85$ & $0.41$ \\
\textit{Gaussian} & $0.88$ & $0.83$ & $0.70$ & $0.42$ & $0.1$ \\
\textit{LFM} & $0.0039$ & $0.0018$ & $0.0018$ & $0.0018$ & $0.0016$
\end{tabular}
\end{table}
Figure~\ref{fig:ROC_curve} shows the ROC curves. These curves are calculated from the statistical formulation through a Monte Carlo simulation using the estimator in \eqref{eq:estimator} with $10^6$ noise realizations, and a Signal-to-Noise Ratio (SNR) of $10$~dB. The curves are also averaged over $10$ independent draws of basis functions. The figure shows the characteristics when $\beta = [1,0.8,0.6,0.4,0.2]$, for a varying threshold $\gamma = [0,0.05,\dots,4]$.
\begin{figure}[h!]
\centering
\includegraphics[width=8.5cm]{ROC_curve_chirp.eps}
\caption{ROC curves when the SNR is $10$~dB for regions specified by $\beta = [1,0.8,0.6,0.4,0.2]$.}
\label{fig:ROC_curve}
\end{figure}
The curve corresponding to $\beta=1$ illustrates the performance in which the smallest box is selected. This curve is the nominal performance. As seen, the performance decreases when expanding the region. However, it exhibits a robust behavior for relatively large boxes.
The last part of the numerical results shows how an out-of-region source affects the ROC curve. This kind of source increases the probability of false alarm if its return is highly correlated with the matched filter for the region of interest. The position of the source is randomly generated outside the intervals of $\mu$ and $\tau$. The results shown in Figure~\ref{fig:ROC_curve_outsource} illustrate ROC curves when the out-of-region source has a reflection coefficient of $\sigma_{os} =1$. The outcome for $\sigma_{os} =-1$ coincides with that for $\sigma_{os}=1$, and is therefore omitted from the figure. The result is compared with the corresponding ROC curve when no such source exists, for waveforms optimized with $\beta =0.6$.
\begin{figure}[h!]
\centering
\includegraphics[width=8.5cm]{ROC_outsource2.eps}
\caption{ROC curve when an out-of-region source with $\sigma_{os} = 1$ is present. The SNR is $10$~dB and $\beta = 0.6$.}
\label{fig:ROC_curve_outsource}
\end{figure}
\section{Conclusions}
For a wideband radar, we considered a robust technique for waveform design within relatively wide parameter ranges. The method was developed from a statistical framework for detecting a single target, and simplified by approximation to obtain a tractable design. Being robust implies that the waveforms are not designed to provide good resolution properties. Therefore, a next step is to apply a super-resolution technique to the restricted area obtained from this first stage.
The correlating filters at the receiver side were selected to have a matched filter structure. We remark that a different kind of filter design, e.g., \cite{Ackroyd73,Stoica2008,Stoica2008_2}, might result in better performance, which is a topic for future investigation.
As shown by numerical validation, the method ensured reliable detection in the desired range of target parameters. The probabilities of detection and false alarm are illustrated with ROC curves, which showed a small loss of performance when increasing the desired region of reliable detection up to a certain size.
The outcome was compared with two conventional transmit signal designs and showed an increased performance for the investigated problem. Note that LFM signals have good resolution properties, which makes them unsuitable for the discussed application. It was also seen that in the presence of an out-of-region source the detection properties are only slightly affected, which indicates that the design promotes low sidelobe energy.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $f$ be a Hecke-Maass cusp form on $\operatorname{PSL}(n,\mathbb{Z})\backslash\operatorname{PGL}(n,\mathbb{R})/O(n)$, see e.g. \cite{Goldfeld2006}. Let $A(m_1,m_2,\dots,m_{n-1})$ be the Fourier coefficients in its Jacquet-Whittaker expansion. Then the Godement-Jacquet $L$-function associated to $f$ is given as a Dirichlet series, and in terms of the Langlands-Satake parameters $\ga_{i}(p)$, by
$$
L(f,s)=\sum_{m\ge1}{A(m,1,1,\dots,1)\over m^s}=\prod_p\prod_{i=1}^n\left(1-{\ga_i(p)\over p^s}\right)^{-1}.
$$
The exterior square $L$-function is defined via the Euler product
\be\label{fwedgeDef}
L(f,s,\wedge^2)=\prod_p\prod_{1\le i <j\le n}\left(1-{\ga_i(p)\ga_j(p)\over p^s}\right)^{-1}
.
\ee
Two distinct representations of the exterior square $L$-function are known,
the first due to Jacquet and Shalika \cite{JacquetShalika1990}, and the second discovered by
Bump and Friedberg \cite{BumpFriedberg1990}.
It is our goal in this short
note to
present an elementary derivation of the Jacquet-Shalika construction,
expressing the
Euler product
in \eqref{fwedgeDef}
as a classical Dirichlet series in the Fourier coefficients $A(m_1,\dots,m_{n-1})$.\\
On $\operatorname{GL}(2)$,
$$
L(f,s,\wedge^2)=\prod_p\left(1-{\ga(p)\bar\ga(p)\over p^s}\right)^{-1} =\gz(s).
$$
On $\operatorname{GL}(3)$, it is easy to see that the exterior square $L$-function
$$
L(f,s,\wedge^2)=L(\tilde f,s)=\sum_m{A(1,m)\over m^s}
$$
is just the dual $L$-function corresponding to the contragredient form $\tilde f$.\\
On $\operatorname{GL}(4)$, experts have known for some time that the exterior square $L$-function can be expressed as a zeta function times the ``Middle'' $L$-function:
$$
L(f,s,\wedge^2)=\gz(2s) \sum_m {A(1,m,1)\over m^s}.
$$
The general formula on $\operatorname{GL}(n)$
is
as follows.\\
\begin{thm}\label{one}
For odd $n\ge3$, the Dirichlet series for the exterior square $L$-function is given by
$$
L(f,s,\wedge^2)=\sum_{m_2,m_4,\dots,m_{n-1}\ge1}{A(1,m_2,1,m_4,1,\dots,1,m_{n-1})\over (m_2\,m_4^2\,m_6^3\cdots m_{n-1}^{(n-1)/2})^s}.
$$
For even $n\ge2$, we have
$$
L(f,s,\wedge^2)=\gz\left({n\over 2}\,s\right)\sum_{m_2,m_4,\dots,m_{n-2}\ge1}{A(1,m_2,1,m_4,1,\dots,1,m_{n-2},1)\over (m_2\,m_4^2\,m_6^3\cdots m_{n-2}^{(n-2)/2})^s}.
$$
\end{thm}
\section{Proof of Theorem \ref{one}}
As is well-known from the work of Shintani and Casselman-Shalika, the
Fourier coefficient $A(p^{k_{1}},\dots,p^{k_{n-1}})$
is the Schur function
\be\label{AS}
A(p^{k_{1}},\dots,p^{k_{n-1}})
=
S_\lambda(\ga)
.
\ee
Here
$$
\ga=(\ga_{1}(p),\dots,\ga_{n}(p))
$$
are the Langlands-Satake parameters,
and
$$
\lambda=(\lambda_{1},\dots,\lambda_n),
\qquad
\text{ where }
\qquad
\lambda_j=\sum_{i>j} k_i
.
$$
\
Recall
the following identity \cite[(3.3)]{BumpFriedberg1990}:
\bea\nonumber
\sum_{k_1,k_2,\dots,k_{n-1}\ge0}
S_{\gl}(\ga)\
X^{k_1+k_3+k_5+\cdots} \
Y^{k_2+k_3+2k_4+2k_5+\cdots} \qquad\qquad& &\\
\label{eq1}
=L_0\prod_i(1-\ga_i(p)X)^{-1}\prod_{i<j}(1-\ga_i(p)\ga_j(p)Y)^{-1},
\eea
where
$$
L_0=\twocase{}{1-Y^{n/2}}{if $n$ is even;}{1-XY^{(n-1)/2}}{if $n$ is odd.}
$$
Setting $X=p^{-s}$, $Y=p^{-w}$, using \eqref{AS}, and taking the product over all primes $p$ of both sides of \eqref{eq1} gives
$$
\frak{Z}(s,w):=\sum_{r_1,r_2,\dots,r_{n-1}\ge1}{A(r_1,r_2,\dots,r_{n-1})\over r_1^s\, r_2^w\, r_3^{s+w}\, r_4^{2w}\, r_5^{s+2w}\cdots}
=Z(s,w)L(f,s)L(f,w,\wedge^2),
$$
where
$$
Z(s,w)=\twocase{}{1/\gz(\frac n2 w)}{if $n$ is even;}{1/\gz(s+{n-1\over2}w)}{if $n$ is odd.}
$$
\
On the other hand, we have the following Hecke relations
\cite[Theorem 9.3.11]{Goldfeld2006}. For $n$ odd,
\beann
A(m,1,1,\dots,1)A(1,m_2,1,m_4,1,\dots,1,m_{n-1}) \ \ \ \ \ \ \ \ \ \ \ \ \ & & \\
=\sum_{{c_2\,c_4\,c_6\cdots c_{n-1}\,c_n=m}\atop{c_2|m_2, c_4|m_4, c_6|m_6,\dots,c_{n-1}|m_{n-1}}} A(c_n,{m_2\over c_2},c_2,{m_4\over c_4},c_4,\dots,{m_{n-1}\over c_{n-1}})
.
\eeann
For $n$ even, we have
\beann
& & A(m,1,1,\dots,1)A(1,m_2,1,m_4,1,\dots,1,m_{n-2},1) \\
&=&\sum_{{c_2\,c_4\,c_6\cdots c_{n-2}\,c_n=m}\atop{c_2|m_2, c_4|m_4, c_6|m_6,\dots,c_{n-2}|m_{n-2}}} A(c_n,{m_2\over c_2},c_2,{m_4\over c_4},c_4,\dots,{m_{n-2}\over c_{n-2}},c_{n-2})
.
\eeann
In either case, dividing both sides by $m^s$ and $(m_2\,m_4^2\,m_6^3\cdots)^w$ and summing gives
\beann
\mathcal{Z}(s,w)&:= & \left(\sum_{m\ge1}{A(m,1,1,\dots,1)\over m^s} \right)\left(\sum_{m_2,m_4,\dots\ge1}{A(1,m_2,1,m_4,1,\dots)\over (m_2\,m_4^2\,m_6^3\cdots)^w} \right) \\
&= & \sum_{m,m_2,m_4,\dots\ge1}\sum_{{c_2\,c_4\cdots=m}\atop{c_2|m_2,c_4|m_4,\cdots}}{A(c_n,{m_2\over c_2},c_2,{m_4\over c_4},c_4,\dots)\over m^s(m_2\,m_4^2\,m_6^3\cdots)^w }.
\eeann
Interchange the orders of summation and write $m_i=m_i'c_i$:
\beann
\mathcal{Z}(s,w)
&= &
\sum_{c_2,c_4,\cdots,c_n\ge1}
\sum_{{m_2',m_4',m_6'\dots\ge1}\atop{m=c_2\,c_4\cdots c_n}}{A(c_n,m_2',c_2,m_4',c_4,\dots)\over (c_2\,c_4\cdots c_n)^s(m_2'\,c_2\,m_4'^2\,c_4^2\,m_6'^3\,c_6^3\cdots)^w }.
\eeann
Rename $r_1=c_n$, $r_2=m_2'$, $r_3=c_2,\dots$, with
$$
r_{n-1}=\twocase{}{m_{n-1}'}{if $n$ is odd;}{c_{n-2}}{if $n$ is even.}
$$
If $n$ is even, then
$$
\mathcal{Z}(s,w)=\frak{Z}(s,w)
$$
and dividing both sides by $L(f,s)$ proves the theorem in this case. \\
If $n$ is odd, then we have an additional sum over $r_n=c_{n-1}$, which does not appear inside the Fourier coefficients. Thus
$$
\mathcal{Z}(s,w)=\gz(s+{n-1\over2}w) \frak{Z}(s,w) = L(f,s)L(f,w,\wedge^2).
$$
Again, dividing both sides by $L(f,s)$ gives the desired result.
This completes the proof. $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Box$\\
The referee has kindly pointed out to us
that the argument above is equivalent to
the fact
\cite[page 238
(11.9;2)]{Littlewood1940}
that
$$
\prod_{i<j}(1-\ga_{i} \ga_{j})^{-1}=\sum S_\lambda(\ga)
,
$$
where the summation runs over partitions $\lambda$ whose conjugate partition is even.\\
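This identity is easy to check numerically. The following Python sketch compares the Taylor coefficients of $\prod_{i<j}(1-\ga_i\ga_j Y)^{-1}$ with the sums of Schur polynomials over partitions with even conjugate (equivalently, $\lambda_1=\lambda_2$, $\lambda_3=\lambda_4,\dots$), computed via the bialternant formula; the choice $n=4$, the truncation degree, and the random parameters are arbitrary.
\begin{verbatim}
import numpy as np

def schur(lam, a):
    """Schur polynomial S_lam(a) via the bialternant formula."""
    n = len(a)
    lam = list(lam) + [0] * (n - len(lam))
    num = np.linalg.det(np.array(
        [[x**(lam[j] + n - 1 - j) for j in range(n)] for x in a]))
    den = np.linalg.det(np.array(
        [[x**(n - 1 - j) for j in range(n)] for x in a]))
    return num / den

def partitions(total, max_parts, max_val=None):
    """Partitions of `total` into at most `max_parts` parts."""
    max_val = total if max_val is None else max_val
    if total == 0:
        yield []
        return
    if max_parts == 0:
        return
    for first in range(min(total, max_val), 0, -1):
        for rest in partitions(total - first, max_parts - 1, first):
            yield [first] + rest

def mul_trunc(p, q, deg):
    """Product of two power series truncated at degree `deg`."""
    r = np.zeros(deg + 1, dtype=complex)
    for i in range(deg + 1):
        r[i:] += p[i] * q[:deg + 1 - i]
    return r

rng = np.random.default_rng(1)
n, deg = 4, 6
a = np.exp(2j * np.pi * rng.random(n))  # generic points on the unit circle

# LHS: coefficients of prod_{i<j} (1 - a_i a_j Y)^{-1} up to Y^deg.
lhs = np.zeros(deg + 1, dtype=complex)
lhs[0] = 1.0
for i in range(n):
    for j in range(i + 1, n):
        lhs = mul_trunc(lhs, (a[i] * a[j])**np.arange(deg + 1), deg)

# RHS: the coefficient of Y^d collects S_lam(a) with lam' even, |lam| = 2d,
# i.e. lam = (v1, v1, v2, v2, ...) with v1 + v2 + ... = d.
rhs = np.array([sum(schur([v for v in p for _ in (0, 1)], a)
                    for p in partitions(d, n // 2))
                for d in range(deg + 1)])

assert np.allclose(lhs, rhs)
\end{verbatim}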
\subsection*{Acknowledgements}
The author wishes to express his gratitude to Dorian Goldfeld,
Meera Thillainatesan,
Sol Friedberg,
and the referee for many comments and corrections to an earlier draft.
\\
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:introduction}
Every day, vast amounts of data are processed in business and in science to support operations or, through \emph{data science}, to gain understanding of the world behind the data. Recently, interest has developed in obtaining new insights by combining data across different organisations. In some cases, simple data exchange between parties can fulfill this requirement, but often the data involved are privacy- or commercially sensitive, and the parties are legally unable or strategically unwilling to provide others with a copy. In such cases, collaboration can still occur, but more advanced techniques are needed.
A variety of such techniques have been proposed, all of them involving some form of distributed computation. One approach is to somehow obfuscate sensitive data before sharing it, through anonymisation, (homomorphic) encryption or secure multiparty computation. Potential downsides to these methods are imperfect protection and poor performance. Another option is to not share the data at all, but allow processing to be done wherever the data is and share only aggregated results. Compute-to-data solutions and federated machine learning algorithms fall in this category. Trusted third party systems combine remote processing with simple sharing, allowing processing of the raw data to take place at some location not managed by either the data provider or the data consumer.
Various systems have been and are currently being designed to address each of these solutions. However, they typically only support one of these models, so that an organisation wishing to do all of them, perhaps across different data sets and with different partners, will end up having to maintain a variety of systems. Furthermore, existing systems often require a significant amount of trust between the participating parties in order for the system to work at all, with data access treated in an all-or-nothing fashion. As a result, organisations may end up running multiple copies of the system in order to serve different collaborative projects.
Mahiru is a design for a federated, policy-driven data processing and exchange system for use by data scientists. It runs generic workflows, and as such supports all the distributed data processing patterns mentioned above as well as arbitrary combinations of them. Owners of data and software set policies which govern whether and where their data or software can be copied and how data can be processed. These are enforced by their own system and the systems they choose to share copies with. Data scientists use data and software by running local applications which submit workflows to their local Mahiru site, which plans them and executes them in collaboration with the other sites in the system. Throughout the execution, policies are enforced by all the involved sites.
This paper is organised as follows. Section \ref{sec:related_work} describes related work, covering a selection of existing data sharing systems as well as addressing the concept of trust in data sharing. Section \ref{sec:design_considerations} sets out the key design considerations that informed the design of Mahiru. In section \ref{sec:architecture} the overall architecture of Mahiru is described as well as how it operates. Section \ref{sec:policies} describes the policy mechanism in general, with Section \ref{sec:use_cases} providing examples of how it can be used. In Section \ref{sec:implementation} we discuss the implementation in some detail, including some networking and security considerations. Finally, in Section \ref{sec:discussion}, we discuss the strengths and weaknesses of the design and present some possibilities for future development.
\section{Related work}
\label{sec:related_work}
\subsection{Data sharing systems}
\label{sec:data_sharing_systems}
A great many projects are currently underway to create different kinds of cross-organisational data sharing systems. These systems vary widely in scope and architecture. Rather than trying to review all of them, we will describe a few representative systems that illustrate the different goals and approaches taken. A first distinction that can be made here is between sharing non-sensitive (business) data and sharing sensitive (personal) data. In the former case, data is usually shared by providing others with a copy after negotiating terms and securing payment, while in the latter case (remote) data processing comes into play. Systems also differ in the type of data they share, the common types being individual records (transaction-oriented systems), entire data sets (analytics-oriented systems), or streams (typically Internet-of-Things applications). From a technical perspective, there are differences in the degree of centralisation of data and metadata, and whether there is support for data processing.
Perhaps the largest project in this space is International Data Spaces (IDS), a project initiated by the European Union with the goal of connecting and more closely integrating European industry, in particular SMEs\cite{IDS}. IDS defines data exchange protocols, and provides central components including a data broker, clearing house, identity provider, app store, and vocabulary provider \cite{IDSArch}. Data are requested from a data provider, optionally processed, and returned to the data consumer. Several IDS-based or IDS-supporting data exchange systems are coming online. One of these is the Smart Connected Supplier Network, a system for exchanging order-related data between organisations in a supply chain\cite{SCSN}. It provides a standardised API used to send orders, invoices and despatch information. Participants connect to a provider, which routes their messages in a peer-to-peer fashion to the provider of the recipient, who forwards them again.
A second IDS-compatible project is the Mobility Data Marketplace (MDM), which aims to be the German national access point for traffic data\cite{MDM}. MDM comprises a central portal, through which metadata are made available for streams of data. Parties negotiate for access separately, then permissions are set by the provider and data is streamed through a central data broker. Finally, Advaneo is a data marketplace for open and commercial data, with a centralised metadata store, a payment system using credit cards, centralised data exchange for open data, and peer-to-peer data exchange through IDS connectors for more sensitive commercial data.
The Ocean Protocol takes a very different approach, and focuses on trading ownership of or access rights to data\cite{ocean_protocol}. It stores metadata on a blockchain, and enables trade on the blockchain in tokens representing data or data access rights, in a manner similar to a commodities market. Obtaining the data corresponding to a token is mostly outside the scope of the project, which focuses on market making and derivatives more than data access or processing.
The above systems are mostly geared towards trading commercial or public data between companies and governments. When it comes to sharing of personal data, an oft-mentioned project is SOLID\cite{SOLID}. SOLID gives individuals a secure digital locker, in which they can store information about themselves. These data can then be copied by other parties from the locker, but only if the person concerned permits them to do so. In contrast to the above systems, SOLID deals in individual records rather than data sets or streams. There is no payment system built in and data processing is done by the recipient once it has a copy of the record(s). Other, similar digital locker systems are also being developed, both for personal and for business data\cite{Dexes}\cite{genetic_locker}.
A final category of systems for sharing personal data do distributed data analysis. Data Shield is an R-based data analysis system which allows a user to interactively access one or more remote data sets and calculate summary statistics, but only in such a way that no data about individuals is revealed\cite{data_shield}. It does this by offering only a limited API, and by using an implementation that protects against differential privacy violations. Users need individual accounts on the data servers they access, and authenticate using certificates. Vantage6 is a federated machine learning system implementing the Personal Health Train concept\cite{Vantage6}. It assumes a collaboration between a number of organisations that wish to share data. Each data provider sets up a data station, which gets access to one or more data sets. A central server orchestrates a federated machine learning algorithm, repeatedly sending partially trained models to the data stations for further training, until the model coefficients converge. The trained model is then returned to the user, who can use it to make predictions without having had access to the data. Parties within a collaboration are assumed to trust each other.
\subsection{Trust}
\label{sec:trust}
A successful data sharing system is a system that data owners and data users are willing and able to use. Convincing data owners to make their data available is usually the largest hurdle to clear. The design of a data sharing system should therefore be informed by an understanding of how a data owner decides whether to share their data. Data sharing is a social interaction, which can only happen in the presence of sufficient trust.
Trust can be defined as \textit{the willingness of a party to be vulnerable to actions of another party based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party}\cite{mayer_1995}. Without the ability to monitor or control the other party, trusting them inevitably incurs risk. Conversely, one can say that any social interaction involving a risk that cannot be fully mitigated requires trust. When deciding whether to trust or not, potential trustors will evaluate the associated risks and look for \textit{good reasons} to accept those risks, i.e. factors reducing either the (perceived) probability or the consequences of an adverse outcome. If enough good reasons can be found that increase trust and decrease risk until the level of available trust exceeds the remaining risk, the decision to trust is taken\cite{bachmann_2001}.
Thus, for a data sharing system to be used, it needs to offer features that increase the users' trust and decrease risk. Trust can be divided into two categories: system trust and personal trust. System trust or institutional trust is created by legal or social norms, and backed up by the power of institutions including the judiciary, trade associations and technical standards. Note that laws or codes of conduct do not actually make adverse outcomes impossible, and that violating them will destroy trust in that specific instance, but as long as they are generally adhered to and violations are sanctioned in general, they create trust in and throughout the system.
Personal trust concerns the specific counterparty in a social interaction. A potential trustor may attempt to evaluate the specific trustworthiness of the trustee by considering their benevolence, integrity and ability\cite{mayer_1995}. \textit{Benevolence} is the degree to which the trustor believes the trustee to want to do right by them, separate from any reward the trustee may get for doing so. This requires the trustee to have the same understanding of what is right, which is referred to as \textit{integrity}. Finally, a benevolent potential trustee of high integrity needs to have the \textit{ability} to execute on their good intentions to be trustworthy.
In an increasingly complex world with many interactions, personal trust tends to be replaced by system trust because personal trust is very expensive to maintain. If system trust is unavailable, organisations may instead choose to base their interactions more on power than on trust. In an interaction based on trust, a positive scenario is constructed with the expectation that both parties will work to enact it. In an interaction based on power, a negative scenario is constructed and the more powerful party uses the threat of sanctions to keep the subordinate party from enacting it. This is obviously a less desirable situation for the subordinate party, and may lead them to refuse the interaction. Thus, in the context of data sharing systems, strong system trust, backed up by institutions, encourages less powerful parties to take part and share their data\cite{van_der_burg_2021}.
\section{Design considerations}
\label{sec:design_considerations}
Mahiru is designed for doing data science, both in academia and in industry, using data that is not necessarily public but shared between parties conditionally. Its intended users are researchers, and providers of data, software, services and infrastructure. To enable their collaboration, Mahiru needs to provide the necessary technical functionality while reducing risk and increasing trust as much as possible.
\subsection{Risk}
\label{sec:design_considerations_risk}
The fundamental risk of giving someone else a copy of your data is in \textit{loss of control}. This loss of control can then result in loss of privacy and loss of economic opportunity (if the copy is shared further without permission), as well as reputation damage and exposure to sanctions (if one is accountable for misuse to some third party). There are fundamentally two technical means to mitigate this risk: 1) try to assert some level of control over the receiving party's system, and 2) don't give them a copy at all but instead allow them to process the data on the system of the data owner or that of a trusted third party.
For option 1), remote attestation and other Digital Restrictions Management (DRM) technology would be used to ensure that the receiver's system runs trusted software. These technologies cannot attest however that the hardware and software that are used to store and process the data do not have any vulnerabilities, so while DRM can reduce risk, it cannot remove it. In practice, DRM technologies have not proven very effective at preventing unauthorised use of widely distributed data, as only one receiver needs to be able to circumvent the technology and then publish the content, but they may help for data that is shared only with a small number of known users. They are however a system administration burden on the data receiver, and they reduce the receiver's autonomy in administrating their system.
A fundamental design choice of Mahiru is that participants in the data exchange should retain full control over their systems. Accordingly, DRM technologies are not required, and Mahiru's security model does not depend on their presence. Nevertheless, Mahiru users could choose to implement e.g. remote attestation separately, and they may find others more willing to share data with them if they do so.
Option 2), distributed processing, only applies to cases where the data user actually needs data that is derived from the source data, and the derived data is less sensitive than the source data. Also, the data user now needs to trust the data provider (or a third party) with their processing code, and the data provider needs to trust the processing code to output only the intended less sensitive output. The associated risks may be mitigated by inspecting, sandboxing and/or monitoring the processing code, or by inspecting the output before sending it back to the receiver. An advantage here is that the data remains on the system of its owner, and under their full control.
In order to support option 2), the system needs to be able to execute processing workflows in a distributed fashion, and it needs an authorisation system which can describe where data can go and how it can be processed. This significantly increases system complexity, but also allows fundamentally better privacy protection for many use cases. Mahiru provides both distributed execution and an authorisation system for it, as described below.
\subsection{Trust}
\label{sec:design_considerations_trust}
Besides mitigating risk, measures can be taken to increase trust. Perhaps the most important one is setting up behavioural norms that participants in the data exchange are expected to abide by. International Data Spaces is the prime example of this, having invested from an early stage in what they call \textit{soft infrastructure}. The power needed to back up the norm can be provided by legal contracts signed by participants as a condition for joining the data exchange, and for example by having an external party audit their systems from time to time. At the technical level, this means that it must be possible to limit access to the exchange to parties that have followed an appropriate process and have passed inspection. Mahiru supports this by letting a single party control its registry of participating parties and sites (see below), but the actual terms and conditions are outside its scope.
Trust in software applications can be increased by inspecting and certifying them, increasing specific trust in the certified applications. Using Mahiru, auditors that are part of an exchange can mark software providers' software as trustworthy, and data owners can then give permission to process their data based on those marks (see Section \ref{sec:delegation}). Additionally, system trust can be increased by monitoring execution of the applications as they run on other parties' systems, through the prospect of anomalous behaviour being detected and sanctioned. We discuss this further in Section \ref{sec:implementation}.
To help facilitate personal trust, the system can provide participants with information about who controls the data, software and systems that make up the data exchange. This transparency helps participants to evaluate benevolence, integrity and ability of those parties. To this end, Mahiru stores and exchanges metadata about parties, sites, data and software. Some kind of reputation tracking system could be added in order to facilitate exchange of relevant information between the parties, possibly enhancing trust although such a system could also be attacked by malicious parties. This is currently outside our scope.
No discussion of data sharing and trust can be complete without mentioning blockchain technology. Blockchains are said to be \textit{trustless}, in the sense that they do not require any participant to (personally) trust any other specific participant. Perhaps it is better to say that blockchain technology provides a large amount of system trust, backed by technical standards, a consensus algorithm, and a community of participants, which reduces or even removes the need for personal trust. However, it is important to understand that this trust only pertains to the integrity and evolution of the contents of the shared database formed by the blockchain. Thus, a blockchain may record that two parties have expressed a desire to share data under certain conditions, but it cannot enforce those conditions once the data is on the system of the receiver, nor can it for example ascertain that the data has arrived on the system of the receiver. As a result, the use of blockchains in data sharing systems is limited to payment for data (as implemented by Datapace) and trading of ownership rights (as in the Ocean Protocol).
\section{Architecture}
\label{sec:architecture}
Mahiru is designed to be installed once by an organisation, and then used over time to service multiple collaborations. With partners and partners-of-partners joining the system, there will be participants which do not know or trust each other at all, as well as participants which collaborate closely and have a high degree of trust between them. Mahiru takes the no-trust situation as a starting point: it is a federated system in which each participant operates its own site, consisting of its own software, running on its own hardware, sitting on its own premises. The organisation's data and software are stored in this site, and are not available to anyone by default.
Any of these aspects may be relaxed: the servers may be put in a data center operated by someone else, they can be virtual machines running on cloud hardware, software may be obtained from elsewhere and managed by someone else (e.g. Mahiru sites could be offered as a Software-as-a-Service solution), and the Mahiru policy system may be used to give permission for data and software to be transmitted to others or processed on their behalf. None of these are required however, full control is always possible.
A Mahiru data exchange system thus consists of many \emph{sites}, operated by different \emph{parties}. These sites communicate over the Internet, and need to be able to find each other and set up secure communications. This is facilitated by the \emph{Registry}. The registry contains a list of all parties and sites involved in the data exchange, but no data, software or policies. In most cases, it will be a simple database maintained by an organisation running the data exchange, which would verify any general requirements made by the exchange on the parties and their sites (e.g. minimum security measures) in the physical world, and only add those who qualify to the database. Alternatively, anyone could be allowed to register, in which case it would be up to individual parties to decide which other parties and sites to trust.
\subsection{Registry}
\label{sec:registry}
The registry stores information about the parties and sites involved in the data sharing system. Each of these objects needs to be uniquely identifiable. Mahiru does not have a global unique identifier service. Instead, each party uses a DNS name it controls as its namespace, and then creates unique identifiers of the form \texttt{<type>:<ns>:<name>}, where \texttt{<ns>} is its namespace and \texttt{<type>} is either \texttt{party} or \texttt{site}. The \texttt{<name>} part must be unique within the namespace. Note that parties are identified by an identifier within their namespace, not by the namespace itself. Furthermore, departments or subsidiaries can be registered as separate parties with their own namespace, and grouped under their parent party, as described in Section~\ref{sec:organising_objects} below.
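By way of illustration, a minimal sketch of parsing such identifiers is shown below; the exact character grammar of the name and namespace parts is an assumption, as only the overall form is fixed here.
\begin{verbatim}
import re

_IDENTIFIER = re.compile(
    r'^(?P<type>party|site):(?P<ns>[a-z0-9.-]+):(?P<name>[A-Za-z0-9_-]+)$')

def parse_identifier(identifier: str) -> dict:
    """Split a Mahiru identifier of the form <type>:<ns>:<name>."""
    match = _IDENTIFIER.match(identifier)
    if match is None:
        raise ValueError(f'Invalid identifier: {identifier}')
    return match.groupdict()

# parse_identifier('party:example.org:hospital_x')
# -> {'type': 'party', 'ns': 'example.org', 'name': 'hospital_x'}
\end{verbatim}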
Besides its identifier and namespace, a party registration contains several X.509 certificates. The party's main certificate is used to sign registry records and policy rules. Its User CA certificate is used to sign certificates for individual users, which are also stored in the registry within the party's record. Sites have a unique identifier, and store the identifiers of their owner and administrator. The owner of a site is the party using it, the administrator the party controlling the servers and software. These can be the same, for a self-hosted site, or different if the site is hosted by another party. Finally, each site has an endpoint URL where it can be reached, and an HTTPS certificate for communicating.
If the CA and the Registry are kept separate, as they should be, then this provides some level of security redundancy. If the Registry is compromised, records can be removed to achieve a denial-of-service attack, but records cannot be modified without the private key of the subject of the record (which is stored at their site and potentially offline), and no new records can be added without separately compromising the CA to create the corresponding certificates. Likewise, if the CA is compromised, the attacker would still need to compromise the Registry as well for the rest of the system to accept its newly minted fake certificates.
In order to keep the Registry from becoming a central bottleneck in exchanges with very many parties and sites, the information in the Registry is replicated to each site rather than requested every time it is needed. Transactions on the Registry are serialised and numbered, and each object is tagged, upon creation, with the number of the transaction in which it was created. Objects are never removed. Instead, a deletion operation adds a second tag to the object with the transaction number of the removal, thus marking it as having been removed in that and subsequent versions. Updates are processed as removal and simultaneous addition of the updated object.
Sites then request updates from the current version of their replica to the current version of the canonical store, and receive a set of objects to add, a set of objects to remove, the new version, and a timestamp specifying until when their newly updated replica will be valid. Whenever the local replica is expired, it must first be updated before its contents can be used again.
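A minimal sketch of this versioning scheme follows; the class and field names are illustrative, and the validity timestamp handed out with each update is omitted for brevity.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Record:
    obj: Any
    created: int                   # transaction number of creation
    deleted: Optional[int] = None  # transaction number of deletion, if any

@dataclass
class CanonicalStore:
    version: int = 0
    records: list = field(default_factory=list)

    def add(self, obj):
        self.version += 1
        self.records.append(Record(obj, self.version))

    def delete(self, obj):
        self.version += 1
        for rec in self.records:
            if rec.obj == obj and rec.deleted is None:
                rec.deleted = self.version

    def updates_since(self, from_version):
        """Objects to add and remove to bring a replica up to date."""
        to_add = [r.obj for r in self.records
                  if r.created > from_version and r.deleted is None]
        to_remove = [r.obj for r in self.records
                     if r.created <= from_version
                     and r.deleted is not None and r.deleted > from_version]
        return to_add, to_remove, self.version
\end{verbatim}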
Instead of a central database and a replication system, a completely decentralised data exchange could be achieved by putting the registry on a (public or private) block chain. Note that no trading or data processing would take place on this block chain, it would only register the parties and sites, and contain a mechanism for admitting new parties to the system. The data exchange would then become a decentralised autonomous organisation.
\subsection{Sites}
\label{sec:sites}
Data storage, exchange and processing is done exclusively by the sites. Each site stores data and software (together, \emph{assets}) as well as the policies governing their exchange, processing and use. Sites offer an external REST API served over HTTPS covering three areas: 1) policy replication, 2) asset sharing, and 3) data processing. Authentication is done through standard HTTPS X.509 certificates provided by an external Certificate Authority; additionally, certificates may be stored in the Registry (and removed from it, thus invalidating them), and/or OCSP may be employed to facilitate rapid retraction of compromised certificates. Sites also have an internal REST API, which is used for managing assets, managing policies, and submitting data processing requests.
An asset may be copied (downloaded) from one site to another, if its owner has given permission to do so. The receiver of the asset is expected to treat the asset according to its owner's policies. Parties may (partially) delegate policy decisions regarding one or more of their assets to other parties in the system, who can then control the asset by changing their own policies. Thus, in order to know what permissions one has regarding a certain asset, policies from several sources have to be evaluated together. To facilitate this, policies are replicated between the sites using the same replication algorithm used by the Registry. Each site maintains a local replica of all the parties' policies, then evaluates them locally whenever it needs to make a policy decision.
Note that this means that each site's copy of the combined policies is only eventually consistent with the global state of the policies, because policy updates at a given site do not propagate immediately to all other sites. As a result, access may be granted based on recently retracted policies. However, as policy freshness requirements are set by the originating site, each participant can decide for itself whether to set a long timeout (slow propagation, but low load on the replication service) or a short one (fast propagation, higher load and cost).
\subsection{Operation}
\label{sec:operation}
\begin{figure}
\includegraphics[width=\columnwidth]{fig1_global_architecture.png}
\caption{Mahiru's global architecture and operation. The system consists of any number of sites and a registry which records their identity and location. 1) Workflow submission, 2) registry update, 3) policy update, 4) planning, 5) execution request, 6) distributed execution, 7) result return. See Section~\ref{sec:operation} for details.}
\label{fig:global_operation}
\end{figure}
Requests for (processing of) data in Mahiru take the form of workflows of the Directed Acyclic Graph (DAG) type, with data flowing along the edges; the format can be described as a simplified version of the Common Workflow Language\cite{CWL}. Each node (workflow step) in the DAG employs a \emph{compute asset} to process one or more \emph{data assets}, producing one or more new data assets which are propagated to subsequent steps. We envision a user using any of a variety of applications with Mahiru support, depending on their context and use case, with the workflow generated by the application rather than by the user directly.
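To make the format concrete, a minimal one-step workflow (cf.\ Figure~\ref{fig:one_step_workflow}b) could be serialised as sketched below; the field names are illustrative only, and the asset identifiers follow the \texttt{<type>:<ns>:<name>} scheme introduced in Section~\ref{sec:policies}.
\begin{verbatim}
# One-step workflow: apply compute asset Bproc to dataset D.
workflow = {
    "inputs":  {"dataset": "asset:ns_a:D"},      # workflow input -> asset
    "steps": {
        "process": {
            "compute_asset": "asset:ns_b:Bproc",
            "inputs":  {"in": "dataset"},        # edge from workflow input
            "outputs": ["out"],
        },
    },
    "outputs": {"result": "process.out"},        # edge from a step output
}
\end{verbatim}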
Figure~\ref{fig:global_operation} shows Mahiru's global operation. Execution is initiated by the user's application, which submits a workflow to the local Mahiru site (1). The site then ensures that its Registry replica is valid (2), and that the policy replicas for the sites serving policies for the assets used in the workflow are likewise current (3). It will then create an execution plan (4), which for each step in the workflow determines at which site it will be executed (see Section \ref{sec:policy_evaluation}). Each of the involved sites is then sent an execution request (5), comprising the workflow and the plan. These sites will in turn ensure they have up-to-date policy replicas (not shown), verify that the actions they are asked to perform are permitted, and start executing steps as their inputs become available. During this process, intermediate results may be exchanged between the sites via the asset sharing part of their API (6), each transaction again being checked against the policies set by the data owners. The originating site finally obtains the final outputs from the sites that produced them, verifies that usage permissions exist, and returns the outputs to the user's application (7).
\subsection{Identity management}
Mahiru's users are individuals associated with a party that is involved in the system. System administrators and data managers can be identified and authenticated at each local site using off-the-shelf solutions. Workflow submission however requires a separate solution, since workflow execution is distributed across sites. Running a workflow requires resources, which may have to be paid for, and this requires a means for each party involved in the execution to prove that the workflow was submitted by a given party. Parties in turn may want to know which of their users have submitted which workflows for accounting and accountability purposes, but this information should not be shared with other parties, to avoid violating the privacy of their users (employees).
Non-repudiation can be solved by a digital signature on the workflow, which in Mahiru is created by the user using a pseudonymous user certificate and key. This certificate is signed by a signing certificate controlled by the party's system administrator, which in turn is signed by the central CA, with both the party CA and user certificates registered in the Registry. As a result, any party involved in execution can verify the signature, and prove that it was created by a user belonging to a particular party, but it cannot identify the user beyond the pseudonymous identity. This enables accounting and reporting abuse, while protecting employee privacy.
\section{Policies}
\label{sec:policies}
In a Mahiru system, each participating party can contribute assets (software and data) by putting them into its local site. In order to allow others to make use of these assets, policies must be set. Mahiru has a bespoke policy system that occupies a niche in between system-level policies (e.g. file access, firewall rules) and social-level policies (e.g. laws, licenses, contracts). Policies are collections of rules, with rules of a small number of types combined to permit sharing of, processing of, and delegation of control over assets.
In order to be able to give permissions regarding assets to sites and parties, all of these objects must be uniquely identified. This is done by an identifier of the form \texttt{<type>:<ns>:<name>}. Object types include \texttt{party}, \texttt{site}, \texttt{asset}, collections and categories of these (see below), and \texttt{result}, which is covered in Section \ref{sec:policy_evaluation}. Each participating party owns a namespace, which is a DNS (sub)domain registered to it that is used for the \texttt{<ns>} field. It is the responsibility of that party to ensure that all its objects are given a unique (within the namespace and organisation) \texttt{<name>} field. Each object is considered to be owned by the owner of the namespace it is in. This is strictly an administrative ownership which establishes the party with primary control over the asset within the Mahiru system; it does not necessarily correlate to legal ownership of any intellectual property related to the asset.
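For later sketches we fix the notation with a hypothetical one-line parser for these identifiers:
\begin{verbatim}
from typing import NamedTuple

class ObjectId(NamedTuple):
    type: str   # e.g. "party", "site", "asset", "asset_collection"
    ns: str     # namespace: a DNS (sub)domain owned by a party
    name: str   # unique within the namespace

def parse_id(identifier):
    """Split an identifier of the form <type>:<ns>:<name>."""
    type_, ns, name = identifier.split(":", 2)
    return ObjectId(type_, ns, name)

assert parse_id("asset:ns_a:D") == ObjectId("asset", "ns_a", "D")
\end{verbatim}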
Mahiru rules are relations between assets, sites, parties and other objects. For each rule type, objects of particular types are referred to, one of which is the \emph{subject} of the rule. Rules may only be made by the owner of their subject. This is enforced by a cryptographic signature attached to the rule, which can be verified against the signing certificate of the party owning the namespace the subject is in, as registered in the Registry and countersigned by a CA. These signatures are created by the parties when they create the rules, and can be created off-line if security requirements warrant the extra effort. A Trusted Platform Module could provide a less secure but more convenient alternative, depending on the threat model.
\subsection{Organising objects}
\label{sec:organising_objects}
The first two rules presented here provide means to group assets. Mahiru has two kinds of groupings, collections and categories. Collections and categories of assets are considered assets themselves, so that they can be nested. Assets may be placed in an asset collection by an \textbf{InAssetCollection}(\emph{asset}, collection) rule. The asset is the subject of the rule, and asset collections are identified by an identifier of type \texttt{asset\_collection}. Assets may be categorised by an \textbf{InAssetCategory}(asset, \emph{category}) rule, which has the category (of type \texttt{asset\_category}) as its subject.
Thus, asset collections and categories both group assets, but asset owners place their assets in collections, while category owners place assets in their categories. Permissions propagate through collections (i.e. if one has access to an asset collection, one has access to all assets in that collection, recursively) but not through categories. The distinction between collections and categories becomes relevant when the owner of the asset category or collection is not the same as the owner of the asset. Such cross-namespace rules are used to delegate authority, as described below.
Besides assets, also sites and parties can be categorised by corresponding rules. A site can be put into a site category by the owner of that category via an \textbf{InSiteCategory} rule, and a party can be put into a party category via an \textbf{InPartyCategory} rule. Unlike for assets, permissions propagate down through categories of sites and parties, so giving a particular site category access to an asset gives access to all sites in the category. The exact algorithm is explained below.
\subsection{Access and usage rights}
\label{sec:access_rights}
Mahiru features two kinds of permissions: access permission and usage permission. Access permission for an asset is given to a site, and permits the asset to be present in the Mahiru software on that site and, for software assets, for them to be run. This permission is given through a rule of the form \textbf{MayAccess}(\emph{asset}, site). Here, \emph{asset} is an asset or asset collection, while \emph{site} is a site or a site category, and the permission applies to any asset in the collection or any subcollections, and any site in the category or subcategories. Since assets can be placed in collections only by their owners, access permissions for a collection can be given only by its owner, and owners of site categories determine which sites are in them, asset owners have ultimate control over the path from asset to site.
Usage permission for an asset is given to a party, using a rule of the form \textbf{MayUse}(\emph{asset}, party, conditions). Asset collections and party categories apply as above. This rule permits the named party to extract the asset from a Mahiru site and use it according to the given conditions (e.g. only for non-commercial use). Conditions may be expressed in plain text, or some formal description language such as the Open Digital Rights Language (ODRL) could be used. Note that by definition, the conditions cannot be enforced by Mahiru. Instead, they represent a non-binding request, an IP licence, and/or a reminder of a separately agreed contractual obligation.
Wildcards may be used for the non-subject objects in these rules, which makes it possible to declare assets to be publicly available.
\subsection{Rules for data processing}
\label{sec:processing_rules}
In Mahiru, data is processed using directed acyclic graph-style workflows, each step comprising the application of a compute asset (software) to one or more data assets (a data set or previous result). This produces results, which can in turn be inputs for another compute asset in the same workflow. Each step in the workflow must be executed by a site, which requires the presence of the step's inputs, the compute asset, and the step's outputs at that site. For this to be permitted, the site must have access rights (as described previously) to all of these. This poses a problem, because access rights for the outputs must be set before the workflow is executed\footnote{Manually approving individual requests when needed works in principle, but does not scale.}, at which time the outputs do not exist and cannot be referred to by a rule.
This is solved through the use of a \textbf{ResultOfDataIn}(\emph{data\_asset}, compute\_asset, output, collection) rule. With this rule, the owner of the specified data asset declares that if the asset is processed using the given compute asset then the asset produced for the named step output must be considered to be in the given collection. Access and usage permissions can then be given for this collection. The data asset may be a collection of assets instead, and for the compute asset an asset category may be given, with the rule applying recursively.
A second rule type, \textbf{ResultOfComputeIn} has the same attributes but has the compute asset as its subject, allowing its owner to give permissions as well. In this case, an asset category may be specified for the data asset, and an asset collection for the compute asset. Permission is needed from both the owner of the data asset and the owner of the compute asset for a compute step to be allowed to run.
Wildcards may be used here as well, so that for example permission to process a data asset $A$ in arbitrary ways on any site may be set by a combination of InAssetCollection($A$, $C$), ResultOfDataIn($C$, *, *, $C$), and MayAccess($C$, *).
\subsection{Delegation of authority}
\label{sec:delegation}
In some cases, asset owners may wish to delegate some or all of their authority regarding their assets to another party. For example, a data owner may have joined a consortium providing a pool of data assets to others, with access controlled by the consortium organisation, or they may want to delegate inspection of compute assets to be used to process their data to a trusted auditor. In Mahiru, this is done by creating rules which refer to (usually) collections and categories in different namespaces. In the first example, the consortium would set up a site, and allocate an asset collection identifier $C$ in their namespace. The consortium members would then add InAssetCollection($A$, $C$) for each asset $A$ they contribute to the consortium, thus enabling the consortium to give rights to $A$ via rules of the form MayAccess($C$, site) and MayUse($C$, party). For the second case, the auditor would set up a site and create a category $T$ of trusted software, then add an InAssetCategory(software, $T$) rule for each inspected and approved compute asset. Data owners would then refer to $T$ in their ResultOfDataIn rules to delegate the assessment of compute assets.
\subsection{Policy evaluation}
\label{sec:policy_evaluation}
When a workflow execution request or a data access request arrives at a site, the combined policies must be evaluated in order to determine whether to grant the request. For a data access request, this means verifying that the requester has access permissions for the requested asset. For a workflow execution request, the site must verify for each step assigned to it whether that step is allowed to run there, which entails verifying that it has access permissions to the step's inputs, the compute asset, and the step's outputs. To plan a workflow submitted by a local user, for each step the set of sites with permission to run that step is determined, after which possible plans can be enumerated by backtracking through the graph, or some optimal plan can be devised via dynamic programming.
From the above, it is clear that the central policy evaluation operation is to determine whether a particular site $s$ has access to a particular asset $a$. For \emph{primary assets}, which are data and compute assets that were put into the system directly, this is relatively straightforward. Define relations $M$ and $C_{oA}$ as
\begin{equation}
(s, a) \in M \; \textit{iff there exists a rule MayAccess(a, s)}
\end{equation}
\begin{equation}
\begin{array}{rll}
(a, c) \in C_{oA} & \textit{iff} & a = c, \textit{or} \\
& & \textit{a rule InAssetCollection(a, c) exists}
\end{array}
\end{equation}
and $C_{tS}$ analogously to $C_{oA}$ but for \textit{InSiteCategory} rules. We can then define the has-access relation $H$ as
\begin{multline}
(s, a) \in H \Leftrightarrow \exists (s', a') \\
(s, s') \in C^+_{tS} \wedge (s', a') \in M \wedge (a, a') \in C^+_{oA}
\end{multline}
where superscript plus denotes the transitive closure. Informally, $s$ has access to $a$ if there is a path from $s$ directly or up through the site categories to a MayAccess rule referencing $a$ or an asset collection that directly or indirectly contains $a$.
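The relation $H$ translates directly into a reachability computation. A sketch in Python (relations given as sets of pairs; all names are illustrative):
\begin{verbatim}
def ancestors(x, contains):
    """x itself plus everything reachable by repeatedly following
    the containment relation upward (reflexive-transitive closure)."""
    result, frontier = {x}, {x}
    while frontier:
        frontier = {b for (a, b) in contains if a in frontier} - result
        result |= frontier
    return result

def has_access(site, asset, may_access, in_collection, in_site_category):
    """(site, asset) in H: some MayAccess rule must connect a category
    above the site with a collection above the asset."""
    return any((s, a) in may_access
               for s in ancestors(site, in_site_category)
               for a in ancestors(asset, in_collection))
\end{verbatim}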
\emph{Secondary assets} are assets derived from other assets, specifically intermediate and final results of workflows. Since these assets do not yet exist when the policies are set, there are no direct MayAccess rules referring to them. Instead, they are implicitly added to asset collections through the use of ResultOfDataIn and ResultOfComputeIn rules, and those rules must be combined with the definition of the workflow that produced the asset to determine which sites can access it. For each workflow step output, access permissions must be obtained from the owner of the compute asset used, the owner of each input, and the owners of assets used to produce each input, recursively.
\emph{Permissions} for result $r$ of a workflow step are represented by a permissions object $P(r)$, which is a set of sets of assets or asset collections. Given $P(r)$, we can define a second has-access relation $H'$ as
\begin{equation}
(s, r) \in H' \Leftrightarrow \forall C\!\in\!P(r) \:\: \exists c\!\in\!C \:\: (s, c)\!\in\!H
\end{equation}
In words, a site $s$ needs to have access to at least one asset or collection in each set in $P(r)$ to be able to access result $r$. For a compute asset $a$, which is always a primary asset, we set $P(a) = \{\{a\}\}$, so that $(s, a) \in H' \Leftrightarrow (s, a) \in H$.
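Reusing \texttt{has\_access} from the sketch above, and representing $P(r)$ as a set of frozensets, $H'$ becomes a short check:
\begin{verbatim}
def has_access_result(site, P_r, may_access,
                      in_collection, in_site_category):
    """(site, r) in H': the site must reach at least one asset or
    collection in every set of the permissions object P(r)."""
    return all(any(has_access(site, c, may_access,
                              in_collection, in_site_category)
                   for c in C)
               for C in P_r)
\end{verbatim}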
It remains to define how to propagate $P$ across workflow steps. This is done via the ResultOfComputeIn and ResultOfDataIn rules, for which we will define two more relations, with $C_{tA}$ defined analogously to $C_{tS}$ but for \textit{InAssetCategory} rules:
\begin{equation}
\begin{array}{rl}
(i, p, n, c') \in R_C & \textit{iff there exists a rule} \\
& \textit{ResultOfComputeIn}(i', p', n', c') \textit{ where} \\
& (i' = * \vee (i, i') \in C_{tA}^+) \; \wedge \\
& (p' = * \vee (p, p') \in C_{oA}^+) \; \wedge \\
& (n' = * \vee n' = n)
\end{array}
\end{equation}
\begin{equation}
\begin{array}{rl}
(i, p, n, c') \in R_D & \textit{iff there exists a rule} \\
& \textit{ResultOfDataIn}(i', p', n', c') \textit{ where} \\
& (i' = * \vee (i, i') \in C_{oA}^+) \; \wedge \\
& (p' = * \vee (p, p') \in C_{tA}^+) \; \wedge \\
& (n' = * \vee n' = n)
\end{array}
\end{equation}
These relations contain any combination of input, processing compute asset, output name and output collection for which a matching rule exists. Note that rules may specify wildcards ($*$) for the data asset, compute asset and output name. Also note that the use of categories and collections is reversed between the two rules because the subjects are reversed; in both cases a collection can be used for the subject of the rule, and a category for the other asset. The target $c'$ must be an asset collection.
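In an implementation, testing whether a given rule contributes a tuple to $R_C$ or $R_D$ reduces to a wildcard-aware containment check per field, for example (reusing \texttt{ancestors} from the sketch above):
\begin{verbatim}
def matches(rule_field, obj, contains):
    """One field of a ResultOf* rule: '*' matches anything; otherwise
    obj must equal the field or be (transitively) contained in it."""
    return rule_field == "*" or rule_field in ancestors(obj, contains)
\end{verbatim}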
Given a result $r$ produced by applying compute asset $p$ to inputs $I$ and taking its output $n$, we can compute $P(r)$ as
\begin{equation}
\begin{array}{rl}
P_C(r) = \{ & \\
& \{c' \mid (c, p, n, c') \in R_C \wedge c \in C \} \\
& \quad \mid C \in P(i) \wedge i \in I\}
\end{array}
\end{equation}
\begin{equation}
\begin{array}{rl}
P_D(r) = \{ & \\
& \{c' \mid (c, p, n, c') \in R_D \wedge c \in C \} \\
& \quad \mid C \in P(i) \wedge i \in I\}
\end{array}
\end{equation}
\begin{equation}
P(r) = P_C(r) \cup P_D(r)
\end{equation}
Intuitively, we propagate each permission set in each $P(i)$ via matching ResultOfComputeIn and ResultOfDataIn rules to a set of output collections. Recall that access to at least one collection in each set in $P(r)$ is needed, so that each owner of an input and the owner of the compute asset gets control over the result. $P$ is computed recursively for the whole workflow, so that access and usage of the end result are only granted if all owners agree.
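Putting the pieces together, the propagation of $P$ for one step output can be sketched as follows, where \texttt{targets\_RC(c, p, n)} and \texttt{targets\_RD(c, p, n)} are hypothetical helpers returning the collections $c'$ with $(c, p, n, c') \in R_C$ and $(c, p, n, c') \in R_D$ respectively, built from the rule base with \texttt{matches} above:
\begin{verbatim}
def propagate_permissions(inputs_P, compute_asset, output_name,
                          targets_RC, targets_RD):
    """P(r) for one step output: the union of P_C(r) and P_D(r).
    inputs_P is the list of permissions objects P(i), one per input i;
    each P(i) is a set of frozensets of assets/collections."""
    P_r = set()
    for P_i in inputs_P:
        for C in P_i:
            P_r.add(frozenset(
                c2 for c in C
                for c2 in targets_RC(c, compute_asset, output_name)))
            P_r.add(frozenset(
                c2 for c in C
                for c2 in targets_RD(c, compute_asset, output_name)))
    return P_r  # an empty inner set means no site can access the result
\end{verbatim}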
\section{Use cases}
\label{sec:use_cases}
With its ability to run arbitrary workflows in a distributed fashion, Mahiru allows for a variety of different execution patterns or archetypes~\cite{shakeri_2019}. In this section, we demonstrate how various common patterns would be implemented in a Mahiru data exchange.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig2_one_step_workflow.png}
\caption{Downloading and simple processing of data.
\enspace\textbf{a)} Policy rules needed for \textsf{B} to download and use dataset \textsf{D} from party \textsf{A}. Sites \textsf{Asite} (where the data originates) and \textsf{Bsite} (where it is copied to) are permitted to have a copy of \textsf{D} through MayAccess rules, and a MayUse rule permits party \textsf{B} to take it out of the Mahiru system.
\enspace\textbf{b)} A simple processing workflow comprising a single processing step using \textsf{B}'s compute asset \textsf{Bproc} applied to \textsf{A}'s dataset \textsf{D}.
\enspace\textbf{c)} Policy rules to allow running the one-step workflow at site \textsf{Bsite}. \textsf{Asite} and \textsf{Bsite} are again given access rights, allowing \textsf{D} to be copied to \textsf{Bsite}. Then \textsf{D} is allowed to be processed using \textsf{Bproc} by both \textsf{A} (via a ResultOfDataIn rule) and \textsf{B} (via a ResultOfComputeIn rule), with the result declared to be in the collections \textsf{ns\_a:Dres} and \textsf{ns\_b:Dres} respectively, to both of which \textsf{Bsite} needs access for the step to be run at \textsf{Bsite}. Finally, \textsf{B} needs to be permitted to use the result for it to be able to exit the system.
\enspace\textbf{d)} Permissions for running the same workflow but with the step executed at \textsf{Asite} (compute-to-data). Data, software and output are permitted to be at \textsf{Asite} through MayAccess rules, while the output is also permitted to go to \textsf{Bsite}, and party \textsf{B} is allowed to use it.
\enspace\textbf{e)} Policy for allowing a trusted-third-party scenario at \textsf{Csite}. \textsf{A} and \textsf{B} do not share their data and software with each other, but allow them to be copied to \textsf{Csite}, which can also have the results. Processing can then take place there, with the results once again permitted to go to \textsf{B} via \textsf{Bsite}.
}
\label{fig:one_step_workflow}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig3_federated_learning.png}
\caption{Federated machine learning using Mahiru.
\textbf{a)} Workflow for horizontally partitioned federated learning. An initial set of model parameters \textsf{PInit} from a user at party \textsf{D} is updated based on input datasets \textsf{A} and \textsf{B} provided by parties \textsf{A} and \textsf{B} respectively, after which the updated parameters are combined and the process is repeated until convergence. The software (compute assets \textsf{Train} and \textsf{Merge}) is supplied by party \textsf{C}.
\enspace\textbf{b)} Permissions set by software supplier \textsf{C}. We assume here that the software is Open Source, and that any data may be processed with it and the results used by anyone. Different licensing models may be expressed through a policy that limits use to specific inputs or which only allows specific parties to use the results.
\enspace\textbf{c)} Policy set by data supplier \textsf{A} to allow local (at \textsf{Asite}) processing of its data using \textsf{C}'s software. Data asset \textsf{A} may be used as an input for the \textsf{Train} step, and the resulting updated parameters may be processed further repeatedly using \textsf{Train} and \textsf{Merge} steps. The other data supplier \textsf{B} sets identical policies.
\enspace\textbf{d)} Policy set by \textsf{A} to allow the results to be copied to \textsf{Dsite} and used by party \textsf{D}.
\enspace\textbf{e)} Policy set by \textsf{D} to allow the initial parameters to be used in the workflow.
}
\label{fig:federated_learning}
\end{figure*}
\subsection{Downloading data}
The simplest operation that can be performed by Mahiru is a data download. This is achieved by the data user submitting an empty workflow, which has one output connected to its one input and no processing steps. The desired data asset is set as the input, and the output will simply be the requested data. To be able to run this workflow, the requester and their site need MayUse and MayAccess permissions for the data asset (see Figure~\ref{fig:one_step_workflow}a).
\subsection{Local processing}
The previous case can be extended to do data processing by adding a step to the workflow that runs a compute asset, possibly created by the data user (Figure~\ref{fig:one_step_workflow}b). If the user's site can access the compute asset that is used in the step as well as the step's output, and the user may use the result, then this workflow can be run by downloading the data asset to the user's site and then running the step there. Note that the data owner needs to give permission for both the transfer of the input data and for giving the output to the data user (Figure~\ref{fig:one_step_workflow}c).
\subsection{Compute-to-data}
Giving the user permissions to obtain a copy of the data requires a large amount of trust, because the data owner has no control over the data user's system. Alternatively, the exact same workflow could be run with the step running at the site of the data owner. This way, no access permissions to the data are needed for the user's site. However, the data owner's site must now be able to access the compute asset as well as its output, and the data user's site still needs to be able to access the output (Figure~\ref{fig:one_step_workflow}d). If these permissions are available, then the workflow can be executed by sending the compute asset to the data owner's site, executing the step there, then downloading the result to the user's site. Note that if the roles of the software and the data are swapped, i.e., the software owner does not allow the software to go to another site and the data is sent to it from the user's site, processed, and the result returned to the user, then we obtain a Software-as-a-Service archetype (not shown).
\subsection{Trusted third parties}
In some cases, the data owner does not trust the data user with a copy of the data, and the data user does not trust the data owner with a copy of their software. In this case, if a third party can be found that is trusted by both the data owner and the data user, then permissions can be set accordingly and the processing can be done by this trusted third party (Figure~\ref{fig:one_step_workflow}e). Note that there may be other reasons for using a trusted third party, such as the availability of large compute resources\cite{Scheerman_ODISSEI}. If the owner of the data and the software is the same party in such a case, then this setup becomes an Infrastructure-as-a-Service model (not shown).
\subsection{Federated machine learning}
Federated machine learning is a somewhat more complicated case, because the algorithm involves many processing steps. For horizontally partitioned data, a model can be trained by repeatedly sending it to the data sites, updating its coefficients there using the local data, then sending back the new coefficients and combining them with the ones obtained from the other data sites. This can be expressed in a Mahiru workflow which alternates a set of parallel training steps with combining steps (Figure~\ref{fig:federated_learning}a). This workflow involves four parties: data providers A and B, software provider C, and data user D. C's policies govern the software, where it can go and how it can be used, with a very liberal policy shown in Figure~\ref{fig:federated_learning}b). The data providers allow C's software to be used with their data (Figure~\ref{fig:federated_learning}c), and they allow D to have the results (Figure~\ref{fig:federated_learning}d). Finally, D needs to allow its inputs to be used (Figure~\ref{fig:federated_learning}e). If the data cannot leave their owners' sites, then the training steps have to run at the data sites in a compute-to-data fashion, while the combining steps can go anywhere the data owners allow the coefficients to go. If additional permissions are available then other ways of executing the workflow may be possible too. For example, if one of the data owners has given a trusted third party's site access to the data, then the training steps for that data set can be run at this site as well, and possibly more efficiently. The policy evaluation algorithm will automatically take this into account, and produce corresponding execution plans.
\section{Implementation}
\label{sec:implementation}
To demonstrate the feasibility of our design, we have created a proof-of-concept implementation, which is available online as Open Source software\cite{Mahiru_software}. This implementation lacks some of the features needed for production use (such as persistent storage, and user interfaces), but does demonstrate the policy mechanism (including replication), data exchange, and distributed workflow execution.
A Mahiru data exchange consists of a registry and a collection of sites. The registry is a database that can be accessed through the simple replication protocol described in section~\ref{sec:registry}. The amount of stored data is tiny and update rates of the database are likely to be very low, as a result of which most requests for updates from the sites will be empty and can be generated quickly. Also, in most cases propagation delays of many hours will be acceptable, so that even for a large exchange with many sites the load on the system is low. A simple database server therefore suffices.
The sites perform all the work of sharing and processing data, and are somewhat more complicated. Figure~\ref{fig:site_architecture} shows the main components of a site in our reference implementation. The software is envisioned to be deployed in a DMZ, exposing an External REST API to the Internet, and an Internal REST API to the Intranet.
The PolicyStore component contains a database for the local rules and a replication server accessible through the External REST API for other sites to replicate the local policies. It also manages a set of replicas of other sites' policy databases. An internal API endpoint accessible through the Internal REST API can be used for setting the local policies. Attached to the PolicyStore is the PolicyEvaluator, which implements the algorithms described in section~\ref{sec:policy_evaluation}, and serves as an authorisation server to the AssetStore, WorkflowOrchestrator, and StepRunner.
The AssetStore stores data and compute assets as well as their metadata. It exposes an API endpoint for managing the assets on the Internal REST API, and an API endpoint for retrieving the assets on the External REST API. It uses the PolicyEvaluator to authorise any external requests. Both data and compute assets are Docker container images.
The WorkflowOrchestrator can be accessed through the Internal REST API, taking workflow execution requests from a user application running on the Intranet. For each request, it creates an execution plan as described in section~\ref{sec:policy_evaluation}, calling on the PolicyEvaluator to evaluate the combined policies. Once a plan has been made, it sends step execution requests to the required sites' External REST APIs to orchestrate the execution.
These step execution requests are routed to the StepRunner, which first uses the PolicyEvaluator to verify their legality, and then uses the DomainAdministrator to locally orchestrate the containers needed to execute the step. The DomainAdministrator first creates a local network containing a container for each input asset's image as well as a container for each output, and then runs the compute asset container inside the same bridge network. The data asset containers run an HTTP server serving the data, and the compute asset container uses this to retrieve its inputs. Outputs are written to a WebDAV-enabled HTTP server in the output containers, which are then saved to images for use by the next step. The prototype does all this using the local Docker service.
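A stripped-down version of this per-step orchestration, using the Docker SDK for Python, might look as follows; error handling, the WebDAV configuration and remote inputs are omitted, and all names are illustrative:
\begin{verbatim}
import docker  # Docker SDK for Python

def run_step(step_id, input_images, compute_image, output_image):
    client = docker.from_env()
    net = client.networks.create(f"mahiru-step-{step_id}",
                                 driver="bridge")
    inputs = [client.containers.run(img, detach=True, network=net.name)
              for img in input_images]       # HTTP servers for inputs
    output = client.containers.run(output_image, detach=True,
                                   network=net.name)  # output store
    compute = client.containers.run(compute_image, detach=True,
                                    network=net.name)
    compute.wait()                           # block until the step is done
    result = output.commit(repository=f"mahiru-result-{step_id}")
    for c in inputs + [output, compute]:
        c.remove(force=True)
    net.remove()
    return result.id                         # image for the next step
\end{verbatim}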
In many cases, a compute asset may only need a small fraction of the data available in a particular data asset. In this case, downloading the entire image is wasteful, especially if the data set is large. The prototype therefore allows accessing an input data asset remotely. To do this, the StepRunner sends a connection request rather than a download request to the site holding the input asset, and that site's AssetStore has its DomainAdministrator create a container for it. Through their NetworkAdministrator components, the two sites then set up a WireGuard VPN connection between it and the compute asset container on the executing site, after which execution proceeds as described before.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig4_site_architecture.png}
\caption{Mahiru site software architecture. The PolicyStore, PolicyEvaluator and AssetStore implement data exchange, the WorkflowOrchestrator orchestrates distributed workflow execution, and the StepRunner, DomainAdministrator and NetworkAdministrator execute workflow steps. See Section~\ref{sec:implementation} for details.}
\label{fig:site_architecture}
\end{figure}
A production version of the system will need to service many more requests, and therefore a larger number of containers are needed. Kubernetes and Docker Swarm are commonly used solutions for managing large numbers of containers across many physical machines. They employ overlay network technologies to connect containers together and control their connectivity via network policies. This allows connecting containers belonging to the same step execution request while isolating them from containers belonging to other requests \cite{LCN2020}. However, like Kubernetes and Docker Swarm themselves, these overlay networks are centrally controlled and not designed to operate across independently administered domains. As such, they cannot be used to provide the cross-domain connections to remote data assets. Instead, WireGuard VPN tunnels can be used as in the current prototype, at the cost of having to manage the two solutions to make them work together.
As an alternative approach, we have investigated the use of P4 programmable switches to facilitate both the local and the remote connections\cite{CITS2022}. This could potentially facilitate both within-domain and cross-domain connectivity with a single solution, and it enables the use of programmable network hardware like SmartNICs or DPUs to offload packet processing and improve performance. Furthermore, P4 programs can be used to audit and constrain the network activity of the containers to provide isolation and improve system trust.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig5_network_architecture.png}
\caption{Proposed design for a scalable container execution backend. Containers are run on multiple servers equipped with SmartNICs, which are programmed to connect the containers to each other within and between servers, and to containers running at other sites via a separate encryption device and router. Mahiru's Domain Administrator controls the containers via Kubernetes, and its Network Administrator controls the networking hardware. A Switch Controller component on each server programs the local SmartNIC on behalf of the Network Administrator.}
\label{fig:network_architecture}
\end{figure}
Figure~\ref{fig:network_architecture} shows how a more scalable implementation orchestrates containers and network connections. The DomainAdministrator starts containers by contacting Kubernetes, which in turn starts the required containers on one or more of the servers it controls, and returns their locations. The DomainAdministrator then sends this information to the NetworkAdministrator, which sets up connections between the containers. If two containers are on the same server, then the SmartNIC in that server can be used to connect them directly. For containers on different servers, the corresponding SmartNICs need to collaborate to make the connection. If a container needs to connect to a remote container, its local SmartNIC is programmed to set up a connection through the Encryption service, the Router and the Internet to the other domain. In this latter case information is needed on how to connect to the other domain, which is obtained by Mahiru through its site-to-site API connection and passed to the DomainAdministrator as well. For details we refer to Shakeri et al.\cite{Shakeri2022}. To support auditing, the NetworkAdministrator would program the SmartNICs to inspect traffic and log activity, and a new component would be added to retrieve these logs, analyse them and provide the results to the system administrator \cite{CITS2022}.
\section{Discussion and future work}
\label{sec:discussion}
In this paper, we have introduced an architecture for a federated data exchange and a policy mechanism supporting it. The architecture is based on standard technologies, including REST-based web services and public-key cryptography. It also uses containers and WireGuard, which is gaining acceptance as the new standard for secure network tunnels. A proof-of-concept implementation demonstrates the feasibility of this design, although scalability remains to be fully demonstrated.
\subsection{Policies}
Regarding basic access to assets, the permission system resembles a simple discretionary access control system as found in file systems and file servers, with groups of objects and groups of subjects that can access them. Propagation of access and usage rights to the members of a collection enables delegation of control, which could also be achieved with e.g. POSIX file permissions. However, an extension is needed, and provided, to authorise data processing, at a minimal increase in complexity. All policy rules are permissions, which simplifies the system and reduces the risk of unintended consequences when policies are combined across organisational boundaries, but it does mean that prohibitions and duties cannot be expressed.
Highly complex legislation, standards and contracts often govern data exchange and must be dealt with. More complex access control systems like XACML\cite{XACML}, Organization Based Access Control\cite{OrBAC} or eFlint \cite{eFlint} allow much more complex policies, at the cost of increased implementation complexity. They also make the policies more difficult to understand, and unintended consequences of combining policies made by different administrators harder to avoid. Some work has been done on combining XACML policies, in a simplified case in the Web Services Policy Language (WSPL)\cite{Anderson2004} and also for more complex scenarios\cite{Mazzoleni2008}. One interesting possibility would be to use one of the more complex languages to express a very detailed policy, and then to derive a Mahiru policy from this. We are currently exploring this possibility with the developers of eFlint.
One other interesting difference between Mahiru's policy mechanism and XACML is the use of an eventually-consistent replication mechanism to exchange policies between organisations. XACML assumes that policies are evaluated by a server named a Policy Decision Point (PDP) and enforced by a Policy Enforcement Point (PEP). Mahiru sites internally use a similar design, with the PolicyEvaluator as the PDP and the AssetStore and StepRunner as PEPs. Doing this across sites to facilitate delegation of authority and transfer of assets to other sites would result in the load on the PDP increasing with the number of workflows being executed throughout the system (workflow planning in particular requires many policy decisions as alternative possibilities are evaluated). Having effectively a cache at each site means that each request to the replication server covers many workflows.
\subsection{Workflows}
The workflow format and execution engine implemented in the prototype are rather simple, and serve mainly to demonstrate the federated execution mechanism. For a production-ready system, several extensions are needed, and more can be made to widen the applicability of the system. First, workflow signing as described above is not yet fully implemented: the certificates are in place, but the signatures are not yet created. We plan to fix this in the near future. Error handling and progress monitoring, while not as important for a prototype, are certainly needed for a production system, as is accounting, which will also entail a small extension to the policy mechanism to enable owners of compute resources to give permission for parties to run workflows there. For increased security, another rule type can be added that lets owners of compute resources control which software assets they are willing to run.
Second, workflow inputs need to be added. Currently, workflows process only assets, which are static data sets managed by a system administrator. However, practical workflow runs often have some data that is specific to that particular workflow run and created by the user, for example configuration data for an execution step or a processing script in a compute-to-data scenario. Entering these as assets is impractical. Instead, it needs to be possible for the user to submit these ephemeral inputs with the workflow job itself, with the user giving implicit permission to process them according to the submitted workflow.
Third, assets may change over time as new data are collected or software is updated. Giving assets a version number in addition to their ID makes it possible to track these changes for the purpose of provenance. Policies might then be extended with version constraints, although this would increase complexity.
Secure multiparty computation can potentially be supported by Mahiru by extending the policy mechanism, the asset types and the workflow format to support shares of assets. Such a design must ensure that enough shares to reconstruct the asset can only end up on sites that can access that asset, ideally also between multiple workflow executions.
Adding loops to the workflow format would reduce the size of repetitive workflows such as those used in machine learning tasks. These would work much like loops in a programming language, and allow the loop body to be planned once, rather than repeatedly. If containers for workflow steps within the loop body are reused between steps, then the overhead of creating and destroying them is avoided, and by putting FIFO-queues in the data asset containers for the intermediate results in between steps and using remote access, a distributed application emerges that is still governed by the Mahiru policies. However, this does transfer some of the responsibility of executing the workflow from the Mahiru middleware to the assets, which requires increased trust and/or better monitoring of execution.
Loops could also facilitate stream processing, for example in an Internet-of-Things context, with each loop iteration processing a new item from an input stream and producing an item for an output stream. Beyond this, even distributed coupled simulations seem possible in which different parties collaboratively simulate a system (e.g. an energy grid) without sharing their simulation codes (which likely contain proprietary information about their network), as governed by policies enforced by their own system.
\section{Conclusion}
\label{sec:conclusion}
Data sharing is a complex problem. Organisations have many kinds of data, for which different kinds of sharing are appropriate. Maintaining multiple systems to support this is expensive and potentially failure-prone. In this paper, we introduced Mahiru, a design for a flexible data sharing architecture that can support many different data sharing methods. Mahiru is based on a federated architecture, in which workflows can be executed in a distributed fashion while adhering to policies set by owners of the data and software used. The policy mechanism is simple yet powerful, which makes it easier for users to set policies and understand their implications. Use of a replication algorithm in key places ensures scalability of the system. As a result, Mahiru can support sharing of public as well as sensitive data, a variety of processing models including compute-to-data, software-as-a-service, trusted-third-party, infrastructure-as-a-service, and federated learning, all based on a single design with a single policy mechanism.
\section*{Acknowledgements}
This research was funded by the Netherlands eScience Center and the Netherlands Organisation for Scientific Research under project 2717G18 SECCONNETSmart.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:intro}
The resolution of an optical imaging system can be limited by several factors. Whereas imperfections of the optical setup can be reduced, diffraction always defines a fundamental limit in this regard.
In ground--based telescope imaging with a small mirror, diffraction is often the dominating effect. However, the influence of the atmospheric
turbulence grows much faster with increasing mirror diameter. Already for the next generation of telescopes, the so-called extremely large telescopes, atmospheric turbulence is the major limiting factor, keeping the angular resolution far from the diffraction limit.
It is a great challenge for science and technology to find ways to achieve diffraction--limited imaging for the future ground--based telescopes.
During the last decades adaptive optics (AO) technology has developed into a powerful remedy for this problem. Adaptive optics refers to real-time compensation for the distortions in the wavefronts of incoming light due to the atmospheric turbulence. Although the technology involves a number of engineering challenges, the benefits have proven to be fundamental. Adaptive optics correction has been implemented in many major telescope projects, e.g., the Very Large Telescope and the Gran Telescopio Canarias. Moreover, AO is planned as an essential part of all extremely large telescopes.
The work for this paper was largely carried out within a project established towards developing mathematical algorithms for the European Extremely Large Telescope (E-ELT), the future telescope of the European Southern Observatory (ESO).
The mathematical challenges and future prospects for the inverse problems field were comprehensively reviewed by Ellerbroek and Vogel in \cite{ElVo09}. With the arrival of next generation implementations of adaptive optics, the severely ill-posed atmospheric tomography problem becomes the crux of mathematical AO research. In this paper we discuss atmospheric tomography in the context of multi--conjugate adaptive optics (MCAO).
In classical AO, a single guide star, i.e., a single light source, is observed. The aberrations in the incoming light are measured, which enables the AO system to correct the cumulative effect of the turbulence towards directions close to the guide star. The correction is performed with a deformable mirror (DM).
Multi--conjugate adaptive optics extends this idea by using several guide stars and multiple deformable mirrors. First, the data obtained from the guide stars is used to reconstruct the turbulence profile in the atmosphere (see Fig.~\ref{PF_MCAO_fig}). This problem is called atmospheric tomography. Second, having multiple deformable mirrors and a three-dimensional reconstruction of the turbulence enables the astronomer to correct for a much larger field of view than in classical implementations.
The physics of turbulence is an extensive field of study and much is understood about how the turbulence in the atmosphere is formed. Statistical models for turbulence are frequently utilized in the adaptive optics literature by postulating the tomography step as a Bayesian inference problem. In addition, this makes it possible to take into account the statistical nature of the measurement noise. The maximum a posteriori (MAP) estimate is the standard point estimate used to describe the resulting Gaussian posteriori distribution.
In the MCAO related literature, both iterative and non--iterative solution methods have been proposed for the MAP estimator. With the launch of the next generation of extremely large telescopes, the dimension of the problem increases rapidly. Even with heavy parallelization,
the non--iterative methods have a high computational cost, and research in recent years has shifted towards iterative methods.
The iterative solvers of the MAP estimate are typically based on
conjugate gradient (CG) methods, see \cite{EGV03} and references therein.
Effort has been put into developing an efficient preconditioner for the problem.
The multigrid preconditioners are investigated in \cite{GEV03,GEV07,GiEl08}, whereas in \cite{VoYa06b, YVE06}, Fourier domain preconditioners have been proposed.
Especially for iterative methods it is of value to be able to represent the operators in a sparse form. In this regard the Fourier basis is very useful and Fourier-transform based reconstructors have been proposed in \cite{TV01, TLL02, Gavel04}.
In other typical bases, the forward operator and the inverse covariance of the noise are fast to apply. However, the inverse covariance of the prior is often a full matrix
and thus a sparse approximation is required.
In the approach introduced by Ellerbroek \cite{Ellerbroek02} the turbulence power law is modified in order to achieve a sparse approximation by biharmonics.
Later, a very promising CG based method called the Fractal Iterative Method (FrIM)
has been developed by Tallon and others in \cite{Tallon_07, TT10, Tallon_et_al_10, Brunner_et_al_12}.
There, the inverse covariance is approximated by a factorization that can be applied in ${\mathcal O}(n)$ operations.
We also point out that outside the Bayesian framework the atmospheric tomography problem
has been approached by an algorithm based on the Kaczmarz iteration. The method was introduced by Ramlau and Rosensteiner in \cite{RaRo12}. The authors obtain a very efficient matrix--free
solver by splitting the wavefront reconstruction and the tomography step into two separate problems.
The reconstruction of the atmosphere from incoming wavefronts by the Kaczmarz iteration delivers very promising results in good imaging conditions. The incoming wavefronts are reconstructed from the measurement with an algorithm called the CuReD~\cite{Ros12}.
Several solvers for the wavefront reconstruction exist (see, e.g.,~\cite{Bardsley_et_al_11,PGB02}).
In this paper we suggest a method utilizing compactly supported orthonormal wavelets to represent the atmosphere.
Our method is a CG based iterative method that solves the MAP estimate for the atmospheric tomography problem. We demonstrate successful performance in low flux imaging conditions, i.e., when only a low number of photons can be measured, and with respect to some practical phenomena that are well--known to limit the reconstruction quality. We have implemented the method with the Daubechies wavelets \cite{Daubechies_88, Cohen_etal_92} in order to obtain good reconstruction of local details. In addition, the Daubechies wavelets have a useful localization in the frequency domain. This enables us to introduce our key contribution: approximating the inverse covariance with a completely diagonal representation. It is shown that such a representation produces an equivalent regularization term
in the MAP estimation problem as indicated by the theory. We point out that this approximation is flexible with respect to choosing a different model for the turbulence power law. In terms of temporal control we rely on an established method called the pseudo--open loop control (POLC) which has been demonstrated to be very robust \cite{Piatrou05}.
In the numerical tests we introduce two variants of the algorithm. In the first setting, we reconstruct more atmospheric layers than there are deformable mirrors using the CG algorithm, and optimize the DM shapes accordingly.
In our opinion this demonstrates well the best qualitative performance of the wavelet based method. Moreover, we investigate the stability of the regularization procedure in this setting. The second variant of the method is an accelerated algorithm developed towards achieving the real--time requirements. Here, layers are reconstructed at the altitudes of the deformable mirrors and DM shapes are chosen as the reconstructed layers.
In the accelerated method
we utilize a modified Jacobi preconditioner, for which we demonstrate fast convergence.
Numerical simulations are carried out on the OCTOPUS, the official end-to-end simulation tool of ESO. We illustrate the performance of our method in the low flux imaging conditions and compare these results against the matrix--vector multiply (MVM) algorithm, which is the benchmark reconstructor of ESO.
Wavelet methods in adaptive optics have been previously studied in \cite{HaAgBr08, HaAgCoBr10}, however not in the context of MCAO or in the Bayesian scheme.
In the field of inverse problems wavelets are applied widely (e.g., \cite{SiltanenMueller12, klann_ramlau_reichel}). For an extensive introduction to wavelet basis we refer to \cite{Daub92}.
Notice that atmospheric tomography is a severely ill-posed problem
and is very closely connected to limited angle tomography \cite{Davison}.
Thus, from the general perspective of inverse problems, the theoretical limitations of MCAO are interesting and have been considered in \cite{TV01,TLL02}.
Inverse problems related to waves travelling in random media have been considered, e.g., in the works of Papanicolaou, Bal and Borcea \cite{Borcea_etal, BalPinaud05, Fouque_etal}.
This paper is structured as follows. In Section \ref{sec:math} we discuss the mathematical model for the propagation of light through the atmosphere and how the measurements are obtained. We close the section by explaining the Bayesian paradigm and how the MAP estimate
for the atmospheric tomography is solved. In Section \ref{sec:prior} we introduce the diagonal approximation for the inverse covariance of the turbulence statistics. Section \ref{sec:othereffects} features parts of our method that are essential to MCAO solver, but which have been studied before. Here, we discuss the fitting step and the control algorithm. The concepts of spot elongation and tip--tilt uncertainty are also introduced. These practical phenomena have an essential impact on the noise model. Finally, in Section \ref{sec:numerics} we demonstrate the numerical performance of our method.
\section{Atmospheric tomography}
\label{sec:math}
\subsection{Problem setting}
\label{subsec:problem}
The wind in the atmosphere causes an irregular mixing of warm and cold air. This effect is called atmospheric turbulence. The fluctuations of the temperature are essentially proportional to the refractive index fluctuations \cite{Ro99} and hence the turbulence affects the propagation of light. With the geometric optics approximation and under appropriate assumptions on the atmosphere,
the phase of light $\phi$ at the aperture is distorted according to
\begin{equation}
\label{eq:intg}
\phi({\bf r}, \boldsymbol\theta) \approx \int_0^H \rho ({\bf r} + \boldsymbol\theta \cdot \xi) d \xi,
\end{equation}
which holds to a good approximation for directions $\boldsymbol\theta = (\theta_1, \theta_2, 1)$ close to the zenith. Above, $\rho$ describes the fluctuations of the refractive index,
${\bf r} = (x,y,0)$ is the location at the aperture, and $H$ is the height of the atmosphere. The approximation \eqref{eq:intg} is derived in \cite{Tatarski1961, Rytov_et_al}.
The challenge in atmospheric tomography is to obtain a good estimate of $\rho$ based on indirect measurements of $\phi(\cdot, \boldsymbol\theta)$ towards directions $\boldsymbol\theta$ of the guide stars.
The strength of the turbulence at a given altitude varies heavily. However, at typical telescope sites most of the turbulence is concentrated at certain altitudes. This observation has given rise to a layered atmospheric model, where the refractive index is approximated on a finite number of two-dimensional layers at fixed altitudes. In the simplest example, ground layer adaptive optics, only one layer is considered, since the majority of the turbulence strength is located close to the aperture of the telescope (the ground layer). Due to the availability of several deformable mirrors, the implementation of MCAO benefits from a more accurate description of the atmosphere.
Let us consider how the equation \eqref{eq:intg} reduces for a layered atmosphere model.
We denote each modelled layer, located at altitude $\hl$, by $\layl$, $\l = 1, \ldots, \nlay$,
and by $\boldsymbol\phi = (\phi_1, \ldots, \phi_\nlay)$ a vector representing the atmosphere.
Assuming geometric propagation, the light arriving from infinity produces incoming wavefronts according to
\begin{equation}
\label{eq:ngsproj}
\phi({\bf r}, {\boldsymbol\theta}) = P^\textup{NGS}_{\boldsymbol\theta} \boldsymbol\phi = \sum_{\l = 1}^\nlay \layl({\bf r} + {\boldsymbol\theta} \hl),
\end{equation}
where ${\bf r}$ denotes a point inside the aperture and the vector ${\boldsymbol\theta}$
describes the direction of the guide star (see Fig.~\ref{PF_MCAO_fig}). Here, the projection $P^\textup{NGS}_{\boldsymbol\theta}$
maps the atmosphere to the incoming wavefront from direction ${\boldsymbol\theta}$.
More details on the wave propagation through the layered atmosphere model can be found in~\cite{RoWe96}.
However, in practice, there are not enough bright natural guide stars to cover most areas of interest. To overcome this problem, astronomers have developed a technology utilizing lasers, which can generate artificial stars at finite altitude. A laser guide star (LGS) is produced by shooting a powerful laser into the atmosphere, where it scatters strongly at certain higher altitudes \cite{Ro99}. Due to the finite altitude, the light arriving at the telescope passes through a cone--like volume in the atmosphere (see Fig.~\ref{PF_FW_CONE_fig}). The corresponding distortion then satisfies
\begin{equation*}
\phi({\bf r},{\boldsymbol\theta}) = P^\textup{LGS}_{\boldsymbol\theta} \boldsymbol\phi = \sum_{\l = 1}^\nlay \layl\left(\left(1-\frac{\hl}{H}\right){\bf r} + {\boldsymbol\theta} \hl\right),
\end{equation*}
where $H$ denotes the altitude where the laser scatters. Again, $P^\textup{LGS}_{\boldsymbol\theta}$ stands for the projection of the atmosphere to the incoming wavefront with respect to the cone geometry. Whereas an LGS provides a very bright source in directions where no NGS is available, it also introduces some practical limitations. These phenomena, called the tip--tilt effect and spot elongation, are described in Section \ref{sec:othereffects}.
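To make the two projection geometries concrete, the following minimal Python sketch (an illustration under our own conventions, not code from this paper) evaluates \eqref{eq:ngsproj} and the cone projection for layers given as callables; in a real implementation the layers are discretized and evaluated by interpolation.
\begin{verbatim}
import numpy as np

def project_layer(layer, x, y, theta, h, H=np.inf):
    # NGS geometry (H = inf):      layer(r + theta*h)
    # LGS cone geometry (H < inf): layer((1 - h/H)*r + theta*h)
    scale = 1.0 - h / H if np.isfinite(H) else 1.0
    return layer(scale * x + theta[0] * h, scale * y + theta[1] * h)

def incoming_wavefront(layers, altitudes, x, y, theta, H=np.inf):
    # the wavefront is the sum of all layer contributions
    return sum(project_layer(lay, x, y, theta, h, H)
               for lay, h in zip(layers, altitudes))
\end{verbatim}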
The incoming wavefront can be measured indirectly. A common measurement device
in adaptive optics is the Shack--Hartmann wavefront sensor, which is described in detail in \cite{ElVo09}. Essentially, a Shack--Hartmann sensor measures a quantity proportional to the average gradient of the wavefront on a rectangular grid formed by small lenslets, i.e.,
\begin{equation}
\label{eq:SH_eq}
{s}_{ij} = C\int_{\Omega_{ij}} \nabla \phi({\bf r}, \boldsymbol \theta) \, d{\bf r},
\end{equation}
where ${s}_{ij} = (s^x_{ij}, s^y_{ij}) \in {\Bbb {R}}^2$ is the measurement and $C$ is a constant. Above, $\Omega_{ij}$ denotes the lenslet domain, also referred to as a subaperture.
Other wavefront sensor modalities exist (e.g., curvature sensor \cite{Ro99}) but are not considered here.
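As an illustration of \eqref{eq:SH_eq}, the sketch below computes average gradients per subaperture from wavefront samples at the subaperture corners; the simple corner-based discretization is our own choice for this example and only approximates the Fried geometry used later.
\begin{verbatim}
import numpy as np

def shack_hartmann_slopes(phi, C=1.0):
    # phi: (n+1, n+1) wavefront values at subaperture corners; the
    # mean gradient over each subaperture is approximated by averaging
    # finite differences along its two opposite edges.
    sx = 0.5 * ((phi[:-1, 1:] - phi[:-1, :-1]) +
                (phi[1:, 1:] - phi[1:, :-1]))
    sy = 0.5 * ((phi[1:, :-1] - phi[:-1, :-1]) +
                (phi[1:, 1:] - phi[:-1, 1:]))
    return C * sx, C * sy      # each of shape (n, n)
\end{verbatim}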
Let us next describe an equation connecting the measurements with the atmosphere for a full MCAO system. The MCAO system we consider will utilize both laser and natural guide stars. Moreover, each guide star here has an individual direction $\boldsymbol \theta$ which we often use to index both the guide star and their corresponding wavefront sensor (WFS). Next, we use $\Gamma$ to denote the measurement operator
\begin{equation}
\label{eq:SH_op}
{\bf s}_{\boldsymbol\theta} = \Gamma_{\boldsymbol\theta} \phi(\cdot, \boldsymbol\theta)
\end{equation}
where ${\bf s}_{\boldsymbol\theta}$ is a vector containing all grid values ${s}_{ij}$ from formula \eqref{eq:SH_eq}. The Shack--Hartmann sensors modelled in equation \eqref{eq:SH_op} can have different resolution and hence $\Gamma_{\boldsymbol \theta}$ is direction-dependent.
In what follows we consider a system that observes $\ngs$ guide stars. Their directions are denoted by $\boldsymbol \theta_m$, where $1\leq m \leq \ngs$. We assume that, out of this number, the first $\nlgs$ are laser guide stars. For the rest of the paper we simplify the direction--dependent notations by replacing ${\boldsymbol \theta}_m$ by $m$ whenever no confusion arises.
Now we can write the subproblems for different guide star directions by
\begin{equation}
\label{eq:atm_tomo_comp}
{\bf s}_{m} = \Gamma_{m} P^\textup{LGS}_{m}\boldsymbol\phi\quad {\rm and} \quad
{\bf s}_{m'} = \Gamma_{m'} P^\textup{NGS}_{m'}\boldsymbol\phi
\end{equation}
for $1\leq m \leq \nlgs$ and $\nlgs < m' \leq \ngs$. The full system
is then described by
\begin{equation}
\label{eq:atm_tomo}
{\bf s} = ({\bf s}_{m})_{m=1}^{\ngs} = \mathbf A\boldsymbol\phi,
\end{equation}
where $\mathbf A$ is the concatenation of operators $\Gamma_{m} P^\textup{LGS}_{m}$ and $\Gamma_{m'} P^\textup{NGS}_{m'}$.
Estimating $\boldsymbol\phi$ from a given ${\bf s}$ is called the {atmospheric tomography problem}.
\begin{figure}
\centering
\includegraphics[width=0.62\textwidth]{mi_mcao_thetas_beamer}
\caption{In atmospheric tomography the turbulence layers are reconstructed from the wavefront sensor data.}
\label{PF_MCAO_fig}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.62\textwidth]{mi_cone_effect_thetas_beamer}
\caption{
The cone effect.
Laser guide star light source is fixed above the turbulence layers at some finite altitude.
Light traveling from the LGS to the telescope pupil passes through smaller areas at higher turbulence layers.
}
\label{PF_FW_CONE_fig}
\end{figure}
\subsection{Bayesian inference}
\label{subsec:bayes}
Bayesian inference is a standard approach to solving problem~\eqref{eq:atm_tomo}. This appears natural since statistical information is available about the behavior of the unknown wavefronts and the measurement noise.
The Bayesian paradigm considers problem~\eqref{eq:atm_tomo} in a random setting
$S = \mathbf A \Phi + \mathcal{E},$
where $S$, $\Phi$ and $\mathcal{E}$ denote random variables describing the measurement, the incoming turbulence wavefront and the additive noise, respectively.
Given a sample of $S$, i.e., the measurement, the task is to deduce information about the unknown~$\Phi$.
Both $\Phi$ and $\mathcal{E}$ are typically modelled as Gaussian random variables. We discuss the distribution of the wavefronts and their physical interpretation below in Section \ref{sec:prior} in more detail. The noise in the Shack--Hartmann measurements is produced by several components \cite{Ro99}.
However, for the LGS measurements, the effect of spot elongation has a major influence on the noise distribution. We return to the noise covariance and the spot elongation in Section \ref{sec:othereffects}.
Below, we denote the covariance operators of $\Phi$ and $\mathcal{E}$ by $\Cphase$ and $C_{\noiseRV}$, respectively. Furthermore, it is assumed that the separate layers are zero centered and uncorrelated. This implies that $\Cphase$ has
a block-diagonal structure
\begin{equation*}
\Cphase = {\rm diag}\left( C_1, \ldots , C_{\nlay}\right),
\end{equation*}
where $C_{\l}$ denotes the covariance of the layer $\l$.
In the setup given above, the maximum a posteriori estimate can be obtained by solving
\begin{equation}
\label{REG_REG_map}
\recfull = \operatorname*{argmin}_{\layfull} \left(\|\Cphase^{-1/2} \layfull \|^2_{2} + \| C_{\noiseRV}^{-1/2} (\sfull -\mathbf A \layfull)\|^2_{2}\right).
\end{equation}
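Setting the gradient of the quadratic functional in \eqref{REG_REG_map} to zero shows that the MAP estimate equivalently solves the normal equations
\begin{equation*}
\left(\mathbf A^* C_{\noiseRV}^{-1} \mathbf A + \Cphase^{-1}\right) \recfull = \mathbf A^* C_{\noiseRV}^{-1} \sfull,
\end{equation*}
whose discretized counterpart is the linear system solved iteratively in Section \ref{sec:numerics}.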
For a general introduction to the Bayesian inverse problems, see \cite{somersalo05}.
\section{Turbulence models and the Bernstein--Jackson equivalence}
\label{sec:prior}
Let us consider for the moment the theory of Gaussian random
variables in real separable Hilbert spaces. Let $\phi$ be a measurable map from the probability
space $\Omega$ to a Hilbert space $H$. Then $\phi$ is Gaussian if and only if
for all $\rho_1, \ldots, \rho_m \in H$ the mapping $\Omega \ni \omega \mapsto (\langle \phi, \rho_j\rangle)_{j=1}^m$ is a Gaussian random variable in ${\Bbb {R}}^m$. The distribution of
$\phi$ is determined by the expectation $\mathbb{E} \phi$ and the covariance operator
$C_\phi : H \to H$ defined by
\begin{equation*}
\langle \psi_1, C_\phi \psi_2 \rangle =
\mathbb{E} \left( \langle \phi- \mathbb{E} \phi, \psi_1 \rangle \langle \phi- \mathbb{E} \phi, \psi_2 \rangle \right).
\end{equation*}
It is well-known that for any $\psi\in H$ and a linear positive self-adjoint trace class operator $C$ in $H$ with ${\mathcal N}(C) = \{0\}$ there exists a Gaussian random variable $\phi$ in $H$ with mean $\psi$ and covariance $C$ \cite{DPZ}.
Below, we are concerned with zero-centered random variables $\phi$ that have realizations in some Sobolev space $H^s({\Bbb {R}}^2)$ with $s>0$ and a covariance operator of the form
\begin{equation}
\label{eq:def_cphi}
C_\phi = {\mathcal F}^* M {\mathcal F}.
\end{equation}
Above, ${\mathcal F}$ is the Fourier transform on ${\Bbb {R}}^2$
and $M$ is a multiplication operator $M f(\kappa) = m(\kappa) f(\kappa)$
where $m$ is a positive bounded function.
With an appropriate decay of $m$ at infinity, the operator $C_\phi$ is trace class and hence $\phi$ is well-defined.
Further, the Schwartz kernel $k_\phi(x,y)$ of the operator $C_\phi$ satisfies
\begin{equation*}
k_\phi(x,y) =
\mathbb{E} \left( \phi(x)- \mathbb{E} \phi(x)\right)\left(\phi(y)- \mathbb{E} \phi(y)\right)
\end{equation*}
in the sense of generalized functions.
We call $k_\phi$ the covariance function of the random process $\phi$.
In the literature of adaptive optics, a turbulence layer $\phi$ is assumed to be isotropic and stationary, i.e., $k_\phi$ depends only on the distance of $x$ and $y$. In particular, $k_\phi$ is
completely characterized by
\begin{equation}
\label{eq:stationary}
\tilde k_\phi(z) = k_\phi(x,x+z),
\end{equation}
where $x,z\in{\Bbb {R}}^2$. Now, it can be shown that if the statistics of $\phi$ satisfy equations \eqref{eq:def_cphi} and \eqref{eq:stationary}, then
\begin{equation*}
m(\kappa) = ({\mathcal F} \tilde k_\phi)(\kappa)
\end{equation*}
in the sense of tempered distributions.
The multiplier function $m$ is often referred to as the power spectrum.
The classical model based on the Kolmogorov--Obukhov law of turbulence states that the power spectrum $m$ follows a power law
\begin{equation}
\label{eq:pwrspec}
m(\kappa) = C|\kappa|^{-11/3}
\end{equation}
inside the so-called inertial range $K_{in} \leq |\kappa|\leq K_{out}$ with some constant $C$.
It is not straightforward to extend the power law \eqref{eq:pwrspec} outside the inertial range due to the strong singularity at zero.
We make the common choice of the von K\'arm\'an model \cite{Ro99, ElVo09} and modify \eqref{eq:pwrspec} by assuming
\begin{equation*}
m(\kappa) = c_\rho(h) \left(\frac 1{K_{out}^2}+|\kappa|^2\right)^{-11/6},
\end{equation*}
where $K_{out}$ is the outer scale of the turbulence and $c_\rho(h)$ describes the optical turbulence strength depending on the altitude. This choice of the power law coincides asymptotically with \eqref{eq:pwrspec} in the high--frequency regime; however, the singularity at zero is removed.
In conclusion, we notice that an equivalence
\begin{equation}
\label{eq:cov_approx}
\|C_{\phi}^{-1/2} f \|_{L^2}^2
= \|(K_{out}^{-2}+|\kappa|^2)^{\frac{11}{12}} {\mathcal F} f\|_{L^2}^2
\simeq K_{out}^{-\frac{11}{3}} \| f\|_{L^2}^2 + \|(-\Delta)^{\frac{11}{12}} f\|_{L^2}^2
\end{equation}
holds in the Cameron--Martin space of $\phi$, i.e., for any $f\in H^{11/6}({\Bbb {R}}^2)$.
Here and in what follows, we write $p\simeq q$ if the two pseudo--norms $p$ and $q$ are equivalent.
Assume that the wavelets studied here are $r$--regular, i.e., have $r$ vanishing moments and are $r$ times continuously differentiable. For sufficiently large $r$
the last term in \eqref{eq:cov_approx} is equivalent with the expression
\begin{equation}
\label{REG_WAV_norm_equiv}
\| (-\Delta)^{\frac{11}{12}} f \|^2_{L^2} \simeq \sum_{\lambda=1}^\infty 2^{2 \cdot \frac{11}{6} j} | \langle f, \wav_\lambda \rangle |^2,
\end{equation}
where $j$ is the wavelet scale of the wavelet $\wav_\lambda$ with global index $\lambda$.
The equivalence above is known as the Bernstein--Jackson inequalities \cite{meyer92}.
In the discretized problem, the function $f$ in \eqref{eq:cov_approx} is represented by a finite number of wavelet scales. In that case, an equivalent representation of the regularizing term in \eqref{eq:cov_approx} can be produced by a diagonal matrix $D_\ell : {\Bbb {R}}^{ n_\ell } \to {\Bbb {R}}^{ n_\ell }$ that satisfies
\begin{equation*}
D_\ell = {\rm diag}\left(\frac{1}{c_\rho(h)}\left(K_{out}^{-\frac{11}{3}}+2^{\frac{11}{3}j}\right)\right)_{\lambda=1}^{n_\ell}.
\end{equation*}
Above, $n_{\ell}$ denotes the total number of wavelets for layer $\ell$.
Moreover, we denote $\Dfull = {\rm diag} (D_1, \ldots , D_\nlay)$. Finally, by approximating the
prior covariance of the discretized problem by the ideal model we get
\begin{equation}
\label{REG_WAV_penalty_discr}
\|C_{\Phi}^{-1/2} \layfull \|^2_{(\Ltwo)^\nlay} =
\sum_{\ell=1}^{\nlay} \|C_{\ell}^{-1/2} \phi_{\ell} \|^2_{L^2}
\simeq \sum_{\ell=1}^{\nlay} (D_\ell \mathbf c_\ell, \mathbf c_\ell)_2
= (\Dfull \cfull, \cfull)_{2}
\end{equation}
for $\layfull = (\layl)_{\l = 1}^{\nlay}$ and the wavelet decomposition $\layl = \sum_{\lambda=1}^{n_\ell} c_{\ell,\lambda} \psi_{\ell,\lambda}$ with respect to the wavelet basis $\{ \psi_{\ell,\lambda} : \lambda = 1,\ldots, n_\ell\}$ of layer $\ell$.
Above, we denote by $\cfull_\ell = (c_{\ell,\lambda})_{\lambda=1}^{n_\ell}$ the vector of wavelet coefficients associated to layer $\ell$, and by $\cfull$ the concatenation of vectors $\cfull_\ell$.
We point out that in practice the term $K_{out}^{-11/3}$ becomes negligible. Furthermore, the approximation error introduced in \eqref{REG_WAV_penalty_discr} is beyond the scope of this paper. In numerical simulations presented in Section \ref{sec:numerics} we study how different weighting of the regularizing term $(\Dfull \cfull, \cfull)_{2}$ affects the reconstructions obtained by the method.
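For illustration, the diagonal penalty of a single layer can be assembled as in the following sketch, where the scale map $\lambda \mapsto j(\lambda)$ and the strength $c_\rho(h)$ are assumed to be given:
\begin{verbatim}
import numpy as np

def penalty_diagonal(scales_j, c_rho, K_out):
    # entries of D_l: (K_out^(-11/3) + 2^((11/3) j)) / c_rho(h),
    # one entry per wavelet index lambda with scale j(lambda)
    j = np.asarray(scales_j, dtype=float)
    return (K_out ** (-11.0 / 3.0) + 2.0 ** (11.0 / 3.0 * j)) / c_rho
\end{verbatim}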
\section{Other features of MCAO}
\label{sec:othereffects}
\subsection{Fitting step}
In an MCAO system the correction for the wavefront distortions is produced by several deformable mirrors (DMs) that are conjugated to different altitudes. Hence, a successful mathematical algorithm for MCAO must also deduce optimal mirror shapes for the DMs based on the reconstruction from equation \eqref{eq:atm_tomo}. This subproblem is called the fitting step. Given a sufficient reconstruction of the atmosphere, the fitting step is a well--posed least squares minimization problem, and thus classical solution methods provide an efficient reconstruction strategy. We point out that in the ideal fitting one aims
to minimize a functional
\begin{equation}
\label{eq:fit_step_ideal}
{\rm argmin}_{{\bf a}} \mathbb{E} \left( \int_{{\rm F}} \int_\Omega \left(H_{\boldsymbol\theta} {\bf a} - P^\textup{NGS}_{\boldsymbol\theta} \boldsymbol \phi\right)^2dx d{\boldsymbol\theta} \right),
\end{equation}
where ${\bf a}$ is the correction profile, $H_{\boldsymbol\theta}$ is the correction towards direction ${\boldsymbol\theta}$ and $P^\textup{NGS}_{\boldsymbol\theta}$ is defined in equation \eqref{eq:ngsproj}. Moreover, $\Omega$ is the aperture domain and $\boldsymbol\theta$ belongs to the field of view $F$.
The problem is typically discretized by choosing a finite set of directions over which the difference in equation \eqref{eq:fit_step_ideal} is averaged. We follow this tradition by
formulating the fitting step as the minimum norm solution to
\begin{equation}
\label{eq:fit_step_min}
{\rm argmin}_{{\bf a}}\norm{{\bf H} {\bf a} - {\bf P} \boldsymbol \phi}_2
\end{equation}
where ${\bf H}$ and ${\bf P}$ are concatenations of operators $H_j$ and $P^\textup{NGS}_j$, respectively, towards a finite set of directions ${\boldsymbol\theta}_j$, $j=1, \ldots ,N$ sampled from the field of view.
For more detailed discussion on the fitting step see \cite{ElVo09}.
\subsection{Closed loop control}
\label{subsec:control}
Although in next-generation telescopes the DMs are adjusted within milliseconds, the delay between the measurement and the applied DM correction induces an error that needs to be considered. Consequently, a robust temporal control is a fundamental part of the system.
In an MCAO system, the wavefront sensors are located behind the deformable mirrors in the optical path of light.
This is contrary to our assumption on the prior model discussed in the previous section, as the WFS measures the residuals of the incoming wavefronts, instead of the incoming wavefronts themselves.
This mode of operation is called closed loop.
In order to model the physics of turbulence in the prior covariance, we follow here a method called the pseudo--open loop control (POLC).
The straightforward idea of POLC is to approximate open loop measurements by combining the mirror shapes with the residual data.
The POLC was introduced to AO in \cite{ElVo03} and further studied in \cite{Gi05_closedstab,Piatrou05}. It has proven to be stable and robust against large levels of system errors \cite{Piatrou05}.
We rely on a modified POLC, where an integrator is used in the control scheme (see e.g., \cite{Piatrou05}).
We assume that our system has a two time--step delay. Let $t \in {\Bbb{N}}, t \ge 2$ denote time--steps. Further, residual Shack--Hartmann data $\sfull$ are measured over the time period $[t-2,t-1)$ for the mirror corrections~$\mathbf a_{t-2}$. We assume that the reconstruction step (computing DM shapes from measurements) consumes one time period, $[t-1,t)$ and the computed mirror shapes $\afull_t$ are applied to the mirrors at time step~$t$.
\begin{algo}{Pseudo--open loop control.}
\label{REG_POLC_algo}
\begin{enumerate}
\item
$\sfull^\textup{ol} = \sfull + \widehat{\mathbf A} \afull_{t-2}$
\item
$\Delta \afull = \Rec \sfull^\textup{ol} - \afull_{t-2}$
\item
$\afull_t = \afull_{t-1} + g \Delta \afull$
\end{enumerate}
\end{algo}
Above,
$\widehat{\mathbf A}$ maps the mirror shapes $\afull$ to the correction in the measurement space, similar to (\ref{eq:atm_tomo}).
Moreover, the reconstruction operator $\Rec$ maps WFS measurements to DM shapes.
The scalar parameter $0 \le g \le 1$ denotes the gain, which controls the mirror update.
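One time--step of Algorithm \ref{REG_POLC_algo} then amounts to the following sketch, where $\widehat{\mathbf A}$ and $\Rec$ are assumed to be available as matrices (or matrix-free operators supporting the same products):
\begin{verbatim}
def polc_step(s_res, a_tm2, a_tm1, A_hat, R, g):
    # s_res: residual WFS data measured against mirror shapes a_{t-2}
    s_ol    = s_res + A_hat @ a_tm2   # 1: pseudo open-loop data
    delta_a = R @ s_ol - a_tm2        # 2: reconstruct and compare
    return a_tm1 + g * delta_a        # 3: integrator with gain g
\end{verbatim}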
We point out that the POLC is not optimal in the sense of cumulative residual variance. More sophisticated Kalman filter based methods can achieve this \cite{Petit}. The drawback however in such methods is the additional computational load that is a limiting factor, especially for MCAO. Numerically efficient solutions towards this end are an interesting topic for future research.
\subsection{Spot elongation and tip--tilt uncertainty}
Measurements observed from an artificially generated laser guide star suffer from two deficiencies: spot elongation and tip--tilt uncertainty. These effects were discussed in detail in \cite{ElVo09} and we only briefly state how we correct for these errors in our algorithm.
A successful mathematical model must take these effects into account, as the performance of the AO system would degrade otherwise.
The spot elongation effect occurs due to the physics of scattering at the sodium layer. In practice, a laser guide star is not an ideal point source but rather an extended three-dimensional source.
This translates to an elongation of the observed spot on the measurement device, which can be described by a correlation of $x$- and $y$-measurements in each individual subaperture of the Shack--Hartmann WFS.
Our method handles spot elongation following the approach taken in \cite{SpotElong}.
Let us give a brief overview of the noise covariance matrix $C_\mathcal{E}$.
For an MCAO system, which relies on a combination of laser guide star and natural guide star wavefront sensors, the full noise covariance matrix is given as a block-diagonal matrix with respect to the sensors,
\begin{equation*}
C_\mathcal{E} = {\rm diag}\left(\widetilde C_{1}, \ldots, \widetilde C_{\nlgs},
\widetilde C_{\nlgs+1}, \ldots, \widetilde C_{\ngs}\right).
\end{equation*}
Here we associate a noise covariance matrix $\widetilde C_{m}$ for each direction of the guide stars $\boldsymbol \theta_m$, $m = 1, \ldots, \ngs$.
Recall that the first $\nlgs$ directions are associated with sensors observing laser guide stars. The remaining natural guide star sensors are not affected by spot elongation.
For those directions we assume that the noise is independently and identically distributed in all subapertures with variance $\sigma^2$.
Also, the spot-elongated measurements are uncorrelated between different subapertures. However,
for any subaperture the noise in the $x$- and $y$-measurements is correlated and hence the covariance matrix is block diagonal, composed of 2$\times$2 blocks
\begin{equation*}
\widetilde C_{m} = {\rm diag} \left( \widetilde C_{m, 1}, \ldots, \widetilde C_{m, S}\right),
\end{equation*}
where $S$ is the total number of subapertures of the WFS in direction ${\boldsymbol\theta}_m$.
Each block $\widetilde C_{m, i}$, $i = 1, \ldots, S$, can be expressed as
\begin{equation}
\label{eq:elongation_matrix}
\widetilde C_{m, i} = \sigma^2 \left(
I
+ \frac{\tau}{f^2} \boldsymbol\beta \cdot \boldsymbol\beta^\top
\right)
\end{equation}
where $f$ is the full width at half maximum (FWHM) of the non-elongated spots,
$\boldsymbol\beta$ is the elongation vector and $\sigma^2$ is as above. Moreover, the parameter $0\leq \tau\leq 1$ is used to tune the relative increase of the noise with respect to the elongation.
Consequently, the block-diagonal structure of the covariance matrix implies that applying $C_{{\mathcal E}}$ or its inverse is computationally very cheap.
For details on deriving $\widetilde C_{m, i}$ we refer to \cite{SpotElong} and the references therein.
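A sketch of the blocks in \eqref{eq:elongation_matrix} and of the cheap application of their inverses is given below; the interleaved ordering $(s^x_1, s^y_1, \ldots, s^x_S, s^y_S)$ of the measurement vector is an assumption of this illustration.
\begin{verbatim}
import numpy as np

def elongation_block(beta, sigma2, tau, f):
    # C_{m,i} = sigma^2 (I + tau/f^2 * beta beta^T), one subaperture
    b = np.asarray(beta, float).reshape(2, 1)
    return sigma2 * (np.eye(2) + (tau / f**2) * (b @ b.T))

def apply_blockdiag_inverse(blocks, s):
    # solve each 2x2 system independently -> O(S) cost overall
    out = np.empty_like(s)
    for i, B in enumerate(blocks):
        out[2*i:2*i + 2] = np.linalg.solve(B, s[2*i:2*i + 2])
    return out
\end{verbatim}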
The tip--tilt uncertainty is closely related to the uncertainty of the location where the LGS scatters in the atmosphere. Such an error has a large impact on the wavefront sensor measurements. However, it can be shown that the major part of the error is contained in the average $x$- and $y$-derivatives over the whole sensor, i.e., the tip and the tilt of the incoming wavefront.
There are several tip--tilt correction methods that can be applied,
such as a split tomography approach \cite{GiEl08}, a coupled--equation approach \cite{ElVo09} or a noise--weighted approach \cite{VoYa06b}.
We use a more straightforward approach, in which we remove the incorrect tip--tilt component in the LGS measurements by modifying equation~(\ref{eq:atm_tomo_comp}) to
\begin{equation}
\label{eq:removing_tt}
(I-T) {\bf s}_{m} = (I-T) \Gamma_{m} P^\textup{LGS}_{m}\boldsymbol\phi
\end{equation}
for $m = 1, \ldots, \nlgs$, where $T$ is an orthonormal projection onto the tip and tilt components. Another way of stating \eqref{eq:removing_tt} is to say that we use
\begin{equation}
\label{eq:widehatc}
\widehat C_{m}^{-1} = (I-T)\widetilde C_{m}^{-1}(I-T)
\end{equation}
as the covariance matrix instead of $\widetilde C_{m}^{-1}$.
Hence this approach neglects more information than, e.g., the noise--weighted approach, and relies more on the NGS measurements.
The successful performance of this method is supported by numerical tests.
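For an idealized sensor in which all subapertures are weighted equally, projecting onto tip and tilt reduces to taking the sensor-average $x$- and $y$-slopes, so $(I-T)$ can be applied as in this sketch (a simplification of the general orthonormal projection):
\begin{verbatim}
import numpy as np

def remove_tip_tilt(sx, sy):
    # (I - T) s: subtract the average gradients over the whole
    # sensor, i.e. the tip and tilt components of the wavefront
    return sx - np.mean(sx), sy - np.mean(sy)
\end{verbatim}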
\input{numerics.tex}
\section{Conclusions}
In this paper we have introduced a novel reconstruction method for the atmospheric tomography problem based on wavelets. The theoretical properties of regular wavelets enable us to apply a sparse regularization on the problem that corresponds to utilizing the Kolmogorov turbulence statistics as an a priori model. Here, we have studied the qualitative performance of the method in the context of MCAO. We derived two variants of the method, concentrating more on quality by solving the full atmospheric reconstruction followed by a fitting step using CG or on speed with the layers--at--DM PCG approach.
We studied the stability of the CG method with respect to the Bernstein--Jackson approximation. Moreover, we demonstrated a fast convergence of the PCG based algorithm. Lastly, we illustrated the quality of the reconstructions in the low--flux regime and showed that the method outperforms the standard reconstruction method, called the MVM, which is used in the ESO simulation platform OCTOPUS.
We believe that the wavelet method is a very promising algorithm in the field of atmospheric tomography. Fully utilizing the multiscale structure of wavelets can be approached by constructing suitable multigrid preconditioning schemes. Furthermore, the gain in the temporal control can be applied scale--dependently. Together with the careful analysis of the speed of the algorithm we leave these considerations for a future study. We point out that an implementation utilizing the discrete wavelet transform is needed in order to achieve the speed requirements of the E-ELT.
\vspace{1cm}
{\bf Acknowledgements:} This work was done in the framework of the project \emph{Mathematical Algorithms and Software for E-ELT Adaptive Optics}. The project is in cooperation with the European Southern Observatory (ESO) and is funded by the Austrian Federal Ministry of Science and Research. The authors are grateful to the Austrian Adaptive Optics team for support. Moreover, the authors would like to thank Miska Le Louarn, Cl{\'e}mentine B{\'e}chet and Petteri Piiroinen for fruitful discussions. Helin was partly funded by the project ERC-2010 Advanced Grant, 267700.
\clearpage
\addcontentsline{toc}{section}{Bibliography}
\bibliographystyle{plain}
\section{Numerical implementation}
\label{sec:numerics}
\subsection{Simulation environment and algorithm}
For the simulations, we use the proposed multi-conjugate adaptive optics configuration for the European Extremely Large Telescope.
The telescope gathers light through a circular pupil with a diameter of 42~m, of which roughly 28 percent is obstructed.
There are six laser guide stars positioned in a circle with a diameter of 2 arcmins.
To each laser guide star, a Shack--Hartmann wavefront sensor with 84$\times$84 subapertures is assigned.
Moreover, there are three natural guide stars positioned in a circle with a diameter of 8/3 arcmins.
The sensors assigned to the natural guide stars are low resolution Shack--Hartmann sensors (one with 2$\times$2 and two with 1$\times$1 subapertures) for tip--tilt correction. Further, the E-ELT uses a configuration of three deformable mirrors, located at altitudes 0 km, 4 km and 12.7~km. The mirrors are modeled by piecewise bilinear functions, with a total number of 9296 degrees of freedom.
We demonstrate the performance of our method on OCTOPUS \cite{OCTOPUS}, the official end-to-end simulation tool of the European Southern Observatory.
The software generates nine frozen layers of the atmosphere located at altitudes between 47 m and 18000 m. The evolution of the atmosphere is simulated by shifting these layers according to their wind directions and speed.
In the test cases below we simulate one second of evolving atmosphere. The Shack--Hartmann measurements are read out 500 times per second.
A two--step delay is observed, as described in Section~\ref{subsec:control}.
The measurements suffer from photon noise and readout noise.
The quality of the reconstruction is evaluated in 25 directions arranged in a 5$\times$5 grid over the field of view, which is a square of 2 arcmins. As a criterion, the long exposure (LE) Strehl ratio \cite{Ro99} in the K band (at a wavelength of 2200 nm) is used. The Strehl ratio is a commonly used measure of quality in the astronomical community. Towards directions close to the zenith it can be estimated by the Mar\'echal approximation \cite{Ro99} according to
\begin{equation*}
s(\boldsymbol\theta) \approx e^{-(2 \pi r({\boldsymbol \theta}) / \lambda)^2},
\end{equation*}
where $s$ is the Strehl ratio, $r({\boldsymbol \theta})$ is the root mean square error in the correction of the incoming wavefront from direction ${\boldsymbol \theta}$, and $\lambda$ is the wavelength. The long exposure Strehl relates to the average Strehl ratio over the observed timespan.
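As a quick numerical illustration of the Mar\'echal approximation (the 250~nm residual below is an arbitrary example value):
\begin{verbatim}
import numpy as np

def marechal_strehl(rms, wavelength=2200.0):
    # Strehl ratio from the residual wavefront RMS r(theta);
    # both arguments in nanometres, K band = 2200 nm
    return np.exp(-(2.0 * np.pi * rms / wavelength) ** 2)

print(marechal_strehl(250.0))   # ~0.60 in K band
\end{verbatim}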
The star asterism, as well as the 25 evaluation directions are depicted in Fig.~\ref{grid_lgs_ngs_5x5}.
\begin{figure}
\centering
\includegraphics[scale=.85]{grid_lgs_ngs_5x5}
\caption{Star asterism of the six laser guide stars and three natural guide stars, as well as the 5$\times$5 quality evaluation grid over the field of view (in arcsec).}
\label{grid_lgs_ngs_5x5}
\end{figure}
We base our algorithm on the POLC method described in Section~\ref{subsec:control}, where the operator $\mathbf R$ combines the solution operator to the atmospheric tomography problem (\ref{REG_REG_map}) and to the fitting step equation~(\ref{eq:fit_step_min}). In the special case of reconstructing turbulence layers directly at DM altitudes, we omit the fitting step equation and determine the mirror shapes by the shape of the reconstructed turbulence layers.
The atmospheric tomography problem (\ref{REG_REG_map}) discretized in the wavelet basis is equivalent to solving the linear system of equations
\begin{equation}
\label{eqn_lse}
(\widetilde{\mathbf A}^T C_{\noiseRV}^{-1} \widetilde{\mathbf A} + \alpha \Dfull) \cfull = \widetilde{\mathbf A}^T C_{\noiseRV}^{-1} \sfull,
\end{equation}
where $\widetilde{\mathbf A}$ is the discretization of (\ref{eq:atm_tomo}).
The role of the scalar parameter $\alpha$ is further discussed in Section~\ref{section_test_case3}.
We solve equation (\ref{eqn_lse}) using either the conjugate gradient (CG) method or the preconditioned conjugate gradient (PCG) method with a modified Jacobi preconditioner, discussed below.
In the numerical simulations we utilize the Daubechies wavelet basis. They are a well--known orthogonal wavelet family with compact support \cite{Daubechies_88, Cohen_etal_92}
and have a useful time-frequency localization. For a sufficiently large $n$, the Daubechies~$n$ wavelets are $2$--regular and fulfill the equivalence~\eqref{REG_WAV_norm_equiv}. In order to enhance the spatial localization
we have chosen to use $n=3$. It is well--known that the Daubechies~3 wavelets
belong to the H\"older space $C^{1+\delta}({\Bbb {R}}^2)$ with $\delta\approx 0.0878$~\cite{Daub92}. Even though they are not 2--regular, they form a Riesz basis in $H^2({\Bbb {R}}^2)$~\cite{Dahmen95}.
Based on our numerical tests we believe that this is sufficient in practice.
By formulating the problem in the wavelet domain we gain a significant improvement in terms of convergence of the CG algorithm. This follows from the underlying operators possessing a favorable spectral structure.
Further, the convergence is accelerated by using a modified Jacobi preconditioner, which we discuss in the following.
The operator appearing on the left-hand side of equation (\ref{eqn_lse}) is given by
\begin{equation*}
\sum_{m=1}^{\nlgs} (\widetilde A^\textup{LGS}_{m})^T \widehat C_{m}^{-1} \widetilde A^\textup{LGS}_{m}
+ \sum_{m'=\nlgs+1}^{\ngs} (\widetilde A^\textup{NGS}_{m'})^T \widetilde C_{m'}^{-1} \widetilde A^\textup{NGS}_{m'}
+ \alpha \Dfull,
\end{equation*}
where $\widehat C_{m}^{-1}$ is defined by equation \eqref{eq:widehatc},
\begin{equation*}
\widetilde A^\textup{LGS}_{m} = \Gamma_{m} P^\textup{LGS}_{m} W^{-1} \textup{ and } \widetilde A^\textup{NGS}_{m} = \Gamma_{m} P^\textup{NGS}_{m} W^{-1}.
\end{equation*}
Above, $W^{-1}$ is the inverse wavelet transform mapping wavelets to functions and~$\Gamma_m$ is the discretization of the Shack--Hartmann operator according to the Fried geometry (see, e.g., \cite{Ro99}).
Following the discussion of Ellerbroek and Vogel in \cite{ElVo09}, we choose our preconditioner based on only the LGS components, as the low--rank perturbations, corresponding to only a finite number of eigenvalues, do not affect the asymptotic convergence rate of the conjugate gradient algorithm. Thus, our modified Jacobi preconditioner is
\begin{equation*}
J = \operatorname*{diag} \left( \sum_{m=1}^{\nlgs} (\widetilde A^\textup{LGS}_{m})^T \widetilde C_{m}^{-1} \widetilde A^\textup{LGS}_{m} \right) + \alpha \Dfull.
\end{equation*}
Finally, we reduce the number of conjugate gradient iterations by choosing the initial guess as the reconstruction in the previous time--step. This widely used technique for iterative methods in adaptive optics \cite{ElVo09} is known as warm restart.
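For reference, the iteration we run is standard preconditioned CG with a diagonal preconditioner and a warm restart; the following generic sketch (not the production code) shows the structure, with \texttt{apply\_A} the operator on the left-hand side of \eqref{eqn_lse} and \texttt{J\_diag} the diagonal of $J$:
\begin{verbatim}
import numpy as np

def pcg(apply_A, b, x0, J_diag, n_iter):
    # preconditioned CG for A x = b; x0 is the warm restart
    # (the reconstruction from the previous time-step)
    x = x0.copy()
    r = b - apply_A(x)
    z = r / J_diag                  # apply J^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        z = r / J_diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
\end{verbatim}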
\subsection{Stability of the regularization}
\label{section_test_case3}
The diagonal regularization operator in our method was obtained by using the Bernstein--Jackson equivalence. Clearly, the argument applied here does not state explicitly which value for $\alpha$ in \eqref{eqn_lse} is the optimal choice. Also, in this context $\alpha$ can be seen as a regularization parameter. Increasing its value can be considered as stabilization against modeling errors.
We point out that there are several components of the problem that affect the stability, e.g., the temporal control (gain) and the modeling of spot elongation. However, too large a value will reduce the quality of the reconstructions. In the following, we demonstrate the performance of the method when $\alpha$ varies.
We study a realistic noise--contaminated situation where the LGSs illuminate 100 photons per subaperture and time--step. Furthermore, the spot elongation and tip--tilt uncertainty for the LGS are simulated.
The NGS tip--tilt sensors observe 500 photons per subaperture and frame. The readout noise is set to 3 and 5 electrons per pixel for the LGS and NGS sensors, respectively.
Our algorithm is set as follows. We reconstruct nine layers at the altitudes of the nine simulated turbulence layers using the CG algorithm with 10 iterations. Further, we solve the fitting step equation (\ref{eq:fit_step_min}) using the CG algorithm with 4 iterations for optimization directions given by the 5$\times$5 evaluation grid.
We choose a gain of $g=0.4$ for the temporal control. The parameter $\tau$ in \eqref{eq:elongation_matrix} was set to $0.8$ for all test cases.
We run independent simulations with a variable parameter $\alpha$ in (\ref{eqn_lse}) for values $\alpha = 0.2, 0.5, 1, 2, 5, 10, 20$ and $40$.
In Fig.~\ref{figure_test3} we plot the average long exposure Strehl over the field of view (red curve) and the on-axis Strehl (blue curve) for those~$\alpha$.
The goal of the MCAO system is to obtain the best correction over the field of view, i.e., attain the largest field average Strehl. However, on-axis Strehl for the zenith-direction is also a quantity of interest.
As can be seen from the plot, the peak on-axis Strehl, as well as the peak field average, is attained with $\alpha = 1$ or~$2$; the difference between the results can be considered negligible.
All higher values of $\alpha$ over-regularize the problem, and the performance of the algorithm, while remaining stable, decreases. A low value of $\alpha$ corresponds to an under-regularized problem, and the performance of the method drops.
The importance of the regularizing term $\mathbf D$ in the presence of high photon--noise can clearly be observed.
\begin{figure}
\includegraphics[scale=.85]{test3}
\caption{Long exposure Strehl vs.~varying $\alpha$ in equation (\ref{eqn_lse}).}
\label{figure_test3}
\end{figure}
\subsection{Convergence of the accelerated method}
Here, we demonstrate the convergence properties of our accelerated method.
We run the simulations in the same configuration as in the first test case, where the number of photons per subaperture and time--step for LGS and NGS wavefront sensors are 100 and 500, respectively. The readout noise is set to 3 electrons per pixel for the LGS sensors and to 5 electrons per pixel for the NGS sensors.
Our accelerated algorithm is set as follows. We reconstruct three layers at the altitudes of the deformable mirrors using the PCG algorithm with the modified Jacobi preconditioner. We utilize $\alpha=1$ in \eqref{eqn_lse} and choose a gain of $g=0.4$ for the temporal control. The parameter $\tau$ in \eqref{eq:elongation_matrix} was set to $0.8$ for all test cases.
In Fig.~\ref{figure_test1} we plot the long exposure Strehl averaged over separation from the zenith after one second of simulated atmospheric propagation. We run separate simulations for the algorithm with 1, 2, 3, 4, 5 and 10 PCG iterations. As can be observed, the improvement after iteration 4 is negligible, which indicates that 4~PCG iterations are sufficient for convergence in this configuration.
\begin{figure}
\includegraphics[scale=.75]{test1}
\caption{Long exposure Strehl using the PCG method with different number of iterations, averaged over the radial field position.}
\label{figure_test1}
\end{figure}
\subsection{Performance with noisy data}
In this example we consider the performance of our methods with respect to the noise level in the LGS measurements. In other words, we simulate LGSs with fluxes between 20 and 200 photons per subaperture and time--step.
We fix the NGS number of photons per subaperture and time--step to 500.
The readout noise is kept as in the previous tests at 3 and 5 electrons per pixel for the LGS and NGS sensors, respectively.
We compare the performance of our methods with the matrix--vector multiply (MVM) method (see, e.g., \cite{MAD}), which is considered to be the benchmark reconstructor of the ESO. The MVM is a non--iterative method in which the MAP estimate is discretized using, e.g., the Zernike polynomials. The regularized forward matrix is inverted and applied directly to the measurements.
The MVM that is presented here reconstructs three layers at DM altitudes, similar to the accelerated wavelet PCG method.
We set the regularization parameter $\alpha = 1$ and the gain $g=0.4$.
The parameter~$\tau$ in \eqref{eq:elongation_matrix} is tuned for each case separately \cite{SpotElong}.
All tests of the CG method are carried out with 10 iterations for the atmospheric tomography step and 4 iterations for the fitting step; the accelerated method utilizes 4~PCG iterations, which we found to be sufficient above.
In Fig.~\ref{figure_test2} a comparison of the reconstruction quality of the three methods is depicted. Both of the wavelet methods perform better than the MVM in almost all cases; the difference in the 20 photon case can be considered negligible. We believe this is due to a better approximation of the layers in the wavelet basis, as well as to the numerical stability of the iterative scheme, as opposed to matrix inversion.
Amongst the two wavelet methods, the approach of reconstructing nine layers followed by a fitting step outperforms the three layer--reconstruction method in quality. The benefit of the full atmospheric tomography is especially emphasized when more photons are detected by the sensor. The disadvantage of the nine layer CG method is the higher computational cost over the PCG.
\begin{figure}
\includegraphics[scale=.85]{test2}
\caption{Long exposure Strehl vs.~detected number of photons per subaperture and time--step of LGS sensors. Solid curve corresponds to the on-axis Strehl; dashed curve to the field average.}
\label{figure_test2}
\end{figure}
To illustrate the difference between the three methods we plot Strehl values in the 25 directions over the field of view for MVM, 3-layer wavelet PCG and 9-layer wavelet CG methods for the $100$ photon case in Fig.~\ref{figure_test2_contour}.
\begin{figure}
\includegraphics[width=\textwidth]{test2_contour_crop}
\caption{Contour plots of the LE Strehl over the field of view (in arcsec) for MVM (left), 3-layer PCG (middle) and 9-layer CG (right) methods.
}
\label{figure_test2_contour}
\end{figure}
\section{Introduction}
\label{sec:intro}
According to the sixth Sustainable Development Goal set by the United Nations, clean water is a key resource for humans and the environment~\cite{SDG_UN_2018}.
However, water quality is threatened extensively by human influences such as the emission of wastewater or overfertilization caused by agriculture.
Hence there is a great demand for a continuous and efficient system to monitor water quality (cf.~\cite{Koponen.2002, PalmerS.KutserT.HunterP.D..2015, MaierP.M.HinzS.KellerS..2018}).
In addition to commonly applied in-situ probes, remote sensing as a technique is often considered when monitoring large water surfaces.
Remote sensing offers some advantages over point-sample measurements.
In particular, satellite image data is frequently available and it is cost-efficient in the long run.
Furthermore, information about water quality parameters derived from satellite images is more representative than in-situ measured point values in terms of area-wide coverage.
One important water quality parameter is chlorophyll~\textit{a} (chl~\textit{a}).
It serves as a proxy for the nutrition supply of a water body.
Chl~\textit{a} is a pigment that occurs in phytoplankton and provides the basis for photosynthesis.
The occurrence of phytoplankton depends on the natural nutrition supply of a water body as well as human sources.
Chl~\textit{a} is detectable by passive remote sensing sensors in the visible spectrum.
An absorption feature in the spectral band region around \SI{665}{\nm} indicates chl~\textit{a}~\cite{MorelA.PrieurL..1977}.
Several studies have already demonstrated the applicability of remote sensing data with respect to the estimation of chl~\textit{a} concentrations in inland waters \cite{Koponen.2002, KellerS.MaierP.RieseF.NorraS.HolbachA.BorsigN.WilhelmsA.MoldaenkeC.Zaak.2018}.
To estimate the chl~\textit{a} concentration with spectral data, two complementary approaches are applied.
First, engineering approaches consider spectral features or band ratios~\cite{GITELSON.1992, Gons.1999}.
Second, machine learning (ML) approaches have emerged in the last decade~\cite{KeinerL.E.YanX.H..1998, Matarrese.2008, GonzalezVilas.2011, MaierP.M.KellerS..2018, KellerS.MaierP.RieseF.NorraS.HolbachA.BorsigN.WilhelmsA.MoldaenkeC.Zaak.2018}.
These approaches estimate the chl~\textit{a} concentration primarily in a supervised way without prior-knowledge of the underlying physical processes.
In general, the estimation of chl~\textit{a} concentrations in water bodies from remote sensing data is a challenging task.
Inland waters are optically complex since they contain suspended and particular materials.
These materials are characteristic of each inland water body~\cite{PalmerS.KutserT.HunterP.D..2015}.
Another limiting factor when monitoring inland waters is the spatial resolution of the satellite images.
Unfortunately, high spectral resolution is often accompanied by a lower spatial resolution.
In the case of the oceans, this is not an issue.
With respect to inland waters, however, the spatial resolution is crucial and hence an exclusion criterion for some satellite sensors.
For example, the SeaWiFS (Sea-viewing Wide Field-of-view Sensor) as an ocean water observation satellite mission has a spatial resolution of more than \SI{1}{\kilo\meter}~\cite{OReillyJ.E.MaritorenaS.MitchellB.G.SiegelD.A.CarderK.L.GarverS.A.Kahru.1998}.
Therefore, most of the smaller inland water bodies are represented by only one mixed pixel, which hinders the use of satellite data for the estimation of the chl~\textit{a} concentration of small water bodies.
Some studies investigate the trade-off between spectral and spatial resolution of satellite data recorded by the common missions~\cite{Decker.1992, Beck.2016}.
A thorough analysis of the estimation performance of feature engineering approaches on chl~\textit{a} concentrations for several simulated satellite sensors is presented in~\cite{Beck.2016}.
Previous work~\cite{MaierP.M.KellerS.2019} addresses the effect of different hyperspectral resolutions of the input data and machine learning models when estimating chl~\textit{a} concentrations.
In this study, we simulate satellite data with respect to several multi- and hyperspectral satellite missions such as Landsat~5, Landsat~8, Sentinel~2, Sentinel~3, EnMAP and Hyperion.
The basis of the simulated data is a spectrometer dataset of $13$ different inland waters, which was acquired in the region surrounding Karlsruhe (Germany) during the summer of 2018.
In total, the dataset contains $408$ datapoints.
Each datapoint consists of the spectral information and the associated chl~\textit{a} concentration.
The simulated spectral data serves as input data for selected ML models to estimate the chl~\textit{a} concentration of the different inland waters.
The objectives of this contribution are:
\begin{compactitem}
\item the simulation of satellite data based on the measured spectrometer data by applying the spectral response function or a Gaussian function (\Cref{sec:format});
\item the estimation of the chl~\textit{a} concentration by applying different supervised ML models such as random forest (RF), support vector machine (SVM), multivariate adaptive regression spline (MARS) and an artificial neural network (ANN) on the respective simulated data (\Cref{sec:typestyle});
\item the comparison of the regression performance in terms of simulated data and applied ML model (\Cref{sec:typestyle});
\item the discussion of the regression performance with the focus on the spectral and the spatial resolution of the input data (\Cref{sec:typestyle}).
\end{compactitem}
\section{Dataset and data simulation}
\label{sec:format}
The data used in this contribution is from a measurement campaign~\cite{MaierP.M.KellerS.2019} in the surroundings of Karlsruhe, a city located in the Southwest of Germany.
During the summer of 2018, $13$ different inland water bodies were measured with a spectrometer and water samples were evaluated with a photometer.
A detailed description of the measurement campaign including the measurement setup is given in~\cite{MaierP.M.KellerS.2019}.
The spectrometer records hyperspectral data in a spectral range of \SI{341}{\nm} to \SI{1015}{\nm} with a sampling interval of~\SI{0.66}{\nm}.
Its measurement principle is based on the ratio between the incoming and the up-welling radiance in the perpendicular direction.
The spectrometer was mounted on a tripod, which was placed as far as possible in the water in case of a natural water body. When measuring an artificial water body, the spectrometer was set outside the water.
The water samples for the chl~\textit{a} concentration analysis, which we use as reference data, were collected close to the spectrometer.
The measured chl~\textit{a} concentrations and the respective spectra of the continuous spectrometer measurements were matched by their respective timestamps.
In total, we obtain a dataset with $408$ datapoints.
Each datapoint consists of the spectral data and a chl~\textit{a} concentration value.
For the satellite-based simulation of the spectral data, we use the spectrometer data in the wavelength range of \SIrange{400}{900}{\nm}.
The simulation of the spectra in accordance with the satellite missions was conducted with the hsdar-package in R~\cite{LehnertL.W.MeyerH.BendixJ}.
Three different approaches exist to calculate the satellite bands out of spectral data with different weighting functions: a Gaussian function, an equal-weighted function and the actual spectral response function.
To calculate the spectra according to the Sentinel~2, Landsat~5 and Landsat~8 missions, we relied on the real spectral response function.
When simulating Sentinel~3, the EnMAP and Hyperion satellite missions, we applied the Gaussian function.
In the case of Sentinel~3, which is not implemented in the hsdar-package, we used the parameters central wavelength and full width at half maximum according to~\cite{Fletcher.} and a Gaussian function to simulate the bands.
\Cref{tab:Overview} gives an overview of the spectral and spatial characteristics of the satellite missions which have been used for the data simulation.
Furthermore, \Cref{fig:bands} illustrates the bandwidth of each satellite mission in the spectral range of \SIrange{400}{900}{\nano\meter}.
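For the missions simulated with a Gaussian weighting, a single band value can be computed as in the following minimal Python sketch (the hsdar package performs the equivalent computation internally; names and the example band are illustrative):
\begin{verbatim}
import numpy as np

def gaussian_band(wl, refl, center, fwhm):
    # weight the measured spectrum with a Gaussian spectral response
    # given by its central wavelength and FWHM (both in nm)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    w = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return np.trapz(w * refl, wl) / np.trapz(w, wl)

# e.g. a 10 nm band near the 665 nm chl-a absorption feature:
# wl = np.arange(400.0, 900.0, 0.66)
# band = gaussian_band(wl, refl, center=665.0, fwhm=10.0)
\end{verbatim}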
\begin{table*}[tb]
\centering
\caption{Summary of some characteristics of the different satellite systems used for the data simulation covering the spectral range between \SIrange{400}{900}{\nm}. The hyperspectral satellite missions are highlighted by ${\ast}$.}
\begin{tabular}{lcccccc}
\toprule
{Satellite}& {Number} & {Bandwidth} & {Spectral range} & {Spatial resolution} & {Approach for} & {Data}\\
{mission} & {of bands} & {in nm} & {in nm} & {in m} & {the simulation} & {source}\\
\midrule
Sentinel~2 & $9$ & {\numrange{18}{145}} & {\numrange{443}{865}} & {\numrange{10}{60}} & {Response function} & \cite{LehnertL.W.MeyerH.BendixJ} \\
Sentinel~3 & $19$ & {\numrange{2.5}{75}} & {\numrange{400}{900}} & {\numrange{300}{1000}} & {Gaussian function} & \cite{Fletcher.} \\
Landsat~8 & $5$ & {\numrange{16}{60}} & {\numrange{443}{865}} & $30$ & {Response function} & \cite{LehnertL.W.MeyerH.BendixJ}\\
Landsat~5 & $4$ & {\numrange{60}{140}} & {\numrange{485}{840}} & $30$ & {Response function} & \cite{LehnertL.W.MeyerH.BendixJ}\\
Hyperion$^{\ast}$ & $54$ & $10$ & {\numrange{406}{895}} & $30$ & {Gaussian function} & \cite{LehnertL.W.MeyerH.BendixJ}\\
EnMAP$^{\ast}$ & $77$ & $6.5$ & {\numrange{423}{895}} & $30$ & {Gaussian function} & \cite{LehnertL.W.MeyerH.BendixJ}\\
\bottomrule
\end{tabular}
\label{tab:Overview}
\end{table*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.9\textwidth]{Satellitenbaender_Multispectral_4.pdf}
\caption{Median spectra of the spectrometer dataset and symbolization of the width of the satellite bands (colored lines). The dots in the middle of each bandwidth represent the simulated reflectance value of the band.}
\label{fig:bands}
\end{figure*}
\section{Methodology}
\label{sec:pagestyle}
For the estimation of the chl~\textit{a} concentration based on the different simulated satellite data, we selected four ML models: support vector machine~\cite{Vapnik.2013}, random forest~\cite{Breiman.2001}, multivariate adaptive regression spline~\cite{Milborrow.2018} and an artificial neural network~\cite{Ripley.1996}.
The applied ML models are inspired by the selection in~\cite{MaierP.M.KellerS..2018} due to their satisfactory performance.
To apply these models, the dataset consisting of the chl~\textit{a} values and the simulated satellite data was prepared.
It was split into five equally sized parts with respect to the distribution of the target variable, the chl~\textit{a} concentration.
Then, each of those parts was split randomly into two subsets: a training subset and a test subset.
All five training subsets were aggregated to the final training subset.
The test subset was generated similarly.
As a result, the distribution of the chl~\textit{a} concentration in the training as well as the test subset were representative compared to the reference measurements.
The training subset was used for the training of the ML models, while the test subset remained unused until the test phase. Before starting with the training, we applied a grid search to adjust the hyperparameters of the models.
For example, hyperparameters of the SVM model are the penalty function cost and the kernel parameter gamma.
During the test phase, the models were validated on the yet unknown test dataset.
The performance of the regression was expressed by the coefficient of determination ($R^2$) and the mean absolute error (MAE).
Following the regression performance on the same database in~\cite{MaierP.M.KellerS.2019}, we also calculated the first derivative of the spectra for the simulated hyperspectral data of the Hyperion and EnMAP mission and applied those derivatives as input data for the RF and MARS model.
In addition, we pre-processed the simulated satellite data with a scaling to ensure good regression results for the MARS, SVM and ANN models.
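A condensed sketch of this protocol, shown here in Python with scikit-learn for illustration (file names are placeholders, and the stratification via binned chl~\textit{a} values only approximates the five-part split described above):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_absolute_error

X = np.load("simulated_bands.npy")   # placeholder file names
y = np.load("chla.npy")              # reference chl-a in ug/L

# split so that train and test follow the chl-a distribution
bins = np.digitize(y, np.quantile(y, [0.2, 0.4, 0.6, 0.8]))
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                      stratify=bins, random_state=0)

# grid search over the SVM hyperparameters cost (C) and gamma
svm = make_pipeline(StandardScaler(),
                    GridSearchCV(SVR(), {"C": [1, 10, 100],
                                         "gamma": ["scale", 0.1, 1.0]}))
svm.fit(Xtr, ytr)
pred = svm.predict(Xte)
print(r2_score(yte, pred), mean_absolute_error(yte, pred))
\end{verbatim}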
\section{Results and Discussion}
\label{sec:typestyle}
\Cref{fig:bars} and \Cref{tab:MAE} present the regression performance of estimating the chl~\textit{a} concentration with respect to the applied ML models as well as the different simulated satellite data.
Regarding \Cref{fig:bars}, the regression performance of the four ML models is in the same range.
When considering the simulated satellite input data for estimating the chl~\textit{a} concentration, the regression results expressed as $R^2$ are distinguishable.
For the simulated hyperspectral satellite data (EnMAP and Hyperion), the coefficient of determination ($R^2$) is quite similar.
In the case of the simulated Landsat data, the regression results are likewise close to each other.
In detail, the ANN model performs worse than the other three models on these two simulated datasets.
However, for the simulated Sentinel data, the ANN model provides the best regression results.
Considering the different simulated satellite data, the regression with the simulated hyperspectral data based on the EnMAP and Hyperion mission achieves the best results.
The corresponding MAE values range between \SIrange{10.1}{12.6}{\micro\gram\per\liter}.
The MAE values of the models with simulated multispectral data according to the Sentinel missions are in the range of \SIrange{10.9}{14.8}{\micro\gram\per\liter}.
The estimation of the chl~\textit{a} concentration of all regression models with simulated Landsat data performs the worst compared to the other simulated satellite data.
The MAE ranges between \SIrange{17.8}{20.5}{\micro\gram\per\liter}.
Analyzing bandwidth, number of bands, spectral range and resolution of the simulated satellite data, \Cref{fig:bands} shows that Landsat~5 (green) and Landsat~8 (blue) have similar bands with a similar band positioning.
The three bands between \SIrange{450}{700}{\nm} are nearly the same.
In the spectral range of \SIrange{800}{900}{\nm} Landsat~8 provides a narrower band than Landsat~5 and it has an additional fifth narrow band near \SI{430}{\nm}.
With respect to the estimation of the chlorophyll \textit{a} concentration, this additional band has no further impact on the regression task.
Similar to the simulated Landsat data, the simulated multispectral Sentinel~3 data provides a better spectral resolution and accounts for more bands with narrower bandwidths than Sentinel~2.
However, the regression performance of the ML models on simulated Sentinel~3 data is not clearly better than the regression performance of the models with simulated Sentinel~2 data.
When comparing the estimation performance with either simulated Sentinel data or simulated Landsat data, the superior performance of the models using the simulated Sentinel data can be explained well.
First, the simulated Sentinel data is characterized by more bands.
And second, these bands are well positioned within the spectral range of \SIrange{400}{900}{\nano\meter}.
For example, the simulated Sentinel data includes the extremes in the range of \SIrange{660}{710}{\nm} which are related to chl~\textit{a}.
The mentioned spectral range is not included in the two Landsat missions and explains the poor chl~\textit{a} estimation of all models~\cite{Decker.1992}.
The simulated hyperspectral data (EnMAP and Hyperion) with a nearly constant spectral resolution of \SI{6.5}{\nm} and \SI{10}{\nm} are not shown in \Cref{fig:bands} for reasons of clarity.
Comparing the regression results with the simulated hyperspectral and the simulated Sentinel data, the models relying on the hyperspectral datasets perform only slightly better.
This finding indicates that the band positioning of the Sentinel missions is good for the estimation of chl~\textit{a} concentrations.
Regarding the applicability of the simulated satellite data for a general monitoring approach in the context of inland waters, the Sentinel~2 data serves its purpose.
It provides data with appealing spectral resolution, a sufficient spatial resolution and is characterized by a high temporal frequency.
Hyperspectral data with a better spectral resolution leads to a satisfying chl~\textit{a} estimation by applying the same ML models.
However, their temporal resolution lags behind that of the Sentinel missions, each of which consists of two satellite systems.
Differentiating between the two Sentinel missions, the application of the Sentinel~3 satellites is limited to large inland water surfaces due to their poor spatial resolution of \SIrange{300}{1000}{\meter}.
In addition, the Landsat satellite missions provide an attractive spatial and temporal resolution as well.
However, the regression results of the models are the worst with this data since the Landsat missions are characterized by the lowest spectral resolution of all simulated satellite missions.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.85\textwidth]{Ergebnisse_7.pdf}
\caption{Regression results ($R^2$ in~$\%$) of the four ML models with different simulated satellite data.}
\label{fig:bars}
\end{figure*}
\renewcommand{\arraystretch}{1.0}
\begin{table}[tb]
\centering
\caption{Performance of the regression models expressed by MAE in \si{\micro\gram\per\liter}.}
\resizebox{\linewidth}{!}{
\begin{tabular}{lSSSS}
\toprule
\multirow{1}{*}{Simulated satellite data} & {RF} & {SVM} & {ANN} & {MARS}\\
\midrule
EnMAP & {10.9} & {12.6} & {11.7} & {10.1}\\
Hyperion & {11.3} & {12.2} & {11.3} & {10.5}\\
Landsat~5 & {17.8} & {18.5} & {19.6} & {19.0}\\
Landsat~8 & {18.8} & {18.8} & {20.0} & {20.5}\\
Sentinel~2 & {14.8} & {13.2} & {11.5} & {14.2}\\
Sentinel~3 & {14.3} & {14.1} & {10.9} & {13.0}\\
\bottomrule
\end{tabular}}
\label{tab:MAE}
\end{table}
\section{Conclusion}
\label{sec:majhead}
In this paper, we address the estimation of chl~\textit{a} concentration with different simulated spectral data and supervised ML models.
We rely on a spectrometer dataset measured at several inland water bodies.
For the simulation of the satellite-base data, we chose six different satellite missions as examples.
In addition, we apply four different supervised ML models for the estimation of the chl~\textit{a} concentration.
When comparing the simulated satellite data, the regression performance of all models with the simulated hyperspectral data achieves the best results due to their spectral and spatial resolution.
Referring to the estimation results, the ML models combined with the simulated Sentinel data are slightly worse than the estimation based on the simulated hyperspectral data.
Regarding the applicability for a generic monitoring approach of inland waters, the Sentinel~2 mission provides the best option for smaller water bodies.
The Sentinel~3 mission poses an alternative for large water bodies.
When focusing on the different ML models, the choice of a specific ML model has a minor impact on the regression performance.
Only the ANN model outperforms the other models when using the simulated Sentinel data.
In this study, we have focused on the estimation of the chl~\textit{a} concentration as a selected water quality parameter.
For the estimation of further quality parameters such as different algae types, the (simulated) hyperspectral data could provide an excellent basis due to its high spectral resolution.
The choice of the ML model and of the (simulated) satellite data has to be adapted to the respective water quality parameter to be estimated.
This investigation could be addressed in future work.
\bibliographystyle{IEEEbib}
\section{Materials and methods}
\subsection{Preparation of DNA molecules}
A 105-bp-long DNA molecule was extracted from yeast genomic DNA by polymerase chain reaction (PCR) to serve as a control DNA template without any structural defect. To probe the effect of a permanent defect, we planned to introduce a DNA mismatch into the control molecule by mixing it with one of its mutants, followed by a strand-exchange reaction. To do so, we additionally prepared a set of mutated DNA molecules that differ from the control only at a certain location, where we placed a mutation of 1 bp, 3 bp, or 5 bp in size. To make such molecules, first, the mutated templates of the control DNA were synthesized by Eurofins Genomics (EXTREMer oligos) and duplexed via PCR. Each of the duplexed products was then incorporated into a pJET1.2/blunt vector (ThermoFisher) and cloned into DH5\textrm{$\alpha$} \textit{Escherichia coli} cells. Finally, the cloned fragments of DNA were extracted from the cells via colony PCR and were sequenced to ensure that the correct mutation was made at the desired location.
To modify these molecules to carry a FRET pair (i.e. Cy3 and Cy5), biotin, and single-stranded sticky ends, we followed our standard preparation protocol \cite{Le2014jove}, which involves a series of PCR and strand-exchange reactions that can be found elsewhere. For introducing a DNA mismatch into the final construct, we mixed the Cy3-labeled control molecule with one of the Cy5-labeled mutated molecules at a ratio of 4:1 in the strand-exchange reaction.
The final DNA construct generated by this protocol carries a 5$^\prime$ protruding sticky end on each end and makes a hairband loop upon end-annealing as shown in Figure 2(a) of the main text. We also made hairpin loops by having sticky ends on the same DNA strand (Figure 3(a) of the main text). A complete list of all DNA sequences can be found in Tables \ref{supp-tab1} and \ref{supp-tab2} below.
\subsection{Single-molecule FRET looping and unlooping assay}
We followed our previous single-molecule FRET assay that employs the sudden salt-exchange protocol \cite{Le2014,Jeo}. For cyclization, DNA molecules were deposited on a passivated surface of a flow cell and were incubated for 10 minutes in a low-salt (10 mM [NaCl]) imaging buffer containing the PCD-PCA oxygen scavenging system \cite{Aitken2008}. We then injected a high-salt (1 M [NaCl]) imaging buffer into the flow cell to promote capture of the loop configuration by the sticky ends. Decyclization measurements were done similarly, except that the NaCl concentration was changed from 2 M to 75 mM. The immobilized molecules were excited continuously by a 532-nm laser through an objective-type TIR microscope from the beginning of the buffer exchange. The time trajectories of FRET signals (Figure 1(b) of the main text) from the molecules were recorded by an EMCCD camera (DU-897ECS0-\#BV, Andor) at a rate of 100 ms per frame for the mismatch-free molecules and 50 ms per frame for the molecules with a mismatch.
\subsection{Minicircle simulations}
The Monte Carlo simulation of a minicircle was implemented as previously described \cite{zheng2009theoretical,Le2014}. A set of 105 connected nodes was used to create a coarse-grained representation of a DNA minicircle of 105 bp. The bending energy at each node was described by the kinkable worm-like chain model \cite{zheng2009theoretical} with the parameters $b = 0.3$ and $h = 12$, following the notation used in Ref. \cite{Vologodskii2013a}. We performed the simulation with and without a flexible defect of zero bending energy placed at a fixed location. For the case of no flexible spot, we first initialized the simulation without allowing kink formation. Once the kink-free simulation was equilibrated, we allowed spontaneous kinks to appear. To construct the probability density of kink positions, we ran the simulation and stopped it at the first appearance of a kink. We then recorded the position of this kink and re-equilibrated the chain to the kink-free state. This procedure was repeated until we had collected a distribution of 1000 kink positions. The same procedure was repeated in the presence of the hyperflexible spot to predict the effect of a flexible spot on the probability distribution of kink positions.
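As a rough illustration of this procedure, the sketch below implements a drastically simplified, planar analogue of the kinkable worm-like chain Monte Carlo: a ring of 105 nodes carries bending angles under a closure constraint, an unkinked node pays a harmonic bending energy with $h=12$, a kink replaces this energy by the weight $b=0.3$, and the position of the first accepted kink is recorded. The move set, step size, and equilibration lengths are illustrative assumptions; the actual study used a full three-dimensional model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Ring of N nodes with bending angles theta, closure sum(theta)=2*pi.
# Unkinked nodes pay 0.5*h*theta^2; a kink replaces this by mu=-log(b).
N, h, b = 105, 12.0, 0.3
mu = -np.log(b)
defect = None                 # e.g. 52 places a flexible spot mid-ring

def elastic(i, t):
    return 0.0 if i == defect else 0.5 * h * t * t

def sweep(theta, allow_kinks):
    """One Metropolis sweep; returns the first accepted kink position."""
    for _ in range(N):
        i, j = rng.integers(N, size=2)
        if i == j:
            continue
        d = 0.2 * rng.normal()    # transfer angle, ring stays closed
        dE = (elastic(i, theta[i] + d) + elastic(j, theta[j] - d)
              - elastic(i, theta[i]) - elastic(j, theta[j]))
        if rng.random() < np.exp(-dE):
            theta[i] += d
            theta[j] -= d
        if allow_kinks:
            k = int(rng.integers(N))
            # propose trading the elastic energy at node k for a kink
            if rng.random() < np.exp(-(mu - elastic(k, theta[k]))):
                return k
    return None

positions = []
theta = np.full(N, 2 * np.pi / N)
while len(positions) < 1000:
    for _ in range(20):                    # kink-free equilibration
        sweep(theta, allow_kinks=False)
    k = None
    while k is None:                       # run until the first kink
        k = sweep(theta, allow_kinks=True)
    positions.append(k)

density = np.bincount(positions, minlength=N) / len(positions)
\end{verbatim}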
\clearpage
\onecolumngrid
\begin{longtable*}{| P{0.14\textwidth}| p{0.75\textwidth}| }
\caption{DNA sequences of hairband molecules.}\label{supp-tab1} \\
\hline \hline
\endfirsthead
\hline \multicolumn{2}{|r|}{{Continued on next page}} \\ \hline
\endfoot
\endlastfoot
No mismatch & 5$^\prime-$\underline{TGAATTTAC}\seqsplit{G}\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCGCGATCGCCATGGCAACGAGGTCGCACACGCCCCACACCCAGACCTCCCTGCGAGCGGGCATGGGTACAATCATTCGAGCTCGTTGTAG}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGCGCTAGCGGTACCGTTGCTCCAGCGTGTGCGGGGTGTGGGTCTGGAGGGACGCTCGCCCGTACCCATGTTAGTAAGCTCGAGCAACA}\textcolor{green}{\textbf{T}}C\underline{ACTTAAATG}-5$^\prime$
\\ \hline
1bp-mismatch (central) & 5$^\prime-$\underline{TGAATTTAC}\seqsplit{G}\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCGCGATCGCCATGGCAACGAGGTCGCACACGCCCCAGACCCAGACCTCCCTGCGAGCGGGCATGGGTACAATCATTCGAGCTCGTTGTAG}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGCGCTAGCGGTACCGTTGCTCCAGCGTGTGCGGGGTCTGGGTCTGGAGGGACGCTCGCCCGTACCCATGTTAGTAAGCTCGAGCAACA}\textcolor{green}{\textbf{T}}C\underline{ACTTAAATG}-5$^\prime$
\\ \hline
3bp-mismatch (central) & 5$^\prime-$\underline{TGAATTTAC}\seqsplit{G}\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCGCGATCGCCATGGCAACGAGGTCGCACACGCCCCGGGCCCAGACCTCCCTGCGAGCGGGCATGGGTACAATCATTCGAGCTCGTTGTAG}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGCGCTAGCGGTACCGTTGCTCCAGCGTGTGCGGGGCCCGGGTCTGGAGGGACGCTCGCCCGTACCCATGTTAGTAAGCTCGAGCAACA}\textcolor{green}{\textbf{T}}C\underline{ACTTAAATG}-5$^\prime$
\\ \hline
5bp-mismatch (central) & 5$^\prime-$\underline{TGAATTTAC}\seqsplit{G}\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCGCGATCGCCATGGCAACGAGGTCGCACACGCCCGCGCGCCAGACCTCCCTGCGAGCGGGCATGGGTACAATCATTCGAGCTCGTTGTAG}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGCGCTAGCGGTACCGTTGCTCCAGCGTGTGCGGGCGCGCGGTCTGGAGGGACGCTCGCCCGTACCCATGTTAGTAAGCTCGAGCAACA}\textcolor{green}{\textbf{T}}C\underline{ACTTAAATG}-5$^\prime$
\\ \hline
3bp-mismatch (10 bp off-center)& 5$^\prime-$\underline{TGAATTTAC}\seqsplit{G}\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCGCGATCGCCATGGCAACGAGGTCGTGGACGCCCCACACCCAGACCTCCCTGCGAGCGGGCATGGGTACAATCATTCGAGCTCGTTGTAG}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGCGCTAGCGGTACCGTTGCTCCAGCACCTGCGGGGTGTGGGTCTGGAGGGACGCTCGCCCGTACCCATGTTAGTAAGCTCGAGCAACA}\textcolor{green}{\textbf{T}}C\underline{ACTTAAATG}-5$^\prime$
\\ \hline
3bp-mismatch (20 bp off-center) & 5$^\prime-$\underline{TGAATTTAC}\seqsplit{G}\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCGCGATCGCCATGGCGGTGAGGTCGCACACGCCCCACACCCAGACCTCCCTGCGAGCGGGCATGGGTACAATCATTCGAGCTCGTTGTAG}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGCGCTAGCGGTACCGCCACTCCAGCGTGTGCGGGGTGTGGGTCTGGAGGGACGCTCGCCCGTACCCATGTTAGTAAGCTCGAGCAACA}\textcolor{green}{\textbf{T}}C\underline{ACTTAAATG}-5$^\prime$
\\ \hline
\end{longtable*}
\begin{longtable*}{| P{0.14\textwidth}| p{0.75\textwidth}| }
\caption{DNA sequences of hairpin molecules.}\label{supp-tab2} \\
\hline \hline
\endfirsthead
\hline \multicolumn{2}{|r|}{{Continued on next page}} \\ \hline
\endfoot
\endlastfoot
No mismatch & 5$^\prime-$\underline{TGAATTTACG}(CT)G\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCACATCGCCATGGCAACGAGGTCGCACACGCCCCACACCCAGACCTCCCTGCGAGCGGGCATGGGTTGCATGTCAGCTATGGATCCATTCGTAAATTCA}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGTGTAGCGGTACCGTTGCTCCAGCGTGTGCGGGGTGTGGGTCTGGAGGGACGCTCGCCCGTACCCAACGTACAGT}(CG)\underline{ATACCTAGGT}-5$^\prime$[\textcolor{green}{Cy3}]
\\ \hline
1bp-mismatch (central) & 5$^\prime-$\underline{TGAATTTACG}(CT)G\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCACATCGCCATGGCAACGAGGTCGCACACGCCCCAGACCCAGACCTCCCTGCGAGCGGGCATGGGTTGCATGTCAGCTATGGATCCATTCGTAAATTCA}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGTGTAGCGGTACCGTTGCTCCAGCGTGTGCGGGGTCTGGGTCTGGAGGGACGCTCGCCCGTACCCAACGTACAGT}(CG)\underline{ATACCTAGGT}-5$^\prime$[\textcolor{green}{Cy3}]
\\ \hline
3bp-mismatch (central) & 5$^\prime-$\underline{TGAATTTACG}(CT)G\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCACATCGCCATGGCAACGAGGTCGCACACGCCCCGGGCCCAGACCTCCCTGCGAGCGGGCATGGGTTGCATGTCAGCTATGGATCCATTCGTAAATTCA}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGTGTAGCGGTACCGTTGCTCCAGCGTGTGCGGGGCCCGGGTCTGGAGGGACGCTCGCCCGTACCCAACGTACAGT}(CG)\underline{ATACCTAGGT}-5$^\prime$[\textcolor{green}{Cy3}]
\\ \hline
5bp-mismatch (central) & 5$^\prime-$\underline{TGAATTTACG}(CT)G\textcolor{red}{\textbf{T}}\seqsplit{GCCAGCAACAGA[T]AGCCACATCGCCATGGCAACGAGGTCGCACACGCCCGCGCGCCAGACCTCCCTGCGAGCGGGCATGGGTTGCATGTCAGCTATGGATCCATTCGTAAATTCA}-3$^\prime$ \newline
3$^\prime-$\seqsplit{CACGGTCGTTGTCTATCGGTGTAGCGGTACCGTTGCTCCAGCGTGTGCGGGCGCGCGGTCTGGAGGGACGCTCGCCCGTACCCAACGTACAGT}(CG)\underline{ATACCTAGGT}-5$^\prime$[\textcolor{green}{Cy3}]
\\ \hline
\end{longtable*}
\begin{minipage}{0.95\textwidth}
Both top (5$^\prime$ to 3$^\prime$) and bottom (3$^\prime$ to 5$^\prime$) sequences are shown. The underlined sequences represent sticky ends. A Cy5 fluorophore is internally attached at the thymine base colored in red. A Cy3 fluorophore is either at the green thymine base or at the 5$^\prime$ end of the bottom strand. A biotin molecule is linked to the thymine base shown as [T]. Hairpin molecules include a 2-nt gap (indicated by sequences in parentheses) near each end of the top strand, before the sticky ends.
\end{minipage}
\section{Introduction}
Many applications in science and engineering require that the predictions of uncertain models be updated by information from a stream of noisy data (see
e.g. \cite{Doucet2001,Bocquet2010}). The model and data jointly define a conditional probability density function~(pdf) $p(x^{0:n}|z^{1:n})$,
where the discrete variable $n=0,1,2,\dots$ can be thought of as discrete time, $x^n$ is a real $m$-dimensional vector to be estimated, called the ``state'', $x^{0:n}$ is a shorthand for $x^0,x^1,\dots,x^n$, and where the data sets
$z^{n}$ are $k$-dimensional vectors ($k\leq m$). All the information we have about the state at time $n$ is contained in this conditional pdf
and a variety of methods are available for its study, e.g. the Kalman filter \cite{Kalman1960}, the extended and ensemble Kalman filter
\cite{EvensenBook}, particle filters \cite{Doucet2001}, or variational methods \cite{TalagrandCourtier,Bennet1993}. Given a model and data, each of these algorithms will produce a result. We are interested in the conditions under which this result is reasonable, i.e. consistent with the real-life situation one is modeling.
Generally, we restrict the analysis to linear state space models driven by Gaussian noise and supplemented by a synchronous stream of data perturbed by Gaussian noise, i.e. the noisy data are available at every time step of the model and only then. We further assume that all model parameters (including the covariance matrices of the noise) are known, i.e. we consider the case of state estimation (not parameter estimation or combined state and parameter estimation). We study this class of problems because it can be examined in some generality and (we believe) can explain qualitatively its important aspects, but we also point out its limitations.
In section~2, we examine what can be expected for our Gaussian model in principle, without regard to a specific algorithm. We define an effective dimension of a Gaussian data assimilation problem and show that, unless this effective dimension is moderate, the uncertainty in the model and the data is excessive so that making reliable conclusions about the underlying process is impossible. We argue that realistic problems have a moderate effective dimension. Investigating the role of the effective dimension in data assimilation algorithms is the subject of the remaining part of the paper. We briefly review particle filters in section 3. In section 4, we use the results of \cite{Snyder} to find the conditions under which the optimal particle filter (which in the linear synchronous case coincides with the implicit particle filter) performs well, and compare these conditions to what can be done in principle. We conclude that optimal particle filters can solve many data assimilation problems even if the number of variables to be estimated is large. Building on the results in \cite{Bickel,Bickel2,BickelBootstrap}, another particle filter is shown to fail under realistic conditions. Thus, the implementation of particle filters is very important, since a poor implementation can lead to a poor performance even if the effective dimension is small. In section 5 we consider particle smoothing and variational data assimilation and show that these methods can only be successful under conditions which are comparable to what we observed in particle filtering. We discuss limitations of our analysis in section~\ref{sec:limitation} and present conclusions in section~7.
To avoid confusion, we wish to point out here that the effective dimension defined in this paper is different from the effective dimension introduced in \cite{Bickel,Bickel2,BickelBootstrap,Snyder}. The effective dimension in \cite{Bickel,Bickel2,BickelBootstrap,Snyder} is connected to particle filters, whereas the effective dimension defined in this paper is a characteristic of the model and data stream, i.e. independent of the algorithm for data assimilation. We show in particular that the effective dimension (as defined here) is well-bounded for realistic models and that numerical data assimilation can be successful in these cases, even if the number of variables to be estimated is large (i.e. a moderate effective dimension in our sense can imply a small effective dimension in the sense of \cite{Bickel,Bickel2,BickelBootstrap,Snyder}).
\section{The effective dimension of linear Gaussian data assimilation problems}
\label{sec:EffectiveDimension}
We consider autonomous, linear Gaussian state space models of the form
\begin{equation}
x^{n+1}=Ax^n+w^n
\label{model}
\end{equation}
where $n=0,1,2,\dots$ is a discrete time, $A$ is a given $m\times m$ matrix and the $w^n$ are independent and identically distributed (iid) Gaussian random variables with mean zero and given covariance matrix $Q$, which we write
as $w^n\sim\mathcal{N}(0,Q)$. The initial conditions may be random and we assume that their pdf is also Gaussian, i.e. $x^0\sim\mathcal{N}
(\mu_0,\Sigma_0)$, with both $\mu_0$ and $\Sigma_0$ given. We assume further that the data satisfy
\begin{equation}
z^{n+1}=Hx^{n+1}+v^{n+1},
\label{data}
\end{equation}
where $H$ is a given $k\times m$ matrix ($k\leq m$) and the $v^{n+1}\sim\mathcal{N}(0,R)$ are iid, where $R$ is a given $k\times k$ matrix. The $w^n$'s and $v^n$'s are independent of each other.
In principle, but not necessarily in practice, the
covariance matrix $P_n$ of the state $x^n$ conditioned on the data $z^{1:n}$ can be computed recursively, starting with $P_0 = \Sigma_0$:
\begin{eqnarray*}
X_n &=&AP_nA^T+Q,\\
K_n& =& X_nH^T(HX_nH^T+R)^{-1},\\
P_{n+1} &=& (I_m-K_nH)X_n,
\end{eqnarray*}
where $I_m$ is the identity matrix of order $m$ and the $m\times k$ matrix $K_n$ is often called the ``Kalman gain''. This is the Kalman formalism. Under suitable conditions on the ranks of the several matrices (specifically, if the pair $(H,A)$ is $d$-detectable and $(A,Q)$ is $d$-stabilizable, see \cite{Lancaster}, pp. 90--91), the covariance matrix reaches a ``steady state'' so that
\begin{equation*}
P_{n+1} =P_n = P = (I-KH)X,
\end{equation*}
where $X$ is the unique positive semi-definite solution of the discrete algebraic Riccati equation (DARE)
\begin{equation*}
X=AXA^T-AXH^T(HXH^T+R)^{-1}HXA^T+Q.
\end{equation*}
Note that the steady state value of the covariance matrix is independent of the initial covariance matrix $\Sigma_0$ and that the rate
of convergence to this limit is at least linear, in many cases quadratic (see \cite{Lancaster}, p. 313). This means that, after a relatively short time, the samples of the state given the data are normally distributed with mean $\mu_n$ and covariance matrix $P$ (the mean $\mu_n$ of the variables is not needed here, but it can also be computed using Kalman's formulas).
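As a sanity check of the formulas above, the sketch below iterates the covariance recursion for an illustrative random system and compares the result with the steady state computed from the DARE; the system matrices are assumptions made for the example, and SciPy's solve_discrete_are is called in its dual (estimation) form.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
m, k = 4, 2
A = 0.9 * np.eye(m) + 0.05 * rng.normal(size=(m, m))
H = rng.normal(size=(k, m))
Q = 0.1 * np.eye(m)
R = 0.2 * np.eye(k)

P = np.eye(m)                             # P_0 = Sigma_0
for _ in range(500):                      # Kalman covariance recursion
    X = A @ P @ A.T + Q
    K = X @ H.T @ np.linalg.inv(H @ X @ H.T + R)
    P = (np.eye(m) - K @ H) @ X

# dual (estimation) form of the DARE:
# X = A X A^T - A X H^T (H X H^T + R)^{-1} H X A^T + Q
X = solve_discrete_are(A.T, H.T, Q, R)
P_dare = X - X @ H.T @ np.linalg.inv(H @ X @ H.T + R) @ H @ X
print(np.linalg.norm(P - P_dare, "fro"))  # ~0 at steady state
\end{verbatim}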
The Frobenius norm $||P||_F=(\sum_{ij}p_{ij}^2)^{1/2}$ of the covariance matrix $P=(p_{ij})$ determines how far these samples spread out in the state space. To see this, consider the random variable $y = (x_n-\mu_n)^T(x_n-\mu_n)$, where $x_n-\mu_n\sim\mathcal{N}(0,P)$, i.e. consider the squared distances of the samples from their mean (their most likely value).
Let $U$ be an orthogonal $m\times m$ matrix whose columns are the eigenvectors of $P$. Then
\begin{equation*}
y= (x_n-\mu_n)^T(x_n-\mu_n)=s^Ts=\displaystyle \sum_{j=1}^ms_j^2,
\end{equation*}
where $s=U^T(x_n-\mu_n)\sim\mathcal{N}(0,\Lambda)$, and $\Lambda=U^TPU$ is a diagonal matrix whose diagonal elements are the $m$
eigenvalues $\lambda_j$ of $P$. It is now straightforward to compute the mean and variance of $y$ because the $s_j$'s (the elements of $s$) are independent:
\begin{equation*}
E(y) = \displaystyle \sum_{j=1}^m \lambda_j,\quad var(y) = \displaystyle 2\sum_{j=1}^m \lambda_j^2.
\end{equation*}
Note that $y=r^2$, where $r$ is the distance from the sample to the most likely state (the mean). Assuming that $m$ is finite (but large), we obtain, using a Taylor expansion of $r=\sqrt{y}$ around $E(y)=\sum_{j}\lambda_j$ and assuming that $\lambda_j=O(1)$, that
\begin{eqnarray*}
E(r) &=& \frac{4\left(\displaystyle\sum_{j=1}^m \lambda_j\right)^2-2\displaystyle\sum_{j=1}^m \lambda_j^2}{4\left(\displaystyle\sum_{j=1}^m \lambda_j\right)^{1.5}}+O_p\left(\frac{\displaystyle\sum_{j=1}^m \lambda_j^4}{(\displaystyle\sum_{j=1}^m \lambda_j)^4}\right)=\hat{E}(r)+O_p\left(\frac{\displaystyle\sum_{j=1}^m \lambda_j^4}{(\displaystyle\sum_{j=1}^m \lambda_j)^4}\right),\\
var(r)&=&\frac{\displaystyle \sum_{j=1}^m \lambda_j^2}{2\displaystyle \sum_{j=1}^m \lambda_j}+O_p\left(\frac{\displaystyle\sum_{j=1}^m \lambda_j^4}{(\displaystyle\sum_{j=1}^m\lambda_j)^3}\right)=\hat{v}(r)+O_p\left(\frac{\displaystyle\sum_{j=1}^m \lambda_j^4}{(\displaystyle\sum_{j=1}^m\lambda_j)^3}\right).
\end{eqnarray*}
Using the techniques in \cite{BickelBootstrap}, one can show that for $m\rightarrow \infty$ and $\sum\lambda_j \rightarrow \infty$ and with $\lambda_j=O(1)$ (i.e. in the case for which the moments of $y$ do not necessarily exist) the above expressions become the asymptotic mean and variance of $r$, i.e. the moments of the limiting Gaussian variables.
Now suppose that $m$ is large but finite and that $\lambda_j =O(1)$ for $j=1,\dots,m$; then $\hat{E} =O(m^{1/2})$ and $\hat{v} =O(1)$. This means that the samples collect on a shell of thickness $O(1)$ at a distance $O(m^{1/2})$ from their mean and are distributed over a volume $O(m^{(m+1)/2})$, i.e. the predictions spread out over a large volume at a large distance from the most likely state. However, the data assimilation problem reflects an experimental situation, and the numerical samples should behave just like experimental samples: if the uncertainty is large, one will observe that the outcomes of repeated experiments exhibit a large spread; if the uncertainty is small, then the spread in the outcomes of experiments is also small. Since the outcomes of repeated experiments rarely exhibit large variations, one should expect that the samples of numerical data assimilation all fall into a small ``low-dimensional" ball, centered around the most likely state, i.e. the radius, $E( r)\approx\hat{E}$, is comparable to the thickness, $var( r)\approx\hat{v}$. Data assimilation only makes sense in this case because, otherwise, the samples spread out over a huge volume and there is not enough information in the model and data to make reliable conclusions about the state.
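The concentration of the samples on a thin shell is easy to verify numerically. In the sketch below all eigenvalues are set to one ($P=I_m$), so the sample radius should concentrate near $\sqrt{m}$ with a spread of about $\sqrt{1/2}$, independent of $m$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for m in (10, 100, 1000):
    x = rng.normal(size=(10000, m))   # samples of N(0, I_m)
    r = np.linalg.norm(x, axis=1)     # distance from the mean
    print(m, r.mean(), r.std())       # mean ~ sqrt(m), std ~ sqrt(1/2)
\end{verbatim}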
Standard inequalities show that
\begin{equation*}
\sqrt{\displaystyle \sum_{j=1}^m \lambda_j^2} \leq \displaystyle \sum_{j=1}^m \lambda_j \leq \, \sqrt{m\displaystyle \sum_{j=1}^m \lambda_j^2},
\end{equation*}
and now one can obtain upper bounds for $\hat{E}$ and $\hat{v}$:
\begin{equation*}
\hat{E} \leq m^{1/4}\left(\displaystyle\sum_{j=1}^m \lambda_j^2\right)^{1/4}, \quad \hat{v}\leq \frac{1}{2}\left(\displaystyle\sum_{j=1}^m \lambda_j^2\right)^{1/2}.
\end{equation*}
These upper bounds imply that the Frobenius norm of the steady state covariance matrix $P$, $||P||_F = \sqrt{\sum \lambda_j^2}$, determines the mean and variance of the distance of a sample from the most likely state, i.e. the spread of the samples in the state space. We thus define the \emph{effective dimension} of the Gaussian data assimilation problem defined by equations~(\ref{model}) and~(\ref{data}) to be the Frobenius norm of $P$.
Note that this effective dimension is different from the definitions in \cite{Bickel,Bickel2,BickelBootstrap,Snyder}, which are defined in connection with specific particle filters. The effective dimension defined here is independent of a data assimilation technique; it is a characteristic of the model~(\ref{model}) and data stream~(\ref{data}). We expect the effective dimension to be bounded in practice and corroborate this point in the next two sections.
Finally we want to point out that we study the posterior pdf (the pdf of $x$ conditioned on the data); this pdf can be calculated with the Kalman filter formulas, which however are valid only for linear Gaussian models as in (\ref{model}) and (\ref{data}). Nonlinear or non-Gaussian models are not discussed here, but we mention the limitations of our analysis in more detail in section \ref{sec:limitation}.
\subsection{Bounds on the effective dimension}\label{sec:EffDimBounds}
To discover the real-life interpretation of the effective dimension, we study its upper bounds in terms of the Frobenius norms of $Q$ and $R$. From Khinchin's theorem \cite{ChorinHald} we know that the Frobenius norms of $Q$ and $R$ must be bounded, or else the energy of the noise is infinite (which is unrealistic). Here, we show that small Frobenius norms of $Q$ and $R$ can lead to a small effective dimension. We start with a simple example, which is also useful in the study of data assimilation methods in later sections.
\vspace{3mm}
\subsubsection{Example} Put $A=H=I_m$ and let $Q=qI_m$, $R=rI_m$. The Riccati equation can be solved analytically for this example and we find that the Frobenius norm of the steady state covariance is
\begin{equation*}
||P||_F=\sqrt{m}\frac{\sqrt{q^2+4qr}-q}{2}.
\end{equation*}
In a real-life problem, we would expect $||P||_F$ to grow slowly, if at all, when the number of variables increases. The boundedness of the effective dimension induces a ``balance condition'' between the errors in the model (represented by $q$) and the errors in the data (represented by~$r$). In this simple example, this balance condition is the inequality
\begin{equation*}
\frac{\sqrt{q^2+4qr}-q}{2}\leq \frac{1}{\sqrt{m}},
\end{equation*}
where the $1$ in the numerator of the right-hand side stands for a constant $O(1)$, or even for a function of $m$ that grows slower than $\sqrt{m}$; we set this constant equal to $1$ because this already captures the general behavior.
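The closed form above can be checked against the scalar Riccati recursion; the values of $q$, $r$, and $m$ in the following sketch are arbitrary.
\begin{verbatim}
import numpy as np

q, r, m = 0.05, 0.01, 100
p = (np.sqrt(q**2 + 4 * q * r) - q) / 2
# p is the fixed point of the scalar recursion p <- (p + q) r / (p + q + r)
assert abs(p - (p + q) * r / (p + q + r)) < 1e-12
print(np.sqrt(m) * p)                 # effective dimension ||P||_F
\end{verbatim}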
Figure \ref{fig:KFMap} illustrates the balance condition and shows a plot of the function that is defined by the left-hand side of the above inequality, as well as three level sets, corresponding to $m=5,10,100$, respectively; for a given dimension~$m$, all values of~$q$ and~$r$ below the corresponding level set lead to an $O(1)$ effective dimension.
\begin{figure}[h!tbp]
\begin{center}
{\includegraphics[width=.7\textwidth]{KFMap.pdf}}
\caption{Covariance map for sequential data assimilation.}
\label{fig:KFMap}
\end{center}
\end{figure}
The balance condition in this model problem implies that the smaller the errors in the data ($r$), the larger can be the uncertainty in the model ($q$) and vice versa, and, for large $m$, only small values of $q$ and $r$ are allowed. Moreover, note that for very small $q$, the boundaries for successful data assimilation are (almost) vertical lines. The reason is that if the model is very good, neither accurate nor inaccurate data can improve it, i.e. data assimilation is not necessary. If the model is poor, only nearly perfect data can help. We will encounter this balance condition (in more complicated forms) again in the general case in the next section and also in the analysis of particle filters and variational data assimilation.
Finally, note that the Frobenius norms $||Q||_F=q\sqrt{m}$ and $||R||_F=r\sqrt{m}$ increase with the number of dimensions unless $q$ or $r$ or both decrease with $m$ as shown in figure \ref{fig:KFMap}. We will argue in section \ref{sec:EffDimRealLife} that in realistic cases, the Frobenius norms of $Q$ and $R$ remain bounded as $m$ increases. We also expect, but cannot prove in general, that a balance condition as in figure \ref{fig:KFMap} is valid in the general case (arbitrary $A,H,Q,R$), with $q$ and $r$ replaced by the Frobenius norms of $Q$ and $R$.
\vspace{3mm}
\subsubsection{The general case}
In the general case, the balance condition between the uncertainties in the model ($||Q||_F$) and data ($||R||_F$) is more complicated because the effective dimension is the Frobenius norm of the solution of a Riccati equation which in general does not admit a closed form solution.
However, if the covariance matrices $Q$ and $R$ have small Frobenius norms, then the effective dimension of the problem can be small and data assimilation can be successful. To see this, let $X$ and $P$ be, respectively, the solution of the DARE and the steady state covariance matrix of a given $(A,Q,H,R)$ data assimilation problem, and let $\tilde Q\leq Q$, i.e. let $Q - \tilde Q$ be symmetric positive semi-definite (SPSD). If $\tilde R\leq R$, then, by the comparison theorem (Theorem 13.3.1) in \cite{Lancaster}, $\tilde X \leq X$, where $\tilde X$ is the solution of the DARE associated with the $(A,\tilde Q,H,\tilde R)$ data assimilation problem. From the Kalman formulas we know that
\begin{equation*}
P = X-XH^T(HXH^T+R)^{-1}HX,
\end{equation*}
which implies that $P\leq X$. Moreover, for two SPSD matrices $C$ and $D$, $C\leq D$ implies $||C||_F \leq ||D||_F$. Thus, the smaller the Frobenius norms of $Q$ and $R$, the smaller is the upper bound $||X||_F$ on the effective dimension.
However, the requirement that these Frobenius norms be small is not sufficient to ensure that the effective dimension of the problem is small; in particular, it is evident that the properties of $A$ must play a role; for example, if the $L_2$ norm of $A$ exceeds unity, the model (\ref{model}) is unstable and successful data assimilation is unlikely.
While the model, or $A$, is implicitly accounted for in $X$, the solution of the DARE, one can construct sharper bounds on the effective dimension by accounting for the model~(\ref{model}) and data stream~(\ref{data}) more explicitly. To that end, we construct matrix bounds on $P$ from matrix bounds for the solution of the DARE \cite{Kwon}. Let $X_u$ and $X_l$ be upper and lower matrix bounds for the solution of the DARE, i.e. $X_l\leq X\leq X_u$; for example, we can choose the lower bound in \cite{Komaroff1994}
\begin{equation*}
Q\leq X_l = A(Q^{-1}+H^TR^{-1}H)^{-1}A^T+Q \leq X,
\end{equation*}
and the upper bound in \cite{Kwon}
\begin{equation*}
X \leq X_u = A(X_*^{-1}+H^TR^{-1}H)^{-1}A^T+Q,
\end{equation*}
where $X_* = A(\eta^{-1}+H^TR^{-1}H)^{-1}A^T+Q$, $\eta =f\left(-\lambda_1(A)-\lambda_n(H^TR^{-1}H)\lambda_1(Q)+1,\,2\lambda_n(H^TR^{-1}H),\,2\lambda_1(Q)\right)$, $f(a,b,c) = (\sqrt{a^2+bc}-a)/2$, and $\lambda_1(C)$ and $\lambda_n(C)$ are the largest and smallest eigenvalues of the matrix $C$, respectively. Then an upper matrix bound for the steady state covariance matrix is
\begin{equation*}
P \leq X_u-X_lH^T(HX_uH^T+R)^{-1}HX_l.
\end{equation*}
The Frobenius norm of this upper matrix bound is an upper bound for the effective dimension.
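The lower bound of \cite{Komaroff1994} can be verified numerically, as in the sketch below, which checks that $X-X_l$ is positive semi-definite via its smallest eigenvalue; the system matrices are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(3)
m, k = 5, 3
A = 0.8 * np.eye(m)
H = rng.normal(size=(k, m))
Q = 0.1 * np.eye(m)
R = 0.2 * np.eye(k)

X = solve_discrete_are(A.T, H.T, Q, R)   # DARE solution (dual form)
HRH = H.T @ np.linalg.inv(R) @ H
X_l = A @ np.linalg.inv(np.linalg.inv(Q) + HRH) @ A.T + Q
# X_l <= X iff the smallest eigenvalue of X - X_l is nonnegative
print(np.linalg.eigvalsh(X - X_l).min() >= -1e-10)
\end{verbatim}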
\vspace{3mm}
\subsection{The real-world interpretation of effective dimension}\label{sec:EffDimRealLife}
We have shown that there is little hope for making reliable conclusions unless the effective dimension of the data assimilation problem defined by equations~(\ref{model}) and~(\ref{data}) is small. We now give more detail about the physical interpretation of this result.
Suppose the variables $x$ one is estimating are point values of, for example, the velocity of a flow field (as they often are in
applications). The Frobenius norm of the covariance matrix $Q$ is proportional to the specific kinetic energy of the noise field that is perturbing an underlying flow. This energy should be a small fraction of the energy of the flow, or else there is not enough information in the model (\ref{model}) to examine the flow one is interested in. We can thus assume that the Frobenius norm of $Q$ is (much) less than $m$. By the same arguments, we can assume that the Frobenius norm of $R$ is small, or else the noise in the data equation overpowers the actual measurements. Since small Frobenius norms of $Q$ and $R$ often imply a small Frobenius norm of $P$, we are dealing with a data assimilation problem with a small effective dimension.
Point values of a flow field typically come from a discretization of a stochastic differential equation. As one refines this discretization, one can expect the correlation between the errors at neighboring grid-points to increase. These errors are represented by the covariance matrix $Q$ and from Khinchin's theorem (see e.g. \cite{ChorinHald}) we know that a random field with sufficiently correlated components has a finite energy density (and vice versa). This implies that the Frobenius norm of $Q$ does not grow without bound as we increase $m$.
Another and perhaps even more dramatic instance of this situation is one where the random process we are interested in is smooth so that the spectrum of its covariance matrix decays quickly \cite{Adler,Rasmussen}. For practical purposes one may then consider $m-d$ of the eigenvalues to be equal to zero (rather than just very small). This is an instance of ``partial noise'' \cite{Morzfeld2012}, i.e. the state
space splits into two disjoint subspaces, one of dimension $d$, which contains state variables, $u$, that are directly driven by Gaussian
noise, and one of dimension $m-d$, which contains the remaining variables, $v$, that are (linear) functions of the random variables~$u$. Thus, the steady state covariance matrix is of size $d\times d$ and the effective dimension is independent of the state dimension and moderate even if $m$ is large.
Note that the key to the small effective dimension in the above cases is the correlation among the errors; indeed, the data assimilation problems (or covariances $Q$ and $R$) derived by various practitioners and theorists show a strong correlation of the errors (see e.g. \cite{vanLeuween2003,Wheeler,Zhang,Rasmussen,Adler,MillerCane,MillerHackert,MillerSpitz,Morzfeld2012,Bennet1987}). The correlations are also key to the well-boundedness of infinite dimensional problems \cite{Stuart}, where the spectra of the covariances (which are compact operators in this case) decay; a well-correlated noise model was obtained from an infinite dimensional problem in \cite{Ghattas,Bennet1987}.
The geometrical interpretation of this situation is as follows: because of correlations in the noise, the probability mass is concentrated on a $d$-dimensional manifold, regardless of the dimension $m\geq d$ of the state space. In addition one must be careful that the noise in the
observations not be too strong. Otherwise the data can push the probability mass away from the $d$-dimensional manifold (i.e. the data
increase uncertainty, instead of decreasing it). This assumption is reasonable, because typically the data contain information and not just noise.
Next, suppose that the vector $x$ in (\ref{model}) and (\ref{data}) represents the components of an abstract model with the several components representing various indicators, for example of economic activity (so that the concept of energy is not well-defined). It is unreasonable to assume that each source of error affects only one component of $x$. As an example of what happens when each source of error affects many components, consider a model where Gaussian sources of error are distributed with spherical symmetry in the space of the $x$'s and have a magnitude independent of the dimension $m$. In an $m$-dimensional space, the components of a vector of unit length have magnitude of order $1/\sqrt{m}$, so that the variance of each component must decrease like $1/m$. Thus, the covariance matrices in~(\ref{model}) and~(\ref{data}) are proportional to $m^{-1}I_m$ and the effective dimension (for $A=H=I_m$) is $||P||_F=(\sqrt{5}-1)/(2\sqrt{m})$, which is small when $m$ is large. This is a plausible outcome, because the more data and indicators are considered, the less uncertainty there should be in the outcome (because the new indicators provide additional information).
\section{Review of particle filters}
\label{sec:Review}
In importance sampling one generates samples from a hard-to-sample pdf $p$ (the ``target'' pdf) by producing weighted samples from an easy-to-sample pdf, $\pi$, called the ``importance function'' (see e.g. \cite{KalosWhitlock,ChorinHald}). Specifically, if the random variable one is interested in is $x \sim p$, one generates samples $X_j \sim \pi$, $j=1,\dots,M$, (we
use capital letters for realizations of random variables) and weighs each by the weight
\begin{equation*}
W_j \propto \frac{p(X_j)}{\pi(X_j)}.
\end{equation*}
The weighted samples $\left\{ X_j,W_j \right\}$ (called particles in this context) form an empirical estimate of the target pdf $p$, i.e. for a smooth function $u$, the sum
\begin{equation*}
E_M(u) = \sum_{j=1}^{M}u(X_j)\hat{W}_j,
\end{equation*}
where $\hat{W}_j=W_j/\sum_{i=1}^{M}W_i$, converges almost surely to the expected value of $u$ with respect to the pdf $p$ as $M \rightarrow
\infty$, provided that the support of $\pi$ includes the support of~$p$.
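A minimal sketch of this self-normalized importance sampling, with a one-dimensional Gaussian target and an importance function chosen purely for illustration:
\begin{verbatim}
import numpy as np

# Target p = N(1,1), importance function pi = N(0,4);
# pi covers the support of p.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 2.0, size=100000)            # X_j ~ pi
log_p = -0.5 * (X - 1.0)**2                      # up to constants
log_pi = -0.5 * (X / 2.0)**2 - np.log(2.0)
W = np.exp(log_p - log_pi)
W_hat = W / W.sum()                              # normalized weights
print(np.sum(X**2 * W_hat))                      # exact value: 2
\end{verbatim}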
Particle filters apply these ideas to the recursive formulation of the conditional pdf:
\begin{equation*}
p(x^{0:n+1}|z^{1:n+1})=p(x^{0:n}|z^{1:n})\frac{p(x^{n+1}|x^{n})p(z^{n+1}|x^{n+1})}{p(z^{n+1}|z^{1:n})}.
\end{equation*}
This requires that the importance function factorize in the form:
\begin{equation}
\label{eq:factorization}
\pi(x^{0:n+1}|z^{1:n+1}) = \pi_0(x^0)\prod_{k=1}^{n+1} \pi_k(x^{k}|x^{0:k-1},z^{1:k}),
\end{equation}
where the $\pi_k$ are updates for the importance function.
The factorization of the importance function leads to the recursion
\begin{equation}
\label{eq:Weights}
W^{n+1}_j\propto \hat{W}^{n}_j\frac{p(X_j^{n+1}|X_j^{n})p(Z^{n+1}|X_j^{n+1})}{\pi_{n+1}(X_j^{n+1}|X_j^{0:n},Z^{1:n+1})},
\end{equation}
for the weights of each of the particles, which are then scaled so that their sum equals one. Resampling after every step makes it possible to set $\hat{W}_j^n=1/M$ when one
computes $W^{n+1}_j$ (see e.g. \cite{Doucet2001}).
Once one has set $\hat{W}_j^n=1/M$ but before sampling, each of the weights can be viewed as a function of the
random variable $x^{n+1}_j$ and is therefore a random variable.
The weights determine the efficiency of particle filters. Suppose that, before the normalization and resampling step, one weight is much larger than all others; then upon rescaling of the weights such that their sum equals one, one finds that the largest normalized weight is near 1 and all others are near 0. In this case the empirical estimate of the conditional pdf by the particles is very poor (it is a single, often unlikely point) and the particle filter is said to have collapsed. The collapse of particle filters can be studied via the variance of the logarithm of the weights, and it was argued rigorously in \cite{Bickel,Bickel2,BickelBootstrap,Snyder} that a large variance of the logarithm of the weights leads to the collapse of particle filters. The choice of importance function $\pi$ is critical for avoiding the collapse, and many different importance functions have been considered in the literature (see e.g. \cite{Brad,Weare2009,Weare2012,vanLeeuwen,Chorintupnas,chorin2010,Morzfeld2011}). Here we follow \cite{Bickel,Bickel2,BickelBootstrap,Snyder} and discuss two particle filters in detail.
\subsection{The SIR filter}
A natural choice for the importance function is to generate samples with the model (\ref{model}), i.e. to choose $\pi_{n+1}=p(x^{n+1}|x^n)$. When a resampling step is added, the resulting filter is often called a sequential importance sampling with resampling (SIR) filter \cite{GordonSIR} and its weights are
\begin{equation*}
W^{n+1}_j\propto p(Z^{n+1}|X_j^{n+1}).
\end{equation*}
It is known that the SIR filter collapses if the importance function $\pi_{n+1}=p(x^{n+1}|x^n)$, called the ``prior'', and the target, or ``posterior'', density, $p(z^{n+1}|x^{n+1}) p(x^{n+1}|x^n)$, are nearly mutually singular. This can happen even in one-dimensional problems; however, the situation becomes more dramatic as the dimension $m$ increases. A rigorous analysis of the asymptotic behavior of the weights of the SIR filter (as the number of particles and the dimension go to infinity) is given in \cite{Bickel,Bickel2,BickelBootstrap}, and it is shown that the number of particles required to avoid the collapse of the SIR filter grows exponentially with the variance of the observation log-likelihood (the logarithm of the weights).
\subsection{The optimal particle filter}
One can avoid the collapse of particle filters in low-dimensional problems by choosing the importance function wisely. If one chooses an importance function $\pi$ so that the weights in (\ref{eq:Weights}) are close to uniform, then all particles contribute equally to the empirical estimate they define. In \cite{Doucet,OptimalImportanceFunction,liuchen1995,Snyder} the importance function
$\pi_{n+1}(x^{n+1}|x^{0:n},z^{1:n+1})= p(x^{n+1}|x^{n},z^{n+1})$ is discussed and it is shown that this importance function is ``optimal'' in the sense that it minimizes the variance of the weights given the data \emph{and} $X_j^n$. For that reason, a filter that uses this importance function is called the ``optimal particle filter'' and the optimal weights are
\begin{equation}
\label{eq:OptimalWeight}
W_j^{n+1} \propto p(Z^{n+1}|X_j^{n}).
\end{equation}
For the class of models and data we consider, the optimal particle filter is identical to the implicit particle filter \cite{Morzfeld2011,chorin2010}. The asymptotic behavior of the weights of the optimal particle filter was studied in \cite{Snyder} and it was found that the optimal filter collapses if the variance of the logarithm of its weights is large.
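The contrast between the two importance functions can be made concrete with a one-step experiment for the model $A=H=I_m$, $Q=qI_m$, $R=rI_m$: the sketch below draws particle positions at time $n$ from the steady state and compares the spread of the log-weights of the SIR filter above with that of the optimal filter; all numerical values are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, q, r, J = 100, 0.1, 0.1, 10000
p = (np.sqrt(q**2 + 4 * q * r) - q) / 2          # steady state variance
xn = rng.normal(scale=np.sqrt(p), size=(J, m))   # particles at time n
z = rng.normal(scale=np.sqrt(q + r), size=m)     # datum at time n+1

# SIR: propagate with the model, weight by p(z | x^{n+1})
x_sir = xn + rng.normal(scale=np.sqrt(q), size=(J, m))
logw_sir = -np.sum((z - x_sir)**2, axis=1) / (2 * r)

# optimal filter: weight by p(z | x^n) = N(z; x^n, (q + r) I)
logw_opt = -np.sum((z - xn)**2, axis=1) / (2 * (q + r))

print(np.var(logw_sir), np.var(logw_opt))  # SIR spread is far larger
\end{verbatim}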
\section{The collapse and non-collapse of particle filters}
The conditions for the collapse have been reported in \cite{Bickel,Bickel2,BickelBootstrap} for SIR and in \cite{Snyder} for the optimal particle filter; here we connect these to our analysis of effective dimension.
\subsection{The case of the optimal particle filter}\label{sec:OptimalFilter}
It was shown in \cite{Snyder} that the optimal particle filter collapses if the Frobenius norm of the covariance matrix of $\left(HQH^T+R\right)^{-1/2}HAx^{n-1}$ is large (asymptotically infinite as $k\rightarrow \infty$). However, if this Frobenius norm is small, then the variance of the logarithm of the weights is also small, so that the optimal particle filter works just fine (i.e. it does not collapse). We now investigate the role that the effective dimension of section \ref{sec:EffectiveDimension} plays in the collapse of the optimal particle filter.
If the conditional pdf has reached steady state, then the covariance of $x^{n-1}$ is $P$ (the steady state solution of the Riccati equation), so that the Frobenius norm of the symmetric matrix
\begin{equation}
\label{eq:EVP}
\Sigma= HAP A^TH^T\left(HQH^T+R\right)^{-1},
\end{equation}
governs the collapse of the optimal particle filter. If the Frobenius norm of $\Sigma$ is well-bounded for large $m$ and $k$, then the optimal particle filter will work. The boundedness of $\Sigma$ induces a balance condition between the errors in the model and in the data; the situation is analogous to what we observed in section \ref{sec:EffectiveDimension}.
To understand this balance condition better, we consider again the simple example of section \ref{sec:EffectiveDimension}, i.e. we set $H=A=I_m$ and $Q=qI_m$, $R=rI_m$. We already computed $P$ in section~\ref{sec:EffectiveDimension} and find from~(\ref{eq:EVP}) that
\begin{equation*}
||\Sigma||_F = \sqrt{m}\frac{\sqrt{q^2+4qr}-q}{2(q+r)}.
\end{equation*}
The requirement that $||\Sigma||_F=O(1)$ thus induces the condition
\begin{equation*}
\frac{\sqrt{q^2+4qr}-q}{2(q+r)}\leq \frac{1}{\sqrt{m}}.
\end{equation*}
With $m$ fixed, the left-hand side depends only on the ratio of the covariances of the noise in the model and in the data, so that the level sets are rays. In the center panel of figure \ref{fig:CompareMap}, we superpose these rays, for which optimal particle filtering can be successful, with the $(q,r)$-region in which data assimilation is feasible (as computed in section \ref{sec:EffectiveDimension}). The left panel of the figure shows, for comparison, what is possible in principle.
\begin{figure}[h!tbp]
\begin{center}
{\includegraphics[width=1.0\textwidth]{CompareMaps.pdf}}
\caption{Covariance map for successful data assimilation (left panel), and two particle filtering algorithms (center and right panel). The broken ellipse in the right panel locates the area where the SIR filter works.}
\label{fig:CompareMap}
\end{center}
\end{figure}
We find that the optimal particle filter can successfully solve many of the data assimilation problems that are well defined (in the sense that one can expect reliable conclusions given model and data, see section \ref{sec:EffectiveDimension}). The exceptions are problems for which $q\approx r$, i.e. problems in which the noise in the model and the noise in the data are equally strong.
Another way to see this is to set $\epsilon =q/r$ so that the balance condition becomes
\begin{equation*}
\frac{\sqrt{\epsilon^2+4\epsilon}-\epsilon}{2(1+\epsilon)}\leq \frac{1}{\sqrt{m}},
\end{equation*}
which we solve for $m$ and then plot the maximum dimension $m$ as a function of the ratio of the noise in the model and the noise in the data; all values smaller than this maximum dimension are shown in figure \ref{fig:FilterMap} as the light blue area.
\begin{figure}[h!tbp]
\begin{center}
{\includegraphics[width=0.6\textwidth]{FilterMap.pdf}}
\caption{Maximum dimension for two particle filters.}
\label{fig:FilterMap}
\end{center}
\end{figure}
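The maximum dimension shown in figure \ref{fig:FilterMap} can be reproduced by solving the balance condition for $m$; in the sketch below, the $O(1)$ constant is set to one, as above.
\begin{verbatim}
import numpy as np

for eps in (0.01, 0.1, 1.0, 10.0, 100.0):   # eps = q / r
    lhs = (np.sqrt(eps**2 + 4 * eps) - eps) / (2 * (1 + eps))
    print(eps, int(1 / lhs**2))             # largest admissible m
\end{verbatim}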
We conclude that the optimal particle filter works for high-dimensional data assimilation problems if $\epsilon$ is either small or large. The case of large $\epsilon$ is the case typically encountered in practice. The reasons are as follows: if $\epsilon$ is small, then the model is very accurate. In this case, neither accurate nor inaccurate data can improve the model predictions (this case corresponds to the vertical line in figure \ref{fig:CompareMap}), i.e. data assimilation is unnecessary since one can simply trust the predictions of the model (\ref{model}). If $\epsilon$ is large, then the uncertainty in the data is much less than the uncertainty in the model, i.e. we can learn a lot from the data. This is the interesting case and the optimal particle filter (or the implicit particle filter) can be expected to work in such scenarios. However, problems occur when $\epsilon\approx 1$ and if the state dimension exceeds $m\approx 20$. We expect this case to occur infrequently, because typically the data are more accurate than the model.
It is, however, important to realize that the collapse of the optimal particle filter for $\epsilon \approx 1$ does \emph{not} imply that Monte Carlo sampling in general is not applicable in this case. Particle filtering induces variance into the weights because of its recursive problem formulation, and this variance can be reduced by particle smoothing. The reason is as follows: the variance of the weights of the optimal particle filter depends only on the variance of the particles' positions at time $n$ (see (\ref{eq:OptimalWeight})), i.e. each particle is updated to time $n+1$ such that no additional variance is introduced (this is why this filter is called optimal); however, the positions at time $n$ may be unlikely in view of the data at $n+1$ (due to accumulation of errors up until this point). In this case, one can go back and correct the past, i.e. use a particle smoother (see also section \ref{sec:VarAndSmoothing}). However, the number of steps one needs to go back in time for successful smoothing is problem dependent and, thus, we cannot provide a full analysis here (given that we work in a restrictive linear setting, it seems more realistic and efficient to do this analysis on a case-by-case basis). In particular, it was indicated in two independent papers \cite{Weare2009,Brad} that smoothing a few steps backwards can help with making Monte Carlo sampling applicable in situations where particle filters fail. In \cite{Weare2009}, the low-noise regime (which is an instance of the case where $\epsilon\approx 1$) is considered in connection with an application in oceanography, and it was found that particle smoothing is helpful; however, the approximations for optimal particle smoothers become difficult and computationally expensive as the problems become nonlinear. In~\cite{Brad}, particle smoothing was found to give better results than particle filtering for combined parameter and state estimation, again in connection with an application in oceanography.
In the general case (arbitrary $A,H,Q,R$), we can simplify the balance condition for successful particle filtering by using the upper bound for the Frobenius norm of $\Sigma$:
\begin{equation*}
||\Sigma||_F\leq ||A||_F^2 ||H||_F^2 ||P||_F ||\left(HQH^T+R\right)^{-1}||_F.
\end{equation*}
If we require that this upper bound is less than $\sqrt{m}$, then we find, using the upper bound
\begin{equation*}
\sqrt{m}=||I||_F \leq ||\left(HQH^T+R\right)||_F ||\left(HQH^T+R\right)^{-1}||_F,
\end{equation*}
that
\begin{equation*}
||A||_F^2 ||H||_F^2 ||P||_F \leq ||H||_F^2||Q||_F+||R||_F,
\end{equation*}
is a sufficient condition for the Frobenius norm of $\Sigma$ to be small (i.e. to grow more slowly than $O(\sqrt{m})$). As in section 2, we find that the balance condition in terms of $||R||_F$ and $||Q||_F$ is simple in simple cases, but delicate in general.
\subsection{The case of the SIR filter}
The collapse of the SIR filter has been studied in \cite{Bickel,Bickel2,BickelBootstrap}, and it was shown that, for a properly normalized model and data equation, this collapse is governed by the Frobenius norm of the covariance of $Hx^n$; undoing the scaling, and noting that $x^{n-1}$ has covariance $P$ (the steady state solution of the Riccati equation), we find that the Frobenius norm of
\begin{equation*}
\Sigma =H\left(Q+APA^T\right)H^TR^{-1}
\end{equation*}
governs the collapse of SIR filters. Thus, the boundedness of $||\Sigma||_F$ is the balance condition for successful data assimilation with an SIR particle filter. For the simple example considered earlier ($A=H=I_m$, $Q=qI_m$, $R=rI_m$), this condition becomes
\begin{equation*}
\frac{\sqrt{q^2+4qr}+q}{2r}\leq\frac{1}{\sqrt{m}}.
\end{equation*}
For $m=100$, the $(q,r)$-region for which data assimilation with an SIR filter can be successful is plotted in the right panel of figure \ref{fig:CompareMap}. We observe that this region is very small compared to the region that is accessible with an optimal particle filter.
We can also set $\epsilon = q/r$ and obtain
\begin{equation*}
\frac{\sqrt{\epsilon^2+4\epsilon}+\epsilon}{2}\leq\frac{1}{\sqrt{m}},
\end{equation*}
which we solve for $m$ so that we can plot the maximum dimension for which SIR particle filtering can be successful as a function of the covariance ratio $\epsilon$ (see figure \ref{fig:FilterMap}). Again, we observe that the SIR particle filter can only be useful in a limited class of problems. In particular, we find that the SIR particle filter works in high-dimensional problems only if the model is very accurate (compared to the data). However, we argued before that this case is somewhat unrealistic, since we expect that the errors in the model are typically larger than the errors in the data (or else the data are not very useful, or particle filtering is unnecessary because the model is very good). In these realistic scenarios, the SIR particle filter collapses, and we conclude that, as the dimension $m$ increases, it becomes more and more important to use the optimal importance function or a good approximation of it (see e.g. \cite{Morzfeld2011,Brad,Weare2009,Weare2012} for approximations of the optimal filter).
In the general case, we can use an upper bound, e.g.
\begin{equation*}
||\Sigma||_F \leq ||H||_F^2||R^{-1}||_F\left(||Q||_F+||A||_F^2||P||_F\right),
\end{equation*}
and if we require that this bound is less than $\sqrt{m}$, we obtain the simplified balance condition
\begin{equation*}
||H||_F^2\left(||Q||_F+||A||_F^2||P||_F\right)\leq ||R||_F.
\end{equation*}
The above condition implies that the Frobenius norm of the covariance matrix of the model noise, $Q$, must be much smaller than the Frobenius norm of the covariance matrix of the errors in the data, which is unrealistic.
\subsection{Discussion}
We wish to point out differences and similarities between our work and the asymptotic studies in \cite{Bickel,Bickel2,BickelBootstrap,Snyder}. Clearly, the results of \cite{Bickel,Bickel2,BickelBootstrap,Snyder} are used in our analysis of the optimal particle filter (section \ref{sec:OptimalFilter}) and the SIR filter (this section). Moreover, our analysis confirms Snyder's findings in \cite{Snyder}, where it was shown that the optimal particle filter ``dramatically reduces the required sample size'' (by lowering the exponent in the relation between the number of particles and the state dimension). In \cite{Bickel,Bickel2,BickelBootstrap,Snyder}, it was shown that the number of particles required grows exponentially with the variance of the logarithm of the weights; the variance of the logarithm of the weights is governed by the Frobenius norms of covariance matrices (which are different for SIR and the optimal particle filter). Our main contribution (in connection with particle filters) is to study the connection of these Frobenius norms with the effective dimension of section \ref{sec:EffectiveDimension}: if the effective dimension is well-bounded, then these Frobenius norms do not necessarily grow with the state dimension $m$. Thus, we can find conditions under which the SIR and optimal particle filters can work. We also explain the physical interpretation of our results and conclude that the optimal particle filter can work for many realistic (and high-dimensional) problems, because realistic conditions imply a moderate effective dimension.
\section{Particle smoothing and variational data assimilation}\label{sec:VarAndSmoothing}
We now consider the role of the effective dimension in particle smoothing and variational data assimilation. The idea here is to replace the step-by-step construction of the conditional pdf in a particle filter (or Kalman filter) by direct sampling of the full pdf
$p(x^{0:n}|z^{1:n})$, i.e. all available data are assimilated in one sweep. Particle smoothers apply importance sampling to obtain weighted samples from this pdf, and in variational data assimilation one estimates the state of the system by the mode of this pdf.
It is clear that either method can only be successful if the Frobenius norm of the covariance matrix of the variables conditioned on the data is small, or else the samples of numerical or physical experiments collect on a thin shell far from the most likely state (to obtain this result, one has to repeat the steps in section 2). We now determine the conditions under which this Frobenius norm is small. As is customary in data assimilation, we distinguish between the ``strong constraint'' and ``weak constraint'' problem.
\subsection{The strong constraint problem}
In the strong constraint problem one considers a ``perfect model'', i.e. the model errors are neglected and we set $Q=0$ (see e.g. \cite{TalagrandCourtier}). Since the initial conditions determine the state trajectory, the goal is to obtain initial conditions that are compatible with the data, i.e. we are interested in the pdf
\begin{align*}
p(x^0|z^{1:n})\propto& \exp\left(-\frac{1}{2}\left(x^0-\mu_0\right)^T\Sigma_0^{-1}\left(x^0-\mu_0\right)\right)\\
&\times \exp\left(-\frac{1}{2}\displaystyle\sum_{j=1}^n\left(z^j-HA^jx^0\right)^TR^{-1}\left(z^j-HA^jx^0\right)\right).
\end{align*}
Straightforward calculation shows that this pdf is Gaussian (under our assumptions) and its covariance matrix is
\begin{equation*}
\Sigma^{-1} =\Sigma_0^{-1}+\displaystyle\sum_{j=1}^n(A^j)^TH^TR^{-1}HA^j.
\end{equation*}
As explained above, data assimilation for the Gaussian model makes sense only if the Frobenius norm of $\Sigma$ is small (or at least, if it does not grow with the state dimension). In this case, the samples collect on a small and low-dimensional ball, close to the most likely state. The boundedness of the Frobenius norm of $\Sigma$ induces a balance condition between the prior errors~($\Sigma_0$) and the errors in the data~($R$). The situation is analogous to the balance conditions we encountered before in sequential data assimilation.
We illustrate the balance condition for the strong constraint problem by considering a version of the simple example we used earlier, i.e. we set $A=H=I_m$, $Q=0$, $R=rI_m$, and, in addition, $n=1$, $\Sigma_0=\sigma_0I_m$. In this case, we can compute $\Sigma$ and its Frobenius norm:
\begin{equation*}
||\Sigma||_F =\sqrt{m}\frac{\sigma_0r}{\sigma_0+ r}.
\end{equation*}
Figure \ref{fig:VarMap} shows the values of $r$ and $\sigma_0$ which lead to an $O(1)$ Frobenius norm of $\Sigma$.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[width=0.7\textwidth]{VarMap.pdf}}
\caption{Covariance map for strong 4D-Var and optimal particle smoothing.}
\label{fig:VarMap}
\end{center}
\end{figure}
Three level sets indicate the state dimensions $m=10,100,1000$; for a given state dimension, the values of $r$ and $\sigma_0$ below the corresponding curve lead to a small $||\Sigma||_F$. We observe that, for a fixed $m$, a larger error in the prior knowledge ($\sigma_0$) can be tolerated if the error in the data is very small and vice versa. Similar observations were made in \cite{Haben2011a,Haben2011b} in connection with the condition number in 3D-Var.
Variational data assimilation (strong 4D-Var) represents the conditional pdf by its mode, i.e. by a single point in the state space. The smaller the ball on which the samples collect (i.e. the smaller the Frobenius norm of $\Sigma$), the more applicable strong 4D-Var is. Particle smoothers, on the other hand, construct an empirical estimate of the pdf via sampling. Since the target pdf is Gaussian, we can construct an optimal particle smoother (minimum variance in the weights) by sampling this Gaussian, so that the weights are constant (zero variance). It is clear that, under realistic conditions (small $||\Sigma||_F$), the optimal particle smoother can be expected to perform well, regardless of the state dimension $m$, because it can efficiently represent the pdf one is interested in.
The situation is different for other particle smoothers. Consider, for example, the SIR-like particle smoother that uses $p(x^0)$ as its importance function. This smoother produces weights whose negative logarithm is given by
\begin{equation*}
\phi =\frac{1}{2}\displaystyle\sum_{j=1}^n\left(Z^j-HA^jx^0\right)^TR^{-1}\left(Z^j-HA^jx^0\right).
\end{equation*}
For $n=1$, the variance of these weights depends on the Frobenius norm of the matrix $HA\Sigma_0A^TH^TR^{-1}$, which has the upper bound
\begin{equation*}
||HA\Sigma_0A^TH^TR^{-1}||\leq ||H||_F^2||A||_F^2 ||\Sigma_0||_F ||R^{-1}||.
\end{equation*}
If we require that this upper bound is less than $\sqrt{m}$ then we obtain (using $\sqrt{m}\leq||A||_F||A^{-1}||_F$) the condition
\begin{equation*}
||H||_F^2||A||_F^2 ||\Sigma_0||_F \leq ||R||,
\end{equation*}
which implies that the errors before we collect the data must be smaller than the errors in the data, which is unrealistic. In particular, for the simple example considered above we find that $\sigma_0\leq r/\sqrt{m}$. We conclude that, as in particle filtering, particle smoothing is possible under realistic conditions only if the importance function is chosen carefully.
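This collapse can be checked with a small Monte Carlo experiment. The sketch below (ours; the parameter values are illustrative and deliberately violate the condition above) draws particles from the prior, weights them by the likelihood, and reports the effective sample size $1/\sum_i w_i^2$ as the state dimension grows:
\begin{verbatim}
# Sketch: weight collapse of the SIR-like smoother for A = H = I_m, n = 1.
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(m, sigma0=1.0, r=0.01, n_particles=1000):
    x_true = rng.normal(0.0, np.sqrt(sigma0), size=m)
    z = x_true + rng.normal(0.0, np.sqrt(r), size=m)         # data
    particles = rng.normal(0.0, np.sqrt(sigma0), size=(n_particles, m))
    log_w = -0.5 * np.sum((z - particles) ** 2, axis=1) / r  # -phi
    log_w -= log_w.max()                                     # stabilize exp
    w = np.exp(log_w)
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)    # between 1 (collapse) and n_particles

for m in (1, 5, 10, 50, 100):
    print(f"m = {m:4d}: ESS ~ {effective_sample_size(m):8.1f} of 1000")
\end{verbatim}
Here $\sigma_0 > r/\sqrt{m}$, so the effective sample size drops rapidly with $m$, as predicted.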
Note that the results we obtained here are different from those we would obtain if we simply put $Q=0$ in the Kalman filter formulas of section~2. It is easy to show that, for $Q=0$, the steady state covariance matrix converges to the zero matrix. What this means is that, with enough data, one can wait for steady state and then accurately estimate the state at large $n$. What we have done in this section is to consider the consequences of having access to only a finite data set, i.e. of making predictions before steady state is reached.
Finally, note that, in contrast to the sequential problem, the minimum variance of the weights in the smoothing problem is zero, whereas particle filters always produce non-zero variance weights. This variance is induced by the factorization of the importance function $\pi$, and since this factorization is not required in particle smoothing, this source of variance can disappear (or be reduced) by a clever choice of importance function.
\subsection{The weak constraint problem}
In the weak constraint problem (see e.g. \cite{Bennet1993}), one is interested in estimating the full state trajectory given the data, i.e. in the pdf
\begin{align*}
p(x^{0:n}|z^{1:n})\propto& \exp\left(-\frac{1}{2}\left(x^0-\mu_0\right)^T\Sigma_0^{-1}\left(x^0-\mu_0\right)\right)\\
&\times \exp\left(-\frac{1}{2}\displaystyle\sum_{j=1}^n\left(z^j-Hx^j\right)^TR^{-1}\left(z^j-Hx^j\right)\right).
\end{align*}
An easy calculation reveals that this pdf is Gaussian and its covariance matrix is
\begin{equation*}
\Sigma^{-1}=\scriptsize\left( \begin{array}{cccc}
\Sigma_0^{-1}+A^TQ^{-1}A & -A^TQ^{-1} &\cdots & 0\\
-Q^{-1}A &Q^{-1}+A^TQ^{-1}A+H^TR^{-1}H & -A^TQ^{-1} & \\
0 & \ddots & \ddots & \ddots \\
\vdots & & &-A^TQ^{-1}\\
0 & \cdots & -Q^{-1}A &Q^{-1}+H^TR^{-1}H \\
\end{array} \right).
\end{equation*}
By the same arguments as before, data assimilation can only be successful if the Frobenius norm of $\Sigma$ is moderate. This implies (again) a delicate balance condition between the errors in the prior knowledge ($||\Sigma_0||_F$), the errors in the model (\ref{model}) ($||Q||_F$) and the errors in the data (\ref{data}) ($||R||_F$).
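For concreteness, the block-tridiagonal precision matrix above can be assembled and inverted numerically in a few lines. The following sketch (our own, using the notation of this section, with small illustrative parameter values) does so and reports $||\Sigma||_F$:
\begin{verbatim}
# Sketch: assemble Sigma^{-1} for the weak constraint problem and
# inspect the Frobenius norm of Sigma = (Sigma^{-1})^{-1}.
import numpy as np

def weak_constraint_precision(A, H, Q, R, Sigma0, n):
    m = A.shape[0]
    Qi, Ri, S0i = np.linalg.inv(Q), np.linalg.inv(R), np.linalg.inv(Sigma0)
    P = np.zeros(((n + 1) * m, (n + 1) * m))
    for j in range(n + 1):
        blk = slice(j * m, (j + 1) * m)
        P[blk, blk] = S0i if j == 0 else Qi + H.T @ Ri @ H
        if j < n:                        # coupling of x^j to x^{j+1}
            P[blk, blk] += A.T @ Qi @ A
            nxt = slice((j + 1) * m, (j + 2) * m)
            P[blk, nxt] = -A.T @ Qi
            P[nxt, blk] = -Qi @ A
    return P

m, n = 4, 3
A, H = 0.9 * np.eye(m), np.eye(m)
P = weak_constraint_precision(A, H, Q=0.1 * np.eye(m),
                              R=0.05 * np.eye(m), Sigma0=np.eye(m), n=n)
print("||Sigma||_F =", np.linalg.norm(np.linalg.inv(P), "fro"))
\end{verbatim}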
As in the strong constraint problem, variational data assimilation (weak 4D-Var) represents this pdf by its mode (a single point) and this approximation is the more applicable, the smaller the Frobenius norm of $\Sigma$ is. An optimal particle smoother can be constructed for this problem by sampling directly (zero variance weights) the Gaussian conditional pdf. For the same reasons as in the previous section, we can expect an optimal particle smoother to perform well under realistic conditions, but also can expect difficulties if the choice of importance function is poor.
\section{Limitations of the analysis}\label{sec:limitation}
We wish to point out limitations of the analysis above. To find the conditions for successful data assimilation we study the conditional pdf, and we rely on the Kalman formalism to compute it. Since the Kalman formalism is only applicable to linear Gaussian problems, our results are at best indicative of the general nonlinear/non-Gaussian case. However, we believe that the general idea that the probability mass must concentrate on a low-dimensional manifold holds in the nonlinear case as well. Since Khinchin's theorem is independent of our linearity assumption, and since we expect that correlations amongst the errors also occur in nonlinear models, one can speculate that the probability mass \emph{does} collect on a low-dimensional manifold (under realistic assumptions on the noise). However, finding (or describing) this manifold in general becomes exceedingly difficult and is perhaps best done on a case-by-case basis, in which special features, or structures, in the model at hand can be exploited.
We have further assumed that all model parameters, including the covariances of the errors in the model and data equations, are known. If these must be estimated simultaneously (combined parameter and state estimation), then the situation becomes far more difficult, even in the case of a linear model equation (\ref{model}) and data stream (\ref{data}). It seems reasonable that estimating parameters using data at several consecutive time points (as is done implicitly in some versions of variational data assimilation or particle smoothing) would help with the parameter estimation problem and perhaps even with model specification.
Concerning particle filters, we have examined in detail only two choices of importance function, the one
in SIR, where the samples are chosen independently of the data, and, at
the other extreme, one where the choice of samples depends strongly on the data. There is a
large literature on importance functions, see \cite{Brad,Doucet, Weare2009,Weare2012,vanLeeuwen,Chorintupnas,Morzfeld2011,chorin2010}; it is quite
possible that other choices can outperform the optimal/implicit particle filter even
in the present linear synchronous case once computational costs are
taken into account. In nonlinear problems the optimal particle filter is hard to
implement and the implicit particle filter is suboptimal, so further analysis
may be needed to see what is optimal in each particular case (see also \cite{Weare2009, Weare2012} for approximations of the optimal filter).
More broadly, the analysis of particle filters in the present paper is not robust as assumptions change. For example, if the model noise is multiplicative (i.e. the covariance matrices are state dependent), then our analysis does not hold, not even for the linear case. Moreover, the optimal particle filter becomes very difficult to implement, whereas the SIR filter remains easy to use. Similarly, if model parameters (the elements of $A$ or the covariances $Q$ and $R$) are not known, simultaneous state and parameter estimation using an optimal particle filter becomes difficult, but SIR, again, remains easy to use. While the filters may not collapse in these cases, they may give a poor prediction. The existence of such important departures is confirmed by the fact that the ensemble Kalman filter and square root filter differ substantially in their performance. However, our analysis indicates that, if (\ref{model}) and (\ref{data}) hold, the ensemble Kalman filter, the Kalman filter and the optimal particle filter are equivalent in the non-collapse region of the optimal filter.
Similarly, variational data assimilation or particle smoothing can be successful (and is indeed equivalent to Kalman filtering) if (\ref{model}) and (\ref{data}) hold. We expect that variational data assimilation and particle smoothing can be successful in the nonlinear case, provided that the probability mass concentrates on a low-dimensional manifold. In particular, particle smoothing has the potential of extending the applicability of Monte Carlo sampling to data assimilation, since the variance of weights due to the sequential problem formulation in particle filters is reduced (the data at time $2$ may label what one thought was likely at time $1$ as unlikely). This statement is perhaps corroborated by the success of variational data assimilation in numerical weather prediction.
Finally, it should be pointed out that we assumed throughout the paper that the model and data equations are ``good'', i.e. that the model and data equations are capable of describing the \emph{physical} situation one is interested in. It seems difficult in theory and practice to study the case where the model and data equations are incompatible with the (real) data one has collected. For example, it is unclear to us what happens if the covariances of the errors in the model and data equations are systematically under- or overestimated, i.e. if the various data assimilation algorithms work with ``wrong'' covariances.
\section{Conclusions}
\label{sec:Conclusions}
We have investigated the conditions under which data assimilation is feasible, regardless of the algorithm used to do the assimilation. We quantified these conditions by defining an effective dimension of a Gaussian data assimilation problem and have shown that the boundedness of the effective dimension induces a balance condition for the errors in the model and data. This condition must be satisfied or else one cannot reach reliable conclusions about the process one is modeling, even when the (linear) model is completely correct. The balance condition is often satisfied for realistic models, i.e. the effective dimension is moderate, even if the state dimension is large.
The analysis was carried out in the linear synchronous case, where it can be done in some generality; we believe that this analysis captures the main features of the general case, but we have also discussed the limitations of the analysis.
Building on the results in \cite{Bickel,Bickel2,BickelBootstrap,Snyder}, we studied the effects of the effective dimension on particle filters in two instances, one in which the importance function is based on the model alone, and one in which it is based on both the model and the data. We have three main conclusions:
\begin{enumerate}
\item The stability (i.e., non-collapse of weights) in particle filtering depends on the effective dimension of the
problem. Particle filters can work well if the effective dimension is small even if the true dimension is large (which we expect to happen often in practice).
\item A suitable choice of importance function is essential, or else particle filtering fails even when data assimilation is (in principle) possible with a sequential algorithm.
\item There is a parameter range in which the model noise and the observation noise are roughly comparable, and in which the optimal particle filter collapses even under ideal circumstances.
\end{enumerate}
We have then studied the role of the effective dimension in variational data assimilation and particle smoothing, for both the weak and strong constraint problem. It was found that these methods too require a moderate effective dimension or else no accurate predictions can be expected. Moreover, variational data assimilation or particle smoothing may be applicable in the parameter range where particle filtering fails, because the use of more than one consecutive data set helps reduce the variance which is responsible for the collapse of the filters.
These conclusions are predicated on the linearity of the model and data equations, and on the assumption that the generative and data models are close enough to reality.
\section*{Acknowledgements}
We thank Prof. P. Bickel of UC Berkeley for many interesting discussions, for making our thoughts more rigorous (where possible) and for helping us recognize the limitations of our analysis. We thank Prof. R. Miller of Oregon State University for very helpful discussions and help with the literature. We thank Prof. J. Weare for an interesting discussion. This work was supported in part by the Director, Office of Science, Computational and Technology Research, U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Science Foundation under grant DMS-1217065.
\bibliographystyle{plain}
\section{Introduction}
Named Entity Recognition (NER) is an important problem in NLP, which refers to identifying named entities (\eg, person names, organizations, or locations) from a
given
chunk of text. The most widely employed strategy for the NER task is to train a sequential labeling model on a labeled dataset; this model learns to assign a named entity label to each token~\citep{chiu2016named, ma2016end, devlin2018bert}. This process of \textit{training} can be viewed as \textit{memorization}, in which the model iterates over the whole training set to memorize, and generalize from, the most confident named entity assigned to a given word. This strategy of \textit{memorization} has difficulty handling long-tail cases, and requires a large training set as sentence semantics grow diverse and complicated~\citep{hammerton2003named,collobert2011natural,lample2016neural,chiu2016named,devlin2018bert,liu2019gcdt,shao2021self}.
Motivated by recent progress in retrieval augmented methods \cite{khandelwal2019generalization,khandelwal2020nearest}, which have been
successfully
employed to handle similar issues in language generation, \ie
Language Modeling (LM) and Neural Machine Translation (NMT),
we propose the $k$NN-NER framework for the NER task.
$k$NN-NER first retrieves the $k$ nearest neighbors from the cached training set.
Then, it computes the distribution over labels by interpolating the label distribution output by a vanilla NER model with a distribution over the labels of similar training examples, retrieved using token-level $k$NN search.
In this way, we are able to resolve the long-tail issue mentioned above: by accessing the cached training examples through $k$NN search during inference, similar cases (and their labels) shed light on the test examples, which makes memorizing the entire dataset unnecessary.
We conducted extensive experiments to evaluate the effectiveness of the proposed $k$NN-NER framework.
We show that $k$NN-NER consistently outperforms its vanilla counterpart, which relies only on the distribution output by a vanilla tagging model and does not consult similar examples in the training set.
By applying $k$NN to the vanilla NER model with BERT~\citep{devlin2018bert} as the backbone, we are able to achieve a new state-of-the-art F1-score of 72.03 (+1.25) on the Chinese Weibo NER dataset and results comparable to SOTA performances on a variety of widely applied NER benchmarks, including CoNLL03, OntoNotes5.0, Chinese MSRA, and Chinese OntoNotes4.0. Additionally, our experiments show that $k$NN-NER can achieve results comparable to the vanilla NER model with 40\% less training data.
\begin{figure*}[htb]
\includegraphics[scale=0.315]{./method_knn_ner.png}
\caption{An example of the $k$NN-NER process. The datastore contains a set of representation-label pairs, extracted from the hidden states of the vanilla NER model. Given an inference sentence \textit{Obama lives in Washington}, suppose that at the current test step $t$ we need to assign a named entity to the word \textit{Washington}. The word representation of \textit{Washington} is used to query the $k$ nearest neighbors from the datastore according to the similarity distance, and through the softmax function the similarity distances are converted to a $k$NN entity distribution. Interpolating the $k$NN distribution with the vanilla NER model distribution, we get the final distribution over the assigned named entities.}
\label{fig:process}
\end{figure*}
\section{Related Work}
\paragraph{Retrieval Augmented Model}
Retrieval augmented models additionally use the input to retrieve a set of relevant information to improve model performance, under the merit that \textit{an open-book exam is easier than a closed-book exam}.
Recent success on various NLP tasks has shown the effectiveness of retrieval augmented models in improving the quality of neural NLP models, such as language modeling~\citep{khandelwal2019generalization,meng2021gnn}, question answering~\citep{guu2020realm, lewis2020pre, lewis2020retrieval, xiong2020approximate},
text classification ~\cite{lin2021bertgcn},
dialog generation~\citep{fan2020augmenting, thulke2021efficient, weston2018retrieve} and neural machine translation~\citep{khandelwal2019generalization, meng2021fast,wang2021faster}.
\paragraph{Named Entity Recognition} Research on NER has a long history. \citet{hammerton2003named} first attempted to solve this problem using unidirectional LSTMs. \citet{collobert2011natural} presented a CNN-CRF structure and \citet{lample2016neural} combined bidirectional LSTMs with CRFs. \citet{ma2016end} and \citet{chiu2016named} further added character features via a character CNN. Many works then focused on improving the decoding structure: leveraging context information \citep{liu2018empower, liu2019gcdt, lin2021asrnn, cui2019hierarchically}; interpolating latent variables \citep{lin2020enhanced, shao2021self}; transferring to CRF models \citep{ye2018hybrid, panchendrarajan2018bidirectional}; combining position information \citep{dai2019joint}. Other work, such as framing the NER task as a machine reading comprehension (MRC) task, has also achieved strong performance \citep{li2019unified, li2019dice, gan2021dependency}.
\section{Proposed Method: $k$NN-NER}
\subsection{Background: Vanilla NER}
\paragraph{Sequence labeling for NER}
Given an input sentence $\xv=\{x_{1},...,x_{n}\}$ with length $n$, $\forall~ 1\leq i\leq n$, $x_{i}$ denotes the $i$-th word token within this sentence. We formalize the NER task as a sequence labeling task which assigns a label $y_{i}$ to each given word $x_{i}$. A training set with $N$ samples is then denoted by $\{\mathcal{X},\mathcal{Y}\}= \{(\xv^1,\yv^1), \cdots, (\xv^N, \yv^N)\}$, where $(\xv,\yv)$ is the text sequence and corresponding label sequence.
For the vanilla NER model, we decompose the above sequence labeling task into two steps: (\emph{i}) using a text encoder to represent word tokens as high-dimensional vectors, and (\emph{ii}) classifying each high-dimensional vector into a named entity category. For step (\emph{i}), we use masked language models, \eg, BERT~\citep{devlin2018bert} and RoBERTa~\citep{liu2019roberta}, as the feature extractor. For a given word $x_i$, the output $\vh_i$ from the last layer of the feature extractor is used as the contextualized word embedding vector, where $\vh_{i} \in \mathbb{R}^{m}$ with $m$ as the embedding dimension.
Then, for step (\emph{ii}), we pass the word representation $\hv_i$ through a multi-layer perceptron (MLP) and obtain the distribution over the named entity vocabulary via a softmax operation:
\begin{equation}
\begin{aligned}
p_{\text{NER}}(y_i|\xv, x_i) = \text{softmax}\left(\text{MLP}(\hv_{i})\right).
\end{aligned}
\end{equation}
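A minimal sketch of this two-step model is given below (ours; the encoder interface follows the style of common pretrained-transformer libraries and is an assumption, not a prescribed implementation):
\begin{verbatim}
# Sketch: vanilla NER = encoder (step i) + MLP and softmax (step ii).
import torch
import torch.nn as nn

class VanillaNER(nn.Module):
    def __init__(self, encoder, hidden_dim, num_labels):
        super().__init__()
        self.encoder = encoder              # e.g. a BERT-style model
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_labels))

    def forward(self, input_ids, attention_mask):
        # h: (batch, n, m) contextualized embeddings h_i
        h = self.encoder(input_ids, attention_mask=attention_mask)[0]
        # per-token distribution p_NER(y_i | x, x_i)
        return torch.softmax(self.mlp(h), dim=-1)
\end{verbatim}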
\subsection{$k$ Nearest Neighbor NER}
The key idea of the $k$NN-NER model is that it augments the classification process at inference time with a $k$ nearest neighbor retrieval mechanism.
As shown in Figure \ref{fig:process}, the $k$NN-NER process can be split into two parts: (\emph{i}) following the vanilla NER steps, \ie, extracting the word representation $\vh$ and then assigning a probability distribution $p_{\text{NER}}$ to each word in a given input sentence; and (\emph{ii}) finding the most similar contexts in the datastore and adjusting the final entity distribution $p_{\text{final}}$ with a $k$NN-augmented entity distribution $p_{\text{kNN}}$.
In the following parts, we focus on the two major components of the $k$NN-NER framework: datastore construction and $k$NN entity probability interpolation.
\paragraph{Building datastore} The datastore $\mathcal{D}$ consists of a set of \textit{key-value} pairs.
Each \textit{key} is the contextualized word embedding of a word from a given sentence, and the corresponding \textit{value} is the named entity of that word in that sentence.
Then the datastore $\mathcal{D}$ is formulated as:
\begin{equation}
\begin{aligned}
\mathcal{D} \myeq{} \{\mathcal{K}, \mathcal{V}\} = &\{(\hv_i, y_{i}) | ~\forall x_{i} \in \xv, \forall y_{i} \in \yv,
\\ &(\xv, \yv) \in \{\mathcal{X}, \mathcal{Y}\}\},
\end{aligned}
\end{equation}
where $\hv_i$ is the contextualized representation of word $x_i$, $\mathcal{K}$ represents the set of \textit{keys} and $\mathcal{V}$ represents the corresponding \textit{value} set.
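Constructing $\mathcal{D}$ amounts to a single forward pass over the training set. The following sketch (ours; \texttt{model.encode} is an assumed helper that returns the per-token representations $\hv_i$) caches the \textit{key-value} pairs:
\begin{verbatim}
# Sketch: build the datastore D = {(h_i, y_i)} over the training set.
import numpy as np

def build_datastore(model, dataset):
    keys, values = [], []
    for sentence, labels in dataset:    # one (x, y) pair per sentence
        h = model.encode(sentence)      # (n, m) per-token embeddings h_i
        keys.append(h)
        values.extend(labels)           # integer label ids y_i
    return np.concatenate(keys, axis=0), np.asarray(values)
\end{verbatim}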
\paragraph{$k$NN-augmented Entity Probability} Suppose that we have constructed the datastore $\mathcal{D}$. At inference time, for each word $x_i$ in a given input sentence $\xv$, our $k$NN-NER model first generates the contextualized word embedding $\hv_{i}$ and the distribution over the entity labels $p_{\text{NER}}(y_{i}|\xv, x_{i})$.
Then, for each word $x_{i}$, the corresponding $\hv_{i}$ is used to query the $k$-nearest-neighbor set $\mathcal{N}$ from the datastore $\mathcal{D}$, with the $L^2$ Euclidean distance $d(\hv_i, \cdot)$ as the similarity measure.
The retrieved named entity set is then converted into a distribution over the entire named entity vocabulary based on an RBF kernel output \citep{vert2004primer} of the distance to the original word embedding $\vh_i$. The probability of predicting an entity $e_j$ is proportional to the sum of kernel outputs over all \textit{values} in $\mathcal{N}$ that equal $e_j$:
\begin{equation}
\begin{aligned}
p_{\text{kNN}}(y_{i}=&e_j|\xv, x_{i}) \varpropto\\ & \sum_{(\vk,v)\in \mathcal{N}} \mathbbm{1}_{v=e_j} \exp(\frac{-d(\vh_{i}, \vk)}{T})
\end{aligned}
\end{equation}
where $e_j$ represents the $j$-th entity in the entity vocabulary and $T$ is a temperature parameter that flattens the distribution. Note that labels that do not appear in the retrieved set are always assigned zero probability.
Finally, we augment the pure NER distribution $p_{\text{NER}}(y_{i}|\xv, x_{i})$ with $p_{\text{kNN}}(y_{i}|\xv, x_{i})$ as:
\begin{equation}
\begin{aligned}
p_{\text{final}}(y_{i}|\xv,x_{i})=&\lambda p_{\text{NER}}(y_{i}|\xv, x_{i})\ +\\ &(1-\lambda) p_{\text{kNN}}(y_{i}|\xv, x_{i})
\end{aligned}
\end{equation}
where $\lambda$ balances the $k$NN distribution and the pure NER distribution.
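The inference-time computation for a single token can be summarized in a short sketch (ours; a brute-force $L^2$ search is used for clarity, whereas a practical implementation would use an approximate nearest neighbor index such as FAISS):
\begin{verbatim}
# Sketch: kNN-augmented label distribution for one token embedding h_i.
import numpy as np

def knn_ner_probs(h_i, p_ner, keys, values, num_labels,
                  k=32, T=1.0, lam=0.5):
    d = np.linalg.norm(keys - h_i, axis=1)   # L2 distance to every key
    nn_idx = np.argsort(d)[:k]               # k nearest neighbors
    kernel = np.exp(-d[nn_idx] / T)          # RBF-kernel scores
    p_knn = np.zeros(num_labels)
    np.add.at(p_knn, values[nn_idx], kernel) # sum scores per label
    p_knn /= p_knn.sum()
    return lam * p_ner + (1.0 - lam) * p_knn # interpolated p_final
\end{verbatim}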
\section{Experiments}
\subsection{Datasets}
We conduct experiments on commonly used English and Chinese datasets. For English, we use the widely used CoNLL2003 and OntoNotes 5.0 benchmarks. For Chinese, we use OntoNotes 4.0, MSRA and Weibo NER. We adopt the standard evaluation metrics: span-level precision, recall and F1-score. Dataset details are described in Appendix \ref{sec:appendix_datasets}.
\subsection{Experiment Results}
\input{english_result}
\input{chinese_result}
\paragraph{The vanilla models} For the vanilla NER model, we choose BERT~\citep{devlin2018bert} and RoBERTa~\citep{liu2019roberta} for both English datasets and Chinese datasets, and ChineseBERT~\citep{sun2021chinesebert} only for Chinese datasets.
Both the base and large versions of the vanilla NER model are used in our experiments. Implementation details can be found in the original works: BERT~\citep{devlin2018bert}, RoBERTa~\citep{liu2019roberta} and ChineseBERT~\citep{sun2021chinesebert}.
\paragraph{Results} Table \ref{tab:result_on_english} and Table \ref{tab:result_on_chinese} show the results on the English and Chinese datasets, respectively. \footnote{For the two English datasets, we only interpolated $k$NN into the pure BERT~\citep{devlin2018bert} model, so the results differ from the reported ones. More details can be found at https://github.com/google-research/bert/issues/223}
We observe a significant improvement from interpolating the $k$NN model across all tasks. In particular, on the Chinese OntoNotes 4.0 and Chinese Weibo NER datasets, we observe improvements of 1.75 and 1.63 F1 points, respectively, over the BERT-based model.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.31]{./knn_ner.png}
\caption{F1-score on Chinese OntoNotes 4.0 by varying the percentage of training set.}
\label{fig:vary_percentage}
\end{figure}
\paragraph{Performance on low resource scenario}
Empirically, we also observe that $k$NN-NER can achieve comparable results with much fewer training samples, benefiting from direct access to the cached datastore.
On Chinese OntoNotes 4.0, we conducted experiments varying the percentage of the training set used for training while holding the full training set as the datastore for $k$NN search.
Figure \ref{fig:vary_percentage} shows that, without additional training or annotation, $k$NN-NER can still produce results comparable to the vanilla NER model with 40\% less training data.
\paragraph{Effectiveness and Sensitivity of $k$}
To clearly observe the effect of the hyperparameter $k$ in the $k$NN search, we varied $k$ on the Chinese OntoNotes 4.0 dataset with BERT as the vanilla NER model.
From Table \ref{tab:vary_k}, we observe that as $k$ increases, the F1-score first increases and then plateaus once $k$ reaches 256.
A larger $k$ retrieves more informative neighbors from the cached datastore.
As $k$ continues to increase, the newly retrieved examples are less similar to the current input example and hence contribute negligibly to the final performance. The steady performance for sufficiently large $k$ shows that our $k$NN-NER model is robust and not sensitive to the choice of $k$.
\begin{table}[th!]
\centering
\resizebox{.48\textwidth}{!}{
\begin{tabular}{ll}\toprule
\multicolumn{2}{c}{{\bf F1-score on Chinese OntoNotes 4.0}} \\\midrule
\textbf{Varying $k$} & \textbf{F1-score} \\\midrule
\text{The Vanilla NER Model} & 79.16 \\\midrule
\text{+ by setting $k$=8} & 79.49(+0.33) \\
\text{+ by setting $k$=16} & 79.67(+0.51) \\
\text{+ by setting $k$=32} & 80.01(+0.85) \\
\text{+ by setting $k$=64} & 80.53(+1.37) \\
\text{+ by setting $k$=128} & 80.83(+1.67) \\
\text{+ by setting $k$=256} & \textbf{80.91(+1.75)} \\
\text{+ by setting $k$=512} & \textbf{80.91(+1.75)} \\\bottomrule
\end{tabular}
}
\caption{F1-score on Chinese OntoNotes 4.0 by varying the $k$NN parameter $k$.}
\label{tab:vary_k}
\end{table}
\section{Conclusion}
In this paper, we propose a new $k$NN-NER framework, which augments the label distribution of a vanilla model by retrieving the $k$ nearest neighbors from the cached training set. This strategy requires no additional operations during the training phase. By applying $k$NN search to the vanilla NER model, we achieve a new state-of-the-art F1-score of 72.03 on the Chinese Weibo NER dataset and comparable results on a variety of datasets, \eg, Chinese MSRA and Chinese OntoNotes 4.0. Additionally, our experiments show that $k$NN-NER can achieve results comparable to the vanilla NER model with only 60\% of the training data.
\section{Introduction\label{secintro}}
\begin{figure}[b]
\centering
\includegraphics[width=80mm]{figure1.ps}
\caption{Expected spectrum distortion for 5 years of SK data with a roughly 70\% reduced background below 5.5 MeV, a 4 MeV recoil electron energy threshold, and half the energy-correlated systematic uncertainty of SK-I, under the current best-fit solution.}
\label{fig4}
\end{figure}
It is well-established that the best solution to the solar neutrino oscillation problem is MSW-LMA; this is strongly supported by many experimental results\cite{SKI_full}\cite{SKII_full}\cite{SNONCD}\cite{KAMLAND}\cite{BOREXINO}. For this solution it is expected that the $\nu_e$ survival probability in the low energy region ($<\sim$1 MeV) is higher than in the high energy region ($>\sim$10 MeV), because at low energies vacuum oscillation is dominant while at high energies matter oscillation is dominant. So a transition from vacuum to matter oscillation should exist, roughly between 1.0 and 10.0 MeV, in the energy range in which \isotope[8]{B} solar neutrinos predominate. The recoil electron energy threshold of Super-Kamiokande is around 5 MeV, so Super-Kamiokande has a chance to discover the low-energy upturn in the \isotope[8]{B} spectrum.
To realize this goal, reducing the background and systematic uncertainty below 5.5 MeV and lowering the recoil electron energy threshold are important \cite{nuclb143.13}. If we can take 5 years of 4 MeV threshold data with 70\% reduced background below 5.5 MeV and half the energy-correlated systematic uncertainty compared to SK-I, we should observe a spectrum as shown in Fig. \ref{fig4} and might discover this energy spectrum distortion. Thus, starting with the data of SK-III, making these detector and analysis improvements has been high-priority work.
\section{Improvement\label{seccur}}
Since the end of SK-I, we have worked hard to reduce background and systematic uncertainties, improving several key items. The following sections will introduce some of them.
\subsection{Water System\label{subsecWS}}
\begin{figure}[b]
\centering
\includegraphics[width=80mm]{figure2.ps}
\caption{Vertex distribution of SK-I/III.}
\label{figver}
\end{figure}
A major background for the solar neutrino observation at SK is the radioactivity from radon (Rn) in our otherwise extremely pure water. This ultra-pure water which fills the detector is made from natural mine water using a very high efficiency water purification system. The Rn background events are very similar to solar events, so it is very difficult to remove them using only analysis tools. As a result, many Rn events could be included in the final data set. To reduce this background, the improvement of the Rn reduction efficiency of the SK water purification system is the most effective approach. So, since the end of SK-I we have upgraded the system several times, including the addition of a new heat exchanger and two reverse osmosis units for SK-III.
In addition, we investigated the water flow in the tank by purposefully injecting radon-enriched water. Tracing the resulting background events in time from this injected Rn, we found stagnation of water in the top and bottom of the detector volume which increased the background. To counter this effect we installed new pipes and changed the water flow. Previously, the water was supplied from the bottom of the inner detector (ID) and drained from the top of both the ID and outer detector (OD). Now, it is supplied from the ID bottom and drained from the top and bottom in the ID and OD, with a total flow two times faster than before. As a result we have a central region with half the background (yellow box in Fig. \ref{figver}) as compared with SK-I, making lowering of the energy threshold a possibility.
Note that the excessive background near the wall and bottom of SK-III also existed in SK-II. This background is from the Fiber Reinforced Plastic (FRP) PMT covers which were added at the start of SK-II in order to protect against propagating shock waves from PMT implosions, and so could not be reduced by improving the water system. It posed a significant obstacle to enlarging the fiducial volume below 6.5 MeV.
\subsection{Vertex Shift\label{subsecvs}}
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure3.ps}
\caption{Vertex shift before and after correction. After correction, the vertex shift near the wall was reduced to below 10 cm.} \label{figversh}
\end{figure}
The vertex shift, defined as a vector from an averaged position of the reconstructed vertices of the data to that of corresponding Monte Carlo (MC) data, has been one of the main systematic uncertainties for the solar neutrino flux measurement since the beginning of SK-I\cite{SKI_full}. Because it could make events move in or out of the fiducial volume, it is a non-negligible systematic uncertainty.
Vertex shift is measured by placing a Ni-Cf gamma ray source at several positions in the tank. The reconstructed data vertices were shifted more than 10 cm from the real source position inward toward the tank center, in contrast to those of the MC, which were shifted less than 10 cm. The origin of the excessive shift of the data had been a mystery since SK-I. In SK-III, we investigated this mystery, resolving it just ten days before the end of SK-III.
It turned out that relative hit timing over a wide range ($\sim$100 nsec) was not perfectly linear due to characteristics of our electronics. We measured the timing linearity by artificially shifting the external trigger timing, a common stop signal of the individual TDCs for each hit channel. We found that hit timing should be corrected by $-0.7\%$ to restore linearity.
After the correction was applied the vertex shift shortened significantly (see Fig. \ref{figversh}). As a result, the background events around the wall (inside brown ellipse) were reduced and the fiducial volume between 5.0 and 6.5 MeV could be enlarged up to 13.3 kton (Fig. \ref{figver_vs}). In addition, we can apply the same correction for SK-I/II and expect to reduce the systematic
uncertainty of the SK-I/II data, because the SK-III DAQ is the same as that of SK-I/II.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure4.ps}
\caption{Vertex distribution before and after correction. }
\label{figver_vs}
\end{figure}
\subsection{Time and Position Dependent MC\label{subsectpMC}}
\begin{figure}[b]
\centering
\includegraphics[width=80mm]{figure5.ps}
\caption{$\mathrm{N_{eff}}$ time variation of Data (black) and MC (red) over the full SK-III period with fixed water transparency.} \label{figtimdecaye}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure6.ps}
\caption{$\mathrm{N_{eff}}$ of several positions before and after the position dependent MC installation}
\label{figdtp}
\end{figure}
That the water quality in SK changes as a function of time was known; the SK-I MC took it into account \cite{SKI_full}. In SK-III we also measured the time variation of the water transparency using $\mu$-$e$ decay data, as well as the wavelength dependence of various scattering and absorption water coefficients using a nitrogen/dye laser system \cite{SKnim}.
Using the full period of SK-III, we investigated possible relationships among the water transparency and water coefficients and found, similar to SK-I, the only apparent connection was between water transparency and the absorption coefficients. So we installed the absorption coefficients as a function of the water transparency measured by $\mu$-$e$ decay data and the scattering coefficients as a constant for the water transparency. As a result we had a time dependent MC.
We tested this MC with muon decay data. Figure \ref{figtimdecaye} shows the $\mathrm{N_{eff}}$ time variation of data and MC in SK-III assuming a fixed water transparency. $\mathrm{N_{eff}}$ is the effective number of hits to yield the same value at any position in the SK tank by applying several corrections to the number of hits \cite{SKI_full}. One such correction takes into account the water transparency. To observe the effect of the time variation of water transparency directly, we didn't apply this correction, and confirmed that MC tracks the data properly.
We also found a position dependence in the water quality. The existence of position dependence due to the water flow has been debated since SK-I. We installed light injectors on the barrel of the detector and tried to find it, but this method couldn't resolve the question. Finally, in SK-III, using several calibration sources we concluded that position dependence not only exists but also changes as a function of time. So we made a position dependent MC by installing a z-dependence in the water coefficients varied by the so-called top-bottom asymmetry as measured using calibration data over the full SK-III data-taking period.
We tested this MC with \isotope[16]{N} calibration data \cite{SKI_full}. We took data at several positions (Fig. \ref{figdtp}). Before correction the difference between data and MC was $\sim$3\% at the top of the detector, $\sim$2\% in the center, and $\sim$1\% at the bottom; after correction it was less than 1\% at the top and in the center. However, the difference at the bottom became larger; it still needs improvement.
\subsection{Angular Resolution\label{subsecar}}
Another large source of systematic uncertainty in SK-I is the angular resolution \cite{SKI_full}. Because this uncertainty comes essentially from the hit pattern difference between data and MC, we must tune the MC to reduce the uncertainty. The MC was tuned several times. DAQ-related items like single photoelectron response, timing resolution, etc., and optical properties like scattering and absorption coefficients, the reflectance of PMTs, etc., were tuned using several calibration sources, as was done for SK-I \cite{SKnim}.
When we calculated the difference in the angular resolution of data and MC, their agreement was not improved compared with SK-I. So we explored several optical properties using MC, finally finding that halving the original value for the reflectivity of the black sheet that covers the ID wall gave better agreement. Until that time we had used the SK-II value for the black sheet reflectivity, even though the materials of the SK-II/III black sheets are different.
To confirm this finding, we put a light injector with a black sheet reflector into the SK tank and measured the amount of direct light and reflected light for specific incident angles. Figure \ref{example_black} shows the ratio of the amount of charge due to reflected light versus direct light of data (black) and half reflectivity MC (red). This shows reasonable agreement, even if the 337 nm result (upper figure) still needs tuning.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure7.ps}
\caption{Black sheet reflectivity tuning result for three wavelengths. Data
is black and MC is red.} \label{example_black}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=80mm]{figure8.ps}
\caption{The distribution of the opening angle (depending on energy)
between the particle direction and the vector from the reconstructed
vertex to the hit PMT position.}
\label{example_like_dir}
\end{figure}
Next, we improved the direction fitter itself. In essence, the likelihood function that represents the distribution of the opening angle between the particle direction and the vector from the reconstructed vertex to the hit PMT position was improved. In SK-I it was made for 10 MeV MC electrons \cite{SKI_full}, while for SK-III several energy bins were considered. As a result, we achieved about 10\% better angular resolution around 5 MeV than in SK-I (Fig. \ref{example_veres}).
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure9.ps}
\caption{Angular resolution of SK-I/III.} \label{example_veres}
\end{figure}
\subsection{Other Items\label{subsecother}}
In addition to the above efforts, we retuned most of the reconstruction tools and reduction criteria to reduce their systematic uncertainties. We are continuing to tune those items whose tuning results weren't sufficiently good, as mentioned in the previous sections. As the determination of our systematic errors is still under way, we can't show the exact numbers of the systematic uncertainties yet.
\section{New solar $\nu$ results\label{secres}}
As a result of the above efforts, we took good solar neutrino data during the full Super-Kamiokande-III data-taking period, which ran from August of 2006 through August of 2008. Currently, 298 live days of data with a lowered total energy threshold of 4.5 MeV have been analyzed. This section shows the following new solar neutrino results for this data set:
\begin{itemize}
\item Observed \isotope[8]{B} $\nu$ flux in SK-III
\item Angular distributions
\item \isotope[8]{B} $\nu$ flux time variation
\item Recoil electron energy spectrum
\item Day/Night asymmetry
\end{itemize}
But the 4.5-5.0 MeV energy bin still needs additional MC tuning, so for the flux calculation we didn't include this bin. We have only quoted statistical errors, since the estimation of the systematic uncertainties isn't finished.
\subsection{Observed Solar $\nu$ Flux\label{subsecflux}}
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure10.ps}
\caption{Angular distribution of solar neutrino event candidates between 5.0 and 20.0 MeV. The area under the dotted line is the contribution from remaining background events. The area between the solid and dotted line indicates the elastic scattering peak.} \label{example_figure}
\end{figure}
Solar neutrinos traversing SK occasionally interact with bound electrons of water molecules. These $\nu$-$e$ interactions are elastic scattering, and the incident neutrino and recoil electron directions are highly correlated. Fig. \ref{example_figure} shows the $\cos\theta_{\mathrm{sun}}$ distribution of solar neutrino event candidates, where $\theta_{\mathrm{sun}}$ is the angle between the recoil electron direction and the direction from the sun; a sharp peak around 1.0 is observed. Using the same method as SK-I \cite{SKI_full}, we extracted the solar neutrino signal from this distribution. The best-fit value for the number of signal events between 5.0 MeV and 20.0 MeV is $\mathrm{4968^{+108}_{-106}(stat.)}$. The corresponding \isotope[8]{B} $\nu$ flux is
\begin{equation*}
\mathrm{(2.31 \pm 0.05(stat.))\times 10^6\,cm^{-2}s^{-1},}
\end{equation*}
which is consistent with the SK-I/II values of $\mathrm{(2.35 \pm 0.02(stat.) \pm 0.08(sys.)) \times 10^6\,cm^{-2}s^{-1}}$ and $\mathrm{(2.38 \pm 0.05(stat.) ^{+0.16}_{-0.15}(sys.)) \times 10^6\,cm^{-2}s^{-1}}$ within the statistical error.
\subsection{Angular Distributions\label{subsecang}}
Sec. \ref{subsecflux} showed the angular distribution of all events between 5.0 and 20.0 MeV. This section will focus on the angular distributions of the lowest energy bins.
First, Fig. \ref{example_figure_col2} shows the 5.0-5.5, 5.5-6.0 and 6.0-6.5 MeV regions. For these energy bins, the fiducial volume (13.3 kton) is smaller than that of the higher energy bins (22.5 kton), because of the large number of background events from the ID wall (Sec. \ref{subsecWS}). As a result of the efforts described in Sec. \ref{seccur}, the background level in SK-III became lower than that of SK-I, especially below 6.0 MeV where it has been cut in half. Moreover, the solar peak of SK-III is sharper than that of SK-I. In other words, the improved direction fitter (Sec. \ref{subsecar}) has made SK-III's angular resolution better than SK-I's.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure11.ps}
\caption{SK-I (blue) / SK-III (red) angular distributions of solar neutrino event candidates in the 5.0-5.5, 5.5-6.0 and 6.0-6.5 MeV regions, for the same volume (the central 13.3 kton).} \label{example_figure_col2}
\end{figure}
Next, we report a new low energy bin of 4.5-5.0 MeV. As a result of the improvement of the SK water purification system, we were able to achieve a 4.5 MeV recoil electron energy threshold. Though the high background from the ID wall makes this fiducial volume (9.0 kton) much smaller than that of the higher energy ranges, the low background in the central detector region makes it possible to analyze the 4.5-5.0 MeV region; we have observed the solar signal in this energy bin. Figure \ref{example_angle2} shows the angular distribution for this region. Because of the huge background rate and insufficient exposure, the solar peak is not readily apparent at first glance, but the fitting result was good. Still, due to the remaining work, we cannot quote an exact flux number yet.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure12.ps}
\caption{Angular distribution of solar neutrino event candidates between 4.5 and 5.0 MeV. The area under the blue dotted line is the contribution from background events. The area between the red solid and blue dotted line indicates the elastic scattering peak.} \label{example_angle2}
\end{figure}
\subsection{Seasonal Variation\label{subsectime}}
The time or seasonal variation of the total flux since the start of SK-I is shown in Fig. \ref{example_time}. The size of each horizontal bin is 1.5 months. The variation agrees well with a sinusoidal trend, consistent with the expected $1/r^2$ variation due to the eccentricity of the Earth's orbit around the sun.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure13.ps}
\caption{The time variation of the solar flux beginning with SK-I.} \label{example_time}
\end{figure}
\subsection{Energy Spectrum\label{subsecspec}}
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{figure14.ps}
\caption{Energy spectrum. Red area is \isotope[8]{B} $\nu$ contribution.} \label{example_spec}
\end{figure}
Figure \ref{example_spec} shows the ratios of the measured recoil electron energy spectra of SK-I/II/III to the expected MC spectrum (normalized by the SK-I best fit). The dotted lines are the average data/MC ratios and the green bands represent the statistical error. The red areas indicate the expected \isotope[8]{B} $\nu$ contribution from the SK-I best fit, and the white areas under the dotted lines are the $hep$ $\nu$ contributions. We can confirm that SK-I/II/III are statistically consistent; even though SK-III has lower backgrounds below 6.5 MeV, its statistical errors are larger.
\subsection{Day/Night Asymmetry\label{subsecdna}}
The Day/Night asymmetry ($\mathrm{A_{DN}}$) is the only direct test of matter effects on solar neutrino oscillations. This value is obtained from $\mathrm{A_{DN}=2(D-N)/(D+N)}$, where D and N are the day and night fluxes measured by selecting events which occur when the cosine of the solar zenith angle is less than zero (day) and greater than zero (night), respectively. In SK-I it was measured as $\mathrm{-2.1\% \pm 2.0\%(stat.)^{+1.3}_{-1.2}\%(syst.)}$ and also fitted as $\mathrm{-1.8\% \pm 1.6\%(stat.)^{+1.3}_{-1.2}\%(syst.)}$, and in SK-II it was measured as $\mathrm{-6.3\% \pm 4.2\%(stat.) \pm 3.7\%(syst.)}$. In SK-III it can be measured to 4.3\%(stat.) with the 298 days of data shown here, and perhaps to 3.7\%(stat.) using the entire SK-III data set, including periods of high-threshold or high-background runs. If so, then the combined SK-I/II/III data can determine it to 1.6\%(stat.), and fit variations to 1.3\%(stat.).
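For reference, the asymmetry and its statistical error follow from simple error propagation on independent day and night flux measurements; the sketch below (ours, with toy numbers) illustrates the computation:
\begin{verbatim}
# Sketch: A_DN = 2(D - N)/(D + N) with first-order error propagation.
import numpy as np

def day_night_asymmetry(D, sig_D, N, sig_N):
    A = 2.0 * (D - N) / (D + N)
    dAdD = 4.0 * N / (D + N) ** 2       # partial derivative w.r.t. D
    dAdN = -4.0 * D / (D + N) ** 2      # partial derivative w.r.t. N
    sig_A = np.hypot(dAdD * sig_D, dAdN * sig_N)
    return A, sig_A

A, sig = day_night_asymmetry(D=2.32, sig_D=0.07, N=2.36, sig_N=0.06)
print(f"A_DN = {A:+.3f} +/- {sig:.3f}")   # toy numbers, not SK results
\end{verbatim}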
\section{Conclusion\label{secsum}}
We achieved lower backgrounds below 6.0 MeV in the center of SK, and have almost finished analyzing the SK-III data. We are trying to reduce systematic errors compared to SK-I, and the estimation of these systematic errors is under way. Our SK-III results are consistent with the SK-I/II results within statistical uncertainties. In SK-III, the recoil electron energy threshold was lowered to 4.5 MeV. With existing data, SK is statistically sensitive down to LMA day/night asymmetries of 1.3\%. We expect to present preliminary SK-III solar neutrino results before the end of 2009.
\bigskip
\section{Introduction}
\label{sec:introduction}
\emph{Knowledge Hypergraphs} are graph structured knowledge bases that store facts about the world in the form of relations between two or more entities.
They can be seen as one generalization of \emph{Knowledge Graphs},
in which relations are defined on exactly two entities.
Since accessing and storing all the facts in the world is difficult, knowledge bases are incomplete; the goal of \emph{link prediction} (or \emph{knowledge completion}) in knowledge (hyper)graphs
is to predict unknown links or relationships between entities based on the existing ones.
In this work we are interested in the problem of link prediction in knowledge hypergraphs. Our motivation
for studying link prediction in these more sophisticated knowledge structures is based on the fact that most knowledge in the world has inherently complex composition,
and that not all data
can be represented as a relation between two entities without either
losing a portion of the information or creating incorrect data points.
Link prediction in knowledge graphs is a problem that is studied extensively,
and has applications in several tasks such as searching~\citep{singhal} and automatic question answering~\citep{watson}.
In these studies, knowledge graphs are defined as directed graphs having
nodes as entities and labeled edges as relations;
edges are directed from the \emph{head} entity to the \emph{tail} entity.
The common data structure for representing knowledge graphs is a set of triples $relation(head, tail)$
that represent information as a collection of binary relations.
There exist a large number of knowledge graphs that are publicly available, such as
\acr{NELL}~\citep{carlson2010toward} and \acr{Freebase}~\citep{bollacker2008freebase}.
It is noteworthy that \acr{Freebase} is a complex knowledge base in which $61$\% of the relations are beyond binary (defined on more than two entities). However, current methods use a simplified version of \acr{Freebase} where the non-binary relations are converted to binary ones (defined on exactly two entities).
Embedding-based models~\citep{nguyen2017overview} have proved to be effective for knowledge graph completion. These approaches learn embeddings for entities and relations.
To find out if $r(h, t)$ is a fact (i.e. is true), such models define a function that embeds relation $r$ and entities $h$ and $t$, and produces the probability that $r(h, t)$ is a fact.
While successful, such embedding-based methods make the strong assumption that all relations are binary.
In this work, we
introduce two embedding-based models that perform link prediction in knowledge hypergraphs.
The first is \emph{\oursimple{}}, which is inspired by \emph{SimplE}~\citep{kazemi2018simple}, originally designed to perform link prediction in knowledge graphs.
For a given entity, \emph{\oursimple{}} shifts the entity embedding
by a value that depends on the position of the entity in the given relation.
Our second model is \emph{\ourmodel{}}, which in addition to learning
entity embeddings, learns positional (convolutional) filters;
these filters are disentangled from entity representations and are used to transform the representation of an entity
based on its position in a relation.
We show that both \oursimple{} and \ourmodel{} are fully expressive.
To evaluate our models, we introduce two new datasets from subsets of \acr{Freebase}, and develop baselines by extending existing models on knowledge graphs to work with hypergraphs.
We evaluate the proposed methods on standard binary and non-binary datasets. While both \oursimple{} and \ourmodel{} outperform our baselines and the state-of-the-art, \ourmodel{} is more effective with fewer parameters. It also produces much better results when predicting relations that contain at least one entity in a position never encountered during training, demonstrating the clear advantage of disentangling position representation from entity embeddings.
The contributions of this paper are:
(1) \ourmodel{} and \oursimple{}, two embedding-based methods for knowledge hypergraph completion that outperform the baselines for knowledge hypergraphs,
(2) a set of baselines for knowledge hypergraph completion,
and
(3) two new knowledge hypergraphs obtained from subsets of \acr{Freebase}, which can serve as new evaluation benchmarks for knowledge hypergraph completion methods.
\section{Motivation and Related Work}
Knowledge hypergraph completion is a relatively under-explored area.
We motivate our work by showing that adjusting current models to accommodate hypergraphs does not yield satisfactory results.
Existing knowledge graph completion methods can be used in the beyond-binary setting by either
(1) extending known models to work with non-binary relational data (e.g., m-TransH~\citep{m-TransH}), or by
(2) converting non-binary relations into binary ones using methods such as reification or star-to-clique~\citep{m-TransH}, and then applying known link prediction methods.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{hypergraph-degree-from-university.pdf}
\caption{\texttt{DEGREE\_FROM\_UNIVERSITY} defined on three facts.}
\label{fig:hypergraph-degree-from-univeresity}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{reified.pdf}
\caption{Reifying non-binary relations with three additional entities.}
\label{fig:reified}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{star-to-clique.pdf}
\caption{Converting non-binary relations into cliques.}
\label{fig:star-to-clique}
\end{subfigure}
\caption{
In this example, the three facts in the original graph (a) show that Turing received his PhD from Princeton and his undergraduate degree from King's College Cambridge.
Figures (b) and (c) show two methods of converting this ternary relation into three binary ones.
}
\label{fig:hypergraph-to-graph}
\end{figure}
In the first case, the only example that extends a known model to work with non-binary relations is m-TransH~\citep{m-TransH}, which is an extension of TransH~\citep{TransH}, and
which we show to be less effective than our models in Section~\ref{sec:experiments}.
The second case is about restructuring a knowledge hypergraph
to work with current knowledge graph completion methods.
One common approach to reduce a hypergraph into a graph is \emph{reification}.
In order to reify a fact with a relation defined on $k$ entities, we first create a new entity $e$ (square nodes in Figure~\ref{fig:reified})
and connect $e$ to each of the $k$ entities that are part of the given fact.
Another approach is \emph{Star-to-clique}, which converts a fact defined on $k$ entities
into $k \choose 2$ facts with distinct relations between all pairwise entities in the fact. See Figure~\ref{fig:star-to-clique}.
Both conversion approaches have their caveats
when current link-prediction models are applied to the resulting graphs. The example in Figure~\ref{fig:hypergraph-degree-from-univeresity} shows three facts that
pertain to the relation \relation{DEGREE_FROM_UNIVERSITY}.
When we reify the hypergraph in this example (Figure~\ref{fig:reified}), we add three reified entities.
At test time, we again need to reify the test samples, which means we need a way to embed newly created entities about which we have almost no information.
Applying the star-to-clique method to the hypergraph does not yield better results either: in this case, the resulting graph loses
some of the information that the original hypergraph had ---
in Figure~\ref{fig:star-to-clique},
it is no longer clear which degree was granted by which institution.
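Both conversions are easy to state programmatically. The following sketch (ours; relation and entity names are illustrative) reifies a ternary fact and applies star-to-clique to it, making the information loss of the latter explicit in the generated pairwise relations:
\begin{verbatim}
# Sketch: hypergraph-to-graph conversions for (relation, e_1, ..., e_k).
from itertools import combinations

def reify(fact, fact_id):
    rel, *ents = fact
    aux = f"reified_{fact_id}"          # the new auxiliary (square) node
    return [(f"{rel}_arg{i}", aux, e) for i, e in enumerate(ents, 1)]

def star_to_clique(fact):
    rel, *ents = fact                   # k choose 2 pairwise facts
    return [(f"{rel}_{i}_{j}", ents[i], ents[j])
            for i, j in combinations(range(len(ents)), 2)]

fact = ("DEGREE_FROM_UNIVERSITY", "Turing", "PhD", "Princeton")
print(reify(fact, 0))
print(star_to_clique(fact))
\end{verbatim}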
Existing methods that relate to our work in this paper can be grouped into three main categories:
\textbf{Knowledge graph completion.}
Embedding-based models such as
\emph{translational}~\citep{TransE,TransH}, \emph{bilinear}~\citep{DistMult,trouillon2016complex,kazemi2018simple}, and \emph{deep models}~\citep{nickel2011three,socher2013reasoning} have proved effective for knowledge graphs where all relations are binary.
We extend some of the models in this category and compare them with the proposed methods.
\textbf{Knowledge hypergraph completion.}
Soft-rule models~\citep{richardson2006markov, de2007problog, kazemi2014relational} can easily handle variable arity relations and have the advantage of being interpretable.
However, they have a limited learning capacity and can only learn a subset of patterns~\citep{nickel2016review}.
Embedding-based methods are more powerful than soft-rule approaches.
\citet{guan2019link} proposed an embedding-based method
based on the star-to-clique approach, whose caveats are discussed above.
m-TransH~\citep{m-TransH} extends TransH~\citep{TransH} to knowledge hypergraph completion. \citet{kazemi2018simple} prove that TransH and consequently m-TransH are not fully expressive and have restrictions in modeling relations.
In contrast, our embedding-based proposed models are fully expressive and outperform m-TransH.
\textbf{Learning on hypergraphs.}
Hypergraph learning has been employed to model high-order correlations among data in many tasks, such as video object segmentation~\citep{huang2009video} and modeling image relationships and image ranking~\citep{huang2010image}. There is also a line of work extending graph neural networks to hypergraph neural networks~\citep{HGNN} and hypergraph convolution networks~\citep{HGCN}. However, these models are designed for undirected hypergraphs whose edges are not labeled (no relations), while knowledge hypergraphs are directed and labeled.
As there is no clear or easy way of extending these models to our knowledge hypergraph setting, we do not consider them as baselines for our experiments.
\section{Definition and Notation}
A world consists of a finite set of entities $\entset$,
a finite set of relations $\relset$, and a set of tuples
$\tau$ defined over $\entset$ and $\relset$.
Each tuple in $\tau$ is of the form $r(v_1, v_2, \dots, v_k)$ where
$r \in \relset$ is a relation and each $v_i\in \entset$ is an entity, for all $i=1,2,\dots,k$.
Here \emph{arity} $\left|r\right|$ of a relation $r$ is the
number of arguments that the relation takes and is fixed for each relation.
A world specifies what is true: all the tuples in $\tau$
are true,
and the tuples that are not in $\tau$ are false.
A knowledge hypergraph consists of a subset of the tuples $\tau' \subseteq \tau$.
Link prediction in knowledge hypergraphs is the problem of predicting the missing tuples in $\tau'$,
that is, finding the tuples $\tau \setminus \tau'$.
An \emph{embedding} is a function that converts an entity or a relation into a vector (or sometimes a higher order tensor) over a field (typically the real numbers).
We use bold lower-case for vectors, that is, $\bm{e} \in \realset^k$ is an embedding of entity $e$, and
$\bm{r} \in \realset^{l}$ is an embedding of a relation $r$.
Let $\bm{v}_1, \bm{v}_2, \dots, \bm{v}_k$ be a set of vectors.
The variadic function $\mathrm{concat}(\bm{v}_1, \dots, \bm{v}_k)$ outputs the concatenation of its input vectors.
The 1D convolution operator~$*$ takes as input a vector $\bm{v}$ and a convolution weight filter $\bm{\omega}$,
and outputs the convolution of $\bm{v}$ with the filter $\bm{\omega}$.
We define the variadic function $\dotsum{}$ to be the sum of the element-wise product
of its input vectors, namely
$\dotsum{\bm{v}_1, \bm{v}_2, \dots, \bm{v}_k} = \sum_{i=1}^{\ell} \bm{v}_1^{(i)} \bm{v}_2^{(i)} \cdots \bm{v}_k^{(i)}$,
where each vector $\bm{v}_j$ has the same length $\ell$, and $\bm{v}_j^{(i)}$ is the $i$-th element of vector $\bm{v}_j$.
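For concreteness, here is a minimal NumPy sketch of the product-sum $\dotsum{}$; the function name and example values are ours, for illustration only:
\begin{verbatim}
import numpy as np

def dotsum(*vectors):
    # Sum of the element-wise product of equal-length vectors:
    # dotsum(v1, ..., vk) = sum_i v1[i] * v2[i] * ... * vk[i].
    product = np.ones_like(vectors[0], dtype=float)
    for v in vectors:
        product = product * v
    return product.sum()

# Example with three 4-dimensional vectors: 4 positions x (1 * 2 * 3).
v1, v2, v3 = np.ones(4), 2 * np.ones(4), 3 * np.ones(4)
assert dotsum(v1, v2, v3) == 24.0
\end{verbatim}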
For the task of knowledge graph completion, an embedding-based model defines a function $\phi$ that takes a tuple $x$
as input, and generates a prediction, \eg a probability (or score) of the tuple being true.
A model is \emph{fully expressive} if given any complete world (full assignment of truth values to all tuples),
there exists an assignment of values to the embeddings of the entities and relations
that accurately separates the tuples that are true in the world from those that are false.
\section{Knowledge Hypergraph Embedding: Proposed Methods}
\label{HypE}
The idea at the core of our methods is that the way an entity representation is used to make predictions is affected by the role that the entity plays in a given relation.
In the example in Figure~\ref{fig:hypergraph-to-graph}, Turing plays the role of a student at a university, but he may have a different role (e.g. `professor') in another relation.
This means that the way we use Turing's embedding may need to be different for computing predictions for each of these roles.
The prediction for an entity should depend on the position in which it appears: if the prediction does not depend on the position, then the relation is forced to be symmetric; if positions are instead learned independently, information about one position does not interact with that of the others.
It should be noted that in several embedding-based methods for knowledge graph completion, such as canonical polyadic~\citep{CP, lacroix2018canonical}, ComplEx~\citep{trouillon2016complex}, and SimplE~\citep{kazemi2018simple},
the prediction depends on the position of an entity.
In what follows, we propose two embedding-based methods for link prediction in knowledge hypergraphs.
The first model is inspired by SimplE and has its roots in link prediction in knowledge graphs;
the second model takes a fresh look at knowledge completion as a multi-arity problem,
without first setting it up within the frame of binary relation prediction.
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\vspace{-6mm}
\includegraphics[width=\textwidth]{HSimplE.png}
\vspace{-6mm}
\caption{Function $\phi$ for \oursimple{}.}
\label{fig:score_HSimplE}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\vspace{-6mm}
\includegraphics[width=\textwidth]{HypE1.png}
\vspace{-6mm}
\caption{Function $f(\bm{e}, i)$ used in \ourmodel{}.}
\label{fig:representor}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.32\textwidth}
\vspace{-6mm}
\includegraphics[width=\textwidth]{HypE2.png}
\vspace{-6mm}
\caption{Function $\phi$ for \ourmodel{}.}
\label{fig:score_HypE}
\end{subfigure}
\caption{Visualization of \ourmodel{} and \oursimple{} architectures. (a) function $\phi$ for \oursimple{} transforms entity embeddings by shifting them based on their position and combining them with the relation embedding.
(b) function $f(\bm{e}, i)$ for \ourmodel{} takes an entity embedding and the position the entity appears in the given tuple, and returns a vector. (c) function $\phi$ takes as input a tuple and outputs the score of \ourmodel{} for the tuple. }\label{fig:HyperConvE}
\end{figure}
\textbf{\oursimple{}:}
\oursimple{} is an embedding-based method for link prediction in knowledge hypergraphs that is inspired by SimplE~\citep{kazemi2018simple}.
SimplE learns two embedding vectors $\bm{e}^{(1)}$ and $\bm{e}^{(2)}$ for an entity~$e$
(one for each possible position of the entity),
and two embedding vectors $\bm{r}^{(1)}$ and $\bm{r}^{(2)}$ for a relation~$r$
(with one relation embedding as the inverse of the other).
It then computes the score of a triple as
$\phi(r(e_1, e_2))=\dotsum{\bm{r}^{(1)},\bm{e}_1^{(1)},\bm{e}_2^{(2)}} + \dotsum{\bm{r}^{(2)},\bm{e}_2^{(1)},\bm{e}_1^{(2)}}$.
In \oursimple{}, we adopt the idea of having different representations for an entity
based on its position in a relation, and updating all these representations
from a single training tuple.
We do this by representing each entity $e$ as a single vector
$\bm{e}$ (instead of multiple vectors as in SimplE), and
each relation $r$ as a single vector $\bm{r}$.
Conceptually, each $\bm{e}$ can be seen as the concatenation of the different representations
of $e$ based on every possible position.
For example, in a knowledge hypergraph where the relation with maximum arity is $\delta$,
an entity can appear in $\delta$ different positions; hence $\bm{e}$ will be the concatenation
of $\delta$ vectors, one for each possible position.
\oursimple{} scores a tuple using the following function:
$$\phi(r(e_i, e_j, \dots, e_k)) = \dotsum{\bm{r}, \bm{e}_i, \mathrm{shift}(\bm{e}_j, \mathrm{len}(\bm{e}_j)/\delta), \dots, \mathrm{shift}(\bm{e}_k, \mathrm{len}(\bm{e}_k)(\delta-1)/\delta)}.$$
Here, $\mathrm{shift}(\bm{a}, x)$ shifts vector $\bm{a}$ to the left by $x$ steps, $\mathrm{len}(\bm{e})$ returns the length of vector $\bm{e}$, and
$\delta = \max_{r \in \relset}(|r|)$.
We observe that for knowledge graphs ($\delta=2$),
SimplE is a special instance of \oursimple{}, with
$\bm{e} = \mathrm{concat}(\bm{e}^{(1)}, \bm{e}^{(2)})$ and $\bm{r}= \mathrm{concat}(\bm{r}^{(1)}, \bm{r}^{(2)})$.
The architecture of \oursimple{} is summarized in Figure~\ref{fig:score_HSimplE}.
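A minimal NumPy sketch of this scoring function follows; it assumes the embedding length is divisible by $\delta$, and the helper names are ours:
\begin{verbatim}
import numpy as np

def hsimple_score(r_emb, entity_embs, delta):
    # Shift the embedding of the entity at position i to the left by
    # len(e) * i / delta, then take the product-sum with the relation.
    step = len(entity_embs[0]) // delta
    shifted = [np.roll(e, -i * step) for i, e in enumerate(entity_embs)]
    product = r_emb.copy()
    for v in shifted:
        product = product * v
    return product.sum()
\end{verbatim}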
\textbf{\ourmodel{}:}
\ourmodel{} learns a single representation for each entity, a single representation
for each relation, and positional convolutional weight filters for each possible position.
At inference time, the appropriate positional filters are first used to transform the embedding
of each entity in the given fact;
these transformed entity embeddings
are then combined with the embedding of the relation to produce a~\emph{score},
\eg a probability value that the input tuple is true.
The architecture of \ourmodel{} is summarized in Figures~\ref{fig:representor} and~\ref{fig:score_HypE}.
Let $n$, $l$, $d$, and $s$ denote the number of filters per position, the filter-length, the embedding dimension and the stride of the convolution, respectively.
Let $\omega_i \in \mathbb{R}^{n \times l}$ be the convolutional
filters associated with an entity at position $i$, and let
$\omega_{ij} \in \mathbb{R}^l$ be the $j$-th row of $\omega_i$.
We denote by $P \in \mathbb{R}^{nq \times d}$ the projection matrix,
where $q = \floor{(d - l) / s} + 1$ is the feature map size.
For a given tuple, define $f(\bm{e}, i) = \mathrm{concat}(\bm{e} * \omega_{i1}, \dots, \bm{e} * \omega_{in}) P$
to be a function that returns a vector of size $d$ based on the entity embedding $\bm{e}$ and its position $i$ in the tuple.
Thus, each entity embedding $\bm{e}$ appearing at position $i$ in a given tuple is convolved with the set of position-specific filters $\omega_{i}$ to give $n$ feature maps of size $q$.
All $n$ feature maps corresponding to an entity are concatenated to a vector
of size $nq$ and projected to the embedding space by multiplying it by $P$.
The projected vectors of entities and the embedding of the relation
are combined by an inner-product to define $\phi$:
\begin{equation}
\label{eq:phi}
\phi(
r(e_1, \dots, e_{|r|})) = \tuplee{\bm{r}, f(\bm{e}_1, 1), \dots, f(\bm{e}_{|r|}, |r|)}
\end{equation}
The advantage of learning positional filters independently of entities is twofold.
On one hand, learning a single vector per entity keeps entity representations
simple and disentangled from the entity's position in a given fact.
On the other hand, unlike \oursimple{}, \ourmodel{} learns each positional filter from all entities that appear in the given position.
Overall, this separation of representations for entities, relations, and positions facilitates
the representation of knowledge bases having facts with an arbitrary number of entities. It also gives \ourmodel{} an advantage when we test a trained \ourmodel{} model on a tuple that contains an entity in a position never seen at train time. We discuss this further in Section~\ref{sec:hypergraph results}.
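The following NumPy sketch illustrates $f(\bm{e}, i)$ and the resulting score; it uses the deep-learning (cross-correlation) convention for $*$, and all helper names are ours:
\begin{verbatim}
import numpy as np

def conv1d(v, w, s):
    # Valid 1D cross-correlation of vector v with filter w at stride s;
    # the output has q = floor((d - l) / s) + 1 entries.
    q = (len(v) - len(w)) // s + 1
    return np.array([np.dot(v[j*s : j*s + len(w)], w) for j in range(q)])

def f(e, i, filters, P, s):
    # Convolve e with the n filters of position i (filters[i] is n x l),
    # concatenate the n feature maps, and project back to R^d with P,
    # which has shape (n*q, d).
    feature_maps = [conv1d(e, w, s) for w in filters[i]]
    return np.concatenate(feature_maps) @ P

def hype_score(r_emb, entity_embs, filters, P, s):
    product = r_emb.copy()
    for i, e in enumerate(entity_embs):
        product = product * f(e, i, filters, P, s)
    return product.sum()
\end{verbatim}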
Both \oursimple{} and \ourmodel{} are fully expressive --- an important property that has been
the focus of several studies~\citep{simpleplus,trouillon2017knowledge,xu2018powerful}.
A model that is not fully expressive can easily underfit the training data and embed assumptions that may not be reflected in reality.
We defer the proofs of expressivity
to Appendix~\ref{appendix}.
\subsection{Objective Function and Training}
To learn either a \oursimple{} or a \ourmodel{} model, we use stochastic gradient descent with mini-batches. In each learning iteration, we take a batch of positive tuples from the knowledge hypergraph. As we only have positive instances available, we also need to train our model on negative instances. For this purpose, for each positive instance, we produce a set of negative instances.
For negative sample generation, we follow
the contrastive approach of \citet{TransE} for knowledge graphs and extend it to knowledge hypergraphs:
for each tuple,
we produce a set
of negative samples of size $N |r|$
by replacing each of the entities with $N$ random entities in the tuple, one at a time.
Here, $N$ is the number of negative samples generated per entity position, and is a hyperparameter.
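A sketch of this corruption procedure follows (for brevity, it does not filter out accidental collisions with the true entity; identifiers are ours):
\begin{verbatim}
import random

def negative_samples(fact, num_entities, N):
    # For a positive tuple (r, e_1, ..., e_k), corrupt each position
    # N times with a random entity, yielding N * |r| negatives.
    r, entities = fact[0], list(fact[1:])
    negatives = []
    for i in range(len(entities)):
        for _ in range(N):
            corrupted = entities.copy()
            corrupted[i] = random.randrange(num_entities)
            negatives.append((r, *corrupted))
    return negatives
\end{verbatim}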
Given a knowledge hypergraph defined on $\tau'$, we let $\tau'_{train}$, $\tau'_{test}$, and $\tau'_{valid}$ denote
the train, test, and validation sets, respectively,
so that $\tau' = \tau'_{train} \cup \tau'_{test}\cup \tau'_{valid}$.
For any tuple $x$ in $\tau'$, we let $T_{neg}(x)$ be a function that generates a
set of negative samples through the process described above.
Let $\bm{r}$ represent relation embeddings, $\entityv$ represent entity embeddings,
and let $\phi$ be the function given by \eqref{eq:phi} that maps a tuple to a score based on $\bm{r}$ and $\entityv$.
We define the following cross-entropy loss, which is a combination of softmax and negative log-likelihood loss, and has been shown to be effective for link prediction~\citep{baselines-strike}:
\begin{equation*}
\mathcal{L}(\bm{r}, \entityv) = \sum_{x' \in \tau'_{train}}{-\log\bigg(\frac{e^{\phi(x')}}{ e^{\phi(x')} + \displaystyle\sum_{x \in T_{neg}(x')}{e^{\phi(x)}}}\bigg)}
\end{equation*}
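Numerically, this is the standard softmax cross-entropy in which, for each positive tuple, the correct class is the positive among its own negatives; a PyTorch sketch (the tensor layout is our assumption):
\begin{verbatim}
import torch

def batch_loss(pos_scores, neg_scores):
    # pos_scores: (B,) scores of positive tuples; neg_scores: (B, M)
    # scores of the M negatives generated for each positive.
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)
    # The positive always sits at index 0 of each row of logits.
    targets = torch.zeros(len(pos_scores), dtype=torch.long)
    return torch.nn.functional.cross_entropy(logits, targets)
\end{verbatim}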
\section{Experimental Setup}
\subsection{Datasets}
We conduct experiments on a total of $5$ different datasets.
For the experiments on datasets with binary relations, we use two standard benchmarks for
knowledge graph completion: WN18~\citep{WN18} and FB15k~\citep{TransE}.
WN18 is a subset of \acr{Wordnet}~\citep{miller1995wordnet} and FB15k is a subset of \acr{Freebase}~\citep{bollacker2008freebase}.
We use the train, validation, and test split proposed by \citet{TransE}.
The experiments on knowledge hypergraph completion are conducted on three datasets.
The first is JF17K proposed by \citet{m-TransH}; as no validation set is proposed
for JF17K, we randomly select 20\% of the train set as validation.
We also create two datasets \acr{FB-auto} and \acr{m-FB15K} from \acr{Freebase}. See Appendix~\ref{appendix} for more dataset details.
\subsection{Baselines}\label{baselines}
To compare our results to those of existing work, we first design a few simple baselines
that extend current models to work with knowledge hypergraphs.
We only consider models that admit a simple extension to beyond-binary relations
for the link prediction task.
The baselines for this task are grouped into the following categories:
(1) methods that work with binary relations and that are easily extendable to higher-arity: r-SimplE, m-DistMult, and m-CP;
(2) existing methods that can handle higher-arity relations: m-TransH.
Below we give some details about methods in category (1).
\textbf{r-SimplE:} To test performance of a model trained on reified data,
we convert higher-arity relations in the train set to binary relations through reification. We then use the SimplE model (which we call r-SimplE) to train and test on this reified data. In this setting, at test time higher-arity relations are first reified to a set of binary relations; this process creates new auxiliary entities for which the model has no learned embeddings. To embed the auxiliary entities for the prediction step,
we use the observations about them that are available at test time.
For example, a higher-arity relation $r(e_1, e_2, e_3)$ is reified at test time by being replaced by three facts: $r_1(id123, e_1)$, $r_2(id123, e_2)$, and $r_3(id123, e_3)$.
When predicting the tail entity of $r_1(id123, ?)$, we use the other two reified facts to
learn an embedding for entity $id123$.
Because $id123$ is added only to help represent the higher-arity relations as a set of binary relations,
we only do tail prediction for reified relations.
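A small sketch of the reification step used by r-SimplE (identifiers are ours):
\begin{verbatim}
def reify(fact, aux_id):
    # Replace r(e_1, ..., e_k) with k binary facts r_i(aux, e_i),
    # introducing one auxiliary entity per higher-arity fact.
    r, entities = fact[0], fact[1:]
    return [(f"{r}_{i+1}", aux_id, e) for i, e in enumerate(entities)]

# r(e1, e2, e3) becomes r_1(id123, e1), r_2(id123, e2), r_3(id123, e3).
facts = reify(("DEGREE_FROM_UNIVERSITY", "Turing", "Princeton", "PhD"),
              "id123")
\end{verbatim}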
\textbf{m-DistMult:} DistMult~\citep{DistMult} defines a score function $\phi(r(e_i, e_j)) = \dotsum{\bm{r}, \bm{e}_i, \bm{e}_j}$. To accommodate non-binary relations, we redefine this function as
$\phi(\ensuremath{\tuplew{r}{e_i}{\dots}{e_j}}) = \dotsum{\bm{r}, \bm{e}_i, \dots, \bm{e}_j}$.
\textbf{m-CP:} Canonical Polyadic (CP) decomposition~\citep{CP} embeds each entity $e$ as two vectors $\bm{e}^{(1)}$ and $\bm{e}^{(2)}$, and each relation $r$ as a single vector $\bm{r}$.
CP defines the score function $\phi(r(e_i, e_j))=\dotsum{\bm{r},\bm{e}_i^{(1)},\bm{e}_j^{(2)}}$.
We extend CP to a variant (m-CP) that accommodates non-binary relations, and which
embeds each entity $e$ as $\delta$ different vectors $\bm{e}^{(1)}, \dots, \bm{e}^{(\delta)}$, where $\delta = \max_{r \in \relset}(|r|)$.
m-CP computes the score of a tuple as $\phi(\ensuremath{\tuplew{r}{e_i}{\dots}{e_j}})=\dotsum{\bm{r}, \bm{e}_i^{(1)}, \dots, \bm{e}_j^{(|r|)}}$.
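For clarity, NumPy sketches of the two generalized score functions (the argument layouts are our assumptions):
\begin{verbatim}
import numpy as np

def m_distmult_score(r_emb, entity_embs):
    # One embedding per entity, product-sum over any arity.
    product = r_emb.copy()
    for e in entity_embs:
        product = product * e
    return product.sum()

def m_cp_score(r_emb, entity_embs_by_pos):
    # entity_embs_by_pos[i] holds the delta positional vectors of the
    # entity at tuple position i; m-CP picks the i-th one.
    product = r_emb.copy()
    for i, e_vectors in enumerate(entity_embs_by_pos):
        product = product * e_vectors[i]
    return product.sum()
\end{verbatim}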
\subsection{Evaluation Metrics}
Given a knowledge hypergraph on $\tau'$, we evaluate various completion methods using a train and test set $\tau'_{train}$ and $\tau'_{test}$.
We use two evaluation metrics: Hit@t and Mean Reciprocal Rank~(MRR).
Both these measures rely on the \emph{ranking} of a tuple $x \in \tau'_{test}$ within a set of \emph{corrupted} tuples.
For each tuple $r(e_1, \dots, e_k)$ in $\tau'_{test}$ and each entity position $i$ in the tuple,
we generate $|\entset|-1$ corrupted tuples by replacing
the entity $e_i$ with each of the entities in $\entset \setminus \{e_i\}$.
For example, by corrupting entity $e_i$, we would obtain a new tuple $r(e_1, \dots, e_i^c, \dots, e_k)$ where $e_i^c \in \entset \setminus \{e_i\}$.
Let the set of corrupted tuples, plus $r(e_1, \dots, e_k)$, be denoted by $\theta_i(r(e_1,\dots,e_k))$.
Let $\rank_i(r(e_1, \dots, e_k))$ be the ranking of $r(e_1, \dots, e_k)$ within
$\theta_i(r(e_1, \dots, e_k))$ based on the score $\phi(x)$ for each $x \in \theta_i(r(e_1,\dots,e_k))$.
In an ideal knowledge hypergraph completion method, the rank $\rank_i(r(e_1, \dots, e_k))$
is $1$ among all corrupted tuples.
We compute the MRR as
$$
\frac{1}{K} \sum_{r(e_1, \dots, e_k) \in \tau'_{test}}
\sum_{i=1}^{k}\frac{1}{\rank_{i}(r(e_1, \dots, e_k))}
$$
where $K = \sum_{r(e_1, \dots, e_k) \in \tau'_{test}} |r|$ is the number of prediction tasks.
Hit@t measures the proportion of tuples in $\tau'_{test}$ that rank among top $t$ in their corresponding corrupted sets.
We follow \citet{TransE} and remove all corrupted tuples that are in $\tau'$ from our computation of MRR and Hit@t.
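The evaluation loop can be summarized by the following sketch of filtered MRR and Hit@t; ties are counted in favour of the test tuple (a simplification), and all names are ours:
\begin{verbatim}
import numpy as np

def filtered_mrr_hits(test_tuples, score_fn, entities, known_true, t=10):
    # Corrupt every position of every test tuple, drop corruptions that
    # are known true tuples, and rank the original among the rest.
    ranks = []
    for r, *ents in test_tuples:
        target = (r, *ents)
        for i in range(len(ents)):
            candidates = [(r, *ents[:i], e, *ents[i+1:]) for e in entities]
            scores = {c: score_fn(c) for c in candidates
                      if c == target or c not in known_true}
            rank = 1 + sum(s > scores[target] for s in scores.values())
            ranks.append(rank)
    ranks = np.array(ranks)      # len(ranks) == K prediction tasks
    return (1.0 / ranks).mean(), (ranks <= t).mean()
\end{verbatim}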
\begin{table*}[t]
\footnotesize
\setlength{\tabcolsep}{2pt}
\caption{Knowledge hypergraph completion results on JF17K, \acr{FB-auto} and \acr{m-FB15K} for baselines and the proposed method. The prefixes `r' and `m' in the model names stand for \emph{reification} and \emph{multi-arity} respectively. Both our methods outperform the baselines on all datasets.}
\vspace{-3mm}
\label{table:FB-subsets}
\begin{center}
\resizebox{\columnwidth}{!}{%
\setlength{\tabcolsep}{3pt}
\begin{tabular}{l|cccc|cccc|cccc}
\multicolumn{1}{c}{}
& \multicolumn{4}{c}{JF17K}
& \multicolumn{4}{c}{\acr{FB-auto}} & \multicolumn{4}{c}{\acr{m-FB15K}}\\
\cmidrule(lr){2-5} \cmidrule(lr){6-9}
\cmidrule(lr){10-13}
Model & MRR & Hit@1 & Hit@3 & Hit@10 & MRR & Hit@1 & Hit@3 & Hit@10 & MRR & Hit@1 & Hit@3 & Hit@10\\\hline
r-SimplE & 0.102 & 0.069 & 0.112 & 0.168 & 0.106 & 0.082 & 0.115 & 0.147 & 0.051 & 0.042 & 0.054 & 0.070\\
m-DistMult & 0.460 & 0.367 & 0.510 & 0.635 & 0.784 & 0.745 & 0.815 & 0.845 & 0.705 & 0.633 & 0.740 & 0.844 \\
m-CP & 0.392 & 0.303 & 0.441 & 0.560 & 0.752& 0.704 & 0.785 & 0.837 & 0.680 & 0.605 & 0.715 & 0.828 \\
m-TransH~\citep{m-TransH} & 0.446 & 0.357 & 0.495 & 0.614 & 0.728 & 0.727 & 0.728 & 0.728& 0.623 & 0.531 & 0.669 & 0.809\\
\hline
\oursimple{} (Ours) & 0.472 & 0.375 & 0.523 & 0.649 & 0.798 & 0.766 & 0.821 & 0.855 & 0.730 & 0.664 & 0.763 & 0.859\\
\ourmodel{} (Ours) & \textbf{0.492} & \textbf{0.409} & \textbf{0.533} & \textbf{0.650} & \textbf{0.804} &
\textbf{0.774} & \textbf{0.823} & \textbf{0.856} & \textbf{0.777} & \textbf{0.725} & \textbf{0.800} & \textbf{0.881} \\
\end{tabular}
}
\end{center}
\end{table*}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{CHART.png}
\caption{}
\label{fig:constrained_budget}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{plot.png}
\caption{}
\label{fig:difficult}
\end{subfigure}
\caption{
The above experiments show that \ourmodel{} outperforms \oursimple{} when trained with fewer parameters, and when tested on samples that contain at least one entity in a position never encountered during training.
(a) MRR of \ourmodel{} and \oursimple{} for different embedding dimensions. (b) Results of m-CP, \oursimple{}, and \ourmodel{} on the \emph{missing positions} test set.}
\label{fig:Plots}
\end{figure}
\section{Experiments}\label{sec:experiments}
This section summarizes our experiments with \oursimple{} and \ourmodel{}.
We evaluate both models on knowledge hypergraphs, as well as on knowledge graphs, and show results on training with different embedding dimensions.
Moreover, to test their representation power further, we evaluate \oursimple{} and \ourmodel{} on a more challenging dataset that we describe below.
We also conduct ablation studies based on performance breakdown across different arities.
\subsection{Knowledge Hypergraph Completion Results}
\label{sec:hypergraph results}
The results of our experiments, summarized in Table~\ref{table:FB-subsets}, show that
both
\oursimple{} and \ourmodel{} outperform the proposed baselines across the three datasets JF17K, \acr{FB-auto}, and \acr{m-FB15K}.
They further demonstrate that reification for the r-SimplE model does not work well; this is because the reification process introduces auxiliary entities for which the model does not learn appropriate embeddings, as these auxiliary entities appear in very few facts.
Comparing the results of r-SimplE against \oursimple{}, we can also see that extending a model to work with hypergraphs works better than reification when high-arity relations are present.
The ability to share knowledge through the learned position-dependent convolution filters suggests that \ourmodel{} needs fewer parameters than \oursimple{} to obtain good results. To test this, we train both models with embedding dimensions of $50$, $100$, and $200$. Figure~\ref{fig:constrained_budget} shows the MRR evaluation on the test set for each model with different embedding sizes. Based on the MRR results, we can see that \ourmodel{} outperforms \oursimple{} by $24\%$ for embedding dimension $50$, implying that \ourmodel{} works better under a constrained parameter budget. This difference becomes negligible at embedding dimension $200$.
Disentangling the representations of entity embeddings and positional filters enables \ourmodel{} to better learn the role of position within a relation, because the learning process considers the behaviour of all entities that appear in a given position during training. This becomes especially important when some entities never appear in certain positions in the train set, but we still want to reason about them regardless of the position they appear in at test time.
In order to test the effectiveness of our models in this more challenging scenario, we created a \emph{missing positions} test set by selecting the tuples from our original test set
that contain at least one entity in a position in which it never appears in the train set.
The results on these experiments (Figure~\ref{fig:difficult}) show that (1) both \oursimple{} and \ourmodel{} outperform m-CP (which learns different embeddings for each entity-position pair), and more importantly, (2) \ourmodel{} significantly outperforms \oursimple{} for this challenging test set, leading us to believe that disentangling entity and position representations may be a better strategy for this scenario.
\subsection{Knowledge Graph Completion Results}
To confirm that \oursimple{} and \ourmodel{} still work well on the more common knowledge graphs, we evaluate them on WN18 and FB15K.
Table~\ref{table:results-table-arity2} shows link prediction results on WN18 and FB15K.
Baseline results are taken from the original papers, except those of m-TransH, which we implement ourselves.
Instead of tuning the parameters of \ourmodel{} to get potentially better results, we
follow the setup of \citet{kazemi2018simple} with the same grid search approach, setting $n = 2$, $l = 2$, and $s = 2$.
This results in all models in Table~\ref{table:results-table-arity2}
having the same number of parameters, and thus makes them
directly comparable to each other.
Note that since \oursimple{} is equivalent to SimplE for binary relations (as shown in Section~\ref{HypE}),
we have excluded \oursimple{} from the table.
The results show that on WN18 and FB15K,
\ourmodel{} outperforms all baselines except SimplE, while its performance remains comparable to that of SimplE.
\begin{table*}[t]
\footnotesize
\caption{Knowledge graph completion results on WN18 and FB15K for baselines and \ourmodel{}. Note that we do not include results for \oursimple{} because for knowledge graphs, \oursimple{} is equivalent to SimplE.
The table shows that \ourmodel{} performs similarly to the best baselines for knowledge graphs with binary relations.}
\vspace{-3mm}
\label{table:results-table-arity2}
\begin{center}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{l|cccc|cccc}
\multicolumn{1}{c}{}
& \multicolumn{4}{c}{WN18} & \multicolumn{4}{c}{FB15k}\\
\cmidrule(lr){2-5} \cmidrule(lr){6-9}
Model & MRR & Hit@1 & Hit@3 & Hit@10 & MRR & Hit@1 & Hit@3 & Hit@10\\\hline
CP~\citep{CP} & 0.074 & 0.049 & 0.080 & 0.125 & 0.326 & 0.219 & 0.376 & 0.532 \\
TransH~\citep{TransH} & - & - & - & 0.867 & - & - & - & 0.585\\
m-TransH~\citep{m-TransH} & 0.671 & 0.495 & 0.839 & 0.923 & 0.351 & 0.228 & 0.427 & 0.559 \\
DistMult~\citep{DistMult} & 0.822 & 0.728 & 0.914 & 0.936 & 0.654 & 0.546 & 0.733 & 0.824\\
SimplE~\citep{kazemi2018simple} & \textbf{0.942} & \textbf{0.939} & \textbf{0.944} & \textbf{0.947} & \textbf{0.727} & \textbf{0.660} & 0.773 & 0.838\\
\hline
\ourmodel{} (Ours) & 0.934 & 0.927 & 0.940 & 0.944 & 0.725 & 0.648 & \textbf{0.777} & \textbf{0.856}\\
\end{tabular}
\end{center}
\end{table*}
\subsection{Ablation Study on Different Arities}
We break down the performance of \oursimple{}, \ourmodel{} and each of the baselines
across relations with different arities. Table~\ref{table:ablation-study} shows the Hit@10 results of
the models for each arity in JF17K.
We observe that the proposed models outperform the state-of-the-art and
the baselines in all arities except arity~$6$, which has a total of only $37$ tuples in the train and test sets.
Table~\ref{table:ablation-study} also shows that the performance of all models generally improves
as arity increases.
We note that the train set has far fewer relation types
that are defined on a high number of entities --- JF17K contains only two relation types that admit six entities.
This leads us to hypothesize that
the position and/or entity representations learned for higher arities are optimized for these few relation types.
\begin{table*}[t]
\footnotesize
\caption{Breakdown performance of Hit@10 across relations with different arities on JF17K.}
\vspace{-3mm}
\label{table:ablation-study}
\begin{center}
\begin{tabular}{l|ccccc|c}
\multicolumn{1}{c}{}
& \multicolumn{5}{c}{Arity}\\
\cmidrule(lr){2-6}
Model & 2 & 3 & 4 & 5 & 6 & All\\\hline
r-SimplE & 0.478 & 0.025 & 0.015 & 0.022 & 0.000 & 0.168\\
m-DistMult & 0.359 & 0.591 & 0.745 & 0.869 & 0.359 & 0.635\\
m-CP & 0.305 & 0.517 & 0.679 & 0.870 & 0.875 & 0.560\\
m-TransH~\citep{m-TransH} & 0.316 & 0.563 & 0.762 & 0.925 & \textbf{0.979} & 0.614\\
\hline
\oursimple{} (Ours) & \textbf{0.376} & 0.625 & 0.742 & 0.810 & 0.010 & 0.649\\
\ourmodel{} (Ours) & 0.338 & \textbf{0.626} & \textbf{0.776} & \textbf{0.936} & 0.536 & \textbf{0.650}\\
\end{tabular}
\end{center}
\end{table*}
\section{Conclusions}
Knowledge hypergraph completion is an important problem that has received little attention.
In this work, having introduced two new knowledge hypergraph datasets, baselines, and two new methods for link prediction in knowledge hypergraphs, we hope to kindle interest in the problem.
Unlike knowledge graphs,
hypergraphs have a more complex structure that opens the door to more challenging
questions such as: how do we effectively predict the missing entities in a given (partial) tuple?
Is MRR a good evaluation metric for hypergraphs?
\section{Introduction}
Depth estimation from 2D images is a fundamental task in many applications including scene understanding and reconstruction \cite{Lee2011,moreno2007active,Hazirbas2016FuseNetID}. Having a dense depth map of the real world can be very useful in applications including navigation and scene understanding, augmented reality \cite{Lee2011}, image refocusing \cite{moreno2007active}, and segmentation \cite{Hazirbas2016FuseNetID}. Recent developments in depth estimation focus on using convolutional neural networks (CNNs) to perform 2D to 3D reconstruction.
While the performance of these methods has been steadily increasing, there are still major problems in both the quality and the resolution of these estimated depth maps.
Recent applications in augmented reality, synthetic depth-of-field, and other image effects \cite{Hedman2018,Cao2018,Wang2018} require fast computation of high resolution 3D reconstructions in order to be applicable. For such applications, it is critical to faithfully reconstruct discontinuity in the depth maps and avoid the large perturbations that are often present in depth estimations computed using current CNNs.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{teaser}
\end{center}
\caption{\textbf{Comparison of estimated depth maps:} input RGB images, ground truth depth maps, our estimated depth maps, state-of-the-art results of \cite{Fu2018DeepOR}.}
\label{fig:teaser}
\end{figure}
Based on our experimental analysis of existing architectures and training strategies \cite{Eigen2014,Li2015,Laina2016,Xu2017,Fu2018DeepOR}, we set out with the design goal to develop a simpler architecture that makes training and future modifications easier. Despite, or maybe even due to, its simplicity, our architecture produces depth map estimates of higher accuracy and significantly higher visual quality than those generated by existing methods (see Fig.~\ref{fig:teaser}). To achieve this, we rely on transfer learning, where we repurpose high-performing pre-trained networks originally designed for image classification as our deep features encoder. A key advantage of such a transfer learning-based approach is that it allows for a more modular architecture where future advances in one domain are easily transferred to the depth estimation problem.
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{network_overview}
\end{center}
\caption{\textbf{Overview of our network architecture.} We employ a straightforward encoder-decoder architecture with skip connections. The encoder part is a pre-trained truncated DenseNet-169 \cite{huang2017densely} with no additional modifications. The decoder is composed of basic blocks of convolutional layers applied on the concatenation of the $2\times$ bilinear upsampling of the previous block with the block in the encoder with the same spatial size after upsampling. }
\label{fig:network_overview}
\end{figure*}
\paragraph{Contributions:} Our contributions are threefold. First, we propose a simple transfer learning-based network architecture that produces depth estimations of higher accuracy and quality. The resulting depth maps capture object boundaries more faithfully than those generated by existing methods with fewer parameters and less training iterations. Second, we define a corresponding loss function, learning strategy, and simple data augmentation policy that enable faster learning. Third, we propose a new testing dataset of photo-realistic synthetic indoor scenes, with perfect ground truth, to better evaluate the generalization performance of depth estimating CNNs.
We perform different experiments on several datasets to evaluate the performance and quality of our depth estimating network. The results show that our approach not only outperforms the state-of-the-art and produces high quality depth maps on standard depth estimation datasets, but it also results in the best generalization performance when applied to a novel dataset.
\section{Related Work}
The problem of 3D scene reconstruction from RGB images is an ill-posed problem. Issues such as lack of scene coverage, scale ambiguities, translucent or reflective materials all contribute to ambiguous cases where geometry cannot be derived from appearance. In practice, the more successful approaches for capturing a scene's depth rely on hardware assistance, e.g. using laser or IR-based sensors, or require a large number of views captured using high quality cameras followed by a long and expensive offline reconstruction process. Recently, methods that rely on CNNs are able to produce reasonable depth maps from a single or couple of RGB input images at real-time speeds. In the following, we look into some of the works that are relevant to the problem of depth estimation and 3D reconstruction from RGB input images. More specifically, we look into recent solutions that depend on deep neural networks.
\paragraph{Monocular depth estimation} has been considered by many CNN methods that formulate the problem as a regression of the depth map from a single RGB image \cite{Eigen2014,Laina2016,Xu2017,Hao2018DetailPD,Xu2018StructuredAG,Fu2018DeepOR}. While the performance of these methods has been increasing steadily, general problems in both the quality and resolution of the estimated depth maps leave a lot of room for improvement. Our main focus in this paper is to push towards generating higher quality depth maps with more accurate boundaries using standard neural network architectures. Our preliminary results do indicate that improvements on the state-of-the-art can be achieved by leveraging existing simple architectures that perform well on other computer vision tasks.
\paragraph{Multi-view} stereo reconstruction using CNN algorithms has recently been proposed \cite{Huang2018DeepMVSLM}. Prior work considered the subproblem that looks at image pairs \cite{Ummenhofer2017}, or three consecutive frames \cite{Godard2018DiggingIS}. Joint key-frame based dense camera tracking and depth map estimation was presented by \cite{Zhou2018DeepTAMDT}. In this work, we seek to push the performance of single image depth estimation. We suspect that the features extracted by monocular depth estimators could also help derive better multi-view stereo reconstruction methods.
\paragraph{Transfer learning} approaches have been shown to be very helpful in many different contexts. In recent work, Zamir et al. investigated the efficiency of transfer learning between different tasks~\cite{Zamir2018TaskonomyDT}, many of which are related to 3D reconstruction. Our method is heavily based on the idea of transfer learning, where we make use of image encoders originally designed for the problem of image classification \cite{huang2017densely}. We found that using such encoders that do not aggressively downsample the spatial resolution of the input tends to produce sharper depth estimations, especially with the presence of skip connections.
\paragraph{Encoder-decoder} networks have made significant contributions in many vision related problems such as image segmentation \cite{Ronneberger2015u}, optical flow estimation \cite{Dosovitskiy2015}, and image restoration \cite{LehtinenMHLKAA18}. In recent years, the use of such architectures has shown great success both in the supervised and the unsupervised setting of the depth estimation problem \cite{Godard2017,Ummenhofer2017,Huang2018DeepMVSLM,Zhou2018DeepTAMDT}. Such methods typically use one or more encoder-decoder networks as a sub-part of their larger network. In this work, we employ a single straightforward encoder-decoder architecture with skip connections (see Fig. \ref{fig:network_overview}). Our results indicate that it is possible to achieve state-of-the-art high quality depth maps using a simple encoder-decoder architecture.
\section{Proposed Method} \label{sec:method}
In this section, we describe our method for estimating a depth map from a single RGB image. We first describe the employed encoder-decoder architecture. We then discuss our observations on the complexity of both encoder and decoder and its relation to performance. Next, we propose an appropriate loss function for the given task. Finally, we describe efficient augmentation policies that help the training process significantly.
\subsection{Network Architecture}
\paragraph{Architecture.} Fig. \ref{fig:network_overview} shows an overview of our encoder-decoder network for depth estimation. For our \textit{encoder}, the input RGB image is encoded into a feature vector using the DenseNet-169 \cite{huang2017densely} network pretrained on ImageNet \cite{Deng2009}. This vector is then fed to a successive series of up-sampling layers \cite{LehtinenMHLKAA18}, in order to construct the final depth map at half the input resolution. These upsampling layers and their associated skip-connections form our \textit{decoder}. Our decoder does not contain any Batch Normalization \cite{Ioffe2015BNA} or other advanced layers recommended in recent state-of-the-art methods \cite{Fu2018DeepOR,Hao2018DetailPD}. Further details about the architecture and its layers along with their exact shapes are described in the appendix.
\paragraph{Complexity and performance.} The high performance of our surprisingly simple architecture gives rise to questions about which components contribute the most towards achieving these quality depth maps. We have experimented with different state-of-the-art encoders \cite{Bianco2018}, of more or less complexity than that of DenseNet-169, and we also looked at different decoder types \cite{Laina2016, Wojna2017TheDI}. What we experimentally found is that, in the setting of an encoder-decoder architecture for depth estimation, recent trends of having convolutional blocks exhibiting more complexity do not necessarily help the performance. This leads us to advocate for a more thorough investigation when adopting such complex components and architectures. Our experiments show that a simple decoder made of a $2\times$ bilinear upsampling step followed by two standard convolutional layers performs very well.
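A sketch of one such decoder block, written here in PyTorch; the activation and channel choices are ours for illustration, not the exact training configuration:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpBlock(nn.Module):
    # One decoder step: 2x bilinear upsampling, concatenation with the
    # matching encoder skip connection, then two 3x3 convolutions.
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.convA = nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1)
        self.convB = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[2:], mode='bilinear',
                          align_corners=True)
        x = torch.cat([x, skip], dim=1)
        x = F.leaky_relu(self.convA(x), 0.2)
        return F.leaky_relu(self.convB(x), 0.2)
\end{verbatim}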
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{kitti_fig}
\end{center}
\caption{\textbf{Qualitative results from the KITTI dataset:} input RGB images, our estimated depth maps, state-of-the-art results of \cite{Fu2018DeepOR}.}
\label{fig:kitti}
\end{figure*}
\subsection{Learning and Inference}
\paragraph{Loss Function.} A standard loss function for depth regression problems considers the difference between the ground-truth depth map $y$ and the prediction of the depth regression network $\hat{y}$ \cite{Eigen2014}. Different considerations regarding the loss function can have a significant effect on the training speed and the overall depth estimation performance. Many variations on the loss function employed for optimizing the neural network can be found in the depth estimation literature \cite{Eigen2014,Laina2016,Ummenhofer2017,Fu2018DeepOR}. In our method, we seek to define a loss function that balances between reconstructing depth images by minimizing the difference of the depth values while also penalizing distortions of high frequency details in the image domain of the depth map. These details typically correspond to the boundaries of objects in the scene.
For training our network, we define the loss $L$ between $y$ and $\hat{y}$ as the weighted sum of three loss functions:
\begin{equation}
L(y,\hat{y}) = \lambda L_{depth}(y,\hat{y}) + L_{grad}(y,\hat{y}) + L_{SSIM}(y,\hat{y}).
\end{equation}
The first loss term $L_{depth}$ is the point-wise L1 loss defined on the depth values:
\begin{equation}
L_{depth}(y,\hat{y}) = \frac{1}{n} \sum_{p}^{n} \lvert y_p -\hat{y}_p \rvert.
\end{equation}
The second loss term $L_{grad}$ is the L1 loss defined over the image gradient $\boldsymbol{g}$ of the depth image:
\begin{equation}
L_{grad}(y,\hat{y}) = \frac{1}{n} \sum_{p}^{n} \lvert \boldsymbol{g_\mathrm{x}}(y_p,\hat{y}_p) \rvert + \lvert \boldsymbol{g_\mathrm{y}}(y_p,\hat{y}_p) \rvert
\end{equation}
where $\boldsymbol{g_\mathrm{x}}$ and $\boldsymbol{g_\mathrm{y}}$, respectively, compute the differences in the $\mathrm{x}$ and $\mathrm{y}$ components for the depth image gradients of $y$ and $\hat{y}$.
Lastly, $L_{SSIM}$ uses the Structural Similarity (SSIM) \cite{Wang2004SSIM} term which is a commonly-used metric for image reconstruction tasks. It has been recently shown to be a good loss term for depth estimating CNNs \cite{Godard2017}. Since SSIM has an upper bound of one, we define it as a loss $L_{SSIM}$ as follows:
\begin{equation}
L_{SSIM}(y,\hat{y}) = \frac{1 - SSIM(y,\hat{y})}{2}.
\end{equation}
Note that we only define one weight parameter $\lambda$ for the loss term $L_{depth}$. We empirically found and set $\lambda=0.1$ as a reasonable weight for this term.
An inherent problem with such loss terms is that they tend to be larger when the ground-truth depth values are bigger. In order to compensate for this issue, we consider the reciprocal of the depth \cite{Ummenhofer2017, Huang2018DeepMVSLM}, where for the original depth map $y_{orig}$ we define the target depth map $y$ as $y = m / y_{orig}$ where $m$ is the maximum depth in the scene (e.g. $m=10$ meters for the NYU Depth v2 dataset). Other methods consider transforming the depth values and computing the loss in the log space \cite{Eigen2014,Ummenhofer2017}.
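To make the combined objective concrete, a PyTorch sketch follows; \texttt{ssim\_fn} stands in for any external SSIM implementation (an assumption of this sketch), and tensors are assumed to have their spatial dimensions last:
\begin{verbatim}
import torch

def depth_loss(y_true, y_pred, ssim_fn, lam=0.1):
    # L = lambda * L_depth + L_grad + L_SSIM, computed on reciprocal
    # depth targets y = m / y_orig (m is the scene's maximum depth).
    l_depth = torch.mean(torch.abs(y_true - y_pred))
    # Image gradients via finite differences along y and x.
    dy_t = y_true[..., 1:, :] - y_true[..., :-1, :]
    dx_t = y_true[..., :, 1:] - y_true[..., :, :-1]
    dy_p = y_pred[..., 1:, :] - y_pred[..., :-1, :]
    dx_p = y_pred[..., :, 1:] - y_pred[..., :, :-1]
    l_grad = torch.mean(torch.abs(dy_t - dy_p)) + \
             torch.mean(torch.abs(dx_t - dx_p))
    l_ssim = (1.0 - ssim_fn(y_true, y_pred)) / 2.0
    return lam * l_depth + l_grad + l_ssim
\end{verbatim}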
\paragraph{Augmentation Policy.} Data augmentation, by geometric and photo-metric transformations, is a standard practice to reduce over-fitting leading to better generalization performance \cite{krizhevsky2012imagenet}.
Since our network is designed to estimate depth maps of an entire image, not all geometric transformations would be appropriate, since distortions in the image domain do not always have meaningful geometric interpretations on the ground-truth depth. Applying a vertical flip to an image capturing an indoor scene may not contribute to the learning of expected statistical properties (e.g. geometry of the floors and ceilings). Therefore, we only consider horizontal flipping (i.e. mirroring) of images at a probability of $0.5$. Image rotation is another useful augmentation strategy; however, since it introduces invalid data for the corresponding ground-truth depth, we do not include it.
For photo-metric transformations we found that applying different color channel permutations, e.g. swapping the red and green channels on the input, results in increased performance while also being extremely efficient. We set the probability for this color channel augmentation to $0.25$. Finding improved data augmentation policies and their probability values for the problem of depth estimation is an interesting topic for future work \cite{Cubuk2018AutoAugmentLA}.
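The complete augmentation policy fits in a few lines; the sketch below assumes HWC image layout:
\begin{verbatim}
import random
import numpy as np

def augment(image, depth):
    # Horizontal mirroring with probability 0.5 (applied to both the
    # image and its depth map).
    if random.random() < 0.5:
        image, depth = image[:, ::-1, :], depth[:, ::-1]
    # Random color channel permutation with probability 0.25
    # (applied to the image only).
    if random.random() < 0.25:
        image = image[:, :, np.random.permutation(3)]
    return image, depth
\end{verbatim}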
\section{Experimental Results}
In this section we describe our experimental results and compare the performance of our network to existing state-of-the-art methods. Furthermore, we perform ablation studies to analyze the influence of the different parts of our proposed method. Finally, we compare our results on a newly proposed dataset of high quality depth maps in order to better test the generalization and robustness of our trained model.
\subsection{Datasets}
\paragraph{NYU Depth v2} is a dataset that provides images and depth maps for different indoor scenes captured at a resolution of $640\times480$ \cite{Silberman2012}. The dataset contains 120K training samples and 654 testing samples \cite{Eigen2014}. We train our method on a 50K subset. Missing depth values are filled using the inpainting method of \cite{Levin2004}. The depth maps have an upper bound of 10 meters. Our network produces predictions at half the input resolution, i.e. a resolution of $320\times240$. For training, we take the input images at their original resolution and downsample the ground truth depths to $320\times240$. Note that we do not crop any of the input image-depth map pairs even though they contain missing pixels due to a distortion correction preprocessing. During test time, we compute the depth map prediction of the full test image and then upsample it by $2\times$ to match the ground truth resolution and evaluate on the pre-defined center cropping by Eigen et al. \cite{Eigen2014}. At test time, we compute the final output by taking the average of an image's prediction and the prediction of its mirror image.
\paragraph{KITTI} is a dataset that provides stereo images and corresponding 3D laser scans of outdoor scenes captured using equipment mounted on a moving vehicle \cite{geiger2013vision}. The RGB images have a resolution of around $1241\times376$ while the corresponding depth maps are of very low density with lots of missing data. We train our method on a subset of around 26K images, from the left view, corresponding to scenes not included in the 697 test set specified by \cite{Eigen2014}. Missing depth values are filled using the inpainting method mentioned earlier. The depth maps have an upper bound of 80 meters. Our encoder's architecture expects image dimensions to be divisible by 32 \cite{huang2017densely}, therefore, we upsample images bilinearly to $1280\times384$ during training. During testing, we first scale the input image to the expected resolution and then upsample the output depth image from $624\times192$ to the original input resolution. The final output is computed by taking the average of an image's prediction and the prediction of its mirror image.
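The test-time mirror averaging used for both datasets can be sketched as follows (PyTorch, NCHW layout assumed):
\begin{verbatim}
import torch

def predict(model, image):
    # Average the prediction of an image and the un-flipped prediction
    # of its horizontal mirror.
    with torch.no_grad():
        d = model(image)                               # (B, 1, H, W)
        d_flip = model(torch.flip(image, dims=[3]))
        return 0.5 * (d + torch.flip(d_flip, dims=[3]))
\end{verbatim}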
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{quality_fig}
\end{center}
\caption{\textbf{Qualitative measures.} The left most column shows the input image (top) and its extracted normal map (bottom) using the ground truth depth. For the following columns, the top row visualizes the difference in the thresholded gradient magnitude image of the estimated depths computed using Laina et al. \cite{Laina2016}, Fu et al. \cite{Fu2018DeepOR}, and our method. Bright regions represent false edges while dark regions are remaining missed edges. The middle row shows the corresponding extracted normal maps. The bottom row visualizes the surface normal error. Note that since the method of \cite{Fu2018DeepOR} generates depth maps with sharp steps, computing a reasonable normal map is not straightforward. }
\label{fig:qualitative}
\end{figure*}
\begin{table*}[t]
\centering
\begin{tabular}{l|lll|lll}
\toprule
Method & $\delta_{1}\uparrow$ & $\delta_{2}\uparrow$ & $\delta_{3}\uparrow$ & rel$\downarrow$ & rms$\downarrow$ & $log_{10}\downarrow$ \\
\midrule
Eigen et al. \cite{Eigen2014} & 0.769 & 0.950 & 0.988 & 0.158 & 0.641 & - \\
Laina et al. \cite{Laina2016} & 0.811 & 0.953 & 0.988 & 0.127 & 0.573 & 0.055 \\
MS-CRF \cite{Xu2017} & 0.811 & 0.954 & 0.987 & 0.121 & 0.586 & 0.052 \\
Hao et al. \cite{Hao2018DetailPD}& 0.841 & 0.966 & 0.991 & 0.127 & 0.555 & 0.053 \\
Fu et al. \cite{Fu2018DeepOR} & 0.828 & 0.965 & 0.992 & \textbf{0.115} & 0.509 & \textbf{0.051} \\
Ours & \textbf{0.846} & \textbf{0.974} & \textbf{0.994} & 0.123 & \textbf{0.465} & 0.053 \\
\midrule
Ours (scaled) & \textbf{0.895} & \textbf{0.980} & \textbf{0.996} & \textbf{0.103} & \textbf{0.390} & \textbf{0.043} \\
\bottomrule
\end{tabular}
\bigskip
\caption{\textbf{Comparisons of different methods on the NYU Depth v2 dataset.} The reported numbers are from the corresponding original papers. The last row shows results obtained using our method with applied scaling that matches the median with the ground truth \cite{Zhou2017}. }
\label{tab:1}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{l|lll|llll}
\toprule
Method & $\delta_{1}\uparrow$ & $\delta_{2}\uparrow$ & $\delta_{3}\uparrow$ & rel$\downarrow$ & sq. rel$\downarrow$ & rms$\downarrow$ & $log_{10}\downarrow$ \\
\midrule
Eigen et al. \cite{Eigen2014} & 0.692 & 0.899 & 0.967 & 0.190 & 1.515 & 7.156 & 0.270 \\
Godard et al. \cite{Godard2017} & 0.861 & 0.949 & 0.976 & 0.114 & 0.898 & 4.935 & 0.206 \\
Kuznietsov et al. \cite{Kuznietsov2017} & 0.862 & 0.960 & 0.986 & 0.113 & 0.741 & 4.621 & 0.189 \\
Fu et al. \cite{Fu2018DeepOR} & \textbf{0.932} & \textbf{0.984} & \textbf{0.994} & \textbf{0.072} & \textbf{0.307} & \textbf{2.727} & \textbf{0.120} \\
Ours & \underline{0.886} & \underline{0.965} & \underline{0.986} & \underline{0.093} & \underline{0.589} & \underline{4.170} & \underline{0.171} \\
\bottomrule
\end{tabular}
\bigskip
\caption{\textbf{KITTI dataset.} We compare our method against the state-of-the-art on this dataset. Measurements are made for the depth range from $0m$ to $80m$. The best results are bolded, and the second best are underlined.}
\label{tab:kitti}
\end{table*}
\subsection{Implementation Details}
We implemented our proposed depth estimation network using TensorFlow \cite{tensorflow2015-whitepaper} and trained on four NVIDIA TITAN Xp GPUs with 12GB memory. Our encoder is a DenseNet-169 \cite{huang2017densely} pretrained on ImageNet \cite{Deng2009}. The weights for the decoder are randomly initialized following \cite{glorot2010understanding}. In all experiments, we used the ADAM \cite{jlb2015adam} optimizer with learning rate $0.0001$ and parameter values $\beta_1=0.9$, $\beta_2=0.999$. The batch size is set to 8. The total number of trainable parameters for the entire network is approximately 42.6M parameters. Training is performed for 1M iterations for NYU Depth v2, needing 20 hours to finish. Training for the KITTI dataset is performed for 300K iterations, needing 9 hours to train.
\subsection{Evaluation}
\paragraph{Quantitative evaluation.} We quantitatively compare our method against state-of-the-art using the standard six metrics used in prior work \cite{Eigen2014}. These error metrics are defined as:
\begin{itemize}
\item average relative error (rel): $\frac{1}{n}\sum_p^n \frac{\lvert y_p-\hat{y}_p \rvert}{y_p}$;
\item root mean squared error (rms): $\sqrt{\frac{1}{n}\sum_p^n (y_p-\hat{y}_p)^2}$;
\item average ($\log_{10}$) error: $\frac{1}{n}\sum_p^n \lvert \log_{10}(y_p)-\log_{10}(\hat{y}_p) \rvert$;
\item threshold accuracy ($\delta_i$): $\%$ of $y_p$ s.t. $\max(\frac{y_p}{\hat{y}_p},\frac{\hat{y}_p}{y_p}) < thr$ for $thr=1.25,1.25^2,1.25^3$;
\end{itemize}
where $y_p$ is a pixel in depth image $y$, $\hat{y}_p$ is a pixel in the predicted depth image $\hat{y}$, and $n$ is the total number of pixels for each depth image.
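These metrics translate directly into a few lines of NumPy; this is a sketch over one depth map pair, assuming all pixels are valid:
\begin{verbatim}
import numpy as np

def depth_metrics(y, y_hat):
    rel = np.mean(np.abs(y - y_hat) / y)
    rms = np.sqrt(np.mean((y - y_hat) ** 2))
    log10 = np.mean(np.abs(np.log10(y) - np.log10(y_hat)))
    ratio = np.maximum(y / y_hat, y_hat / y)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return rel, rms, log10, deltas
\end{verbatim}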
\paragraph{Qualitative results.} We conduct three experiments to approximately evaluate the quality of the results using three measures on the NYU Depth v2 test set. The first measure is a perception-based qualitative metric that measures the quality of the results by looking at the similarity of the resulting depth maps in image space. We do so by rendering a gray scale visualization of the ground truth and that of the predicted depth map and then we compute the mean structural similarity term (mSSIM) of the entire test dataset $\frac{1}{T} \sum_i^T {SSIM}(y_i,\hat{y}_i)$. The second measure considers the edges formed in the depth map. For each sample, we compute the gradient magnitude image of both the ground truth and the predicted depth image, using the Sobel gradient operator \cite{Sobel1968}, and then threshold this image at values greater than 0.5 and compute the F1 score averaged across the set. The third measure is the mean cosine distance between normal maps extracted from the depth images of the ground truth and the predicted depths also averaged across the set. Fig. \ref{fig:qualitative} shows visualizations of some of these measures.
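As an example of the second measure, the edge F1 score can be sketched as follows; the helper names and the epsilon guard are ours:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def edge_f1(y, y_hat, thr=0.5):
    # F1 between thresholded Sobel gradient magnitudes of the
    # ground-truth and predicted depth maps.
    def edges(d):
        gx, gy = ndimage.sobel(d, axis=1), ndimage.sobel(d, axis=0)
        return np.hypot(gx, gy) > thr
    e, e_hat = edges(y), edges(y_hat)
    tp = np.logical_and(e, e_hat).sum()
    precision = tp / max(e_hat.sum(), 1)
    recall = tp / max(e.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-6)
\end{verbatim}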
Fig. \ref{fig:gallery} shows a gallery of depth estimation results that are predicated using our method along with a comparison to those generated by the state-of-the-art.
As can be seen, our approach produces depth estimations at higher quality where depth edges better match those of the ground truth and with significantly fewer artifacts.
\begin{table}[t]
\centering
\begin{tabular}{l|lll}
\toprule
Method & mSSIM$\uparrow$ & F1$\uparrow$ & mne$\downarrow$ \\
\midrule
Laina et al. \cite{Laina2016} & 0.957 & 0.395 & 0.698 \\
Fu et al. \cite{Fu2018DeepOR} & 0.949 & 0.351 & 0.730 \\
\textbf{Ours} & \textbf{0.968} & \textbf{0.519} & \textbf{0.636} \\
\bottomrule
\end{tabular}
\bigskip
\caption{\textbf{Qualitative evaluation.} For the NYU Depth v2 testing set, we compute three measures that reflect the quality of the depth maps generated by different methods. The measures are: mean SSIM of the depth maps, mean F1 score of the edge maps, and mean of the surface normal errors. Higher values indicate better quality for the first two measures while lower values are better for the third. }
\label{tab:2}
\end{table}
\subsection{Comparing Performance}
In Tab. \ref{tab:1}, the performance of our depth estimating network is compared to the state-of-the-art on the NYU Depth v2 dataset. As can be seen, our model achieves state-of-the-art on all but two quantitative metrics. Our model is able to outperform the existing state-of-the-art \cite{Fu2018DeepOR} while requiring fewer parameters, 42.6M vs 110M, fewer training iterations, 1M vs 3M, and less training data, 50K samples vs 120K samples. A typical source of error for single image depth estimation networks is the estimated absolute scale of the scene. The last row in Tab. \ref{tab:1} shows that when accounting for this error, by multiplying the predicted depths by a scalar that matches the median with the ground truth \cite{Zhou2017}, we achieve state-of-the-art on all metrics for the NYU Depth v2 dataset by a good margin. The results in Tab. \ref{tab:2} show that for the same dataset our method outperforms the state-of-the-art on our defined quality approximating measures. We conduct these experiments for methods with published pre-trained models and code.
In Tab. \ref{tab:kitti}, the performance of our network is compared to the state-of-the-art on the KITTI dataset. Our method is the second best on all the standard metrics. We suspect that one reason our method does not outperform the state-of-the-art on this particular dataset is due to the nature of the provided depth maps. Since our loss function is designed to not only consider point-wise differences but also optimize for edges and appearance preservation by looking at regions around each point, the learning process does not converge well for very sparse depth images. Fig. \ref{fig:kitti} clearly shows that while quantitatively our method might not be the best, the quality of the produced depth maps is much better than those produced by the state-of-the-art.
\subsection{Ablation Studies}
We perform ablation studies to analyze the details of our proposed architecture. Fig. \ref{fig:ablation} shows a representative look into the testing performance, in terms of validation loss, when changing some parts of our standard model or modifying our training strategy. Note that we performed these tests on a smaller subset of the NYU Depth v2 dataset.
\paragraph{Encoder depth.} In this experiment we substitute the pretrained DenseNet-169 with a denser encoder, namely the DenseNet-201. In Fig. \ref{fig:ablation} (red), we can see the validation loss is lower than that of our standard model. The big caveat, though, is that the number of parameters in the network grows by more than $2\times$. When considering using DenseNet-201 as our encoder, we found that the gains in performance did not justify the slow learning time and the extra GPU memory required.
\paragraph{Decoder depth.} In this experiment we apply a depth-reducing convolution such that the features feeding into the decoder are half of what they are in the standard DenseNet-169. In Fig. \ref{fig:ablation} (blue), we see a reduction in performance and overall instability. Since these experiments are not representative of a full training session, the performance difference caused by halving the features might not be as visible as what we observed when running full training sessions.
\paragraph{Color Augmentation.} In this experiment, we turn off our color channel swapping-based data augmentation. In Fig. \ref{fig:ablation} (green), we can see a significant reduction in performance as the model quickly falls into overfitting on the training data. We think this simple data augmentation and its significant effect on the neural network is an interesting topic for future work.
\subsection{Generalizing to Other Datasets}
To illustrate how well our method generalizes to other datasets, we propose a new dataset of photo-realistic indoor scenes with nearly perfect ground truth depths. These scenes are collected from the Unreal marketplace community \cite{UnrealMarket2018}. We refer to this dataset as \textbf{Unreal-1k}. It is a random sampling of 1000 images with their corresponding depth maps selected from renderings of 32 virtual scenes using the Unreal Engine. Further details about this dataset can be found in the appendix. We compare our NYU Depth v2 trained model to two supervised methods that are also trained on the same dataset. For inference, we use the public implementations of each method. The hope of this experiment is to demonstrate how well models trained on one dataset perform when presented with data sampled from a different distribution (i.e. synthetic vs. real, perfect depth capturing vs. a Kinect, etc.).
Tab. \ref{tab:3} shows quantitative comparisons in terms of the average errors over the entire Unreal-1k dataset. As can be seen, our method outperforms the other two methods. We also compute the qualitative measure mSSIM described earlier. Fig. \ref{fig:unreal} presents a visual comparison of the different predicted depth maps against the ground truth.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{ablation}
\end{center}
\caption{\textbf{Ablation Studies.} Three variations on our standard model are considered. \emph{DenseNet-201} (red) refers to a deeper version of the encoder. The \emph{half decoder} variation (blue) represents the model with only half the features coming out of the last layer in the encoder. Lastly, we consider the performance when disabling the \emph{color-swapping} data augmentations (green). }
\label{fig:ablation}
\end{figure}
\begin{table*}[t]
\centering
\begin{tabular}{l|lll|lll|l}
\toprule
Method & $\delta_{1}\uparrow$ & $\delta_{2}\uparrow$ & $\delta_{3}\uparrow$ & rel$\downarrow$ & rms$\downarrow$ & $log_{10}\downarrow$ & mSSIM$\uparrow$ \\
\midrule
Laina et al. \cite{Laina2016} & 0.526 & 0.786 & 0.896 & 0.311 & 1.049 & 0.130 & 0.903 \\
Fu et al. \cite{Fu2018DeepOR} & \textbf{0.545} & 0.794 & 0.898 & 0.313 & 1.040 & 0.128 & 0.895 \\
\textbf{Ours} & 0.544 & \textbf{0.803} & \textbf{0.904} & \textbf{0.301} & \textbf{1.030} & \textbf{0.125} & \textbf{0.910} \\
\bottomrule
\end{tabular}
\bigskip
\caption{\textbf{Comparisons of different methods on the Unreal-1k dataset.} Both the quantitative and qualitative metrics are presented. Note that even for the best performing methods the errors are still considerably large.}
\label{tab:3}
\end{table*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{gallery_large}
\end{center}
\caption{\textbf{A gallery of estimated depth maps on the NYU Depth v2 dataset:} input RGB images, ground truth depth maps, state-of-the-art results of \cite{Fu2018DeepOR} (provided by the authors), our estimated depth maps. Note that, for better visualization, we normalize all depth maps with respect to the range in its specific ground truth. }
\label{fig:gallery}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{unreal}
\end{center}
\caption{\textbf{Visual comparison of estimated depth maps on the Unreal-1k dataset:} input RGB images, ground truth depth maps, results using Laina et al. \cite{Laina2016}, our estimated depth maps, results of Fu et al. \cite{Fu2018DeepOR}.}
\label{fig:unreal}
\end{figure*}
\section{Conclusion}
In this work, we proposed a convolutional neural network for depth map estimation from single RGB images by leveraging recent advances in network architecture and the availability of high performance pre-trained models. We showed that having a well constructed encoder that is initialized with meaningful weights can outperform state-of-the-art methods that rely on either expensive multistage depth estimation networks or on designing and combining multiple feature encoding layers. Our method achieves state-of-the-art performance on the NYU Depth v2 dataset and our proposed Unreal-1k dataset. Our aim in this work is to push towards generating higher quality depth maps that capture object boundaries more faithfully, and we have shown that this is indeed possible using existing architectures. Following our simple architecture, one avenue for future work is to substitute the proposed encoder with a more compact one in order to enable quality depth map estimation on embedded devices. We believe there are still many ways of leveraging standard encoder-decoder models alongside transfer learning for high quality depth estimation. Many questions on the limits of our proposed network, and on identifying more clearly the effect on performance and the contribution of different encoders, augmentations, and learning strategies, are all interesting to pursue in future work.
\section{Introduction}
Throughout this paper, we let $I$ be a finite set of positive integers. We will also use the standard notation $[n]$ to represent the set $\{1, 2, ..., n\}$, and for $m<n$, we let $[m, n]$ represent the set $\{m, m+1, ..., n\}$.
Consider the symmetric group $\mathfrak{S}_n$ of permutations $w = [w_1,w_2,...,w_n]$ of $[n]$. A \textit{descent} of $w$ is an index $i$ satisfying $w_i > w_{i+1}$, and the \textit{descent set} of $w$ is $$\text{Des}(w) := \{i\hspace{0.08cm}|\hspace{0.08cm} i\text{ is a descent of }w\} \subseteq [n-1].$$ For example, $\text{Des}([1,7,3,2,6,4,5]) = \{2, 3, 5\}$. Next, consider the set of all permutations of $[n]$ with a given descent set, $$D(I;n) := \{w \in \mathfrak{S}_n \hspace{0.08cm}|\hspace{0.08cm} \text{Des}(w) = I \},$$ and its cardinality $$d(I;n) := \#D(I;n).$$
Using the Principle of Inclusion and Exclusion, MacMahon \cite{mac} proved in 1915 that $d(I;n)$ is in fact a polynomial in $n$, for a fixed finite set $I$. We call $d(I;n)$ the \textit{descent polynomial} of $I$. For the next century, little detailed work was done on these descent polynomials, until Diaz-Lopez et al. \cite{DLHIOS} published a paper on them in 2017. In that paper, Diaz-Lopez et al. provide recursions for $d(I;n)$ and extensively study algebraic properties of descent polynomials. Some of their results include a theorem about the positivity of coefficients of $d(I;n)$ when expressed in a Newton basis, as well as bounds on roots of descent polynomials.
Along the lines of descents, we can also define a \textit{peak} of a permutation $w$ as an index $i$ satisfying $w_{i-1} < w_i > w_{i+1}$. Analogously, we can define the peak set of a permutation $w$ as $$\text{Peak}(w) := \{i \hspace{0.08cm}|\hspace{0.08cm} i \text{ is a peak of }w\} \subseteq [2,n-1].$$ Following this definition is $P(I;n) := \{w \in \mathfrak{S}_n \hspace{0.08cm}|\hspace{0.08cm} \text{Peak}(w) = I \}$. In 2013, Billey et al. \cite{bbs} studied $\#P(I;n)$ as a function of $n$ for fixed $I$ and showed that in general it is not polynomial, but of the form $p(I;n)2^{n-\#I-1}$, where $p(I;n)$ is a polynomial. This is called the \textit{peak polynomial} of $I$. Billey et al. also presented a recursion for $p(I;n)$, and studied formulas for $p(I;n)$ given a specific set $I$.
We now move on to \textit{double descents}, which we investigate in this paper. A double descent of a permutation $w$ is an index $i$ satisfying $w_{i-1} > w_i > w_{i+1}$. Next, we define $$\text{DDes}(w) := \{i \hspace{0.08cm}|\hspace{0.08cm} i \text{ is a double descent of }w \} \subseteq [2,n-1],$$ and analogously, $$DD(I;n) := \{w\in \mathfrak{S}_n \hspace{0.08cm}|\hspace{0.08cm} \text{DDes}(w) = I \},$$ and $dd(I;n) := \#DD(I;n)$. For example, $DD(\{2\}; 4) = \{[3,2,1,4], [4,3,1,2], [4,2,1,3]\}$, so $dd(\{2\}; 4) = 3$.
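To make these definitions concrete, the following brute-force sketch (an illustration of ours, not part of any proof) enumerates $\mathfrak{S}_n$ directly and reproduces, for instance, $dd(\{2\}; 4) = 3$:
\begin{verbatim}
from itertools import permutations

def double_descents(w):
    # 1-based indices i with w_{i-1} > w_i > w_{i+1}
    n = len(w)
    return {i + 1 for i in range(1, n - 1)
            if w[i - 1] > w[i] > w[i + 1]}

def dd(I, n):
    # number of permutations of [n] with double descent set I
    return sum(1 for w in permutations(range(1, n + 1))
               if double_descents(w) == set(I))

assert dd({2}, 4) == 3
\end{verbatim}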
The paper is structured as follows. We start off in Section \ref{woddes} where we discuss known results about permutations without double descents. After that, we discuss permutations with singleton double descent sets in Section \ref{singletonddes}. In particular, we present a recursion for $dd(I;n)$ for singleton $I = \{k\}$, which allows us to express $dd(\{k\};n)$ in terms of $dd(\{l\};m)$ for $l<k$ and $m<n$. We also discuss a method for estimating values of $dd(I; n)$, again for singleton sets $I$. In the next section (\ref{rimhooks}), we analyze certain classes of rim hooks associated with singleton and empty double descent sets, and we also provide theorems regarding the sizes of these classes of rim hooks. While discussing rim hooks, we develop the theory of \textit{minimal elements}, which is useful in several proofs. Afterwards, we take a quick look at \textit{circular permutations} in Section \ref{other}, another permutation-associated object (just like rim hooks). Then, in Section \ref{conjs}, we bring up conjectures obtained from studying patterns in computer-generated data. Most importantly, we discuss a conjecture that highlights a large difference between descents and double descents, as well as the so-called ``down up down up'' conjecture, which reveals an interesting pattern in data concerning singleton double descent sets. Finally, we conclude with a section on future research questions.
\section{Permutations Without Double Descents}\label{woddes}
In this section, we begin our discussion of permutations and double descents by discussing current results in the literature. We start off by considering the specific case of permutations with no double descents and no initial descent, which will build up to permutations with no double descents in general. That is, we are considering all $w \in \mathfrak{S}_n$ such that $\text{DDes}(w) = \emptyset$ and $w_1 < w_2$. We will use $b_n$ to denote the number of such permutations in $\mathfrak{S}_n$. On OEIS \cite{b_n}, Michael Somos presents the following recursion for the sequence $b_n$, which is useful for finding a generating function for $b_n$.
\begin{proposition}[\cite{somos}]\label{2.1}
The function $b_n$ satisfies the following recursion: $$b_{n+1} = \displaystyle \Bigg(\sum_{k=0}^n\dbinom{n}{k}b_kb_{n-k}\Bigg) - b_n.$$
\end{proposition}
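As a quick check, starting from the initial values $b_0 = b_1 = 1$, the recursion gives $$b_2 = \dbinom{1}{0}b_0b_1 + \dbinom{1}{1}b_1b_0 - b_1 = 1, \qquad b_3 = 3, \qquad b_4 = 9;$$ the value $b_4 = 9$ reappears in the example of Section \ref{singletonddes}.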
On the same OEIS reference to Somos' recurrence, Peter Bala provides an exponential generating function for $b_n$. This is useful for computing $dd(\emptyset; n)$.
\begin{proposition}[\cite{bala}]\label{2.2}
The exponential generating function for $b_n$ is $\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}\tan\left(\dfrac{\sqrt{3}}{2}x + \dfrac{\pi}{6}\right)$.
\end{proposition}
The following recursion, which relates $dd(\emptyset; n)$ and $b_n$, is given by Emanuele Munarini on OEIS \cite{a(n)}.
\begin{proposition}[\cite{munarini}]
The function $dd(\emptyset; n)$ satisfies the following recursion: $$dd(\emptyset; n+1) = \displaystyle\sum_{k=0}^n\dbinom{n}{k}\cdot dd(\emptyset; k)\cdot b_{n-k}.$$
\end{proposition}
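As a consistency check, taking $n = 3$ in this recursion gives $$dd(\emptyset; 4) = \dbinom{3}{0}\cdot 1\cdot b_3 + \dbinom{3}{1}\cdot 1\cdot b_2 + \dbinom{3}{2}\cdot 2\cdot b_1 + \dbinom{3}{3}\cdot 5\cdot b_0 = 3 + 3 + 6 + 5 = 17,$$ using $dd(\emptyset;0) = dd(\emptyset;1) = 1$, $dd(\emptyset;2) = 2$, and $dd(\emptyset;3) = 5$.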
This recursion, along with Proposition \ref{2.2}, can be used to prove the formula for the exponential generating function of $dd(\emptyset; n)$ given by Noam Elkies on OEIS \cite{a(n)}.
\begin{proposition}[\cite{elkies}]
The exponential generating function for $dd(\emptyset; n)$ is $\dfrac{\frac{\sqrt{3}}{2}\cdot e^{\frac{x}{2}}}{\cos\left(\frac{\sqrt{3}}{2}x + \frac{\pi}{6}\right)}$.
\end{proposition}
These results provide most of the background on permutations whose double descent set is the empty set. We now proceed to study permutations which have singleton double descent sets.
\section{Singleton Double Descent Sets}\label{singletonddes}
The main enumeration theorem of this section is the following recursion for $dd(I;n)$ when $I$ is a singleton set.
\begin{theorem}\label{recursion1}
Let $I = \{m\}$ be a singleton set. Then we have
\begin{equation}\label{eq1}
\begin{split}
dd(I; n+1) = & \Bigg(\sum_{k = m+1}^n \dbinom{n}{k}\cdot dd(I; k)\cdot b_{n-k}\Bigg) \\
& + \dbinom{n}{m-2}\cdot dd(\emptyset;m-2)\cdot \big(dd(\emptyset;n-m+2) - b_{n-m+2}\big) \\
& + \Bigg(\sum_{k=0}^{m-4}\dbinom{n}{k}\cdot dd(\emptyset;k)\cdot
c(\{m-1-k\}; n-k)\Bigg)
\end{split}
\end{equation}
where $c(I; n)$ denotes the number of permutations in $\mathfrak{S}_n$ with an initial ascent and with double descent set $I$.
\end{theorem}
\begin{proof}
To construct a permutation $w \in \mathfrak{S}_{n+1}$ with a double descent at $m$, we first consider possible values of $w^{-1}(n+1)$. Because there is a double descent at $m$, we have $w_{m-1} > w_m > w_{m+1}$, so $w^{-1}(n+1) \notin \{m, m+1\}$, since $w_m$ and $w_{m+1}$ are both smaller than $w_{m-1}$. Also, $w^{-1}(n+1)\neq m-2$; otherwise, there would be a double descent at index $m-1$ since we have $w_{m-1} > w_m$. Thus, $w^{-1}(n+1) \in [m+2, n+1] \cup \{m-1\} \cup [m-3]$. Suppose $w^{-1}(n+1) \in [m+2, n+1]$. Then we can choose $k \in [m+1, n]$ elements of $[n]$ to form a permutation to the left of $n+1$ with a double descent at $m$, and the remaining $n-k$ elements of $[n]$ form a permutation to the right of $n+1$ with no initial descent and no double descents. For a given $k$, there are $$\binom{n}{k}\cdot dd(I; k) \cdot b_{n-k}$$ ways to do this, so summing over all valid $k$ gives the first term of \eqref{eq1}. Next, suppose $w^{-1}(n+1) = m-1$. Then we must have a permutation of length $m-2$ to the left of $n+1$ with no double descents, and a permutation of length $n - (m-2)$ to the right of $n+1$ with an initial descent (which creates the double descent at index $m$) but no double descents. There are $$\binom{n}{m-2}\cdot dd(\emptyset; m-2) \cdot\big(dd(\emptyset;n-m+2) - b_{n-m+2}\big)$$ such permutations, where the last factor counts the permutations with an initial descent but no double descents. This gives the second term of \eqref{eq1}. Finally, suppose $w^{-1}(n+1) \in$ $[m-3]$. Then we can choose $0 \leq k \leq m-4$ elements to the left of $n+1$ to form a permutation with no double descents, and the remaining $n-k$ elements form a permutation to the right of $n+1$ with a double descent at $m-1-k$ (which is the $m$th spot in the entire permutation $w \in \mathfrak{S}_{n+1}$) and no initial descent. For a given $k$, there are $$\binom{n}{k}\cdot dd(\emptyset; k) \cdot c(\{m-1-k\};n-k)$$ ways to do this. Summing over all valid $k$ gives the third and final term of \eqref{eq1}.
\end{proof}
We do not yet have much information on $c(I;n)$, but because it counts a subset of $DD(I;n)$, it appears to follow a regular pattern, which is summed up in the following conjecture.
\begin{conjecture}\label{c(n)estimate}
The limit $\lim_{n \rightarrow \infty}\frac{c(\{m\}; n)}{dd(\{m\}; n)}$ exists for a fixed $m \in \mathbb{N}$. That is, we can estimate $c(\{m\}; n)$ as $dd(\{m\}; n)\cdot C(m)$, where $C(m)$ is some constant depending on $m$. Estimates for the first few values of $C(m)$ are:
\emph{\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$m$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
$C(m)$ & 1 & 0.3941 & 0.6362 & 0.5056 & 0.5676 & 0.5359 & 0.5515 \\
\hline
\end{tabular}
\end{center}}
\end{conjecture}
For a fixed $m$, the values of $\frac{c(\{m\}; n)}{dd(\{m\}; n)}$ oscillate as $n$ increases, and they appear to converge to some limit. Thus, we can accurately estimate $C(m)$ by averaging the values of $\frac{c(\{m\}; n)}{dd(\{m\}; n)}$ over $n$. The estimates in the table above were obtained by averaging these ratios (for each fixed $m$) up to $n = 12$; the successive values of $\frac{c(\{m\}; n)}{dd(\{m\}; n)}$ were already within $0.001$ of each other by $n=7$.
\begin{example}
Using Theorem \ref{recursion1} and assuming Conjecture \ref{c(n)estimate}, we can estimate the value of $dd(\{m\}; n+1)$. Suppose $n+1=9$ and $m=6$. Then the theorem gives
\begin{align*}
dd(\{6\}; 9) = & \hspace{0.08cm} \sum_{k=7}^8\dbinom{8}{k}\cdot dd(\{6\}; k)\cdot b_{8-k} + \dbinom{8}{4}\cdot dd(\emptyset;4)\cdot\big(dd(\emptyset;4)-b_4\big)\\
+ & \hspace{0.08cm}\sum_{k=0}^2\dbinom{8}{k}\cdot dd(\emptyset;k)\cdot c(\{5-k\};8-k) \\ = & \hspace{0.08cm} \dbinom{8}{7}\cdot 426 \cdot 1 + \dbinom{8}{8}\cdot 2491\cdot 1 + \dbinom{8}{4}\cdot17\cdot(17-9) + \sum_{k=0}^2\dbinom{8}{k}\cdot dd(\emptyset;k)\cdot c(\{5-k\};8-k).
\end{align*}
Using the estimation given by Conjecture \ref{c(n)estimate}, we can simplify this to
\begin{align*}
= & \hspace{0.08cm} 15419 + \sum_{k=0}^2\dbinom{8}{k}\cdot dd(\emptyset;k)\cdot c(\{5-k\};8-k) \\ \approx &\hspace{0.08cm} 15419 + \sum_{k=0}^2\dbinom{8}{k}\cdot dd(\emptyset;k)\cdot dd(\{5-k\}; 8-k) \cdot C(5-k) \\ = & \hspace{0.08cm} 15419 + \dbinom{8}{0} \cdot 1 \cdot 2904 \cdot 0.6362 + \dbinom{8}{1} \cdot 1 \cdot 462 \cdot 0.3941 + \dbinom{8}{2} \cdot 2 \cdot 66 \cdot 1 \\ = & \hspace{0.08cm} 22419.118.
\end{align*}
The actual value of $dd(\{6\}; 9)$ is 22419, so the estimate is off by only $0.00053\%$.
\end{example}
\section{Rim Hooks}\label{rimhooks}
One important object associated with permutations, the rim hook, arises from considering permutations as \textit{rim hook tableaux}. Rim hooks are connected skew shapes that do not contain any $2 \times 2$ block of squares. The following are examples of rim hooks:
\vspace{0.3cm}
\begin{center}
\ydiagram{1} \hspace{1cm} \ydiagram{1+2,2,1,1} \hspace{1cm}
\ydiagram{1+1,1+1,2,1} \hspace{1cm} \ydiagram{1,1,1}
\end{center}
\vspace{0.3cm}
We use the standard skew shape notation to represent these rim hooks. For example, the second rim hook from the above left is written as $(3,2,1,1)/(1)$, and the third rim hook from the above left is written as $(2,2,2,1)/(1,1)$. This notation is explained as follows: the first tuple of numbers represents the Young diagram which contains the rim hook. In the example of $(2,2,2,1)/(1,1)$, the numbers $(2,2,2,1)$ correspond to a Young diagram with 2 squares in the first row, 2 in the second, 2 in the third, and 1 in the fourth. The tuple of numbers after the slash represents the number of squares to remove from the rows of the specified Young diagram, in order to create the desired rim hook. So, the tuple $(1,1)$ in $(2,2,2,1)/(1,1)$ means we remove 1 square from the first row (starting on the left side of the row) of the aforementioned Young diagram, as well as 1 square from the second row, thus creating a rim hook. Also, the notation $|\mathfrak{r}|$ for a rim hook $\mathfrak{r}$ (and more generally, a skew shape) will denote the total number of squares in $\mathfrak{r}$.
A \textit{rim hook tableau} is formed by filling in the squares of a rim hook with the numbers $1$ through $n$, where $n$ is the number of squares in the rim hook, or the \textit{length} of the rim hook. A rim hook tableau also must satisfy the two following rules: for every two vertically adjacent squares, the upper square must contain the smaller number, and for every two horizontally adjacent squares, the left square must contain the smaller number. For example:
\vspace{0.3cm}
\begin{center}
\ytableaushort{\none312,45} \text{ is not a valid rim hook tableau, but } \ytableaushort{\none124,35} \text{ is valid.}
\end{center}
\vspace{0.3cm}
Rim hooks can be used to encode the descent information of a permutation. This idea can be explained as follows: by reading a rim hook tableau from the bottom left to top right, following adjacent squares, we can reconstruct a permutation. For example, the above tableau on the right corresponds to the permutation [3,5,1,2,4] $\in \mathfrak{S}_5$. The rim hook of [3,5,1,2,4] precisely encodes a permutation in $\mathfrak{S}_5$ with a single descent at index 2. Any other permutation whose rim hook tableau has the same shape, such as [2,5,1,3,4], will have the same descents. Therefore, a rim hook of a certain shape will generate rim hook tableaux that correspond to permutations which all have the same descent set. This is how rim hooks can ``encode'' descent sets (and analogously, double descent sets).
For example, the following are the rim hooks which generate permutations in $\mathfrak{S}_6$ with double descent set $\{2\}$:
\vspace{0.3cm}
\begin{center}
\ydiagram{1+2,2,1,1} \hspace{1cm} \ydiagram{2+1,3,1,1} \hspace{1cm} \ydiagram{4,1,1}
\end{center}
\vspace{0.3cm}
Some permutations with corresponding rim hook tableaux (to the rim hooks above, in that order) are [6,3,2,4,1,5], [5,4,1,2,6,3], and [4,3,2,1,5,6], all of which have a double descent at index 2.
We will use the notation $\mathcal{R}_I(n)$ to denote the set of all rim hooks of length $n$ which correspond to permutations with double descent set $I$. For example, the 3 rim hooks above are the elements of $\mathcal{R}_{\{2\}}(6)$. We can count the number of such rim hooks for singleton sets $I$ with the following formula.
\begin{theorem}\label{rimhookrec}
$\#\mathcal{R}_{\{m\}}(n) = F_{n-m}F_{m-1}$, where $F_n$ is the $n$th Fibonacci number.
\end{theorem}
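For instance, for $m = 2$ and $n = 6$ the theorem gives $\#\mathcal{R}_{\{2\}}(6) = F_{4}F_{1} = 3 \cdot 1 = 3$, matching the three rim hooks of $\mathcal{R}_{\{2\}}(6)$ displayed above.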
To prove this theorem, we need the following two propositions, which give recurrences for $\#\mathcal{R}_I(n)$.
\begin{proposition}\label{rhrecprop}
Let $m = \emph{\text{max}}(I \cup \{0\})$. For $n \geq m + 3$, we have $\#\mathcal{R}_I(n) = \#\mathcal{R}_I(n-1) + \#\mathcal{R}_I(n-2).$
\end{proposition}
\begin{proof}
All rim hooks must end in one of the two following shapes (i.e. these are their top right squares):
\vspace{0.3cm}
\begin{center}
\begin{ytableau} \none & & \\
\none[\Ddots]
\end{ytableau}
\text{ or }
\begin{ytableau} \none &\\\none &\\
\none[\Ddots]
\end{ytableau}
\end{center}
\vspace{0.3cm}
We will call rim hooks that end in the horizontal squares $H$-rim hooks and ones that end in vertical squares $V$-rim hooks. Now, suppose that $\mathcal{R}_I(n)$ contains $a$ $H$-rim hooks and $b$ $V$-rim hooks. To create a valid rim hook of $\mathcal{R}_I(n+1)$, we take rim hooks from $\mathcal{R}_I(n)$ and add an extra square, making sure not to create any additional double descents in the rim hooks. For example, the following shows valid and invalid extensions of a rim hook of $\mathcal{R}_{\{3\}}(7)$, where the shaded squares are the additional squares extending the rim hook (unshaded):
\vspace{0.3cm}
\begin{center}
\begin{ytableau}
\none & \none & \none & & *(green)\\
\none & & & \\
\none &\\
&\\
\end{ytableau}
\begin{tabular}{c}
\\\\\text{ is valid, but}
\end{tabular}
\begin{ytableau}
\none & \none & \none & *(red)\\
\none & \none & \none & \\
\none & & & \\
\none &\\
&\\
\end{ytableau}
\begin{tabular}{c}
\\\\\text{is not.}
\end{tabular}
\end{center}
\vspace{0.3cm}
A valid extension of an $H$-rim hook can either be an extra square to the right or to the top of the top right end of the rim hook, so an $H$-rim hook can be respectively extended to a new $H$-rim hook and a new $V$-rim hook. For a $V$-rim hook, however, the only valid extension is the addition of one square to the right of the top right end of the rim hook, creating a new $H$-rim hook. Thus, if $\mathcal{R}_I(n)$ has $a$ $H$-rim hooks and $b$ $V$-rim hooks, then $\mathcal{R}_I(n+1)$ will have $a+b$ $H$-rim hooks and $a$ $V$-rim hooks, for a total of $\#\mathcal{R}_I(n+1) = 2a + b$ rim hooks. Applying this pattern again, we get $\#\mathcal{R}_I(n+2) = 3a+2b$, thus showing that the recursion $\#\mathcal{R}_I(n) = \#\mathcal{R}_I(n-1) + \#\mathcal{R}_I(n-2)$ holds.
\end{proof}
This proposition shows that we can calculate any $\#\mathcal{R}_I(n)$ recursively, given the 2 initial values $\#\mathcal{R}_I(m+1)$ and $\#\mathcal{R}_I(m+2)$, where $m = \text{max}(I \cup \{0\})$. In order to prove Theorem \ref{rimhookrec} with the previous recursion, we need to determine initial values of $\mathcal{R}_{\{m\}}(n)$. The following proposition tells us what these initial values are.
\begin{proposition}\label{initalRH}
For $m \geq 4$, we have $\#\mathcal{R}_{\{m\}}(m+1) = \#\mathcal{R}_{\{m-1\}}(m) + \#\mathcal{R}_{\{m-2\}}(m-1)$.
\end{proposition}
\begin{proof}
The argument in this proof is nearly the same as the one in Proposition \ref{rhrecprop}, except here we create extensions on the bottom left of a rim hook and not the top right. Also, in this scenario we will define $h$-rim hooks and $v$-rim hooks as rim hooks that \textit{start} with two horizontal or two vertical squares. Now, suppose $\mathcal{R}_{\{m\}}(m+1)$ consists of $a$ $h$-rim hooks and $b$ $v$-rim hooks. An extension of these rim hooks will increase the index of the descent by 1 and add 1 to the length of the rim hook, thereby creating an element of $\mathcal{R}_{\{m+1\}}(m+2)$. By the same argument as in Proposition \ref{rhrecprop}, $\mathcal{R}_{\{m+1\}}(m+2)$ will contain $a+b$ $h$-rim hooks and $a$ $v$-rim hooks, for a total of $2a + b$ elements. We also get $\#\mathcal{R}_{\{m+2\}}(m+3) = 3a + 2b$, thus showing the desired recursion is true.
\end{proof}
With Propositions \ref{rhrecprop} and \ref{initalRH}, we can now prove Theorem \ref{rimhookrec}.
\begin{proof}[Proof of Theorem \ref{rimhookrec}]
After brief computation we get that $\#\mathcal{R}_{\{2\}}(3) = 1$ and $\#\mathcal{R}_{\{3\}}(4) = 1$, so by Proposition \ref{initalRH}, we have $\#\mathcal{R}_{\{m\}}(m+1) = F_{m-1}$ for $m \geq 2$, where $F_n$ denotes the $n$th Fibonacci number. Now, for a fixed $m$, the smallest valid $n$ for which $\mathcal{R}_{\{m\}}(n)$ is defined is $m+1$, and the rim hooks in $\mathcal{R}_{\{m\}}(m+1)$ necessarily end in 3 vertical squares. Hence, there are no $H$-rim hooks (defined as in Proposition \ref{rhrecprop}) in $\mathcal{R}_{\{m\}}(m+1)$, so $\#\mathcal{R}_{\{m\}}(m+2)$ must equal $\#\mathcal{R}_{\{m\}}(m+1)$ because each $V$-rim hook in $\mathcal{R}_{\{m\}}(m+1)$ is extended to exactly one new $H$-rim hook in $\mathcal{R}_{\{m\}}(m+2)$. Therefore, we have determined that $$\#\mathcal{R}_{\{m\}}(m+1) = \#\mathcal{R}_{\{m\}}(m+2) = F_{m-1}.$$
After applying the recursion from Proposition \ref{rhrecprop} to these initial values, we deduce Theorem \ref{rimhookrec}.
\end{proof}
As we see, it is possible to calculate the size of any $\mathcal{R}_I(n)$ recursively, given two pre-computed initial values. However, there is a nicer non-recursive formula for the specific case $I = \emptyset$.
\begin{theorem}\label{emptysetformula}
Let $n \geq 2$, and let $F_n$ be the $n$th Fibonacci number. Then $\#\mathcal{R}_\emptyset(n) = \displaystyle F_{n+1}$.
\end{theorem}
Before we prove this theorem, we must first introduce the theory of \textit{minimal elements}. Define the \textit{height} of a rim hook (more generally, a Young diagram) to be the number of rows in the diagram. Then we define a \textit{minimal element of height h with double descent set I}, written as $\mu(I,h)$, as the rim hook of height $h$ that encodes double descent set $I$ and has the minimal number of squares possible.
For example, the following two rim hooks represent $\mu(\emptyset, 4)$ and $\mu(\{3\}, 5)$ respectively:
\begin{center}
\ydiagram{2+1,1+2,2,1} \hspace{1cm} \ydiagram{3+1,2+2,1+2,1+1,2} \hspace{1cm}
\end{center}
Minimal elements are useful because they allow us to quickly generate rim hooks by adding squares to the rows of a minimal element. The process of adding a square to a rim hook in general is as follows: to add a square to some row of a rim hook, just add a square to the right of the rightmost square in the specified row of the rim hook, and then shift all above rows to the right by 1.
The following diagram demonstrates this process (added square is shaded):
\vspace{0.3cm}
\begin{center}
\begin{ytableau}
\none & \none & & &\\
\none & &\\
&\\
\end{ytableau}
\begin{tabular}{c}
\\$\scalebox{1.5}{\ensuremath{\rightarrow}}$
\end{tabular}
\begin{ytableau}
\none & \none & & &\\
\none & & & *(green) \\
&\\
\end{ytableau}
\begin{tabular}{c}
\\$\scalebox{1.5}{\ensuremath{\rightarrow}}$
\end{tabular}
\begin{ytableau}
\none & \none & \none & & &\\
\none & & & \\
&\\
\end{ytableau}
\end{center}
\vspace{0.3cm}
Now, notice that any rim hook can be decomposed into a minimal element, along with additional squares in some rows. For example, the above right rim hook is equivalent to $\mu(\emptyset, 3)$ with 2 added squares in the top row, 1 added square in the second row, and 1 added square in the bottom row. In the case that the double descent set of the rim hook is $\emptyset$, the double descent set of the minimal element will also be $\emptyset$. We formalize this argument as follows:
\begin{proposition}\label{minimalbijection}
Let $|\mu(I,h)|$ denote the number of squares in $\mu(I,h)$. Then we can construct all elements of $\mathcal{R}_\emptyset(n)$ of height $h$ by adding $n - |\mu(\emptyset,h)|$ squares to the rows of $\mu(\emptyset,h)$. Specifically, there is a bijection between the set of elements of $\mathcal{R}_\emptyset(n)$ of height $h$ and the set of all possible additions of $n - |\mu(\emptyset,h)|$ squares to $\mu(\emptyset,h)$.
\end{proposition}
\begin{proof}
Suppose we have an arbitrary element $\mathfrak{r}$ of $\mathcal{R}_\emptyset(n)$ of height $h$ for some $n$. Then, by the definition of minimal element, $\mu(\emptyset,h)$ must be contained within $\mathfrak{r}$. In particular, $\mathfrak{r}$ can be uniquely obtained from $\mu(\emptyset, h)$ by adding $|\mathfrak{r}| - |\mu(\emptyset,h)| = n - |\mu(\emptyset, h)|$ squares to $\mu(\emptyset, h)$ in the correct rows.
\end{proof}
For example, suppose we want to construct an element of $\mathcal{R}_\emptyset(8)$ with height 4. Then we take $\mu(\emptyset,4)$, and because this already has 6 squares in it, we just add the 2 remaining squares to any 2 not necessarily distinct rows. The following diagram shows how this process works (added squares are shaded):
\vspace{0.3cm}
\begin{center}
\begin{ytableau}
\none & \none & \\
\none & & \\
& \\
\\
\end{ytableau}
\begin{tabular}{c}
\\\\$\scalebox{1.5}{\ensuremath{\rightarrow}}$
\end{tabular}
\begin{ytableau}
\none & \none & \\
\none & & & *(green) \\
& \\
& *(green)\\
\end{ytableau}
\begin{tabular}{c}
\\\\$\scalebox{1.5}{\ensuremath{\rightarrow}}$
\end{tabular}
\begin{ytableau}
\none & \none & \none & \none &\\
\none & \none & & & \\
\none & & \\
& \\
\end{ytableau}
\end{center}
\vspace{0.3cm}
To simplify notation for later, we will use the notation $\text{ext}_n(\mathfrak{m})$ to denote the set of rim hooks of length $n$ generated by a minimal element $\mathfrak{m}$, i.e. \textit{extensions} of $\mathfrak{m}$. That is, elements of $\text{ext}_n(\mathfrak{m})$ are created by adding $n - |\mathfrak{m}|$ extra squares to $\mathfrak{m}$ through the process of square-addition as shown above.
Now that we have built up an understanding of minimal elements, we can proceed with the proof of Theorem \ref{emptysetformula}.
\begin{proof}[Proof of Theorem \ref{emptysetformula}]
By Proposition \ref{minimalbijection}, if $M$ represents the set of all possible minimal elements of length at most $n$, then $\#\mathcal{R}_\emptyset(n) = \sum_{\mathfrak{m} \in M}\#\text{ext}_n(\mathfrak{m})$, because any element of $\mathcal{R}_\emptyset(n)$ is generated by the minimal element of the same height.
Thus, we begin by determining all the minimal elements of $\mathcal{R}_\emptyset(n)$. We start with the simple cases: $\mu(\emptyset,1)$ is just a single square; $\mu(\emptyset, 2)$ is the Young diagram given by $(1,1)$, and $\mu(\emptyset, 3)$ is the skew shape given by $(2,2,1)/(1)$. More generally, all minimal elements of height greater than 2 (and for double descent set $\emptyset$) have a staircase shape, where the top and bottom rows have 1 square, and the middle rows all have 2 squares.
Next, we determine the largest minimal element that can generate an element of $\mathcal{R}_\emptyset(n)$. Let $\mathfrak{m} = \mu(\emptyset, h)$ be the desired minimal element. Then $|\mathfrak{m}| = 2h - 2$ for $h \geq 2$, so the maximal $h$ such that $|\mathfrak{m}|\leq n$ is $H = \left \lfloor \frac{n+2}{2} \right \rfloor$.
Now that we know all the minimal elements that generate elements of $\mathcal{R}_\emptyset(n)$, we are almost done. We can simplify the summation at the beginning of this proof as follows: $$\#\mathcal{R}_\emptyset(n) = \sum_{\mathfrak{m} \in M}\#\text{ext}_n(\mathfrak{m}) = \sum_{k = 1}^H\#\text{ext}_n(\mu(\emptyset,k))$$ because all the possible minimal elements are the ones of heights ranging from 1 to $H = \left \lfloor \frac{n+2}{2} \right \rfloor$.
For a given height $h$, the value of $\#\text{ext}_n(\mu(\emptyset,h))$ is the number of ways to distribute $n - |\mu(\emptyset,h)|$ additional squares among the $h$ rows of $\mu(\emptyset,h)$. This is commonly known as the number of weak $h$-compositions of $n - |\mu(\emptyset,h)|$, and this is given by the formula $$\dbinom{(n - |\mu(\emptyset,h)|) + h - 1}{h - 1} = \dbinom{n - (2h - 2) + h - 1}{h - 1} = \dbinom{n - h + 1}{h - 1}.$$
Combining this with the previous summation, we get the following formula: $$\#\mathcal{R}_\emptyset(n) = \displaystyle \sum_{k = 1}^H\#\text{ext}_n(\mu(\emptyset,k)) = \sum_{k=1}^H\dbinom{n - k + 1}{k - 1}.$$
It is well-known that this sum is equivalent to the $(n+1)$st Fibonacci number (see OEIS \cite{fibNum}), giving the desired result.
\end{proof}
\begin{example}
Let us compute $\#\mathcal{R}_\emptyset(6)$ by using Theorem \ref{emptysetformula} and also by listing out the rim hooks individually. Theorem \ref{emptysetformula} gives $\#\mathcal{R}_\emptyset(6) = F_{7} = 13.$ Next, we list the elements of $\mathcal{R}_\emptyset(6)$:
\vspace{0.3cm}
\begin{center}
\scalebox{0.7}{\ensuremath{\ydiagram{6} \hspace{1cm} \ydiagram{4+1,5} \hspace{1cm} \ydiagram{3+2,4} \hspace{1cm} \ydiagram{2+3,3} \hspace{1cm} \ydiagram{1+4,2} \hspace{1cm} \ydiagram{5,1} \hspace{1cm} }}
\end{center}
\vspace{0.3cm}
\begin{center}
\scalebox{0.7}{\ensuremath{\ydiagram{3+1,2+2,3} \hspace{1cm} \ydiagram{2+2,1+2,2} \hspace{1cm} \ydiagram{3+1,1+3,2} \hspace{1cm} \ydiagram{1+3,2,1} \hspace{1cm} \ydiagram{2+2,3,1} \hspace{1cm} \ydiagram{3+1,4,1} \hspace{1cm} \ydiagram{2+1,1+2,2,1}}}
\end{center}
\vspace{0.3cm}
Indeed, there are 13 rim hooks in $\mathcal{R}_\emptyset(6)$, matching up with the value given by Theorem \ref{emptysetformula} as expected.
\end{example}
\section{Circular Permutations}\label{other}
Here we briefly mention the topic of \textit{circular permutations}. Intuitively, a circular permutation $w$ of length $n$ is just a permutation in $\mathfrak{S}_n$ ``wrapped-around''; that is, we read $w = [w_1, w_2, ..., w_n]$ from left to right, but when $w_n$ is reached, we just return back to $w_1$. This allows us to define double descents at all indices $1, 2,...,n$ and not just $2,3,...,n$. For example, a double descent at $n$ would mean $w_{n-1} > w_n > w_1$. Now, we formally define the set of circular permutations $\mathfrak{C}_n$ as follows. Define the \textit{rotation} map to be $\rho : \mathfrak{S}_n \xrightarrow{\sim} \mathfrak{S}_n$ which maps a permutation $w = [w_1, w_2, ..., w_n]$ to $[w_n, w_1, ..., w_{n-1}]$. Then, the set of equivalence classes of $\mathfrak{S}_n$ under the equivalence relation $w \sim \rho(w)$ is $\mathfrak{C}_n$.
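For example, in $\mathfrak{C}_3$ the permutations $[1,3,2]$, $[2,1,3]$, and $[3,2,1]$ all represent the same equivalence class, since $\rho([1,3,2]) = [2,1,3]$ and $\rho([2,1,3]) = [3,2,1]$.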
When we discuss the double descents of a permutation $w \in \mathfrak{S}_n$, we mean double descents at the usual indices, $2, 3, ..., n-1$. However, if $w$ is an element of $\mathfrak{C}_n$, then double descents may also include indices $1$ and $n$.
\begin{theorem}
The number of permutations in $\mathfrak{C}_n$ with no double descents is equal to $b_{n-1}$.
\end{theorem}
\begin{proof}
Each equivalence class defining $\mathfrak{C}_n$ has exactly one representative $w \in \mathfrak{S}_n$ satisfying $w_1 = n$. Therefore, we can count permutations in $\mathfrak{C}_n$ with no double descents by counting permutations in $\mathfrak{S}_n$ with first element $n$ that have no double descents (defined as usual, so at indices in $[2, n-1]$) and do not satisfy $w_{n-1} > w_n > w_1$ or $w_n > w_1 > w_2$. To construct such an element of $\mathfrak{S}_n$, we just take an element of $\mathfrak{S}_{n-1}$ with no double descents and no initial descent and put $n$ to the left of it. That is, if $u = [u_1, u_2, ..., u_{n-1}] \in$ $ \mathfrak{S}_{n-1}$ has no double descents and no initial descent, then $[n, u_1, u_2, ..., u_{n-1}]$ is the desired element of $\mathfrak{S}_n$. The no initial descent condition is required since $n > u_1$, as $u_1 \in [n-1]$, so this avoids a double descent at index $2$. Now we check that $[n, u_1, ..., u_{n-1}]$ has no double descents at all indices. A permutation of the form $[n, u_1, ..., u_{n-1}]$ has no double descents at indices $2,3,...,n-1$ by construction, and it also does not satisfy $w_{n-1} > w_n > w_1$ or $w_n > w_1 > w_2$ (i.e. has no double descents at indices $n$ and $1$) because $w_n < w_1$; $w_n \in [n-1]$ and $w_1 = n$, so $w_n$ must be less than $w_1$. Clearly, the number of such permutations is just the number of permutations in $\mathfrak{S}_{n-1}$ with no double descents and no initial descent, $b_{n-1}$.
\end{proof}
\section{Conjectures}\label{conjs}
All of the following conjectures come from observing patterns in computer-produced data tables of values of $dd(I;n)$ for various $I$ and $n$.
\begin{conjecture}
The values $\{dd(\{i\}; n)\}_{i=2}^{n-1}$ are asymptotically equidistributed. Namely, for fixed $0 < \alpha < \beta < 1$, we have $\displaystyle \sum_{\alpha n < i < \beta n}dd(\{i\}; n) \sim (\beta - \alpha)\sum_{i=2}^{n-1}dd(\{i\};n)$.
\end{conjecture}
\begin{remark*}
This conjecture can be understood intuitively: when a permutation becomes extremely long (i.e. for large $n$), the probability that there is a double descent at index $k$ should be nearly the same as the probability of a double descent at index $k+1$.
\end{remark*}
\begin{conjecture}
Given a fixed $n \in \mathbb{N}$, the numbers $dd(\{i\};n)$ for $2 \leq i < \left \lceil \frac{n}{2} \right \rceil$ follow a ``down up down up'' pattern. Namely, $dd(\{i\}; n) > dd(\{i + 1\}; n)$ if $i$ is even, and $dd(\{i\}; n) < $ $dd(\{i + 1\}; n)$ if $i$ is odd.
\end{conjecture}
\begin{remark*}
This conjecture is very unexpected; it appears to hold for all values of $n$ (we have verified it numerically for small $n$). In particular, the ``down up down up'' pattern persists even as the values of $dd(\{i\};n)$ approach uniform distribution.
\end{remark*}
\begin{conjecture}
Let $n,i \in \mathbb{N}$ such that $i < \left \lceil \frac{n}{2} \right \rceil - 1$. If $i$ is even, we have $\frac{dd(\{i\}; n)}{dd(\{i+1\}; n)}>$ $\frac{dd(\{i+2\}; n)}{dd(\{i+3\}; n)}$, and if $i$ is odd, we have $\frac{dd(\{i\}; n)}{dd(\{i+1\}; n)}<$ $\frac{dd(\{i+2\}; n)}{dd(\{i+3\}; n)}$.
\end{conjecture}
\begin{remark*}
This conjecture illustrates how the values of $dd(\{i\};n)$ approach uniform distribution as $n$ becomes large. The values of the successive ratios $\frac{dd(\{i\}; n)}{dd(\{i+1\}; n)}$, which are approaching 1 as $dd(\{i\};n)$ reaches uniform distribution, are strictly decreasing toward 1 for successive even $i$, while these ratios are strictly increasing toward 1 for successive odd $i$.
\end{remark*}
\begin{conjecture}
For fixed $i,j \in \mathbb{Z}_{\geq 2}$, the limit $\lim_{n \rightarrow \infty}\frac{dd(\{i\}; n)}{dd(\{j\}; n)}$ exists and is a positive number.
\end{conjecture}
\begin{remark*}
This conjecture highlights a major difference between descents and double descents. According to Diaz-Lopez et al. \cite{DLHIOS}, $d(\{i\};n)$ is a polynomial of degree $i$, so $\lim_{n \rightarrow \infty}\frac{d(\{i\}; n)}{d(\{j\}; n)}$ is either $0$ or $\infty$ when $i \neq j$, whereas the corresponding limit for double descents is always a positive number.
\end{remark*}
In fact, we can generalize this conjecture:
\begin{conjecture}
Let $I, J \subset \mathbb{Z}_{\geq 2}$ be two finite sets such that $dd(I;n) > 0$ and $dd(J;n) > 0$ for all but finitely many $n$. Then the limit $\lim_{n \to \infty}\frac{dd(I;n)}{dd(J;n)}$ exists and is a positive number.
\end{conjecture}
The following graphs show values of $\frac{dd(I;n)}{dd(J;n)}$ plotted with respect to $n$ for various $I$ and $J$:
\begin{figure}[h]
\centering
\subfloat[$I = \{5\}$ and $J = \{2,3,4\}$]{{\includegraphics[width=7cm, keepaspectratio]{g1.png}}}
\qquad
\subfloat[$I = \{3\}$ and $J = \{2,3\}$]{{\includegraphics[width=7cm, keepaspectratio]{g2.png}}}
\newline
\centering
\subfloat[$I = \emptyset$ and $J = \{2,5\}$]{{\includegraphics[width=7cm, keepaspectratio]{g3.png}}}
\qquad
\subfloat[$I = \{2\}$ and $J = \{4\}$]{{\includegraphics[width=7cm, keepaspectratio]{g4.png}}}
\label{fig:example}
\end{figure}
Each graph demonstrates that $\frac{dd(I;n)}{dd(J;n)}$ converges; in particular, each ratio oscillates around its limit as it converges.
\section{Future Work}
It might be possible to establish lower and upper bounds on $dd(I;n)$ by using Naruse's hook-length formula \cite{naruse} for skew shapes as well as Proposition \ref{rhrecprop}. Let $I$ be a double descent set. By definition of $\mathcal{R}_I(n)$, we have $$dd(I;n) = \sum_{\mathfrak{r}\in\mathcal{R}_I(n)}f^{\mathfrak{r}},$$ where $f^{\mathfrak{r}}$ is the number of rim hook tableaux of $\mathfrak{r}$. Then, we have the following bounds:$$ \inf_{\mathfrak{r}\in\mathcal{R}_I(n)}f^{\mathfrak{r}}\cdot\#\mathcal{R}_I(n) \leq dd(I;n) \leq \sup_{\mathfrak{r}\in\mathcal{R}_I(n)}f^{\mathfrak{r}}\cdot\#\mathcal{R}_I(n).$$
With the recursion given in Proposition \ref{rhrecprop}, one can determine $\#\mathcal{R}_I(n)$ as long as the initial conditions for the recursion are computed. For example, we have already determined the initial conditions for singleton double descent sets, allowing us to formulate Theorem \ref{rimhookrec}.
To evaluate $ \inf_{\mathfrak{r}\in\mathcal{R}_I(n)}f^{\mathfrak{r}}$ and $\sup_{\mathfrak{r}\in\mathcal{R}_I(n)}f^{\mathfrak{r}}$, one might be able to use Naruse's hook-length formula, which is as follows: $$f^{\lambda/\mu} = |\lambda/\mu|!\Bigg[\sum_{D \in \mathcal{E}(\lambda/\mu)}\Bigg(\prod_{c \in \lambda/D}\dfrac{1}{h(c)}\Bigg)\Bigg],$$ where $\lambda/\mu$ is a skew shape, $\mathcal{E}(\lambda/\mu)$ is the set of \textit{excited diagrams} of $\lambda/\mu$, and $h(c)$ is the hook-length of a square $c$ as calculated in $\lambda$. More explanation of this formula can be found in the literature \cite{idk2, idk1}.
\begin{flushleft}
\section{Acknowledgments}
The author would first like to thank Pakawut Jiradilok (MIT) for his guidance and advice in this research. The author would also like to thank Yongyi Chen (MIT) and Dr. Tanya Khovanova (MIT) for their proofreading of this paper. The author would finally like to thank Professor Pavel Etingof (MIT), Dr. Slava Gerovitch (MIT), and Dr. Khovanova for providing this research opportunity at the MIT PRIMES program. The author is especially grateful for Dr. Khovanova's suggestions regarding double descents.
\textit{This material is based upon work supported by the National Science Foundation under Grant No. 1916120.}
\end{flushleft}
\section{Introduction}
\label{sec:introduction}
\begin{figure*}[tp!]
\begin{equation}
f(I,p,\psi\, | \,I_0,p_0,\psi_0, \tens{\Sigma}) = \frac{2|p|\,I^2} {\sqrt{(2\pi)^3} \sigma^3} \,
\mathrm{exp} \left \lgroup - \frac{1}{2}
\left[ \begin{array}{c}
I -I_0 \\
p \, I\, \cos2\psi-p_0\,I_0\cos2\psi_0 \\
p \, I\, \sin2\psi-p_0\,I_0\sin2\psi_0 \\\end{array}
\right] ^{T}
\tens{\Sigma}^{-1}
\left[ \begin{array}{c}
I-I_0 \\
p\,I\,\cos2\psi-p_0\,I_0\,\cos2\psi_0\\
p\,I\,\sin2\psi-p_0\,I_0\,\sin2\psi_0\\\end{array}
\right]
\right \rgroup \, ,
\tag{1}
\label{eq:f_ipphi}
\end{equation}
\end{figure*}
\begin{figure*}[tp!]
\begin{equation}
f_{2D}(p,\psi\, | \, I_0\,p_0,\psi_0, \tens{\Sigma}_p) = \frac{p} {\pi \sigma_p^2} \,
\mathrm{exp} \left \lgroup - \frac{1}{2}
\left( p^2
\left[ \begin{array}{c}
\cos2\psi \\
\sin2\psi\\\end{array}
\right] ^{T}
\tens{\Sigma}_{p}^{-1}
\left[ \begin{array}{c}
\cos2\psi\\
\sin2\psi\\\end{array}
\right]
- 2 pp_0
\left[ \begin{array}{c}
\cos2\psi \\
\sin2\psi \\\end{array}
\right] ^{T}
\tens{\tens{\Sigma}}_{p}^{-1}
\left[ \begin{array}{c}
\cos2\psi_0\\
\sin2\psi_0\\\end{array}
\right]
+p_0^2
\left[ \begin{array}{c}
\cos2\psi_0 \\
\sin2\psi_0 \\\end{array}
\right] ^{T}
\tens{\Sigma}_{p}^{-1}
\left[ \begin{array}{c}
\cos2\psi_0\\
\sin2\psi_0\\\end{array}
\right]
\right)
\right \rgroup \, ,
\label{eq:f_2d_polar}
\tag{2}
\end{equation}
\end{figure*}
The complexity of polarization measurement analysis has been described by \citet{Serkowski1958} when discussing the presence
of a systematic bias in optical measurements of linear polarization from stars, and then
by \citet{Wardle1974} addressing the same issue in the field of radio astronomy.
The bias of polarization measurements arises when one is interested in
the polarization intensity $P\equiv \sqrt{Q^2 + U^2}$
or the polarization fraction $p\equiv P/I$ and the polarization angle $\psi=1/2\,\mathrm{atan}(U/Q)$
(where $I$, $Q$ and $U$ are the Stokes parameters),
quantities which become systematically biased in the presence of noise.
Working with the Stokes parameters $Q$ and $U$ as far as possible avoids this kind of bias. Once a physical modeling of $p$ and $\psi$ is available, and can be translated into $Q$ and $U$, a likelihood analysis can be performed directly on the Stokes parameters. For the other cases, where no modeling is available,
\citet{Simmons1985} proposed the first compilation and comparison of
methods to deal with the problem of getting unbiased polarization estimates of the polarization fraction and angle, with
their associated uncertainties.
Then \citet{Naghizadeh1993} extended the work of \citet{Simmons1985} to the
characterisation of the polarization angle uncertainties, and \citet{Vaillancourt2006} proposed a method to build confidence limits on polarization fraction measurements. More recently, \citet{Quinn2012} suggested using a Bayesian approach to get polarization estimates with low bias.
In all these studies the authors made strong assumptions:
no noise on the intensity $I$ and no correlation between
the $Q$ and $U$ components, which were also assumed to have equal noise properties.
\citet[hereafter PMA I, ][]{Montier2014}
have quantified the impact of the
asymmetry and the correlation between the $Q$ and $U$ noise components
on the bias of the polarization fraction and angle measurements. They have shown that
the asymmetry of the noise properties cannot be systematically neglected as is usually done, and that the
uncertainty of the intensity may significantly affect the polarization measurements in the low signal-to-noise ratio (SNR) regime.
In the context of the new generation of polarization data, such as {\it Planck\/}\footnote{\textit{Planck}~(\url{http://www.esa.int/Planck}) is a
project of the European Space Agency (ESA) with instruments
provided by two scientific consortia funded by ESA member states (in
particular the lead countries France and Italy), with contributions
from NASA (USA) and telescope reflectors provided by a collaboration
between ESA and a scientific consortium led and funded by Denmark.}
\citep{planck2011-1.1},
Blast-Pol \citep[The Balloon-borne Large Aperture Submillimeter Telescope for Polarimetry, ][]{Fissel2010},
PILOT \citep{Bernard2007} or ALMA \citep{Perez2013},
which benefit from a much better control of the noise properties,
it is essential to take into account the full covariance matrix
when deriving the polarization measurement estimates.
In recent works no correction for the bias of the polarization
fraction were applied \citep[e.g.][]{Dotson2010}, or
only high SNR data (SNR $>$ 3) were used for analysis
to avoid these issues \citep[e.g.][]{Vaillancourt2012}.
Two issues are immediately apparent. First, this choice of the SNR threshold may not be relevant for all measurements,
and the asymmetry between the orthogonal Stokes noise components
could affect the threshold choice. Secondly, the question remains of how to
deal with low signal-to-noise data.
Using simply the measurements of the polarization parameters (we will call them the ``na\"ive'' ones)
as estimators of the true values leads to
very poor performance, as they lack any information on the noise power.
Instead, we would like to perform some transformation on the
polarization parameters, in order to remove bias and improve the variance.
This work is the second of a series on the `Analysis of polarization measurements'.
Its aim is to describe how to recover from a measurement ($p$, $\psi$) the true polarization
fraction $p_0$ and polarization angle $\psi_0$ with their associated uncertainties,
taking into account the full covariance matrix $\tens{\Sigma}$.
We will compare the performance of the various estimators available,
and study the impact of the correlation and ellipticity of the covariance matrix on these estimates.
We stress that we adopt a frequentist approach to investigate the properties of these estimators, even
when dealing with the method inspired by the Bayesian analysis. This means that the estimators are defined as
single-value estimates, instead of considering the probability density function (pdf) as the proper estimate, as is usually done in Bayesian methods.
The performance of these estimators will be evaluated using three main criteria:
the minimum bias, the smallest risk function, and the shape of the distribution of the output estimates. The choice of the most appropriate estimator may vary with the application at hand,
and a compromise among them may be chosen to achieve good overall performance.
Throughout this work we will make the following two assumptions: i) circular polarization is assumed to be negligible, and ii)
the noise on Stokes parameters is assumed to be Gaussian. We also define four regimes of the covariance matrix
to quantify its asymmetry, in terms of effective ellipticity ($\varepsilon_\mathrm{eff}$) as described in PMA I:
the {\it extreme} (1$<$$\varepsilon_\mathrm{eff}$$<$2),
the {\it low} (1$<$$\varepsilon_\mathrm{eff}$$<$1.1),
the {\it tiny} (1$<$$\varepsilon_\mathrm{eff}$$<$1.01)
and the {\it canonical} ($\varepsilon_\mathrm{eff}$=1) regimes.
The paper is organized as follows: we first review in Sect.~\ref{sec:estimators}
the expression and the limitations of the polarization estimators, which are extended to take into account the
full covariance matrix. We discuss in Sect.~\ref{sec:uncertainties} the meaning of the polarization uncertainties
and we present the different uncertainty estimators.
We then compare the performance of the estimators of the polarization fraction in Sect.~\ref{sec:comparison_p_estimators},
and of the polarization angle in Sect.~\ref{sec:comparison_estimators_psi}. In Sect.~\ref{sec:3Dcase}, we discuss some aspects of the problem when the total intensity $I$ is not perfectly known. We conclude with general recipes in Sect.~\ref{sec:conclusion}.
\section{Polarization estimators}
\label{sec:estimators}
Early work on polarization estimators was based on the \citet{Rice1945} distribution which provides
the probability to find a measurement $p$, for a given true value $p_0$ and the noise estimate $\sigma_p$ of the $Q$ and $U$
Stokes parameters. The noise values of the Stokes parameters were assumed to be equal ($\sigma_p$=$\sigma_{\rm Q}$/$I_0$=$\sigma_{\rm U}$/$I_0$), and the total intensity was assumed to be perfectly known, $I=I_0$.
As we would like to include the full covariance matrix,
we use the generalized expression of the pdf from PMA I, which provides the probability
to get the measurements ($I$,$p$, $\psi$), given the true values ($I_0$, $p_0$,$\psi_0$) and the covariance matrix $\tens{\Sigma}$.
Following the notations of PMA I, the expression of the pdf in 3D, including the intensity terms, denoted
$f(I, p,\psi|I_0,p_0,\psi_0,\tens{\Sigma})$, is given by Eq.~\ref{eq:f_ipphi}, and the pdf in 2D,
$f_{2D}(p,\psi| I_0, p_0,\psi_0,\tens{\Sigma}_p)$, by Eq.~\ref{eq:f_2d_polar} when the intensity $I_0$ is assumed to be perfectly known.
\setcounter{equation}{2}
We also note the introduction of the covariance matrix reduced in 2D,
\begin{equation}
\tens{\Sigma}_p= \frac{1}{I_0^2} \left(\begin{array}{cc}
\sigma_{\rm Q}^2 & \sigma_{\rm QU} \\
\sigma_{\rm QU} & \sigma_{\rm U}^2 \\
\end{array}\right)
\quad = \quad
\frac{ \sigma_{p,G}^2 } {\sqrt{1-\rho^2 }} \left \lgroup \begin{array}{cc}
\varepsilon & \rho \\
\rho & 1/\varepsilon \\
\end{array}\right \rgroup \, ,
\end{equation}
where $\varepsilon=\sigma_{\rm Q} / \sigma_{\rm U}$ is the ellipticity and $\rho=\sigma_{\rm QU}/\sigma_{\rm Q}\sigma_{\rm U}$ is the correlation between the $Q$ and $U$ noise components,
leading to an effective ellipticity given by:
\begin{equation}
\varepsilon_\mathrm{eff} = \sqrt{\frac{1 + \varepsilon^2 + \sqrt{(\varepsilon^2-1)^2 + 4\rho^2\varepsilon^2}}{1 + \varepsilon^2 - \sqrt{(\varepsilon^2-1)^2 + 4\rho^2\varepsilon^2}}} \, .
\end{equation}
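One can check that in the canonical case ($\varepsilon = 1$, $\rho = 0$) this expression reduces to $\varepsilon_\mathrm{eff} = 1$, while for $\varepsilon = 1$ and a pure correlation $\rho$ it gives $\varepsilon_\mathrm{eff} = \sqrt{(1+|\rho|)/(1-|\rho|)}$.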
With these notations we have $\mathrm{Det}(\tens{\Sigma}_p)=\sigma_{p,\mathrm{G}}^4$ and
\begin{equation}
\sigma_{p,\mathrm{G}}^2 = \frac{\sigma_{\rm Q}^2}{I_0^2}\, \frac{\sqrt{1-\rho^2}}{\varepsilon}\, ,
\end{equation}
which represents the
equivalent radius of a circular Gaussian distribution with the same integrated area as the elliptical one.
We also define $\sigma_p$=$\sigma_{\rm Q}$/$I_0$=$\sigma_{\rm U}$/$I_0$ when $\varepsilon_\mathrm{eff}$=1.
Finally the pdfs of $p$ and $\psi$, $f_p$ and $f_{\psi}$,
are obtained by marginalization of $f_{2D}$ over $\psi$ and $p$, respectively. The expressions for the
1D pdfs $f_p$ and $f_{\psi}$ depend on the full set of initial parameters ($I_0$, $p_0$, $\psi_0$) in the general case, unlike the case
under the canonical simplifications (see appendix~C of PMA I for fully developed analytical expressions).
We describe below the various estimators of the polarization fraction and angle listed in Table~\ref{tab:list_esti}.
We stress that most of the expressions derived in this work have been obtained when restricting the
analysis in the 2D case, assuming furthermore that the true intensity $I_0$ is
perfectly known, except for the Bayesian estimator where we present a 3D development (see Sect.~\ref{sec:3Dcase}).
\begin{table}
\caption{List of the acronyms of the estimators used in this work. The parameters to which each estimator applies, independently (/) or
simultaneously (\&), are given in the last column.}
\center
\begin{tabular}{llc}
\hline
\hline
Acronym & Description & Parameters \\
\hline
ML & Maximum Likelihood & $p$ / $\psi$ \\
MP & Most Probable in 1D & $p$ / $\psi$ \\
MP2 & Most Probable in 2D & $p$ \& $\psi$ \\
AS & Asymptotic & $p$ \\
MAS & Modified Asymptotic & $p$ \\
MAP & Maximum A Posteriori & $p$ / $\psi$ \\
MAP2 & Maximum A Posteriori in 2D & $p$ \& $\psi$ \\
MB & Mean posterior Bayesian & $I$ \& $p$ \& $\psi$ \\
\hline
\end{tabular}
\label{tab:list_esti}
\end{table}
\subsection{Maximum Likelihood estimators}
\label{sec:ML}
The Maximum Likelihood (ML) estimators are defined as the values of $p_0$ and $\psi_0$ which maximize the
pdf calculated at the polarization measurements $p$ and $\psi$.
When computed using the
2D pdf $f_{2D}$ to fit $p_0$ and $\psi_0$ simultaneously, this estimator gives back the measurements,
whatever the bias and the covariance matrix are, and is therefore ineffective at correcting the bias of the data.
After marginalization of the pdf $f_{2D}$ over $\psi$,
the 1D ML estimator of $p_0$, $\hat{p}_{\text{ML}}$, is now defined by
\begin{equation}
0 = \frac{\partial f_{p}}{ \partial p_0} \Big (p\, | \, p_0,\psi_0, \tens{\Sigma}_p \Big) _{\big | {p_0=\hat{p}_{\text{ML}}}} \, .
\end{equation}
Note that the expression of $f_p$ is independent of the measurement $\psi$, but still theoretically
depends on the true value $\psi_0$ which is unknown. In the canonical case ($\varepsilon_\mathrm{eff}$=1) $\psi_0$
disappears from the expression, but it must be considered as a nuisance parameter in the general case.
One way to proceed in such a case is to compute the mean of the solutions $\hat{p}_{\text{ML}}$ for $\psi_0$ varying in the range $-\pi/2$ to $\pi/2$.
As already stressed by \citet{Simmons1985}, this estimator yields a zero estimate below a certain threshold of the measurement $p$,
which implies a strong discontinuity in the resulting distribution of this $p_0$ estimator.
Nevertheless, contrary to the 2D ML
estimators, the $p$ ML estimator does not give back the initial measurements,
and is often used to build polarization estimates.
Similarly, the 1D ML estimator of $\psi_0$, $\hat{\psi}_{\text{ML}}$, is given after marginalization of $f_{2D}$ over $p$ by
\begin{equation}
0 = \frac{\partial f_{\psi}}{ \partial \psi_0} \Big (\psi \, | \, p_0, \psi_0, \tens{\Sigma}_p \Big) _{\big | {\psi_0=\hat{\psi}_{\text{ML}}}} \, .
\label{eq:ml1_psi}
\end{equation}
As mentioned for the ML estimator $\hat{p}_{\text{ML}}$,
the unknown parameter $p_0$ in the above expression has to be considered as a nuisance parameter when solving Eq.~\ref{eq:ml1_psi}.
We stress that because the canonical simplifications have always been assumed in the literature,
bias on the $\psi$ measurements has not been previously considered
and the $\hat{\psi}_{\text{ML}}$ estimator has not yet been used and qualified to correct this kind of bias.
This analysis is done in Sect.~\ref{sec:comparison_estimators_psi}.
\subsection{Most Probable estimators}
The Most Probable estimators of $p_0$ and $\psi_0$ are the values for which the
pdf $f_{2D}$ reaches its maximum at the
measurements values ($p$,$\psi$). Notice that the Most Probable estimators ensure that
the measurement values ($p$,$\psi$) are the most probable values of the pdf computed for this choice of $p_0$ and $\psi_0$, {i.e.}
they take the maximum probability among all possible measurements with this set of $p_0$ and $\psi_0$.
As a comparison the ML estimators ensure that the measurement values ($p$,$\psi$)
take the maximum probability for this choice of $p_0$ and $\psi_0$, compared to the probability of the same
measurement values ($p$,$\psi$) for all other possible sets of $p_0$ and $\psi_0$.
The 2D Most Probable estimators (MP2), $\hat{p}_{\text{MP2}}$ and $\hat{\psi}_{\text{MP2}}$,
are defined as the values of $p_0$ and $\psi_0$ simultaneously satisfying the two following relations:
\begin{equation}
\label{eq:mp_1}
0=\frac{\partial f_{2D}}{ \partial p} \Big (p,\psi \, | \, p_0, \psi_0, \tens{\Sigma}_p \Big) _{\Bigg |{\begin{array}{l}p_0=\hat{p}_{\text{MP2}} \\ \psi_0=\hat{\psi}_{\text{MP2}} \end{array}}}
\end{equation}
and
\begin{equation}
\label{eq:mp_2}
0=\frac{\partial f_{2D}}{\partial \psi} \Big (p,\psi \, | \, p_0, \psi_0, \tens{\Sigma}_p \Big )_{\Bigg |{\begin{array}{l}p_0=\hat{p}_{\text{MP2}}\\ \psi_0=\hat{\psi}_{\text{MP2}} \end{array}}} \, .
\end{equation}
These relations can be solved, using the fully developed expression of $f_{2D}$ including the terms of the inverse matrix $\tens{\Sigma}_p^{-1}$,
as provided in Appendix~\ref{sec:most_probable_detail}. When canonical simplifications are assumed,
this yields
\begin{eqnarray}
\hat{\psi}_{\text{MP2}} & = & \psi \, , \nonumber \\
\hat{p}_{\text{MP2}} & = & \Bigg \{ \begin{array}{ll}
(p - \sigma_p^2 / p) & \, \, \, \mathrm{for} \, \, p > \sigma_p \\
0 & \, \, \, \mathrm{for} \, \, p \le \sigma_p
\end{array} \, ,
\end{eqnarray}
as found in \citet{Quinn2012}. We observe that the MP2 estimate of the polarization fraction is systematically
lower than the measurement, so that this estimator
tends to over-correct $p$, as will be shown in Sect.~\ref{sec:comparison_p_estimators}.
After marginalization over $p$ or $\psi$, the 1D Most Probable (MP) estimators, $\hat{p}_{\text{MP}}$ and $\hat{\psi}_{\text{MP}}$, are defined independently by:
\begin{equation}
0 = \frac{\partial f_p}{ \partial p} \Big (p\, | \, p_0, \psi_0, \tens{\Sigma}_p \Big) _{\big | {p_0=\hat{p}_{\text{MP}}}}
\end{equation}
and
\begin{equation}
0 = \frac{\partial f_{\psi}}{ \partial \psi} \Big (\psi \, | \, p_0, \psi_0, \tens{\Sigma}_p \Big) _{\big | {\psi_0=\hat{\psi}_{\text{MP}}}}\, .
\end{equation}
The 1D and 2D estimators are not expected to provide the same estimates. Under the canonical assumptions,
the MP estimator of $p$ is commonly known as the Wardle and Kronberg \citep{Wardle1974} estimator.
As mentioned earlier, the MP estimator yields a zero estimate below
a certain threshold of $p$ \citep{Simmons1985},
which implies a strong discontinuity in the resulting distribution of these estimators
for low SNR measurements.
\subsection{Asymptotic estimator}
\label{sec:as}
The Asymptotic estimator (AS) of the polarization fraction $p$ is usually defined
in the canonical case by
\begin{equation}
\label{eq:AS}
\hat p_{\text{AS}}= \Bigg \{ \begin{array}{ll}
\sqrt{p^2-\sigma_p^2} & \, \, \, \mathrm{for} \, \, p > \sigma_p \\
\quad 0 & \, \, \, \mathrm{for} \, \, p \le \sigma_p
\end{array} \, .
\end{equation}
The output distribution of the AS estimator appears as the asymptotic limit of the \citet{Rice1945} distribution when
$p_0/\sigma_p$ tends to $\infty$, as for the ML and MP estimators, and is given by
\begin{equation}
\label{eq:limit}
\mathrm{pdf} \left( \dfrac{p}{\sigma_p} \right) \rightarrow {\cal N}\left( \sqrt{\left(\dfrac{p_0}{\sigma_p}\right)^2+1},1 \right) \, ,
\end{equation}
where ${\cal N}(\mu,\sigma)$ denotes the Gaussian distribution of mean
$\mu$ and standard deviation $\sigma$. As with the previously presented estimators, this one suffers from
a strong discontinuity at $\hat{p}_{\text{AS}}$=0.
In the general case, when the canonical simplification is not assumed,
it has been shown by \citet[][hereafter P14]{Plaszczynski2014} that the
expression of the asymptotic estimator
can be extended to a general expression by changing the term $\sigma_p^2$ in Eq.~\ref{eq:AS} into a 'noise-bias' parameter $b^2$ defined by
\begin{equation}
\label{eq:sigp_equiv}
b^2 = \frac{\sigma_{\rm U}^{\prime2}\cos^2(2\psi_0-\theta)+ \sigma_{\rm Q}^{\prime2}\sin^2(2\psi_0-\theta) }{I_0^2} \, ,
\end{equation}
where $\theta$ represents the position angle of the iso-probability
bi-variate distribution, and $\sigma_{\rm U}^{\prime2},\sigma_{\rm Q}^{\prime2}$
the rotated variances
\begin{eqnarray}
\label{eq:rot}
\theta &=& \frac{1}{2}\mathrm{atan} \left( \frac{2\rho \sigma_{\rm Q} \sigma_{\rm U}}{\sigma_{\rm Q}^2-\sigma_{\rm U}^2}\right)\, , \\
\label{eq:sigmaqprime}
\sigma_{\rm Q}^{\prime2}&=&\sigma_{\rm Q}^2\cos^2\theta+\sigma_{\rm U}^2\sin^2\theta+\rho \sigma_{\rm Q} \sigma_{\rm U} \sin2\theta \, , \\
\label{eq:sigmauprime}
\sigma_{\rm U}^{\prime2}&=&\sigma_{\rm Q}^2\sin^2\theta+\sigma_{\rm U}^2\cos^2\theta-\rho \sigma_{\rm Q} \sigma_{\rm U} \sin2\theta \, ,
\end{eqnarray}
and $\psi_0$ is the true polarization angle, which can be approximated asymptotically by the na\"ive measurement $\psi$
or, even better, by the estimate $\hat{\psi}_{\text{ML}}$ of Sect.~\ref{sec:ML}.
It has been shown that this equivalent 'noise-bias' $b^2$ ensures the minimal bias of $\hat{p}_{\text{AS}}$.
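As an illustration, the following sketch (function name and test values ours) evaluates this generalized noise-bias from the Stokes noise parameters; \texttt{arctan2} is used instead of \texttt{atan} so that the case $\sigma_{\rm Q}=\sigma_{\rm U}$ is handled without a division by zero.
\begin{verbatim}
# Sketch of the generalized 'noise-bias' b^2 defined above.
import numpy as np

def noise_bias_b2(sigma_q, sigma_u, rho, psi0, i0):
    # Position angle of the iso-probability bi-variate distribution.
    theta = 0.5 * np.arctan2(2.0 * rho * sigma_q * sigma_u,
                             sigma_q**2 - sigma_u**2)
    # Rotated variances sigma_Q'^2 and sigma_U'^2.
    sq2 = (sigma_q**2 * np.cos(theta)**2 + sigma_u**2 * np.sin(theta)**2
           + rho * sigma_q * sigma_u * np.sin(2.0 * theta))
    su2 = (sigma_q**2 * np.sin(theta)**2 + sigma_u**2 * np.cos(theta)**2
           - rho * sigma_q * sigma_u * np.sin(2.0 * theta))
    return (su2 * np.cos(2.0 * psi0 - theta)**2
            + sq2 * np.sin(2.0 * psi0 - theta)**2) / i0**2

# Canonical check: sigma_q = sigma_u and rho = 0 give b^2 = sigma_p^2.
print(noise_bias_b2(0.1, 0.1, 0.0, 0.3, 1.0))  # -> 0.01
\end{verbatim}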
\subsection{Discontinuous estimators}
\label{sec:discontinuous_estimators}
\begin{figure}
\centering
\psfrag{xtitle}{$\hat{p}/p_0$}
\includegraphics[width=9cm]{Estimator_comparison_histo_all_sn1_v2.0.ps}
\caption{Distributions of $\hat{p}$ estimates obtained with the standard estimators: na\"ive (black), ML (blue), MP (light green), MP2 (green) and AS (red).
We assume the covariance matrix to be canonical, and a SNR of $p_0/\sigma_p$=1.}
\label{fig:histo_all}
\end{figure}
The estimators of $\hat{p}$ introduced above (ML, MP and AS)
exhibit a common feature: below
some cutoff value of the measurement they yield exactly zero.
This means that the estimator distribution is discontinuous, being
a mixture of a discrete one (at $\hat{p}$=0) and a continuous one (for $\hat{p}>0$). This type of distribution is illustrated
in Fig.~\ref{fig:histo_all} for a SNR $p_0/\sigma_{p}$=1 and a canonical covariance matrix.
The distribution of the na\"ive measurements is built using a Monte-Carlo simulation, starting from true polarization parameters
$p_0$ and $\psi_0$. The other three distributions of $\hat{p}$ are obtained after applying the ML, MP and AS estimators.
A non-negligible fraction of the measurements provide null estimates of $\hat{p}$.
As shown in Fig.~\ref{fig:fracnull}, this fraction of null estimates reaches 40\% at low SNR with the MP and AS estimators,
and more than 50\% with the ML estimator for SNR$<$1. It converges to 0\% for SNR $>$4.
If taken into account as reliable estimates of $\hat{p}$, null estimates will somewhat artificially lower the statistical bias of the $\hat{p}$
estimates compared to the true value $p_0$, as detailed in Sect.~\ref{sec:comparison_p_estimators}.
A null value of these estimators should instead be understood as an indicator of the low SNR of the measurement,
which then has to enter any further analysis as an upper limit.
In practice, the user seldom has various realizations at hand, and using these estimators leads to upper limits mixed with non-zero estimates in the analysis.
Such complications may be especially hard to handle when studying polarized maps of the interstellar medium.
On the other hand, it would be disastrous to omit those estimates in any statistical analysis,
since weakly-polarized points would be systematically rejected.
To avoid such complications, we explore below other estimators which avoid this issue and
lead to continuous distributions. This is especially important in the range of SNR between 2 and 3, where the discontinuous estimators still
yield up to 20\% of null estimates.
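The fraction of null estimates quoted above is straightforward to reproduce with a small Monte-Carlo experiment, as in the following sketch for the AS estimator in the canonical case (sample size and SNR values are illustrative).
\begin{verbatim}
# Fraction of null AS estimates versus SNR, canonical case (I0 = 1).
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # canonical case: sigma_p = sigma_Q = sigma_U
for snr in (0.5, 1.0, 2.0, 4.0):
    p0, psi0 = snr * sigma, 0.0
    q = p0 * np.cos(2 * psi0) + rng.normal(0.0, sigma, 100_000)
    u = p0 * np.sin(2 * psi0) + rng.normal(0.0, sigma, 100_000)
    p = np.hypot(q, u)  # naive measurements
    # The AS estimator yields exactly zero whenever p <= sigma_p.
    print(f"SNR={snr}: {100.0 * np.mean(p <= sigma):.1f}% null estimates")
\end{verbatim}
At very low SNR this fraction tends to $1-e^{-1/2}\simeq39$\%, consistent with the $\sim$40\% quoted above.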
\begin{figure}
\centering
\includegraphics[width=9cm]{Estimator_fracnull_v2.0.ps}
\caption{Statistical fraction of null estimates of $\hat{p}$ provided by the ML, MP, MP2 and AS estimators
applied on Monte-Carlo measurements, as a function of the
SNR, in the canonical case.}
\label{fig:fracnull}
\end{figure}
\begin{figure*}[bp!]
\begin{equation}
B(I_0,p_0,\psi_0\, | \,I,p,\psi, \tens{\Sigma}) \quad \propto \quad \sqrt{\frac{\det(\tens{\Sigma}^{-1})}{(2\pi)^3}}\,
\mathrm{exp} \left \lgroup - \frac{1}{2}
\left[ \begin{array}{c}
I -I_0 \\
p \, I\, \cos(2\psi)-p_0\,I_0\cos(2\psi_0) \\
p \, I\, \sin(2\psi)-p_0\,I_0\sin(2\psi_0) \\\end{array}
\right] ^{T}
\tens{\Sigma}^{-1}
\left[ \begin{array}{c}
I-I_0 \\
p\,I\,\cos(2\psi)-p_0\,I_0\,\cos(2\psi_0)\\
p\,I\,\sin(2\psi)-p_0\,I_0\,\sin(2\psi_0)\\\end{array}
\right]
\right \rgroup \, ,
\tag{23}
\label{eq:B_ipphi}
\end{equation}
\end{figure*}
\begin{figure*}[bp!]
\begin{equation}
B_{2D}(p_0,\psi_0\, | \,p,\psi, \tens{\Sigma}_p)\quad \propto \quad \frac{1}{\pi \sigma_{p,G}^2} \,
\mathrm{exp} \left \lgroup - \frac{1}{2}
\left[ \begin{array}{c}
p \, \cos(2\psi)-p_0 \cos(2\psi_0) \\
p \, \sin(2\psi)-p_0 \sin(2\psi_0) \\\end{array}
\right] ^{T}
\tens{\Sigma}_{p}^{-1}
\left[ \begin{array}{c}
p\,\cos(2\psi)-p_0\,\cos(2\psi_0)\\
p\,\sin(2\psi)-p_0\,\sin(2\psi_0)\\\end{array}
\right]
\right \rgroup \, ,
\label{eq:B_2d_polar}
\tag{25}
\end{equation}
\end{figure*}
\subsection{Modified Asymptotic estimator}
\label{sec:mas_estimator}
A novel, powerful asymptotic estimator has been introduced by \citet{Plaszczynski2014} to correct the
discontinuous distribution of the standard estimators while keeping their
asymptotic properties. It has been derived from a first-order expansion of the Asymptotic estimator,
modified to ensure positivity, smoothness and asymptotic convergence at high SNR.
The Modified Asymptotic (MAS) estimator is defined as follows:
\begin{equation}
\label{eq:mas1}
\hat{p}_{\text{MAS}}=p- b^2 \cdot \frac{1-e^{-p^2 / b^2}}{2p} \, ,
\end{equation}
where the 'noise-bias' $b^2$ is given by Eq.~\ref{eq:sigp_equiv} and is computed using a polarization angle assessed for each sample with the asymptotic
estimator of $\psi$.
P14 also provides a sample estimate of the variance of
the estimator that is shown to represent asymptotically the absolute risk function (defined in Sect.~\ref{sec:variance_risk}) of
the estimator:
\begin{equation}
\sigma^2_{\hat{p}, MAS} =\frac{\sigma_{\rm Q}^{\prime2}\cos^2(2\psi-\theta)+ \sigma_{\rm U}^{\prime2}\sin^2(2\psi-\theta)}{I_0^2} \, .
\end{equation}
This estimator focuses on providing a ``good'' distribution, one that
transforms smoothly from a Rayleigh-like to a Gaussian one, the latter
being reached in the canonical case for an SNR of about 2.
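A direct transcription of Eq.~\ref{eq:mas1} reads as follows (a sketch, not P14's reference implementation); the test values assume the canonical case, where $b^2=\sigma_p^2$.
\begin{verbatim}
# Sketch of the MAS estimator of p0 (valid for p > 0).
import numpy as np

def p_mas(p, b2):
    # Smooth, strictly positive correction that converges to the
    # AS estimate sqrt(p^2 - b^2) at high SNR.
    return p - b2 * (1.0 - np.exp(-p**2 / b2)) / (2.0 * p)

# Canonical example with sigma_p = 0.01: the correction fades at high SNR.
for p in (0.01, 0.02, 0.05):
    print(p, p_mas(p, 0.01**2))
\end{verbatim}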
\subsection{Bayesian estimators}
\label{sec:bayesian_estimators}
The pdfs introduced in Sect.~\ref{sec:estimators} provide the probability
to observe a set of polarization measurements ($I$, $p$, $\psi$)
given the true polarization parameters ($I_0$, $p_0$, $\psi_0$) and the covariance matrix $\tens{\Sigma}$.
Because we are interested in the opposite, i.e., obtaining an estimate of the true polarization parameters given a
measurement and the knowledge of the noise properties, we use Bayes' theorem to build the
posterior distribution. The posterior pdf $B$
is given in the 3D case by
\begin{eqnarray}
\label{eq:bayesian_3d}
& & B( I_0, p_0, \psi_0\, | \,I,p,\psi, \tens{\Sigma}) = \\
& & \frac{f(I,p,\psi\, | \,I_0, p_0, \psi_0, \tens{\Sigma}) \cdot \kappa (I_0, p_0, \psi_0)}{ \int_{0}^{+\infty} \int_{0}^{1} \int_{-\pi/2}^{\pi/2} f(I,p,\psi\, | \,I'_0, p'_0, \psi'_0, \tens{\Sigma})\, \kappa (I'_0, p'_0, \psi'_0) \, dI'_0dp'_0d\psi'_0}\, , \nonumber
\end{eqnarray}
where $\kappa(I_0,p_0,\psi_0)$ is the prior distribution,
which represents the a priori knowledge of the true polarization parameters and has to be positive everywhere
and normalized to 1 over the definition ranges of $I_0$, $p_0$ and $\psi_0$.
When no a priori knowledge is available, we have to properly define a 'flat', or non-informative, prior that
encodes this ignorance. A class of non-informative priors is given by the Jeffreys' priors \citep{Jeffrey1939}, where
the ignorance is defined under symmetry transformations that leave the prior invariant.
As discussed by \citet{Quinn2012} for the two dimensional case, this kind of prior can be built
as a uniform prior in cartesian space ($Q_0$,$U_0$) or in polar space ($p_0$, $\psi_0$),
both expressing the ignorance of location.
We will prefer the latter, uniform in polar space, which ensures uniform sampling even for small values of $p_0$.
While $p_0$ and $\psi_0$ are only defined on a finite range ($[0,1]$ and $[-\pi/2,\pi/2]$, respectively),
the intensity $I_0$ may be infinite in theory, which leads to an issue when defining the ignorance prior.
In practice, an approximation of the ignorance prior for $I_0$ will be chosen as a top-hat centered on the measurement $I$,
wide enough to cover the wings of the distribution until they become negligible. Such uniform priors lead to the
expression of $B$ given in Eq.~\ref{eq:B_ipphi}, where the normalization factor has been omitted.
\setcounter{equation}{23}
We emphasize that the definition of the ignorance prior introduced above becomes data-dependent, which is not strictly following
the Bayesian approach. Furthermore, the question of the definition range of the prior and the introduction
of non-flat priors will be discussed in Sect.~\ref{sec:priors}, in the context of
comparing the performance of the estimators inspired by the Bayesian approach.
Similarly, the posterior pdf in 2D (i.e., when the total intensity is perfectly known, $I=I_0$) is defined by
\begin{equation}
\hspace{-0.3cm}
B_{2D}( p_0, \psi_0\, | \,p,\psi, \tens{\Sigma}_p)=\frac{f_{2D}(p,\psi\, | \,p_0, \psi_0, \tens{\Sigma}_p) \cdot \kappa (p_0, \psi_0)}{ \int\limits_{0}^{1} \int\limits_{-\pi/2}^{+\pi/2} f_{2D}(p,\psi | p'_0, \psi'_0, \tens{\Sigma}_p)\, \kappa (p'_0, \psi'_0) \, dp'_0d\psi'_0} \, .
\label{eq:b_2d}
\end{equation}
\setcounter{equation}{25}
The analytical expression of the posterior pdf $B_{2D}$ with a flat prior
is given in Eq.~\ref{eq:B_2d_polar}, where the normalization factor has been omitted and the intensity has been assumed perfectly known
($I=I_0$). Illustrations of this posterior pdf are presented in Appendix~\ref{sec:pdf_posterior}.
We also introduce $B_p$ and $B_{\psi}$, the 1D Bayesian posterior pdfs of $p$ and $\psi$, respectively,
defined as the marginalizations of $B_{2D}$ over $\psi$ and over $p$, respectively.
We use the 2D Bayesian posterior pdf $B_{2D}$ to build two
frequentist estimators: the MAP and the MB.
The MAP2 and MAP estimators, in 2D and 1D respectively, are simply defined as the ($p_0$, $\psi_0$) values corresponding
to the maximum of the posterior pdfs: $B_{2D}$ in the former case, and $B_p$ and $B_{\psi}$ in the latter.
We recall that these estimators
match exactly the ML estimators of Sect.~\ref{sec:estimators} in two and one dimensions, respectively,
when a flat prior is assumed. Hence the MAP2 estimators give back the polarization measurements,
whereas the MAP estimators provide a simple way to compute the ML estimates.
The Mean Bayesian Posterior (MB) estimators are defined as the first order moments of the posterior pdf:
\begin{equation}
\hat{p}_{\text{MB}} \equiv \int_{-\pi/2}^{+\pi/2} \int_{0}^{1} p_0 B_{2D}(p_0,\psi_0\, | \, p, \psi, \tens{\Sigma}_p ) dp_0d\psi_0
\end{equation}
and
\begin{equation}
\hat{\psi}_{\text{MB}} \equiv \int_{\psi-\pi/2}^{\psi+\pi/2} \int_{0}^{1} \psi_0 B_{2D}(p_0,\psi_0\, | \, p, \psi, \tens{\Sigma}_p ) dp_0d\psi_0 \, .
\end{equation}
Notice that in the definition of $\hat{\psi}_{\text{MB}} $ the integral over $\psi_0$ is performed over a range centered on the measurement $\psi$.
This has to be done to take into account the circularity of the posterior pdf over the $\psi_0$ dimension.
Note that $B_{2D}(p_0,\psi_0\, | \, p, \psi, \tens{\Sigma}_p ) = B_{2D}(p_0,\psi_0+\pi \, | \, p, \psi, \tens{\Sigma}_p )$.
We stress that the frequentist estimators inspired by a Bayesian approach, $\hat{p}_{\text{MB}}$ and $\hat{\psi}_{\text{MB}}$, introduced above in the 2D case can be
easily extended to the 3D case by integrating the pdf $B( I_0, p_0, \psi_0\, | \,I,p,\psi, \tens{\Sigma})$
of Eq.~\ref{eq:bayesian_3d} over the $I_0$, $p_0$ and $\psi_0$ dimensions.
This is extremely powerful when the uncertainty of the intensity $I$ has to be taken into account in the estimate of the polarization parameters,
which is highly recommended in some circumstances, such as a low SNR on $I$ ($<$5)
or the presence of an unpolarized component on the line of sight (see Sect.~\ref{sec:3Dcase} and PMA I for more details).
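In practice, $B_{2D}$ and its moments can be evaluated on a grid. The following sketch computes $\hat{p}_{\text{MB}}$ in the canonical case with a flat prior; the grid resolution and the measurement values are illustrative.
\begin{verbatim}
# Gridded Mean Bayesian Posterior estimate of p0, canonical case.
import numpy as np

def p_mb_canonical(p, psi, sigma_p, n=512):
    p0 = np.linspace(0.0, 1.0, n)[:, None]
    psi0 = np.linspace(-np.pi / 2, np.pi / 2, n)[None, :]
    # Squared Mahalanobis distance for the canonical (diagonal) Sigma_p.
    d2 = ((p * np.cos(2 * psi) - p0 * np.cos(2 * psi0))**2
          + (p * np.sin(2 * psi) - p0 * np.sin(2 * psi0))**2) / sigma_p**2
    post = np.exp(-0.5 * d2)  # flat prior on [0,1] x [-pi/2, pi/2]
    # First moment; the uniform grid weights cancel in the ratio.
    return np.sum(p0 * post) / np.sum(post)

print(p_mb_canonical(p=0.01, psi=0.0, sigma_p=0.01))  # low-SNR example
\end{verbatim}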
\section{Uncertainties}
\label{sec:uncertainties}
\subsection{Variance and risk function}
\label{sec:variance_risk}
It is important not to confuse the variance (noted $\mathsf{V}$) of an estimator with its absolute
risk function (noted $\mathsf{R}$). For any distribution of a random variable $X$ the definitions
are:
\begin{eqnarray}
\label{eq:var_risk}
\mathsf{V}&\equiv&E\left[\left(X-E[X]\right)^2\right] \, , \\
\mathsf{R}&\equiv& E\left[\left(X- X_0\right)^2\right] \, ,
\end{eqnarray}
where $E[X]$ is the expectation of the random variable $X$ and $X_0$ is the true value. Introducing the absolute bias $\mathsf{B}$ through $E[X]=X_0+\mathsf{B}$ and
expanding both relations, the link between the variance and the absolute risk function is
simply:
\begin{equation}
\label{eq:var_risk_link}
\mathsf{V}=\mathsf{R}-\mathsf{B}^2 \, .
\end{equation}
Therefore, at constant absolute risk function, the variance decreases as the absolute bias increases,
and the two are equal when the estimator is unbiased. The variance does not require knowing
the true value of the random variable, which makes it useful for providing an uncertainty estimate, but
it has to be used extremely carefully in the presence of bias: in such cases,
the variance will always underestimate the uncertainty.
Furthermore, it is known that the variance is not appropriate for providing uncertainties with non-Gaussian distributions,
which is the case for the polarization fraction and angle. In such circumstances, confidence intervals (see Sect.~\ref{sec:confidence_intervals}) are the preferred method for obtaining robust uncertainties.
The variance, however, is often used as a proxy for the uncertainty in the high-SNR regime.
We will detail in Sects.~\ref{sec:uncertainty_comparison} and \ref{sec:uncertainty_psi_comparison} under which conditions this can still be applied.
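The identity of Eq.~\ref{eq:var_risk_link} is easy to verify numerically, as in the following sketch for the na\"ive estimator in the canonical case (values illustrative).
\begin{verbatim}
# Monte-Carlo check of V = R - B^2 for the naive estimator.
import numpy as np

rng = np.random.default_rng(1)
p0, sigma = 0.1, 0.1                        # SNR = 1
q = p0 + rng.normal(0.0, sigma, 1_000_000)  # psi0 = 0: q0 = p0, u0 = 0
u = rng.normal(0.0, sigma, 1_000_000)
p = np.hypot(q, u)

V = np.var(p)               # variance: spread around the mean
R = np.mean((p - p0)**2)    # absolute risk: spread around the truth
B = np.mean(p) - p0         # absolute bias
print(V, R - B**2)          # the two numbers agree
\end{verbatim}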
\subsection{Posterior uncertainties}
\label{sec:credible_intervals}
One of the main benefits of the Bayesian approach is to provide simple estimates of the uncertainties associated with the
polarization estimates. One option is to build credible intervals around the MAP estimates, as first proposed by \citet{Vaillancourt2006};
the other option is to use the variance of the pdf.
Given a polarization measurement ($p$, $\psi$) and the posterior pdf $B_{2D}(p_0,\psi_0 | p, \psi, \tens{\Sigma}_p)$,
the lower and upper limits of the $\lambda$\% credible intervals are defined as
the extrema of $p_0$ and $\psi_0$ over the iso-probability region $\Omega(\lambda, p, \psi)$ on which
the integral of $B_{2D}$ equals $\lambda$\%, so that
\begin{equation}
\iint_{\Omega(\lambda,p,\psi)} B_{2D} (p_0, \psi_0 \, | \, p, \psi, \tens{\Sigma}_p) \, dp_0d\psi_0= \frac{\lambda}{100} \, .
\end{equation}
These intervals, $[p^{\rm low}_{\text{MAP2}},p^{\rm up}_{\text{MAP2}}]$ and
$[\psi^{\rm low}_{\text{MAP2}},\psi^{\rm up}_{\text{MAP2}}]$, estimated from the 2D expression of $B_{2D}$ are defined around the
MAP2 estimates $\hat{p}_{\text{MAP2}}$ and $\hat{\psi}_{\text{MAP2}}$, which are
equal to the measurements ($p$, $\psi$).
A similar definition can be given in the one-dimensional case, which leads to different results.
The lower and upper limits, $p^{\rm low}_{\text{MAP}}$ and $p^{\rm up}_{\text{MAP}}$, around $\hat{p}_{\text{MAP}}$ are defined as follows
\begin{equation}
\int_{p^{\rm low}_{\text{MAP}}}^{p^{\rm up}_{\text{MAP}}} B_p (p_0 \, | \, p, \tens{\Sigma}_p) \, dp_0= \frac{\lambda}{100} \, ,
\end{equation}
with the constraint that the posterior probability function is identical for $p^{\rm low}_{\text{MAP}}$ and $p^{\rm up}_{\text{MAP}}$. Similarly, the lower and upper limits,
$\psi^{\rm low}_{\text{MAP}}$ and $\psi^{\rm up}_{\text{MAP}}$, around $\hat{\psi}_{\text{MAP}}$ are given by
\begin{equation}
\int_{\psi^{\rm low}_{\text{MAP}}}^{\psi^{\rm up}_{\text{MAP}}} B_{\psi} (\psi_0 \, | \, \psi, \tens{\Sigma}_p)\, d\psi_0= \frac{\lambda}{100} \, .
\end{equation}
We recall that this integral has to be computed around the measurement value $\hat{\psi}_{\text{MAP}}$ to take into account the circularity of the posterior pdf with the polarization angle.
Notice that the credible intervals built in 1D or 2D are not supposed to be identical, as
($\hat{p}_{\text{MAP2}}$, $\hat{\psi}_{\text{MAP2}}$) and
($\hat{p}_{\text{MAP}}$, $\hat{\psi}_{\text{MAP}}$) are not equal in the general case.
The second definition of the uncertainty comes from the second moment of the 1D posterior probability
density functions $B_p$ and $B_\psi$, as follows:
\begin{equation}
\quad \quad \quad \sigma_{p,\mathrm{MB}}^2 \equiv \int_{0}^{1} (p_0- \hat{p}_{\text{MB}})^2 B_p(p_0\, | \, p, \tens{\Sigma}_p ) \, dp_0\, ,
\end{equation}
and
\begin{equation}
\label{eq:sigma_psi_mb}
\quad \quad \quad \sigma_{\psi,\mathrm{MB}}^2 \equiv \int_{\psi-\pi/2}^{\psi+\pi/2} (\psi_0-\hat{\psi}_{\text{MB}} ) ^2 B_{\psi}(\psi_0\, | \, \psi, \tens{\Sigma}_p )\, d\psi_0 \, .
\end{equation}
The subtraction between the two polarization angles must be done with care, restricting
the maximum distance to $\pi/2$. At very low SNR, i.e. for an almost flat posterior pdf,
the uncertainty reaches the upper limit
$\sigma_{\psi,\mathrm{MB}} \le \pi/\sqrt{12} \, \mathrm{rad} = 51.^{\circ}96$.
We stress that these 1-$\sigma$ estimates may not be associated with the usual 68\% confidence
intervals of the normal distribution, because of the asymmetrical shape of the posterior distribution and because of the circularity of the angular variable.
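Operationally, once a 1D posterior has been gridded, the $\lambda$\% interval around the MAP estimate can be obtained by thresholding the pdf at its highest-density values; for a unimodal posterior this reproduces the equal-probability constraint above. The sketch below illustrates this on a toy posterior (helper name and Gaussian shape ours); it assumes a uniform grid.
\begin{verbatim}
# Credible interval from a gridded 1D posterior by highest-density
# thresholding; any pdf sampled on a uniform grid works.
import numpy as np

def credible_interval(p0_grid, b_p, lam=0.68):
    dx = p0_grid[1] - p0_grid[0]
    w = b_p / (b_p.sum() * dx)           # normalize the posterior
    order = np.argsort(w)[::-1]          # highest density first
    inside = np.cumsum(w[order] * dx) <= lam
    sel = order[inside]
    return p0_grid[sel].min(), p0_grid[sel].max()

grid = np.linspace(0.0, 1.0, 2001)
post = np.exp(-0.5 * ((grid - 0.3) / 0.05)**2)   # toy posterior
print(credible_interval(grid, post))             # ~ (0.25, 0.35)
\end{verbatim}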
\subsection{Confidence intervals}
\label{sec:confidence_intervals}
So far we have considered point estimation of the true $p_0$ value,
which is somewhat tricky in the low SNR regime because of the
non-Gaussian nature of the estimator distribution.
A different approach, which takes into account the entire shape of the
distribution, is to build confidence regions (or intervals); at a
given significance level, these provide bounds on the true value
given some estimator value.
\citet{Simmons1985} have built the so-called Neyman ``confidence belt'' for
the na\"ive estimator in the canonical case. PMA I proposed the
construction of two-dimensional ($p_0,\psi_0$) intervals, for the
general covariance matrix case.
The classical construction suffers from a standard issue: at very low SNR the confidence
interval lies entirely in the unphysical $p<0$ region, and both
previous studies provide over-conservative regions.
P14 implemented the Feldman-Cousins prescription
\citep{Feldman1998}, which is based on using a likelihood-ratio
criterion in the Neyman construction. This allows building
intervals that always lie in the physical region without ever being conservative.
They provided these intervals for the MAS estimator
including analytical approximations to the upper and lower limits for
68, 95 and 99.5\% significance levels.
\begin{figure}[t!p]
\vspace{1.2cm}
\centering
\begin{tabular}{c}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_e1_r0_v2.0.ps}\\
\includegraphics[width=.5\textwidth]{Estimator_comparison_risk_e1_r0_v2.0.ps} \\
\includegraphics[width=.5\textwidth]{Estimator_comparison_jb_e1_r0_v2.0.ps}
\end{tabular}
\caption{Comparison of the average relative bias (top), risk function (middle) and Jarque-Bera test (bottom)
of the pure measurements (na\"ive, black), ML (dashed blue),
MP (dashed light green), MP2 (dashed green), AS (dashed red),
MAS (orange) and MB (pink) $\hat{p}$ estimators in the canonical case,
as a function of the SNR $p_0/\sigma_p$.
The dashed lines stand for the discontinuous estimators presenting a peak of their output distribution at $\hat{p}$=0.}
\label{fig:comparison_p_estimator}
\end{figure}
\begin{figure}[t!p]
\vspace{1.2cm}
\centering
\begin{tabular}{c}
\psfrag{xtitle}{$\hat{p}/p_0$}
\includegraphics[width=.5\textwidth]{Estimator_comparison_histo_e1_r0_sn1_v1.0.ps} \\
\psfrag{xtitle}{$\hat{p}/p_0$}
\includegraphics[width=.5\textwidth]{Estimator_comparison_histo_e1_r0_sn2_v1.0.ps} \\
\psfrag{xtitle}{$\hat{p}/p_0$}
\includegraphics[width=.5\textwidth]{Estimator_comparison_histo_e1_r0_sn5_v1.0.ps} \\
\end{tabular}
\caption{Output distributions of the na\"ive (black), MAS (orange) and the MB (pink) $\hat{p}$
estimators in the canonical case ($\varepsilon_\mathrm{eff}$=1), for three levels of the SNR
$p_0/\sigma_p$=1,2 and 5 (from top to bottom). In the case of the MB estimator,
we show two setups of $p_0$=1\% and 50\% to illustrate the dependence of the output distribution on the $p_0$ value,
due to the prior used in the Bayesian approach ($\hat{p}_{\text{MB}} \in [0,1]$ so that $\hat{p}_{\text{MB}}/p_0 \in [0,1/p_0]$).
The other estimators are not sensitive to the true value $p_0$.}
\label{fig:estimator_comparison_histo}
\end{figure}
\section{$\hat{p}$ estimator performance}
\label{sec:comparison_p_estimators}
\subsection{Methodology}
\label{sec:comparison_p_methodoloy}
We investigate in this section the capability of the seven $\hat{p}$ estimators introduced in the previous sections
to provide polarization fraction estimates with low bias:
the na\"ive measurements $p$, the Maximum Likelihood (ML), the Most Probable (MP and MP2), the Asymptotic (AS),
the Modified Asymptotic (MAS) and the Mean Bayesian Posterior (MB) estimators.
Their performance is first quantified in terms of relative bias and risk function of the
resulting estimates. Given true polarization parameters ($p_0$, $\psi_0$) and a covariance matrix $\tens{\Sigma}_p$,
we build a sample of one million simulated measurements ($p$,$\psi$) by adding noise on the true Stokes parameters
using the covariance matrix. We define the relative bias and risk function on $p$ as follows:
\begin{equation}
\mathrm{Bias}_p \equiv \frac{\left<\hat{p}\right> - p_0}{\sigma_{p,G}} \quad \mathrm{and} \quad \mathrm{Risk}_p \equiv \frac{\left<(\hat{p}-p_0)^2\right>}{\sigma_{p,G}^2} \, ,
\end{equation}
where $\hat{p}$ is the polarization fraction estimate computed on the simulated measurements $p$,
$p_0$ is the true polarization fraction, $< >$ denotes the average computed over the simulated sample,
and $\sigma_{p,G}$ is the estimate of the noise of the polarization fraction.
The choice of $\sigma_{p,G}$ to scale the absolute bias and risk function, as a proxy of the
$\hat{p}$ uncertainty, is motivated by the fact that it depends only on the effective ellipticity and not on $\psi_0$.
Notice that this choice can lead to a relative risk function falling below 1 at low SNR, because
$\sigma_{p,G}^2$ is larger than the true variance in this regime.
The accuracy of the $\hat{p}$ estimators is also quantified through the shape of their output distributions.
We use the Jarque-Bera statistic \citep{Jarque1980} as a test of normality of the output distribution, defined by
\begin{equation}
JB = \frac{n}{6} \left( \frac{\mu_3^2}{\mu_2^{3}} + \left(\frac{\mu_4}{\mu_2^2}-3\right)^2 / 4 \right) \, ,
\end{equation}
where $n$ is the number of samples and $\mu_i$ is the na\"ive estimate of the $i$th central moment of the distribution.
This test is based on the joint hypothesis that the skewness and the excess kurtosis are simultaneously zero.
A value $JB$=0 means perfect agreement with {\it normality} up to the 4th order, but does not prevent departures from normality at higher orders.
The $JB$ statistic tends to a $\chi^2$ distribution with two degrees of freedom when $n$ becomes large enough. Hence
$JB$ has to satisfy the condition $JB<\chi^2_{\alpha}$, once a significance level $\alpha$ has been chosen.
For significance levels $\alpha$=5\% and 1\%, we get the conditions $JB<5.99$ and $JB<9.21$, respectively.
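The $JB$ statistic is simple to compute from the sample central moments, as in the following sketch (sample sizes illustrative): a Gaussian sample yields a small value, while a Rayleigh-like sample fails the test by a wide margin.
\begin{verbatim}
# Jarque-Bera statistic from the naive central moments.
import numpy as np

def jarque_bera(x):
    m = x - x.mean()
    mu2, mu3, mu4 = (np.mean(m**k) for k in (2, 3, 4))
    return x.size / 6.0 * (mu3**2 / mu2**3 + (mu4 / mu2**2 - 3.0)**2 / 4.0)

rng = np.random.default_rng(2)
print(jarque_bera(rng.normal(size=100_000)))    # small: normality holds
print(jarque_bera(rng.rayleigh(size=100_000)))  # >> 9.21: rejected
\end{verbatim}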
\subsection{Canonical case}
\label{sec:comparison_p_estimators_canonical}
We first assume the canonical simplification of the covariance matrix ($\varepsilon_\mathrm{eff}$=1).
The relative $\mathrm{Bias}_p$ and $\mathrm{Risk}_p$ quantities are shown on
Fig.~\ref{fig:comparison_p_estimator} for the seven $\hat{p}$ estimators.
We recall that the discontinuous estimators, shown as dashed lines
(ML (blue), MP (light green), MP2 (green) and AS (red)),
have an output distribution presenting a strong peak at zero, which artificially lowers the statistical relative
$\mathrm{Bias}_p$ when null values are simply included instead of being treated as upper limits,
as discussed in Sect.~\ref{sec:discontinuous_estimators}. Indeed,
these estimators show the lowest relative biases (top panel of Fig.~\ref{fig:comparison_p_estimator})
compared to the MAS (orange) and MB (pink) estimators.
Hence the ML and MP2 estimators seem to statistically over-correct the data below SNR=3.
Consequently, the ML, MP and AS $\hat{p}$ estimators have to be used with extreme care to
deal with null estimates. We suggest here to focus on the two continuous estimators: MAS and MB.
MAS provides the best performance in terms of relative bias over the whole range of SNR, while
MB appears less and less efficient at correcting the bias when the SNR tends to zero.
At larger SNR ($>$2), MB tends to slightly over-correct with a small negative relative bias (2\% of $\sigma_p$)
up to SNR $\sim$5, while MAS converges quickly to a null relative bias for SNR $>$ 3.
The MB estimator clearly minimizes the risk function (in the range 0.7$<$SNR$<$3.2),
as expected for this kind of posterior estimator.
At larger SNR ($>$3.2) both MAS and MB have roughly the same behavior, even if the risk function associated with
MAS appears slightly lower.
The resulting $\hat{p}_{\text{MB}}$ distribution is highly asymmetric at low SNR (see upper panels of Fig.~\ref{fig:estimator_comparison_histo}),
with a sharp cutoff at 0.8$\sigma_p$. Moreover, we note that the output $\hat{p}_{\text{MB}}$ distribution depends not only on the SNR
$p_0/\sigma_p$, but also on the value of the true polarization fraction $p_0$. We report two cases, $p_0$=1\% (pink)
and 50\% (dotted pink) in Fig.~\ref{fig:estimator_comparison_histo}. This comes from the prior of the Bayesian method, which
bounds the estimate $\hat{p}_{\text{MB}}$ between 0 and 1. As a consequence, the {\it normality} of the Bayesian distribution
is extremely poor, as shown in the bottom panel of Fig.~\ref{fig:comparison_p_estimator}, where the JB statistic of the MB estimator
is larger than 9.21 (the $\chi^2$ threshold at the 1\% significance level) over the whole range of SNR explored here (up to SNR$\sim$5).
On the contrary, the resulting $\hat{p}_{\text{MAS}}$ distribution of Fig.~\ref{fig:estimator_comparison_histo} looks much better,
mimicking the Rayleigh distribution for low SNR and going neatly to the Gaussian regime, as pointed out by P14.
The JB of the MAS estimator is the lowest for SNR $>$3 (see bottom panel of
Fig.~\ref{fig:comparison_p_estimator}), illustrating the consistency between the MAS distribution and the normal distribution.
Notice that all distributions, na\"ive, MAS and MB, converge to a Gaussian distribution at higher SNR.
\begin{figure}[!tp]
\psfrag{toto1}{$\kappa \in [0, 100 p_0]$}
\psfrag{toto2}{$\kappa \in [0, 10 p_0]$}
\psfrag{toto3}{$\kappa \in [0, 5 p_0]$}
\psfrag{toto4}{$\kappa \in [0, 3 p_0]$}
\psfrag{toto5}{$\kappa \in [0, 2 p_0]$}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_v2.0.ps}
\caption{Impact of the flat prior interval upper limit on the relative $\mathrm{Bias}_p$ performance of the MB estimator.}
\label{fig:impact_prior}
\end{figure}
\begin{figure}[!t]
\psfrag{-----xtitle-----}{$<p_{0,i}>/\sigma_{p,G}$}
\psfrag{----------ytitle----------}{$< \hat{p}_{i} - p_{0,i}> / \sigma_{p,G}$}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo_v2.0.ps}
\caption{Illustration of the improvement of the MB estimator performance when using evolved priors.
Starting from an input distribution of true values ($p_{0,i}$), shown in Fig.~\ref{fig:impact_prior_histo_distrib}, the statistical relative bias is shown for
the na\"ive and MAS estimators and for the MB estimator with three different priors.}
\label{fig:impact_prior_histo}
\end{figure}
\subsection{Impact of the Bayesian prior}
\label{sec:priors}
\begin{figure}[!tph]
\vspace{1.1cm}
\begin{tabular}{c}
\psfrag{toto1}{$<p_{0,i}>/\sigma_{p,G}$=1}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo_distrib1_v2.0.ps} \\
\psfrag{toto1}{$<p_{0,i}>/\sigma_{p,G}$=2}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo_distrib2_v2.0.ps} \\
\psfrag{toto1}{$<p_{0,i}>/\sigma_{p,G}$=3}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo_distrib3_v2.0.ps}
\end{tabular}
\caption{Output distributions of the $\hat{p}$ estimates starting from a distribution of independent
true values ($p_{0,i}$) centered around 10\% of polarization fraction (grey shaded region), shown at three
levels of noise characterized by the mean SNR $\langle{p_{0,i}}\rangle/\sigma_{p,G}$=1, 2 and 3 (top, middle and bottom, respectively).
The na\"ive (black) and MAS (orange) output distributions are compared to the MB output distributions obtained with three different priors:
flat prior between 0 and 1 (solid pink), set to the na\"ive output distribution (dotted pink) and set to the true input distribution (dashed pink).}
\label{fig:impact_prior_histo_distrib}
\end{figure}
\begin{figure}[!tph]
\vspace{1.1cm}
\begin{tabular}{c}
\psfrag{toto1}{$<p_{0,i}>/\sigma_{p,G}$=1}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo2_distrib1_v2.0.ps} \\
\psfrag{toto1}{$<p_{0,i}>/\sigma_{p,G}$=2}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo2_distrib2_v2.0.ps} \\
\psfrag{toto1}{$<p_{0,i}>/\sigma_{p,G}$=3}
\psfrag{MB toto2}{MB flat prior}
\psfrag{MB toto3}{MB prior ($\hat{p}_i$)}
\psfrag{MB toto4}{MB prior ($p_{0,i}$)}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_prior_histo2_distrib3_v2.0.ps}
\end{tabular}
\caption{Same as Fig.~\ref{fig:impact_prior_histo_distrib} with a different initial distribution $(p_{0,i})$ centered on 20\% of polarization fraction.}
\label{fig:impact_prior_histo2_distrib}
\vspace{3cm}
\end{figure}
The choice of the prior is crucial in the Bayesian approach, and
we have seen in Sect.~\ref{sec:bayesian_estimators} how hard it is to define a non-informative prior.
The MB estimator studied
up to now assumes a flat prior in $p_0$ between 0 and 1, equivalent to no a priori knowledge.
In practice, when dealing with astrophysical data, we can bound the expected true values of the polarization fraction between much tighter limits.
We know, for example, that the polarization fraction of the synchrotron signal peaks at $\sim$75\%, but never reaches this maximum due
to line-of-sight averaging. The maximum polarization fraction of the dust thermal emission is still a debated issue,
but is unlikely to be larger than 20 to 30\% \citep{Benoit2004}. Appropriate priors can then be introduced to fold this a priori physical knowledge into
the MB estimator.
We have already observed in Sect.~\ref{sec:comparison_p_estimators_canonical} how the output distribution of the $\hat{p}_{\text{MB}}$ estimates
is impacted by the value of the true $p_0$ (1\% or 50\%) due to the upper limit ($p_0$$<$1) of the prior, see Fig.~\ref{fig:estimator_comparison_histo}.
We explore here a family of simple priors defined by $\kappa(p'_0) = 1/(kp_0)$ for $p'_0 \in[0, kp_0]$ and 0 otherwise,
where we adjust the upper limit of the prior as a function of the expected true value.
We performed Monte Carlo simulations in the canonical case by setting the true value at $p_0$=1\% and varying the upper limit of the prior ($k=2,3,5,10,$ and 100).
The statistical relative $\mathrm{Bias}_p$ of the MB estimator associated with each version of the prior is shown in Fig.~\ref{fig:impact_prior}.
The smaller the upper limit, the lower the relative $\mathrm{Bias}_p$, as expected. However, the upper limit of the prior has to be very constraining ($k\le3$) to
observe a decrease of the relative bias in the range of SNR between 1.5 and 3, which requires very good a priori knowledge.
Using more relaxed priors ($k\ge5$) does not significantly improve the performance of the MB estimator at SNR$>$1.
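For reference, the truncated flat prior amounts to a one-line change in the gridded MB computation sketched in Sect.~\ref{sec:bayesian_estimators}; the values of $k$ and of the measurement below are illustrative, and the $1/(kp_0)$ normalization cancels in the posterior-mean ratio.
\begin{verbatim}
# Gridded MB estimate with a truncated flat prior on [0, k p0].
import numpy as np

def p_mb_prior(p, psi, sigma_p, p0_max, n=512):
    p0 = np.linspace(0.0, 1.0, n)[:, None]
    psi0 = np.linspace(-np.pi / 2, np.pi / 2, n)[None, :]
    d2 = ((p * np.cos(2 * psi) - p0 * np.cos(2 * psi0))**2
          + (p * np.sin(2 * psi) - p0 * np.sin(2 * psi0))**2) / sigma_p**2
    prior = (p0 <= p0_max).astype(float)   # flat on [0, k p0], zero beyond
    post = prior * np.exp(-0.5 * d2)
    return np.sum(p0 * post) / np.sum(post)

# Tighter upper limits pull the low-SNR estimate down (cf. the figure).
for k in (100, 10, 3, 2):
    print(k, p_mb_prior(p=0.01, psi=0.0, sigma_p=0.01, p0_max=k * 0.01))
\end{verbatim}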
When dealing with maps of polarized data, an interesting approach is to start by estimating the histogram of $p$ values in the map and to use it as a prior in
the MB estimator, even if this again moves away from a strictly Bayesian approach by introducing a data-dependent prior.
As a first guess, the prior can be set to the histogram of the na\"ive estimates of $\hat{p}$, but
a more sophisticated prior would be a histogram of $p$ deconvolved from the errors, using a Maximum Entropy method for example.
We illustrate the performance of the MB estimator with this kind of prior on Figs.~\ref{fig:impact_prior_histo} and \ref{fig:impact_prior_histo_distrib}.
We start with a sample of 10~000 independent true values ($p_{0,i}$) ranging between 0 and 20\% of polarization fraction,
with a distribution shown as the grey shaded histogram in Fig.~\ref{fig:impact_prior_histo_distrib},
to which a random realization of the noise is added with the same noise level over the whole sample, leading
to varying SNRs through the sample.
We explore two extreme cases of the Bayesian prior, corresponding to i) an idealistic perfect knowledge of the input distribution
and ii) its first guess provided by the na\"ive estimates. Hence the prior is chosen as the input
distribution of the true $p_{0,i}$ values (dashed pink) and the output distribution of the na\"ive estimates (dotted pink).
We compare the performance of these two new versions of the MB estimators
with the na\"ive (black), MAS (orange) and flat prior MB (solid pink) estimators, in terms of relative bias in Fig.~\ref{fig:impact_prior_histo}.
We stress that the relative bias values are not defined as in Sect.~\ref{sec:comparison_p_methodoloy},
but now refer to the mean of the difference
between each true value $p_{0,i}$ and its associated estimate $\hat{p}_i$.
The pink shaded region provides the domain of the possible improvement of the
MB estimators, by setting an appropriate prior as close as possible to the true distribution.
The improvements may seem spectacular, leading to a statistical relative bias close to zero at all SNRs in the best configuration (dashed line).
Caution is warranted, however, when looking at the output distributions associated with these new MB estimators on Fig.~\ref{fig:impact_prior_histo_distrib},
shown for three levels of the noise chosen so that the mean SNR is $\overline{p_0}/\sigma_{p,G}$=1, 2 and 3.
At low SNR ($\simeq$1), the output distribution of the MB estimator with a {\it perfect} prior (dashed line) is extremely peaked around
the mean value of the sample $\overline{p_0}$, but does not match the input distribution at all. Even at higher SNR (2-3),
the three MB output distributions suffer from the same feature already mentioned in Sect.~\ref{sec:comparison_p_estimators_canonical},
a sharp cutoff at low values of $p$.
Using a prior that is too constraining will dramatically cut the extreme values of the input distribution.
By contrast, the na\"ive prior is quite effective in that it allows the MB estimator
to recover the upper limit of the input distribution reasonably well at a SNR$\gtrsim$2, while the other estimators fail to do so at such low SNR.
The performance of the MB estimator with an evolved prior will also strongly
depend on the initial true distribution of the polarization fraction. For example we
duplicated the analysis made above with a different initial distribution $(p_{0,i})$ centered on 20\% of
polarization fraction instead of 10\% (see Fig.~\ref{fig:impact_prior_histo2_distrib}).
In this configuration, the output distributions of the Bayesian estimators are not as strongly affected by the
cut-off at low $p$ as observed in Fig.~\ref{fig:impact_prior_histo_distrib}.
The MB estimator with the na\"ive prior appears extremely effective, even at low SNR ($\sim$2).
\subsection{Robustness to the covariance matrix}
\label{sec:robustness_covariance_matrix}
In PMA I we have discussed extensively the impact of the asymmetry of the covariance matrix on the
measurements of the polarization fraction. In particular, we have stressed that once the effective ellipticity
departs from the canonical case, the bias on the polarization fraction now depends on the true polarization angle $\psi_0$,
which remains unknown. We explore in this section how sensitive the performance of the various $\hat{p}$ estimators is to
the effective ellipticity of the covariance matrix.
\begin{figure*}[th!]
\centering
\begin{tabular}{cc}
\psfrag{ytitle}{$\hat{p}$}
\includegraphics[width=.5\textwidth]{impact_cov_epsi_v2.0.ps} &
\psfrag{ytitle}{$\hat{p}$}
\includegraphics[width=.5\textwidth]{impact_cov_epsi_bias_v2.0.ps}
\end{tabular}
\caption{Illustration of the robustness of the $\hat{p}$ estimators against the unknown $\psi_0$ parameter when
the covariance matrix departs from the canonical case. The covariance matrix is set up with $\varepsilon_\mathrm{eff}$=2, a SNR $p_0/\sigma_{p,G}$=1, and
a true polarization fraction $p_0$=0.1. For each value of $\psi_0$, we first illustrate (left panel) the performance of the seven estimators on
one particular measurement set to the maximum of the pdf. We then focus on the statistical average estimates $\hat{p}$
computed over 10~000 Monte-Carlo realizations for the na\"ive, MAS and MB estimators (right panel). The full lines stand for the mean, and the dot-dash lines for the
1-$\sigma$ dispersion.}
\label{fig:impact_cov_epsilon}
\end{figure*}
\begin{figure*}[bh!]
\centering
\begin{tabular}{cc}
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_fullcov_mas_v2.0.ps} &
\includegraphics[width=.5\textwidth]{Estimator_comparison_risk_fullcov_mas_v2.0.ps} \\
\includegraphics[width=.5\textwidth]{Estimator_comparison_bias_fullcov_post_v2.0.ps} &
\includegraphics[width=.5\textwidth]{Estimator_comparison_risk_fullcov_post_v2.0.ps} \\
\end{tabular}
\caption{Impact of the effective ellipticity of the covariance matrix on the statistical relative $\mathrm{Bias}_p$ (left column) and $\mathrm{Risk}_p$ (right column)
quantities in the {\it extreme} (light shaded region) and {\it low} (dark shaded regions) regimes, for
both MAS (orange, top) and MB (pink, bottom) $\hat{p}$ estimators.
The domain of the na\"ive measurements is repeated in grey shaded regions on both plots.
The canonical case of the MAS (and MB) is also repeated on each panel in dashed orange (and pink) lines.}
\label{fig:estimator_comparison_fullcov}
\end{figure*}
We illustrate the dependence of the $\hat{p}$ estimators on the true polarization angle $\psi_0$ in Fig.~\ref{fig:impact_cov_epsilon}.
Given true polarization parameters ($p_0$=0.1 and $\psi_0$ ranging between -$\pi/2$ and $\pi/2$),
a covariance matrix characterized by $\varepsilon_\mathrm{eff}$=2 and $\theta$=0, and a SNR
$p_0/\sigma_{p,G}$=1, we first set the polarization
measurements ($p$, $\psi$) to the maximum of the pdf $f_{2D}$ (left panel).
We then apply the six estimators to these measurements to get the $\hat{p}$ estimates for each $\psi_0$ between -$\pi/2$ and $\pi/2$.
With this particular setting, the MP2 (green) estimator gives back
the true polarization fraction $p_0$ whatever the polarization angle $\psi_0$, by definition of this estimator
and the choice of the measurement in this example. On the contrary, the MP (light green) and the ML (blue)
estimators are extremely sensitive to the true polarization angle $\psi_0$, yielding estimates spanning between 0 and 1.4\,$p_0$, while
the AS (red) and MAS (orange) estimators yield results spanning between $p_0$ and 1.8\,$p_0$ when $\psi_0$ varies.
The MB (pink) estimator provides stable estimates in the range 1.4 to 1.5\,$p_0$,
which is consistent with the fact that the posterior estimators minimize the risk function. This of course has a cost, and the MB estimator
provides here the largest averaged relative bias compared to the other methods, with the exception of the na\"ive (black) one.
More generally, for each value of the true polarization angle $\psi_0$ between $-\pi/2$ and $\pi/2$,
we build a sample of 10\,000 simulated measurements using the same setup of the covariance matrix as above.
Then we compute the statistical average of the na\"ive, MAS and MB estimates (black, orange and pink lines, respectively)
obtained on this simulated sample,
with their associated 1-$\sigma$ dispersion (black, orange and pink dot-dash lines, respectively), as shown in the right panel of Fig.~\ref{fig:impact_cov_epsilon}.
The averaged MB estimates present the same characteristic as shown on the left panel.
By contrast, the averaged MAS estimates are independent of the unknown true polarization angle $\psi_0$.
The MAS 1-$\sigma$ dispersion is, however, slightly larger than the MB 1-$\sigma$ dispersion.
The impact of the effective ellipticity of the covariance matrix is then analysed statistically
for the MAS and MB estimators only in Fig.~\ref{fig:estimator_comparison_fullcov}.
Instead of looking at the accuracy of the $\hat{p}$ estimators
around one particular measurement (the most probable one) as done in Fig.~\ref{fig:impact_cov_epsilon},
for each set of true polarization parameters ($p_0$=0.1, $\psi_0$), with $\psi_0$ ranging between
-$\pi/2$ and $\pi/2$, we perform Monte Carlo simulations. For each set of true polarization parameters,
we build a sample of 100~000 simulated measurements on which we apply the MAS and MB estimators to finally compute
the statistical relative $\mathrm{Bias}_p$ and $\mathrm{Risk}_p$, as defined in Sect.~\ref{sec:comparison_p_methodoloy}.
This is done for various setups of the covariance matrix chosen to cover the whole range of the {\it extreme} and {\it low} regimes.
The minimum and maximum relative $\mathrm{Bias}_p$ and $\mathrm{Risk}_p$ are then computed over the whole range of $\psi_0$ and effective ellipticity $\varepsilon_\mathrm{eff}$
in each regime of the covariance matrix to build the shaded regions of Fig.~\ref{fig:estimator_comparison_fullcov}
for the MAS (top panels) and MB (bottom panels) $\hat{p}$ estimators.
The domain of the na\"ive measurements in each regime is repeated in grey shaded regions, while
we show the result in orange shaded regions for the MAS and pink shaded regions for the MB estimators.
It appears that the relative $\mathrm{Bias}_p$ of the MAS estimator is less impacted by a change of ellipticity for SNR$>$2 than the MB estimator,
even in the {\it extreme} regime of the covariance matrix.
The dependence of the risk function on the ellipticity is almost identical for the two estimators around their respective canonical curves.
The thickness of the risk function region is slightly smaller for the MB estimator than for the MAS estimator at low SNR ($<$3),
while it is the opposite for larger SNR ($>$3), as already observed in the canonical case.
\begin{figure}[t]
\begin{tabular}{c}
\psfrag{-----------------ytitle-----------------}{$\mathcal{P} \left( p_0 \in \left[ \hat{p} - \sigma^{\rm low}_{\hat{p}} , \hat{p} + \sigma^{\rm up}_{\hat{p}} \right] \right)\, [\%]$}
\includegraphics[width=9cm]{uncertainty_cl_mb_mas_v1.0.ps}
\end{tabular}
\caption{Probability of finding the true polarization fraction $p_0$ inside the interval $[\hat{p}-\sigma^{\rm low}_{\hat{p}} , \hat{p}+\sigma^{\rm up}_{\hat{p}}]$, where
$\sigma^{\rm low}_{\hat{p}}$ and $\sigma^{\rm up}_{\hat{p}}$ are the lower and upper limits of each estimator:
credible intervals ML/MAP (blue), a posteriori variance MB (pink) and MAS variance (orange), plotted as a function of the SNR $p_0/\sigma_{p,G}$.
Monte-Carlo simulations have been carried out in the {\it low} regime of the covariance matrix.
The Gaussian level at 68\% is shown as a dashed line. }
\label{fig:uncertainties_cl_mb_ml_mas}
\end{figure}
\begin{figure}
\begin{tabular}{c}
\psfrag{-----------------ytitle-----------------}{$\mathcal{P} \left( p_0 \in \left[ \hat{p} - \sigma^{\rm low}_{\hat{p}} , \hat{p} + \sigma^{\rm up}_{\hat{p}} \right] \right)\, [\%]$}
\psfrag{xtitle}{$\hat{p} / \sigma_{\hat{p}}$}
\includegraphics[width=9cm]{uncertainty_cl_mb_mas_measured_snr_v1.0.ps}
\end{tabular}
\caption{Same as Fig.~\ref{fig:uncertainties_cl_mb_ml_mas} but plotted as a function of the measured SNR $\hat{p}/\sigma_{\hat{p}}$. }
\label{fig:uncertainties_cl_mb_ml_mas_measured_snr}
\end{figure}
\subsection{Polarization fraction uncertainty estimates}
\label{sec:uncertainty_comparison}
The questions of how to estimate the polarization uncertainties and how to propagate them are
essential for reliable polarization analysis.
The best approach consists of building confidence intervals to retrieve
robust estimates of the lower and upper limits of the 68, 95 or 99.5\% intervals, which remains valid
even when the distribution is not Gaussian. As already mentioned in Sect.~\ref{sec:confidence_intervals}, building optimized confidence intervals
including the full knowledge of the covariance matrix may represent a challenge for large samples of data.
Hence P14 provides analytic approximations of such confidence intervals for the MAS estimator,
which can be extremely useful.
A commonly used approach, however, is to provide the 1-$\sigma$ dispersion,
assuming the Gaussian distribution of the $\hat{p}$ estimates as a first approximation.
We have already stressed the difference between the risk function and the variance, and the limitations of the latter to derive robust uncertainties in the presence of bias.
We compare below the performance of the usual uncertainty estimates introduced in Sect.~\ref{sec:uncertainties} to provide robust
68\% tolerance intervals: the MAS variance, the MAP credible intervals and the MB 1-$\sigma$ a posteriori dispersion.
Starting with a true $p_0$ value, we have performed Monte-Carlo simulations in the {\it low} regime of the covariance matrix,
by exploring the whole range of the true polarization angle $\psi_0$, with an SNR spanning from 0 to 30.
For each simulated measurement ($p$,$\psi$), we compute the $\hat{p}$ estimates with their uncertainty
estimators $\sigma_{\hat{p}}$. We then compute the a posteriori probability
to find the true $p_0$ inside the interval $[\hat{p}-\sigma^{\rm low}_{\hat{p}} , \hat{p}+\sigma^{\rm up}_{\hat{p}}]$.
In the case of the MAP estimator, the lower and upper limits of the interval, $\hat{p}_{\text{MAP}}- \sigma^{\rm low}_{\hat{p}_{\text{MAP}}}$
and $\hat{p}_{\text{MAP}} + \sigma^{\rm up}_{\hat{p}_{\text{MAP}}}$, are set to $p^{\rm low}_{\text{MAP}}$
and $p^{\rm up}_{\text{MAP}}$, respectively, (with $\lambda$=68 as defined in Sect.~\ref{sec:credible_intervals}), which can be asymmetric.
We report the results, compared with the expected 68\% level, in Fig.~\ref{fig:uncertainties_cl_mb_ml_mas}.
We recall that this comparison is again frequentist: quantities derived from the Bayesian pdf
are only used to build single estimates, which are then compared with the confidence intervals.
As pointed out in Sect.~\ref{sec:variance_risk}, the theoretical variance associated with the MAS estimator
still tends to provide slightly lower probabilities than the expected 68\% at low SNR, mainly due to the asymmetry of the distribution.
The variance associated with the MB estimator, which is more biased at low SNR, gives an extremely low probability of recovering the true
$p_0$ value at low SNR ($<0.5$). By contrast, it provides probabilities greater than 68\% (as high as 90\%) for SNR between 0.5 and 2.
This comes from the fact that the MB variance statistically overestimates the exact variance of the a posteriori
$\hat{p}_{\text{MB}}$ distribution by a factor of 2 at low SNR ($<$2).
Thus the MB uncertainty estimator yields conservative estimates of the uncertainty for SNR $>$0.5.
At high SNR ($>$3) all these uncertainty estimators provide compatible estimates of the probability close to 68\%.
\begin{figure}
\begin{tabular}{c}
\psfrag{ytitle}{$\hat{p} / \sigma_{\hat{p}}$}
\psfrag{xtitle}{$p_0 / \sigma_{p,0}$}
\includegraphics[width=9cm]{uncertainty_snr_v1.0.ps}
\end{tabular}
\caption{Average measured SNR computed over 10\,000 Monte-Carlo simulations
as a function of the true SNR for four methods: na\"ive $\hat{p}/\sigma_{p,C}$ (dark),
MAP confidence intervals $\hat{p}_{\text{ML}}/\sigma_{\hat{p},\text{MAP}}$ (blue),
MB $\hat{p}_{\text{MB}} / \sigma_{\hat{p},\text{MB}}$ (pink) and
MAS variance $\hat{p}_{\text{MAS}} / \sigma_{\hat{p},\text{MAS}}$ (orange).
The covariance matrix is taken in its {\it low} regime.}
\label{fig:uncertainties_snr}
\end{figure}
Because the true SNR is always unknown (see Sect.~\ref{sec:snr}), the probability to
find the true $p_0$ value in the confidence interval is also shown as a function of the measured SNR
in Fig.~\ref{fig:uncertainties_cl_mb_ml_mas_measured_snr}.
This much more realistic picture shows that the variance estimates provide reliable probabilities for measured SNRs larger than $\sim$6.
\subsection{Polarization signal-to-noise ratio}
\label{sec:snr}
In any real measurement, the true SNR $p_0/\sigma_{p,G}$ remains unknown.
From observations, we only have access to the measured SNR, which can be obtained by
the ratio $\hat{p} / \sigma_{\hat{p}}$ associated with each estimator, or by a confidence interval approach (see P14),
which is much more robust at a low true SNR. We show in Fig.~\ref{fig:uncertainties_snr} the accuracy of the
measured SNR compared to the true SNR for the four following methods: the na\"ive estimate plus the classical estimate of the uncertainty,
the MAS estimate with the associated variance, the MB estimate and its variance, and the ML estimate with the MAP credible intervals.
We observe that all methods agree only for a true SNR larger than 3, giving back the true SNR in this regime.
Below this true SNR, the measured SNR becomes extremely biased whatever the method used, due to the bias of the measurement $\hat{p}$
itself, but also due to the bias introduced by the variance as an estimate of the uncertainty when the output distribution departs from the Gaussian regime.
\begin{figure*}[]
\begin{tabular}{lcc}
\begin{minipage}[c]{.06\linewidth}
\begin{tabular}{l}
$ p_0/\sigma_{p,G}=0.5$
\end{tabular}
\end{minipage} &
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_bias2_sn05_v2.0.ps} \end{minipage}&
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_risk2_sn05_v2.0.ps} \end{minipage}\\
\begin{minipage}[c]{.06\linewidth}
\begin{tabular}{l}
$ p_0/\sigma_{p,G}=1$
\end{tabular}
\end{minipage} &
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_bias2_sn1_v2.0.ps} \end{minipage}&
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_risk2_sn1_v2.0.ps}\end{minipage} \\
\begin{minipage}[c]{.06\linewidth}
\begin{tabular}{l}
$p_0/\sigma_{p,G}=2$
\end{tabular}
\end{minipage} &
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_bias2_sn2_v2.0.ps}\end{minipage} &
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_risk2_sn2_v2.0.ps} \end{minipage}\\
\begin{minipage}[c]{.06\linewidth}
\begin{tabular}{l}
$ p_0/\sigma_{p,G}=5$
\end{tabular}
\end{minipage} &
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_bias2_sn5_v2.0.ps} \end{minipage}&
\begin{minipage}[c]{.4\linewidth} \includegraphics[width=1.2\textwidth]{Estimator_psi_risk2_sn5_v2.0.ps} \end{minipage}\\
\end{tabular}
\caption{Comparison of the relative $\mathrm{Bias}_{\psi}$ (left) and $\mathrm{Risk}_{\psi}$ (right) quantities of the four $\hat{\psi}$ estimators:
na\"ive (black), ML (blue), MP2 (green) and MB (pink), plotted as a function of the true polarization angle $\psi_0$
and computed at four SNRs $p_0/\sigma_{p,G}$=0.5, 1, 2 and 5.
The covariance matrix is set to $\varepsilon=2$ and $\rho=0$ ($\varepsilon_\mathrm{eff}=2$).}
\label{fig:estimator_comparison_psi}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=9cm]{Estimator_psi_comparison_meanbias2_v2.0.ps} &
\includegraphics[width=9cm]{Estimator_psi_comparison_meanrisk2_v2.0.ps}
\end{tabular}
\caption{Statistical relative $\left|\mathrm{Bias}_{\psi}\right|$ (left panel) and $\mathrm{Risk}_{\psi}$ (right panel) averaged over $\psi_0$ between $-\pi/2$ and $\pi/2$,
as a function of the SNR on $p_0/\sigma_{p,G}$, for the four $\hat{\psi}$ estimators:
na\"ive (black), ML / MAP (blue), MP2 (green) and MB (pink). We consider two setups of the covariance matrix here: $\varepsilon_\mathrm{eff}$=2 (solid line) and
and $\varepsilon_\mathrm{eff}$=1.1 (dotted line).}
\label{fig:psi_mean_bias_risk}
\end{figure*}
\section{$\hat{\psi}$ estimator performance}
\label{sec:comparison_estimators_psi}
\subsection{Methodology}
As pointed out by PMA I, once the covariance matrix is not canonical ($\varepsilon_\mathrm{eff}>1$), a bias of the
polarization angle measurements $\psi$ appears with respect to the true polarization angle $\psi_0$. This bias may be positive or negative.
We propose to compare the accuracy of the following four $\hat{\psi}$ estimators at correcting the bias of the polarization angle:
the na\"ive measurements $\psi$, the ML $\hat{\psi}_{\text{ML}}$
(which is equivalent to the MAP $\hat{\psi}_{\text{MAP}}$), the MP2 $\hat{\psi}_{\text{MP2}}$ and
the MB $\hat{\psi}_{\text{MB}}$.
Similarly to the $\hat{p}$ estimators, we define the relative bias and risk function on $\hat{\psi}$ as follows:
\begin{equation}
\mathrm{Bias}_{\psi} \equiv \frac{\left< \hat{\psi} - \psi_0 \right>}{\sigma_{\psi,0}} \quad \mathrm{and}
\quad \mathrm{Risk}_{\psi} \equiv \frac{\left< (\hat{\psi} - \psi_0)^2 \right> }{\sigma_{\psi,0}^2} \, ,
\end{equation}
where $\hat{\psi}$ is the polarization angle estimate computed from the simulated measurements $\psi$,
$\psi_0$ is the true polarization angle, $\left< \cdot \right>$ denotes the average computed over the simulated sample, and
$\sigma_{\psi,0}$ is the standard deviation of the simulated measurements.
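For reference, the following minimal Python sketch evaluates these two quantities on a simulated sample. It assumes canonical Gaussian noise on $(Q,U)$, the na\"ive estimator $\psi = \frac{1}{2}\,\mathrm{atan2}(U,Q)$, and angle differences wrapped modulo $\pi$; the parameter values are purely illustrative.
\begin{verbatim}
import numpy as np

def psi_bias_risk(psi_hat, psi_0, sigma_psi0):
    # Relative bias and risk, following the definitions above; angle
    # differences are wrapped into [-pi/2, pi/2) since psi is modulo pi.
    d = (psi_hat - psi_0 + np.pi / 2) % np.pi - np.pi / 2
    return np.mean(d) / sigma_psi0, np.mean(d**2) / sigma_psi0**2

# Naive estimates under canonical Gaussian (Q, U) noise, illustrative values
rng = np.random.default_rng(0)
p0, psi0, sigma = 0.1, 0.3, 0.1
Q = p0 * np.cos(2 * psi0) + sigma * rng.standard_normal(50_000)
U = p0 * np.sin(2 * psi0) + sigma * rng.standard_normal(50_000)
psi_naive = 0.5 * np.arctan2(U, Q)
print(psi_bias_risk(psi_naive, psi0, np.std(psi_naive)))
\end{verbatim}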
\subsection{Performance comparison}
\label{sec:efficiency_comparison_psi}
We explore the performance of the four $\hat{\psi}$ estimators at four SNRs, $p_0/\sigma_{p,G}$=0.5, 1, 2 and 5 (from top to bottom),
and a covariance matrix with an effective ellipticity $\varepsilon_\mathrm{eff}$=2, in Fig.~\ref{fig:estimator_comparison_psi}.
The relative $\mathrm{Bias}_{\psi}$ (left panels) and $\mathrm{Risk}_{\psi}$ (right panels) are plotted as a function of the true polarization angle $\psi_0$.
While the MB (pink) estimator seems to provide the least biased estimates with the lowest risk function at low SNR ($<$1),
it becomes the least efficient at higher SNR. By contrast, the ML
(equivalently, the MAP) performs poorly at low SNR, but
provides impressive results at high SNR, reducing the relative bias to close to zero at a SNR of 5.
The MP2 estimator does not present any satisfactory properties, showing a strong relative bias and risk function in almost all cases; hence
the $\hat{\psi}_{\text{MP2}}$ estimator can be ruled out.
An overview of the performance of the four $\hat{\psi}$ estimators as a function of the SNR is shown
in Fig.~\ref{fig:psi_mean_bias_risk}, after marginalization over all possible values of the $\psi_0$ parameter. As
the relative $\mathrm{Bias}_{\psi}$ can be positive or negative depending on $\psi_0$, we compute the average of the absolute value of the relative bias,
$< |\mathrm{Bias}_{\psi} | >$, as an indicator of the statistical performance of the estimators whatever the true polarization angle is.
We observe again in the left panel of Fig.~\ref{fig:psi_mean_bias_risk} that the MB (pink) estimator provides the lowest relative bias for SNR$<$1.2, while the
ML is especially powerful for SNR$>$2. All estimators provide almost the same results for the average $\mathrm{Risk}_{\psi}$ (right panel),
even if MB appears slightly better than the others, including the na\"ive measurements.
The examples provided above have been computed with an {\it extreme} effective ellipticity ($\varepsilon_\mathrm{eff}$=2) to emphasize the observations,
but the same conclusions can be reached for lower values of the ellipticity; see, for example, the case with $\varepsilon_\mathrm{eff}$=1.1
shown as dotted lines in Fig.~\ref{fig:psi_mean_bias_risk}.
In the {\it low} regime of the covariance matrix, however, the statistical relative
bias on $\psi$ is very small, typically smaller than 5\% of the dispersion, so that the need to correct the bias on $\psi$ remains extremely limited.
\subsection{Polarization angle uncertainty estimates}
\label{sec:uncertainty_psi_comparison}
Once a reliable estimate of $\hat{\psi}$ based on the MB and ML (MAP) estimators has been obtained,
we would like to build a robust estimate of the associated uncertainty $\sigma_{\hat{\psi}}$,
which should be done by building confidence intervals. Because this last step can be computationally demanding
in some cases, for example when dealing with the full covariance matrix,
we detail other methods below.
One option is to use the uncertainty associated with the MB estimator, $\sigma_{\hat{\psi},\text{MB}}$ (see Eq.~\ref{eq:sigma_psi_mb}).
Another is to use the
credible intervals built around the MAP estimates on the posterior pdf.
We can keep the lower and upper limits, $\psi^{\rm low}_{\text{MAP}}$ and $\psi^{\rm up}_{\text{MAP}}$
computed for a 68\% credible interval,
or build a symmetrized uncertainty:
\begin{equation}
\sigma_{\hat{\psi},\text{MAP}} = \frac{1}{2} \left( \psi^{\rm up}_{\text{MAP}} - \psi^{\rm low}_{\text{MAP}} \right) \, .
\end{equation}
A third option consists of using the classical uncertainty given in PMA I, derived from the derivatives of the polarization parameters.
PMA I has already shown that this $\hat{\psi}$ uncertainty estimator, associated with the na\"ive measurements, tends to systematically
underestimate the true dispersion of the $\psi$ distribution.
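As an illustration of the second option, a minimal sketch of a grid-based MAP credible interval, symmetrized as above, is given below. The greedy interval growth around the MAP and the grid normalization are our own implementation choices, not prescriptions of this paper.
\begin{verbatim}
import numpy as np

def map_symmetric_sigma(psi_grid, posterior, level=0.68):
    # MAP estimate and symmetrized uncertainty from a credible interval
    # grown greedily around the MAP on an increasing grid of psi values.
    w = posterior / np.trapz(posterior, psi_grid)
    lo = hi = int(np.argmax(w))
    while np.trapz(w[lo:hi + 1], psi_grid[lo:hi + 1]) < level:
        if lo > 0 and (hi == len(w) - 1 or w[lo - 1] >= w[hi + 1]):
            lo -= 1
        else:
            hi += 1
    return psi_grid[int(np.argmax(w))], 0.5 * (psi_grid[hi] - psi_grid[lo])
\end{verbatim}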
\begin{figure}
\includegraphics[width=9cm]{uncertainty_sigphi_all_v2.0.ps}
\caption{Average polarization angle uncertainty as a function of the SNR in the canonical case: true uncertainty $\sigma_{\psi,0}$ (black),
Classical estimate $\sigma_{\psi,C}$ (C, dashed dark), ML $\sigma_{\hat{\psi},\text{MAP}}$ (blue)
and MB $\sigma_{\hat{\psi},\text{MB}}$ (pink) estimators.
The covariance matrix is assumed to be canonical.}
\label{fig:uncertainties_sigphi_all}
\end{figure}
We first assume the canonical simplification of the covariance matrix, which implies that the $\psi$ measurements are not statistically biased.
We also recall that under such assumptions the ML (MAP) and MB $\hat{\psi}$ estimators will give back the measurements $\psi$.
We study, however, how the uncertainties associated with these two estimators can be used to get a reliable estimate of the uncertainty $\sigma_{\hat{\psi}}$.
Starting from a true ($p_0$, $\psi_0$), we simulate a sample of 50~000
measurements ($p$, $\psi$) at a given SNR $p_0/\sigma_{p}$, on which we apply the two
ML (MAP) and MB $\hat{\psi}$ estimators and their associated uncertainty $\sigma_{\hat{\psi},\text{MAP}}$ and $\sigma_{\hat{\psi},\text{MB}}$, respectively.
From this simulated set we can derive the averaged $\sigma_{\hat{\psi}}$ for both methods.
Because all estimators give back the measurements in the canonical case, we compare
the MAP (blue) and MB (pink) polarization angle uncertainties
estimators directly to the true dispersion (black) of the $\psi$ measurements in Fig.~\ref{fig:uncertainties_sigphi_all}.
We also show the average of the classical estimates (dashed line) of the polarization angle uncertainty, which has been shown by PMA I (see their Fig.~7) to underestimate the true uncertainty by a factor of two at low SNR ($<$2).
We observe that the MAP estimator $\sigma_{\hat{\psi},\text{MAP}}$ provides an extremely good estimate of the polarization angle
uncertainty compared to the true one over the whole range of SNR, even if slightly conservative up to a SNR of 5.
The MB estimator $\sigma_{\hat{\psi},\text{MB}}$ provides consistent estimates of the uncertainty
from intermediate SNR$\sim$1 upward, but still underestimates it at lower SNR ($<$1).
In the non-canonical case a statistical bias on $\psi$ appears, which can be
partially corrected using the appropriate $\hat{\psi}$ estimators (see Sect.~\ref{sec:efficiency_comparison_psi}), leading
to an output distribution of the $\hat{\psi}$ estimates. We quantify the performance of the $\psi$ uncertainty estimators via Monte-Carlo simulations, as done for the $\hat{p}$ uncertainties. Starting from a set of polarization parameters
($p_0=0.1$, $-\pi/2<\psi_0<\pi/2$),
we build a sample of simulated measurements ($p$, $\psi$) using various
setups of the covariance matrix in the {\it low} regime, and various SNRs ranging from 0 to 30.
We then compute the a posteriori probability to find the true polarization angle $\psi_0$ in the interval
$[\hat{\psi}-\sigma^{\rm low}_{\hat{\psi}} , \hat{\psi}+\sigma^{\rm up}_{\hat{\psi}}]$, where $\sigma^{\rm low}_{\hat{\psi}}$
and $\sigma^{\rm up}_{\hat{\psi}}$ are symmetrized. The results are shown as a function of the
true SNR $p_0/\sigma_{p,G}$ in Fig.~\ref{fig:uncertainties_psi_cl_mb_ml} and of the measured SNR
$\hat{p}/\sigma_{\hat{p}}$ in Fig.~\ref{fig:uncertainties_psi_cl_mb_ml_measured_snr}.
We observe that the MAP estimator provides slightly conservative probabilities over the whole range of SNR.
The MB estimator gives low probabilities to recover the true polarization angle $\psi_0$ for a true SNR $<$1, and a measured SNR$<$2.
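The following Monte-Carlo sketch illustrates how such a probability can be estimated in the canonical case; \texttt{estimate\_fn}, which returns $(\hat{\psi}, \sigma^{\rm low}_{\hat{\psi}}, \sigma^{\rm up}_{\hat{\psi}})$, is a hypothetical stand-in for the MAP- or MB-based estimator with its uncertainties.
\begin{verbatim}
import numpy as np

def coverage_probability(estimate_fn, p0=0.1, sigma=0.05,
                         n_sim=10_000, seed=1):
    # Fraction of simulations where psi_0 falls inside the interval
    # [psi_hat - sig_low, psi_hat + sig_up] returned by the estimator.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        psi0 = rng.uniform(-np.pi / 2, np.pi / 2)
        Q = p0 * np.cos(2 * psi0) + sigma * rng.standard_normal()
        U = p0 * np.sin(2 * psi0) + sigma * rng.standard_normal()
        psi_hat, sig_low, sig_up = estimate_fn(Q, U)
        d = (psi0 - psi_hat + np.pi / 2) % np.pi - np.pi / 2  # wrap mod pi
        hits += (-sig_low <= d <= sig_up)
    return hits / n_sim
\end{verbatim}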
\begin{figure}
\begin{tabular}{c}
\psfrag{-----------------ytitle-----------------}{$\mathcal{P} \left( \psi_0 \in \left[ \hat{\psi} - \sigma^{\rm low}_{\hat{\psi}} , \hat{\psi} + \sigma^{\rm up}_{\hat{\psi}} \right] \right)\, [\%]$}
\includegraphics[width=9cm]{uncertainty_psi_cl_mb_mas_v1.0.ps}
\end{tabular}
\caption{Probability to find the true polarization angle $\psi_0$ inside the interval $[\hat{\psi}-\sigma^{\rm low}_{\hat{\psi}} , \hat{\psi}+\sigma^{\rm up}_{\hat{\psi}}]$, where
$\sigma^{\rm low}_{\hat{\psi}}$ and $\sigma^{\rm up}_{\hat{\psi}}$ are the lower and upper uncertainties for each estimator, ML/MAP (blue) and MB (pink), and
plotted as a function of the SNR $p_0/\sigma_{p,G}$.
Monte-Carlo simulations have been carried out in the {\it low} regime of the covariance matrix.
The expected level at 68\% is shown as a dashed line. }
\label{fig:uncertainties_psi_cl_mb_ml}
\end{figure}
\section{Three-dimensional case}
\label{sec:3Dcase}
In all of the preceding sections, the total intensity $I$ was assumed to be perfectly known, $I=I_0$.
In some cases, however, this assumption is not valid, as discussed by PMA I.
For instance, one needs to subtract from the observed intensity signal any unpolarized
component, leading to three main issues: i) the derived polarization fraction may be grossly underestimated if this is not done properly;
ii) this subtraction may be subject to a relatively large uncertainty, larger than the noise on the total intensity, and could lead to diverging estimates of the polarization fraction when the intensity crosses null values; iii) the uncertainty on the level of this unpolarized component
should be included in the 3D noise covariance matrix and propagated to the uncertainty estimates of the polarization fraction.
This happens for instance when dealing with the polarization fraction of the Galactic dust component at high latitude,
where the total intensity of the signal is strongly contaminated by the unpolarized signal of the Cosmic Infrared Background (CIB).
The Bayesian approach has the definite advantage over other estimators discussed here
in that it can deal fairly easily with three-dimensional $(I,Q,U)$ noise. However,
an uncertain total intensity still poses problems, which are most acute in low brightness regions,
since the noisy $I$ may become null or negative, leading to infinite or negative polarization fractions.
With this in mind, it is possible that the choice of the prior in $p_0$ and $I_0$ may have a strong impact
on the $\hat{p}_\mathrm{MB}$ estimate. One may for instance choose to allow for negative $I_0$ in low-brightness regions,
which implies extending the definition range of the polarization fraction to the negative part, leading to a prior defined on [-1,1].
Another possibility in this case, and possible development of the present paper, is to extend the dimensionality
of the problem to include the unpolarized intensity component $I_\mathrm{offset}$, e.g.,
with a flat prior between $I_\mathrm{offset,min}$ and $I_\mathrm{offset,max}$, and still imposing $I_0>0$.
Let us stress that the Bayesian approach is also currently the only one that can deal with correlation between the total intensity $I$ and the Stokes parameters $Q$ and $U$. We note, however, that (i) new and forthcoming polarization data sets have a much better control of these systematics, and (ii) the impact of these correlations between noise components on the polarization fraction and angle bias is quite limited, as shown by PMA I.
\begin{figure}
\begin{tabular}{c}
\psfrag{-----------------ytitle-----------------}{$\mathcal{P} \left( \psi_0 \in \left[ \hat{\psi} - \sigma^{\rm low}_{\hat{\psi}} , \hat{\psi} + \sigma^{\rm up}_{\hat{\psi}} \right] \right)\, [\%]$}
\psfrag{xtitle}{$\hat{p} / \sigma_{\hat{p}}$}
\includegraphics[width=9cm]{uncertainty_psi_cl_mb_mas_measured_snr_v1.0.ps}
\end{tabular}
\caption{Same as Fig.~\ref{fig:uncertainties_psi_cl_mb_ml}, but plotted as a function of the measured SNR $\hat{p}/\sigma_{\hat{p}}$.}
\label{fig:uncertainties_psi_cl_mb_ml_measured_snr}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have presented in this work an extensive comparison of the performance of the polarization fraction and angle estimators.
While \citet{Simmons1985} focused on the common estimators of the polarization fraction, such as
the Maximum Likelihood (ML), the Most Probable (MP) and the Asymptotic (AS), and \citet{Quinn2012}
suggested to use a Bayesian approach to estimate the polarization fraction,
we have generalized all these methods to take into account the full covariance matrix of the Stokes parameters.
We have also included in this comparison a novel estimator of the polarization fraction, the Modified Asymptotic \citep[MAS, ][]{Plaszczynski2014}.
In addition, we have performed for the first time a comparison of the performance of the polarization angle estimators,
since a statistical bias of $\psi$ is expected when the covariance matrix departs from its canonical form.
We have followed a frequentist methodology to investigate the properties of the polarization estimators,
even when dealing with the frequentist estimators inspired by the Bayesian approach.
The performance of a $\hat{p}$ or $\hat{\psi}$ estimator depends intrinsically on the analysis one intends to carry out with these quantities.
Whether or not to include the full covariance matrix is one of the first questions to be handled,
but the more important aspect lies in the properties of the output distribution of each estimator.
In practice, a compromise between three frequentist
criteria has to be found: a minimum bias, a minimum risk function, and the shape of the output distribution, in terms of non-Gaussianity.
We present below a few recipes associated with typical use cases.
{\it - Build a mask.} It is usually recommended to build a mask on the intensity map, instead of using the SNR of the polarization fraction,
so that no values of the polarization fraction (especially low values of $p$) are discarded in the further analysis.
It can be useful, however, to build a mask based on the SNR of a polarization fraction map when
we are interested in strong values of the polarization fraction only, and we try to reject $p$ estimates artificially boosted by the noise.
This is the case when we look for the maximum value of $p$, for example. In this context
we suggest following the prescription of P14, using a combination of the
MAS estimator with confidence intervals. This method allows building conservative domains where the SNR is ensured to be greater than a given threshold.
P14 provide numerical approximations in the canonical case. If one wants to take into account the specificity of the noise properties in each pixel,
confidence intervals can be built for any covariance matrix (including ellipticity and correlation), but this could require intensive computing.
Another alternative in that case is to build credible intervals using the posterior distribution (MAP).
{\it - Large maps of the polarization fraction with high SNR on the intensity. }
Another typical use case is to provide large maps of the polarization fraction with the associated uncertainty, when
the intensity is assumed to be perfectly known.
Because of their discontinuous distributions presenting a peak at $\hat{p}$=0 and their strong
dependence on the unknown true polarization angle $\psi_0$, the common estimators of $p$ (ML, MP and AS)
are not well designed for this purpose. These estimators can produce highly discontinuous patterns with zero values over the output $\hat{p}$ map when
the SNR goes below 4,
which may require complicated analyses involving upper-limit values.
In order to avoid such issues, we first suggest using the MAS estimator, which has been shown to produce the lowest relative bias, with a continuous output distribution that
becomes close to Gaussian for SNR larger than 2. Moreover, the relative risk function associated with the MAS estimator becomes competitive for SNR$>$3,
while the MB estimator minimizes the relative risk function for intermediate SNR, between 1 and 3.
The uncertainties can then be derived again from the confidence or credible intervals, depending on the ellipticity of the covariance matrix.
A second option, especially suited for intermediate SNR (2-3), consists of performing a preliminary analysis on the data
to build a prior from the $\hat{p}$ distribution, which can then be injected into the MB estimator.
The performance of this method strongly relies on the properties of the initial true distribution.
It is particularly efficient for true polarization fractions well above zero, which avoids the major drawback of the
MB estimator, namely a lower limit proportional to the noise level.
Indeed, the MB estimator (with a flat prior) presents a cut-off at 0.8$\sigma_p$, so that it can never provide null estimates of $\hat{p}$.
We stress that above a SNR of 4, all methods (except MP2) are in agreement.
{\it - Combined polarization fraction and angle analysis. }The Bayesian estimators of $\hat{p}_{\text{MB}}$ and $\hat{\psi}_{\text{MB}}$ may be used to build estimates of the polarization
fraction and angle simultaneously, by taking into account the full covariance matrix, including the ellipticity and correlation between
$Q$ and $U$, and the correlation between total and polarized intensity.
This could be useful when performing an analysis over large areas with inhomogeneous noise properties,
when the SNR on the intensity becomes problematic, or when an important correlation between $I$ and ($Q$, $U$) exists.
Nevertheless, we stress that the output distributions of the MB estimates are strongly asymmetric at low SNR ($<$3), and
that the Bayesian uncertainty estimates cannot be used as typical Gaussian 68\% tolerance intervals.
{\it - Low SNR on the intensity. }
We recommend in this case to use the Bayesian estimators, which allow simultaneous estimation
of the intensity and the polarization parameters, take into account the full covariance matrix, and include the impact of the uncertainty on the intensity
in the polarization fraction estimate.
{\it - Very low SNR studies. } Very low SNR studies may require different approaches. We have seen that at low SNR, all estimators
provide biased estimates of the polarization fraction, with highly asymmetrical distributions. The most conservative option in this case is to use the
confidence or credible intervals.
Similarly, the question of assessing the unpolarized level of a set of data (i.e., SNR$\sim$0)
was first raised by \citet{Clarke1993}, who suggested using a Kolmogorov test to compare the measurement distributions with the expectation
derived from the Rice distribution with $p_0$=0. Another option is to build the likelihood in two dimensions ($Q$, $U$) to perform a $\chi^2$ test with $Q_0$=$U_0$=0.
A final method is to use the Bayesian posterior probability $B(p_0|p,\sigma_p)$ to assess the probability of having $p_0$=0
for a given measurement, or for a series of measurements by convolving all individual pdfs.
{\it - Polarization angle.} Concerning the polarization angle estimates $\hat{\psi}$, we have shown that the ML provides the best performance in terms of
relative bias and risk function for SNR$>$1. It corrects a potential bias of $\psi$ when the covariance matrix is not in its canonical form.
Because the ML and MAP estimators give equivalent results, the MAP can be used to
efficiently build credible intervals and symmetrized uncertainties, which have been shown to be in very good agreement with the output distributions.
Nevertheless, we stress that the level of the absolute bias of $\psi$ remains extremely limited compared to the dispersion of the polarization angle
in most cases (i.e., in the {\it low} and {\it tiny} regimes of the covariance matrix), so that it can usually be neglected.
\begin{acknowledgements}
This paper was developed to support the analysis of data from the
\textit{Planck}\ satellite. The development of \textit{Planck}\ has been supported by: ESA; CNES and
CNRS/INSU-IN2P3-INP (France); ASI, CNR, and INAF (Italy); NASA and DoE
(USA); STFC and UKSA (UK); CSIC, MICINN, JA, and RES (Spain); Tekes,
AoF, and CSC (Finland); DLR and MPG (Germany); CSA (Canada); DTU Space
(Denmark); SER/SSO (Switzerland); RCN (Norway); SFI (Ireland);
FCT/MCTES (Portugal); and PRACE (EU). A description of the Planck
Collaboration and a list of its members, including the technical or
scientific activities in which they have been involved, can be found
at \url{http://www.sciops.esa.int/index.php?project=planck&page=}\\ \url{Planck_Collaboration}.
We acknowledge the use of the Legacy Archive for Microwave Background
Data Analysis (LAMBDA), part of the High Energy Astrophysics Science
Archive Center (HEASARC). HEASARC/LAMBDA is a service of the
Astrophysics Science Division at the NASA Goddard Space Flight Center.
Some of the results in this paper have been derived using the
{{\tt HEALPix}} package.
We would also like to thank P. Leahy, S. Prunet and M. Seiffert for their very useful comments.
\end{acknowledgements}
\section{Introduction}
Canonical Correlation Analysis (CCA) characterizes linear relationships between two sets of variables, and is commonly used to study associations between different data platforms in imaging and genomics \citep{Bach05aprobabilistic, Chi:2013gj, witten2009penalized}. However, while CCA uncovers common signals, it does not elucidate which signals are unique to each data source. Furthermore, standard CCA relies on the assumption of a Gaussian distribution, and is not appropriate for analyses of datasets with count or proportion measurement types. Our first motivating example is a nutrigenomic study \citep{martin2007novel}, which collected gene expression and lipid concentration data from the same mice. We are interested in finding the common and unique signals between gene expression and lipid metabolism in relation to wild-type versus mutant mice. While gene expression levels can be modelled by Gaussian distributions with appropriate normalization, the lipid concentrations are presented as proportions, many of which are close to zero (25\% of proportions have values of 0.002 or less), violating the Gaussian assumption. Our second motivating example concerns tumor heterogeneity: a profiled tumor tissue contains signals obtained not only from tumor cells, but also from immune and stromal cells, which presents significant challenges for effective cancer treatment. Multiple cell-type deconvolution methods have been developed to evaluate cellular heterogeneity \citep{newman2015robust, li2017timer, wang2018transcriptome, wang2019deep}, with each method utilizing different biological information and different cell types to estimate the cellular purity. It is thus of interest to investigate the information that is concordant across methods, as well as information that is method-specific. However, all methods generate proportion data, violating the Gaussian assumption.
Multiple methods have been developed that decompose the data matrices into both common and individual signals \citep{Lock:2013ez, shu2020d, OnPLS, gaynanova2019structural}. However, these methods are designed for Gaussian data, and are not appropriate for proportion or count measurements. \citet{Zoh:2016fe, yoon2020sparse} propose non-Gaussian extensions of CCA; however, the corresponding models are not designed for proportion data, and neither method can extract individual information. Several methods tackle both challenges by considering common and individual decompositions of natural parameter matrices in the exponential family framework, with \citet{klamibayesian} taking a Bayesian approach, and \citet{li2018general} taking a frequentist approach. However, these decomposition-based methods assume the common scores to be identical between the two datasets rather than highly correlated; thus, they do not reduce to standard CCA even in the Gaussian case. Furthermore, the majority of matrix decomposition methods \citep{Lock:2013ez, klamibayesian, li2018general} do not enforce orthogonality between the individual signals, allowing these signals to embed correlated information.
In this work, we propose to tackle both challenges within the exponential family framework by considering a low-rank decomposition of natural parameter matrices with common and individual components. We refer to our approach as Exponential CCA (ECCA). Unlike existing approaches based on exponential families \citep{klamibayesian, li2018general}, our model allows common scores to be different (but correlated), and enforces orthogonality between individual signals (thus no shared information is retained). These modeling differences lead to a significantly more challenging estimation problem, as it involves non-convex optimization with orthogonality constraints. To solve this problem, we derive an alternating algorithm based on an adaptation of the splitting method for orthogonality-constrained problems \citep{Lai:2014dq}. Our algorithm converges in all numerical studies, with ECCA having on-par or superior estimation performance compared to competing methods. In application to the nutrigenomic study \citep{martin2007novel}, ECCA is effective in extracting common and individual signals that separate the mouse genotype effect from the diet effects. In application to a tumor heterogeneity study in prostate cancer, ECCA is effective in extracting common and individual signals that relate to progression-free survival probability.
The rest of the paper is organized as follows. Section~\ref{sec:lrccamodel} introduces the proposed ECCA model. Section~\ref{sec:estimation} derives the estimation algorithm. Section~\ref{sec:lrccaSimu} compares ECCA with available methods in simulation studies. Section~\ref{sec:data} describes application of ECCA to (i) nutrigenomic study; (ii) tumor heterogeneity study. Section~\ref{sec:eccaDis} concludes with discussion.
\textbf{Notation:} For a matrix $\boldsymbol{A}$, we use $\boldsymbol{A}^\top$ to denote its transpose, $\boldsymbol{A}^{+}$ to denote its Moore-Penrose inverse, $\mathcal{C}(\boldsymbol{A})$ to denote its column space and $\mathcal{R}(\boldsymbol{A})$ to denote its row space. We use $\boldsymbol{P}_{\boldsymbol{A}} = \boldsymbol{A}\bA^{+}$ to denote the projection matrix onto the column space of $\boldsymbol{A}$. We use $\boldsymbol{P}^{\perp}_{\boldsymbol{A}} = \boldsymbol{I} - \boldsymbol{P}_{\boldsymbol{A}}$ to denote the projection matrix onto the orthogonal complement of $\mathcal{C}(\boldsymbol{A})$.
\section{Proposed model}\label{sec:lrccamodel}
We consider two data matrices $\boldsymbol{X}_1\in\mathbb{R}^{n\times p_1}$ and $\boldsymbol{X}_2\in\mathbb{R}^{n\times p_2}$, where the $n$ rows correspond to matched samples. Similar to \citet{collins2001generalization,li2018general,landgraf2020generalized}, we assume that each data matrix $\boldsymbol{X}_k$, $k=1,2$, has a corresponding natural parameter matrix $\boldsymbol{\Theta}_k\in\mathbb{R}^{n\times p_k}$, and that given the natural parameter matrix the entries are independent with log probability mass or density function for entry $x_{kij}$:
$$
\log f(x_{kij}|\theta_{kij})=x_{kij}\theta_{kij}-b_k(\theta_{kij}) + c_k(x_{kij}),
$$
where $c_k(\cdot)$ does not depend on $\boldsymbol{\Theta}_k$ and $b_k(\cdot)$ is a convex function.
The form of each function is determined by the choice of exponential family distribution for dataset $k$ (e.g., Gaussian, Binomial, Poisson), and different distributions are allowed for $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$. Based on the motivating datasets, we focus on the Gaussian case and the Binomial proportion case. In the Gaussian case with variance one, $b_k(\theta_{kij}) = \theta_{kij}^2/2$, with the natural parameter corresponding to the mean of the distribution. In the Binomial proportion case with $m$ trials, $b_k(\theta_{kij}) =m\log\{1+ \exp(\theta_{kij}/m)\}$, and $\theta_{kij} = m\log\{p_{kij}/(1-p_{kij})\}$, where $p_{kij}$ is the probability of success.
To formulate exponential CCA with orthogonal variation, we consider the low-rank model on centered natural parameter matrices. Let $\boldsymbol{\Theta}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \widetilde \boldsymbol{\Theta}_k$, where $\textbf{1}_n$ is a vector of ones of length $n$ and $\boldsymbol{\mu}_k\in\mathbb{R}^{p_k}$ is the intercept, so that $\widetilde \boldsymbol{\Theta}_k$ is the column-centered matrix of natural parameters. Let $r_k =\rank(\widetilde \boldsymbol{\Theta}_k)$. We assume
\begin{equation}\label{eq:expDecom}
\begin{split}
\widetilde\boldsymbol{\Theta}_{1}= \boldsymbol{U}_{1}\boldsymbol{V}_1^\top+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top,\quad
\widetilde\boldsymbol{\Theta}_{2}= \boldsymbol{U}_{2}\boldsymbol{V}_2^\top+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top;
\end{split}
\end{equation}
where $\boldsymbol{U}_1, \boldsymbol{U}_2\in\mathbb{R}^{n\times r_0}$ are correlated score matrices such that $\boldsymbol{U}_k^{\top}\textbf{1}_n = \bf 0$, $\boldsymbol{U}_k^{\top}\boldsymbol{U}_k = \boldsymbol{I}_{r_0}$, $\boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \textup{diag}(\rho_1, \dots, \rho_{r_0})$ (capturing $r_0$ correlations between $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$), and $\boldsymbol{V}_k\in\mathbb{R}^{p_k\times r_0}$ are corresponding loading matrices. Furthermore, $\boldsymbol{Z}_k\in\mathbb{R}^{n\times (r_k-r_0)}$ capture orthogonal variation in each of the views (such that $\boldsymbol{Z}_k^{\top}\boldsymbol{Z}_k = \boldsymbol{I}_{r_k - r_0}$, $\boldsymbol{Z}_1^{\top}\boldsymbol{Z}_2 = \bf 0$, $\boldsymbol{Z}_k^{\top}(\textbf{1}_n\ \boldsymbol{U}_1\ \boldsymbol{U}_2) = \bf 0$), and $\boldsymbol{A}_k\in\mathbb{R}^{p_k \times (r_k-r_0)}$ capture the loadings corresponding to $\boldsymbol{Z}_k$. We refer to $\boldsymbol{J}_k = \boldsymbol{U}_k\boldsymbol{V}_k^{\top}$ as \textit{joint} signal, and to $\boldsymbol{I}_k = \boldsymbol{Z}_k\boldsymbol{A}_k^{\top}$ as \textit{individual} signal.
\subsection{Connection to classical CCA and model identifiability}\label{sec:normalCCA}
In the Gaussian case, the natural parameter corresponds to the mean of the distribution, thus $\boldsymbol{X}_k = \boldsymbol{\Theta}_k + \boldsymbol{E}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \widetilde \boldsymbol{\Theta}_k + \boldsymbol{E}_k$, $k=1, 2$, where $\boldsymbol{E}_k$ is the error matrix with elements following mean-zero Gaussian distribution. The classical CCA problem can be viewed as a problem of finding the correlated basis pairs $\boldsymbol{u}_{1l}, \boldsymbol{u}_{2l}\in \mathbb{R}^n$, $l=1, \dots, r_0$, between column spaces $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$ \citep{shu2020d}:
\begin{equation}\label{eq:CCA}
\begin{split}
(\boldsymbol{u}_{1l},\boldsymbol{u}_{2l})&=\argmax_{\boldsymbol{u}_1,\boldsymbol{u}_2} \left\{\Cor(\boldsymbol{u}_1,\boldsymbol{u}_2)\right\}\\
\text{subject to}& \quad
\boldsymbol{u}_1^{\top}\boldsymbol{u}_1 = \boldsymbol{u}_2^{\top}\boldsymbol{u}_2 = 1, \\
&\quad \boldsymbol{u}_1\in \mathcal{C}(\widetilde \boldsymbol{\Theta}_1)\backslash\text{span}(\{\boldsymbol{u}_{1i}\}_{i=1}^{l-1}),\quad \boldsymbol{u}_2\in \mathcal{C}(\widetilde \boldsymbol{\Theta}_2)\backslash\text{span}\left(\{\boldsymbol{u}_{2i}\}_{i=1}^{l-1}\right).
\end{split}
\end{equation}
Each $(\boldsymbol{u}_{1l},\boldsymbol{u}_{2l})$ is the $l$th pair of canonical variables with corresponding $l$th canonical correlation $\Cor(\boldsymbol{u}_{1l},\boldsymbol{u}_{2l}) = \boldsymbol{u}_{1l}^{\top}\boldsymbol{u}_{2l} = \rho_l$. The total number of pairs $r_0$ with non-zero correlation $\rho_l >0$ corresponds to the number of principal angles between $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$ that are strictly less than 90 degrees \citep{knyazev2002principal}.
Let $r_k = \rank(\widetilde \boldsymbol{\Theta}_k)$. By definition, the number of canonical pairs satisfies $0\leq r_0 \leq \min(r_1, r_2)$. In case of strict inequality, e.g., $r_0 < r_1$, this implies that the column space of $\widetilde \boldsymbol{\Theta}_1$ can be decomposed into $r_0$ basis vectors corresponding to canonical variables $\{\boldsymbol{u}_{1l}\}_{l=1}^{r_0}$, and the remaining $r_1 - r_0$ basis vectors $\{\boldsymbol{z}_{1l}\}_{l=1}^{r_1 - r_0}$ that are orthogonal to both canonical variables and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. Similarly, if $r_0 < r_2$, then $\{\boldsymbol{z}_{2l}\}_{l=1}^{r_2 - r_0}$ can be chosen as arbitrary orthogonal basis of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)/\text{span}(\{\boldsymbol{u}_{2l}\}_{l=1}^{r_0})$ so that $\{\boldsymbol{u}_{2l}\}_{l=1}^{r_0}, \{\boldsymbol{z}_{2l}\}_{l=1}^{r_2 - r_0}$ form an orthogonal basis for $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. Theorem 1 in \citet{shu2020d} formalizes the relationship between $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$ in terms of canonical variables and remaining basis vectors, which we restate below.
\begin{thm}\label{thm:ident}
Let $\boldsymbol{U}_1=[\boldsymbol{u}_{11},\cdots,\boldsymbol{u}_{1r_0}]\in\mathbb{R}^{n\times r_0}$, $\boldsymbol{U}_2=[ \boldsymbol{u}_{21},\cdots,\boldsymbol{u}_{2r_0}]\in\mathbb{R}^{n\times r_0}$ contain canonical variables from~\eqref{eq:CCA}. Let $\boldsymbol{Z}_1=[\boldsymbol{z}_{11}, \cdots, \boldsymbol{z}_{1(r_1-r_0)}]\in\mathbb{R}^{n\times (r_1-r_0)}$ and $\boldsymbol{Z}_2=[\boldsymbol{z}_{21}, \cdots, \boldsymbol{z}_{2(r_2-r_0)}]\in\mathbb{R}^{n\times (r_2-r_0)}$ be matrices of orthogonal basis vectors corresponding to $\mathcal{C}(\boldsymbol{P}^{\perp}_{\boldsymbol{U}_1}\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\boldsymbol{P}^{\perp}_{\boldsymbol{U}_2}\widetilde \boldsymbol{\Theta}_2)$, respectively. Let $\boldsymbol{Q}_1=\begin{pmatrix} \boldsymbol{U}_1 & \boldsymbol{Z}_1\end{pmatrix}$, $\boldsymbol{Q}_2=\begin{pmatrix} \boldsymbol{U}_2 &\boldsymbol{Z}_2\end{pmatrix}$. Then
\begin{equation*}
\begin{split}
\boldsymbol{Q}_1^{\top}\boldsymbol{Q}_1= \boldsymbol{I}_{r_1},\quad \boldsymbol{Q}_2^{\top}\boldsymbol{Q}_2= \boldsymbol{I}_{r_2}, \quad
\boldsymbol{Q}_1^{\top}\boldsymbol{Q}_2=\begin{pmatrix} \boldsymbol{\Lambda}& \bf 0\\\bf0 &\bf{0}\end{pmatrix},
\end{split}
\end{equation*}
where $\bf 0$ is a zero-valued matrix of compatible size, and $\boldsymbol{\Lambda}=\textup{diag}(\rho_1,\cdots,\rho_{r_0})$ is a diagonal matrix of canonical correlations.
\end{thm}
Thus $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ capture canonical correlations, whereas $\boldsymbol{Z}_1$ and $\boldsymbol{Z}_2$ capture orthogonal variation. By construction, $\boldsymbol{Q}_1$ is a set of basis of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$, and $\boldsymbol{Q}_2$ is a set of basis of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. Thus, in the Gaussian case, the proposed model~\eqref{eq:expDecom} encompasses the classical CCA decomposition, with additional explicit modeling of orthogonal variation (through $\boldsymbol{Z}_k$).
More generally, we apply the proposed model~\eqref{eq:expDecom} to perform basis decomposition on the matrices of natural parameters in the general exponential family framework (and not just in the Gaussian case). Since correlations in the Gaussian case rely on column-centering, we also formulate our model on the column-centered $\widetilde \boldsymbol{\Theta}_k$; thus, the original $\boldsymbol{\Theta}_k$ has an intercept term $\boldsymbol{\mu}_k$. We further formalize the existence of model~\eqref{eq:expDecom} and the corresponding identifiability conditions.
\begin{thm}\label{thm:modelident} Given column-centered $\widetilde \boldsymbol{\Theta}_k$, let $r_k = \rank(\widetilde \boldsymbol{\Theta}_k)$. Let $r_0$ be the number of non-zero canonical correlations between $\widetilde \boldsymbol{\Theta}_1$ and $\widetilde \boldsymbol{\Theta}_2$ according to \eqref{eq:CCA}.
\begin{enumerate}
\item There exist $\boldsymbol{U}_k$, $\boldsymbol{V}_k$, $\boldsymbol{Z}_k$, $\boldsymbol{A}_k$ such that model~\eqref{eq:expDecom} holds with the corresponding constraints.
\item If joint $\boldsymbol{J}_k=\boldsymbol{U}_k\boldsymbol{V}^\top_k$ and individual $\boldsymbol{I}_k=\boldsymbol{Z}_k\boldsymbol{A}^\top_k$ satisfy $ \rank(\boldsymbol{J}_k) + \rank(\boldsymbol{I}_k)= \rank(\widetilde \boldsymbol{\Theta}_k)$, then $\boldsymbol{J}_k$ and $\boldsymbol{I}_k$ are unique. Furthermore, if the canonical correlations are distinct, then $\boldsymbol{U}_k$ are unique up to a sign. If both $\widetilde{\boldsymbol{Z}}_k$ and $\boldsymbol{Z}_k$ satisfy the conditions, then there exists an orthogonal matrix $\boldsymbol{Q}_k \in \mathbb{R}^{(r_k - r_0) \times (r_k - r_0)}$ such that $\widetilde{\boldsymbol{Z}}_k = \boldsymbol{Z}_k\boldsymbol{Q}_k$.
\end{enumerate}
\end{thm}
The proof is in Web Appendix A.
\subsection{Connection to other existing decompositions}
The existence and identifiability conditions of the proposed ECCA model are similar to the conditions for Decomposition-based Canonical Correlation Analysis (DCCA) \citep{shu2020d}. However, ECCA has two important differences from DCCA. First, DCCA decomposes $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ into common $\boldsymbol{C}$ and orthogonal $\boldsymbol{D}_1$, $\boldsymbol{D}_2$, and estimates those components separately rather than estimating $\boldsymbol{U}_k$ directly as ECCA does. Secondly, DCCA is restricted to the Gaussian case, where the corresponding estimates have closed forms. In contrast, ECCA considers a more general exponential family framework for which closed-form solutions do not exist, presenting significant optimization challenges that we address in this work.
Exponential PCA methods \citep{collins2001generalization, landgraf2020generalized} consider a low-rank decomposition of the matrix of natural parameters separately for each dataset, and thus do not provide answers as to which signals are correlated and which signals are unique across datasets. Generalized Association Study (GAS) \citep{li2018general} also considers a decomposition of natural parameter matrices under the exponential family framework into joint and individual parts; however, the definitions of joint and individual are different compared to ECCA. In GAS, the joint parts have zero principal angles (all canonical correlations are one), whereas the individual parts are non-intersecting, but not necessarily orthogonal. For example, two canonical variables with a canonical correlation of 0.8 belong to the individual parts of the decomposition under the GAS model, but belong to the joint part of the decomposition under the ECCA model. Thus, GAS and ECCA agree in their treatment of canonical correlations that are exactly one or exactly zero, but disagree on canonical correlations that are strictly between 0 and 1. ECCA's treatment of those as joint is consistent with standard CCA. Furthermore, unlike in GAS, individual signals in ECCA are orthogonal to each other, meaning that these signals can be interpreted as view-specific information completely absent from the other view. The differences between the GAS and ECCA decompositions translate into significant differences in the underlying optimization problems and corresponding algorithms, as the additional orthogonality constraints in the ECCA model present considerable challenges, which we address here with the help of a splitting method \citep{Lai:2014dq}.
\section{Estimation}\label{sec:estimation}
\subsection{Overview}\label{sec:parameter}
Let $L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k)$ be the negative log-likelihood associated with the natural parameter matrix $\boldsymbol{\Theta}_k$ given the data matrix:
$$
L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k) = -\sum_{i=1}^{n}{\sum_{j=1}^{p_k}\log f(x_{kij}|\theta_{kij})} = \sum_{i=1}^{n}\sum_{j=1}^{p_k}\left\{- x_{kij}\theta_{kij} + b_k(\theta_{kij}) \right\} + C,
$$
where $C$ is a constant independent from $\boldsymbol{\Theta}_k$.
In the Gaussian case with variance one (Section~\ref{sec:lrccamodel}),
$$
L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k) = \frac{1}{2}\|\boldsymbol{X}_k-\boldsymbol{\Theta}_k\|^2_F + C,
$$
and in the Binomial proportion case with $m$ trials
$$
L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k) = m\sum_{i = 1}^{n}\sum_{j = 1}^{p}\Big[-x_{kij}(\theta_{kij}/m) + \log\{1+\exp(\theta_{kij}/m)\}\Big] + C.
$$
Observe that the number of trials $m$ enters the likelihood as a multiplier and as a scaling term on $\boldsymbol{\Theta}_k$. Since the scaling does not affect the model decomposition, the choice of $m$ can be viewed as a relative weight assigned to view $k$.
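For concreteness, both negative log-likelihoods can be transcribed directly into code; the sketch below is our own illustration (not the paper's implementation) and uses a numerically stable evaluation of $\log\{1+\exp(\cdot)\}$.
\begin{verbatim}
import numpy as np

def neg_loglik(X, Theta, family="gaussian", m=1):
    # Negative log-likelihood, up to constants in Theta, for one view.
    if family == "gaussian":        # b(theta) = theta^2 / 2
        b = Theta**2 / 2.0
    elif family == "binomial":      # b(theta) = m * log(1 + exp(theta/m))
        b = m * np.logaddexp(0.0, Theta / m)
    else:
        raise ValueError("unknown family: " + family)
    return np.sum(-X * Theta + b)
\end{verbatim}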
Given ranks $r_0$, $r_1$ and $r_2$, we propose to fit model~\eqref{eq:expDecom} by minimizing sum of negative log-likelihoods associated with $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$, accounting for centering and model constraints:
\begin{equation}\label{eq:finalform}
\begin{split}
\minimize_{\boldsymbol{\mu}_k, \boldsymbol{U}_k, \boldsymbol{V}_k,\boldsymbol{Z}_k,\boldsymbol{A}_k}&\left\{L(\textbf{1}_n\boldsymbol{\mu}_{1}^\top + \boldsymbol{U}_{1}\boldsymbol{V}_1^\top+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top|\boldsymbol{X}_1) + L(\textbf{1}_n\boldsymbol{\mu}_{2}^\top + \boldsymbol{U}_{2}\boldsymbol{V}_2^\top+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top|\boldsymbol{X}_2)\right\} \\
\mbox{subject to}& \quad\boldsymbol{U}_k^{\top}\textbf{1}_n = {\bf 0},\quad \boldsymbol{U}_k^{\top}\boldsymbol{U}_k = \boldsymbol{I}_{r_0},\quad \boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \textup{diag}(\rho_1, \dots, \rho_{r_0}),\\
&\quad \boldsymbol{Z}_k^{\top}(\textbf{1}_n\ \boldsymbol{U}_1\ \boldsymbol{U}_2) = {\bf 0},\quad \boldsymbol{Z}_k^{\top}\boldsymbol{Z}_k = \boldsymbol{I}_{r_k - r_0}, \quad\boldsymbol{Z}_1^{\top}\boldsymbol{Z}_2 = {\bf 0}, \quad k=1,2.
\end{split}
\end{equation}
We discuss rank selection approaches in Section~\ref{sec:rank}.
To optimize~\eqref{eq:finalform}, we propose to use alternating updates over $\boldsymbol{\mu}_k, \boldsymbol{U}_k, \boldsymbol{V}_k, \boldsymbol{Z}_k, \boldsymbol{A}_k$, as summarized in Algorithm~\ref{a:full}. Each update corresponds to its own non-trivial optimization problem due to the combination of a (possibly) non-Gaussian likelihood $L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k)$ and the orthogonality constraints in~\eqref{eq:finalform}. We propose to use the damped Newton's method to update the unconstrained model parameters (the intercept $\boldsymbol{\mu}_k$ and the loading matrices $\boldsymbol{V}_k$, $\boldsymbol{A}_k$). For constrained model parameters, we derive modifications of the splitting orthogonality constraints (SOC) and Bregman iteration method \citep{yin2008bregman,Lai:2014dq}.
Below we provide a high-level overview of each update; additional details are in Web Appendix B.
\begin{algorithm}[!t]
\caption{ECCA algorithm}\label{a:full}
\begin{algorithmic}[1]
\Require Initial values $\boldsymbol{U}_1^{(0)}, \boldsymbol{U}_2^{(0)}, \boldsymbol{Z}_1^{(0)}, \boldsymbol{Z}_2^{(0)}$, ranks $r_0, r_1, r_2$, $t_{\max}$, $\epsilon$
\State $t\gets 0$
\State Calculate starting negative log-likelihood $L^{(0)}$
\While{$|L^{(t)} - L^{(t-1)}| > \epsilon$ and $t < t_{\max}$}
\State $t \gets t+1$
\State Update of loadings: solve for $\boldsymbol{\mu}_k^{(t)}$, $\boldsymbol{V}_k^{(t)}$, $\boldsymbol{A}_k^{(t)}$, $k= 1, 2$
\State Update of orthogonal scores: solve for $\boldsymbol{Z}_1^{(t)}$, $\boldsymbol{Z}_2^{(t)}$
\State Update of correlated scores: solve for $\boldsymbol{U}_1^{(t)}$, $\boldsymbol{U}_2^{(t)}$
\State Rotation of correlated scores: update $\boldsymbol{U}_k^{(t)}$ and $\boldsymbol{V}_k^{(t)}$, $k=1, 2$
\State Calculate updated negative log-likelihood $L^{(t)}$
\EndWhile
\State \Return {$\boldsymbol{\mu}_k^{(t)}, \boldsymbol{U}_k^{(t)}, \boldsymbol{V}_k^{(t)}, \boldsymbol{Z}_k^{(t)}, \boldsymbol{A}_k^{(t)}, k=1, 2$}
\end{algorithmic}
\end{algorithm}
\subsection{Update of loadings}
\label{s:loading_update}
Given $\boldsymbol{U}_k$, $\boldsymbol{Z}_k$, $k=1, 2$, the update of loadings in~\eqref{eq:finalform} can be separated across $k$, leading to two separate optimization problems of the same form:
\begin{equation}\label{eq:loadings}
(\boldsymbol{\mu}^*_k,\boldsymbol{V}^*_k,\boldsymbol{A}^*_k) = \argmin_{\boldsymbol{\mu}_k,\boldsymbol{V}_k,\boldsymbol{A}_k}{L(\textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{U}_{k}\boldsymbol{V}_k^\top+\boldsymbol{Z}_{k}\boldsymbol{A}_k^\top|\boldsymbol{X}_k)}.
\end{equation}
Since $L(\boldsymbol{\Theta}_k|\boldsymbol{X}_k)$ is differentiable and~\eqref{eq:loadings} has no constraints, we propose to use damped Newton's method for optimization. For example, the update for $\boldsymbol{\mu}$ takes the form
\begin{equation}
\label{eq:mu_update}
\boldsymbol{\mu}^{+} = \boldsymbol{\mu} - t(\nabla_{\boldsymbol{\mu}}^2 L)^{-1}\nabla_{\boldsymbol{\mu}} L,
\end{equation}
where $\boldsymbol{\mu}$ is the current value, $t \in (0, 1)$ is the step size, $\nabla_{\boldsymbol{\mu}} L$ is the gradient evaluated at current value $\boldsymbol{\mu}$, and $\nabla_{\boldsymbol{\mu}}^2 L$ is the Hessian evaluated at current value $\boldsymbol{\mu}$. We choose the step size by backtracking line search \citep{wright1999numerical}, and use the difference in objective function values to monitor the convergence.
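As an illustration of this update, the sketch below applies the damped Newton step with backtracking to the intercept of a Binomial-proportion view; since the entries are independent, the problem separates across columns and the Hessian is diagonal. The function and argument names are ours, not part of the paper's implementation.
\begin{verbatim}
import numpy as np

def update_mu_binomial(X, mu, R, m=1, max_iter=50, tol=1e-8):
    # Damped Newton update of the intercept mu, with Theta = 1 mu^T + R,
    # where R = U V^T + Z A^T is held fixed; the loss separates by column.
    def loss(mu_):
        Th = mu_[None, :] + R
        return np.sum(-X * Th + m * np.logaddexp(0, Th / m))
    for _ in range(max_iter):
        Theta = mu[None, :] + R
        P = 1.0 / (1.0 + np.exp(-Theta / m))      # b'(theta): Binomial mean
        grad = np.sum(P - X, axis=0)              # per-column gradient
        hess = np.sum(P * (1.0 - P), axis=0) / m  # per-column Hessian (> 0)
        step = grad / np.maximum(hess, 1e-12)
        f0, t = loss(mu), 1.0
        while loss(mu - t * step) > f0 and t > 1e-8:  # backtracking search
            t /= 2.0
        mu_new = mu - t * step
        if f0 - loss(mu_new) < tol:
            return mu_new
        mu = mu_new
    return mu
\end{verbatim}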
In the special case when view $k$ follows a Gaussian distribution,~\eqref{eq:loadings} has a closed-form solution. Let $\boldsymbol{S}_k = (\textbf{1}_n\ \boldsymbol{U}_k\ \boldsymbol{Z}_k) \in \mathbb{R}^{n\times (1+r_k)}$ and $\boldsymbol{T}_k = (\boldsymbol{\mu}_k\ \boldsymbol{V}_k\ \boldsymbol{A}_k) \in \mathbb{R}^{p_k\times (1+r_k)}$. Then
$$
L(\textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{U}_{k}\boldsymbol{V}_k^\top+\boldsymbol{Z}_{k}\boldsymbol{A}_k^\top|\boldsymbol{X}_k) = \frac{1}{2}\|\boldsymbol{X}_k-\boldsymbol{S}_k\boldsymbol{T}_k^{\top}\|^2_F + C,
$$
thus $(\boldsymbol{\mu}^*_k\ \boldsymbol{V}^*_k\ \boldsymbol{A}^*_k) = (\boldsymbol{S}_k^+\boldsymbol{X}_k)^{\top}$, where $\boldsymbol{S}_k^+$ is the Moore--Penrose inverse of $\boldsymbol{S}_k$.
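In code, this closed form amounts to a single least-squares solve via the Moore--Penrose inverse; the sketch below (function and variable names are ours) recovers $(\boldsymbol{\mu}_k, \boldsymbol{V}_k, \boldsymbol{A}_k)$ from the stacked score matrix.
\begin{verbatim}
import numpy as np

def update_loadings_gaussian(X, U, Z):
    # Closed-form Gaussian update: T = (mu, V, A) = (S^+ X)^T with
    # S = (1_n, U, Z); pinv handles possible rank deficiency of S.
    S = np.column_stack([np.ones(X.shape[0]), U, Z])
    T = (np.linalg.pinv(S) @ X).T
    r0 = U.shape[1]
    return T[:, 0], T[:, 1:1 + r0], T[:, 1 + r0:]   # mu, V, A
\end{verbatim}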
\subsection{Update of orthogonal scores}
\label{s:ind_update}
Given $\boldsymbol{\mu}_k$, $\boldsymbol{A}_k$, $\boldsymbol{V}_k$ and $\boldsymbol{U}_k$, let $\boldsymbol{B}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{U}_{k}\boldsymbol{V}_k^\top$. Then the update of orthogonal score matrices $\boldsymbol{Z}_1$ and $\boldsymbol{Z}_2$ corresponds to the following problem
\begin{equation}\label{eq:ZSOC}
\begin{split}
\minimize_{\boldsymbol{Z}_1,\boldsymbol{Z}_2}&\left\{L(\boldsymbol{B}_1+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top|\boldsymbol{X}_1) + L(\boldsymbol{B}_2+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top|\boldsymbol{X}_2)\right\} \\
\text{subject to }& \begin{pmatrix}\textbf{1}_n&\boldsymbol{U}_1&\boldsymbol{U}_2\end{pmatrix} ^\top\begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix}=\bf{0}, \quad \begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix}^\top \begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} = \boldsymbol{I}.
\end{split}
\end{equation}
Problem \eqref{eq:ZSOC} has convex objective function with respect to $(\boldsymbol{Z}_1, \boldsymbol{Z}_2)$ and nonconvex orthogonality constraints.
To solve this problem, we adapt the SOC method, a Splitting method for Orthogonality Constrained problems \citep{Lai:2014dq}.
We introduce the auxiliary matrix $(\boldsymbol{P}_1, \boldsymbol{P}_2)$ and reformulate~\eqref{eq:ZSOC} as
\begin{equation}\label{eq:ZSOC2}
\begin{split}
\min_{\boldsymbol{Z}_1,\boldsymbol{Z}_2,\boldsymbol{P}_1,\boldsymbol{P}_2}&\left\{L(\boldsymbol{B}_1+\boldsymbol{Z}_{1}\boldsymbol{A}_1^\top|\boldsymbol{X}_1) + L(\boldsymbol{B}_2+\boldsymbol{Z}_{2}\boldsymbol{A}_2^\top|\boldsymbol{X}_2)\right\} \\
\text{subject to }&\begin{pmatrix}\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} = \begin{pmatrix} \boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix}, \\
&\begin{pmatrix}\textbf{1}_n&\boldsymbol{U}_1&\boldsymbol{U}_2\end{pmatrix} ^\top\begin{pmatrix}\boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix}=\bf{0}, \quad\begin{pmatrix}\boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix}^\top \begin{pmatrix}\boldsymbol{P}_1&\boldsymbol{P}_2\end{pmatrix} = \boldsymbol{I}.
\end{split}
\end{equation}
The purpose of the auxiliary matrix is to separate the objective function minimization from the orthogonality constraints. Algorithm~\ref{a:ZSOC} summarizes the SOC updates applied to~\eqref{eq:ZSOC2}; see Web Appendix B for the derivation. As the updates for $\boldsymbol{Z}_k$ are unconstrained, we utilize the updates from Section~\ref{s:loading_update}. We use the primal residuals, $(\boldsymbol{Z}_1^{(t+1)}, \boldsymbol{Z}_2^{(t+1)}) - (\boldsymbol{P}_1^{(t+1)}, \boldsymbol{P}_2^{(t+1)})$, and the dual residuals, $(\boldsymbol{P}_1^{(t+1)}, \boldsymbol{P}_2^{(t+1)}) - (\boldsymbol{P}_1^{(t)}, \boldsymbol{P}_2^{(t)})$, to monitor the convergence \citep{Boyd:2011bw}.
\begin{algorithm}[!t]
\caption{Splitting orthogonal constraint algorithm for~\eqref{eq:ZSOC2}}\label{a:ZSOC}
\begin{algorithmic}[1]
\Require Given: $t=0$, $\boldsymbol{Z}_1^{(0)}$, $\boldsymbol{Z}_2^{(0)}$, $\boldsymbol{U}=(\textbf{1}_n,\boldsymbol{U}_1,\boldsymbol{U}_2), t_{max}$
\State Initialize $\boldsymbol{P}_1^{(0)} \gets \boldsymbol{Z}_1^{(0)},\boldsymbol{P}_2^{(0)} \gets \boldsymbol{Z}_2^{(0)},\boldsymbol{B}_1^{(0)} \gets 0,\boldsymbol{B}_2^{(0)} \gets 0$
\While{$t \neq t_{max}$ and `not converge'}
\State $t \gets t+1$;
\State $\boldsymbol{Z}_1^{(t)} \gets \argmin_{\boldsymbol{Z}_1}L(\boldsymbol{\Theta}_1|\boldsymbol{X}_1) + \frac{\gamma}2\|\boldsymbol{Z}_1-\boldsymbol{P}^{(t-1)}_1+\boldsymbol{B}_1^{(t-1)}\|_F^2$.
\State $\boldsymbol{Z}_2^{(t)} \gets \argmin_{\boldsymbol{Z}_2}L(\boldsymbol{\Theta}_2|\boldsymbol{X}_2) +\frac{\gamma}2\|\boldsymbol{Z}_2-\boldsymbol{P}_2^{(t-1)}+\boldsymbol{B}^{(t-1)}_2\|_F^2$.
\State Compute SVD of $(\boldsymbol{I}-\boldsymbol{U}\bU^{+})(\boldsymbol{Z}_1^{(t)}+\boldsymbol{B}_1^{(t-1)},\boldsymbol{Z}_2^{(t)}+\boldsymbol{B}_2^{(t-1)})=\boldsymbol{M}\boldsymbol{D}\boldsymbol{N}^\top$.
\State $(\boldsymbol{P}_1^{(t)},\boldsymbol{P}^{(t)}_2)\gets\boldsymbol{M}\boldsymbol{N}^\top$.
\State $\boldsymbol{B}_1^{(t)} \gets\boldsymbol{B}_1^{(t-1)} + \boldsymbol{Z}_1^{(t)} -\boldsymbol{P}_1^{(t)}.$
\State $\boldsymbol{B}_2^{(t)} \gets\boldsymbol{B}_2^{(t-1)} + \boldsymbol{Z}_2^{(t)} -\boldsymbol{P}_2^{(t)}.$
\EndWhile
\State \Return {$\boldsymbol{Z}_k^{(t)}, \boldsymbol{P}_k^{(t)}, \boldsymbol{B}_k^{(t)}, k=1, 2$}
\end{algorithmic}
\end{algorithm}
The parameter $\gamma$ can be interpreted as an inverse step size. The larger $\gamma$ is, the more likely the algorithm is to converge, but the more iterations it takes. The smaller $\gamma$ is, the larger the steps, but the algorithm may fail to converge. By default, we use $\gamma = 1000$, which led to convergence in all our numerical studies. \citet{Lai:2014dq} show empirically that the algorithm converges when $\gamma$ is chosen sufficiently large, but provide no theoretical guarantees. Below we establish such guarantees for Algorithm~\ref{a:ZSOC} for the Binomial/Gaussian and Binomial/Binomial cases by taking advantage of the results of \citet{wang2019global} on the convergence of ADMM in non-convex problems.
\begin{thm}\label{thm:SOCconvergence}
If data matrices $\boldsymbol{X}_1, \boldsymbol{X}_2$ follow Gaussian or Binomial-proportion distribution, then for sufficiently large $\gamma$, the sequence $(\boldsymbol{Z}^{(t)} , \boldsymbol{P}^{(t)} , \boldsymbol{B}^{(t)})$ generated by SOC Algorithm~\ref{a:ZSOC} has at least one limit point, and each limit point is a stationary point of the augmented Lagrangian.
\end{thm}
In the special case when both exponential families are Gaussian, the solution has a closed form. Let
$\boldsymbol{Y}_k = \boldsymbol{X}_k - \textbf{1}_n\boldsymbol{\mu}^\top_k - \boldsymbol{U}_k\boldsymbol{V}^\top_k, k = 1,2$,
$\boldsymbol{Y} = (\boldsymbol{Y}_1,\boldsymbol{Y}_2)$, $\boldsymbol{Z} = (\boldsymbol{Z}_1,\boldsymbol{Z}_2)$,
$\boldsymbol{U} = (\textbf{1}_n, \boldsymbol{U}_1, \boldsymbol{U}_2)$,
and let $\boldsymbol{A}$ be a block-diagonal matrix with blocks $\boldsymbol{A}_1$, $\boldsymbol{A}_2$. Then problem~\eqref{eq:ZSOC} becomes
\begin{equation}\label{eq:ZSOC-Gaussian}
\begin{split}
\minimize_{\boldsymbol{Z}_1,\boldsymbol{Z}_2}& \ {\frac{1}{2}\|\boldsymbol{Y} - \boldsymbol{Z}\boldsymbol{A}^\top\|_F^2}, \\
\text{subject to }&\quad \boldsymbol{U}^\top\boldsymbol{Z} = \mathbf{0},\quad \boldsymbol{Z}^\top\boldsymbol{Z} = \boldsymbol{I}.
\end{split}
\end{equation}
It can be shown (Web Appendix B) that the optimal solution is
$
\boldsymbol{Z}^* = \boldsymbol{Q}\boldsymbol{R}^\top,
$
where $\boldsymbol{Q}$ and $\boldsymbol{R}$ are matrices of singular vectors from the short SVD factorization $(\boldsymbol{I}-\boldsymbol{U}\bU^{+})\boldsymbol{Y}\boldsymbol{A}=\boldsymbol{Q}\boldsymbol{D}\boldsymbol{R}^\top$.
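Computationally, this solution is a projection followed by one short SVD; the sketch below (all-Gaussian case only, names ours) returns the concatenated $(\boldsymbol{Z}_1, \boldsymbol{Z}_2)$, with $\boldsymbol{A}$ assembled as a block-diagonal matrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def update_Z_gaussian(Y1, Y2, A1, A2, U1, U2):
    # Closed form in the all-Gaussian case: Z* = Q R^T, where
    # (I - U U^+) Y A = Q D R^T is the short SVD and U = (1_n, U1, U2).
    Y = np.hstack([Y1, Y2])
    A = block_diag(A1, A2)
    U = np.column_stack([np.ones(Y.shape[0]), U1, U2])
    M = Y @ A
    M = M - U @ (np.linalg.pinv(U) @ M)    # apply I - U U^+
    Q, _, Rt = np.linalg.svd(M, full_matrices=False)
    Z = Q @ Rt
    return Z[:, :A1.shape[1]], Z[:, A1.shape[1]:]   # Z1, Z2
\end{verbatim}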
\subsection{Update of correlated scores}
\label{s:joint_update}
Given $\boldsymbol{\mu}_k$, $\boldsymbol{V}_k$, $\boldsymbol{Z}_k$ and $\boldsymbol{A}_k$, let $\boldsymbol{B}_k = \textbf{1}_n\boldsymbol{\mu}_{k}^\top + \boldsymbol{Z}_{k}\boldsymbol{A}_k^\top$. The update of correlated score matrices $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ corresponds to the following problem
\begin{equation}\label{eq:USOClambda}
\begin{split}
\minimize_{\boldsymbol{U}_1,\boldsymbol{U}_2}& \ \{L(\boldsymbol{B}_1+\boldsymbol{U}_{1}\boldsymbol{V}_1^\top|\boldsymbol{X}_1) + L(\boldsymbol{B}_2+\boldsymbol{U}_{2}\boldsymbol{V}_2^\top|\boldsymbol{X}_2)\} \\
\text{subject to }& \begin{pmatrix}\textbf{1}_n&\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} ^\top\begin{pmatrix}\boldsymbol{U}_1&\boldsymbol{U}_2\end{pmatrix}={\bf{0}}, \quad\boldsymbol{U}_1^\top\boldsymbol{U}_1 = \boldsymbol{U}_2^\top\boldsymbol{U}_2 = \boldsymbol{I}, \quad \boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \boldsymbol{\Lambda}.
\end{split}
\end{equation}
The key difference of this problem compared to~\eqref{eq:ZSOC} is that $\boldsymbol{U}_1^{\top}\boldsymbol{U}_2$ is required to be a diagonal matrix with positive entries (corresponding to the correlations), that is, $\boldsymbol{U}_1^\top \boldsymbol{U}_2=\boldsymbol{\Lambda}$. Note, however, that if $\boldsymbol{U}_1^\top \boldsymbol{U}_2$ is non-diagonal but full rank, one can apply rotations $\mathbf{\Gamma}_1$ to $\boldsymbol{U}_1$ and $\mathbf{\Gamma}_2$ to $\boldsymbol{U}_2$ so that $\mathbf{\Gamma}_1^{\top}\boldsymbol{U}_1^\top \boldsymbol{U}_2\mathbf{\Gamma}_2$ is diagonal. For rotations, it holds that $\boldsymbol{U}_1\boldsymbol{V}_1^{\top} = \boldsymbol{U}_1\mathbf{\Gamma}_1\mathbf{\Gamma}_1^{\top}\boldsymbol{V}_1^{\top}$; thus, the corresponding rotation of the loadings $\boldsymbol{V}_1$ keeps the objective value of $L(\boldsymbol{B}_1+\boldsymbol{U}_{1}\boldsymbol{V}_1^\top|\boldsymbol{X}_1)$ unchanged. Therefore, we drop the diagonal constraint from~\eqref{eq:USOClambda}, and perform an extra rotation of the scores $\boldsymbol{U}_k$ and loadings $\boldsymbol{V}_k$ in Section~\ref{s:normalize}.
Rewriting~\eqref{eq:USOClambda} without diagonal constraints separates the problem across $k = 1,2$, leading to two separate optimization problems of the same form:
\begin{equation}\label{eq:USOC}
\begin{split}
\minimize_{\boldsymbol{U}_k}&\{ L(\boldsymbol{B}_k+\boldsymbol{U}_{k}\boldsymbol{V}_k^\top|\boldsymbol{X}_k)\} \\
\text{subject to }& \begin{pmatrix}\textbf{1}_n&\boldsymbol{Z}_1&\boldsymbol{Z}_2\end{pmatrix} ^\top\boldsymbol{U}_k={\bf{0}}, \quad\boldsymbol{U}_k^\top\boldsymbol{U}_k = \boldsymbol{I}.
\end{split}
\end{equation}
As in Section~\ref{s:ind_update}, we adapt SOC algorithm to solve~\eqref{eq:USOC}, with Gaussian case having the closed-form solution. See Web Appendix B.
\subsection{Rotation of correlated scores and loadings}
\label{s:normalize}
Let $\widetilde\boldsymbol{U}_k$, $k=1,2$, be the score matrices obtained from solving~\eqref{eq:USOC}, which may not satisfy the regularity condition $\widetilde \boldsymbol{U}_1^\top \widetilde \boldsymbol{U}_2=\boldsymbol{\Lambda}$. Let $\widetilde \boldsymbol{V}_k$ be the corresponding loading matrices. Below we show how to construct rotations $\mathbf{\Gamma}_k$ ($\mathbf{\Gamma}_k\mathbf{\Gamma}_k^{\top} = \mathbf{\Gamma}_k^{\top}\mathbf{\Gamma}_k = \boldsymbol{I}$) such that setting $\boldsymbol{U}_k = \widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k$ leads to $\boldsymbol{U}_1^{\top}\boldsymbol{U}_2 = \boldsymbol{\Lambda}$, while the matrix $\widetilde \boldsymbol{U}_k\widetilde \boldsymbol{V}^\top_k = \boldsymbol{U}_k\boldsymbol{V}^\top_k$ remains unchanged with $\boldsymbol{V}_k = \widetilde\boldsymbol{V}_k\mathbf{\Gamma}_k$.
Let $\mathbf{\Gamma}_1$ and $\mathbf{\Gamma}_2$ be matrices of left and right singular vectors, respectively, from the singular value decomposition
$\widetilde \boldsymbol{U}^\top_1 \widetilde \boldsymbol{U}_2 = \mathbf{\Gamma}_1 \boldsymbol{\Lambda}\mathbf{\Gamma}_2^\top,$ where $\boldsymbol{\Lambda}$ is a diagonal matrix of nonnegative singular values. Let $\boldsymbol{U}_k = \widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k$ and $\boldsymbol{V}_k = \widetilde\boldsymbol{V}_k\mathbf{\Gamma}_k$. Then by construction
\begin{align*}
&\boldsymbol{U}^\top_1 \boldsymbol{U}_2 = (\widetilde \boldsymbol{U}_1\mathbf{\Gamma}_1)^\top(\widetilde \boldsymbol{U}_2\mathbf{\Gamma}_2) = \boldsymbol{\Lambda},\quad
\boldsymbol{U}^\top_k \boldsymbol{U}_k = (\widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k)^\top(\widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k) = \boldsymbol{I},\\
&\boldsymbol{U}_k \boldsymbol{V}^\top_k = (\widetilde \boldsymbol{U}_k\mathbf{\Gamma}_k)(\widetilde \boldsymbol{V}_k\mathbf{\Gamma}_k)^\top = \widetilde \boldsymbol{U}_k\widetilde \boldsymbol{V}^\top_k.
\end{align*}
Thus, the rotated score matrices $\boldsymbol{U}_k$ and loading matrices $\boldsymbol{V}_k$ satisfy all the regularity conditions. Furthermore, the likelihood stays the same.
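In code, the rotation step amounts to a single singular value decomposition. A minimal Python sketch, with \texttt{Ut1}, \texttt{Ut2}, \texttt{Vt1}, \texttt{Vt2} standing for $\widetilde\boldsymbol{U}_1$, $\widetilde\boldsymbol{U}_2$, $\widetilde\boldsymbol{V}_1$, $\widetilde\boldsymbol{V}_2$:
\begin{verbatim}
import numpy as np

def rotate_scores(Ut1, Ut2, Vt1, Vt2):
    # SVD of the r0 x r0 cross-product: Ut1' Ut2 = G1 diag(lam) G2'
    G1, lam, G2T = np.linalg.svd(Ut1.T @ Ut2)
    G2 = G2T.T
    U1, U2 = Ut1 @ G1, Ut2 @ G2   # rotated scores: U1' U2 = diag(lam)
    V1, V2 = Vt1 @ G1, Vt2 @ G2   # same rotation on loadings keeps
                                  # U_k V_k' (and the likelihood) fixed
    return U1, U2, V1, V2, lam
\end{verbatim}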
\subsection{Initialization}\label{sec:init}
The proposed ECCA Algorithm~\ref{a:full} requires initial score matrices $\boldsymbol{U}_k^{(0)},\boldsymbol{Z}_k^{(0)}$. Our default initialization is based on the saturated natural parameters $\widehat{\boldsymbol{\Theta}}_k$ obtained from $\boldsymbol{X}_k$ as maximum likelihood estimators without any constraints. In the Gaussian case, $\widehat{\boldsymbol{\Theta}}_k=\boldsymbol{X}_k$. In the Binomial proportion case, if there are any zeros or ones in $\boldsymbol{X}_k$, we adopt the adjustments as in Chapter 10 of \citet{ott2015introduction}: zeros are replaced by $0.375/(m + 0.75)$, whereas ones are replaced by $(m + 0.375)/(m + 0.75)$, where $m$ is the number of trials. We then estimate the natural parameters from the adjusted data as $\widehat \theta_{kij} =m \log\{x_{kij}/(1-x_{kij})\}$. We let $\widetilde{\boldsymbol{\Theta}}_k$ be the column-centered $\widehat{\boldsymbol{\Theta}}_k$, and following Section~\ref{sec:normalCCA} initialize $\boldsymbol{U}_k^{(0)}$ as the first $r_0$ canonical variables of $\mathcal{C}(\widetilde \boldsymbol{\Theta}_1)$ and $\mathcal{C}(\widetilde \boldsymbol{\Theta}_2)$. We initialize $\boldsymbol{Z}_k^{(0)}$ as the remaining $r_k - r_0$ left singular vectors of $(\boldsymbol{I} - \boldsymbol{U}\boldsymbol{U}^+)\widetilde{\boldsymbol{\Theta}}_k$.
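For the Binomial proportion case, the computation of the saturated natural parameters with the boundary adjustment can be sketched as follows (a minimal Python illustration; the extraction of the canonical variables for $\boldsymbol{U}_k^{(0)}$ is omitted).
\begin{verbatim}
import numpy as np

def saturated_binomial_theta(X, m):
    # Boundary adjustment as in Chapter 10 of Ott and Longnecker (2015)
    X = np.where(X == 0.0, 0.375 / (m + 0.75), X)
    X = np.where(X == 1.0, (m + 0.375) / (m + 0.75), X)
    # Saturated natural parameters, theta = m * logit(x)
    Theta = m * np.log(X / (1.0 - X))
    # Column-center before extracting initial scores
    return Theta - Theta.mean(axis=0, keepdims=True)
\end{verbatim}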
\subsection{Rank estimation}\label{sec:rank}
In the Gaussian case, many rank estimation methods have been proposed to determine the total rank $r_k$ for each view. Some examples are the edge distribution method \citep{onatski2010determining}, the profile likelihood method \citep{ProfileLik2006Zhu} and the thresholding method \citep{gavish2014optimal}. However, none of these methods directly extends to non-Gaussian data. Here, we determine the total ranks $r_k$ by adopting the 10-fold cross-validation method designed for exponential families \citep{li2018general}. Given the observed matrix $\boldsymbol{X}_k$, a random subset of its elements is set as missing, and the full underlying natural parameter matrix $\boldsymbol{\Theta}_k$ is estimated at a given rank using exponential PCA \citep{collins2001generalization}. The best rank is chosen based on the minimal negative log-likelihood associated with the held-out elements of $\boldsymbol{X}_k$ and the corresponding elements of the estimated $\boldsymbol{\Theta}_k$. We refer to \citet{li2018general} for additional details.
To estimate the joint rank $r_0$, \citet{li2018general} apply a similar approach to estimate the rank of the concatenated $(\boldsymbol{\Theta}_1, \boldsymbol{\Theta}_2)$. However, this approach may not be valid for the proposed ECCA model~\eqref{eq:expDecom}, since we allow the column spaces of $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ to be different. Instead, we adopt a principal angles approach as described in \citet{yuan2022double}. Specifically, we first construct proxy low-rank natural parameter matrices $\widehat \boldsymbol{\Theta}_k$ by applying low-rank exponential PCA \citep{landgraf2020generalized} separately to each view. We then calculate the principal angles between $\widehat \boldsymbol{\Theta}_1$ and $\widehat \boldsymbol{\Theta}_2$, and cluster these angles into two groups using profile likelihood. The number of elements in the cluster with the smallest angles is used as an estimate of the joint rank. We refer to \citet{yuan2022double} for additional details.
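The angle computation itself is elementary. A Python sketch is given below; the profile-likelihood clustering of \citet{yuan2022double} is replaced here by a fixed illustrative threshold, which is our own simplification.
\begin{verbatim}
import numpy as np
from scipy.linalg import subspace_angles

def estimate_joint_rank(Theta1_hat, Theta2_hat, threshold_deg=45.0):
    # Principal angles (in degrees) between the column spaces of the
    # proxy low-rank natural parameter estimates
    angles = np.rad2deg(subspace_angles(Theta1_hat, Theta2_hat))
    # Stand-in for the profile-likelihood clustering: count the
    # angles that are well separated from 90 degrees
    return int(np.sum(angles < threshold_deg))
\end{verbatim}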
\section{Simulation studies}\label{sec:lrccaSimu}
We consider three settings for data generation, and use 100 replications for each.
\begin{description}
\item[\textbf{Setting 1}] Both $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ follow a Gaussian distribution.
\item[\textbf{Setting 2}] $\boldsymbol{X}_1$ follows a Gaussian distribution and $\boldsymbol{X}_2$ follows a Binomial proportion distribution.
\item[\textbf{Setting 3}] Both $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ follow a Binomial proportion distribution.
\end{description}
For all settings, we set the sample size $n = 50$ and the dimensions $p_1 = 30$, $p_2 = 20$. We generate data according to model~\eqref{eq:expDecom} with $r_0=3$ nonzero canonical correlations with corresponding values $\boldsymbol{\Lambda} = \textup{diag}(1, 0.9, 0.7)$. The total ranks of the centered natural parameter matrices are set to $r_1 = 7$, $r_2 = 6$. For the Binomial proportion distribution we use $m=100$ trials. Additional data generation details are in Web Appendix C; a simplified sketch is given below.
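To make the construction concrete, the following Python sketch generates the correlated joint scores and one Binomial proportion view for a single replication. It is a simplified illustration: the individual components $\boldsymbol{Z}_k\boldsymbol{A}_k^\top$ are omitted, and the Gaussian loadings and parameter scalings are our own choices rather than those of Web Appendix C.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p1, r0, m = 50, 30, 3, 100
lam = np.array([1.0, 0.9, 0.7])        # canonical correlations

def centered_orthonormal(n, r, rng):
    # Orthonormal columns orthogonal to the all-ones vector
    A = rng.standard_normal((n, r))
    A -= A.mean(axis=0)                 # enforce 1_n' A = 0
    Q, _ = np.linalg.qr(A)
    return Q

# Joint scores with U1' U1 = U2' U2 = I and U1' U2 = diag(lam)
W = centered_orthonormal(n, 2 * r0, rng)
U1 = W[:, :r0]
U2 = W[:, :r0] @ np.diag(lam) + W[:, r0:] @ np.diag(np.sqrt(1 - lam**2))

# Natural parameters for view 1: Theta1 = 1 mu1' + U1 V1'
mu1, V1 = rng.standard_normal(p1), rng.standard_normal((p1, r0))
Theta1 = np.outer(np.ones(n), mu1) + U1 @ V1.T

# Binomial proportion observations, matching theta = m * logit(p)
X1 = rng.binomial(m, 1.0 / (1.0 + np.exp(-Theta1 / m))) / m
\end{verbatim}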
We compare the performance of the following methods: (i) ECCA, the proposed approach; (ii) DCCA adapted to the exponential family setting, where we apply DCCA \citep{shu2020d} to the saturated matrices of natural parameters; (iii) EPCA-DCCA, where we first estimate low-rank natural parameter matrices using exponential PCA \citep{landgraf2020generalized} and then apply DCCA; (iv) GAS, the Generalized Association Study framework \citep{li2018general}. Implementation details for each method are described in Web Appendix C. The ranks for all methods are set at the true values. For GAS, we consider two cases: joint rank 3 (GAS-rank3), which is a misspecified model as it forces the top three canonical correlations to be one, and joint rank 1 (GAS-rank1), which treats the 2nd and 3rd canonical pairs as part of the individual structures.
To assess the performance, we consider the overall relative error
$$
\text{relative error}=\frac{\|\widehat\boldsymbol{\Theta}_k - \boldsymbol{\Theta}_k\|_F^2}{\|\boldsymbol{\Theta}_k\|_F^2},\ k=1,2,
$$
where $\widehat\boldsymbol{\Theta}_k$ are the estimated natural parameter matrices and $\boldsymbol{\Theta}_k$ are the true natural parameter matrices. We use this metric as it is invariant to the choice of decomposition. To assess the joint signal estimation performance, we also evaluate the chordal distance \citep{ye2016schubert}
$$
\frac{1}{\sqrt{2}}\left\|\boldsymbol{J}_k\boldsymbol{J}_k^{+} - \widehat{\boldsymbol{J}}_k\widehat{\boldsymbol{J}}_k^{+}\right\|_F,\ k=1,2.
$$
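Both criteria are simple to compute. A small Python helper, where \texttt{J} and \texttt{J\_hat} stand for the true and estimated joint-structure matrices $\boldsymbol{J}_k$ and $\widehat{\boldsymbol{J}}_k$:
\begin{verbatim}
import numpy as np

def relative_error(Theta_hat, Theta):
    return (np.linalg.norm(Theta_hat - Theta, "fro") ** 2
            / np.linalg.norm(Theta, "fro") ** 2)

def chordal_distance(J_hat, J):
    # Frobenius distance between the projection matrices onto the
    # column spaces, computed via Moore-Penrose inverses
    P_hat = J_hat @ np.linalg.pinv(J_hat)
    P = J @ np.linalg.pinv(J)
    return np.linalg.norm(P - P_hat, "fro") / np.sqrt(2.0)
\end{verbatim}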
Figure~\ref{fig:GG_signal} shows relative errors across all settings and Figure~\ref{fig:GG_subspace} shows the corresponding chordal distances associated with the joint subspaces. When both distributions are Gaussian, all methods perform similarly except GAS-rank3, which has the worst performance. This is expected, since GAS-rank1 is an accurate model in this setting. DCCA has a slight advantage over the other methods in relative error, but gives similar chordal distances. When one or both distributions are Binomial, the performance of DCCA deteriorates, with EPCA-DCCA outperforming DCCA. GAS-rank1, as expected, outperforms GAS-rank3, but is surprisingly significantly worse on the joint signal compared to DCCA and exhibits high variance. One possible explanation is that GAS is implemented for the Binomial case with $m=1$, and thus, unlike ECCA, does not use $m$ to reweight the likelihood in the objective. Another possible explanation is that GAS uses a one-step approximation algorithm for model fitting, and this approximation may lead to suboptimal solutions in some cases. Overall, we find that the proposed ECCA has the best performance, as it is similar to DCCA in the Gaussian case, and outperforms the other methods when at least one of the distributions is Binomial.
\begin{figure}[!t]
\includegraphics[width=1\textwidth]{Whole_Estimation_Error.pdf}
\caption{Comparison of relative error for all settings over 100 replications. }
\label{fig:GG_signal}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=1\textwidth]{Whole_Joint_Error.pdf}
\caption{Comparison of chordal distance for all settings over 100 replications. }
\label{fig:GG_subspace}
\end{figure}
\section{Applications}\label{sec:data}
\subsection{Nutrigenomic study}
The nutrimouse dataset \citep{martin2007novel} is available in the R package \textsf{mixOmics} \citep{rohart2017mixomics}. There are $n=40$ mice, with the first view containing $p_1=120$ gene expression measurements from liver cells, and the second view containing $p_2=21$ concentrations (in percentages) of hepatic fatty acids (lipids). Mice are separated into two genotypes, wild-type (wt) and PPAR$\alpha$ -/- (ppar) mutant, and are administered five different diets: a reference diet of corn and colza oils (ref), a saturated fatty acid diet of hydrogenated coconut oil (coc), an Omega6-rich diet of sunflower oil (sun), an Omega3-rich diet of linseed oil (lin), and a diet with enriched fish oils (fish). Our goal is to extract correlated and orthogonal signals across both views, and to investigate how these signals relate to genotype and diet effects.
We use a Gaussian distribution to model gene expressions in the first view, and a Binomial distribution to model concentrations (converting percentages to proportions). We use $m = 100$ trials to reflect that the original data are measured in percentages, which effectively adjusts the relative weights between the Gaussian and Binomial likelihoods in~\eqref{eq:finalform} (Section~\ref{sec:parameter}). There are 17.5\% zero proportions, which we replace with $0.375/(m + 0.75)$ as in Section~\ref{sec:init}. We use cross-validation to estimate the total ranks as $r_1 = 3$ and $r_2 = 4$ (Section~\ref{sec:rank}). To determine the joint rank $r_0$, we calculate the principal angles between the low-rank estimated natural parameters, leading to angles of 35.0, 57.2 and 74.1 degrees. Since the angle of 74.1 degrees is close to 90, we set the joint rank to $r_0 = 2$ and fit the ECCA model.
The left panel of Figure~\ref{fig:mouse_PCA_joint} displays the joint scores (two left singular vectors of the concatenated $[\boldsymbol{U}_1\ \boldsymbol{U}_2]$) coded by diet and genotype. There is a clear genotype separation based on the first joint component, confirming that the genotype affects both gene expression and lipid concentrations. The second joint component captures a diet effect, with the contrast between the coc and fish diets being most visible. The other diets, however, are not well separated. The right panel of Figure~\ref{fig:mouse_PCA_joint} displays the individual scores for lipids. In contrast to the joint scores, the individual scores show a clear diet effect. In summary, the ECCA decomposition helps to separate the genetic effects on lipid concentrations from the diet effects.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{ECCA_Joint_Separation_Mouse_not_Scaling_trials100.pdf}
\includegraphics[width=0.49\textwidth]{ECCA_Ind_Separation_Mouse_not_Scaling_trials100.pdf}
\caption{ECCA scores from nutrimouse data colored by genotype and diet. Left: Joint scores between gene expressions and lipid concentrations. Right: Individual scores for lipid concentrations. [This figure appears in color in the electronic version of this article, and any mention of color refers to that version]}
\label{fig:mouse_PCA_joint}
\end{figure}
To further illustrate the advantages of ECCA on these data, in Web Appendix D we compare the results of ECCA with GAS \citep{li2018general}. We find that the ECCA scores lead to a better separation of genotype and diet effects, and that the orthogonality of the individual scores in ECCA is advantageous for interpretation in this study, as the observed diet effects in the individual components can be fully attributed to the lipids view.
\subsection{Tumor heterogeneity in prostate cancer}
Prostate cancer (PCa) is the second leading cause of cancer-related death in males in the U.S., with approximately 268,490 new cases and 34,500 deaths expected in 2022 \citep{jemal2021prostate}. The immune response in PCa plays a critical role in directing the evolution of tumor cells and contributes to the extensive inter-tumor heterogeneity among PCa patients \citep{binnewies2018understanding}. Current clinical indexes such as the cancer stage, PSA (prostate specific antigen) level, and Gleason score lack the ability to address the mechanism of heterogeneity and thus are insufficient for definitive identification and treatment of PCa. To address this question by evaluating immune cell subtype profiles, we apply our ECCA framework to The Cancer Genome Atlas (TCGA) \citep{abeshouse2015molecular} PCa dataset. We use two complementary deconvolution methods to capture distinct aspects of PCa cellular composition. For the first view, the cellular composition is determined using the DeMixT method of \citet{wang2018transcriptome}, which extracts transcript proportions corresponding to three cell types: immune, normal (stroma) and tumor. As the proportions from DeMixT sum to one, we only focus on the normal and immune proportions ($p_1=2$). For the second view, we consider the Tumor Immune Estimation Resource (TIMER) of \citet{li2017timer}, leading to cell count proportion data corresponding to $p_2 = 6$ cell types: B cells, CD4-T cells, CD8-T cells, Dendritic cells, Macrophage cells and Neutrophil cells. Unlike DeMixT, TIMER does not produce compositional data, thus the six proportions do not sum to one. Both DeMixT and TIMER are applied to the RNA sequencing data from the same $n = 293$ patients, but they dissect the mixed signals in different spaces (transcript versus cell counts) and for different cell types (all immune cells combined versus immune cell subtypes). Our goal is to separate the joint and individual parts of the signal between DeMixT and TIMER, and to investigate how these signals relate to the clinical outcome of prostate cancer patients as measured by progression-free survival.
In summary, we obtain $\boldsymbol{X}_1\in \mathbb{R}^{293\times 2}$ and $\boldsymbol{X}_2 \in \mathbb{R}^{293 \times 6}$ corresponding to DeMixT and TIMER, respectively. We treat both datasets as proportions arising from a binomial distribution with the same number of trials $m$. From Section~\ref{sec:parameter}, the value of $m$ does not affect the resulting solution, and we set it to one for simplicity. Due to the small number of features in both datasets, we omit the intercept terms in the ECCA model, fixing $\boldsymbol{\mu}_k=0$ throughout. As DeMixT only has two features, we set its total rank as $r_1 = 2$. To determine the total rank for TIMER, we use cross-validation (Section~\ref{sec:rank}), leading to $r_2 = 3$. To determine the joint rank $r_0$, we calculate the principal angles between the low-rank natural parameters of DeMixT and TIMER estimated by exponential PCA. There are two non-zero principal angles, of 27.0 and 72.3 degrees. Given the large separation between the two angles, with 27.0 being close to zero, we set the joint rank to $r_0 = 1$. For simplified follow-up analysis and interpretation, we combine the joint $\boldsymbol{U}_1$ and $\boldsymbol{U}_2$ into one common score $\boldsymbol{U}_{joint}$ based on the leading left singular vector of $[\boldsymbol{U}_1 \ \boldsymbol{U}_2]$. We also rotate the individual scores $\boldsymbol{Z}_2$ for TIMER so that the corresponding loading vectors $\boldsymbol{A}_2$ are orthogonal, in light of the identifiability conditions in Theorem~\ref{thm:modelident}.
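In code, these two post-processing steps are short. A Python sketch, with \texttt{U1}, \texttt{U2}, \texttt{Z2}, \texttt{A2} standing for the fitted ECCA matrices:
\begin{verbatim}
import numpy as np

def combine_joint_scores(U1, U2):
    # Leading left singular vector of the concatenated joint scores
    return np.linalg.svd(np.hstack([U1, U2]))[0][:, 0]

def orthogonalize_loadings(Z2, A2):
    # With A2 = P D Q' (SVD), Z2 Q stays orthonormal,
    # (Z2 Q)(A2 Q)' = Z2 A2', and A2 Q = P D has orthogonal columns
    _, _, QT = np.linalg.svd(A2, full_matrices=False)
    return Z2 @ QT.T, A2 @ QT.T
\end{verbatim}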
\begin{figure}[!t]
\begin{subfigure}{.5\textwidth}
\includegraphics[width =\textwidth]{Fig4A_ECCA_scores_and_Gleason.pdf}
\caption{}\label{fig:gleason}
\end{subfigure}
\begin{subfigure}[c]{.3\textwidth}
\includegraphics[width =\textwidth]{Fig4B_HR_valid_plot.pdf}
\caption{}\label{fig:HR}
\end{subfigure}\\
\centering
\begin{subfigure}[c]{.6\textwidth}
\includegraphics[width =\textwidth]{Fig4C_ECCA_loading_vectors.pdf}
\caption{}\label{fig:loadings}
\end{subfigure}
\caption{(A)~Stratification of joint and individual components by Gleason score categories; (B)~Hazard ratios with 95\% confidence intervals for PFI; (C)~Loading vectors corresponding to joint and individual scores for DeMixT and TIMER.}
\label{fig:loading}
\end{figure}
In order to evaluate the potential utility of $\boldsymbol{U}_{joint}$ and the individual scores for PCa, we compare these scores with the clinically utilized prognostic feature, the Gleason score, and assess their association with the progression-free interval (PFI), considering patients with Gleason scores of 7 and 8+ ($n = 239$). More details are in Web Appendix D.2. We find a significantly lower $\boldsymbol{U}_{joint}$ score together with a significantly higher individual score of DeMixT in the Gleason score = 8+ group (Figure~\ref{fig:gleason}, both p-values $<$ 0.001), representing a patient subgroup with less favorable clinical outcomes. However, neither of the individual scores of TIMER is associated with the Gleason group (Figure~\ref{fig:gleason}). Furthermore, we find that both a high $\boldsymbol{U}_{joint}$ score and a low DeMixT individual score are independently associated with improved PFI in patients with PCa ($\boldsymbol{U}_{joint}$: hazard ratio (HR) = 0.81, 95\% confidence interval (CI): 0.65, 0.99, p-value = 0.05; DeMixT individual score: HR = 1.76, 95\% CI: 1.05, 2.95, p-value = 0.03; Figure~\ref{fig:HR}, Table~\ref{t:Coxph_NoGleason}). The TIMER individual scores are not associated with PFI. The general trends in the observed associations remain after adjusting for Gleason score status in the Cox regression (Table~\ref{t:Coxph}), although they are no longer statistically significant, supporting the notion that measuring immune cell activities could improve the current clinical practice for identifying and treating PCa. Furthermore, these results, together with recent findings on tumor total mRNA expression levels as a potential biomarker \citep{cao2022estimation}, lead to our next hypothesis: that immune transcript proportions, as generated by DeMixT, contain complementary signals from both the immune cell counts and immune-specific transcriptome variations. Figure~\ref{fig:loadings} of the ECCA loading values reveals that in DeMixT the joint score with TIMER captures both stromal and immune proportions with a higher weight on the stromal proportion, whereas in TIMER it captures all proportions except dendritic cells, with the highest weight on macrophages, which is the immune cell type generating the highest amount of transcripts \citep{schelker2017estimation}. The individual DeMixT score represents an orthogonal and unexplained part of the immune transcript proportion (p-value = 0.0003). In contrast, neither the first nor the second individual score for TIMER is significant. In summary, application of our novel ECCA analysis framework to multiple immune deconvolution methods has the potential to provide novel biological insights into varying immune cell activities in PCa.
\begin{table}[!t]
\caption{Hazard ratios and p-values from the Cox proportional hazards model using joint and individual scores between DeMixT and TIMER as predictors}
\begin{center}
\begin{tabular}{llcc}
Predictor & Interpretation & Hazard ratio & P-value \\\hline
Age & Age at tumor diagnosis & 1.22 & 0.172 \\
$\boldsymbol{U}_{joint}$ & Joint between DeMixT and TIMER & 0.81 & 0.049 \\
$\boldsymbol{Z}_1$ & Individual DeMixT & 1.76 & 0.032 \\
$\boldsymbol{Z}_{21}$ &1st individual TIMER & 0.64 & 0.112 \\
\end{tabular}
\label{t:Coxph_NoGleason}
\end{center}
\end{table}
\begin{table}[!t]
\caption{Hazard ratios and p-values from the Cox proportional hazards model using joint and individual scores between DeMixT and TIMER as predictors, with the inclusion of the Gleason score}
\begin{center}
\begin{tabular}{llcc}
Predictor & Interpretation & Hazard ratio & P-value \\\hline
Age& Age at tumor diagnosis& 1.20 & 0.214 \\
Gleason score & Gleason score & 1.96 & 0.026 \\
$\boldsymbol{U}_{joint}$& Joint between DeMixT and TIMER& 0.86 & 0.164 \\
$\boldsymbol{Z}_1$& Individual DeMixT & 1.56 & 0.103 \\
$\boldsymbol{Z}_{21}$&1st individual TIMER & 0.67 & 0.163 \\
\end{tabular}
\label{t:Coxph}
\end{center}
\end{table}
\section{Discussion}\label{sec:eccaDis}
We present the ECCA model for the association analysis of datasets with measurements coming from exponential family distributions. The R code implementing the method can be found at \url{https://github.com/IrinaStatsLab/ECCA}. A unique characteristic of ECCA is the orthogonality of the individual score matrices, which enhances the interpretation of individual signals, but leads to non-trivial optimization challenges. Numerical studies illustrate that ECCA outperforms existing methods in simulations.
When applied to the nutrimouse data, ECCA effectively separates the effect of genotype from the effect of diet based on joint and individual scores between gene expression and lipid concentrations. When applied to the tumor heterogeneity study, ECCA effectively extracts joint and individual signals that are biologically meaningful between two different immune deconvolution methods. These scores are then shown to provide additional insights into the heterogeneity of immune cell subtype profiles, and their contribution to clinical prognosis in patients with localized but high-risk prostate cancer.
The method has several limitations that require further research. First, while the model~\eqref{eq:expDecom} and optimization problem~\eqref{eq:finalform} are formulated for the general exponential family, our implementation and numerical results are limited to the Gaussian and Binomial proportion cases, as those were sufficient for the motivating datasets. It would be of interest to extend the results to other families, e.g., Poisson or Exponential. Second, the ECCA algorithm is computationally demanding due to the use of iterative SOC updates. One possible remedy is to run the intermediate SOC updates for only a few iterations without full convergence; this would reduce the overall cost of Algorithm~\ref{a:full}, although too few iterations may lead to divergence, and further investigation is needed to determine the optimal tradeoff. Third, ECCA does not perform sparse regularization and thus may suffer in high-dimensional regimes. One possible remedy is to add $l_1$ regularization on the loading matrices as in sparse CCA \citep{witten2009penalized, yoon2020sparse}. Specifically, one can modify the objective function~\eqref{eq:finalform} to be:
\begin{equation*}
\min_{\boldsymbol{\Theta}_1,\boldsymbol{\Theta}_2}\{L(\boldsymbol{\Theta}_1|\boldsymbol{X}_1) + L(\boldsymbol{\Theta}_2|\boldsymbol{X}_2) + \beta_1\|\boldsymbol{V}_1\|_1 + \beta_2\|\boldsymbol{V}_2\|_1\},
\end{equation*}
where $\|\boldsymbol{Y}\|_1 = \sum_{j=1}^{p}{\sum_{k=1}^{r_0}{|y_{jk}|}}$ is a sparsity-inducing penalty and $\beta_1, \beta_2 \geq 0$ control the sparsity levels. However, the new objective function is no longer differentiable, which requires more complex optimization algorithms in addition to selection of the sparsity parameters. Finally, in standard CCA it is typical to maximize the correlation as the objective function, that is, to maximize the magnitude of the diagonal elements of $\boldsymbol{U}_1^\top\boldsymbol{U}_2$. The proposed ECCA can incorporate this maximization by adjusting the objective function as follows
\begin{equation*}
\min_{\boldsymbol{\Theta}_1,\boldsymbol{\Theta}_2}\{L(\boldsymbol{\Theta}_1|\boldsymbol{X}_1) + L(\boldsymbol{\Theta}_2|\boldsymbol{X}_2) +\beta\|\boldsymbol{U}_1-\boldsymbol{U}_2\|_F^2\},
\end{equation*}
where $\beta \geq 0$ is a hyper-parameter. Due to the orthogonality of $\boldsymbol{U}_k$, $\|\boldsymbol{U}_1-\boldsymbol{U}_2\|_F^2 = 2r_0 - 2\Tr(\boldsymbol{U}_1^\top\boldsymbol{U}_2)$, so adding the $\|\boldsymbol{U}_1-\boldsymbol{U}_2\|_F^2$ term to the objective is, up to a constant, equivalent to adding $-2\Tr(\boldsymbol{U}_1^\top\boldsymbol{U}_2)$, with $\beta$ controlling the relative importance of correlation maximization compared to the likelihood of each individual view. Algorithm~\ref{a:full} can be used for this problem with some adaptation of the score updates (Section~\ref{s:joint_update}); however, it is unclear how to choose the optimal value of $\beta$. It would be of interest to investigate these extensions in future work.
\section*{Acknowledgements}
This work was supported by NSF DMS-1712943 and DMS CAREER-2044823.
\section{Introduction}\label{sec-intro}
Let $\{L^{ x }_{ t}\,;\,(x,t)\in R^{ 1}\times R^{ 1}_{ +}\}$ denote the local time of Brownian motion.
Let
\begin{equation}
\alpha_{p, t}=\int ( L^{ x}_{ t})^{ p}\,dx\label{5.1w}
\end{equation}
(an integral sign without limits is to be read as $\int_{-\infty}^{\infty}$), and let $\eta=N(0,1)$ be
independent of $\alpha_{p, t}$. The main result of \cite{CLMR} is the following weak limit theorem.
\begin{theorem}\label{theo-clt2} For each fixed $t$
\begin{equation} { \int ( L^{ x+h}_{t}- L^{ x}_{ t})^{ 2}\,dx- 4ht\over h^{ 3/2}}
\stackrel{\mathcal{L}}{\rightarrow}c\sqrt{\alpha_{2,t}}\,\,\eta\label{5.0weak}
\end{equation}
as $h\rightarrow 0$, where $c=\( 64 / 3 \)^{ 1/2}$.
Equivalently
\begin{equation} { \int ( L^{ x+1}_{t}- L^{ x}_{ t})^{ 2}\,dx- 4t\over t^{ 3/4}}
\stackrel{\mathcal{L}}{\rightarrow}c\sqrt{\alpha_{2,1}}\,\,\eta\label{5.0tweak}
\end{equation}
as $t\rightarrow\infty$.
\end{theorem}
\medskip
In this paper we provide the analogous CLT for the third power.
\begin{theorem}\label{theo-trip}
For each fixed $t$
\begin{equation} { \int ( L^{ x+h}_{t}- L^{ x}_{ t})^{ 3}\,dx-12h\int ( L^{ x+h}_{t }- L^{ x}_{ t })L^{ x}_{ t}\,dx-24h^{2}t\over h^{ 2}}
\stackrel{\mathcal{L}}{\rightarrow}c\sqrt{\alpha_{3, t}}\,\,\eta\label{1.1}
\end{equation}
as $h\rightarrow 0$, where $c=\sqrt{ 192}$.
Equivalently
\begin{equation} { \int ( L^{ x+1}_{t}- L^{ x}_{ t})^{ 3}\,dx -12\int ( L^{ x+1}_{t }- L^{ x}_{ t })L^{ x}_{ t}\,dx-24t\over t}
\stackrel{\mathcal{L}}{\rightarrow}c\sqrt{\alpha_{3, 1}}\,\,\eta\label{1.2}
\end{equation}
as $t\rightarrow\infty$.
\end{theorem}
We explain below why the approach we use will not work for moments larger than three.
The equivalence of (\ref{1.1}) and (\ref{1.2})
follows from the scaling relationship
\begin{equation}
\{ L^{ x}_{ h^{-2}t}\,;\,( x,t)\in R^{ 1}\times R^{ 1}_{ +}\}
\stackrel{\mathcal{L}}{=}\{h^{ -1} L^{h x}_{ t}\,;\,( x,t)\in R^{ 1}\times
R^{ 1}_{ +}\},\label{scale}
\end{equation}
see e.g. \cite[Lemma 10.5.2]{book},
which implies that
\begin{equation}
\int ( L^{ x+h}_{t}- L^{ x}_{ t})^{ 3}\,dx\stackrel{\mathcal{L}}{=}h^{4} \int ( L^{ x+1}_{t/h^{2}}- L^{ x}_{ t/h^{2}})^{ 3}\,dx,\label{scl1}
\end{equation}
and
\begin{equation}
\int ( L^{ x+h}_{t}- L^{ x}_{ t})L^{ x}_{ t} \,dx\stackrel{\mathcal{L}}{=}h^{3} \int ( L^{ x+1}_{t/h^{2}}- L^{ x}_{ t/h^{2}})L^{ x}_{ t/h^{2}} \,dx.\label{scl2}
\end{equation}
Using these identities together with (\ref{1.1}) for $t=1$, and then setting $h^{2}=1/t$, gives (\ref{1.2}).
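For completeness, we spell out the substitution. Since (\ref{scl1}) and (\ref{scl2}) both follow from (\ref{scale}), they hold jointly, and with $t=h^{-2}$
\begin{eqnarray*}
&&{ \int ( L^{ x+h}_{1}- L^{ x}_{ 1})^{ 3}\,dx-12h\int ( L^{ x+h}_{1 }- L^{ x}_{ 1 })L^{ x}_{ 1}\,dx-24h^{2}\over h^{ 2}}\\
&&\hspace{.5 in}\stackrel{\mathcal{L}}{=}h^{ 2}\(\int ( L^{ x+1}_{t}- L^{ x}_{ t})^{ 3}\,dx-12\int ( L^{ x+1}_{t }- L^{ x}_{ t })L^{ x}_{ t}\,dx-24t\)\\
&&\hspace{.5 in}={ \int ( L^{ x+1}_{t}- L^{ x}_{ t})^{ 3}\,dx-12\int ( L^{ x+1}_{t }- L^{ x}_{ t })L^{ x}_{ t}\,dx-24t\over t},
\end{eqnarray*}
where we used $12h\cdot h^{3}=12h^{4}$ and $24h^{2}=h^{4}\cdot 24t$.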
\medskip
Theorem \ref{theo-trip} is derived using the method of moments.
Note that the right hand side of (\ref{1.1}) is $c\sqrt{\alpha_{3, t}}\,\,\eta$.
Unfortunately, we can only show that $\sqrt{\alpha_{p, t} }\,\,\eta$ is determined by its moments if $p=2$ or $3$, so we cannot use our approach to prove an analog of Theorem \ref{theo-trip} for moments larger than three.
In Section \ref{sec-est} we give some estimates on the potential densities and transition densities of Brownian motion which are used throughout this paper. Their proofs are deferred until Section \ref{sec-Prooflemvprop}. In Section \ref{sec-BM} we show how Theorem \ref{theo-trip} follows from a result, Lemma \ref{lem-2weak}, on the moments of an analogous expression in which $t$ is replaced by an independent exponential time. This
lemma is proven in Section \ref{sec-expmom}. Other lemmas used in the proof of Theorem \ref{theo-trip} are derived in Sections \ref{sec-6.1}-\ref{sec-var}.
This paper extends the basic approach used in \cite{CLMR}. The main novelty in this paper is the need to subtract a non-random term in (\ref{1.1}) in order to obtain a Central Limit Theorem. Dealing with this non-random subtraction term, and in particular the need to keep track of delicate cancellations, makes this paper considerably more difficult than \cite{CLMR}. Although, as mentioned, the approach of the present paper will not work for higher moments, Theorem \ref{theo-trip} does suggest what a Central Limit Theorem for higher moments should look like. Here is our conjecture for the fourth integrated moment.
\begin{conjecture} For each fixed $t$
\begin{eqnarray}
&&{ \int ( \Delta^{h}L^{ x}_{ t})^{ 4}\,dx-24h\int ( \Delta^{h}L^{ x}_{ t})^{ 2}L^{ x}_{ t}\,dx+48h^{2}\int \left\{(L^{ x}_{ t})^{ 2}- ( \Delta^{h}L^{ x}_{ t})L^{ x}_{ t}\right\} \,dx\over h^{ 5/2}}\nonumber\\
&&\hspace{3 in}
\stackrel{\mathcal{L}}{\rightarrow}c_{4}\sqrt{\alpha_{4,t}}\,\,\eta\label{p2.4}
\end{eqnarray}
as $h\rightarrow 0$, where $c_{q}=\sqrt{ {2^{2q+1} q!\over q+1}}$ and $\Delta^{h}L^{ x}_{ t}=L^{ x+h}_{t}- L^{ x}_{ t}$.
\end{conjecture}
\section{Estimates for the potential density of \newline Brownian motion}\label{sec-est}
Let $p_{t}(x)$ denote the probability density of Brownian motion.
The $\alpha$-potential density of Brownian motion
\begin{equation}
u^{\alpha}(x)=\int_{0}^{\infty}e^{-\alpha t}p_{t}(x)\,dt={e^{-\sqrt{2\alpha}|x|} \over \sqrt{2\alpha}}\label{pot.1w}.
\end{equation}
Let $\la_{\alpha}$ be an independent exponential random variable with mean $1/\alpha$.
Kac's moment formula, \cite[Theorem 3.10.1]{book}, states that
\begin{equation} E^{ x_{ 0}}\(\prod_{ j=1}^{ n}L^{ x_{ j}}_{ \la_{\alpha}} \)=\sum_{
\pi}\prod_{ j=1}^{ n}u^{\alpha}( x_{\pi( j)}-x_{\pi( j-1)})\label{1.2w}
\end{equation} where the sum runs over all permutations $\pi$ of $\{ 1,\ldots,
n\}$ and
$\pi(0)=0.$
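For example, for $n=2$ and $x_{0}=0$, (\ref{1.2w}) reads
\begin{equation*}
E^{ 0}\( L^{ x_{ 1}}_{ \la_{\alpha}}L^{ x_{ 2}}_{ \la_{\alpha}}\)=u^{\alpha}( x_{ 1})u^{\alpha}( x_{ 2}-x_{ 1})+u^{\alpha}( x_{ 2})u^{\alpha}( x_{ 1}-x_{ 2}),
\end{equation*}
the two terms corresponding to the permutations $(1,2)$ and $(2,1)$.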
Let $\Delta_{ x}^{ h}$ denote
the finite difference operator on the variable $x$, i.e.
\begin{equation}
\Delta_{x}^{ h}\,f(x)=f(x+h)-f(x).\label{pot.3w}
\end{equation}
We write $\Delta^{ h}$ for $\Delta_{x}^{ h}$ when the variable $x$ is clear.
\medskip The next lemma collects some facts about $u^{\alpha}(x)$ that are used in this paper.
\begin{lemma}\label{lem-vprop}Fix $\alpha >0$. For $0<h\leq 1$,
\begin{eqnarray}
\Delta_{ x}^{ h}\Delta_{ y}^{ h} u^{\alpha}(x-y)\Bigg\vert_{ y=x}&= &2\({1-e^{-\sqrt{2\alpha}\,h} \over \sqrt{2\alpha}}\)=2h+O( h^{ 2}),\label{1.8}
\\\nonumber\\
v^{\alpha}(x)=:|\Delta ^{ h}\,u^{\alpha}(x)|&\leq &Ch\, u^{\alpha}( x),\label{1.3x}
\\\nonumber\\
w^{\alpha}(x)=:|\Delta^{ h}\Delta^{ -h} u^{\alpha}(x )|&\leq& \left\{\begin{array}{ll}
C h \, u^{\alpha}( x ), \\\\
C h^{2}\, u^{\alpha}( x ), \hspace{.2 in}\forall \,| x|\geq h.
\end{array}
\right. \label{1.3y}
\end{eqnarray}
We have
\begin{equation}
\int \(w^{\alpha}(x)\)^{q}\,dx=O( h^{ q+1}) \label{li.13}
\end{equation}
and
\begin{equation}
\int_{|x|\geq h} \(w^{\alpha}(x)\)^{q}\,dx=O( h^{ 2q}).\label{1.30gb}
\end{equation}
In addition, for any $q\geq 2$
\begin{eqnarray}
&&
\int \(\Delta^{ h}\Delta^{ -h}\,u^{\alpha}(x)\)^{q} \,dx=( 2^{q+1}/(q+1)+O( h))h^{ q+1}.\label{1.30g}
\end{eqnarray}
In all these statements the constants $C$ and the terms $O( h^{ \,\cdot\,})$ may depend on $\alpha$.
\end{lemma}
The proof is provided in Section \ref{sec-Prooflemvprop}.
We note that the same proof shows that for any $\alpha_{1}, \ldots, \alpha_{q}>0$
\begin{equation}
\int \prod_{i=1}^{q} \(\Delta^{ h}\Delta^{ -h}\,u^{\alpha_{i}}(x)\)\,dx=( 2^{q+1}/(q+1)+O( h))h^{ q+1}.\label{mult.u}
\end{equation}
\begin{remark}
{\rm In Lemma \ref{lem-vprop} we have taken $h$ positive.
Using the fact that $u^{\alpha}(x)$ is an even function of $x$ it is easy to check that we obtain the analog of (\ref{1.3x}) for all $|h|\leq 1$ if on the right hand side we replace $h$ by $|h|$.}
\end{remark}
The following estimates, which will be used in the proof of Lemma \ref{lem-3.1j}, are also proven in Section \ref{sec-Prooflemvprop}.
\begin{lemma}\label{lem-vpropt}Let $0<h\leq 1$ and $0<T<\infty$. Then for some $C_{T}<\infty$
\begin{equation}
u_{T}( x)=:\int_{0}^{T} \,p_{t}(x)\,dt\leq C_{T}\, e^{-|x|},\label{9.300}
\end{equation}
\begin{equation}
v_{T}( x)=:\int_{0}^{T} |\Delta ^{ h}\,p_{t}(x)|\,dt\leq C_{T}h\, e^{-|x|},\label{9.3x}
\end{equation}
and
\begin{equation}
w_{T}(x)=:\int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt\le C_{T}h^{2}\frac{e^{-x^{2}/32T}}{|x|},\hspace{.2 in}
|x|\geq 2h.\label{9.3w}
\end{equation}
Also
\begin{equation}
\int w_{T}(x) \,dx\leq C_{T}h^{ 2}|\log h|,\label{9.13t}
\end{equation}
and for any $q\geq 2$
\begin{equation}
\int w_{T}^{q}(x)\,dx\leq C_{T} h^{ q+1},\label{9.30g}
\end{equation}
and
\begin{equation}
\int_{|x|\geq \sqrt{h}} \,w_{T}^{q}(x) \,dx\leq C_{T}h^{3q/2+1/2}.\label{9.30gb}
\end{equation}
\end{lemma}
\begin{lemma}\label{lem-vpropd}Let $0<h\leq 1$ and $0<\delta<T<\infty$. Then for some $C_{\delta,T}<\infty$
\begin{equation}
u_{T}( x)=:\sup_{\delta\leq t\leq T} \,p_{t}(x) \leq C_{\delta,T}\, e^{-x^{2}/2T},\label{d9.300}
\end{equation}
\begin{equation}
v_{T}( x)=:\sup_{\delta\leq t\leq T} |\Delta ^{ h}\,p_{t}(x)| \leq C_{\delta,T}h\, e^{-x^{2}/2T},\label{d9.3x}
\end{equation}
and
\begin{equation}
w_{T}(x)=:\sup_{\delta\leq t\leq T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )| \le C_{\delta,T}h^{2}e^{-x^{2}/2T}.\label{d9.3w}
\end{equation}
\end{lemma}
\begin{lemma}\label{lem-big}Let $0<h\leq 1$.
For $q\geq 2$
\begin{equation}
\int \(\int_{0}^{\infty} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt\)^{q}\,dx=( 2^{q+1}/(q+1)+O( h^{1/2}))h^{ q+1},\label{big.1}
\end{equation}
and
\begin{equation}
\int \(\int_{0}^{h} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt\)^{q}\,dx=( 2^{q+1}/(q+1)+O( h^{1/2}))h^{ q+1}.\label{big.2}
\end{equation}
\end{lemma}
\section{Proof of Theorem \ref{theo-trip}}\label{sec-BM}
Theorem \ref{theo-trip} will follow from the next lemma.
\begin{lemma}\label{lem-6.1} For each integer $m\ge 0$ and $t\in R_{+}$
\begin{eqnarray} &&
\lim_{ h\rightarrow 0}E\(\({ \int ( L^{ x+h}_{t}- L^{ x}_{ t})^{
3}\,dx- 12h\int L^{ x}_{ t}( L^{ x+h}_{t}- L^{ x}_{ t})\,dx -24h^{2}t\over h^{ 2}}\)^{m}\)\nonumber\\
&&\hspace{ .5in} =\left\{\begin{array}{ll}
\displaystyle{ ( 2n)!\over 2^{ n}n!}\( 192\)^{ n} E\left\{\(\int (L^{ x}_{ t})^{ 3}\,dx\)^{
n}\right\} &\mbox{ if }m=2n\\\\
0&\mbox{ otherwise.}
\end{array}
\right.
\label{7.53}
\end{eqnarray}
\end{lemma}
\noindent{\bf Proof of Theorem \ref{theo-trip} } It follows from \cite[(6.12)]{CLR} that for any $q$
\begin{equation}
E\left\{\(\int (L^{ x}_{ t})^{ q}\,dx\)^{ n}\right\}\leq C_{t}^{ n}( n!)^{ (q-1)/2}.\label{rb.1}
\end{equation}
Therefore, since $ \sqrt{( 2n)!}\le 2^{ n}n! $,
the right hand side of (\ref{7.53}), which is the $2n$-th moment of $\widetilde C\sqrt{\int (L^{ x}_{ t})^{ 3}\,dx}\,\,\eta$, is bounded above by $ \widetilde C^{2n}C^{ n}(2n)! $.
This implies that $\widetilde C\sqrt{\int (L^{ x}_{ t})^{ 3}\,dx}\,\,\eta$ is determined by its moments (see \cite[pp. 227--228]{Feller}).
Lemma \ref{lem-6.1} together with the method of moments, \cite[Theorem 30.2]{B}, then gives us (\ref{1.1}).{\hfill $\square$ \bigskip}
\noindent{\bf Proof of Lemma \ref{lem-6.1} }
Let $\la_{\zeta}$ be an exponential random variable with mean $1/\zeta$.
It follows from Lemma \ref{lem-2weak} below that for each integer $m\ge 0$,
\begin{eqnarray} &&
\lim_{ h\rightarrow 0}E\(\({ \int (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3}\,dx- 12h \int L^{ x}_{\la_{\zeta}}( \Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})\,dx -24h^{2}\la_{\zeta}\over h^{ 2}}\)^{m}\)\nonumber\\
&&\hspace{ .5in} =\left\{\begin{array}{ll}
\displaystyle {( 2n)!\over 2^{ n}n!}\( 192\)^{ n} E\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 3}\,dx\)^{
n}\right\} &\mbox{ if }m=2n\\\\
0&\mbox{ otherwise.}
\end{array}
\right.
\label{7.54a}
\end{eqnarray}
We write (\ref{7.54a}) as
\begin{eqnarray} &&
\int_{0}^{\infty}e^{- \zeta s } E\(\({ \int (\Delta_{x}^{h}L^{ x}_{s})^{
3}\,dx- 12h \int L^{ x}_{s}( \Delta_{x}^{h}L^{ x}_{s})\,dx-24h^{2}s \over h^{ 2}}\)^{m}\) \,ds
\nonumber \\
&&\qquad\longrightarrow\int_{0}^{\infty} e^{- \zeta s }E\left\{ \eta^{m}\( 192\int (L^{ x}_{s})^{3} \,dx\,\,\)^{
m/2}\right\} \,ds\label{77.1d}
\end{eqnarray}
as $h\rightarrow 0$.
For $h>0 $ let
\begin{equation}
\widehat F_{m,h}(s):= E\(\({ \int (\Delta_{x}^{h}L^{ x}_{s})^{
3}\,dx- 12h \int L^{ x}_{s}( \Delta_{x}^{h}L^{ x}_{s})\,dx -24h^{2}s\over h^{ 2}}\)^{m}\), \label{77.1e}
\end{equation}
and set
\begin{eqnarray} &&
\widehat F_{m,0}(s):= E\left\{ \eta^{m}\( 192\int (L^{ x}_{s})^{3} \,dx\,\,\)^{
m/2}\right\}. \label{77.1f}
\end{eqnarray}
Then (\ref{77.1d}) can be written as
\begin{equation}
\lim_{ h\rightarrow 0}\int_{0}^{\infty}e^{- \zeta s } \widehat {F}_{m,h}(s) \,ds =\int_{0}^{\infty}e^{- \zeta s } \widehat {F}_{m,0}(s) \,ds.\label{77.4}
\end{equation}
We consider first the case when $m$ is even and write $m=2n$. In this case $\widehat {F}_{2n,h}(s)\geq 0$ and the extended continuity theorem
\cite[XIII.1, Theorem 2a]{Feller} applied to (\ref{77.4}) implies that
\begin{equation}
\lim_{ h\rightarrow 0}\int_{0}^{t} \widehat {F}_{2n,h}(s) \,ds =\int_{0}^{t} \widehat {F}_{2n,0}(s) \,ds\label{77.5}
\end{equation}
for all $t$. In particular,
\begin{equation}
\lim_{ h\rightarrow 0}\int_{t}^{t+\delta} \widehat {F}_{2n,h}(s) \,ds =\int_{t}^{t+\delta} \widehat {F}_{2n,0}(s) \,ds.\label{77.6a}
\end{equation}
It follows from the fact that $L^{ x}_{s}$ is almost surely continuous and increasing in $s$ that $ \widehat {F}_{2n,0}(s)$ is continuous in $s$.
(We saw in (\ref{rb.1}) that it is bounded.) Consequently,
\begin{equation}
\lim_{ \delta\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta}\int_{t}^{t+\delta} \widehat {F}_{2n,h}(s) \,ds = \widehat {F}_{2n,0}(t). \label{77.6}
\end{equation}
When $t=0$ we get
\begin{equation}
\lim_{\delta\to 0}
\lim_{h\to 0}{1\over \delta}\int_0^{\delta}\widehat {F}_{2n,h}(s)ds=0.\label{77.13}
\end{equation}
To obtain (\ref{7.54a}) when $m$ is even we must show that
\begin{equation}
\lim_{ h\rightarrow 0} \widehat {F}_{2n,h}(t) = \widehat {F}_{2n,0}(t) \label{77.15}.
\end{equation}
This follows from (\ref{77.6}) once we show that
\begin{equation}
\lim_{ \delta\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta}\int_{t}^{t+\delta} \widehat {F}_{2n,h}(s) \,ds = \lim_{ h\rightarrow 0} \widehat {F}_{2n,h}(t) . \label{77.6j}
\end{equation}
We proceed to obtain (\ref{77.6j}).
For $s\geq t$ we write
\begin{eqnarray}
&&\int (\Delta_{x}^{h}L^{ x}_{s})^{
3}\,dx=\int \(\Delta_{x}^{h}L^{ x}_{t}+\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
3}\,dx
\label{fu.1}\\
&&=\int (\Delta_{x}^{h}L^{ x}_{t})^{
3}\,dx +3\int (\Delta_{x}^{h}L^{ x}_{t})^{
2}\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\,dx \nonumber\\
&& +3\int \Delta_{x}^{h}L^{ x}_{t}\(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
2}\,dx +\int \(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
3}\,dx\nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&\int L^{ x}_{s}( \Delta_{x}^{h}L^{ x}_{s})\,dx =\int (L^{ x}_{t}+(L^{ x}_{s}-L^{ x}_{t}))\(\Delta_{x}^{h}L^{ x}_{t}+\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)\,dx
\nonumber\\
&& =\int L^{ x}_{t}( \Delta_{x}^{h}L^{ x}_{t})\,dx +\int L^{ x}_{t} \Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t}) \,dx \label{fu.2}\\
&& + \int (L^{ x}_{s}-L^{ x}_{t})( \Delta_{x}^{h}L^{ x}_{t})\,dx +\int (L^{ x}_{s}-L^{ x}_{t}) \Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t}) \,dx \nonumber
\end{eqnarray}
so that
\begin{eqnarray}
&&\int (\Delta_{x}^{h}L^{ x}_{s})^{
3}\,dx-12h\int L^{ x}_{s}( \Delta_{x}^{h}L^{ x}_{s})\,dx-24h^{2}s
\label{fu.3}\\
&&=\int (\Delta_{x}^{h}L^{ x}_{t})^{
3}\,dx-12h \int L^{ x}_{t}( \Delta_{x}^{h}L^{ x}_{t})\,dx-24h^{2}t\nonumber\\
&&+3\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{
2}-4h L^{ x}_{t} \right\}\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\,dx \nonumber\\
&& +3\int \Delta_{x}^{h}L^{ x}_{t}\left\{ \(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
2} -4h (L^{ x}_{s}-L^{ x}_{t}) \right\}\,dx \nonumber\\
&& +\int \(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
3}\,dx-12h\int (L^{ x}_{s}-L^{ x}_{t}) \Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t}) \,dx-24h^{2}(s-t).\nonumber
\end{eqnarray}
Note that, using $\widetilde B_{t}$ to denote an independent Brownian motion, and then using translation invariance
\begin{eqnarray}
&&\int \(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
3}\,dx-12h\int (L^{ x}_{s}-L^{ x}_{t}) \Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t}) \,dx-24h^{2}(s-t)
\nonumber\\
&&=\int \(\Delta_{x}^{h}L^{ x}_{s-t} \)^{
3}\circ\theta_{t}\,\,dx-12h\int \left\{ L^{ x}_{s-t}\,\, \(\Delta_{x}^{h}L^{ x}_{s-t} \)\right\} \circ\theta_{t}\,\,dx -24h^{2}(s-t) \nonumber\\
&&\stackrel{law}{=} \int \(\Delta_{x}^{h}L^{x -\widetilde B_{t}}_{s-t} \)^{
3}\,dx-12h\int L^{ x-\widetilde B_{t}}_{s-t} \,\, \(\Delta_{x}^{h}L^{ x-\widetilde B_{t}}_{s-t}\) \,dx-24h^{2}(s-t) \nonumber\\
&&=\int (\Delta_{x}^{h}L^{ x}_{s-t})^{
3}\,dx-12h\int L^{ x}_{s-t}( \Delta_{x}^{h}L^{ x}_{s-t})\,dx-24h^{2}(s-t).\label{ti.1}
\end{eqnarray}
Also, using $\widetilde L^{x}_{t}$ to denote an independent copy of Brownian local time
\begin{eqnarray}
&&\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{
2}-4h L^{ x}_{t} \right\}\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\,dx
\label{ti.2}\\
&& =\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{
2}-4h L^{ x}_{t} \right\} \(\Delta_{x}^{h}L^{ x}_{s-t}\circ\theta_{t}\) \,dx \nonumber\\
&& \stackrel{law}{=} \int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{
2}-4h L^{ x}_{t} \right\} \Delta_{x}^{h}\widetilde L^{ x-B_{t}}_{s-t} \,dx \nonumber\\
&& = \int \left\{ (\Delta_{x}^{h}L^{ x+B_{t}}_{t})^{
2}-4h L^{ x+B_{t}}_{t} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{s-t} \,dx \nonumber\\
&&\stackrel{law}{=}\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{
2}-4h L^{ x}_{t} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{s-t} \,dx \nonumber
\end{eqnarray}
where we have used the fact that $\{L^{ x+B_{t}}_{t}\,,\,x\in R^{1}\} \stackrel{law}{=}
\{L^{ x}_{t}\,,\,x\in R^{1}\}$ which follows from time reversal.
Similarly,
\begin{eqnarray}
&&\int \Delta_{x}^{h}L^{ x}_{t}\left\{ \(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{
2} -4h (L^{ x}_{s}-L^{ x}_{t}) \right\}\,dx
\label{ti.3}\\
&& \stackrel{law}{=} \int \Delta_{x}^{h}\widetilde L^{ x}_{t}\left\{ \(\Delta_{x}^{h}(L^{ x}_{s-t} \)^{
2} -4h L^{ x}_{s-t} \right\}\,dx. \nonumber
\end{eqnarray}
Let
\begin{eqnarray}
\widehat {G}_{m, h}(t,r)&=&:E\(h^{-2}\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{
2}-4h L^{ x}_{t} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx\)^{m}\label{fu.4}
\end{eqnarray}
and set
\begin{eqnarray} &&
\widehat {G}_{m, 0}(t,r):= E\left\{ \eta^{m}\( 64\int (L^{ x}_{t})^{2}\widetilde L^{ x}_{r} \,dx\,\,\)^{
m/2}\right\}. \label{77.1eg}
\end{eqnarray}
We then use the triangle inequality with respect to the norm $\|\,\cdot\,\|_{2n}$ together with (\ref{fu.3})-(\ref{ti.3}) to see that
\begin{eqnarray}
\widehat {F}_{2n, h}^{1/(2n)}(s)&\leq & \widehat {F}_{2n, h}^{1/(2n)}(t)+\widehat {F}_{2n, h}^{1/(2n)}(s-t)
\label{fu.7aa}\\
&+ &
3\widehat {G}^{1/(2n)}_{2n, h}(t,s-t) +3\widehat {G}^{1/(2n)}_{2n, h}(s-t,t). \nonumber
\end{eqnarray}
Similarly we have
\begin{eqnarray}
\widehat {F}_{2n, h}^{1/(2n)}(s)&\geq & \widehat {F}_{2n, h}^{1/(2n)}(t)-\widehat {F}_{2n, h}^{1/(2n)}(s-t)
\label{fu.7ab}\\
&- &
3\widehat {G}^{1/(2n)}_{2n, h}(t,s-t)-3\widehat {G}^{1/(2n)}_{2n, h}(s-t,t). \nonumber
\end{eqnarray}
We now use the triangle inequality with respect to the norm in $L^{2n}([t,t+\delta], \delta^{-1}\,ds)$ to see that
\begin{eqnarray}
&&
\left\{ {1 \over \delta}\int_{t}^{t+\delta}\widehat {F}_{2n, h}(s)\,ds\right\}^{1/(2n)}
\label{fu.7ac}\\ && \leq \widehat {F}_{2n, h}^{1/(2n)}(t)+ \left\{{1 \over \delta}\int_{t}^{t+\delta}\widehat {F}_{2n, h}(s-t)\,ds\right\}^{1/(2n)} \nonumber\\
&& +
3 \left\{{1 \over \delta}\int_{t}^{t+\delta}\widehat {G}_{2n, h}(t,s-t) \,ds\right\}^{1/(2n)} +3 \left\{{1 \over \delta}\int_{t}^{t+\delta}\widehat {G}_{2n, h}(s-t,t) \,ds\right\}^{1/(2n)} \nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&
\left\{ {1 \over \delta}\int_{t}^{t+\delta}\widehat {F}_{2n, h}(s)\,ds\right\}^{1/(2n)}
\label{fu.7ad}\\ && \geq \widehat {F}_{2n, h}^{1/(2n)}(t)- \left\{{1 \over \delta}\int_{t}^{t+\delta}\widehat {F}_{2n, h}(s-t)\,ds\right\}^{1/(2n)} \nonumber\\
&& -
3 \left\{{1 \over \delta}\int_{t}^{t+\delta}\widehat {G}_{2n, h}(t,s-t) \,ds\right\}^{1/(2n)} -3 \left\{{1 \over \delta}\int_{t}^{t+\delta}\widehat {G}_{2n, h}(s-t,t) \,ds\right\}^{1/(2n)}. \nonumber
\end{eqnarray}
Hence, in light of (\ref{77.13}), to prove (\ref{77.6j}) we need only show that for each $t$
\begin{equation}
\lim_{ \delta\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta}\int_{0}^{\delta} \widehat {G}_{2n, h}(t,s ) \,ds = 0 \label{fu.8}
\end{equation}
and
\begin{equation}
\lim_{ \delta\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta}\int_{0}^{\delta} \widehat {G}_{2n, h}(s,t) \,ds = 0. \label{fu.9}
\end{equation}
We use $E^{y,z}(\cdot)$ to denote expectation with respect to the independent Brownian motions $B_{t}$ starting at $y$ and $\widetilde B_{t}$ starting at $z$. Let $\la_{\zeta}, \la_{\zeta'}$ be independent exponential random variables with means $1/\zeta$ and $1/\zeta'$.
We show in Lemma \ref{lem-6.3a} below that for each integer $n\ge 0$,
\begin{eqnarray} &&
\lim_{ h\rightarrow 0}E^{y,z}\(\({ \int \left\{ (\Delta_{x}^{h}L^{ x}_{\la_{\zeta}})^{
2}-4h L^{ x}_{\la_{\zeta}} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{\la_{\zeta'}}\,dx \over h^{ 2}}\)^{2n}\)\label{6.3lem}\\
&&\hspace{ .5in} = {( 2n)!\over 2^{ n}n!}\( \displaystyle 64 \)^{ n} E^{y,z}\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 2}\widetilde L^{ x}_{ \la_{\zeta'}}\,dx\)^{
n}\right\}
\nonumber
\end{eqnarray}
uniformly in $y,z.$
Just as (\ref{77.4}) implied (\ref{77.5}), it follows from (\ref{6.3lem}) that
\begin{equation}
\lim_{ h\rightarrow 0}\int_{0}^{t}\int_{0}^{q} \widehat {G}_{2n,h}(s,r) \,dr\,ds= \int_{0}^{t}\int_{0}^{q} \widehat {G}_{2n,0}(s,r) \,dr\,ds\label{fu.10}
\end{equation}
for all $t$ and $q$. In particular,
\begin{equation}
\lim_{ h\rightarrow 0}\int_{t}^{t+\delta}\int_{q}^{q+\delta'} \widehat {G}_{2n,h}(s,r)\,dr\,ds =\int_{t}^{t+\delta}\int_{q}^{q+\delta'} \widehat {G}_{2n,0}(s,r) \,dr\,ds.\label{fu.11}
\end{equation}
It follows as with $ \widehat {F}_{2n,0}(s)$ that $ \widehat {G}_{2n,0}(s,r)$ is continuous in $s,r$. Consequently,
\begin{equation}
\lim_{ \delta,\delta'\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta\de'} \int_{t}^{t+\delta}\int_{q}^{q+\delta'} \widehat {G}_{2n,h}(s,r) \,dr\,ds=\widehat {G}_{2n,0}(t,q). \label{fu.12}
\end{equation}
When $t=0$ we get
\begin{equation}
\lim_{ \delta,\delta'\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta\de'} \int_{0}^{ \delta}\int_{q}^{q+\delta'} \widehat {G}_{2n,h}(s,r)\,dr\,ds=0.\label{fu.13}
\end{equation}
Similarly we have
\begin{equation}
\lim_{ \delta,\delta'\rightarrow 0}\lim_{ h\rightarrow 0}{1 \over \delta\de'} \int_{t}^{t+ \delta}\int_{0}^{ \delta'} \widehat {G}_{2n,h}(s,r) \,dr\,ds=0.\label{fu.14}
\end{equation}
For $s\geq t$ we write
\begin{eqnarray}
&& \int \left\{ (\Delta_{x}^{h}L^{ x}_{s})^{2}-4hL^{ x}_{s} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \label{fu.4b}\\
&& = \int \left\{ \(\Delta_{x}^{h}L^{ x}_{t}+\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{2}-4h
\( L^{ x}_{t}+ (L^{ x}_{s}-L^{ x}_{t})\) \right\}\Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \nonumber\\
&& = \int \left\{ \(\Delta_{x}^{h}L^{ x}_{t}\)^{2}-4h L^{ x}_{t} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \nonumber\\
&& +\int \left\{ \(\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t})\)^{2}-4h\(L^{ x}_{s}-L^{ x}_{t}\) \right\} \Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \nonumber\\
&& +2\int \Delta_{x}^{h}L^{ x}_{t} \,\Delta_{x}^{h}(L^{ x}_{s}-L^{ x}_{t}) \, \Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \nonumber
\end{eqnarray}
Hence as before we obtain
\begin{eqnarray}
&&\left\{ {1 \over \delta\de'}\int_{t}^{t+ \delta}\int_{0}^{ \delta'}\widehat {G}_{2n, h}(s,r)\,dr\,ds\right\}^{1/(2n)} \geq \left\{{1 \over \delta'} \int_{0}^{ \delta'} \widehat {G}_{2n, h}(t,r) \,dr\right\}^{1/(2n)} \nonumber\\
&&\hspace{1 in}-
\left\{{1 \over \delta\de'} \int_{t}^{t+ \delta}\int_{0}^{ \delta'}\bar {G}_{2n, h}(s-t,r)\,dr\,ds\right\}^{1/(2n)}
\label{fu.5}\\
&&\hspace{1 in} -\left\{ {2 \over \delta\de'} \int_{t}^{t+ \delta}\int_{0}^{ \delta'}\widehat {H}_{2n, h}(t,s-t,r)\,dr\,ds\right\}^{1/(2n)} \nonumber
\end{eqnarray}
where
\begin{eqnarray}
&&
\bar {G}_{m, h}(s-t,r)\label{fu.4d}\\
&&=:E\(h^{-2}\int \left\{ (\Delta_{x}^{h}L^{ x}_{s-t})^{
2}-4h L^{ x}_{s-t} \right\}\circ\theta_{t}\Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx\)^{m}\nonumber\\
&&=\int E^{y,0}\left\{ \(h^{-2}\int \left\{ (\Delta_{x}^{h}L^{ x}_{s-t})^{
2}-4h L^{ x}_{s-t}\right\} \Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx\)^{m}\right\} p_{t}(y)\,dy\nonumber
\end{eqnarray}
and
\begin{equation}
\widehat {H}_{m, h}(t,s,r)=E\(h^{-2} \int \Delta_{x}^{h}L^{ x}_{t} \,\(\Delta_{x}^{h} L^{ x}_{s }\circ\theta_{t}\) \, \Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \)^{m}.\label{fu.6}
\end{equation}
We show in Lemma \ref{lem-3.1j} below that for each integer $n\ge 0$,
\begin{eqnarray} &&
\lim_{ h\rightarrow 0}E\(\({ \int \Delta_{x}^{h}L^{ x}_{t} \,\(\Delta_{x}^{h} L^{ x}_{s }\circ\theta_{t}\) \, \Delta_{x}^{h}\widetilde L^{ x}_{r} \,dx \over h^{ 2}}\)^{2n}\)\label{6.3lev}\\
&&\hspace{ .5in} = {( 2n)!\over 2^{ n}n!}\( \displaystyle 64 \)^{ n} E\left\{\(\int L^{ x}_{t}
\( L^{ x}_{s }\circ\theta_{t}\) \widetilde L^{ x}_{ r}\,dx\)^{
n}\right\},
\nonumber
\end{eqnarray}
locally uniformly in $r,s,t$ on $t>0$.
(\ref{fu.8}) then follows by arguing as we did to obtain (\ref{fu.14}).
We can also write
\begin{eqnarray}
&& = \int \left\{ (\Delta_{x}^{h}L^{ x}_{s})^{2}-4hL^{ x}_{s} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{q} \,dx \label{fu.4c}\nonumber\\
&& = \int \left\{ (\Delta_{x}^{h}L^{ x}_{s})^{2}-4hL^{ x}_{s} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{q} \,dx \nonumber\\
&& +\int \left\{ (\Delta_{x}^{h}L^{ x}_{s})^{2}-4hL^{ x}_{s} \right\} \Delta_{x}^{h}(\widetilde L^{ x}_{r}- \widetilde L^{ x}_{q} ) \,dx \nonumber \\
&& = \int \left\{ (\Delta_{x}^{h}L^{ x}_{s})^{2}-4hL^{ x}_{s} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{q} \,dx \nonumber\\
&& +\int \left\{ (\Delta_{x}^{h}L^{ x}_{s})^{2}-4hL^{ x}_{s} \right\} \Delta_{x}^{h}\widetilde L^{ x}_{r-q}\circ\widetilde \theta_{q} \,dx \nonumber
\end{eqnarray}
and hence as before this leads to
\begin{eqnarray}
&& \left\{{1 \over \delta\de'}\int_{0}^{\delta}\int_{q}^{q+ \delta'}\widehat {G}_{2n, h}(s,r)\,dr\,ds \right\}^{1/(2n)} \geq \left\{{1 \over \delta} \int_{0}^{ \delta} \widehat {G}_{2n, h}(s,q) \,ds\right\}^{1/(2n)}\nonumber\\
&&\hspace{1 in}-
\left\{{1 \over \delta\de'} \int_{0}^{\delta}\int_{q}^{q+ \delta'}\widetilde {G}_{2n, h}(s,r-q)\,dr\,ds\right\}^{1/(2n)}
\label{fu.5a}
\end{eqnarray}
where
\begin{eqnarray}
&&
\widetilde {G}_{m, h}(s,r-q)\label{fu.4e}\\
&&=:E\(h^{-2}\int \left\{ (\Delta_{x}^{h}L^{ x}_{s })^{
2}-4h L^{ x}_{s } \right\} \Delta_{x}^{h}\widetilde L^{ x}_{r-q}\circ\widetilde \theta_{q} \,dx\)^{m}\nonumber\\
&&=\int E^{0,z}\left\{ \(h^{-2}\int \left\{ (\Delta_{x}^{h}L^{ x}_{s })^{
2}-4h L^{ x}_{s }\right\} \Delta_{x}^{h}\widetilde L^{ x}_{r-q} \,dx\)^{m}\right\} p_{q}(z)\,dz\nonumber
\end{eqnarray}
and then (\ref{fu.9}) follows by arguing as we did to obtain (\ref{fu.13}).
Thus we obtain (\ref{77.15}) and hence (\ref{7.53}) when $m$ is even.
\medskip
In order to obtain (\ref{77.15}) when $m$ is odd we first show that
\begin{equation}
\sup_{h>0}\widehat F_{2n,h}(t)\leq Ct^{2n }.\label{eq.1n}
\end{equation}
To this end, it clearly suffices to show that
\begin{equation}
\sup_{h>0}\widehat F^{(0)}_{2n,h}(t)\leq Ct^{2n }\label{eq.1}
\end{equation}
where
\begin{equation}
\widehat F^{(0)}_{m,h}(t):= E\(\({ \int (\Delta_{x}^{h}L^{ x}_{t})^{
3}\,dx- 12h \int L^{ x}_{t}( \Delta_{x}^{h}L^{ x}_{t})\,dx\over h^{ 2}}\)^{m}\). \label{77.1en}
\end{equation}
To see this we observe that by first changing variables and then using the scaling relationship (\ref{scale}) with $h=\sqrt{t}$,
we have
\begin{eqnarray}
\int ( L^{ x+h}_{t}- L^{ x}_{ t})^{3}\,dx&=&\sqrt{t}\int ( L^{\sqrt{t}( x+ht^{-1/2})}_{t}- L^{\sqrt{t} x}_{ t})^{3}\,dx\label{eq.1a}\\
&=& t^{2}\int ( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{ 1})^{3}\,dx\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\int L^{ x}_{t}( L^{ x+h}_{t}- L^{ x}_{t})\,dx&=&\sqrt{t}\int L^{\sqrt{t} x}_{t}( L^{\sqrt{t}( x+ht^{-1/2})}_{t}- L^{\sqrt{t} x}_{ t}) \,dx\label{eq.1b}\\
&=& t^{3/2}\int L^{ x}_{1}( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{1})\,dx.\nonumber
\end{eqnarray}
Therefore
\begin{eqnarray}
&&
{\int ( L^{ x+h}_{t}- L^{ x}_{ t})^{3}\,dx-12h \int L^{ x}_{t}( L^{ x+h}_{t}- L^{ x}_{t})\,dx
\over h^{2}} \label{eq.2}\\
&& \stackrel{\mathcal{L}}{=}
{t^{ 2} \int ( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{ 1})^{3}\,dx-12ht^{3/2}\int L^{ x}_{1}( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{1})\,dx
\over h^{ 2}} \nonumber \\
&&= t \,\,{\(\int ( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{ 1})^{3}\,dx-12(ht^{-1/2}) \int L^{ x}_{1}( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{1})\,dx \)
\over (ht^{-1/2})^{ 2}}\nonumber
\end{eqnarray}
so that for any integer $m$
\begin{equation}
\widehat F^{(0)}_{m,h}(t)=t^{ m }\widehat F^{(0)}_{m,ht^{-1/2}}(1).\label{eq.3}
\end{equation}
Therefore to prove (\ref{eq.1}) we need only show that
\begin{equation}
\sup_{t}\sup_{ h>0 }\widehat F^{(0)}_{2n,ht^{-1/2}}(1)\leq C.\label{eq.4}
\end{equation}
It follows from (\ref{77.15}) that for some $\delta>0$
\begin{equation}
\sup_{\{t,h\,|\,ht^{-1/2}\leq \delta\}} \widehat{F}^{(0)}_{2n,ht^{-1/2}}(1)\leq C.\label{eq.5}
\end{equation}
On the other hand, for $ht^{-1/2}\geq \delta$
\begin{eqnarray}
&&\Bigg|{\(\int ( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{ 1})^{3}\,dx-12(ht^{-1/2}) \int L^{ x}_{1}( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{1})\,dx \)
\over (ht^{-1/2})^{ 2}}\Bigg|
\nonumber\\
&& \qquad\leq \delta^{- 2}\int ( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{ 1})^{3}\,dx +12\delta^{-1}
|\int L^{ x}_{1}( L^{ x+ht^{-1/2}}_{1}- L^{ x}_{1})\,dx| \nonumber\\
&&\qquad \leq 4\delta^{- 2}\int (L^{ x}_{ 1})^{3}\,dx +24\delta^{-1}\int (L^{ x}_{ 1})^{2}\,dx \label{eq.6}
\end{eqnarray}
which has finite moments since
$\int (L^{ x}_{ 1})^{2}\,dx$ and $\int (L^{ x}_{ 1})^{3}\,dx$ have finite moments, see (\ref{rb.1}). Using this and (\ref{eq.5}) we get (\ref{eq.4}) and hence (\ref{eq.1}). As already noted, this implies (\ref{eq.1n}). It
then follows from the Cauchy-Schwarz inequality that
\begin{equation}
\sup_{h>0}|\widehat{F}_{m,h}(t)|\leq Ct^{ m }\label{eq.7}
\end{equation}
for all integers $m$.
We next show that for any integer $m$, the family of functions $\{\widehat{F}_{m,h}(t);\,h\}$ is equicontinuous in $t$, that is, for each $t$ and $\epsilon>0$ we can find a $\delta>0$ such that
\begin{equation}
\sup_{\{s\,|\,|s-t|\leq \delta \}}\sup_{ h>0 }|\widehat{F}_{m,h}(t)-\widehat{F}_{m,h}(s) |\leq \epsilon.\label{eq.8}
\end{equation}
Let
\begin{equation}
\Phi_{h}(t):={\int ( L^{ x+h}_{t}- L^{ x}_{ t})^{3}\,dx-12h \int L^{ x}_{t}( L^{ x+h}_{t}- L^{ x}_{t})\,dx
-24h^{2}t\over h^{2}}.\label{eq.9}
\end{equation}
Applying the identity
$A^{m}-B^{m}=\sum_{j=0}^{m-1}A^{j}(A-B)B^{m-j-1}$ with $A=\Phi_{h}(t),\,B=\Phi_{h}(s)$ gives
\begin{equation}
\widehat{F}_{m,h}(t)-\widehat{F}_{m,h}(s)=\sum_{j=0}^{m-1}\Phi_{h}(t)^{j}(\Phi_{h}(t)-\Phi_{h}(s))\Phi_{h}(s)^{m-j-1}.
\end{equation}
Consequently, by using the Cauchy--Schwarz inequality twice and (\ref{eq.7}), we see that
\begin{equation}
\sup_{\{s\,|\,|s-t|\leq \delta \}}\sup_{ h>0 }|\widehat{F}_{m,h}(t)-\widehat{F}_{m,h}(s) |\leq C_{t,m}\sup_{\{s\,|\,|s-t|\leq \delta \}}\sup_{ h>0 }\| \Phi_{h}(t)-\Phi_{h}(s) \|_{2}.\label{eq.10}
\end{equation}
Using (\ref{fu.3})-(\ref{ti.3}), we see that to obtain (\ref{eq.8}) it suffices to show that for any $\epsilon>0$ we can find some $\delta>0$ such that
\begin{equation}
\sup_{\{s\,|\,s\leq \delta \}}\sup_{ h>0 }\widehat{F}_{2,h}(s) \leq \epsilon\label{eq.11}
\end{equation}
and for any $T<\infty$
\begin{equation}
\sup_{\{\,t\leq T \}}\sup_{\{ \,s\leq \delta \}}\sup_{ h>0 }E\bigg[{1\over h^{2}}
\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{2}-4hL^{ x}_{t} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\bigg]^{2}\leq \epsilon\label{eq.12a}
\end{equation}
and
\begin{equation}
\sup_{\{\,s\leq T \}}\sup_{\{ \,t\leq \delta \}}\sup_{ h>0 }E\bigg[{1\over h^{2}}
\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{2}-4hL^{ x}_{t} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\bigg]^{2}\leq \epsilon.\label{eq.12b}
\end{equation}
By (\ref{eq.1})
\begin{equation}
\sup_{h>0}\widehat F_{2,h}(s)\leq Cs^{2}\label{eq.13}
\end{equation}
which immediately gives (\ref{eq.11}), while (\ref{eq.12a}) and (\ref{eq.12b}) follow from Lemma
\ref{lem-var} below. This establishes (\ref{eq.8}).
We now obtain (\ref{7.53}) when $m$ is odd. By equicontinuity, for any sequence $h_{n}\rightarrow 0$ we can find a
subsequence $h_{n_{j}}\rightarrow 0$ such that
\begin{equation}
\lim_{j\rightarrow\infty}\widehat F_{m,h_{n_{j}}}(t)\label{eq.16}
\end{equation}
exists for each $t$ and defines a continuous function, which we denote by $\overline F_{m}(t)$. It remains to show that
\begin{equation}
\overline F_{m}(t)\equiv 0.\label{eq.17}
\end{equation}
Let
\begin{equation}
G_{m,h }(t):=e^{-t}\widehat F_{m,h }(t) \qquad\mbox{and}\qquad\overline G_{m}(t):=e^{-t}\overline F_{m}(t).\label{eq.18}
\end{equation}
By (\ref{eq.7})
\begin{equation}
\sup_{h>0}\sup_{t}|G_{m,h}(t)|\leq C\quad\mbox{ and}\quad\lim_{t\rightarrow \infty}\sup_{h>0}|G_{m,h}(t)|=0. \label{eq.19}
\end{equation}
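In detail, (\ref{eq.7}) gives
\[\sup_{h>0}|G_{m,h}(t)|\leq Ct^{ m }e^{-t},\]
which is bounded on $R_{+}$ and tends to zero as $t\rightarrow \infty$.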
It then follows from (\ref{77.4}) and the dominated convergence theorem that for all $\zeta>0$
\begin{equation}
\int_{0}^{\infty}e^{- \zeta s } \overline G_{m}(s) \,ds =0.\label{eq.20}
\end{equation}
We obtain (\ref{eq.17}) by showing that $\overline G_{m}(s)\equiv 0$.
It follows from (\ref{eq.19}) that $\overline G_{m}(t) $ is a continuous bounded function on $R_{+}$ that goes to zero as $t\to\infty$. By the Stone--Weierstrass theorem (see \cite[Lemma 5.4]{K}), for any $\epsilon>0$, we can find a finite linear combination of the form $\sum_{i=1}^{n}c_{i}e^{- \zeta_{i} s }$ such that
\begin{equation}
\sup_{s}|\overline G_{m}(s)-\sum_{i=1}^{n}c_{i}e^{- \zeta_{i} s }|\leq \epsilon.\label{eq.21}
\end{equation}
Therefore, by (\ref{eq.20})
\begin{eqnarray}
\int_{0}^{\infty}e^{- s } \overline G^{2}_{m}(s) \,ds\label{eq.22}&
=&\int_{0}^{\infty}e^{- s }\(\sum_{i=1}^{n}c_{i}e^{- \zeta_{i} s }\) \overline G_{m}(s) \,ds\\
&&\quad+\int_{0}^{\infty}e^{- s } \(\overline G_{m}(s)-\sum_{i=1}^{n}c_{i}e^{- \zeta_{i} s }\)\overline G_{m}(s) \,ds\nonumber\\
&=&\int_{0}^{\infty}e^{- s } \(\overline G_{m}(s)-\sum_{i=1}^{n}c_{i}e^{- \zeta_{i} s }\)\overline G_{m}(s) \,ds\nonumber\\
&\le &2\epsilon\(\int_{0}^{\infty}e^{- s } \overline G^{2}_{m}(s) \,ds\)^{1/2}
\end{eqnarray}
by the Cauchy--Schwarz inequality and (\ref{eq.21}). Thus $\int_{0}^{\infty}e^{- s } \overline G^{2}_{m}(s) \,ds\leq 4\epsilon^{2}$, and since $\epsilon>0$ is arbitrary, $\int_{0}^{\infty}e^{- s } \overline G^{2}_{m}(s) \,ds=0$, which implies that $\overline G_{m}(s)\equiv 0$.
{\hfill $\square$ \bigskip}
\section{Moments at exponential times}\label{sec-expmom}
We often write $u_{h,-h}^{\zeta}(0)$ for $\Delta^{h}\Delta^{-h}u^{\zeta}(0)=2\(u^{\zeta}(0)-u^{\zeta}(h)\)$.
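Indeed, for any function $f$ we have $\Delta^{h}\Delta^{-h}f(x)=2f(x)-f(x+h)-f(x-h)$, so that, by the symmetry of $u^{\zeta}$,
\[\Delta^{h}\Delta^{-h}u^{\zeta}(0)=2u^{\zeta}(0)-u^{\zeta}(h)-u^{\zeta}(-h)=2\(u^{\zeta}(0)-u^{\zeta}(h)\).\]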
\begin{lemma}\label{lem-2weak}
For each $m$, as $h\rightarrow 0$
\begin{eqnarray} &&
E\(\({ \int \left\{ (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x}_{\la_{\zeta}}\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x}_{\la_{\zeta}}\right\}\,dx\over h^{ 2}}\)^{m}\)\nonumber\\
&&\hspace{ .8in} \Longrightarrow \left\{\begin{array}{ll}
{( 2n)!\over 2^{ n}n!}\( 192\)^{ n} E\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 3}\,dx\)^{
n}\right\} &\mbox{ if }m=2n\\
0&\mbox{ otherwise.}
\end{array}
\right.
\label{7.54}
\end{eqnarray}
\end{lemma}
{\bf Remark: } Of course, $\int L^{ x}_{ \la_{\zeta}} \,dx=\la_{\zeta}$. Using (\ref{1.8}) and the continuity of local time, Lemma \ref{lem-2weak} implies (\ref{7.54a}).
{\bf Proof of Lemma \ref{lem-2weak}}:
For any
integer
$m$ we have
\begin{eqnarray} && E\(\( \int \left\{ (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x}_{\la_{\zeta}}\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x}_{\la_{\zeta}}\right\}\,dx\)^{ m}
\)\nonumber\\ &&=E\(\prod_{ i=1}^{ m}\( \int \left\{ (\Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x_{ i}}_{\la_{\zeta}}\Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x_{ i}}_{\la_{\zeta}}\right\}\,dx_{ i}\)
\)\nonumber\\ &&=\sum_{A,B,C} ( -1)^{m- |A|}(6u_{h,-h}^{\zeta}(0))^{|B|}\(6\(u_{h,-h}^{\zeta}(0)\)^{2}\)^{|C|}\label{1.16g}\\ &&
\hspace{ .1in}E\(
\(\prod_{ i\in A} \int (\Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}})^{
3}\,dx_{
i}\)\(\prod_{ j\in B}\int L^{ x_{ j}}_{
\la_{\zeta}}\Delta_{x_{ j}}^{h}L^{ x_{ j}}_{ \la_{\zeta}}\,dx_{ j}\)\(\prod_{ k\in C}\int L^{ x_{ k}}_{
\la_{\zeta}}\,dx_{ k}\) \),\nonumber
\end{eqnarray}
where the sum runs over all partitions of $[1,m]$ into three parts, $A,B,C$.
We initially calculate
\begin{equation} E\(\prod_{ i\in A} \Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}}\Delta_{y_{ i}}^{h}L^{ y_{ i}}_{ \la_{\zeta}}\Delta_{z_{ i}}^{h}L^{ z_{ i}}_{ \la_{\zeta}}
\prod_{j\in B}L^{ x_{ j}}_{\la_{\zeta}}\Delta_{y_{ j}}^{h}L^{ y_{ j}}_{ \la_{\zeta}}\prod_{ k\in C}L^{ x_{ k}}_{\la_{\zeta}}\)\label{1.18g}
\end{equation}
and eventually we set $y_{l}=x_{ l}=z_{l}$ for all $l$.
Using (\ref{1.2w}) we have
\begin{eqnarray} && E\(\prod_{ i\in A} \Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}}\Delta_{y_{ i}}^{h}L^{ y_{ i}}_{ \la_{\zeta}}\Delta_{z_{ i}}^{h}L^{ z_{ i}}_{ \la_{\zeta}}
\prod_{j\in B}L^{ x_{ j}}_{\la_{\zeta}}\Delta_{y_{ j}}^{h}L^{ y_{ j}}_{ \la_{\zeta}}\prod_{ k\in C}L^{ x_{ k}}_{\la_{\zeta}}\)\label{1.19g}\\ && =\(\prod_{ i\in A}\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{
h}\Delta_{ z_{ i}}^{ h}\prod_{ j\in B} \Delta_{y_{ j}}^{h} \)E\(\prod_{ i\in A}L^{ x_{ i}}_{
\la_{\zeta}} L^{ y_{ i}}_{ \la_{\zeta}}L^{ z_{ i}}_{ \la_{\zeta}}\prod_{ j\in B} L^{ x_{ j}}_{\la_{\zeta}}L^{ y_{ j}}_{ \la_{\zeta}}\prod_{ k\in C}L^{ x_{ k}}_{\la_{\zeta}}\)
\nonumber\\ && =\(\prod_{ i\in A}\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{
h}\Delta_{ z_{ i}}^{ h}\prod_{ j\in B} \Delta_{y_{ j}}^{h} \)\,\,
\sum_{ \sigma}\prod_{ j=1}^{m+2|A|+|B|}u^{\zeta}( \sigma( j)-\sigma( j-1)) \nonumber
\end{eqnarray} where the sum runs over all
bijections
\begin{eqnarray*}
&&\hspace{-.5 in}
\sigma:\,[1,2,\ldots,m+2|A|+|B|]\mapsto \\
&&\hspace{1 in}
\{ x_{ i}, y_{ i},z_{i}, i\in A\}\cup \{ x_{ j},y_{ j}, j\in B\}\cup \{ x_{ k}, k\in C\}.
\end{eqnarray*}
Here we set $\sigma( 0)=0$. We then use the product
rule
\begin{equation}\Delta_{ x}^{ h}\{ f( x)g( x)\}=
\{ \Delta_{ x}^{ h} f( x)\}g( x+h)+f( x)\{ \Delta_{ x}^{ h}g( x)\}\label{pr1}
\end{equation}
to expand the right hand side of (\ref{1.19g}) into a sum of many terms, over all
$\sigma$ and all ways to allocate each $\Delta_{ x_{ i}}^{ h},\Delta_{ y_{ i}}^{ h}, \Delta_{ z_{ i}}^{ h}$ or $\Delta_{ y_{ k}}^{ h}$ to
a single $u^{\zeta}$ factor.
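Note that (\ref{pr1}) can be verified directly:
\[\{ \Delta_{ x}^{ h} f( x)\}g( x+h)+f( x)\{ \Delta_{ x}^{ h}g( x)\}=f(x+h)g(x+h)-f(x)g(x)=\Delta_{ x}^{ h}\{ f( x)g( x)\}.\]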
Consider first the case where $A=\{ 1,\ldots,m\}$. For a given term in the above
expansion, we will say that $x_{ i}$ is $3$-bound if $x_{ i},y_{ i},z_{i} $ are adjacent, (in other words, for some $j$ we have $ (\sigma(j),\sigma(j+1),\sigma(j+2))=(x_{ i},y_{ i},z_{i})$ or one of its $6$ possible permutations), and $\Delta_{ x_{ i}}^{ h},\Delta_{ y_{ i}}^{ h},\Delta_{ z_{ i}}^{ h}$ are all attached to the $u^{\zeta}$ factors which connect $x_{ i},y_{ i},z_{i}$. Thus if $ (\sigma(j),\sigma(j+1),\sigma(j+2))=(x_{ i},y_{ i},z_{i})$, the $\Delta_{ x_{ i}}^{ h},\Delta_{ y_{ i}}^{ h},\Delta_{ z_{ i}}^{ h}$ are all attached to $u^{\zeta}( y_{ i}-x_{ i})u^{\zeta}( z_{ i}-y_{ i})$. We return shortly to analyze this case.
If $x_{ i}$ is not $3$-bound, we will say that
it is $2$-bound if any two of the three elements $x_{ i},y_{ i},z_{i} $ are adjacent, for example if $x_{ i},y_{ i} $ are adjacent, (in other words either $(x_{ i},y_{ i})=(\sigma(j),\sigma(j+1))$ or $(y_{ i},x_{ i})=(\sigma(j),\sigma(j+1))$ for some $j$), and both $\Delta_{ x_{ i}}^{ h}$ and
$\Delta_{ y_{ i}}^{ h}$ are attached to the factor $u^{\zeta}( x_{ i}-y_{ i})$. In applying (\ref{pr1}) we are free to choose which function plays the role of $f$, and which the role of $g$. In case $x_{ i}$ is $2$-bound, taking our example with $(x_{ i},y_{ i})=(\sigma(j),\sigma(j+1))$, when using (\ref{pr1}) to expand $\Delta_{ x_{ i}}^{ h}$ we take $g$ to be $u^{\zeta}( x_{ i}-y_{ i})$ and
similarly when using (\ref{pr1}) to expand $\Delta_{ y_{ i}}^{ h}$. In this way we guarantee that we have not added $\pm h$ to the arguments of any other factors.
By (\ref{1.8}), setting $ x_{ i}=y_{ i}$ turns the factor $\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{ h}u^{\zeta}( x_{ i}-y_{ i})$ into $\Delta^{h}\Delta^{-h}u^{\zeta}(0)$, and since for every such $\sigma$ there is precisely one other bijection which differs from $\sigma$ only in that it
permutes
$x_{ i},y_{ i} $ we obtain a factor of $2\Delta^{h}\Delta^{-h}u^{\zeta}(0)$. This is precisely what we
would have obtained if instead of $\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{ h}L^{ x_{
i}}_{\la_{\zeta}} L^{ y_{ i}}_{ \la_{\zeta}}$ in (\ref{1.19g}) we had $2\Delta^{h}\Delta^{-h}u^{\zeta}(0)L^{ x_{ i}}_{\la_{\zeta}} $. There are ${3 \choose 2 }=3$ ways to pick two letters from among
$\{x_{i},y_{i},z_{i}\}$. By considering all such cases, we obtain precisely what we
would have obtained if instead of $\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{ h}\Delta_{ z_{ i}}^{ h}L^{ x_{
i}}_{\la_{\zeta}} L^{ y_{ i}}_{ \la_{\zeta}}L^{ z_{ i}}_{ \la_{\zeta}}$ in (\ref{1.19g}) we had $6\Delta^{h}\Delta^{-h}u^{\zeta}(0)L^{ x_{ i}}_{\la_{\zeta}} \Delta_{ z_{ i}}^{ h}L^{ z_{
i}}_{\la_{\zeta}}$. Consider then a term
in the expansion of (\ref{1.19g}) with $A=\{ 1,\ldots,m\}$ and $J=\{i\,|\,x_{ i} \mbox{ is $2$-bound}\} $ non-empty, but $ \{i\,|\,x_{ i} \mbox{ is $3$-bound}\}= \emptyset$.
By (\ref{1.19g}) there will be an identical contribution from the last line of (\ref{1.16g}) from any other $A, B, C$ with $B\subseteq J$ and $C=\emptyset$.
Since
$\sum_{B\subseteq J} ( -1)^{ |B|}=0$, we see that in the
expansion of (\ref{1.16g}) there will not be any contributions from $2$-bound $x$'s.
We emphasize that if $x_{ i}$ is $2$-bound, and, for example, $ (\sigma(j),\sigma(j+1),\sigma(j+2))=(x_{ i},y_{ i},z_{i})$, with both $\Delta_{ x_{ i}}^{ h},\Delta_{ y_{ i}}^{ h}$ attached to $u^{\zeta}( y_{ i}-x_{ i})$, then $\Delta_{ z_{ i}}^{ h}$ cannot be assigned to $u^{\zeta}( z_{ i}-y_{ i})$. Otherwise, $x_{ i}$ would be $3$-bound.
We now return to analyze the case where $x_{ i}$ is $3$-bound. Consider the case that $ (\sigma(j),\sigma(j+1),\sigma(j+2))=(x_{ i},y_{ i},z_{i})$. We first apply the $\Delta_{ x_{ i}}^{ h}$ and $\Delta_{ z_{ i}}^{ h}$ operators to
$u^{\zeta}( y_{ i}-x_{ i})u^{\zeta}( z_{ i}-y_{ i})$ to obtain $\Delta_{ x_{ i}}^{ h}u^{\zeta}( y_{ i}-x_{ i})\,\,\Delta_{ z_{ i}}^{ h}u^{\zeta}( z_{ i}-y_{ i})$.
Then by (\ref{pr1}) we have
\begin{eqnarray}
&& \Delta_{ y_{ i}}^{ h}\(\Delta_{ x_{ i}}^{ h}u^{\zeta}( y_{ i}-x_{ i})\,\,\Delta_{ z_{ i}}^{ h}u^{\zeta}( z_{ i}-y_{ i})\)
\label{nu.1}\\
&& = \(\Delta_{ y_{ i}}^{ h}\Delta_{ x_{ i}}^{ h}u^{\zeta}( y_{ i}-x_{ i})\)\,\,\Delta_{ z_{ i}}^{ h}u^{\zeta}( z_{ i}-y_{ i}-h)\nonumber\\
&& + \Delta_{ x_{ i}}^{ h}u^{\zeta}( y_{ i}-x_{ i})\,\,\(\Delta_{ y_{ i}}^{ h}\Delta_{ z_{ i}}^{ h}u^{\zeta}( z_{ i}-y_{ i})\).\nonumber
\end{eqnarray}
If we now set $y_{ i}=x_{ i}=z_{i}$ we obtain
\begin{equation}
\Delta^{h}\Delta^{-h}u^{\zeta}(0)\(u^{\zeta}(0)-u^{\zeta}(h)\)+\(u^{\zeta}(h)-u^{\zeta}(0)\)\Delta^{h}\Delta^{-h}u^{\zeta}(0)=0.\label{nu.2}
\end{equation}
Thus, $3$-bound variables such as $x_{ i}$ make no contribution to (\ref{1.19g}). However, there will be an analogous contribution from $6\Delta^{h}\Delta^{-h}u^{\zeta}(0)L^{ x_{ i}}_{\la_{\zeta}} \Delta_{ z_{ i}}^{ h}L^{ z_{i}}_{\la_{\zeta}}$ which is not cancelled by $2$-bound variables. This is the case where
$x_{ i},z_{ i} $ are adjacent, say $(x_{ i},z_{ i})=(\sigma(j),\sigma(j+1))$, and $\Delta_{ z_{ i}}^{ h}$ is attached to the factor $u^{\zeta}( x_{ i}-z_{ i})$. As before we may do this without adding an $h$ to the arguments in any other factors. After setting $x_{ i}=z_{ i} $ we obtain $6\Delta^{h}\Delta^{-h}u^{\zeta}(0)\(u^{\zeta}(h)-u^{\zeta}(0)\)=-3\(\Delta^{h}\Delta^{-h}u^{\zeta}(0)\)^{2}$. Since we can also interchange
$x_{ i},z_{ i}$, such $x_{i}$ contribute $-6\(\Delta^{h}\Delta^{-h}u^{\zeta}(0)\)^{2}$, which will be exactly canceled by the term $-6\(\Delta^{h}\Delta^{-h}u^{\zeta}(0)\)^{2}L^{ x_{i}}_{\la_{\zeta}}$.
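To spell out the arithmetic: since $\Delta^{h}\Delta^{-h}u^{\zeta}(0)=2\(u^{\zeta}(0)-u^{\zeta}(h)\)$ we have $u^{\zeta}(h)-u^{\zeta}(0)=-{1\over 2}\Delta^{h}\Delta^{-h}u^{\zeta}(0)$, so that
\[6\Delta^{h}\Delta^{-h}u^{\zeta}(0)\(u^{\zeta}(h)-u^{\zeta}(0)\)=-3\(\Delta^{h}\Delta^{-h}u^{\zeta}(0)\)^{2},\]
and the additional factor $2$ from interchanging $x_{i}$ and $z_{i}$ yields $-6\(\Delta^{h}\Delta^{-h}u^{\zeta}(0)\)^{2}$.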
Furthermore, this completely exhausts the contribution to (\ref{1.16g}) of all
$A\neq \{ 1,\ldots,m\}$.
Thus, in estimating (\ref{1.19g}) we need only consider $A=\{ 1,\ldots,m\}$ and
those cases where each of the $3m$ $\Delta^{ h}$'s are assigned either to unique
$u^{\zeta}$ factors, or if two are assigned to the same $u^{\zeta}$ factor, it is not of the form $u^{\zeta}(
x_{ i}-y_{ i}), u^{\zeta}(
x_{ i}-z_{ i})$ or $u^{\zeta}(
z_{ i}-y_{ i})$.
For ease of exposition, in the following calculations we first replace the right hand side of (\ref{pr1}) by $\{ \Delta_{ x}^{ h} f( x)\}g( x)+f( x)\{ \Delta_{ x}^{ h}g( x)\}$, and return at the end of the proof to explain why this doesn't affect the final result. We use the notation
\begin{equation}
E_{\ast}\(\(\int \left\{ (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x}_{\la_{\zeta}}\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x}_{\la_{\zeta}}\right\}\,dx\)^{ m}
\)\label{pr.1}
\end{equation}
to denote the expression obtained with this replacement.
We can thus write
\begin{eqnarray} && E_{\ast}\(\( \int \left\{ (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x}_{\la_{\zeta}}\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x}_{\la_{\zeta}}\right\}\,dx\)^{ m}
\)\nonumber\\ &&=6^{ m}\sum_{ \pi,a}\int \mathcal{T}_{h}( x;\,\pi,a)\,dx\label{1.20g}
\end{eqnarray} with
\begin{equation}
\mathcal{T}_{h}( x;\,\pi,a) =\prod_{ j=1}^{ 3m}\(\Delta^{ h}_{ x_{ \pi( j)}}\)^{a_{ 1}(j)}
\(\Delta^{ h}_{ x_{ \pi( j-1)}}\)^{a_{ 2}(j)}\,u^{\zeta}( x_{\pi(j)}- x_{\pi(j-1)})\label{1.21g}
\end{equation} where the sum runs over all maps $\pi\,:\,[1,\ldots, 3m]\mapsto
[1,\ldots, m]$ with $|\pi^{ -1}(i )|=3$ for each $i$, and all `assignments'
$a=(a_{ 1},a_{ 2})\,:\,[1,\ldots, 3m]\mapsto \{ 0,1\}\times \{ 0,1\}$ with the
property that for each $i$ there will be exactly three operators of the form $\Delta^{
h}_{ x_{i}}$ in (\ref{1.21g}), and if $a( j)=( 1,1)$ for any $j$, then
$ x_{\pi(j)}\neq x_{\pi(j-1)}$. The factor $6^{ m}=(3!)^{m}$ in (\ref{1.20g}) comes from
the fact that $|\pi^{ -1}(i )|=3$ for
each $i$.
Let $m=2n$. Assume first that $a=e$ where now $e( 2j)=( 1,1)$, $e( 2j-1)=( 0,0)$ for all $j$.
\subsection{$a =e$ with $\pi$ compatible with a pairing}\label{ss-3.1t}
When $a =e$ we have
\begin{equation}\qquad
\mathcal{T}_{h}( x;\,\pi,e) =\prod_{ j=1}^{3n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\,\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{\pi(2j)}- x_{\pi(2j-1)}).\label{91.1}
\end{equation}
Let $\mathcal{P}=\{(l_{2i-1},l_{2i})\,,\,1\leq i\leq n\}$ be a pairing of the integers $[1,2n]$. Let $\pi$ as in (\ref{91.1}) be such that for each $1\leq j\leq 3n$,
$\{\pi(2j-1), \pi(2j)\}=\{l_{2i-1},l_{2i}\}$ for some, necessarily unique, $ 1\leq i\leq n$. In this case we say that $\pi$ is compatible with the pairing $\mathcal{P}$ and write this as $ \pi \sim \mathcal{P}$. (Note that when we write $\{\pi(2j-1), \pi(2j)\}=\{l_{2i-1},l_{2i}\}$ we mean equality as sets, so, depending on $\pi$, we may have either $\pi(2j-1)=l_{2i-1}$, $\pi(2j )=l_{2i }$ or $\pi(2j-1)=l_{2i}$, $\pi(2j )=l_{2i-1 }$.)
In this case we have
\begin{equation} \qquad
\mathcal{T}_{h}( x;\,\pi,e) =\prod_{ i=1}^{ n}\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{3}\prod_{ j=1}^{ 3n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\,.\label{91.2}
\end{equation}
We now show that
\begin{equation}
\int \mathcal{T}_{h}( x;\,\pi,e)\prod_{j=1}^{2n}\,dx_{j} =\int \mathcal{T}_{1,h}( x;\,\pi,e)\prod_{j=1}^{2n}\,dx_{j}+O(h^{4n+1})\label{91.3}
\end{equation}
where
\begin{eqnarray}\qquad
\mathcal{T}_{1,h}( x;\,\pi,e)&=&\prod_{ i=1}^{ n}\(1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\leq h\}}\)\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{3}
\nonumber\\
&& \hspace{.8 in} \times \prod_{ j=1}^{ 3n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)}).\label{91.4}
\end{eqnarray}
To prove (\ref{91.3}) we write
\begin{equation}
1=\prod_{ i=1}^{ n}\( 1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\leq h\}}+1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\geq h\}}\),
\end{equation}
insert this inside the integral on the left hand side of (\ref{91.3}) and expand the product. It then suffices to show that
\begin{eqnarray}
&& \int \prod_{i\in A} 1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\leq h\}}\prod_{i\in A^{c}} 1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\geq h\}} \prod_{ i=1}^{ n}\(w^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{3}
\nonumber\\
&& \hspace{.5 in} \times \prod_{ j=1}^{ 3n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\prod_{j=1}^{2n}\,dx_{j}=O(h^{4n+1})\label{91.5}
\end{eqnarray}
whenever $|A^{c}|\geq 1$. To see this we first choose $j_{k},\,k=1,\ldots,n$ so that
\[ \{x_{\pi(2j_{k}-1)}- x_{\pi(2j_{k}-2)},\,k=1,\ldots,n \}\cup \{x_{l_{2i}}- x_{l_{2i-1}},\,i=1,\ldots,n \}\] generate $\{x_{1},\ldots, x_{2n}\}$.
After changing variables, (\ref{91.5}) follows easily from (\ref{li.13}), (\ref{1.30gb}) and the fact that $u^{\zeta}$ is bounded and integrable.
We then study
\begin{equation} \hspace{.4 in}
\int \mathcal{T}_{1,h}( x;\,\pi,e)\prod_{j=1}^{2n}\,dx_{j}. \label{f91.36}
\end{equation}
Recall that for each $1\le j\le 3n$,
$\{\pi(2j-1), \pi(2j)\}=\{l_{2i-1},l_{2i}\} $, for some $1\le i\le n$. We identify these relationships by setting $i=\sigma (j) $ when $\{\pi(2j-1), \pi(2j)\}=\{l_{2i-1},l_{2i}\} $. In the present situation this means that $\sigma:\,[1,3n]\mapsto [1,n]$ with $|\sigma^{-1}(i)|=3$ for each $1\leq i\leq n$. (One for each occurrence of $\{l_{2i-1},l_{2i}\} $). We write
\begin{eqnarray}
&&\prod_{ j=1}^{3 n}\, u^{\zeta}(x_{\pi(2j-1)}-x_{\pi(2j-2)})
\label{f91.37}\\
&&\qquad=\prod_{ j=1}^{3n}\,\( u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})+\Delta^{h_{j}}u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})\) , \nonumber
\end{eqnarray}
where $h_{j}=(x_{\pi(2j-1)}-x_{l_{2\sigma (j)-1}})+(x_{l_{2\sigma (j-1)-1}}-x_{\pi(2j-2)})$.
Note that because of the presence of the term $\prod_{i=1}^{n}\(1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\leq h\}}\)$ in the integral in (\ref{f91.36}) we need only be concerned with values of $|h_{j}|\leq 2h$, $1\le j\le 3n$.
We expand the product on the right hand side of (\ref{f91.37}) and obtain a sum of many terms. Using (\ref{1.3x}) and the fact that $|h_{j}|\leq 2h$, $1\le j\le 3n$ we can see as above that
\begin{eqnarray}
&&\int \mathcal{T}_{1,h}( x;\,\pi,e)\prod_{j=1}^{2n}\,dx_{j}
\label{91.6a}\\
&&=\int \prod_{ i=1}^{ n}\(1_{\{|x_{l_{2i}}-x_{l_{2i-1}}|\leq h\}}\) \prod_{ i=1}^{ n}\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{3} \nonumber\\
&&\hspace{1 in}\prod_{ j=1}^{ 3n}\, u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})\prod_{j=1}^{2n}\,dx_{j}+O(h^{4n+1})\nonumber
\end{eqnarray}
where we set $\sigma(0)=0$ and $x_{l_{-1}}=0$. Once again we can now see that
\begin{eqnarray}
&&\int \mathcal{T}_{1,h}( x;\,\pi,e)\prod_{j=1}^{2n}\,dx_{j}
\label{91.6}\\
&&=\int \prod_{ i=1}^{ n}\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{3} \nonumber\\
&&\hspace{1 in}\prod_{ j=1}^{ 3n}\, u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})\prod_{j=1}^{2n}\,dx_{j}+O(h^{4n+1}).\nonumber
\end{eqnarray}
Using translation invariance and then (\ref{1.30g}) we have
\begin{eqnarray}
&&\int \prod_{ i=1}^{ n}\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{3} \prod_{ j=1}^{ 3n}\, u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})\prod_{j=1}^{2n}\,dx_{j}
\nonumber\\
&& =\int \prod_{ i=1}^{ n}\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}})\)^{3} \prod_{ j=1}^{ 3n}\, u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})\prod_{k=1}^{2n}\,dx_{l_{k}} \nonumber\\
&& =(4+O( h))^{n}h^{ 4n}\int \prod_{ j=1}^{ 3n}\, u^{\zeta}(x_{l_{2\sigma (j)-1}}-x_{l_{2\sigma (j-1)-1}})\prod_{k=1}^{n}\,dx_{l_{2k-1}}. \label{91.7}
\end{eqnarray}
Rewriting this and summarizing, we have shown that
\begin{eqnarray}
&&\int \mathcal{T}_{h}( x;\,\pi,e)\prod_{j=1}^{2n}\,dx_{j}
\label{91.8}\\
&& =4^{n}h^{ 4n}\int \prod_{ j=1}^{ 3n}\, u^{\zeta}(y_{\sigma (j)}-y_{\sigma (j-1)})\prod_{k=1}^{n}\,dy_{k} +O(h^{4n+1}) \nonumber
\end{eqnarray}
with $y_{0}=0$.
Let $\mathcal{M}$ denote the set of maps $\sigma$ from $[1,\ldots,3n]$ to $[1,\ldots,n]$ such that $|\sigma^{ -1}( i)|=3$ for all $i$.
For each pairing $\mathcal{P}$, any
$\pi\sim \mathcal{P}$ gives rise as above to a map $\sigma\in \mathcal{M}$. Also, any of the $2^{ 3n}$ $\pi$'s obtained by permuting the
$2$ elements in each of the $3n$ pairs, give rise to the same $\sigma$. In addition, for any $\sigma'\in \mathcal{M}$, we can reorder the $3n$ pairs of $\pi$ to obtain a new $\pi'\sim \mathcal{P}$ which gives rise to $\sigma'$. Thus we have shown that
\begin{eqnarray} &&\sum_{ \pi\sim \mathcal{P}}\int\mathcal{T}_{h}(
x;\,\pi,e)\prod_{ j=1}^{ 2n}\,dx_{j}\label{1.35g}\\ &&= \( 32 h^{ 4}\)^{ n} \sum_{ \sigma\in \mathcal{M}}\int \prod_{ j=1}^{ 3n}\, u^{\zeta}(y_{\sigma (j)}-y_{\sigma (j-1)})\prod_{k=1}^{n}\,dy_{k}+O(h^{4n+1})\nonumber\\ &&= \( {16\over 3}h^{4}\)^{ n} E\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 3}\,dx\)^{
n}\right\}+O(h^{4n+1})\nonumber
\end{eqnarray}
where the last line follows from Kac's moment formula, compare (\ref{1.20g}).
To complete this subsection, let $\mathcal{G} $ denote the set of $\pi$ which are compatible with some pairing
$\mathcal{P}$. Since there are ${( 2n)! \over
2^{ n}n!}$ pairings of $[1,\ldots,2n]$, we have shown that
\begin{eqnarray} &&\sum_{ \pi \in \mathcal{G} }\int\mathcal{T}_{h}(
x;\,\pi,e)\prod_{ j=1}^{ 2n}\,dx_{j}\label{1.36g}\\ &&= {( 2n)! \over
2^{ n}n!}\( {16\over 3}h^{ 4}\)^{ n} E\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 3}\,dx\)^{
n}\right\}+O(h^{4n+1}).\nonumber
\end{eqnarray}
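As a check on the combinatorial factor: for $n=1$ there is $2!/(2^{1}1!)=1$ pairing of $[1,2]$, while for $n=2$ there are
\[{4!\over 2^{2}\,2!}=3\]
pairings of $[1,4]$, namely $\{(1,2),(3,4)\}$, $\{(1,3),(2,4)\}$ and $\{(1,4),(2,3)\}$.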
\subsection{$a=e $ but $\pi$ not compatible with a pairing}\label{ss-3.2t}
In this subsection we show that
\begin{equation}
\sum_{ \pi \not \in \mathcal{G} }\Big |\int\mathcal{T}_{h}(
x;\,\pi,e)\prod_{ j=1}^{ 2n}\,dx_{j}\Big |=O(h^{4n+1}).\label{91.10}
\end{equation}
We return to (\ref{91.1}) to obtain
\begin{eqnarray}
&&
\Big |\int\mathcal{T}_{h}(
x;\,\pi,e)\prod_{ j=1}^{ 2n}\,dx_{j}\Big |\label{91.12}\\
&&\leq \int \prod_{ j=1}^{3n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\,w^{\zeta}( x_{\pi(2j)}- x_{\pi(2j-1)})\prod_{ j=1}^{ 2n}\,dx_{j}
.\nonumber
\end{eqnarray}
Let us show that when $\pi$ is not compatible with a pairing we can find $n+1$ linearly independent vectors from among the $3n$ vectors
\begin{equation}
x_{\pi(2j)}- x_{\pi(2j-1)}, \hspace{.3 in}1\leq j\leq 3n.\label{v.1}
\end{equation}
We will say that $x$ and $y$ are both `contained' in $x-y$. Since $|\pi^{-1}(i)|=3$ for each
$1\leq i\leq 2n$, we can find $j_{1}$ such that $x_{1}$ is contained in $x_{\pi(2j_{1})}- x_{\pi(2j_{1}-1)}$. In addition, $x_{\pi(2j_{1})}- x_{\pi(2j_{1}-1)}$ will contain another $x_{i_{1}}$.
We then pick an integer from $[2,\ldots, 2n]-\{i_{1}\}$, say $i_{2}$, and then find $x_{\pi(2j_{2})}- x_{\pi(2j_{2}-1)}$ which contains $x_{i_{2}}$. This vector will contain another
$x_{i_{3}}$ where we may have $i_{3}=1$ or $i_{3}=i_{1}$. In any event it is clear that in this manner we can obtain a sequence of vectors
\begin{equation}
x_{\pi(2j_{i})}- x_{\pi(2j_{i}-1)}, \hspace{.3 in}1\leq i\leq n\label{v.2}
\end{equation}
which are linearly independent, since for each $i$, $x_{\pi(2j_{i})}- x_{\pi(2j_{i}-1)}$ contains some $x_{k}$ not contained in any of the preceding $x_{\pi(2j_{l})}- x_{\pi(2j_{l}-1)}, 1\leq l<i $.
If the $n$ vectors in (\ref{v.2}) contain all of the $x_{i}, 1\leq i\leq 2n$, then they must consist of differences of disjoint pairs of $x_{i}$'s. As a consequence they cannot generate any vector of the form $x_{i}-x_{i'}$ which is not among the $n$ vectors in (\ref{v.2}). Since by our assumption $\pi$ is not compatible with a pairing, there are vectors of the form $x_{\pi(2j)}- x_{\pi(2j-1)}$
which are different from the $n$ vectors in (\ref{v.2}), and any such vector is therefore linearly independent of them. This proves our claim that we can find $n+1$ linearly independent vectors from among the $3n$ vectors of (\ref{v.1}) in this case. If, on the other hand, the $n$ vectors in (\ref{v.2}) do not contain some $x_{k}$, then there is some vector in (\ref{v.1}) which contains $x_{k}$, and it is clearly linearly independent of the vectors in (\ref{v.2}).
Thus we have a sequence
\begin{equation}
x_{\pi(2j_{i})}- x_{\pi(2j_{i}-1)}, \hspace{.3 in}1\leq i\leq n+1\label{v.3}
\end{equation}
of linearly independent vectors. Let $J=\{j_{i},\,1\leq i\leq n+1\}$. We use (\ref{1.3x}) to bound (\ref{91.12}) by
\begin{eqnarray}
&&
\Big |\int\mathcal{T}_{h}(
x;\,\pi,e)\prod_{ j=1}^{ 2n}\,dx_{j}\Big |\label{v.4}\\
&&\leq Ch^{2n-1}\int \prod_{ j=1}^{3n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\,\prod_{j\in J^{c}} u^{\zeta}( x_{\pi(2j)}- x_{\pi(2j-1)}) \nonumber\\
&&\hspace{1 in}\,\prod_{j\in J}w^{\zeta}( x_{\pi(2j)}- x_{\pi(2j-1)})\prod_{ j=1}^{ 2n}\,dx_{j}
.\nonumber
\end{eqnarray}
We can complete the set of $n+1$ vectors in (\ref{v.3}) to a basis of $x_{i}, 1\leq i\leq 2n$ by choosing $n-1$ vectors from among the vectors appearing as arguments of $u^{\zeta}$ in the second line of (\ref{v.4}). We then bound the remaining $u^{\zeta}$ factors by a constant, change variables and use (\ref{li.13}) with $q=1$ to see that the integral on the right hand side of (\ref{v.4}) is bounded by $Ch^{2(n+1)}$. Combining this with (\ref{v.4}) proves (\ref{91.10}).
\subsection{$a\neq e $}\label{ss-3.3t}
We now claim that
\begin{equation}
\sum_{\pi }\sum_{ a \neq e}\Big |\int \mathcal{T}_{h}( x;\,\pi ,a ) \,\prod_{ j=1}^{ 2n}\,dx_{j}\Big |=O(h^{4n+1}). \label{91.20}
\end{equation}
If $\mathcal{T}_{h}( x;\,\pi ,a )$ contains $k<3n$ factors of the form $w^{\zeta}$, then we will have
$2(3n-k)$ factors of the form $v^{\zeta}$. We use (\ref{1.3x}) then to bound
\begin{equation}
\Big |\int \mathcal{T}_{h}( x;\,\pi ,a ) \,\prod_{ j=1}^{ 2n}\,dx_{j}\Big |\leq C h^{2(3n-k)}
\int \mathcal{I}_{h}( x;\,\pi ,a ) \,\prod_{ j=1}^{ 2n}\,dx_{j}. \label{91.20s}
\end{equation}
where $\mathcal{I}_{h}( x;\,\pi ,a ) $ is similar to $\mathcal{T}_{h}( x;\,\pi ,a )$ except that we have bounded the integrand by its absolute value and replaced each $v^{\zeta}$ by $u^{\zeta}$. We now show how to get a good bound on the integral of $\mathcal{I}_{h}( x;\,\pi ,a ) $.
Choose $j_{1}=1,j_{2},\ldots,j_{2n}$ such that
\begin{eqnarray}
&&\mbox{span }\{x_{\pi( j_{1})} , x_{\pi(j_{2})}- x_{\pi(j_{2}-1)},\ldots, x_{\pi(j_{2n})}- x_{\pi(j_{2n}-1)}\}
\nonumber\\
&&=\mbox{span }\{x_{1},\ldots, x_{2n}\} \label{sp.3}
\end{eqnarray}
It is easy to see that this can be done.
We now show that we can choose a permutation $\sigma_{1},\sigma_{2},\ldots, \sigma_{2n}$ of
$[1,2n]$ such that for any $1\leq k\leq 2n$
\begin{eqnarray}
&&x_{\pi(j_{k})},x_{\pi(j_{k}-1)}\in \{x_{\sigma_{1}} , x_{\sigma_{2}},\ldots, x_{\sigma_{k}}\}. \label{sp.2}
\end{eqnarray}
We take $\sigma_{1}=\pi( j_{1})=\pi( 1)$ and choose the $\sigma_{2},\ldots, \sigma_{2n}$ by induction so that (\ref{sp.2}) holds for each $1\leq k\leq 2n$. This clearly holds for $k=1$, since by definition $x_{\pi(0)}=0$. Assume we have chosen $\sigma_{1}, \sigma_{2},\ldots, \sigma_{l}$ so that (\ref{sp.2}) holds for all $k\leq l$. Then among the remaining $\{ x_{\pi(j_{l+1})}- x_{\pi(j_{l+1}-1)},\ldots, x_{\pi(j_{2n})}- x_{\pi(j_{2n}-1)}\}$ there will be at least one $i$ such
that either $x_{\pi(j_{i})}$ or $x_{\pi(j_{i}-1)}$ is equal to one of $x_{\sigma_{1}} , x_{\sigma_{2}},\ldots, x_{\sigma_{l}}$. This is because each element of $\{ x_{\pi(j_{l+1})}- x_{\pi(j_{l+1}-1)},\ldots, x_{\pi(j_{2n})}- x_{\pi(j_{2n}-1)}\}$ is a difference of $x$'s so that by themselves we could never
have
\begin{eqnarray}
&&\mbox{span }\{ x_{\pi(j_{l+1})}- x_{\pi(j_{l+1}-1)},\ldots, x_{\pi(j_{2n})}- x_{\pi(j_{2n}-1)}\}
\nonumber\\
&&=\mbox{span }\( \{x_{1},\ldots, x_{2n}\}-\{x_{\sigma_{1}} , x_{\sigma_{2}},\ldots, x_{\sigma_{l}}\}\). \label{sp.4}
\end{eqnarray}
We then take such an $i$, and if $x_{\pi(j_{i})}$ is equal to one of $x_{\sigma_{1}} , x_{\sigma_{2}},\ldots, x_{\sigma_{l}}$ we set $\sigma_{l+1}=\pi(j_{i}-1)$, while if $x_{\pi(j_{i}-1)}$ is equal to one of $x_{\sigma_{1}} , x_{\sigma_{2}},\ldots, x_{\sigma_{l}}$ we set $\sigma_{l+1}=\pi(j_{i})$. Then (\ref{sp.2}) holds for $k= l+1$, which completes our induction.
We will prove (\ref{91.20}) by first bounding the $dx_{ \sigma_{2n}}$ integral in (\ref{91.20s}) involving all factors containing
$x_{ \sigma_{2n}}$. We then bound the $dx_{ \sigma_{2n-1}}$ integral involving all \underline{remaining} factors containing
$x_{\sigma_{2n-1}}$. We then iterate this procedure, bounding in turn the $dx_{\sigma_{2n}}, dx_{\sigma_{2n-1}},\ldots,dx_{\sigma_{1}}$ integrals. (\ref{sp.2}) guarantees that at each stage we are integrating a non-empty product of bounded integrable functions. Note that by (\ref{1.3y}) and (\ref{li.13}) with $q=1$
\begin{eqnarray}
\sup_{a_{i}}\int \prod_{i=1}^{p}w^{\zeta}(y+a_{i})\,dy \leq Ch^{p-1}\sup_{a_{1}}
\int w^{\zeta} (y+a_{1})\,dy =O(h^{p+1}) \label{com.13x}
\end{eqnarray}
for all $p\geq 1$.
Let $p_{i}$ denote the number of \underline{remaining} $w^{\zeta}$ factors containing
$x_{\sigma_{i}}$ after we have bounded in turn the $dx_{\sigma_{2n}}, dx_{\sigma_{2n-1}},\ldots,dx_{\sigma_{i+1}}$ integrals, that is,
$p_{i}$ denotes the number of $w^{\zeta}$ factors containing
$x_{\sigma_{i}}$ but not any of $x_{\sigma_{2n}}, x_{\sigma_{2n-1}},\ldots,x_{\sigma_{i+1}}$. Since there are a total of $k<3n$ factors of the form $w^{\zeta}$ in (\ref{91.20s}), we have that $\sum_{i=1}^{2n}p_{i}=k$. Let $k_{0}=|\{i\,|\,p_{i}\neq 0\}|$.
If we apply our bounding procedure using (\ref{com.13x}) together with the fact that
$u^{\zeta}$ is bounded and integrable we see that
\begin{equation}
\int \mathcal{I}_{h}( x;\,\pi ,a ) \,\prod_{ j=1}^{ 2n}\,dx_{j}=O(h^{\sum_{i=1}^{2n}(p_{i}+1_{\{p_{i}\neq 0\}})})
=O(h^{k+k_{0}}).\label{com.14x}
\end{equation}
It is easy to see that in (\ref{91.20s}) each $x_{j}$ appears in at most $3$ factors of the form $w^{\zeta}$. Thus each $p_{i}\leq 3$. Since $\sum_{i=1}^{2n}p_{i}=k$, we must have $k_{0}\geq k/3$. Combining (\ref{com.14x}) with (\ref{91.20s}) we have
\begin{equation}
\Big |\int \mathcal{T}_{h}( x;\,\pi ,a ) \,\prod_{ j=1}^{ 2n}\,dx_{j}\Big |\leq C h^{2(3n-k)+4k/3}
=Ch^{4n+2(n-k/3)} \label{91.20t}
\end{equation}
which proves (\ref{91.20}) since $k<3n$.
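Note that since $k_{0}$ is an integer we in fact have $k_{0}\geq \lceil k/3\rceil$, so that (\ref{com.14x}) and (\ref{91.20s}) give the slightly stronger bound
\[\Big |\int \mathcal{T}_{h}( x;\,\pi ,a ) \,\prod_{ j=1}^{ 2n}\,dx_{j}\Big |\leq C h^{2(3n-k)+k+\lceil k/3\rceil}=Ch^{6n-k+\lceil k/3\rceil}\leq Ch^{4n+1}\]
for all $0\leq k< 3n$, which is the form of the estimate used in (\ref{91.20}).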
Combining (\ref{91.20}) with the results of Subsections \ref{ss-3.1t}-\ref{ss-3.2t} (note the factor $6^{m}=36^{n}$ from (\ref{1.20g})), we have thus shown that
\begin{eqnarray}
&&E_{\ast}\(\( \int \left\{ (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x}_{\la_{\zeta}}\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x}_{\la_{\zeta}}\right\}\,dx\)^{ 2n}
\)\nonumber\\ &&= {( 2n)! \over 2^{ n}n!}\( 192 h^{ 4}\)^{
n} E\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 3}\,dx\)^{ n}\right\}+O(h^{ 4n+1} ).\label{1.39g}
\end{eqnarray}
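Tracking the constants: combining the factor $6^{m}=36^{n}$ from (\ref{1.20g}) with (\ref{1.36g}) gives
\[36^{n}\( {16\over 3}h^{ 4}\)^{ n}=\( 192 h^{ 4}\)^{ n},\]
which is the constant appearing in (\ref{1.39g}).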
We now explain why we obtain the same expression on the right hand side when we have $E$ instead of $E_{\ast}$. In the paragraph following (\ref{pr.1}) we make use of precise cancellations to handle bound variables, which a priori might be affected by our modification of (\ref{pr1}).
Consider how (\ref{1.20g}) would look if we had used the
product formula (\ref{pr1}). Note that any estimates we used will still apply, since these estimates involve integrating or bounding by the supremum, neither of which is affected if any of the $x$'s is replaced by
$x+h$. (\ref{91.6}) will be affected, but note from (\ref{pr1}) that the only terms of the form $u^{\zeta}(x-y)$ that may have $x$ replaced by $x\pm h$ are those to which $\Delta^{h}_{x}$ is not applied. Similarly, $y$ may be replaced by $y\pm h$ only if $\Delta^{h}_{y}$ is not applied to a term of the form $u^{\zeta}(x-y)$. Consequently, we still have all terms of the form $\Delta^{h}\Delta^{-h}u^{\zeta}$. Thus we obtain (\ref{91.8}), except that some of the remaining $u^{\zeta}(x-y)$ may be replaced by $u^{\zeta}(x-y\pm h)$. Using (\ref{1.3x}) then leads to (\ref{91.8}).
Thus, it only remains to show that for each
$n$
\begin{eqnarray} &&E\(\(\int \left\{ (\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}})^{
3} - 6u_{h,-h}^{\zeta}(0) L^{ x}_{\la_{\zeta}}\Delta_{x}^{h}L^{ x}_{ \la_{\zeta}}
- 6\(u_{h,-h}^{\zeta}(0)\)^{2} L^{ x}_{\la_{\zeta}}\right\}\,dx\)^{ 2n+1}
\)\nonumber\\ &&\hspace{ 3in}=O(h^{ 2( 2n+1) +1} ).\label{1.40g}
\end{eqnarray} This follows from the fact that when $m=2n+1$ is odd there is no pairing of $[1,m]$, so that we cannot form any $\pi$'s in
$\mathcal{G}$.
{\hfill $\square$ \bigskip}
\section{Proof of Lemma \ref{lem-6.3a}}\label{sec-6.1}
We use $E^{y,z}(\cdot)$ to denote expectation with respect to the independent Brownian motions $B_{t}$ starting at $y$ and $\widetilde B_{t}$ starting at $z$.
\bl\label{lem-6.3a}
Let $\la_{\zeta}, \la_{\zeta'}$ be independent exponential random variables with means $1/\zeta$ and $1/\zeta'$ respectively.
For each integer $m\ge 0$,
\begin{eqnarray} &&
\lim_{ h\rightarrow 0}E^{y,z}\(\({ \int \left\{ (\Delta_{x}^{h}L^{ x}_{\la_{\zeta}})^{
2}-2\Delta^{h}\Delta^{-h}u^{\zeta}(0) L^{ x}_{\la_{\zeta}} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{\la_{\zeta'}}\,dx \over h^{ 2}}\)^{m}\)\label{16.30}\\
&&\hspace{ .5in} =\left\{\begin{array}{ll}
\displaystyle {( 2n)!\over 2^{ n}n!}\( \displaystyle 64 \)^{ n} E^{y,z}\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 2}\widetilde L^{ x}_{ \la_{\zeta'}}\,dx\)^{
n}\right\} &\mbox{ if }m=2n\\\\
0&\mbox{ otherwise}
\end{array}
\right.
\nonumber
\end{eqnarray}
uniformly in $y,z$.
\end{lemma}
{\bf Proof of Lemma \ref{lem-6.3a}}:
For any
integer
$m$ we have
\begin{eqnarray} && \qquad E^{y,z}\(\( \int \left\{ (\Delta_{x}^{h}L^{ x}_{\la_{\zeta}})^{
2}-2\Delta^{h}\Delta^{-h}u^{\zeta}(0) L^{ x}_{\la_{\zeta}} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{\la_{\zeta'}}\,dx\)^{ m}
\)\label{16.31}\\ &&=E^{y,z}\(\prod_{ i=1}^{ m}\( \int \left\{ (\Delta_{x_{i}}^{h}L^{ x_{i}}_{\la_{\zeta}})^{
2}-2\Delta^{h}\Delta^{-h}u^{\zeta}(0) L^{ x_{i}}_{\la_{\zeta}} \right\}\Delta_{x_{i}}^{h}\widetilde L^{ x_{i}}_{\la_{\zeta'}}\,dx_{ i}\)
\)\nonumber\\ &&=\sum_{A\subseteq \{ 1,\ldots,m\}} ( -1)^{m- |A|}(2\Delta^{h}\Delta^{-h}u^{\zeta}(0))^{|A^{c}|}\nonumber\\ &&
\hspace{ .7in}E^{y,z}\(
\(\prod_{ i\in A} \int (\Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}})^{
2}\Delta_{x_{ i}}^{h}\widetilde L^{ x_{ i}}_{ \la_{\zeta'}}\,dx_{
i}\)\(\prod_{ k\in A^{ c}}\int L^{ x_{ k}}_{
\la_{\zeta}}\Delta_{x_{k}}^{h}\widetilde L^{ x_{ k}}_{ \la_{\zeta'}}\,dx_{k}\) \).\nonumber
\end{eqnarray}
We initially calculate
\begin{equation}(2\Delta^{h}\Delta^{-h}u^{\zeta}(0))^{|A^{c}|} E^{y,z}\(\prod_{ i\in A} \Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}}\Delta_{y_{ i}}^{h}L^{ y_{ i}}_{ \la_{\zeta}}\Delta_{z_{ i}}^{h}\widetilde L^{ z_{ i}}_{ \la_{\zeta'}}
\prod_{ k\in A^{ c}}L^{ u_{ k}}_{\la_{\zeta}}\Delta_{v_{ k}}^{h}\widetilde L^{ v_{ k}}_{ \la_{\zeta'}}\)\label{16.32}
\end{equation}
and eventually we set $y_{ j}=x_{ j}=z_{j}$ and $u_{j}=v_{ j}$ for all $j$.
Using (\ref{1.2w}) we have
\begin{eqnarray} && E^{y,z}\(\prod_{ i\in A} \Delta_{x_{ i}}^{h}L^{ x_{ i}}_{ \la_{\zeta}}\Delta_{y_{ i}}^{h}L^{ y_{ i}}_{ \la_{\zeta}}\Delta_{z_{ i}}^{h}\widetilde L^{ z_{ i}}_{ \la_{\zeta'}}
\prod_{ k\in A^{ c}}L^{ u_{ k}}_{\la_{\zeta}}\Delta_{v_{ k}}^{h}\widetilde L^{ v_{ k}}_{ \la_{\zeta'}}\)\label{16.33}\\ && =\(\prod_{ i\in A}\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{
h}\Delta_{ z_{ i}}^{ h}\prod_{ k\in A^{ c}} \Delta_{v_{ k}}^{h} \)E^{y,z}\(\prod_{ i\in A}L^{ x_{ i}}_{
\la_{\zeta}} L^{ y_{ i}}_{ \la_{\zeta}}\widetilde L^{ z_{ i}}_{ \la_{\zeta'}}\prod_{ k\in A^{ c}} L^{ u_{ k}}_{\la_{\zeta}}\widetilde L^{ v_{ k}}_{ \la_{\zeta'}}\)
\nonumber\\ && =\(\prod_{ i\in A}\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{
h}\Delta_{ z_{ i}}^{ h}\prod_{ k\in A^{ c}} \Delta_{v_{ k}}^{h} \)\nonumber\\
&&\hspace{1 in}E^{y}\(\prod_{ i\in A}L^{ x_{ i}}_{
\la_{\zeta}} L^{ y_{ i}}_{ \la_{\zeta}}\prod_{ k\in A^{ c}} L^{ u_{ k}}_{\la_{\zeta}}\)E^{z}\(\prod_{ i\in A}\widetilde L^{ z_{ i}}_{ \la_{\zeta'}}\prod_{ k\in A^{ c}} \widetilde L^{ v_{ k}}_{ \la_{\zeta'}}\)
\nonumber\\ && =\(\prod_{ i\in A}\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{
h} \)\,\,
\sum_{ \sigma }\prod_{ j=1}^{m+|A|}u^{\zeta}( \sigma( j)-\sigma( j-1)) \nonumber\\ && \hspace{.5 in}
\(\prod_{ i\in A}\Delta_{ z_{ i}}^{ h}\prod_{ k\in A^{ c}} \Delta_{v_{ k}}^{h} \)\,\,
\sum_{ \sigma'}\prod_{ j=1}^{m}u^{\zeta'}( \sigma'( j)-\sigma'( j-1)) \nonumber
\end{eqnarray}
where the first sum runs over all
bijections\[\sigma:\,[1,\ldots,m+|A|]\mapsto
\{ x_{ i}, y_{ i}, i\in A\}\cup \{ u_{ i}, i\in A^{ c} \}\]
with $\sigma(0)=y$, and the second sum runs over all
bijections\[\sigma':\,[1,\ldots,m]\mapsto
\{ z_{i}, i\in A\}\cup \{ v_{ i}, i\in A^{ c} \}\]
with $\sigma'(0)=z$.
We then use the product
rule (\ref{pr1})
to expand the right hand side of (\ref{16.33}) into a sum of many terms, over all
$\sigma,\sigma'$ and all ways to allocate each $\Delta_{ x_{ i}}^{ h},\Delta_{ y_{ i}}^{ h}, \Delta_{ z_{ i}}^{ h}$ or $\Delta_{ v_{ k}}^{ h}$ to
a single $u^{\zeta}$ or $u^{\zeta'}$ factor.
Consider first the case where $A=\{ 1,\ldots,m\}$. For a given term in the above
expansion, we will say that
$x_{ i}$ is bound if $x_{ i},y_{ i} $ are adjacent, (in other words either $(x_{ i},y_{ i})=(\sigma(j),\sigma(j+1))$ or $(y_{ i},x_{ i})=(\sigma(j),\sigma(j+1))$ for some $j$), and both $\Delta_{ x_{ i}}^{ h}$ and
$\Delta_{ y_{ i}}^{ h}$ are attached to the factor $u^{\zeta}( x_{ i}-y_{ i})$.
Setting $ x_{ i}=y_{ i}$ turns the factor $\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{ h}u^{\zeta}( x_{ i}-y_{ i})$ into $\Delta^{h}\Delta^{-h}u^{\zeta}(0)$, and as in the proof of Lemma \ref{lem-2weak} we can assume that our use of (\ref{pr1}) for $\Delta_{ x_{ i}}^{ h}$ and
$\Delta_{ y_{ i}}^{ h}$ does not introduce a $\pm h$ in the arguments of other factors.
For every such $\sigma$ there is precisely one other bijection which differs from $\sigma$ only in that it
permutes
$x_{ i},y_{ i} $, so we obtain a factor of $2\Delta^{h}\Delta^{-h}u^{\zeta}(0)$. This is precisely what we
would have obtained if instead of $\Delta_{ x_{ i}}^{ h}\Delta_{ y_{ i}}^{ h}L^{ x_{
i}}_{\la_{\zeta}} L^{ y_{ i}}_{ \la_{\zeta}}$ in (\ref{16.33}) we had $2\Delta^{h}\Delta^{-h}u^{\zeta}(0)L^{ u_{ i}}_{\la_{\zeta}} $. Consider then any term
in the expansion of (\ref{16.33}) with $A=\{ 1,\ldots,m\}$ and $J=\{i\,|\,x_{ i} \mbox{ is bound}\}$.
By (\ref{16.33}) there will be an identical contribution from the last line of (\ref{16.31}) for any other $A$ with $A^{c}\subseteq J$.
Since
$\sum_{A^{c}\subseteq J} ( -1)^{ |A^{c}|}=0$, we see that in the
expansion of (\ref{16.31}) there will not be any contributions from bound $x$'s.
Furthermore, this completely exhausts the contribution to (\ref{16.31}) of all
$A\neq \{ 1,\ldots,m\}$.
Thus in estimating (\ref{16.33}) we need only consider $A=\{ 1,\ldots,m\}$ and
those cases where if two $\Delta^{ h}$'s are assigned to the same $u^{\zeta}$ factor, it is not of the form $u^{\zeta}(
x_{ i}-y_{ i})$.
Again as in the proof of Lemma \ref{lem-2weak}, in the following calculation we first replace the right hand side of (\ref{pr1}) by $\{ \Delta_{ x}^{ h} f( x)\}g( x)+f( x)\{ \Delta_{ x}^{ h}g( x)\}$, and return at the end of the proof to explain why this doesn't affect the final result. We use the notation
\begin{equation}
E^{y,z}_{\ast}\(\( \int \left\{ (\Delta_{x}^{h}L^{ x}_{\la_{\zeta}})^{
2}-2\Delta^{h}\Delta^{-h}u^{\zeta}(0) L^{ x}_{\la_{\zeta}} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{\la_{\zeta'}}\,dx\)^{ m}
\)\label{pr.16}
\end{equation}
to denote the expression obtained with this replacement.
We can thus write
\begin{eqnarray} &&\hspace{ .1in} E^{y,z}_{\ast}\(\( \int \left\{ (\Delta_{x}^{h}L^{ x}_{\la_{\zeta}})^{
2}-2\Delta^{h}\Delta^{-h}u^{\zeta}(0) L^{ x}_{\la_{\zeta}} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{\la_{\zeta'}}\,dx\)^{ m}
\)\label{16.34}\\ &&=2^{ m}\sum_{ \pi ,a }\,\sum_{ \pi', a'}\int \mathcal{T}_{h}( x;\,\pi,\pi',a,a')\,dx\nonumber
\end{eqnarray}
with
\begin{eqnarray}
&&
\mathcal{T}_{h}( x;\, \pi,\pi',a,a') \nonumber\\
&&=\prod_{ j=1}^{ 2m}\(\Delta^{ h}_{ x_{ \pi( j)}}\)^{a_{ 1}(j)}
\(\Delta^{ h}_{ x_{ \pi( j-1)}}\)^{a_{ 2}(j)}\,u^{\zeta}( x_{\pi(j)}- x_{\pi(j-1)})\label{16.21g}\\
& &\times\prod_{ j=1}^{ m}\(\Delta^{ h}_{ x_{ \pi'( j)}}\)^{a'_{ 1}(j)}
\(\Delta^{ h}_{ x_{ \pi'( j-1)}}\)^{a'_{ 2}(j)}\,u^{\zeta'}( x_{\pi'(j)}- x_{\pi'(j-1)})\nonumber
\end{eqnarray}
where the first sum runs over all maps $\pi\,:\,[1,\ldots, 2m]\mapsto
[1,\ldots, m]$ with $|\pi^{ -1}(i )|=2$ for each $i$, and all `assignments'
$a=(a_{ 1},a_{ 2})\,:\,[1,\ldots, 2m]\mapsto \{ 0,1\}\times \{ 0,1\}$ with the
property that for each $i$ there will be exactly two factors of the form $\Delta^{
h}_{ x_{i}}$ in the second line of (\ref{16.21g}), and if $a( j)=( 1,1)$ for any $j$, then
$ x_{\pi(j)}\neq x_{\pi(j-1)}$. The factor $2^{ m}$ in (\ref{16.34}) comes from
the fact that $|\pi^{ -1}(i )|=2$ for
each $i$. Similarly, the second sum runs over all permutations $\pi'\,:\,[1,\ldots, m]\mapsto
[1,\ldots, m]$, and all `assignments'
$a'=(a'_{ 1},a'_{ 2})\,:\,[1,\ldots, m]\mapsto \{ 0,1\}\times \{ 0,1\}$ with the
property that for each $i$ there will be exactly one factor of the form $\Delta^{
h}_{ x_{i}}$ in the last line of (\ref{16.21g}). Here we have set $x_{ \pi(0)}=y, x_{ \pi'(0)}=z.$
From this point on the proof is very similar to that of Lemma \ref{lem-2weak}. Let $m=2n$. Assume first that $a=e$ where now $e( 2j)=( 1,1)$, $e( 2j-1)=( 0,0)$ for all $j$, and similarly for $a'$.
\subsection{$a =a'=e$ with $\pi, \pi'$ compatible with a pairing}\label{ss-6.1t}
When $a =a'=e$ we have
\begin{eqnarray}
&&
\mathcal{T}_{h}( x;\, \pi,\pi',e,e) =\prod_{ j=1}^{2n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\,\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{\pi(2j)}- x_{\pi(2j-1)})\nonumber\\
&& \times \prod_{ j=1}^{n}u^{\zeta'}( x_{\pi'(2j-1)}- x_{\pi'(2j-2)})\,\Delta^{ h}\Delta^{- h}\,u^{\zeta'}( x_{\pi'(2j)}- x_{\pi'(2j-1)}) .\label{691.1}
\end{eqnarray}
Let $\mathcal{P}=\{(l_{2i-1},l_{2i})\,,\,1\leq i\leq n\}$ be a pairing of the integers $[1,2n]$. Let $\pi$ as in (\ref{691.1}) be such that for each $1\leq j\leq 2n$,
$\{\pi(2j-1), \pi(2j)\}=\{l_{2i-1},l_{2i}\}$ for some, necessarily unique, $ 1\leq i\leq n$. In this case we say that $\pi$ is compatible with the pairing $\mathcal{P}$ and write this as $ \pi \sim \mathcal{P}$. Similarly we say that $\pi'$ is compatible with the pairing $\mathcal{P}$ if for each $1\leq j\leq n$,
$\{\pi'(2j-1), \pi'(2j)\}=\{l_{2i-1},l_{2i}\}$ for some, necessarily unique, $ 1\leq i\leq n$, and write this as $ \pi' \sim \mathcal{P}$.
If $\pi, \pi' \sim \mathcal{P}$ we have
\begin{eqnarray}
&&
\mathcal{T}_{h}( x;\, \pi,\pi',e,e)\nonumber\\
&& =\prod_{ i=1}^{ n}\(\Delta^{ h}\Delta^{- h}\,u^{\zeta}( x_{l_{2i}}- x_{l_{2i-1}})\)^{2}\prod_{ j=1}^{ 2n}u^{\zeta}( x_{\pi(2j-1)}- x_{\pi(2j-2)})\,\nonumber\\
&&\hspace{.1 in} \times \prod_{ i=1}^{ n} \Delta^{ h}\Delta^{- h}\,u^{\zeta'}( x_{l_{2i}}- x_{l_{2i-1}}) \prod_{ j=1}^{ n}u^{\zeta'}( x_{\pi'(2j-1)}- x_{\pi'(2j-2)}) .\label{691.2}
\end{eqnarray}
Set $\sigma (j)=i $ when $\{\pi(2j-1), \pi(2j)\}=\{l_{2i-1},l_{2i}\} $, so that $\sigma:\,[1,2n]\mapsto [1,n]$ with $|\sigma^{-1}(i)|=2$ for each $1\leq i\leq n$. Similarly, set $\sigma' (j)=i $ when $\{\pi'(2j-1), \pi'(2j)\}=\{l_{2i-1},l_{2i}\} $, so that $\sigma'$ is a permutation of $ [1,n] $.
As in Sub-section \ref{ss-3.1t} we can show that
\begin{eqnarray}
&&\int \mathcal{T}_{h}( x;\, \pi,\pi',e,e)\prod_{j=1}^{2n}\,dx_{j}
\label{691.8}\\
&& =4^{n}h^{ 4n}\int \prod_{ j=1}^{ 2n}\, u^{\zeta}(y_{\sigma (j)}-y_{\sigma (j-1)})
\prod_{ j=1}^{ n}\, u^{\zeta'}(y_{\sigma' (j)}-y_{\sigma' (j-1)})\prod_{k=1}^{n}\,dy_{k} +O(h^{4n+1}) \nonumber
\end{eqnarray}
with $y_{ \sigma(0)}=y, y_{ \sigma'(0)}=z$ and error term uniform in $y,z$.
Let $\mathcal{M}_{d}$ denote the set of maps $\sigma$ from $[1,\ldots,dn]$ to $[1,\ldots,n]$ such that $|\sigma^{ -1}( i)|=d$ for all $i$.
For each pairing $\mathcal{P}$, each map
$\pi\sim \mathcal{P}$ gives rise as above to a map $\sigma\in \mathcal{M}_{2}$. Also, any of the $2^{ 2n}$ $\pi$'s obtained by permuting the
$2$ elements in each of the $2n$ pairs, give rise to the same $\sigma$. In addition, for any $\widehat\sigma\in \mathcal{M}_{2}$, we can reorder the $2n$ pairs of $\pi$ to obtain a new $\widehat \pi\sim \mathcal{P}$ which gives rise to $\widehat \sigma$. A similar analysis applies to our $\pi'$.
Thus we have shown that
\begin{eqnarray} &&\sum_{ \pi,\pi'\sim \mathcal{P}}\int\mathcal{T}_{h}( x;\, \pi,\pi',e,e)\prod_{ j=1}^{ 2n}\,dx_{j}\label{61.35g}\\ &&= \( 32 h^{ 4}\)^{ n} \sum_{ \sigma\in \mathcal{M}_{2},\,\sigma'\in \mathcal{M}_{1}}\int \prod_{ j=1}^{ 2n}\, u^{\zeta}(y_{\sigma (j)}-y_{\sigma (j-1)})\prod_{ j=1}^{ n}\, u^{\zeta'}(y_{\sigma' (j)}-y_{\sigma' (j-1)})\prod_{k=1}^{n}\,dy_{k}+O(h^{4n+1})\nonumber\\ &&= \( 16 h^{4}\)^{ n} E^{y,z}\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 2}\widetilde L^{ x}_{ \la_{\zeta'}}\,dx\)^{
n}\right\}+O(h^{4n+1})\nonumber
\end{eqnarray}
where the last line follows from Kac's moment formula.
To complete this subsection, let $\mathcal{G} $ denote the set of $\pi,\pi'$ which are compatible with some pairing
$\mathcal{P}$. Since there are ${( 2n)! \over
2^{ n}n!}$ pairings of $[1,\ldots,2n]$, we have shown that
\begin{eqnarray} &&\sum_{ \pi,\pi' \in \mathcal{G} }\int\mathcal{T}_{h}( x;\, \pi,\pi',e,e)\prod_{ j=1}^{ 2n}\,dx_{j}\label{61.36g}\\ &&= {( 2n)! \over
2^{ n}n!}\( 16 h^{ 4}\)^{ n} E^{y,z}\left\{\(\int (L^{ x}_{ \la_{\zeta}})^{ 2}\widetilde L^{ x}_{ \la_{\zeta'}}\,dx\)^{
n}\right\}+O(h^{4n+1}).\nonumber
\end{eqnarray}
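Tracking the constants as before: the factor $2^{m}=4^{n}$ from (\ref{16.34}) combines with (\ref{61.36g}) to give
\[4^{n}\( 16 h^{ 4}\)^{ n}=\( 64 h^{ 4}\)^{ n},\]
and dividing by $h^{2m}=h^{4n}$ yields the constant $64^{n}$ in (\ref{16.30}).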
The rest of the proof follows as in the proof of Lemma \ref{lem-2weak}.
{\hfill $\square$ \bigskip}
\section{Proof of Lemma \ref{lem-3.1j}}\label{sec-1.1}
\bl\label{lem-3.1j} For each integer $n\ge 0$,
\begin{eqnarray} &&
\lim_{ h\rightarrow 0}E\(\({ \int \Delta_{x}^{h}L^{ x}_{t_{1}} \,\(\Delta_{x}^{h} L^{ x}_{t_{2} }\circ\theta_{t_{1}}\) \, \Delta_{x}^{h}\widetilde L^{ x}_{t_{3}} \,dx \over h^{ 2}}\)^{2n}\)\label{6.3lev}\\
&&\hspace{ .5in} = {( 2n)!\over 2^{ n}n!}\( \displaystyle 64 \)^{ n} E\left\{\(\int L^{ x}_{t_{1}}
\( L^{ x}_{t_{2} }\circ\theta_{t_{1}}\) \widetilde L^{ x}_{ t_{3}}\,dx\)^{
n}\right\},
\nonumber
\end{eqnarray}
locally uniformly in $t_{1},t_{2},t_{3}$ on $t_{1}>0$.
\end{lemma}
{\bf Proof of Lemma \ref{lem-3.1j} } The proof of this Lemma is easier than that of Lemmas
\ref{lem-2weak} and \ref{lem-6.3a} since there are no subtraction terms. However, there are complications due to the fact that we now work with non-random times $t_{1},t_{2},t_{3}$ rather than exponential times.
We begin by writing
\begin{eqnarray}
&&E\(\( \int \Delta_{x}^{h}L^{ x}_{t_{1}} \,\(\Delta_{x}^{h} L^{ x}_{t_{2} }\circ\theta_{t_{1}}\) \, \Delta_{x}^{h}\widetilde L^{ x}_{t_{3}} \,dx \)^{2n}\)
\label{semi.1}\\
&&= E\(\prod_{i=1}^{2n}\( \int \Delta_{x_{i}}^{h}L^{ x_{i}}_{t_{1}} \,\(\Delta_{x_{i}}^{h} L^{ x_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \, \Delta_{x_{i}}^{h}\widetilde L^{ x_{i}}_{t_{3}} \,dx_{i} \) \) \nonumber
\\
&&=\int\, E\(\prod_{i=1}^{2n}\( \Delta_{x_{i}}^{h}L^{ x_{i}}_{t_{1}} \,\(\Delta_{x_{i}}^{h} L^{ x_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \, \Delta_{x_{i}}^{h}\widetilde L^{ x_{i}}_{t_{3}} \) \)\prod_{i=1}^{2n} \,dx_{i} \nonumber
\end{eqnarray}
We first evaluate
\begin{eqnarray}
&&E\(\prod_{i=1}^{2n}\( \Delta_{x_{i}}^{h}L^{ x_{i}}_{t_{1}} \,\(\Delta_{y_{i}}^{h} L^{ y_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \, \Delta_{z_{i}}^{h}\widetilde L^{ z_{i}}_{t_{3}} \) \)
\label{semi.2}\\
&& =\prod_{i=1}^{2n}\( \Delta_{x_{i}}^{h} \Delta_{y_{i}}^{h} \Delta_{z_{i}}^{h}\) E\(\prod_{i=1}^{2n}\( L^{ x_{i}}_{t_{1}} \,\( L^{ y_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \, \widetilde L^{ z_{i}}_{t_{3}} \) \) \nonumber\\
&& =\prod_{i=1}^{2n}\( \Delta_{x_{i}}^{h} \Delta_{y_{i}}^{h} \) E\(\prod_{i=1}^{2n} L^{ x_{i}}_{t_{1}} \,\( L^{ y_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \) \nonumber\\
&& \hspace{1.5 in} \prod_{i=1}^{2n}\(\Delta_{z_{i}}^{h}\) E\(\prod_{i=1}^{2n} \widetilde L^{ z_{i}}_{t_{3}} \) \nonumber
\end{eqnarray}
and then set all $y_{i}=z_{i}=x_{i}$.
By Kac's moment formula
\begin{eqnarray}
&& E\(\prod_{i=1}^{2n} L^{ x_{i}}_{t_{1}} \,\( L^{ y_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \)
\label{semi.3}\\
&&= \sum_{\pi_{1},\pi_{2}}\int_{\{\sum_{j=1}^{2n} r_{1,j}\leq t_{1}\} } \prod_{j=1}^{2n}p_{r_{1,j}}(x_{\pi_{1}(j)}-x_{\pi_{1}(j-1)}) \nonumber \\
&&\hspace{1 in} \int_{\{\sum_{j=1}^{2n} r_{2,j}\leq t_{2}\} }
p_{r_{2,1}+(t_{1}-\sum_{j=1}^{2n} r_{1,j})}(y_{\pi_{2}(1)}-x_{\pi_{1}(2n)})\nonumber\\
&&\hspace{1.5 in} \prod_{j=2}^{2n}p_{r_{2,j}}(y_{ \pi_{2}(j)}-y_{ \pi_{2}(j-1)})\prod_{j=1}^{2n}\,dr_{1,j}\,dr_{2,j} \nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&E\(\prod_{i=1}^{2n} \widetilde L^{ z_{i}}_{t_{3}} \)
\label{semi.3a}\\
&& = \sum_{\pi_{3}}\int_{\{\sum_{j=1}^{2n} r_{3,j}\leq t_{3}\} } \prod_{j=1}^{2n}p_{r_{3,j}}(z_{ \pi_{3}(j)}-z_{ \pi_{3}(j-1)})\,dr_{3,j} \nonumber
\end{eqnarray}
where the sums run over all permutations $\pi_{j}$ of $\{1,\ldots, 2n\}$ and we also define $\pi_{j}(0) =0$
and $x_{0} =z_{0} =0$.
We then use the product
rule (\ref{pr1}) as before,
to expand the right hand side of (\ref{semi.2}) into a sum of many terms, and then setting $y_{i}=z_{i}=x_{i}$ we obtain:
\begin{eqnarray} && E\(\prod_{i=1}^{2n}\( \Delta_{x_{i}}^{h}L^{ x_{i}}_{t_{1}} \,\(\Delta_{x_{i}}^{h} L^{ x_{i}}_{t_{2} }\circ\theta_{t_{1}}\) \, \Delta_{x_{i}}^{h}\widetilde L^{ x_{i}}_{t_{3}} \) \)
\label{f1.20gi}\\ &&\qquad= \sum_{ \pi ,a }\int \mathcal{T}^{\sharp}_{h}( x;\,\pi ,a )\,dx\nonumber
\end{eqnarray}
where $x=(x_{1},\ldots, x_{2n}), \pi=(\pi_{1},\pi_{2}, \pi_{3}), a=(a_{1},a_{2}, a_{3})$ and
\begin{eqnarray}
&&
\mathcal{T}^{\sharp}_{h}( x;\,\pi ,a ) \label{f1.21gi} \\
&& = \prod_{d=1}^{3}\int_{{\cal R}_{d} } \prod_{ j=1}^{ 2n}\(\(\Delta^{ h}_{ x_{ \pi_{d}( j)}}\)^{a_{d, 1}(j)}
\(\Delta^{ h}_{ x_{ \pi_{d}( j-1)}}\)^{a_{d, 2}(j)}\,p^{\sharp}_{\bar r_{d,j}}(x_{\pi_{d} (j)}-x_{\pi_{d} (j-1)})\)\nonumber \\
&&\hspace{4 in}\prod_{ j=1}^{ 2n}\,dr_{d,j}.\nonumber
\end{eqnarray}
In (\ref{f1.20gi}) the sum runs over all triples of permutations $(\pi_{1},\pi_{2}, \pi_{3})$ and all
$a_{d} =(a_{d, 1},a_{ d,2})\,:\,[1,\ldots, 2n]\mapsto \{ 0,1\} \times \{ 0,1\} $, with the
restriction that for each $d,i$ there is exactly one factor of the form $\Delta^{
h}_{ x_{\pi_{d} (i)}}$. (Here we define $(\Delta_{x_{i}}^{h})^{0}=1 $ and $\Delta_{x_{0}}^{h} =1 $. We have also set $\pi_{1} (0)=\pi_{3} (0)=0$ and $\pi_{2} (0)=\pi_{1} (2n)$.) In (\ref{f1.21gi}) we set ${\cal R}_{d}=\{\sum_{j=1}^{2n} r_{d,j}\leq t_{d}\}$, $p_{r}^{\sharp}(x)$ may be either
$p_{r} (x), p_{r} (x+h)$ or $p_{r} (x-h)$, and $\bar r_{d,j}= r_{d,j}$ unless $d=2,j=1$ in which case $\bar r_{2,1}=r_{2,1}+(t_{1}-\sum_{j=1}^{2n} r_{1,j})$.
It is important to recognize that in the right hand side of (\ref{f1.21gi}) each difference operator is applied to only one of the terms $p_{\cdot}(\cdot)$.
Instead of (\ref{f1.21gi}) we first analyze
\begin{eqnarray}
&&
\mathcal{T} _{h}( x;\,\pi ,a ) \label{f1.21gm} \\
&& = \prod_{d=1}^{3}\int_{{\cal R}_{d} } \prod_{ j=1}^{ 2n}\(\(\Delta^{ h}_{ x_{ \pi_{d}( j)}}\)^{a_{d, 1}(j)}
\(\Delta^{ h}_{ x_{ \pi_{d}( j-1)}}\)^{a_{d, 2}(j)}\,p_{\bar r_{d,j}}(x_{\pi_{d} (j)}-x_{\pi_{d} (j-1)})\)\nonumber \\
&&\hspace{4 in}\prod_{ j=1}^{ 2n}\,dr_{d,j}.\nonumber
\end{eqnarray}
This differs from (\ref{f1.21gi}) in that we have replaced all $p_{r}^{\sharp}(x)$ by
$p_{r} (x)$. As before, it will be seen that this has no effect on the asymptotics.
As before, we first consider the case that $a_{1}=a_{2}=a_{3}=e$
where $e=(e(1),\ldots,e(2n))$ and
$e(2j)=(1,1),\,e(2j-1)=(0,0)$, $j=1\ldots n$.
Let $\mathcal{P}=\{(l_{2i-1},l_{2i})\,,\,1\leq i\leq n\}$ be a pairing of the integers $[1,2n]$. Let $\pi_{1},\pi_{2}, \pi_{3}$ be permutations of $[1,2n]$ such that for each $1\leq d\leq 3, 1\leq j\leq n$,
$\{\pi_{d}(2j-1), \pi_{d}(2j)\}=\{l_{2i-1},l_{2i}\}$ for some, necessarily unique, $ 1\leq i\leq n$. In this case we say that $\pi $ is compatible with the pairing $\mathcal{P}$. (Note that $\{\pi_{d}(2j-1), \pi_{d}(2j)\} $ is not necessarily the same for each $d$.) Then by (\ref{f1.21gm})
\begin{eqnarray}
&&
\mathcal{T}_{h}( x;\,\pi,e) \label{f1.21gia}\\
&&=\prod_{d=1}^{3}\int_{\{\sum_{j=1}^{n} (r_{d,j}+s_{d,j})\leq t_{d}\} } \prod_{ j=1}^{ n}\(\Delta^{ h}\Delta^{ -h}
\,p_{r_{d,j}}(x_{\pi_{d}(2j)}-x_{\pi_{d}(2j-1)})\)\,dr_{d,j}\nonumber\\ &&\hspace{
1.1in}\times\prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(x_{\pi_{d}(2j-1)}-x_{\pi_{d}(2j-2)}) \,\, \,ds_{d,j},\nonumber
\end{eqnarray}
where again $\bar s_{d,j}= s_{d,j}$ unless $(d,j)=(2,1)$, in which case we have
$\bar s_{2,1}= s_{2,1}+\(t_{1}-\sum_{j=1}^{n} (r_{1,j}+s_{1,j})\)$.
Set $\sigma_{d} (j)=i $ when $\{\pi_{d}(2j-1), \pi_{d}(2j)\}=\{l_{2i-1},l_{2i}\} $. Using the approach of Sub-section \ref{ss-3.1t} together with the estimates of Lemma \ref{lem-vpropt} in place of the estimates of Lemma \ref{lem-vprop} (in estimating error terms we take absolute values of all integrands and extend the time integration of each term to $[0,T]$ with $T=2\max (t_{1},t_{2},t_{3})$) we can show that
\begin{equation}
\int
\mathcal{T}_{h}( x;\,\pi ,e )\prod_{ j=1}^{ 2n}\,dx_{j}=\int
\widetilde{\mathcal{T}}_{h}( x;\,\pi ,e )\prod_{ j=1}^{ 2n}\,dx_{j}+O(h^{4n+1/2 }) \label{9.43}
\end{equation}
where
\begin{eqnarray}
\lefteqn{
\widetilde{\mathcal{T}}_{h}( x;\,\pi ,e)
=\prod_{d=1}^{3}\int_{\{\sum_{j=1}^{n} (r_{d,j}+s_{d,j})\leq t_{d}\} } \prod_{ i=1}^{ n}\Delta^{ h}\Delta^{ -h}
\,p_{r_{d,i}}(x_{l_{2i}}-x_{l_{2i-1}})\, dr_{d,i}}\nonumber\\ &&\hspace{
1in}\times\prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(x_{l_{2\sigma_{d} (j)-1}}-x_{l_{2\sigma_{d} (j-1)-1}})\,\, \,ds_{d,j}\label{f1.21giaww}.
\end{eqnarray}
The fact that the error term in (\ref{9.43}) is $O(h^{4n+1/2 })$ and not $O(h^{4n+1 })$
is due to the fact that we use (\ref{9.30gb}) instead of (\ref{1.30gb}).
Let $\widetilde A_{h} (\pi ,e )$ denote the integral on the right hand side of (\ref{9.43}) so that
\begin{eqnarray}
\lefteqn{
\widetilde A_{h} (\pi ,e )\nonumber}\\
&&\label{f9.44d}=\int \prod_{d=1}^{3}\int_{\{\sum_{j=1}^{n} (r_{d,j}+s_{d,j})\leq t_{d}\} } \prod_{ i=1}^{ n}\Delta^{ h}\Delta^{ -h}
\,p_{r_{d,i}}(x_{l_{2i}}-x_{l_{2i-1}})\, dr_{d,i}\\ &&\hspace{
1in}\times\prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(x_{l_{2\sigma_{d} (j)-1}}-x_{l_{2\sigma_{d} (j-1)-1}})\,\, \,ds_{d,j}
\prod_{i=1}^{2n}\,dx_{i}.\nonumber
\end{eqnarray}
We make the change of variables $x_{l_{2i}}\to x_{l_{2i}}+x_{l_{2i-1}} $, $i=1,\ldots,n$ and write this as
\begin{eqnarray}
&& \int \prod_{d=1}^{3}\int_{\{\sum_{j=1}^{n} (r_{d,j}+s_{d,j})\leq t_{d}\} } \prod_{ i=1}^{ n}\Delta^{ h}\Delta^{ -h}
\,p_{r_{d,i}}(x_{l_{2i}})\, dr_{d,i}\nonumber\\ &&\hspace{
1in}\times\prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(x_{l_{2\sigma_{d} (j)-1}}-x_{l_{2\sigma_{d} (j-1)-1}})\,\, \,ds_{d,j}
\prod_{i=1}^{2n}\,dx_{i}.\nonumber
\end{eqnarray}
We now rearrange the integrals with respect to $x_{l_{2}},x_{l_{4}},\ldots,x_{l_{2n}}$ and get
\begin{eqnarray}
&& \widetilde A_{h} (\pi ,e )\label{simp.1} \\
&&=
\int \(\int_{\{\sum_{j=1}^{n} (r_{d,j}+s_{d,j})\leq t_{d}, \forall d\} } \right. \prod_{i=1}^{n} \(\int \( \prod_{d=1}^{3} \Delta^{ h}\Delta^{ -h}
\,p_{r_{d,i}}(x ) \) \,dx\) \nonumber\\
&&\hspace{
.2in} \times\prod_{d=1}^{3} \prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(x_{l_{2\sigma_{d} (j)-1}}-x_{l_{2\sigma_{d} (j-1)-1}})\,\left. \prod_{d=1}^{3}\prod_{i=1}^{n}\,ds_{d,i}\,dr_{d,i} \) \prod_{i=1}^{n}\,dx_{l_{2i-1}}.\nonumber
\end{eqnarray}
Let
\begin{eqnarray}
\lefteqn{ F(\sigma ;s)
\nonumber}\\
&&\,\,: = \int \prod_{d=1}^{3}\prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(x_{l_{2\sigma_{d} (j)-1}}-x_{l_{2\sigma_{d} (j-1)-1}}) \prod_{i=1}^{n}\,dx_{l_{2i-1}} \label{f9.2d}\\
&& \,\,\, =\int \prod_{d=1}^{3}\prod_{ j=1}^{ n}\, p_{\bar s_{d,j}}(y_{ \sigma_{d} (j) }-y_{ \sigma_{d} (j-1) })\, \prod_{i=1}^{n}\,dy_{i}, \nonumber
\end{eqnarray}
where we set $y_{i}=x_{l_{2i-1}}$.
We can now write
\begin{eqnarray}
&&
\widetilde A_{h} (\pi ,e )\nonumber\\
&&=\label{9.44b}
\int \int_{\{\sum_{i=1}^{n}(r_{d,i}+s_{d,i})\leq t_{d}, \forall d\}}
F(\sigma ;s)\prod_{d=1}^{3} \prod_{i=1}^{n} \,ds_{d,i} \\
&&
\hspace{1 in} \prod_{i=1}^{n} \(\int \prod_{d=1}^{3}\( \Delta^{ h}\Delta^{ -h}
\,p_{r_{d,i}}(x ) \) \,dx\)\prod_{d=1}^{3}\,dr_{d,i} .\nonumber
\end{eqnarray}
Using $2-e^{ih\la } -e^{-ih\la }=2-2\cos (\la h)=4\sin^{2} (\la h/2)$ we can write
\begin{eqnarray}
&&G_{h}(r):=\int \prod_{d=1}^{3}\( \Delta^{ h}\Delta^{ -h}
\,p_{r_{d,i}}(x ) \) \,dx
\label{fou.1}\\
&& =\int \(\prod_{d=1}^{3}\( {1 \over 2\pi}\int e^{ix\la_{d,i}}
\(2-e^{ih\la_{d,i}} -e^{-ih\la_{d,i}} \) e^{-r_{d,i}\la^{2}_{d,i}/2}\,d\la_{d,i} \) \) \,dx \nonumber\\
&& =\({4 \over 2\pi }\)^{3}\int \( \int e^{ix\sum_{d=1}^{3}\la_{d,i}} \prod_{d=1}^{3}
\sin^{2} (\la_{d,i} h/2) e^{-r_{d,i}\la^{2}_{d,i}/2}\,d\la_{d,i} \) \,dx \nonumber\\
&& =\({4 \over 2\pi }\)^{3} \int \( \int e^{ix\sum_{d=2}^{3}\la_{d,i}} \( \int
e^{ix \la_{1,i}}\sin^{2} (\la_{1,i} h/2) e^{-r_{1,i}\la^{2}_{1,i}/2}\,d\la_{1,i}\)\,dx\right. \nonumber \\
&& \hspace{2.5 in}\left. \prod_{d=2}^{3}
\sin^{2} (\la_{d,i} h/2) e^{-r_{d,i}\la^{2}_{d,i}/2}\,d\la_{d,i} \) \nonumber\\
&& ={4^{3} \over (2\pi)^{2}} \( \int
\sin^{2} (\la_{1,i} h/2) e^{-r_{1,i}\la^{2}_{1,i}/2}\right. \nonumber \\
&& \hspace{2.5 in}\left. \prod_{d=2}^{3}
\sin^{2} (\la_{d,i} h/2) e^{-r_{d,i}\la^{2}_{d,i}/2}\,d\la_{d,i} \) \nonumber
\end{eqnarray}
with $\la_{1,i}:=-\sum_{d=2}^{3}\la_{d,i}$ in the last equality. For the last equality we used Fourier inversion.
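For the reader's convenience we record the Fourier computation used in the second equality of (\ref{fou.1}), with the normalization $p_{r}(x)={1 \over 2\pi}\int e^{ix\la}e^{-r\la^{2}/2}\,d\la$ employed above:
\[\Delta^{ h}\Delta^{ -h}\,p_{r}(x)=2p_{r}(x)-p_{r}(x+h)-p_{r}(x-h)={1 \over 2\pi}\int e^{ix\la}\(2-e^{ih\la} -e^{-ih\la} \)e^{-r\la^{2}/2}\,d\la.\]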
Since $G_{h}, F\geq 0$, we have the following upper and lower bounds for $ \widetilde A_{h} (\pi ,e )$
\begin{eqnarray}
\lefteqn{ \widetilde A_{h} (\pi ,e )\nonumber}\\ &&\leq
\(\int_{[0,\infty)^{3}}\(\int \prod_{d=1}^{3}\(\Delta^{ h}\Delta^{ -h}\,p_{r_{d} }(x)\)
\,dx\)\, \prod_{d=1}^{3}\,dr_{d} \)^{n}\\ &&\hspace{
.5in}\times \int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}, \forall d\}}
F(\sigma ;s ) \prod_{d=1}^{3}\prod_{i=1}^{n} \,ds_{d,i} \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\lefteqn{ \widetilde A_{h} (\pi ,e )\nonumber}\\ &&\geq
\(\int_{[0,h]^{3}}\(\int \prod_{d=1}^{3}\(\Delta^{ h}\Delta^{ -h}\,p_{r_{d} }(x)\)
\,dx\)\, \,dr_{d} \)^{n}\label{f9.25gb}\\ &&\hspace{
.5in}\times \int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}-nh, \forall d\}}
F(\sigma ;s ) \prod_{d=1}^{3}\prod_{i=1}^{n} \,ds_{d,i} \nonumber .
\end{eqnarray}
We show that the two sides of the inequalities are asymptotically equivalent as $h\to 0$.
The following lemma is proved below.
\bl\label{lem-spread}
\begin{eqnarray}
\lefteqn{\int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}, \forall d\}}
F(\sigma ;s ) \prod_{d=1}^{3}\prod_{i=1}^{n} \,ds_{d,i}
\nonumber}\\
&& - \int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}-nh, \forall d\}}
F(\sigma ;s ) \prod_{d=1}^{3} \prod_{i=1}^{n} \,ds_{d,i} \label{f9.4d} \leq C_{T}h.\nonumber
\end{eqnarray}
\end{lemma}
Referring to (\ref{9.43}), using (\ref{big.1})-(\ref{big.2})
we see that
\begin{eqnarray}
&&
\int
\mathcal{T}_{h}( x;\,\pi ,e )\prod_{ j=1}^{ 2n}\,dx_{j}=\widetilde A_{h} (\pi ,e ) +O(h^{4n+1/2})
\label{f9.5}\\
&& =
(8h^{4} )^{n} \int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}, \forall d\}}
F(\sigma ;s ) \prod_{d=1}^{3}\prod_{i=1}^{n} \,ds_{d,i}+O(h^{4n+1/2}) \nonumber\\
&& =(8h^{4} )^{n}\nonumber\\
&&\hspace{.3 in}\int \(\prod_{d=1}^{3}\int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}\}}\prod_{ i=1}^{ n}\, p_{\bar s_{d,i}}(y_{\sigma_{d} (i)}-y_{\sigma_{d} (i-1)})\,\prod_{i=1}^{n} \,ds_{d,i}\) \prod_{i=1}^{n}\,dy_{i}\nonumber\\
&&\hspace{3 in}+O(h^{4n+1/2}). \nonumber
\end{eqnarray}
Recall that, in the paragraph containing (\ref{f1.21gia}), for a given pairing $\mathcal{P}=\{(l_{2i-1},l_{2i})\,,\,1\leq i\leq n\}$ of the integers $[1,2n]$, we define what it means for a collection of permutations $\pi=(\pi_{1},\pi_{2}, \pi_{3})$ of $[1,2n]$ to be compatible with $\mathcal{P}$. We write this as $(\pi_{1},\pi_{2}, \pi_{3}) \sim \mathcal{P}$. Obviously, there are many such collections. We can interchange the two elements of the pair $
\pi_{d}(2j-1), \pi_{d}(2j)$ without changing (\ref{f9.5}). There are $2^{3n}$ ways to do this. Furthermore, by permuting the pairs $\{\pi_{d}(2j-1), \pi_{d}(2j)\}$ we give rise in (\ref{f9.5}) to all possible permutations $\sigma_{d}$ of $[1,n]$. We thus obtain
\begin{eqnarray}
\lefteqn{ \sum_{(\pi_{1},\pi_{2}, \pi_{3})\sim \mathcal{P}}\int
\mathcal{T}_{h}( x;\,\pi ,e )\prod_{ j=1}^{ 2n}\,dx_{j} \nonumber}\\
&&\,\, =(2^{3}8h^{4} )^{n}\sum_{ \sigma }\int \(\prod_{d=1}^{3}\int_{\{\sum_{i=1}^{n} s_{d,i}\leq t_{d}\}}\right.
\nonumber\\
&&\hspace{1 in}\left. \prod_{ i=1}^{ n}\, p_{\bar s_{d,i}}(y_{\sigma_{d} (i)}-y_{\sigma_{d} (i-1)})\,\prod_{i=1}^{n} \,ds_{d,i}\) \prod_{i=1}^{n}\,dy_{i}+O(h^{4n+1/2}) \nonumber\\
&&=(64h^{4} )^{n}E\left\{\(\int L^{ x}_{ t_{1}}(L^{ x}_{ t_{2}}\circ\theta_{t_{1}}) \widetilde L^{ x}_{ t_{3}} \,dx\)^{ n}\right\}+O(h^{4n+1/2}). \label{f9.45r}
\end{eqnarray}
Here the sum in the second line runs over all triples of permutations $\sigma=(\sigma_{1},\sigma_{2}, \sigma_{3})$ of $\{1,\ldots, n\}$ and we set $\sigma_{d}(0)= 0$. The fourth line follows from Kac's moment formula.
Since there are ${( 2n)!\over 2^{ n}n!}$ pairings of the $2n$
elements $\{1,\ldots, 2n\}$ we see that
\begin{eqnarray} \quad
&&\sum_{ \mathcal{P}}\sum_{(\pi_{1},\pi_{2}, \pi_{3})\sim \mathcal{P}}\int \mathcal{T}_{h}( x;\,\pi ,e) \,\prod_{ j=1}^{ 2n}\,dx_{j}\label{f9.45s}\\
&& = {( 2n)!\over 2^{ n}n!} (64h^{4} )^{n}E\left\{\(\int L^{ x}_{ t_{1}}(L^{ x}_{ t_{2}}\circ\theta_{t_{1}}) \widetilde L^{ x}_{ t_{3}} \,dx\)^{ n}\right\}+O(h^{4n+1/2})\nonumber
\end{eqnarray}
where the first sum runs over all pairings $\mathcal{P}$ of $\{1,\ldots, 2n\}$.
Given the estimates of Lemma \ref{lem-vpropt} we can show as in Section 4 that the contributions to (\ref{f1.20gi}) from $(a_{1}, a_{2}, a_{3})= (e,e,e)$ for $\pi$ not compatible with a pairing is $O(h^{4n+1/2})$. The arguments of Section 4 will give a similar bound for $(a_{1}, a_{2}, a_{3})\neq (e,e,e)$ with one possible exception. This will happen if $\pi_{2}(1)=\pi_{1}(2n)$ so that the argument of the term $p_{\bar s_{2,1}}$ is zero, and two $\Delta$ operators are applied to this $p$. In that case, since $\Delta^{h}\Delta^{-h}p_{\bar s_{2,1}}(0)=2\Delta^{h}p_{\bar s_{2,1}}(0)$, we seem to have lost one $\Delta$ operator, all of which are used in Section 4 to obtain the required error estimate. The remedy will be found in the special nature of $\bar s_{2,1}$ as we now explain.
Instead of extending the time integration of each term to $[0,T]$, we first consider the region where
$\sum_{j=1}^{n} r_{1,j}+s_{1,j}\leq t_{1}/2$ so that $\bar s_{2,1}\geq t_{1}/2$. Then
\begin{equation}
|\Delta^{h}p_{\bar s_{2,1}}(0)|={c| e^{-h^{2}/2\bar s_{2,1}} -1| \over \bar s^{1/2}_{2,1}}\leq {c h^{2} \over \bar s_{2,1}^{3/2}}\leq c(t_{1})h^{2}.\label{cn.1}
\end{equation}
We then extend the time integration of each term to $[0,T]$ and proceed as before. On the other hand, if $\sum_{j=1}^{n} r_{1,j}+s_{1,j}\geq t_{1}/2$, then for some $j$ we have either
$r_{1,j}\geq \delta=:t_{1}/(4n)$ or $s_{1,j}\geq \delta$. Say it is the latter. We then use the $s_{1,j}$
integration for the bound, see (\ref{cn.1}),
\begin{eqnarray}
&& \int_{0}^{T}\int_{0}^{T}|\Delta^{h}p_{ s_{2,1}+s_{1,j}}(0)|\,ds_{2,1}\,ds_{1,j}
\label{cn.3}\\
&&\leq ch^{2} \int_{0}^{T}\int_{0}^{T}{ 1 \over (s_{2,1}+s_{1,j})^{3/2}}\,ds_{2,1}\,ds_{1,j}\leq ch^{2}\nonumber
\end{eqnarray}
and bound the other term involving $ s_{1,j} $, be it $ p_{s_{1,j}}(x), |\Delta^{h}p_{s_{1,j}}(x)|$ or $ |\Delta^{h}\Delta^{-h}p_{s_{1,j}}(x)|$, by its supremum over $s_{1,j}\geq \delta $, using Lemma \ref{lem-vpropd}.
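For the reader's convenience, here is the elementary calculus check behind the finiteness of the double integral in (\ref{cn.3}):
\begin{equation}
\int_{0}^{T}\int_{0}^{T}{1 \over (s_{2,1}+s_{1,j})^{3/2}}\,ds_{2,1}\,ds_{1,j}
=\int_{0}^{T}\({2 \over s_{1,j}^{1/2}}-{2 \over (T+s_{1,j})^{1/2}}\)\,ds_{1,j}
=(8-4\sqrt{2}\,)\,T^{1/2}<\infty.
\end{equation}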
{\hfill $\square$ \bigskip}
{\bf Proof of Lemma \ref{lem-spread}: }
Let $A\subseteq [0,t_{1}]^{n}\times [0,t_{2}]^{n}\times [0,t_{3}]^{n}$ and set
\begin{equation}
I(A)= \int_{A}
F(\sigma ;s)\prod_{d=1}^{3} \prod_{i=1}^{n} \,ds_{d,i}.\label{F.1}
\end{equation}
To prove Lemma \ref{lem-spread} it suffices to show that
\begin{equation}
I(A)\leq C_{T}|A|^{1/2}.\label{F.2}
\end{equation}
We have
\begin{eqnarray}
&&I(A)
\label{F.3}\\
&&=\int_{A} \(\int \prod_{d=1}^{3}\prod_{ j=1}^{ n}\, p_{s_{d,j}}(y_{ \sigma_{d} (j) }-y_{ \sigma_{d} (j-1) })\, \prod_{i=1}^{n}\,dy_{i}\) \prod_{d=1}^{3} \prod_{i=1}^{n} \,ds_{d,i}\nonumber\\
&&= \int \(\int_{A} \prod_{d=1}^{3}\prod_{ j=1}^{ n}\, p_{s_{d,j}}(y_{ \sigma_{d} (j) }-y_{ \sigma_{d} (j-1) })\, \,ds_{d,j} \) \prod_{i=1}^{n}\,dy_{i}.\nonumber
\end{eqnarray}
Then by the Cauchy-Schwarz inequality
\begin{eqnarray}
&&I(A)
\label{F.4}\\
&&\leq |A|^{1/2} \int \(\int_{[0,T]^{3n}} \prod_{d=1}^{3}\prod_{ j=1}^{ n}\, p^{2}_{s_{d,j}}(y_{ \sigma_{d} (j) }-y_{ \sigma_{d} (j-1) })\, \,ds_{d,j} \)^{1/2} \prod_{i=1}^{n}\,dy_{i}\nonumber\\
&&\leq |A|^{1/2} e^{3nT}\int \( \prod_{d=1}^{3}\prod_{ j=1}^{ n}\, f(y_{ \sigma_{d} (j) }-y_{ \sigma_{d} (j-1) }) \)^{1/2} \prod_{i=1}^{n}\,dy_{i}\nonumber
\end{eqnarray}
where
\begin{equation}
f(y)=\int_{0}^{\infty} e^{-s } p^{2}_{s }(y)\, \,ds. \label{F.5}
\end{equation}
Since $f(y)$ is the 1-potential density of planar Brownian motion evaluated at $(\sqrt{2}\,y,0)$, we know that $f(y)$ has a logarithmic singularity at $y=0$ and has exponential falloff at $\infty$, so that the last integral in (\ref{F.4}) is finite.
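Indeed, with the normalisation $p_{s}(y)=e^{-y^{2}/2s}/\sqrt{2\pi s}$ used throughout,
\begin{equation}
p^{2}_{s}(y)={e^{-y^{2}/s} \over 2\pi s}=q_{s}\big((\sqrt{2}\,y,0)\big),
\end{equation}
where $q_{s}(z)=e^{-|z|^{2}/2s}/(2\pi s)$ denotes the transition density of planar Brownian motion; hence $f(y)=\int_{0}^{\infty}e^{-s}\,q_{s}\big((\sqrt{2}\,y,0)\big)\,ds$ is precisely the $1$-potential density of planar Brownian motion evaluated at $(\sqrt{2}\,y,0)$.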
{\hfill $\square$ \bigskip}
\section{Proof of Lemma \ref{lem-var}}\label{sec-var}
\bl\label{lem-var}Fix $T<\infty$. For all $s,t\leq T$
\begin{eqnarray}
&&
E\bigg[ \(
\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{2}-4hL^{ x}_{t} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\)^{2}\bigg]=
32 h^{4}E\(\int ( L^{ x}_{t})^{2} \widetilde L^{ x}_{s} \,dx\)\nonumber\\
&&\hspace{2.6 in}+O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\).\label{va.0}
\end{eqnarray}
\end{lemma}
{\bf Proof of Lemma \ref{lem-var}: }In order to prove (\ref{va.0}) we must make use of the subtraction on the left hand side to eliminate all terms which are not $O\( h^{4 }\)$, then isolate the main contribution which is the first term on the right hand side, and estimate all error terms. As we will see the terms which are not $O\( h^{4 }\)$ come from `bound' variables. Because we are not using exponential times, the subtractions do not exactly eliminate all bound variables, which makes the analysis more complicated than in previous sections.
We first write
\begin{equation}
E\bigg[ \(
\int \left\{ (\Delta_{x}^{h}L^{ x}_{t})^{2}-4hL^{ x}_{t} \right\}\Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\)^{2}\bigg]=I_{1}-8hI_{2}+16h^{2}I_{3}\label{rv.1}
\end{equation}
where
\begin{eqnarray}
&&I_{1}=E\bigg[ \(
\int \(\Delta_{x}^{h}L^{ x}_{t} \)^{2} \Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\)^{2}\bigg] \nonumber\\
&&
=E\bigg[
\int \(\Delta_{x}^{h}L^{ x}_{t} \)^{2} \Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\int \(\Delta_{y}^{h}L^{ y}_{t} \)^{2} \Delta_{y}^{h}\widetilde L^{ y}_{s} \,dy\bigg]
\label{km.10a}\\
&&=\int\int E\( \(\Delta_{x}^{h}L^{ x}_{t} \)^{2} \(\Delta_{y}^{h}L^{ y}_{t} \)^{2} \)E\( \Delta_{x}^{h}\widetilde L^{ x}_{s} \Delta_{y}^{h}\widetilde L^{ y}_{s} \) \,dx\,dy \nonumber
\end{eqnarray}
\begin{eqnarray}
&&I_{2}=E\bigg[
\int L^{ x}_{t} \Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\int \(\Delta_{y}^{h}L^{ y}_{t} \)^{2} \Delta_{y}^{h}\widetilde L^{ y}_{s} \,dy\bigg]
\label{km.10b}\\
&&=\int\int E\( L^{ x}_{t} \(\Delta_{y}^{h}L^{ y}_{t} \)^{2} \)E\( \Delta_{x}^{h}\widetilde L^{ x}_{s} \Delta_{y}^{h}\widetilde L^{ y}_{s} \) \,dx\,dy \nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&I_{3}=E\bigg[ \(
\int L^{ x}_{t} \Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\)^{2}\bigg] \nonumber\\
&&
=E\bigg[
\int L^{ x}_{t} \Delta_{x}^{h}\widetilde L^{ x}_{s} \,dx\int L^{ y}_{t} \Delta_{y}^{h}\widetilde L^{ y}_{s} \,dy\bigg]
\label{km.10c}\\
&&=\int\int E\( L^{ x}_{t} L^{ y}_{t} \)E\( \Delta_{x}^{h}\widetilde L^{ x}_{s} \Delta_{y}^{h}\widetilde L^{ y}_{s} \) \,dx\,dy. \nonumber
\end{eqnarray}
By Kac's moment formula and (\ref{pr1}) we have
\begin{equation}
G_{s}(x,y) =:E\( \Delta_{x}^{h}\widetilde L^{ x}_{s} \Delta_{y}^{h}\widetilde L^{ y}_{s} \) =\int_{\{s_{1}+s_{2}\leq s\}} F_{s}(x,y) \,ds_{1}\,ds_{2}
\label{km.11g}
\end{equation}
where \begin{eqnarray}
&&
F_{s}(x,y)\label{km.20b}\\
&&=
\Delta^{h}p_{s_{1}}(x)\,\Delta^{h}p_{s_{2}}(y-x-h) +
p_{s_{1}}(x)\,\Delta^{h}\Delta^{-h}p_{s_{2}}(y-x)\nonumber\\
&&+
\Delta^{h}p_{s_{1}}(y)\,\Delta^{h}p_{s_{2}}(x-y-h) +
p_{s_{1}}(y)\,\Delta^{h}\Delta^{-h}p_{s_{2}}(x-y).\nonumber
\end{eqnarray}
For any $\epsilon>0$
\begin{eqnarray}
&&|G_{s}(x,y)| \leq cs^{\epsilon/2 }v^{1-\epsilon}_{s}(x)v_{s}(y-x-h)+cs^{\epsilon/2 }u^{1-\epsilon}_{s}(x)w_{s}(y-x)
\nonumber\\
&&\hspace{.5 in}+cs^{\epsilon/2 }v^{1-\epsilon}_{s}(y)v_{s}(x-y-h)+cs^{\epsilon/2 }u^{1-\epsilon}_{s}(y)w_{s}(x-y). \label{gbd.1}
\end{eqnarray}
To see this we note the bounds
\begin{equation}
\int_{0}^{s}p_{r}(x)\,dr\leq \int_{0}^{s}p_{r}(0)\,dr\leq cs^{1/2}\label{int.1}
\end{equation}
and
\begin{equation}
\int_{0}^{s}|\Delta^{h}p_{r}(x)|\,dr\leq 2\int_{0}^{s}p_{r}(0)\,dr\leq cs^{1/2}\label{int.1v}
\end{equation}
and interpolate to obtain
\begin{equation}
u_{s}(x)\leq cs^{\epsilon/2 }u^{1-\epsilon}_{s}(x),\,\hspace{.4 in}v_{s}(x)\leq cs^{\epsilon/2 }v^{1-\epsilon}_{s}(x). \label{int.2}
\end{equation}
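Explicitly (recalling that $u_{s}$ and $v_{s}$ denote the time integrals bounded in (\ref{int.1}) and (\ref{int.1v})), the interpolation is just
\begin{equation}
u_{s}(x)=u^{\epsilon}_{s}(x)\,u^{1-\epsilon}_{s}(x)\leq \(cs^{1/2}\)^{\epsilon}u^{1-\epsilon}_{s}(x)\leq cs^{\epsilon/2 }u^{1-\epsilon}_{s}(x),
\end{equation}
and similarly for $v_{s}$, using (\ref{int.1v}).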
It follows from (\ref{gbd.1}) and Lemma \ref{lem-vpropt} that for any $\epsilon>0$
\begin{equation}
\int |G_{s}(x,y)|\,dx\,dy\leq c s^{\epsilon/2 }h^{2-\epsilon}.\label{gbd.2}
\end{equation}
Clearly
\begin{equation}
E\( L^{ x}_{t} L^{ y}_{t} \)=\int_{\{t_{1}+t_{2}\leq t\}} \(A_{t_{1},t_{2} }(x,y)+A_{t_{1},t_{2} }(y,x)\)\,dt_{1}\,dt_{2}
\label{km.11c}
\end{equation}
where
\begin{equation}
A_{t_{1},t_{2} }(x,y)=
p_{t_{1}}(x)p_{t_{2}}(y-x). \label{km.20a}
\end{equation}
By Kac's moment formula and (\ref{pr1}), compare (\ref{1.21g}),
\begin{eqnarray}
&&E\( L^{ x}_{t} \(\Delta_{y}^{h}L^{ y}_{t} \)^{2} \)
\label{km.11b}\\
&&=2 \sum_{\pi',a' } \int_{\{\sum_{i=1}^{3}t_{i}\leq t\}}\prod_{i=1}^{3}
\(\Delta_{ \pi'(i)}^{h}\)^{a'_{1}(i)}
\(\Delta_{ \pi'(i-1)}^{h}\)^{a'_{2}(i)}p^{\sharp}_{t_{i}}( \pi'(i) - \pi'(i-1) )\,dt_{i} \nonumber
\end{eqnarray}
where the sum runs over all maps $\pi'\,:\,[1,2, 3]\mapsto
\{x,y\}$ with $|\pi'^{ -1}(x )|=1,\,|\pi'^{ -1}(y )|=2$, and all `assignments'
$a'=(a'_{ 1},a'_{ 2})\,:\,[1,2, 3]\mapsto \{ 0,1\}\times \{ 0,1\}$ with the
property that there will be exactly two factors of the form $\Delta^{
h}_{ y}$ in (\ref{km.11b}) and none of the form $\Delta^{
h}_{ x}$. The factor $2$ comes from
the fact that $|\pi'^{ -1}(x )|=1,\,|\pi'^{ -1}(y )|=2$. Recall that $p^{\sharp}_{t }(x)$ can be
$p_{t }(x), p_{t }(x+h)$ or $p_{t }(x-h)$, but we always have $\Delta^{h}\Delta^{-h}p^{\sharp}_{t }(x)=\Delta^{h}\Delta^{-h}p_{t }(x)$. Also, we always take the $p_{t }(\cdot)$ for a bound variable to be the $g$ in (\ref{pr1}).
Bound variables can come only from $\pi'_{1}=(x,y,y)$ and $\pi'_{2}=( y,y,x)$. Setting
\begin{equation}
f_{t}(h)=p_{t }(0)-p_{t }(h)\label{eff}
\end{equation}
we can write the contributions of $\pi'_{1}$ and $\pi'_{2}$ arising from a bound variable as
\begin{eqnarray}
&&\widetilde D_{\pi'_{1},t}(x,y)=p_{t_{1}}(x) \, p_{t_{2}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{3}}(0)\)
\label{km.11ba}\\
&&\hspace{.7 in}=2p_{t_{1}}(x) \, p_{t_{2}}(y-x)\,
f_{t_{3}}(h) \nonumber\\
&&\hspace{.7 in}=2A_{t_{1},t_{2} }(x,y)\,
f_{t_{3}}(h) \nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&\widetilde D_{\pi'_{2},t}(x,y)=p_{t_{1}}(y)\,
\(\Delta^{h}\Delta^{-h}p_{t_{2}}(0)\)\, p_{t_{3}}(x-y)
\label{km.11bb}\\
&& \hspace{.6 in} =2 p_{t_{1}}(y)\,
f_{t_{2}}(h)\, p_{t_{3}}(x-y) \nonumber\\
&& \hspace{.6 in} = 2A_{t_{1},t_{3} }(y,x)\,
f_{t_{2}}(h). \nonumber
\end{eqnarray}
The non-bound contributions for $\pi'_{1}$ and $\pi'_{2}$ are
\begin{equation}
\widetilde B_{\pi'_{1},t}(x,y)=p^{\sharp}_{t_{1}}(x) \, \Delta^{h}p^{\sharp}_{t_{2}}(y-x)\,
\Delta^{h}p_{t_{3}}(0)
\end{equation}
and
\begin{eqnarray}
&&\widetilde B_{\pi'_{2},t}(x,y)= p^{\sharp}_{t_{1}}(y)\, \, \Delta^{-h}p^{\sharp}_{t_{2}}(0) \, \Delta^{-h}p^{\sharp}_{t_{3}}(x-y)
\label{km.11bc}\\
&& \hspace{.6 in} + \Delta^{h}p^{\sharp}_{t_{1}}(y)\, \,p^{\sharp}_{t_{2}}(0) \, \Delta^{-h}p^{\sharp}_{t_{3}}(x-y) \nonumber\\
&& \hspace{.6 in}+ \Delta^{h}p^{\sharp}_{t_{1}}(y)\, \, \Delta^{h}p^{\sharp}_{t_{2}}(0) \, p^{\sharp}_{t_{3}}(x-y). \nonumber
\end{eqnarray}
and in addition there is a term from $\pi'_{3}=(y,x,y)$ which is
\begin{eqnarray}
&&\widetilde B_{\pi'_{3},t}(x,y)= \Delta^{h} p^{\sharp}_{t_{1}}(y)\,p^{\sharp}_{t_{2}}(x-y) \,\Delta^{h} p^{\sharp}_{t_{3}}(y-x)
\label{km.11bd}\\
&& \hspace{.6 in} + p^{\sharp}_{t_{1}}(y)\, \Delta^{-h} p^{\sharp}_{t_{2}}(x-y) \,\Delta^{h} p^{\sharp}_{t_{3}}(y-x).
\nonumber
\end{eqnarray}
We observe that by (\ref{int.2}) and Lemma \ref{lem-vpropt}, for any $1\leq j \leq 3$
\begin{equation}
\sup_{x,y} \int_{\{\sum_{i=1}^{3}t_{i}\leq t\}} |\widetilde B_{\pi'_{j}, t}(x,y)|\prod_{i=1}^{3}\,dt_{i} \leq ct^{\epsilon/2}h^{2-\epsilon}.\label{sb.1}
\end{equation}
Hence in view of (\ref{gbd.2}) we see that for any $\epsilon>0$ and $1\leq j \leq 3$
\begin{equation}
h\int \( \int_{\{\sum_{i=1}^{3}t_{i}\leq t\}} | \widetilde B_{\pi'_{j}, t}(x,y)|\prod_{i=1}^{3}\,dt_{i}\)|G_{s}(x,y)|\,dx\,dy=O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\).\label{gbd.22}
\end{equation}
Similarly
\begin{eqnarray}
&&E\( \(\Delta_{x}^{h}L^{ x}_{t} \)^{2} \(\Delta_{y}^{h}L^{ y}_{t} \)^{2} \)
\label{km.11a}\\
&&= 4 \sum_{\pi,a } \int_{\{\sum_{i=1}^{4}t_{i}\leq t\}}\prod_{i=1}^{4}
\(\Delta_{ \pi(i)}^{h}\)^{a_{1}(i)}
\(\Delta_{ \pi(i-1)}^{h}\)^{a_{2}(i)}p^{\sharp}_{t_{i}}( \pi(i) - \pi(i-1) )\,dt_{i} \nonumber
\end{eqnarray}
where the sum runs over all maps $\pi\,:\,[1,\ldots, 4]\mapsto
\{x,y\}$ with $|\pi^{ -1}(x )|=|\pi^{ -1}(y )|=2$, and all `assignments'
$a=(a_{ 1},a_{ 2})\,:\,[1,\ldots, 4]\mapsto \{ 0,1\}\times \{ 0,1\}$ with the
property that there will be exactly two factors of the form $\Delta^{
h}_{ x}$ in (\ref{km.11a}) and similarly for $\Delta^{
h}_{ y}$. The factor $4=2^{ 2}$ comes from
the fact that $|\pi^{ -1}(x )|=|\pi^{ -1}(y )|=2$.
Writing $\pi$ as a sequence $(\pi(1),\pi(2),\pi(3),\pi(4))$, we first consider
$\pi_{1}=(x,x,y,y)$ and $\pi_{2}=( y,y,x,x)$. These are the only $\pi$'s which have two bound variables. We can write the contribution of $\pi_{1}$ arising from two bound variables as
\begin{eqnarray}
&&
D_{\pi_{1},t}(x,y)=p_{t_{1}}(x)\,\(\Delta^{h}\Delta^{-h}p_{t_{2}}(0)\) p_{t_{3}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{4}}(0)\)\nonumber\\
&&\hspace{.6 in}=4 p_{t_{1}}(x)\,f_{t_{2}}(h) p_{t_{3}}(y-x)\, f_{t_{4}}(h)
\nonumber\\
&&\hspace{.6 in}=4 f_{t_{2}}(h) \, f_{t_{4}}(h) A_{t_{1},t_{3} }(x,y)
\label{km.11ab}
\end{eqnarray}
and similarly
\begin{equation}
D_{\pi_{2},t}(x,y)=4f_{t_{2}}(h) \, f_{t_{4}}(h) A_{t_{1},t_{3} }(y,x). \label{dp2}
\end{equation}
The contribution of $\pi_{1}$ arising from one bound variable is
\begin{eqnarray}
&&
B_{\pi_{1},t}(x,y)=p^{\sharp}_{t_{1}}(x)\,\(\Delta^{-h}p^{\sharp}_{t_{2}}(0)\)\,\Delta^{-h} p^{\sharp}_{t_{3}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{4}}(0)\)\nonumber\\
&& \hspace{.6 in}+ \Delta^{h}p^{\sharp}_{t_{1}}(x)\, p^{\sharp}_{t_{2}}(0) \,\Delta^{-h} p^{\sharp}_{t_{3}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{4}}(0)\) \nonumber\\
&& \hspace{.6 in}+ \Delta^{h}p^{\sharp}_{t_{1}}(x)\, \Delta^{h}p^{\sharp}_{t_{2}}(0) \, p^{\sharp}_{t_{3}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{4}}(0)\) \label{km.11ac}\\
&& \hspace{.6 in}+ p^{\sharp}_{t_{1}}(x)\,\(\Delta^{h}\Delta^{-h}p_{t_{2}}(0)\) \, \Delta^{h}p^{\sharp}_{t_{3}}(y-x)\,
\Delta^{h}p^{\sharp}_{t_{4}}(0) \nonumber
\end{eqnarray}
and similar terms for $\pi_{2}$.
There is also a contribution of $\pi_{3}=(x,y,y,x)$ arising from one bound variable:
\begin{eqnarray}
&&
B_{\pi_{3},t}(x,y)= \Delta^{h}p^{\sharp}_{t_{1}}(x)\, \, p^{\sharp}_{t_{2}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{3}}(0)\)\, \Delta^{h}p^{\sharp}_{t_{4}}(x-y) \nonumber\\
&& \hspace{.6 in}+ p^{\sharp}_{t_{1}}(x)\, \, \Delta^{-h}p^{\sharp}_{t_{2}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{3}}(0)\)\, \Delta^{h}p^{\sharp}_{t_{4}}(x-y) \nonumber\\
&& \hspace{.6 in}+ \Delta^{h}p^{\sharp}_{t_{1}}(x)\, \, \Delta^{-h}p^{\sharp}_{t_{2}}(y-x)\,
\(\Delta^{h}\Delta^{-h}p_{t_{3}}(0)\)\, p^{\sharp}_{t_{4}}(x-y) \nonumber
\end{eqnarray}
and similar terms for $\pi_{4}=(y,x,x,y)$.
As before, we observe that by (\ref{int.2}) and Lemma \ref{lem-vpropt}, for any $1\leq j \leq 4$
\begin{equation}
\sup_{x,y} \int_{\{\sum_{i=1}^{4}t_{i}\leq t\}} | B_{\pi_{j}, t}(x,y)|\prod_{i=1}^{4}\,dt_{i} \leq ct^{\epsilon/2}h^{3-\epsilon}.\label{sb.2}
\end{equation}
Hence in view of (\ref{gbd.2}) we see that for any $\epsilon>0$ and $1\leq j \leq 4$
\begin{equation}
\int \( \int_{\{\sum_{i=1}^{4}t_{i}\leq t\}} | B_{\pi_{j}, t}(x,y)|\prod_{i=1}^{4}\,dt_{i}\)|G_{s}(x,y)|\,dx\,dy=O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\).\label{gbd.2c}
\end{equation}
Taking note of the factor $4$ in (\ref{km.11a}) and the factor $2$ in (\ref{km.11b}) we now show that
\begin{eqnarray}
&&
4\int\(\int_{\{\sum_{i=1}^{4}t_{i}\leq t\}} \( D_{\pi_{1},t}(x,y)+ D_{\pi_{2},t}(x,y)\)\prod_{i=1}^{4}\,dt_{i}\)G_{s}(x,y)\,dx\,dy \nonumber\\
&&-16h \int \(\int_{\{\sum_{i=1}^{3}t_{i}\leq t\}} \(\widetilde D_{\pi'_{1},t}(x,y)+\widetilde D_{\pi'_{2},t}(x,y)\) \prod_{i=1}^{3}\,dt_{i}\)G_{s}(x,y)\,dx\,dy\nonumber\\
&&+ 16h^{2}\int \(\int_{\{t_{1}+t_{2}\leq t\}} \(A_{t_{1},t_{2}}(x,y)+A_{t_{1},t_{2}}(y,x)\)\,dt_{1}\,dt_{2}\)G_{s}(x,y)\,dx\,dy\nonumber\\
&&=O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\).\label{bd.2a}
\end{eqnarray}
We begin by rewriting (\ref{bd.2a}). By symmetry it suffices to show that
\begin{eqnarray}
&&
8\int\(\int_{\{\sum_{i=1}^{4}t_{i}\leq t\}} D_{\pi_{1},t}(x,y) \prod_{i=1}^{4}\,dt_{i}\) G_{s}(x,y)\,dx\,dy\label{bd.3a}\\
&&-32h\int \(\int_{\{\sum_{i=1}^{3}t_{i}\leq t\}} \widetilde D_{\pi'_{1},t}(x,y) \prod_{i=1}^{3}\,dt_{i}\) G_{s}(x,y)\,dx\,dy\nonumber\\
&&+ 32h^{2}\int\(\int_{\{t_{1}+t_{2}\leq t\}} A_{t_{1},t_{2}}(x,y) \,dt_{1}\,dt_{2}\) G_{s}(x,y)\,dx\,dy=O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\).\nonumber
\end{eqnarray}
Using the above expressions for $D_{\pi_{1},t}(x,y), \widetilde D_{\pi'_{1},t}(x,y)$ and relabeling the $t_{i}$'s, this is equivalent to showing that
\begin{eqnarray}
&&
32\int\(\int_{\{\sum_{i=1}^{4}t_{i}\leq t\}} A_{t_{1},t_{2}}(x,y) f_{t_{3}}(h) f_{t_{4}}(h)\prod_{i=1}^{4}\,dt_{i}\) G_{s}(x,y)\,dx\,dy\label{bd.3ab}\\
&&-64h\int \(\int_{\{\sum_{i=1}^{3}t_{i}\leq t\}} A_{t_{1},t_{2}}(x,y) f_{t_{3}}(h) \prod_{i=1}^{3}\,dt_{i}\) G_{s}(x,y)\,dx\,dy\nonumber\\
&&+ 32h^{2}\int\(\int_{\{t_{1}+t_{2}\leq t\}} A_{t_{1},t_{2}}(x,y) \,dt_{1}\,dt_{2}\) G_{s}(x,y)\,dx\,dy=O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\).\nonumber
\end{eqnarray}
This comes down to making precise the intuitive notion that
$ f_{r}(h)$ is $h$ times a delta-function in $r$ (in which case the left hand side would vanish).
To this end we note
\begin{equation}
\int_{0}^{\infty} f_{r}(h)\,dr= \int_{0}^{\infty} (p_{r}(0)-p_{r}(h))\,dr=h\label{bd.6}
\end{equation}
and for any $\delta>0$
\begin{equation}
\int_{\delta}^{\infty} f_{r}(h)\,dr=\int_{\delta}^{\infty} {1-e^{-h^{2}/2r} \over \sqrt{2\pi r}} \,dr\leq \int_{\delta}^{\infty} { h^{2}/2r\over \sqrt{2\pi r}} \,dr=O(h^{2}/\sqrt{\delta}).\label{kacv.7aa}
\end{equation}
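The identity (\ref{bd.6}) can be checked via the resolvent formula $u^{\alpha}(x)=\int_{0}^{\infty}e^{-\alpha r}p_{r}(x)\,dr=e^{-\sqrt{2\alpha}\,|x|}/\sqrt{2\alpha}$: since $f_{r}(h)\geq 0$, monotone convergence gives
\begin{equation}
\int_{0}^{\infty} f_{r}(h)\,dr=\lim_{\alpha\downarrow 0}\(u^{\alpha}(0)-u^{\alpha}(h)\)=\lim_{\alpha\downarrow 0}{1-e^{-\sqrt{2\alpha}\,h} \over \sqrt{2\alpha}}=h,
\end{equation}
cf.\ (\ref{1.8a}).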
We also note that
\begin{eqnarray}
&&\int_{\{t-2h^{\epsilon'}\leq t_{1}+t_{2}\leq t \}}p_{t_{1}}(x)\, p_{t_{2}}(y-x)\,
\,dt_{1}\,dt_{2}
\label{bd.5a}\\
&&\leq c\int_{\{t-2h^{\epsilon'}\leq t_{1}+t_{2}\leq t \}}{1 \over \sqrt{t_{1}}}\, {1 \over \sqrt{t_{2}}}\,
\,dt_{1}\,dt_{2}\leq Ct^{2/3}h^{\epsilon'/4}. \nonumber
\end{eqnarray}
We then write
\begin{eqnarray}
&&\int_{\{\sum_{i=1}^{4}t_{i}\leq t\}} A_{t_{1},t_{2}}(x,y) f_{t_{3}}(h) f_{t_{4}}(h)\prod_{i=1}^{4}\,dt_{i}
\label{db.10}\\
&&=\(\int_{\{t_{1}+t_{2}\leq t-2h^{\epsilon'}\}} A_{t_{1},t_{2}}(x,y) \,dt_{1}\,dt_{2} \)\(\int_{0}^{h^{\epsilon'}}f_{r}(h)\,dr\)^{2} \nonumber\\
&&+\int_{ C(t,h)} A_{t_{1},t_{2}}(x,y) f_{t_{3}}(h) f_{t_{4}}(h)\prod_{i=1}^{4}\,dt_{i} \nonumber
\end{eqnarray}
where
\begin{eqnarray}
&&
C(t,h)=\{\sum_{i=1}^{4}t_{i}\leq t\}-\{ t_{1}+t_{2}\leq t-2h^{\epsilon'}\}\times \{t_{3},t_{4}\leq h^{\epsilon'}\}\label{bd.5}\\
&&\hspace{.6 in}\subseteq \([0,t]^{4}\cap \{t_{3},t_{4}\leq h^{\epsilon'}\}^{c}\)\cup
\{t-2h^{\epsilon'}\leq t_{1}+t_{2}\leq t \}.\nonumber
\end{eqnarray}
Using (\ref{bd.6})-(\ref{bd.5a}) we see that for $\epsilon'$ small
\begin{eqnarray}
&&\(\int_{\{t_{1}+t_{2}\leq t-2h^{\epsilon'}\}} A_{t_{1},t_{2}}(x,y) \,dt_{1}\,dt_{2} \)\(\int_{0}^{h^{\epsilon'}}f_{r}(h)\,dr\)^{2}
\label{db.11}\\
&&=h^{2} \int_{\{t_{1}+t_{2}\leq t\}} A_{t_{1},t_{2}}(x,y) \,dt_{1}\,dt_{2}+O(t^{2/3}h^{2+\epsilon'/4}) \nonumber
\end{eqnarray}
and
\begin{equation}
\int_{ C(t,h)} A_{t_{1},t_{2}}(x,y) f_{t_{3}}(h) f_{t_{4}}(h)\prod_{i=1}^{4}\,dt_{i} =O(t^{2/3}h^{2+\epsilon'/4}).\label{db.12}
\end{equation}
A similar analysis applies to the second term in (\ref{bd.3ab}). Then taking $\epsilon'=8\epsilon$ and using (\ref{gbd.2}) completes the proof of
(\ref{bd.3ab}).
We have now dealt with all terms coming from $I_{2}, I_{3}$ and it only remains to consider the contribution of non-bound variables to $I_{1}$. We will show that this is
\begin{eqnarray}
&& 32 h^{4}E\(\int ( L^{ x}_{t})^{2} \widetilde L^{ x}_{s} \,dx\)+ O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\). \label{bd.2c}
\end{eqnarray}
The proof of (\ref{bd.2c}) follows closely the proof of Lemma \ref{lem-3.1j}. The main contribution comes from $\pi=(x,y,x,y)$ or $ (y,x,y, x)$ and $a=e$. Taking $\pi=(x,y,x,y)$ and $a=e$ we have
\begin{eqnarray}
&& 4 \int\( \int_{\{\sum_{i=1}^{4}t_{i}\leq t\}}p_{t_{1}}( x ) \Delta^{h}\Delta^{-h}p_{t_{2}}(y- x )p_{t_{3}}(y- x )\Delta^{h}\Delta^{-h}p_{t_{4}}(y- x )\right.\nonumber\\
&& \left. \hspace{3 in}\prod_{i=1}^{4}\,dt_{i}\) G_{s}(x,y)\,dx\,dy.
\label{bd.14}
\end{eqnarray}
Since as before
\begin{eqnarray}
&& |\int_{\{\sum_{i=1}^{4}t_{i}\leq t\}}p_{t_{1}}( x ) \Delta^{h}\Delta^{-h}p_{t_{2}}(y- x )p_{t_{3}}(y- x )\Delta^{h}\Delta^{-h}p_{t_{4}}(y- x )\prod_{i=1}^{4}\,dt_{i} |
\nonumber\\
&&\hspace{1 in}\leq ct^{\epsilon/2} u^{1-\epsilon}_{t}(x)u_{t}(y-x)w^{2}_{t}(y-x), \label{bd.15}
\end{eqnarray}
we see that up to terms that are $O\((s\wedge t)^{\epsilon}h^{4+\epsilon}\)$ we can replace $G_{s}(x,y)$ in (\ref{bd.14})
by
\begin{equation}
\int_{\{s_{1}+s_{2}\leq s\}}\(p_{s_{1}}(x)\,\Delta^{h}\Delta^{-h}p_{s_{2}}(y-x)+p_{s_{1}}(y)\,\Delta^{h}\Delta^{-h}p_{s_{2}}(x-y)\)\,ds_{1}\,ds_{2}.\label{bd.16}
\end{equation}
Thus consider
\begin{eqnarray}
&& 4 \int\( \int_{\{\sum_{i=1}^{4}t_{i}\leq t\}}p_{t_{1}}( x ) \Delta^{h}\Delta^{-h}p_{t_{2}}(y- x )p_{t_{3}}(y- x )\Delta^{h}\Delta^{-h}p_{t_{4}}(y- x )\right.\nonumber\\
&& \left. \hspace{.5 in}\prod_{i=1}^{4}\,dt_{i}\)\( \int_{\{s_{1}+s_{2}\leq s\}} p_{s_{1}}(x)\,\Delta^{h}\Delta^{-h}p_{s_{2}}(y-x) \,ds_{1}\,ds_{2} \)\,dx\,dy.
\label{bd.17}
\end{eqnarray}
It now follows as in the proof of Lemma \ref{lem-3.1j} that up
to the error terms allowed in (\ref{bd.2c}) this is equal to
\begin{equation}
16h^{4}\int\( \int_{\{ t_{1}+ t_{2}\leq t\}}p_{t_{1}}( x ) p_{t_{2}}(0) \,dt_{1} \,dt_{2}\)
\( \int_{\{ s_{1} \leq s\}}p_{s_{1}}( x ) \,ds_{1} \)\,dx.\label{bd.18}
\end{equation}
The second term in (\ref{bd.16}) gives the same contribution since, up to another error term, we can replace $p_{s_{1}}(y)$ by $p_{s_{1}}(x)$. There is a similar contribution from $ \pi= (y,x,y, x)$. Thus altogether we have
\begin{equation}
64h^{4}\int\( \int_{\{ t_{1}+ t_{2}\leq t\}}p_{t_{1}}( x ) p_{t_{2}}(0) \,dt_{1} \,dt_{2}\)
\( \int_{\{ s_{1} \leq s\}}p_{s_{1}}( x ) \,ds_{1} \)\,dx.\label{bd.18a}
\end{equation}
Since by Kac's moment formula
\begin{eqnarray}
&&E\(\int ( L^{ x}_{t})^{2} \widetilde L^{ x}_{s} \,dx\)
\label{bd.19}\\
&& =2 \int\( \int_{\{ t_{1}+ t_{2}\leq t\}}p_{t_{1}}( x ) p_{t_{2}}(0) \,dt_{1} \,dt_{2}\)
\( \int_{\{ s_{1} \leq s\}}p_{s_{1}}( x ) \,ds_{1} \)\,dx \nonumber
\end{eqnarray}
we obtain the main contribution to (\ref{bd.2c}). The fact that all remaining $\pi,a$ give error terms is now easy and left to the reader.{\hfill $\square$ \bigskip}
\section{Proof of Lemmas \ref{lem-vprop}--\ref{lem-big}}\label{sec-Prooflemvprop}
{\bf Proof of Lemma \ref{lem-vprop}: }
Since
\begin{eqnarray} \lefteqn{
\Delta_{ x}^{ h}\Delta_{ y}^{ h} u^{\alpha}(x-y)\label{1.8w}}\\ && =
\{u^{\alpha}(x-y)-u^{\alpha}(x-y-h)\}-
\{u^{\alpha}(x-y+h)-u^{\alpha}(x-y)\}\nonumber
\end{eqnarray}
we have
\begin{eqnarray} &&
\Delta_{ x}^{ h}\Delta_{ y}^{ h} u^{\alpha}(x-y)\Bigg\vert_{ y=x} =
\{u^{\alpha}(0)-u^{\alpha}(-h)\}-
\{u^{\alpha}(h)-u^{\alpha}(0)\}\nonumber\\ &&\hspace{ 1in}=2(u^{\alpha}(0)-u^{\alpha}(h) )=
2\({1-e^{-\sqrt{2\alpha}\,h} \over \sqrt{2\alpha}}\),\label{1.8a}
\end{eqnarray}
which gives (\ref{1.8}).
To obtain (\ref{1.3x}) we note that
\begin{equation}
\Delta_{x}^{ h}\,u^{\alpha}(x)=\({e^{-\sqrt{2\alpha}|x+h|} -e^{-\sqrt{2\alpha}|x|} \over \sqrt{2\alpha}}\).\label{pot.3ow}
\end{equation}
Therefore
\begin{eqnarray}
|\Delta_{x}^{ h}\,u^{\alpha}(x)|&\le&{ e^{-\sqrt{2\alpha}|x |}\over \sqrt{2\alpha}}\left| e^{ \sqrt{2\alpha}(|x|-|x+h|)}-1\right| \label{pot.3owa}\\
&\le& e^{-\sqrt{2\alpha}|x |}\( ||x|-|x+h||+O( ||x|-|x+h||^{2})\) \nonumber
\end{eqnarray}
which gives (\ref{1.3x}) (since we allow $C$ to depend on $\alpha$).
To obtain (\ref{1.3y}) we simply note that
\begin{equation}
|\Delta^{ h}\Delta^{ -h}\,u^{\alpha}( x)|=|2u^{\alpha}( x)-u^{\alpha}( x+h)-u^{\alpha}( x-h)|\leq 2v^{\alpha}( x)\label{ff.4}
\end{equation}
where we used the fact that $u^{\alpha}( x)$ is an even function. The first part of (\ref{1.3y}) then follows from (\ref{1.3x}).
When $|x|\geq h$ we have
\begin{eqnarray}
\Delta^{ h}\Delta^{ -h}\,u^{\alpha}( x)&=& 2u^{\alpha}( x)-u^{\alpha}( x+h)-u^{\alpha}( x-h)\label{1.26gd}\\
&=& u^{\alpha}( x)\(2-e^{-\sqrt{2\alpha}\,h}-e^{\sqrt{2\alpha}\,h}\)\nonumber .
\end{eqnarray}
The statement in (\ref{1.30gb}) follows trivially from (\ref{1.3y}).
For (\ref{1.30g}) we note that for $|x|\leq h$
\begin{eqnarray}
\Delta^{ h}\Delta^{ -h}\,u^{\alpha}( x)&=& 2u^{\alpha}( x)-u^{\alpha}( x+h)-u^{\alpha}( x-h)\label{1.26g}\\ &=& (1-u^{\alpha}(
x+h))+(1-u^{\alpha}( x-h))-2(1-u^{\alpha}( x))\nonumber\\ &=& | x+h|+ | x-h|-2 | x|+O( h^{ 2}).\nonumber
\end{eqnarray}
When $0\leq x\leq h$ we therefore have
\begin{equation}
\Delta^{ h}\Delta^{ -h}\,u^{\alpha}( x)=x+h+h-x-2x+O( h^{ 2})=( 2+O( h))(h-x).\label{1.27g}
\end{equation}
Consequently
\begin{eqnarray}
\int_{0}^{ h} \(\Delta^{ h}\Delta^{ -h}\,u^{\alpha}(x)\)^{q} \,dx \nonumber
&=& ( 2^{q}+O( h))\int_{0}^{
h}(h-x)^{q}\,dx\\
&=&( 2^{q}/(q+1)+O( h))h^{ q+1}.\label{1.28g}
\end{eqnarray}
Similarly, when $-h\leq x\leq 0$ it follows from (\ref{1.26g}) that
\begin{equation}
\Delta^{ h}\Delta^{ -h}\,u^{\alpha}( x)=h-x+x+h+2x+O( h^{ 2})=( 2+O( h))(h+x).\label{1.29g}
\end{equation}
Consequently
\begin{eqnarray} \int_{-h}^{ 0}\(\Delta^{ h}\Delta^{ -h}\,u^{\alpha}(x)\)^{q} \,dx&=& ( 2^{q}+O( h))\int_{-h}^{
0}(h+x)^{q}\,dx\nonumber\\
&=&(2^{q}/(q+1)+O( h))h^{ q+1}.\label{1.30gx}
\end{eqnarray}
Using (\ref{1.28g}), (\ref{1.30gx}) and (\ref{1.30gb}) we get (\ref{1.30g}).
To obtain (\ref{li.13}) we write
\begin{eqnarray}
&&\int |\Delta^{ h}\Delta^{- h}\,u^{\alpha}(y) |^{q}\,dy
\label{li.13a}\\
&&\qquad= \int_{|y|\leq h} |\Delta^{ h}\Delta^{- h}\,u^{\alpha}(y) |^{q}\,dy + \int_{|y|\geq h} |\Delta^{ h}\Delta^{- h}\,u^{\alpha}(y) |^{q}\,dy \nonumber\\
&&\qquad\leq Ch^{q}\int_{|y|\leq h} 1\,dy + Ch^{2q}\int_{|y|\geq h} u^{\alpha}(y) \,dy=O( h^{
q+1}) ,\nonumber
\end{eqnarray}
where for the last line we use (\ref{1.3y}).
{\hfill $\square$ \bigskip}
{\bf Proof of Lemma \ref{lem-vpropt}: }
It follows from the fact that $p_{r}(x)\leq p_{r}(y)$ for all $r$ whenever $|y|\leq |x|$, together with (\ref{pot.1w}) and (\ref{1.3x}), that
\begin{eqnarray}
\int_{0}^{T} |\Delta ^{ h}\,p_{t}(x)|\,dt
& \leq &e^{T/2}\int_{0}^{\infty}e^{-t/2} |\Delta ^{ h}\,p_{t}(x)|\,dt \nonumber\\
& =& e^{T/2}\bigg| \Delta ^{ h}\(\int_{0}^{\infty}e^{-t/2}\,p_{t}(x)\,dt\)\bigg | \label{9.7}\\
& =&e^{T/2}|\Delta ^{ h}\,u^{1/2}(x)|\leq C_{T} h\, e^{-|x|}. \nonumber
\end{eqnarray}
This gives (\ref{9.3x}).
For (\ref{9.3w}), we note that
\begin{eqnarray}
\bigg|{d^{2} \over dx^{2}}p_{t}(x )\bigg|&=&\bigg|{x^{2}/t-1 \over t\sqrt{2\pi t}}e^{-x^{2}/2t}\bigg| \label{9.9}\\
&\le& {C\over t^{3/2}}\({x^{2} \over 2t}+1\)e^{-x^{2}/2t}\le \frac{C}{t^{3/2}}e^{-x^{2}/4t},\nonumber
\end{eqnarray}
since $\sup_{s>0}se^{-s }<\infty$.
We use this and Taylor's theorem to see that for some $0\leq h_{t}',h_{t}''\leq h$,
\begin{eqnarray}
|\Delta^{ h}\Delta^{ -h} p_{t}(x )|
& =&|2 p_{t}(x )-p_{t}(x+h )-p_{t}(x-h )| \label{9.11}\\
& =&{h^{2} \over 2}\bigg|{d^{2} \over dx^{2}}p_{t}(x+h_{t}')+{d^{2} \over dx^{2}}p_{t}(x-h_{t}'')\bigg| \nonumber\\
& \leq &{Ch^{2} \over t^{3/2}}
\(e^{-(x+h_{t}')^{2}/4t}+e^{-(x-h_{t}'')^{2}/4t} \) \nonumber.
\end{eqnarray}
Therefore, when $|x|\geq 2h$,
\begin{equation}
|\Delta^{ h}\Delta^{ -h} p_{t}(x )| \leq {Ch^{2} \over t^{3/2}}
e^{-x^{2}/16t}.
\end{equation}
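Indeed, for $|x|\geq 2h$ and $0\leq h_{t}',h_{t}''\leq h$ we have $|x+h_{t}'|,\,|x-h_{t}''|\geq |x|-h\geq |x|/2$, so that each exponent in the last line of (\ref{9.11}) is at most $-x^{2}/16t$.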
Consequently, when $|x|\geq 2h$,
\begin{eqnarray}
\int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt&\leq& Ch^{2} \int_{0}^{T} { e^{-x^{2}/16t}\over t^{3/2}}\,dt\nonumber\\
&\leq & Ch^{2} e^{-x^{2}/32T} \int_{0}^{\infty} { e^{-x^{2}/32t}\over t^{3/2}}\,dt
\label{ggg}\\
&=& Ch^{2}{e^{-x^{2}/32T} \over |x| }\int_{0}^{\infty} { e^{-1/32t}\over t^{3/2}}\,dt\leq C_{T}h^{2}{e^{-x^{2}/32T} \over |x| }\nonumber,
\end{eqnarray}
which proves (\ref{9.3w}).
Using (\ref{9.3x}) and (\ref{9.3w}) we see that
\begin{eqnarray}
\lefteqn{\int w_{T}^{q}(x)\,dx
\label{2.28}}\\
&&=\int_{|x|\leq 2h} \(\int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt\)^{q}\,dx\nonumber\\
&&\qquad+ \int_{|x|\geq 2h} \(\int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt\)^{q}\,dx \nonumber\\
&&\leq 4\int_{|x|\leq 2h} \(\int_{0}^{T} |\Delta^{ h} p_{t}(x )|\,dt\)^{q}\,dx\nonumber\\
&&\qquad+ \int_{|x|\geq 2h} \(\int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt\)^{q}\,dx \nonumber\\
&&\leq C_{T}\int_{|x|\leq 2h} h^{q}\,dx+ C_{T}h^{2q}\int_{|x|\geq 2h} {1 \over |x|^{q}}\,dx\leq C_{T}h^{q+1} \nonumber,
\end{eqnarray}
which gives us (\ref{9.30g}).
For (\ref{9.30gb}) we note that when $h\le 1/4$, $\sqrt h\ge 2h$. Therefore, it follows from (\ref{9.3w}) that
\begin{eqnarray}
&&
\int_{|x|\geq \sqrt{h}}w_{T}^{q}(x)\,dx \label{9.14}\\
&&\leq C_{T}h^{2q}\int_{|x|\geq \sqrt{h}} {1 \over |x|^{q}}\,dx \leq C_{T}h^{3q/2+1/2}.\nonumber
\end{eqnarray}
Finally, to obtain (\ref{9.13t}) we use (\ref{9.3x}) and (\ref{9.3w}) to see that
\begin{eqnarray}
&&\int w_{T} (x) \,dx
\label{9.1}\\&&
=\int_{|x|\leq h} \int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt \,dx+\int_{|x|\geq h} \int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt \,dx \nonumber\\
&&\leq 2\int_{|x|\leq h} \int_{0}^{T} |\Delta^{ h} p_{t}(x )|\,dt \,dx+\int_{|x|\geq h} \int_{0}^{T} |\Delta^{ h}\Delta^{ -h} p_{t}(x )|\,dt \,dx \nonumber\\
&&\leq C_{T}\int_{|x|\leq h} h\,dx+C_{T}h^{2}\int_{|x|\geq h} {e^{-x^{2}/8}\over |x|} \,dx\leq C_{T}h^{ 2}\log (1/h). \nonumber
\end{eqnarray}
{\hfill $\square$ \bigskip}
\begin{remark}
{\rm Using Remark 2.1 and (\ref{9.7}) it is easy to check that we obtain the analog of (\ref{9.3x}) for all $|h|\leq 1$ if on the right hand side we replace $h$ by $|h|$.}
\end{remark}
{\bf Proof of Lemma \ref{lem-vpropd}: } The proof of (\ref{d9.300}) is immediate. Estimate (\ref{d9.3w})
follows from (\ref{9.11}), and a similar application of the mean value theorem gives (\ref{d9.3x}).
{\hfill $\square$ \bigskip}
{\bf Proof of Lemma \ref{lem-big}: } Using $2-e^{ihp } -e^{-ihp }=2-2\cos (hp)=4\sin^{2} (p h/2)$ we can write
\begin{eqnarray}
&&
\int_{0}^{\infty} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt \label{big.3}\\
&&={1 \over 2\pi}\int_{0}^{\infty} \int e^{ipx}(2-e^{ihp } -e^{-ihp })e^{-tp^{2}/2} \,dp\,dt\nonumber\\
&&={4 \over 2\pi}\int_{0}^{\infty} \int e^{ipx}\sin^{2} (p h/2)e^{-tp^{2}/2} \,dp\,dt\nonumber\\
&&={8 \over 2\pi} \int e^{ipx}{\sin^{2} (p h/2) \over p^{2}} \,dp. \nonumber
\end{eqnarray}
Similarly
\begin{equation}
\int_{0}^{h} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt={8 \over 2\pi} \int e^{ipx}{\sin^{2} (p h/2) \over p^{2}}\(1-e^{-hp^{2}/2}\) \,dp\label{big.4}
\end{equation}
and
\begin{equation}\qquad
\Delta^{ h}\Delta^{ -h} u^{1/2}(x )=\int_{0}^{\infty}e^{-t/2} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt={8 \over 2\pi} \int e^{ipx}{\sin^{2} (p h/2) \over 1+p^{2}} \,dp.\label{big.5}
\end{equation}
Using (\ref{big.3}) and the Fourier inversion formula we see that
\begin{eqnarray}
&&\int \(\int_{0}^{\infty} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt\)^{q}\,dx
\label{big.7}\\
&&=\({8 \over 2\pi}\)^{q}\int \( \int e^{ipx}{\sin^{2} (p h/2) \over p^{2}} \,dp\)^{q}\,dx \nonumber\\
&&=\({8 \over 2\pi}\)^{q}\int \( \int e^{ix\sum_{j=1}^{q}p_{j}}\prod_{j=1}^{q}{\sin^{2} (p_{j} h/2) \over p_{j}^{2}} \,dp_{j}\)\,dx \nonumber\\
&&=\({8 \over 2\pi}\)^{q} \int \(\int e^{ix\sum_{j=2}^{q}p_{j}}\(\int e^{ix p_{1}}{\sin^{2} (p_{1} h/2) \over p_{1}^{2}} \,dp_{1} \)\,dx\)\nonumber\\
&&\hspace{3 in}\prod_{j=2}^{q}{\sin^{2} (p_{j} h/2) \over p_{j}^{2}} \,dp_{j} \nonumber\\
&&= {8^{q} \over (2\pi)^{q-1}} \int {\sin^{2} (p_{1} h/2) \over p_{1}^{2}}\prod_{j=2}^{q}{\sin^{2} (p_{j} h/2) \over p_{j}^{2}} \,dp_{j} \nonumber
\end{eqnarray}
where now $p_{1}=-\sum_{j=2}^{q}p_{j}$. Scaling in $h$ we then obtain
\begin{eqnarray}
&&
\int \(\int_{0}^{\infty} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt\)^{q}\,dx \label{big.8}\\
&&=
{8^{q} h^{q+1}\over (2\pi)^{q-1}} \int {\sin^{2} (p_{1} /2) \over p_{1}^{2}}\prod_{j=2}^{q}{\sin^{2} (p_{j} /2) \over p_{j}^{2}} \,dp_{j}.\nonumber
\end{eqnarray}
Similarly we see that
\begin{eqnarray}
&&\qquad
\int \(\int_{0}^{h} \Delta^{ h}\Delta^{ -h} p_{t}(x )\,dt\)^{q}\,dx \label{big.9}\\
&&=
{8^{q} h^{q+1}\over (2\pi)^{q-1}} \int {\sin^{2} (p_{1} /2) \over p_{1}^{2}}
\(1-e^{-p_{1}^{2}/2h}\)\prod_{j=2}^{q}{\sin^{2} (p_{j} /2) \over p_{j}^{2}}\(1-e^{-p_{j}^{2}/2h}\) \,dp_{j}\nonumber
\end{eqnarray}
and
\begin{eqnarray}
&&
\int \( \Delta^{ h}\Delta^{ -h} u^{1/2}(x ) \)^{q}\,dx \label{big.10}\\
&&=
{8^{q} h^{q+1}\over (2\pi)^{q-1}} \int {\sin^{2} (p_{1} /2) \over h^{2}+p_{1}^{2}}\prod_{j=2}^{q}{\sin^{2} (p_{j} /2) \over h^{2}+p_{j}^{2}} \,dp_{j}.\nonumber
\end{eqnarray}
Using the fact that ${\sin^{2} (p /2) \over p^{2}}$ is bounded and
\begin{equation}
\int e^{-p^{2}/2h}\,dp=Ch^{1/2}\label{big.11}
\end{equation}
our Lemma follows from comparing (\ref{big.8})-(\ref{big.10}) with (\ref{1.30g}).{\hfill $\square$ \bigskip}
{\bf Acknowledgment.} I would like to thank David Nualart for pointing out an error in the first draft of this paper.
\section{Introduction}
Let $W \geq 1$, and let $V(n)$, $n \geq 0$, be independent, identically distributed random variables taking values in the space of $W \times W$ real symmetric matrices, so that
\begin{equation}\label{eq:model} \mathbb E \| V(n)\|^\eta < \infty \quad \text{for some $\eta > 0$,}\end{equation}
and the support $\mathcal S$ of the distribution of $V(n)$ is sufficiently rich, say, in the following sense:
\begin{equation}\label{eq:jcond}\left[\begin{split}
&\text{$\mathcal S$ is irreducible (i.e.\ preserves no non-trivial linear subspace of $\mathbb R^W$)}\\
&\text{and contains $V, V'$ such that $\operatorname{rk}(V - V') = 1$}\end{split}\right.
\end{equation}
The main example (the Schr\"odinger case) is
\begin{equation}\label{eq:strip} V(n)_{\alpha,\alpha'} = \begin{cases}
1~, &|\alpha-\alpha'| = 1 \\
v_{n,\alpha}~, &\alpha = \alpha' \\
0~, &\text{otherwise,}
\end{cases} \end{equation}
where $\{v_{n,\alpha}\}_{n \in \mathbb Z_+, \alpha \in \{1,\cdots,W\}}$ are independent, identically distributed real-valued random variables not concentrated at one point and having $\mathbb E |v_{n,\alpha}|^\eta<\infty$.
We are interested in the spectral properties of the random
operator $H$ on $\ell_2(\mathbb Z_+ \to \mathbb C^W)$, defined as follows:
\[ (H \psi)(n) = \begin{cases}
\psi(n+1) + V(n) \psi(n) + \psi(n-1)~, & n \geq 1\\
\psi(1) + V(0) \psi(0)~, &n = 0~.
\end{cases}\]
On an event of full probability, $H$ exhibits Anderson localisation which manifests itself in the following spectral properties: the spectrum of $H$ is pure point, and the eigenfunctions decay exponentially, meaning that there exists a deterministic $\gamma > 0$ such that for each eigenfunction $\psi$ of $H$
\begin{equation}\label{eq:expdecay} \limsup_{n \to \infty} \frac{1}{n} \log \|\psi(n)\| \leq - \gamma~, \end{equation}
where $\| \cdot \|$ denotes the Euclidean norm in $\mathbb C^W$.
For $W=1$, the pure point nature of the spectrum was first established in \cite{GMP}, and exponential decay -- by Molchanov in \cite{Molch}; see further Kunz and Souillard \cite{KS}. In these works, it was assumed that the distribution of the potential is absolutely continuous with bounded density. The case of singular potentials was settled by Carmona, Klein, and Martinelli \cite{CKM}. For $W>1$ (Schr\"odinger case) with absolutely continuous distribution of the potential, the pure point nature of the spectrum was first proved in \cite{G80}, and exponential decay -- by Lacroix in \cite{Lacr,Lacr2}. The general Schr\"odinger case was settled by Klein, Lacroix and Speis in \cite{KLS}, building on \cite{GM}; the argument given there can be extended to the general situation (\ref{eq:jcond}), once the result of \cite{G95} (discussed below) is taken into account. In this paper, we do not discuss Anderson localisation in dimension $d > 1$, and refer to the works of Fr\"ohlich and Spencer \cite{FS} and Aizenman and Molchanov \cite{AM} and also to the monograph of Aizenman and Warzel \cite{AW}.
\medskip A more precise version of the relation (\ref{eq:expdecay}) can be stated in terms of the Lyapunov exponents associated with $H$. For $\lambda \in \mathbb R$, define the one-step transfer matrices
\[ T_n (\lambda) = \left( \begin{array}{cc} \lambda - V(n) & - \mathbbm 1 \\ \mathbbm 1 & 0 \end{array} \right) \in \operatorname{Sp}(2W, \mathbb R) \qquad (n \geq 0) \]
and the multi-step transfer matrices
\[ \Phi_{n, n'}(\lambda) = T_{n-1}(\lambda) \cdots T_{n'}(\lambda)~, \quad \Phi_n(\lambda) = \Phi_{n,0}(\lambda) \qquad (n > n' \geq 0)~. \]
The Lyapunov exponents $\gamma_1(\lambda) \geq \gamma_2(\lambda) \geq \cdots \geq \gamma_{2W}(\lambda)$ are defined as
\[ \gamma_j(\lambda) = \lim_{n \to \infty} \frac1n \mathbb E \log s_j(\Phi_n(\lambda))~, \]
where $s_j$ stands for the $j$-th singular value. According to a general result of Furstenberg and Kesten \cite{FK}, one
has
\begin{equation}\label{eq:fk}
\forall \lambda \in \mathbb R \,\,\,\,\, \mathbb P \left\{ \gamma_j(\lambda) = \lim_{n \to \infty} \frac1n \log s_j(\Phi_n(\lambda)) \right\} = 1~.\end{equation}
Due to the symplectic structure, $\gamma_{2W+1-j}(\lambda) = - \gamma_j(\lambda)$ for $j =1, \cdots, W$.
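This is a standard consequence of the fact that the singular values of a symplectic matrix come in reciprocal pairs, $s_{2W+1-j}(M) = s_{j}(M)^{-1}$ for $M \in \operatorname{Sp}(2W, \mathbb R)$, applied to $M = \Phi_n(\lambda)$.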
Following precursory work by Tutubalin (see the survey \cite{SazTut}) and Virtser \cite{Vir}, Guivarc$'$h and Raugi showed \cite{GR} that if
\begin{equation}\label{eq:cond}
\left[ \begin{split}
&\text{the action of the semigroup generated by the support $\mathcal S_\lambda$ of $T_n(\lambda)$ } \\
&\text{on $\mathbb R^{2W}$ and its wedge powers is strongly irreducible and contractive,}
\end{split} \right.
\end{equation}
then the Lyapunov exponents are distinct:
\begin{equation}\label{eq:simplespec} \gamma_1(\lambda) > \gamma_2(\lambda) > \cdots >\gamma_W(\lambda) > 0~. \end{equation}
In the case (\ref{eq:strip}) with absolutely continuous distribution of $v_{n,\alpha}$, the condition (\ref{eq:cond}) was verified in \cite{Lacr}, while in \cite{G80} (\ref{eq:simplespec}) was directly established using the results of \cite{SazTut}. In \cite{GM}, the following general theorem is proved: (\ref{eq:cond}) (and consequently also (\ref{eq:simplespec})) holds if
\begin{equation}\label{eq:zcond}
\text{the group generated by $\mathcal S_\lambda$ is Zariski-dense in $\operatorname{Sp}(2W, \mathbb R)$}.
\end{equation} It was also shown in \cite{GM} that in the Schr\"odinger case (\ref{eq:strip}) one has (\ref{eq:zcond}) for any $\lambda \in \mathbb R$. In \cite{G95}, a general method to compute the Zariski closure of the group generated by the support of $T_n(\lambda)$ was developed; one of its consequences is that (\ref{eq:zcond}) holds for any $\lambda \in \mathbb R$ also in the generality of (\ref{eq:jcond}).
Now we can state the full result of Klein, Lacroix and Speis \cite{KLS}: there is an event of full probability on which each eigenpair $H\psi = \lambda \psi$ satisfies
\begin{equation}\label{eq:upperbd} \limsup_{n \to \infty} \frac1n \log \|\psi(n)\| \leq - \gamma_W(\lambda)~. \end{equation}
A variety of heuristic arguments indicate that (\ref{eq:upperbd}) should be sharp in the following strong sense: there is an event of full probability on which each eigenpair $H\psi = \lambda \psi$ satisfies
\begin{equation}\label{eq:conjlowerbd} \text{(conjecture)}\qquad \liminf_{n \to \infty} \frac1n \log (\| \psi(n)\| + \|\psi(n+1)\|) \geq - \gamma_W(\lambda)~, \end{equation}
(which, in conjunction with (\ref{eq:upperbd}), implies the existence of a limit equal to $-\gamma_W(\lambda)$).
For example, the Fermi Golden Rule leads one to believe that eigenfunctions violating (\ref{eq:conjlowerbd}) are unstable under perturbation. From the point of view of random matrix products, an eigenfunction decaying at a rate faster than $\gamma_W$ indicates a non-generic intersection between the $W$-dimensional space of initial conditions with the $W$-dimensional Oseledec subspace of decaying solutions in $\mathbb R^{2W}$.
The relation (\ref{eq:conjlowerbd}) was repeatedly conjectured at least since the 1980s; however, we are not aware of any rigorous results improving on the trivial bound
\begin{equation}\label{eq:trivlowerbd}\liminf_{n \to \infty} \frac1n \log (\| \psi(n)\| + \|\psi(n+1)\|) \geq - \gamma_1(\lambda) \end{equation}
(which follows from a general result of Craig and Simon \cite{CS}, or from its quantitative version, stated as Lemma~\ref{l:upperbd} below). The main difficulty comes from the fact that, although for each fixed energy $\lambda$ the probability to have an eigenfunction which decays with rate faster than $\gamma_W(\lambda)$ is zero, one cannot use the union bound over the uncountable set of all real $\lambda$.
\medskip In this paper we make a step towards (\ref{eq:conjlowerbd}) by improving upon (\ref{eq:trivlowerbd}). To state the results precisely, we introduce some notation. Let $\mathcal E(H) = \{ (\lambda, \psi) \}$ be the collection of eigenpairs of $H$, with the normalisation $\|\psi(0)\|=1$ (the choice of the sign is not important for us, and spectral multiplicity is known to be a null event). For $\gamma > 0$ and a bounded interval $I \Subset \mathbb R$, consider the two realisation-dependent sets:
\begin{equation}\begin{split}
\operatorname{Fast}^+(\gamma; I) &= \left\{ \lambda \in I: \exists (\lambda, \psi) \in \mathcal E(H),\, \liminf_{n \to \infty} \frac{\log (\| \psi(n)\| + \|\psi(n+1)\|) }n \leq - \gamma \right\}~,\\
\operatorname{Fast}^-(\gamma; I) &= \left\{ \lambda \in I: \exists (\lambda, \psi) \in \mathcal E(H),\, \limsup_{n \to \infty} \frac{\log (\| \psi(n)\| + \|\psi(n+1)\|) }n \leq - \gamma \right\} ~.\end{split}\label{eq:deffast}\end{equation}
These sets consist of the eigenvalues for which the corresponding eigenvector decays at rate $\geq \gamma$ (along a subsequence, or uniformly). We note that there is no simple way to define the sets as random variables on the underlying probability space (see Kendall \cite{Kendall} and Tsirelson \cite{Tsir} for possible frameworks to address such questions); this does not cause problems since we only work with the measurable events $\{ \operatorname{Fast}^\pm(\gamma; I) = \varnothing \}$ and $\{ \operatorname{Fast}^\pm(\gamma; I) \neq \varnothing \}$ (which in fact lie in the tail $\sigma$-algebra).
For $\lambda$ in the spectrum $\sigma(H)$ of $H$, define the deterministic quantities
\begin{equation}\label{eq:degamma}\gamma_*^\pm(\lambda) = \inf \Big\{ \gamma > 0 : \, \exists r > 0~, \, \mathbb P \left\{ \operatorname{Fast}^\pm(\gamma, (\lambda - r, \lambda + r)) \neq \varnothing \right\} = 0 \Big\} ~. \end{equation}
Roughly speaking, the functions $\gamma_*^\pm(\lambda)$ measure the fastest possible decay of an eigenfunction in the vicinity of $\lambda$. In this notation, (\ref{eq:upperbd}) and (\ref{eq:trivlowerbd}) imply that
\begin{equation}\label{eq:sandwich}\gamma_1(\lambda) \geq \gamma_*^{+}(\lambda) \geq \gamma_*^{-}(\lambda) \geq \gamma_W(\lambda)~,\end{equation}
whereas the conjecture (\ref{eq:conjlowerbd}) stipulates that the last two inequalities are in fact equalities: $\gamma_*^\pm \overset{\text{\tiny conj}}{=} \gamma_W$. The results below show that the first inequality in (\ref{eq:sandwich}) is strict (for $W \geq 2$), whereas the last one is an equality, at least, if one assumes
\medskip
\begin{assum}\label{assum} (a) The distribution of $V(n)$ is compactly supported on a real-analytic submanifold $\mathcal M$ in the space of symmetric $W \times W$ matrices, and is absolutely continuous with bounded density with respect to the $(\dim \mathcal M)$-dimensional Lebesgue measure on $\mathcal M$; (b) for each $\lambda \in \mathbb R$ the image of $\mathcal M$ under
\[ V \mapsto \left( \begin{array}{cc} \lambda - V & - \mathbbm 1 \\ \mathbbm 1 & 0 \end{array} \right) \]
generates $\operatorname{Sp}(2W, \mathbb R)$ as a Lie group.
\end{assum}
\begin{rmk} Assumption~\ref{assum} implies both (\ref{eq:model}) and (\ref{eq:zcond}).
\end{rmk}
\begin{rmk}In the Schr\"odinger case (\ref{eq:strip}), Assumption~\ref{assum} is satisfied if the random variables $v_{n,\alpha}$ are bounded and their distribution is absolutely contiunous with bounded density (see \cite[Section 1.4]{Lacr3}).\end{rmk}
\begin{thm}\label{thm:1}
Let $W \geq 3$. If Assumption~\ref{assum} holds, then $\gamma_*^{+}(\lambda) \leq \gamma_{*,1}(\lambda)$ for $\lambda \in \sigma(H)$, where $\gamma_{*,1}(\lambda)$ is the unique solution of the equation
\begin{equation}\label{eq:thm1} \big( (W-1) \gamma - \sum_{j=1}^{W-1}\gamma_j(\lambda)\big)_+ + \gamma = \gamma_1(\lambda)~. \end{equation}
\end{thm}
Here $x_+ = \max(x, 0)$. We observe that for $W \geq 3$, $\gamma_{*,1}(\lambda) < \gamma_1(\lambda)$ (indeed, the left-hand side of (\ref{eq:thm1}) is strictly increasing in $\gamma$ and, by (\ref{eq:simplespec}), strictly exceeds $\gamma_1(\lambda)$ at $\gamma = \gamma_1(\lambda)$), hence Theorem~\ref{thm:1} indeed improves on (\ref{eq:trivlowerbd}). For $W = 2$, $\gamma_{*,1} = \gamma_1(\lambda)$; however, we prove
\begin{thm}\label{thm:2}
Let $W = 2$. If Assumption~\ref{assum} holds, then $\gamma_*^{+}(\lambda) \leq \frac{2 \gamma_{1}(\lambda) + \gamma_2(\lambda)}{3}$ for all $\lambda \in \sigma(H)$.
\end{thm}
\noindent As for $\gamma_*^{-}$, our methods yield the optimal result:
\begin{thm}\label{thm:3}
Let $W \geq 2$. If Assumption~\ref{assum} holds, then $\gamma_*^{-}(\lambda) = \gamma_{W}(\lambda)$ for all $\lambda \in \sigma(H)$.
\end{thm}
\noindent The following corollary summarises our main conclusions:
\begin{cor} Let $W \geq 2$. If Assumption~\ref{assum} holds, then
\begin{equation}
\gamma_W(\lambda) = \gamma_*^{-}(\lambda) \leq \gamma_*^{+}(\lambda) < \gamma_{1}(\lambda)
\end{equation}
for all $\lambda \in \sigma(H)$.
\end{cor}
In the proofs, we repeatedly use the following argument, inspired by the work of Kakutani \cite{Kakutani} and its ramifications by Spencer and Aizenman \cite{Aiz}, to estimate the probability of exceptional events. Suppose we want to bound $\mathbb P\left\{ A \neq \varnothing\right\}$, where $A$ is a random subset of, say, the interval $[0, 1]$. Suppose we find $\eta \in (0, 1]$ and a random superset $A^{+} \supset A$ with the following properties: (a)~for each $\lambda \in [0, 1]$, $\mathbb P \left\{ \lambda \in A^{+}\right\} \leq p$ (``single-energy bound''); (b) if $\lambda \in A$ and $|\lambda' - \lambda| < \eta$, then $\lambda' \in A^{+}$ (``propagation estimate''). Then the Chebyshev inequality and the Fubini theorem yield:
\[ \mathbb P \left\{ A \neq \varnothing \right\}
\leq \mathbb P \left\{ \operatorname{mes} (A^+ \cap [0,1]) \geq \eta \right\}
\leq \frac1\eta \mathbb E \operatorname{mes} (A^+ \cap [0,1])
\leq \frac{p}{\eta}~.\]
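To illustrate the mechanism with a toy choice of parameters (not necessarily the ones used below): if the single-energy bound gives $p \leq C e^{-cn}$ at scale $n$, and the propagation estimate can be established with $\eta = e^{-cn/2}$, then
\[ \mathbb P \left\{ A \neq \varnothing \right\} \leq C e^{-cn/2}~, \]
which is summable in $n$, so that the Borel--Cantelli lemma applies along a sequence of scales.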
The paper is organised as follows. Some preliminary estimates are collected in Section~\ref{s:prelim}. In Sections~\ref{s:1} and \ref{s:2} we prove Theorems~\ref{thm:2} and \ref{thm:3}, respectively. In Section~\ref{s:concl} we discuss the prospects of improving the bounds in Theorems~\ref{thm:1} and \ref{thm:2}, and point out the connection to the problem, going back to \cite{G75} and recently studied by Gorodetski and Kleptsyn \cite{GorKl}, of uniform convergence to the Lyapunov exponents, i.e.\ whether the quantifier $\forall \lambda$ in (\ref{eq:fk}) can be inserted inside the curly brackets. We also prove Proposition~\ref{p:gorkl}, which is an $\operatorname{Sp}(2W, \mathbb R)$-counterpart of one of the results of \cite{GorKl}. The proof of Theorem~\ref{thm:3} in Section~\ref{s:2} makes use of this proposition.
\medskip
We conclude this introduction with two remarks. First, we have chosen to present the arguments for the one-sided strip $\mathbb Z_+ \times \{1, \cdots, W\}$; similar arguments can be applied to the two-sided strip $\mathbb Z \times \{1, \cdots, W\}$. Second, it is possible that Assumption~\ref{assum} can be somewhat relaxed, and that a refinement of the current methods could be applicable when the invariant measure (describing the limiting distribution of the unitary matrices in the singular value decomposition of the transfer matrices $\Phi_n$) is absolutely continuous with bounded density with respect to the Haar measure on the compact symplectic group, or at least enjoys the Frostman property (upper bound on the measure of every ball by a power of the radius) with a sufficiently large exponent. On the other hand, it is known (see \cite{H} for the case $W=1$) that for singular distributions of $V(n)$ the invariant measure may be supported on lower-dimensional subsets of the symplectic group. Extending our results to such cases would require additional ideas.
\section{Preliminaries}\label{s:prelim}
\paragraph{Convergence to the Lyapunov exponent} Assume that (\ref{eq:cond}) holds at some $\lambda$. Then (\ref{eq:cond}) also holds in a neighbourhood of $\lambda$, and then (see e.g.\ \cite[Corollary 2.5]{KLS}) the Lyapunov exponents $\gamma_j(\lambda)$ are continuous at $\lambda$. For each $\epsilon > 0$, let $r_\epsilon(\lambda)\in (0, 1/2]$ be such that
\begin{equation}\label{eq:def-r} \forall \lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda)) \,\, \forall 1 \leq j \leq W \,\,\,\, |\gamma_j(\lambda') - \gamma_j(\lambda)| < \epsilon~. \end{equation}
The following large deviation estimate goes back to the work of Le Page \cite{LP}.
\begin{lemma}[see \cite{DK}, {\cite[Section V.6]{BougL}}]\label{l:largedev}
Assume (\ref{eq:model}). Let $I \Subset \mathbb R$ be a finite interval such that (\ref{eq:cond}) holds for all $\lambda \in I$. Then there exist $C>0$ and $c>0$ such that for each $\lambda \in I$, $1 \leq j \leq W$, $\epsilon \in(0, 1]$, and $n \geq 1$
\begin{equation}\label{eq:largedev} \mathbb P\left\{\left| \frac{1}{n} \log s_j(\Phi_n(\lambda)) - \gamma_j(\lambda) \right| \geq \epsilon\right\} \leq C \exp(-c \epsilon^2 n)~.\end{equation}
\end{lemma}
\noindent In Section~\ref{s:concl} we shall also need the following well-known strengthening of Lemma~\ref{l:largedev}:
\begin{rmk}\label{rem:largedev} Let $F_1$ and $F_2$ be two isotropic $j$-dimensional subspaces of $\mathbb R^{2W}$ (i.e.\ $J F_j \subset F_j^\perp$, where $J = \left( \begin{array}{cc} 0 & - \mathbbm{1} \\ \mathbbm{1} & 0 \end{array} \right)$ is the symplectic rotation), and let $P_{F_j}$ be the orthogonal projection onto $F_j$. Then the estimate (\ref{eq:largedev}) still holds if $s_j(\Phi_n(\lambda))$ is replaced with $s_j(P_{F_1} \Phi_n(\lambda) P_{F_2})$.
\end{rmk}
This strengthened version follows from the usual Lemma~\ref{l:largedev}, the exponential convergence of the matrix $V_n$ in the singular value decomposition $\Phi_n = U_n \Sigma_n V_n^*$ of $\Phi_n$ to a limiting matrix (see \cite{GM}), and the Frostman property of the distribution of the limiting matrix (see \cite[Section VI.5]{BougL}). In the special case that we need -- of matrices satisfying Assumption~\ref{assum} -- one can also appeal to item (b) of Lemma~\ref{l:smooth} below.
\smallskip
The arguments leading to the following corollary of Lemma~\ref{l:largedev} are also well known (we also mention a result of Craig--Simon \cite[Theorem 2.3]{CS}, which is not quantitative, but on the other hand holds in a more general setting).
\begin{lemma}\label{l:upperbd} Assume (\ref{eq:model}). Suppose $\lambda \in \mathbb R$ is such that (\ref{eq:cond}) holds. Then there exist $C>0$ and $c>0$ such that for each $1 \leq j \leq W$, $\epsilon \in(0, 1]$, and $n \geq 1$
\[ \mathbb P\left\{ \exists \lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda)) \,\, : \,\, \frac{1}{n} \sum_{i=1}^j \log s_i(\Phi_n(\lambda')) \geq \sum_{i=1}^j \gamma_i(\lambda) + 2j \epsilon \right\} \leq C n \exp(-c \epsilon^2 n)~.\]
\end{lemma}
\begin{proof}
If $n \leq 10$ or $\epsilon^2 \leq 100 \log n / n$, we can ensure the desired inequality by adjusting the constants, therefore we assume that $n > 10$ and $\epsilon^2 \geq 100 \log n / n$.
Consider the $j$-th exterior power $\Phi_n(\lambda')^{\wedge j}$ of $\Phi_n(\lambda')$, so that
\[\log \|\Phi_n(\lambda')^{\wedge j} \| = \sum_{i=1}^j \log s_j(\Phi_n(\lambda'))~.\]
Each matrix element $p(\lambda')$ of $\Phi_n(\lambda')^{\wedge j}$ (where $p$ runs in a finite set $P$ enumerating
the matrix elements) is a polynomial of degree $\leq j n \leq W n$ in $\lambda'$. Now we use the following result of Bernstein \cite{Bern}, although we require much less than its full strength (in place of the logarithmic dependence on the degree with a precise constant, we could do with any prefactor growing slower than exponentially): for any polynomial $q$ of degree $n$
\[ \max_{|\lambda| \leq 1} |q(\lambda)| \leq C_n \max_{\alpha\in \{0,1,\cdots,n\}} |q(\cos (\pi \frac{\alpha+\frac12}{n+1}))|~, \quad \text{where} \quad
C_n = (1+ o(1)) \, \frac2\pi \log n~. \]
Returning to our setting, let
\[ \lambda_\alpha = \lambda + r_\epsilon(\lambda) \cos(\pi \frac{\alpha +\frac12}{Wn+1})~, \quad 0 \leq \alpha \leq Wn~;\]
then we have for any $p \in P$:
\begin{equation}\label{eq:interp}\max_{\lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))} |p(\lambda')| \leq C \log (Wn) \max_{\alpha\in \{0,1,\cdots,Wn\}} |p(\lambda_\alpha)| \leq e^{\frac{\epsilon n}{3}} \max_{0 \leq \alpha \leq Wn} |p(\lambda_\alpha)|~. \end{equation}
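(The second inequality in (\ref{eq:interp}) uses the standing assumptions $n > 10$ and $\epsilon^2 \geq 100 \log n / n$ made at the beginning of the proof: they give $\epsilon n / 3 \geq \tfrac{10}{3}\sqrt{n \log n}$, which dominates $\log\left(C \log (Wn)\right)$.)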
By Lemma~\ref{l:largedev} and the choice of $r_\epsilon$,
\[ \mathbb P \left\{ |p(\lambda_\alpha)| \geq \exp\left\{n \left[ \sum_{i=1}^j \gamma_i(\lambda) + \frac{4j}{3} \epsilon \right] \right\} \right\} \leq C' \exp(-c' \epsilon^2 n)~.\]
Thus by (\ref{eq:interp})
\[ \mathbb P \left\{ \max_{p \in P}\max_{\lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))} |p(\lambda')| \geq \exp\left\{n \left[ \sum_{i=1}^j \gamma_i(\lambda) + \frac{5j}{3} \epsilon \right] \right\} \right\} \leq C'' n \exp(-c' \epsilon^2 n)~.\]
Finally, $\|\Phi_n(\lambda')^{\wedge j}\| \leq C \max_{p \in P} |p(\lambda')| \leq e^{\epsilon n /3} \max_{p} |p(\lambda')|$, and this completes the proof.
\end{proof}
\paragraph{The probability density of transfer matrices} The following lemma builds on the arguments going back to the work of Ricci and Stein \cite{RS}. In the context of random Schr\"odinger operators, it appears in the work of Shubin, Vakilian and Wolff \cite{SVW}. Recently, a general argument in the setting of motivic morphisms has been developed by Glazer and Hendel \cite{GH}. For completeness, we sketch a proof (restricted to the generality of the current discussion) below.
\begin{lemma}\label{l:smooth}
Assume Assumption~\ref{assum}. There exists $n_0$ such that the following holds.
\begin{enumerate}
\item[(a)] For any $\lambda \in \mathbb R$ the distribution of $\Phi_{n_0}(\lambda)$ is absolutely continuous with bounded density with respect to the Haar measure on $\operatorname{Sp}(2W, \mathbb R)$.
\item[(b)] Let $\Phi_n(\lambda) = U_n(\lambda) \Sigma_n(\lambda) V_n(\lambda)^*$ be the singular value decomposition of $\Phi_n(\lambda)$ (defined in an arbitrary measurable way). Then there exists $C$ such that for any $n \geq n_0$ the distributions of $V_n(\lambda)$, $U_n(\lambda)$ and $V_n^*(\lambda) U_n(\lambda)$ are absolutely continuous with density $\leq C$ with respect to the Haar measure on the compact symplectic group $\operatorname{Sp}(2W, \mathbb R) \cap \operatorname{SO}(2W)$.
\end{enumerate}
Moreover, the bounds in (a)--(b) are locally uniform in $\lambda$.
\end{lemma}
\begin{proof} Consider the product map
\begin{equation}\label{eq:map} F_{n} = F_{n, \lambda}: \mathcal M^n \to \operatorname{Sp}(2W, \mathbb R)~, \quad (V(1), \cdots, V(n)) \mapsto \Phi_{n}(\lambda)~. \end{equation}
According to \cite[Proposition~1.1]{RS}, for
\[ n_1 = 2^{\dim \operatorname{Sp}(2W, \mathbb R) - \dim \mathcal M} = 2^{W(2W+1) - \dim \mathcal M} \]
the image $F_{n_1}(\mathcal M^{n_1})$ contains an open set in $\operatorname{Sp}(2W, \mathbb R)$ (in the Schr\"odinger case, the same conclusion holds for $n_1 = \dim \operatorname{Sp}(2W, \mathbb R) / \dim \mathcal M = 2W+1$; see \cite[Proposition 1.4.35]{Lacr3}). Hence $\det [(D F_{n_1})^* (D F_{n_1})]$ is not identically zero; by continuity, the maximum of its absolute value is bounded away from zero locally uniformly in $\lambda$.
The map (\ref{eq:map}) is real analytic, therefore the probability density of $\Phi_{n_1}(\lambda)$ lies in $L_p$ for some $p>1$ (this can be proved directly or deduced from \cite[Proposition~2.1]{RS} using an appropriate embedding theorem), and, again, both $p$ and the bound are locally uniform in $\lambda$.
Applying the inequality
\[ \| f_1 * f_2 * \cdots * f_n \|_\infty \leq \prod_{\alpha=1}^n \|f_\alpha\|_{1+\frac{1}{n-1}}~, \quad f_\alpha \in L_{1 + \frac{1}{n-1}}(\operatorname{Sp}(2W, \mathbb R)) \]
(which is a simple special case of the Young convolution inequality on $\operatorname{Sp}(2W, \mathbb R)$), we obtain that for $n_0= n_1 (\lfloor (1 - 1/p)^{-1} \rfloor+ 1)$ the density of $\Phi_{n_0}(\lambda)$ is bounded. This proves the first item, from which the second one follows.
\end{proof}
\paragraph{A geometric lemma} Denote by $S(F)$ the unit sphere of a Euclidean vector space $F$. For future reference, we record the following fact (attributed to Archimedes): if $u$ is a random vector uniformly distributed on $S(\mathbb R^\ell)$, then the probability density of the random vector $P_F u$, where $P_F: \mathbb R^\ell \to F$ is the orthogonal projection onto a fixed $k$-dimensional subspace $F \subset \mathbb R^\ell$, $1 \leq k \leq \ell - 1$, is given by
\begin{equation}\label{eq:archimedes} f_{\ell,k}(v) = C_{\ell,k} (1 - \|v\|^2)_+^{\frac{\ell-k}{2} - 1}~.\end{equation}
\begin{lemma}\label{l:geom} Let $U$ be a random matrix taking values in $\operatorname{SO}(\ell)$ such that for each $u \in S(\mathbb R^\ell)$ the vector $Uu$ is uniformly distributed on $S(\mathbb R^\ell)$. Let $D = \operatorname{diag}(e^{a_1}, \cdots, e^{a_\ell})$, where $a_1 \geq a_2 \geq \cdots \geq a_\ell$, and let $F \subset \mathbb R^\ell$ be a $k$-dimensional subspace. Then for any $a_1 \geq a \geq a_\ell$
\[ \mathbb P \left\{ \exists u \in S(F) \, : \, \|D U u \| \leq e^a \right\}
\leq C_\ell \exp \left\{ - \sum_{j=k}^\ell (a_j - a)_+\right\}~.\]
\end{lemma}
\begin{proof}
It is sufficient to prove the estimate for the $\ell_\infty$ norm $\| \cdot \|_\infty$ in place of the Euclidean norm, as this will only affect the value of the numerical constant $C_\ell$. We first observe that for a fixed $u \in S(\mathbb R^\ell)$
\begin{equation}\label{eq:geom-fixed}
\mathbb P \left\{ \|DU u\|_\infty \leq e^{a} \right\} \leq C_\ell \exp(-\sum_{j=1}^\ell (a_j - a)_+)~.
\end{equation}
Indeed, let $j_0$ be such that $a_{j_0} \geq a > a_{j_0+1}$. The random vector $((Uu)_j)_{j=1}^{j_0}$ has bounded density in a neighbourhood of zero (according to (\ref{eq:archimedes}), for $j_0 \leq \ell- 2$ the density is uniformly bounded, whereas for $j_0 = \ell - 1$ it explodes only on the boundary of the unit ball). Therefore
\[ \mathbb P \left\{ \|DU u\|_\infty \leq e^{a} \right\} = \mathbb P \left\{ \forall 1 \leq j \leq j_0 \,\, |(DU u)_j| \leq e^{a} \right\} \leq
C_\ell \prod_{j=1}^{j_0} e^{a -a_j} = C_\ell \exp(-\sum_{j=1}^\ell (a_j - a)_+)~,\]
thus concluding the proof of (\ref{eq:geom-fixed}).
Second, we note that if $\|Dv\|_\infty \leq e^a$, then $\|Dv'\|_\infty \leq 2e^a$ for all
\[ v' \in Q_v = \{ v' \in S(\mathbb R^\ell) \, : \, |v_j' - v_j| \leq \exp(- (a_j - a)_+)\}~. \]
For any $k$-dimensional subspace $F_1 \subset \mathbb R^\ell$ and $v \in S(F_1)$, the $k-1$ dimensional measure of the intersection of $Q_v$ with $S(F_1)$ admits the lower bound
\[ \sigma_{k-1} (S(F_1) \cap Q_v)
\geq c_\ell \exp(- \sum_{j=1}^{k-1} (a_j - a)_+)~,\]
whence by the Chebyshev inequality, the Fubini theorem and (\ref{eq:geom-fixed})
\[\begin{split}
&\mathbb P \left\{ \exists v \in S(UF) \, : \, \|Dv\|_\infty \leq e^a \right\} \\
&\quad\leq \mathbb P \left\{ \sigma_{k-1} \left\{ v' \in S(UF) \, : \, \|Dv'\|_\infty \leq 2e^a\right\} \geq c_\ell \exp(-\sum_{j=1}^{k-1} (a_j - a)_+)\right\} \\
&\quad\leq C_\ell' \exp(\sum_{j=1}^{k-1}(a_j - a)_+) \mathbb E \sigma_{k-1} \left\{ v' \in S(UF) \, : \, \|Dv'\|_\infty \leq 2e^a\right\} \\
&\quad \leq C_\ell'' \exp(\sum_{j=1}^{k-1}(a_j - a)_+) \, \exp(-\sum_{j=1}^\ell (a_j - a)_+)
=C_\ell'' \exp \left\{ - \sum_{j=k}^\ell (a_j - a)_+\right\}~.\qedhere\end{split} \]
\end{proof}
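It may be instructive to verify the bound in the simplest case $\ell = 2$, $k = 1$ (a direct check, not needed in the sequel): for $u \in S(F)$ the vector $Uu = (\cos\theta, \sin\theta)$ is uniform on the circle, and
\[ \|DUu\|^2 = e^{2a_1}\cos^2\theta + e^{2a_2}\sin^2\theta \leq e^{2a} \]
forces $|\cos\theta| \leq e^{-(a_1 - a)}$, an arc of measure $\asymp e^{-(a_1-a)} = \exp\{-\sum_{j=1}^{2}(a_j - a)_+\}$ (recall that $a_2 \leq a \leq a_1$), in agreement with Lemma~\ref{l:geom}.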
\section{Proof of Theorem~\ref{thm:1}}\label{s:1}
For the whole proof, we fix $\lambda \in \sigma(H)$ and $\gamma > \gamma_{*,1}(\lambda)$. Choose an auxiliary small parameter $\epsilon > 0$; eventually, we shall substitute $\epsilon = \frac{1}{100 W} (\gamma - \gamma_{*,1}(\lambda))$.
Denote
\begin{equation}\label{eq:def-omega}
\Omega_{n,\epsilon}(\lambda) = \bigcap_{1 \leq j \leq W} \bigcap _{1 \leq m_1 \leq m_2 \leq n} \Omega_{n,\epsilon}^{m_1,m_2,j}(\lambda)~,
\end{equation}
where
\[ \Omega_{n,\epsilon}^{m_1,m_2,j}(\lambda) = \left\{ \forall \lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda)) \,\,\, \sum_{i=1}^j \log s_i(\Phi_{m_1,m_2}(\lambda')) \leq (m_2 - m_1) \sum_{i=1}^j \gamma_i(\lambda) + 2\epsilon j n \right\}~. \]
From Lemma~\ref{l:upperbd} we obtain the following maximal inequality:
\begin{equation} \label{eq:p-omega}
\mathbb P(\Omega_{n,\epsilon}(\lambda))\geq 1 - C n^3 \exp(-c \epsilon^2 n)~.
\end{equation}
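Let us sketch, for the reader's convenience, how this follows from Lemma~\ref{l:upperbd} (ignoring, in this sketch, the restriction $\epsilon \leq 1$ in the lemma): by stationarity of (\ref{eq:model}) the lemma applies to each window $\Phi_{m_1,m_2}$, and the allowed deviation $2\epsilon j n$ on a window of length $m_2 - m_1$ corresponds to the effective parameter $\epsilon n/(m_2-m_1) \geq \epsilon$, whence, by a union bound over the at most $W n^2$ events in (\ref{eq:def-omega}),
\[ 1 - \mathbb P(\Omega_{n,\epsilon}(\lambda)) \leq \sum_{1 \leq j \leq W} \, \sum_{1 \leq m_1 \leq m_2 \leq n} C (m_2 - m_1) \exp\Big(-c\, \frac{\epsilon^2 n^2}{m_2-m_1}\Big) \leq C' n^3 \exp(-c \epsilon^2 n)~. \]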
Let
\begin{equation}\label{eq:def-F0} F_0 = \left\{ \binom{v_1}{0} \, : \, v_1 \in \mathbb R^W \right\} \subset \mathbb R^{2W}\end{equation}
be the space of initial conditions. Denote:
\begin{equation}\label{eq:def-fast}
\operatorname{Fast}_{n,\epsilon}(\gamma, \lambda) = \left\{ \lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda)) \,\, : \,\,
\exists v \in S(F_0)~, \,\, \| \Phi_n(\lambda') v\| \leq e^{-n \gamma}\right\}~,\end{equation}
so that
\[ \operatorname{Fast}^+\big(\gamma, (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))\big) \subset \limsup_{n \to \infty} \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)~. \]
We shall prove that for sufficiently small $\epsilon$
\begin{equation}\label{eq:est-fast-1}\mathbb{P} \left\{ \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda) \neq \varnothing \right\} \leq C e^{-cn}~; \end{equation}
by the Borel--Cantelli lemma, this estimate will imply that almost surely
\[ \operatorname{Fast}^+\big(\gamma, (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))\big) = \varnothing \]
and thus $\gamma_*^+ \leq\gamma$.
\medskip\noindent
The proof of (\ref{eq:est-fast-1}) rests on two claims: a propagation estimate and a single-energy bound. Set $\eta = n^{-1} e^{-n(\gamma + \gamma_1(\lambda) + 4 \epsilon)}$.
\begin{cl}\label{cl:2}
On the event $\Omega_{n,\epsilon}(\lambda)$,
\begin{equation}\label{eq:propag-1}
\left. \begin{split} \lambda', \lambda'' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))&\\
\lambda' \in \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)& \\
|\lambda'' - \lambda'| \leq\eta& \end{split} \right\} \Longrightarrow \lambda'' \in \operatorname{Fast}_{n,\epsilon}(\gamma - \frac{\log 2}{n}, \lambda)~.
\end{equation}
\end{cl}
\begin{proof} On $\Omega_{n,\epsilon}(\lambda)$, we have
\begin{equation}\label{eq:16.5} \lambda', \lambda'' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))
\Longrightarrow \| \Phi_n(\lambda') - \Phi_n(\lambda'') \| \leq n e^{n (\gamma_1(\lambda) + 4\epsilon)} |\lambda ' - \lambda''|~, \end{equation}
hence, by the choice of $\eta$, for $|\lambda' - \lambda''| \leq \eta$ we have
\[ \| \Phi_n(\lambda') - \Phi_n(\lambda'')\| \leq e^{-n\gamma}~.\]
If $\lambda' \in \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)$, then there exists $v \in S(F_0)$ such that $\|\Phi_n(\lambda') v\| \leq e^{-n\gamma}$,
and then
\begin{equation}\label{eq:propag} \|\Phi_n(\lambda'') v\| \leq e^{-n\gamma} + \|\Phi_n(\lambda'')-\Phi_n(\lambda')\| \leq
e^{-n\gamma} + e^{-n\gamma} = 2e^{-n\gamma}~, \end{equation}
i.e.\ $\lambda'' \in \operatorname{Fast}_{n,\epsilon}(\gamma - \frac{\log 2}{n}, \lambda)$, as asserted.\end{proof}
\begin{cl}\label{cl:1} For any $\gamma>0$, $n \geq n_0$, and $\lambda'' \in (\lambda - r_\epsilon(\lambda), \lambda+r_\epsilon(\lambda))$
\begin{equation}\label{eq:singlelam-1}
\begin{split}
&\mathbb P \left\{ \lambda'' \in \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)~, \, \omega \in \Omega_{n,\epsilon}(\lambda) \right\} \\
&\qquad\leq C \exp\left\{ -n \left[ 2 \gamma + \big( (W-1)\gamma - \sum_{j=1}^{W-1} \gamma_{j}(\lambda) \big)_+ - 2W\epsilon \right]\right\}~.
\end{split}\end{equation}
\end{cl}
\begin{proof}
Let $n_0$ be as in Lemma~\ref{l:smooth}, and let $M \in \operatorname{Sp}(2W, \mathbb R)$ be a random matrix uniformly distributed according to the restriction of the Haar measure to a sufficiently large ball in operator norm. Denote $\widetilde \Phi_n(\lambda) = \Phi_{n, n_0}(\lambda)M$. According to Lemma~\ref{l:smooth}, it suffices to show that
\begin{equation}\label{eq:singlelam-1'}
\mathbb P \left\{ s_W(\widetilde \Phi_n(\lambda'')|_{F_0}) \leq e^{-\gamma n}~, \, \omega \in \Omega_{n,\epsilon}(\lambda) \right\}
\leq (\text{RHS of (\ref{eq:singlelam-1})})~.
\end{equation}
Introduce the singular value decomposition
\[ \Phi_{n, n_0}(\lambda'') = U_{n, n_0}(\lambda'') \Sigma_{n, n_0}(\lambda'') V_{n, n_0}(\lambda'')^*~, \quad
M =U \Sigma V^*~,\]
so that
\[\widetilde\Phi_n(\lambda'') = U_{n, n_0}(\lambda'') \Sigma_{n, n_0}(\lambda'') \left[ V_{n, n_0}(\lambda'')^*U \right] \Sigma V^*~,\]
and let $F_1 = \Sigma V^* F_0$. If $\| \widetilde\Phi_n(\lambda'') v_0 \| \leq e^{-n\gamma}$ for some $v_0 \in S(F_0)$, then
\begin{equation}\label{eq:ev1} \| \Sigma_{n, n_0}(\lambda'') \left[ V_{n, n_0}(\lambda'')^*U\right] v_1 \| \leq e^{-n\gamma + C_1} \end{equation}
for $v_1 = \Sigma V^* v_0 / \| \Sigma V^* v_0 \| \in S(F_1)$. Note that $\left[ V_{n, n_0}(\lambda'')^*U\right]$ is distributed uniformly on the compact symplectic group, and therefore its action on any fixed vector on the sphere is distributed uniformly on the sphere.
On the event $\Omega_{n,\epsilon}(\lambda)$, the numbers $a_j = \frac{1}{n} \log s_j(\Phi_{n, n_0}(\lambda''))$ satisfy
\[ a_{2W+1-j} = - a_j~, \quad \sum_{i=1}^j a_i \leq (1 - n_0/n) \sum_{i=1}^j \gamma_i(\lambda) + 2\epsilon j \leq \sum_{i=1}^j \gamma_i(\lambda) + 2\epsilon W \quad (1 \leq j \leq W)~. \]
Therefore
\[\begin{split} \sum_{j=1}^{W+1} (\gamma - a_j)_+ &\geq 2 \gamma +\sum_{j=1}^{W-1} (\gamma - a_j)_+
\\
& \geq 2 \gamma + (W-1) (\gamma - \frac{1}{W-1} \sum_{j=1}^{W-1} a_j)_+
\geq 2 \gamma + ((W-1)\gamma - \sum_{j=1}^{W-1} \gamma_j(\lambda))_+ - 2 \epsilon W~,\end{split}\]
whence
\[\sum_{j=1}^{W+1} (\gamma - \frac{C_1}{n} - a_j)_+ \geq 2 \gamma + ((W-1)\gamma - \sum_{j=1}^{W-1} \gamma_j(\lambda))_+ - 2 \epsilon W - \frac{2C_1W}{n}~. \]
According to Lemma~\ref{l:geom},
\[ \mathbb P\left\{ \text{(\ref{eq:ev1})} \text{ and } \omega \in \Omega_{n,\epsilon}(\lambda) \right\} \leq
C_2 \exp\left\{ - n \left[ 2 \gamma + ((W-1)\gamma - \sum_{j=1}^{W-1} \gamma_j(\lambda))_+ - 2 \epsilon W \right] \right\}~, \]
as claimed in (\ref{eq:singlelam-1'}).
\end{proof}
Now we combine Claim~\ref{cl:2} with Claim~\ref{cl:1} (applied to $\gamma - \log2 /n$ in place of $\gamma$) and (\ref{eq:p-omega}), and use the Fubini theorem:
\[\begin{split} &\mathbb P \left\{ \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda) \neq \varnothing \right\} \\
&\quad \leq \big(1 - \mathbb P (\Omega_{n,\epsilon}(\lambda))\big) + 2 C r_\epsilon(\lambda)\, \eta^{-1} \exp\left\{ - n \left[ 2 \gamma + ((W-1)\gamma - \sum_{j=1}^{W-1} \gamma_j(\lambda))_+ - 2 \epsilon W \right] \right\} \\
&\quad \leq C n^3 e^{-cn} + C n \exp\left\{ - n \left[ - \gamma_1(\lambda) + \gamma + ((W-1)\gamma - \sum_{j=1}^{W-1} \gamma_j(\lambda))_+ - 4 \epsilon W \right] \right\}~.
\end{split}\]
For $\epsilon = \frac{1}{100 W} (\gamma - \gamma_{*,1}(\lambda))$, this expression tends to zero exponentially with $n$: the function $\gamma' \mapsto \gamma' + ((W-1)\gamma' - \sum_{j=1}^{W-1} \gamma_j(\lambda))_+$ is non-decreasing with slope at least one and, by the definition of $\gamma_{*,1}(\lambda)$, takes the value $\gamma_1(\lambda)$ at $\gamma' = \gamma_{*,1}(\lambda)$, so that the expression in the square brackets is at least $(\gamma - \gamma_{*,1}(\lambda)) - 4\epsilon W = \frac{24}{25}(\gamma - \gamma_{*,1}(\lambda)) > 0$. This concludes the proof of (\ref{eq:est-fast-1}) and of Theorem~\ref{thm:1}.\qed
\section{Proof of Theorem~\ref{thm:2}}\label{s:2}
Let $\gamma > \frac{1}{3} (2\gamma_1(\lambda)+\gamma_2(\lambda))$, and let $\epsilon = \frac{1}{100} \min(\gamma_1(\lambda) - \gamma_2(\lambda), \gamma_2(\lambda))$. We keep the notation $F_0$ (the space of initial conditions, (\ref{eq:def-F0})), $\Omega_{n,\epsilon}(\lambda)$ (the event on which the products of singular values admit an upper bound, (\ref{eq:def-omega})), and $\operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)$ (the set of energies $\lambda'$ in the vicinity of $\lambda$ for which there is a fast-decaying solution, (\ref{eq:def-fast})) from the previous section. As there, our goal is to prove (\ref{eq:est-fast-1}), i.e.\ that $\operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)$ is empty outside an event of exponentially small probability.
Denote by $u_j(\lambda')$ ($j=1,2,3,4$) the right singular vectors of $\Phi_n(\lambda')$ (i.e.\ the eigenvectors of $\Phi_n(\lambda')^* \Phi_n(\lambda')$; the choice of the direction of the vectors will be specified later), and by $P_{F_0}$ the orthogonal projection onto $F_0$. Let
\begin{align}
& \eta = \frac{1}{n} \exp(-n(\gamma - \gamma_2(\lambda)))~, \\
&A^+ = \left\{ \lambda'' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda)) \, : \, \| P_{F_0} u_1(\lambda'') \| \leq C \exp(-n(2\gamma-\gamma_1-\gamma_2-4\epsilon))\right\}~, \label{eq:def-A+}\end{align}
where $C>0$ will be specified shortly. The required estimate (\ref{eq:est-fast-1}) follows from (\ref{eq:p-omega}) and the following two ingredients: a propagation estimate
\begin{equation}\text{on $\Omega_{n,\epsilon}(\lambda)$}: \Big[ \lambda' \in A \overset{\text{def}}{=} \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda)~, \,\, |\lambda'' - \lambda'| < \eta~, \,\,
|\lambda'' - \lambda| < r_\epsilon(\lambda) \Big] \Longrightarrow \lambda'' \in A^+ \label{eq:propag-2}\end{equation}
which replaces Claim~\ref{cl:2}, and the single-energy bound
\begin{equation}|\lambda'' - \lambda| < r_\epsilon(\lambda) \Longrightarrow \mathbb{P} \left\{ \lambda'' \in A^+\right\} \leq C' e^{-\epsilon n} \eta \label{eq:est-2}
\end{equation}
which replaces Claim~\ref{cl:1}.
To prove (\ref{eq:propag-2}), we first observe that $\lambda'\in A$ implies that $\frac1n \log s_1(\Phi_n(\lambda')) \geq \gamma$ (the smallest singular value of the symplectic matrix $\Phi_n(\lambda')$ equals $s_1(\Phi_n(\lambda'))^{-1}$), and hence on $\Omega_{n,\epsilon}(\lambda)$
\begin{equation}\label{eq:ubd-emp}
\begin{split}
\frac1n \log s_2(\Phi_n(\lambda')) &= \frac1n \big(\log s_1(\Phi_n(\lambda')) + \log s_2(\Phi_n(\lambda'))\big)- \frac1n \log s_1(\Phi_n(\lambda')) \\
&\leq \gamma_1(\lambda) + \gamma_2(\lambda) - \gamma + 4\epsilon~.
\end{split}\end{equation}
Further, $\lambda' \in A$ implies that there exists $v \in S(F_0)$ such that for $j=1,2,3$ (expanding $\|\Phi_n(\lambda')v\|^2 = \sum_i s_i(\Phi_n(\lambda'))^2 \langle v, u_i(\lambda')\rangle^2$ and using $s_j \geq s_3 = s_2^{-1}$)
\[ |\langle v, u_j(\lambda') \rangle | \leq \frac{\exp(- n \gamma)}{s_j(\Phi_n(\lambda'))} \leq
\exp(- n \gamma) s_2(\Phi_n(\lambda')) \leq \exp(- n (2\gamma - \gamma_1(\lambda) - \gamma_2(\lambda) - 4\epsilon) )~.\]
These inequalities imply that (for the appropriate choice of signs)
\[ \| v - u_4(\lambda') \| \leq C_1 \exp(- n (2\gamma - \gamma_1(\lambda) - \gamma_2(\lambda) - 4\epsilon) )~.\]
Now we use the symplectic rotation $J = \left( \begin{array}{cc} 0 & - \mathbbm{1} \\ \mathbbm{1} & 0 \end{array} \right)$. The matrix $\Phi_n(\lambda')$ is symplectic, hence (up to sign) $J u_4(\lambda') = u_1(\lambda')$. Thus
\[ \| J v - u_1(\lambda') \| \leq C_1 \exp(- n (2\gamma - \gamma_1(\lambda) - \gamma_2(\lambda) - 4\epsilon) )~.\]
On the other hand, $F_0 \subset \mathbb R^{2W}$ is a Lagrangian subspace (i.e.\ $F_0 = (J F_0)^\perp$), hence $J v \perp F_0$. Consequently,
\begin{equation}\label{eq:proj-lam'} \| P_{F_0} u_1(\lambda') \|\leq C_1 \exp(- n (2\gamma - \gamma_1(\lambda) - \gamma_2(\lambda) - 4\epsilon) )~.\end{equation}
To complete the proof of (\ref{eq:propag-2}), we need to show that the estimate (\ref{eq:proj-lam'}) does not deteriorate too fast as we vary $\lambda'$. If $|\lambda'' - \lambda'| \leq \eta$ and $|\lambda'' - \lambda| \leq r_\epsilon(\lambda)$, we have (still on $\Omega_{n,\epsilon}(\lambda)$, cf.\ (\ref{eq:16.5})):
\[ \| \Phi_n(\lambda'') - \Phi_n(\lambda') \| \leq \eta n \exp(n(\gamma_1(\lambda)+4\epsilon))~, \]
whence by Wedin's perturbation bound for singular vectors \cite{Wed}
\begin{equation}\label{eq:pert-u1}\begin{split} \| u_1(\lambda'') - u_1(\lambda')\| &\leq C_2 \frac{\eta n e^{n(\gamma_1(\lambda)+4\epsilon)}}{s_1(\Phi_n(\lambda')) - s_2(\Phi_n(\lambda'))} \\
&\leq \frac{2C_2 \eta n e^{n(\gamma_1(\lambda)+4\epsilon)}}{e^{n\gamma}} = 2C_2 \exp(-n(2\gamma - \gamma_1(\lambda) - \gamma_2(\lambda) - 4\epsilon ) )~. \end{split}\end{equation}
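Here the denominator was estimated using the spectral gap $s_1(\Phi_n(\lambda')) - s_2(\Phi_n(\lambda')) \geq \frac12 e^{n\gamma}$, which follows from $s_1(\Phi_n(\lambda')) \geq e^{n\gamma}$, the upper bound (\ref{eq:ubd-emp}) and the choice of $\epsilon$; the last equality is the direct substitution of $\eta$:
\[ \eta\, n\, e^{n(\gamma_1(\lambda)+4\epsilon)}\, e^{-n\gamma} = e^{-n(\gamma-\gamma_2(\lambda))}\, e^{n(\gamma_1(\lambda)+4\epsilon)}\, e^{-n\gamma} = e^{-n(2\gamma - \gamma_1(\lambda) - \gamma_2(\lambda) - 4\epsilon)}~. \]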
From (\ref{eq:proj-lam'}) and (\ref{eq:pert-u1}) we obtain that $\lambda'' \in A^+$, provided that we set $C = C_1 + 2C_2$ in (\ref{eq:def-A+}). This concludes the proof of (\ref{eq:propag-2}).
To prove (\ref{eq:est-2}), we use once again that if $U$ is uniformly distributed on the compact symplectic group $\operatorname{Sp}(2W, \mathbb R) \cap \operatorname{SO}(2W)$, then each column of $U$ is uniformly distributed on the unit sphere. Thus, according to Lemma~\ref{l:smooth}, the probability density of $u_1(\lambda'')$ with respect to the Haar measure on $S(\mathbb R^{2W})$ is bounded uniformly in $n \geq n_0$. Hence by (\ref{eq:archimedes})
\[ \begin{split} \mathbb P \left\{ \| P_{F_0} u_1(\lambda'') \| \leq C \exp(-n(2\gamma-\gamma_1-\gamma_2-4\epsilon)) \right\} &\leq C_4 \exp(-2n(2\gamma-\gamma_1-\gamma_2-4\epsilon)) \\
&\leq C_5 \exp(-\epsilon n) \eta~.\end{split}\]
This concludes the proof of (\ref{eq:est-2}) and of the theorem. \qed
\section{On the uniform convergence to the Lyapunov exponents}\label{s:concl}
A better understanding of the deviations of $\frac{1}{n} \log s_j(\Phi_n(\lambda))$ from their limiting values $\gamma_j$ would allow us to strengthen the conclusion of Theorems~\ref{thm:1} and \ref{thm:2}, possibly up to the conjectured $\gamma_*^+ \overset{\text{conj}}{=} \gamma_W$, as we now discuss.
Recall the following result from \cite{G75} pertaining to $W=1$: with probability one, the set $\Lambda_{\frac12}$, where
\[ \Lambda_{\tau} = \left\{ \lambda \in \mathbb R\, : \, \liminf_{n \to \infty} \frac{1}{n} \log \|\Phi_n(\lambda) \| \leq \tau\gamma_1(\lambda)\right\}~, \quad \tau \in [0,1]~, \]
is dense in $\sigma(H)$. Subsequently, it was found that also the (possibly) smaller set $\Lambda_{0}$
is almost surely dense in $\sigma(H)$. Recently, a general framework encompassing and generalising these results was developed by Gorodetski and Kleptsyn \cite{GorKl}, who also provided detailed information on the structure of the exceptional sets $\Lambda_\tau$, and showed that
\begin{equation}\label{eq:gorkl}
\mathbb P \left\{ \forall \lambda \in \mathbb R\, : \, \limsup_{n \to \infty} \frac{1}{n} \log \|\Phi_n(\lambda) \| = \gamma_1(\lambda)\right\} = 1~. \end{equation}
We are not aware of a published reference discussing the extension of this problem to $W > 1$. However, it is plausible that the arguments developed in the aforementioned works could yield that
\[ \Lambda_{0}^{(W)} = \left\{ \lambda \in \mathbb R\, : \, \liminf_{n \to \infty} \frac{1}{n} \log s_W(\Phi_n(\lambda)) =0 \right\} \]
is dense in $\sigma(H)$. It is not clear to us what the right counterpart of this statement would be for $1 \leq j \leq W-1$. If the higher exponents were to exhibit regular behaviour, i.e.\
\begin{equation}\label{eq:??}
\text{if it were true that \,} \mathbb P \left\{ \forall \lambda \in \sigma(H) \quad \lim_{n \to \infty} \frac{1}{n} \log s_j(\Phi_n(\lambda)) = \gamma_j(\lambda) \right\} =1~, \quad 1 \leq j \leq W-1~, \end{equation}
one could significantly improve the results of the current paper: the argument of Theorem~\ref{thm:1} would yield $\gamma_*^+ \leq \gamma_{*,2}$, where $\gamma_{*,2}$ is the solution of
\[ \gamma + \sum_{j=1}^{W-1} \left( \gamma - \gamma_j\right)_+ = \gamma_1~, \]
whereas the argument of Theorem~\ref{thm:2} would establish the optimal bound $\gamma_*^+ = \gamma_W$ (for arbitrary $W$, cf.\ the proof of Theorem~\ref{thm:3} below). If (\ref{eq:??}) is false, it would be helpful to understand
\begin{equation}\label{eq:???}
\text{is it true that \,} \mathbb P \left\{ \forall \lambda \in \sigma(H) \quad \limsup_{n \to \infty} \frac{1}{n} \log s_j(\Phi_n(\lambda)) \leq \gamma_j(\lambda) \right\} =1~, \quad 1 \leq j \leq W~. \end{equation}
Following Craig and Simon \cite{CS} (cf.\ Lemma~\ref{l:upperbd}), note that (\ref{eq:???}) holds (unconditionally) for $j=1$. Also (according to the same lemma) (\ref{eq:??}) would imply (\ref{eq:???}).
\medskip
In this section, we prove the following extension of (\ref{eq:gorkl}) to $2W$-dimensional cocycles. We confine ourselves to the setting of transfer matrices, which is used in the proof of Theorem~\ref{thm:3}. Denote
\[ \operatorname{Dev}_{n}(\lambda) = \max_{1 \leq j \leq W} \left|\frac1n \log s_j(\Phi_n(\lambda)) - \gamma_j(\lambda)\right|~.\]
\begin{prop}\label{p:gorkl} Assume that $V(n)$ satisfy (\ref{eq:model}), and that (\ref{eq:zcond}) holds for every $\lambda \in [a,b]$. Then for any $\epsilon > 0$ there exist $C> 0$ and $c > 0$ such that
\begin{equation}
\mathbb P \left\{ \sup_{\lambda \in [a,b]} \min\left(\operatorname{Dev}_n(\lambda), \operatorname{Dev}_{n^2}(\lambda)\right) \geq \epsilon \right\} \leq C e^{-cn}~.
\end{equation}
In particular,
\[ \mathbb P \left\{ \sup_{\lambda \in [a,b]} \liminf_{n \to \infty} \operatorname{Dev}_n(\lambda)= 0 \right\} = 1~. \]
\end{prop}
\begin{rmk} Here $n^2$ can be replaced with any function tending to infinity faster than linearly.
\end{rmk}
\begin{proof}[Proof of Proposition~\ref{p:gorkl}] Fix $\lambda \in \mathbb R$; let $\epsilon > 0$, and choose $r_\epsilon(\lambda)$ as in (\ref{eq:def-r}). It will suffice to show that there exist $C, c$ such that
\begin{equation}\label{eq:need-gk}
\mathbb P \left\{ \sup_{|\lambda' - \lambda| < r_\epsilon(\lambda)} \min\left(d_n(\lambda'), d_{n^2}(\lambda')\right) \geq 10 W \epsilon \right\} \leq C e^{-cn}~,
\end{equation}
where
\begin{equation}\label{eq:dn}d_n(\lambda') = \max_{1 \leq j \leq W} \left|\frac1n \log s_j(\Phi_n(\lambda')) - \gamma_j(\lambda)\right|~.\end{equation}
By the Chebyshev inequality, one can choose $\kappa > 0$ such that
\[ \mathbb P (\Omega^{(1)}_n) \geq 1 - e^{-n}~, \quad \Omega^{(1)}_n = \left\{ \sup_{|\lambda' - \lambda| < r_\epsilon(\lambda)} \|\Phi_n(\lambda')\| \leq e^{\kappa n} \right\}~.\]
On $\Omega^{(1)}_n$,
\[ \left|\log s_j(\Phi_{n^2}(\lambda')) - \log s_j(\Phi_{n^2, n}(\lambda')) \right| \leq \kappa n~,\]
therefore for sufficiently large $n$
\[ d_{n^2}(\lambda') \leq \epsilon + \tilde d_{n^2}(\lambda')~, \quad \tilde d_{n^2}(\lambda') = \max_{1 \leq j \leq W} \left|\frac1{n^2-n} \log s_j(\Phi_{n^2,n}(\lambda')) - \gamma_j(\lambda)\right|~. \]
Here $\tilde d_{n^2}(\cdot)$ is independent of $d_n(\cdot)$. Also recall from Lemma~\ref{l:upperbd} that $\mathbb P(\Omega_n^{(2)}) \geq 1 - C e^{-cn}$, where
\[ \Omega_n^{(2)} = \left\{ \forall \lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))~, \, 1 \leq j \leq W\,\, : \,\, \frac{1}{n} \sum_{i=1}^j \log s_i(\Phi_n(\lambda')) \leq \sum_{i=1}^j \gamma_i(\lambda) + 2j \epsilon \right\} ~.\]
Now, Lemma~\ref{l:largedev} and Remark~\ref{rem:largedev} imply that for each $\lambda'\in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))$
\begin{equation}\label{eq:fromldbd}\mathbb P \left\{ |\frac{1}{n} \log |\Phi_n(\lambda')_{11}| - \gamma_1(\lambda) | \geq 2\epsilon \right\} \leq C \exp(-c n)~. \end{equation}
Note that $\Phi_n(\lambda')_{11}$ is a polynomial in $\lambda'$ of degree $n$, therefore the set $A_n^{(1)}$ of $\lambda' \in (\lambda - r_\epsilon(\lambda),\lambda + r_\epsilon(\lambda))$ for which $|\frac{1}{n} \log |\Phi_n(\lambda')_{11}| - \gamma_1(\lambda) | \geq 2\epsilon$ is a union of at most $n$ intervals. Applying the same argument to the wedge powers $\Phi_n(\lambda')^{\wedge j}$, $2 \leq j \leq W$, we construct the sets $A_n^{(2)}, \cdots, A_n^{(W)}$ such that $A_n^{(j)}$ is a union of $\leq jn$ intervals,
\begin{equation}\label{eq:prob-An} \mathbb P \left\{ \lambda' \in A_n^{(j)}\right\} \leq C e^{-cn}~, \end{equation}
and
\[ \lambda' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda)) \setminus A_n^{(j)}
\Longrightarrow \sum_{i=1}^j \log s_i(\Phi_n(\lambda')) \geq \sum_{i=1}^j \gamma_i(\lambda) - 2j \epsilon~.\]
We also construct similar sets $A_{n^2,n}^{(j)}$ corresponding to $\Phi_{n^2,n}(\lambda')$, and let
\[ A_n = \bigcup_{j=1}^W A_n^{(j)}~, \quad A_{n^2,n} = \bigcup_{j=1}^W A_{n^2,n}^{(j)}~. \]
The set $A_n$ is a union of $\leq n W ( W+1)/2$ intervals, whereas $A_{n^2,n}$ is a union of $\leq n^2 W ( W+1)/2$ intervals. If these two sets intersect, then either one of the edges of the intervals comprising $A_n$ lies in $A_{n^2,n}$, or vice versa. Invoking (\ref{eq:prob-An}), we see that
\[\mathbb P (\Omega_n^{(3)}) \geq 1 - C e^{-cn}~, \quad \text{where} \quad \Omega_n^{(3)} = \left\{ A_n \cap A_{n^2, n} = \varnothing \right\} ~. \]
Observe that on $\Omega_n^{(1)} \cap \Omega_n^{(2)} \cap \Omega_n^{(3)}$, for each $\lambda'$, either $\frac{1}{n} \log s_j(\Phi_n(\lambda'))$ is close to $\gamma_j(\lambda)$ for all $j$, or this holds true for $\frac{1}{n^2} \log s_j(\Phi_{n^2}(\lambda'))$. This concludes the proof of the proposition.
\end{proof}
\section{Proof of Theorem~\ref{thm:3}}\label{s:3}
We keep the notation from the previous sections. Let $\gamma > \gamma_W(\lambda)$, and let $\epsilon = \frac{1}{100 W^2} (\gamma - \gamma_W(\lambda))$. It suffices to show that
\begin{equation}\label{eq:cl-3}
\mathbb P \left\{ \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda) \cap \operatorname{Fast}_{n^2,\epsilon}(\gamma, \lambda) \neq \varnothing \right\}
\leq C e^{-cn}~.
\end{equation}
To keep the notation consistent with the previous sections, it will be convenient to rely on the estimate (\ref{eq:need-gk}) rather than on the conclusion of Proposition~\ref{p:gorkl}. Denote
\[ \operatorname{Reg}_{n,\epsilon}(\lambda) = \left\{ \lambda' \in (\lambda - r_\epsilon(\lambda),\lambda + r_\epsilon(\lambda)) \, : \,
d_n(\lambda') < 10W\epsilon\right\}\]
where $d_n$ are as in (\ref{eq:dn}). From (\ref{eq:need-gk}),
\[ \mathbb P\left\{ \operatorname{Reg}_{n,\epsilon}(\lambda) \cup \operatorname{Reg}_{n^2,\epsilon}(\lambda) = (\lambda - r_\epsilon(\lambda),\lambda + r_\epsilon(\lambda))\right\} \geq 1 - C e^{-cn}~. \]
Therefore (\ref{eq:cl-3}) and the theorem are implied by (\ref{eq:p-omega}) and the following estimate:
\begin{equation}\label{eq:est-3}\mathbb P \left\{\operatorname{Fast}_{n,\epsilon}(\gamma, \lambda) \cap \operatorname{Reg}_{n,\epsilon}(\lambda) \neq \varnothing; \omega \in \Omega_{n,\epsilon}(\lambda) \right\} \leq C e^{-cn}~.
\end{equation}
The proof of (\ref{eq:est-3}) is similar to the argument in Section~\ref{s:2}. Denote
\[ A = \operatorname{Fast}_{n,\epsilon}(\gamma, \lambda) \cap \operatorname{Reg}_{n,\epsilon}(\lambda)~, \quad \eta = \frac1n \exp(- (\gamma - \gamma_W(\lambda) + 20W^2 \epsilon )n)~, \]
and let $A^+$ be the set of $\lambda'' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))$ for which there exists
\begin{equation}\label{eq:w-3}w \in S(\operatorname{span}(u_1(\lambda''), \cdots, u_{W-1}(\lambda'')))~, \,\, \| P_{F_0} w \| \leq C \exp(-n (\gamma - \gamma_W(\lambda) - 10 W \epsilon))~, \end{equation}
where $C>0$ will be specified later.
We claim that on $\Omega_{n,\epsilon}(\lambda)$ we have the propagation estimate
\begin{equation}\label{eq:propag-3}\lambda' \in A~, \,\, |\lambda'' - \lambda'| < \eta~,\,\,|\lambda'' - \lambda | < r_\epsilon(\lambda) \Longrightarrow \lambda'' \in A^+~,\end{equation}
and that for each $\lambda'' \in (\lambda - r_\epsilon(\lambda), \lambda + r_\epsilon(\lambda))$
\begin{equation}\label{eq:prob-3}
\mathbb P \left\{ \lambda'' \in A^+ \right\} \leq C' \exp(-2n (\gamma - \gamma_W(\lambda) - 10 W \epsilon))~.
\end{equation}
These two claims imply (\ref{eq:est-3}) and thus conclude the proof of the theorem.
To prove (\ref{eq:propag-3}), we observe that if $\lambda' \in A$, there exists $v \in S(F_0)$ such that for all $1 \leq j \leq W+1$
\[ | \langle v, u_j(\lambda') \rangle| \leq \frac{\exp(-n \gamma)}{s_j(\Phi_n(\lambda'))}
\leq \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon))~, \]
where $u_j(\lambda')$ is the $j$-th right singular vector of $\Phi_n(\lambda')$, and thus there exists $\theta \in S(\mathbb R^{W-1})$ such that
\[ \| v - \sum_{j=1}^{W-1} \theta_j u_{2W+1-j}(\lambda')\| \leq C_1 \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon))~.\]
Now let $w = Jv$, where $J$ is the symplectic rotation. Then $w \perp F_0$, and
\begin{equation}\label{eq:w-prop}\| w - \sum_{j=1}^{W-1} \theta_j u_{j}(\lambda')\| \leq C_1 \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon))~. \end{equation}
Applying Wedin's bound to the $j$-th wedge power of $\Phi_n(\lambda')$, we have:
\[\begin{split}
&\|u_1(\lambda'') \wedge u_2(\lambda'') \wedge \cdots \wedge u_j(\lambda'') - u_1(\lambda') \wedge u_2(\lambda') \wedge \cdots \wedge u_j(\lambda') \| \\
&\quad \leq \frac{C_2 \eta n e^{(\gamma_1(\lambda) + \cdots + \gamma_j(\lambda) + 4 W \epsilon)n}}{e^{(\gamma_1(\lambda) + \cdots + \gamma_j(\lambda) -10W^2 \epsilon)n}} \leq C_2 \eta n e^{12 W^2 \epsilon n}
\leq C_2 e^{- n(\gamma - \gamma_W(\lambda) - 10 W \epsilon)}~,\end{split}\]
and consequently
\[ \| u_j(\lambda'') - u_j(\lambda')\| \leq C_3 e^{- n(\gamma - \gamma_W(\lambda) - 10 W \epsilon)}~.\]
This and (\ref{eq:w-prop}) imply
\begin{equation}\label{eq:w-prop'}\| w - \sum_{j=1}^{W-1} \theta_j u_{j}(\lambda'')\| \leq C_4 \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon))~, \end{equation}
i.e.\ $\lambda '' \in A^+$ (if $C$ in (\ref{eq:w-3}) is chosen appropriately), as claimed in (\ref{eq:propag-3}).
Now we prove (\ref{eq:prob-3}). If
\begin{equation}\label{eq:prop-alpha}
\| P_{F_0} \sum_{j=1}^{W-1} \theta_j u_{j}(\lambda'') \| \leq C \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon)) \end{equation}
for a certain $\theta \in S(\mathbb R^{W-1})$, then
\[\| P_{F_0} \sum_{j=1}^{W-1} \theta_j' u_{j}(\lambda'') \| \leq 2C \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon)) \]
for all $\theta'$ in a neighbourhood of $\theta$ on $S(\mathbb R^{W-1})$; the $(W-2)$-dimensional volume of this neighbourhood is bounded from below by
\[ c \exp(-(W-2) n(\gamma - \gamma_W(\lambda) - 10 W \epsilon))~. \]
On the other hand, (\ref{eq:archimedes}) implies that for each $\theta' \in S(\mathbb R^{W-1})$
\[ \mathbb P \left\{ \| P_{F_0} \sum_{j=1}^{W-1} \theta'_j u_{j}(\lambda'') \| \leq 2C \exp(-n(\gamma - \gamma_W(\lambda) - 10 W \epsilon)) \right\} \leq C' \exp(-n W(\gamma - \gamma_W(\lambda) - 10 W \epsilon))~. \]
Therefore the probability that there exists $\theta$ satisfying (\ref{eq:prop-alpha}) is at most
\[ \frac{C'}{c} \exp(-2n (\gamma - \gamma_W(\lambda) - 10 W \epsilon))~,\]
as claimed. This concludes the proof of (\ref{eq:prob-3}) and of the theorem.
\qed
\paragraph{Acknowledgement} We are grateful to Yotam Hendel and Itay Glazer for explaining to us the argument used in the proof of Lemma~\ref{l:smooth}, and to Fulvio Ricci for helpful comments on the work \cite{RS}.
\section{Introduction}
The Palatini first order formulation is a convenient starting point
for the standard Hamiltonian approach to general relativity by
Arnowitt, Deser and Misner
\cite{Deser:1959zza,Arnowitt:1959ah,Arnowitt1960}. It is in this
framework that appropriate surface integrals at spatial infinity for
energy-momentum have originally been constructed
\cite{Arnowitt1960c,R.ArnowittS.DeserC.W.Misner1961} and that the
Hamiltonian formulation is presented in
\cite{Arnowitt:1962aa,Misner:1970aa}.
Conserved quantities in first order formulations of general relativity
have recently been investigated from a variety of perspectives, see
e.g.~\cite{Hehl:1994ue,Julia:1998ys,Julia:2000er,Julia:2002df,%
Ashtekar:2008jw,JacobsonMohd2015,CorichiRubalcavaVukasinac2014,%
Lehner:2016vdi,Korovin:2017xqu,DePaoli:2018erh,Oliveri:2019gvm}. In
the approach that we follow here
\cite{Barnich:2001jy,Barnich:2003xg,Barnich:2007bf}, one constructs
conserved co-dimension $2$ forms in the linearized theory from the
weakly vanishing Noether currents associated to gauge
symmetries. Indeed, one can show in the linearized theory that there
are conserved co-dimension $2$ forms for each reducibility parameter
of the background. The latter correspond to the Killing vectors of the
background metric in general relativity and one can show that there
are no other conserved co-dimension $2$ forms which are non-trivial.
The method has been applied recently to first order formulations of
general relativity where the variables are either a vielbein and a
Lorentz connection in coordinate basis \cite{Barnich:2016rwk}, or a
vielbein and the spin coefficients of the Newman-Penrose formalism
\cite{Barnich:2019vzx}. Two additional general results have been added
in that context: a general expression for conserved co-dimension $2$
forms applicable in a generic first order theory and a detailed
discussion of the breaking term, i.e., the flux term that appears on the
right hand side of what would be a conservation law when one uses
general gauge parameters rather than reducibility parameters of the
background.
Two subtleties have to be faced when applying this construction to the
Palatini formulation of general relativity. The first is that the
theory is not first order in the sense that the transformation of the
connection under infinitesimal diffeomorphisms involves second order
derivatives of the vector field parametrizing these
diffeomorphisms. This leads to a weakly vanishing Noether current that
is not first order. It turns out however that all higher order terms
are contained in a total derivative and such terms are easily handled
by the contracting homotopy operator used to build the co-dimension
$2$ forms. As a consequence, the construction is as straightforward as
in other first order approaches to general relativity. The second
subtlety is that, as for other discussions of symmetries on the level
of an action principle, all computations are performed off-shell. For
the Palatini formalism, this means that one has to deal with
non-metricity.
The paper is organized as follows. We start with a very brief review
of how to construct conserved co-dimension 2 forms out of weakly
vanishing Noether currents. More details can be found in the original
literature cited above and an extensive recent summary has been
provided in \cite{Barnich:2019vzx}. We then discuss various identities
satisfied by the curvature tensor in the general context of a
non-holonomic frame including torsion and non-metricity because these
are relevant for the Noether identities that are crucial to the
construction. We then apply these general considerations to the
particular case of the Palatini formulation for which one uses a
coordinate basis and a connection without torsion. Finally we
construct the co-dimension $2$ forms and the associated breaking
terms.
\section{Construction of co-dimension 2 forms}
\subsection{General case}
\label{sec:expl-constr}
We consider a theory with a Lagrangian $n$-form
${\mathcal L}=L\, d^nx$ in $n$-dimensional spacetime. The fields of
the variational principle are denoted by $\phi^i$. Consider a
generating set (see e.g.~\cite{Henneaux:1992ig}, chapter 3) of non
trivial gauge transformations
$\delta_\epsilon \phi^i=R^i_\alpha(\epsilon^\alpha)$. One can prove
that there is an isomorphism between equivalence classes of on-shell
closed co-dimension $2$ forms and equivalence classes of reducibility
parameters $\bar f^\alpha[x,\phi]$ satisfying
$R^i_\alpha(\bar f^\alpha)\approx 0$. Equivalent co-dimension $2$
forms differ on-shell by an exact local form while equivalent sets of
reducibility parameters agree on-shell.
The relation between on-shell closed co-dimension $2$ forms and
reducibility parameters is constructive. For arbitrary gauge
parameters $f^\alpha$, a direct application of the Leibniz rule for
total derivatives leads to
\begin{equation}
R^i_\alpha(f^\alpha)\vddl{\mathcal{L}}{\phi^i}=f^\alpha
R^{+i}_\alpha\left(\vddl{\mathcal{L}}{\phi^i}\right)
+d_H S_f,
\label{eq:1}
\end{equation}
for some weakly vanishing $n-1$ form
\begin{equation}
S_f=S^{i\mu}_\alpha\left(\ddl{}{dx^\mu}\vddl{\mathcal{L}}{\phi^i},f^\alpha\right
). \label{eq:4}
\end{equation}
The $n-2$ form can be constructed by using the contracting homotopy
$\rho_H$ for the horizontal differential of the variational bi-complex
\cite{Andersonbook,Olver:1993}
\begin{equation}
\label{eq:5}
\{d_H,\rho_H\}\omega^p=\omega^p\ {\rm for}\ p<n.
\end{equation}
Indeed, the Noether identities that are associated to the generating
set of non-trivial gauge transformations are
\begin{equation}
\label{eq:6}
R^{+i}_\alpha\left(\vddl{\mathcal{L}}{\phi^i}\right)=0.
\end{equation}
For reducibility parameters $\bar f^\alpha$, \eqref{eq:1} reduces to
$d_H S_{\bar f}\approx 0$. One then shows (see section 3.3 of
\cite{Barnich:2001jy} for details) that the weakly vanishing terms on
the right hand side can be absorbed by the horizontal differential of
a ``doubly'' weakly vanishing $n-1$ form $M_{\bar f}$ on the left hand
side, leading to $d_H (S_{\bar f}+M_{\bar f})=0$. When applying the
contracting homotopy to $J_{\bar f}=S_{\bar f}+M_{\bar f}$,
\begin{equation}
\label{eq:7}
k_{\bar f}=\rho_H J_{\bar f},
\end{equation}
it then follows from \eqref{eq:5} that
\begin{equation}
\label{eq:8}
d_H k_{\bar f}= J_{\bar f}\approx 0.
\end{equation}
In the case where one can show that a set of reducibility parameters is
equivalent to a set for which $R^i_\alpha(\bar f^\alpha) =0$, the
reasoning simplifies since in this case $d_H S_{\bar f}= 0$, and the
application of \eqref{eq:5} now directly yields
$d_H k_{\bar f}= S_{\bar f}\approx 0$ with
$k_{\bar f}=\rho_H S_{\bar f}$. Similarly, in linear gauge theories,
the application of the homotopy formula to $M_{\bar f}$ gives rise to
a weakly vanishing and thus trivial $n-2$ form, which can be
omitted. In this case, we still have $k_{\bar f}=\rho_H S_{\bar f}$
but now $d_H k_{\bar f}\approx S_{\bar f}\approx 0$.
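Before proceeding, it may be useful to illustrate these definitions in the simplest example, free electromagnetism (a standard computation, sketched here with convention-dependent signs and normalizations left aside). For $\mathcal L = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu}\, d^nx$ with $F_{\mu\nu} = \d_\mu A_\nu - \d_\nu A_\mu$ and $\delta_\epsilon A_\mu = \d_\mu \epsilon$, the decomposition \eqref{eq:1} reads
\begin{equation*}
\d_\mu\epsilon\, \d_\nu F^{\nu\mu} = -\epsilon\, \d_\mu \d_\nu F^{\nu\mu}
+ \d_\mu\big(\epsilon\, \d_\nu F^{\nu\mu}\big), \qquad \vddl{L}{A_\mu} = \d_\nu F^{\nu\mu},
\end{equation*}
so that the Noether identity $\d_\mu\d_\nu F^{\nu\mu}=0$ holds identically by antisymmetry and $S^\mu_\epsilon = \epsilon\, \d_\nu F^{\nu\mu}$ vanishes weakly. The reducibility parameters are the constants $\bar\epsilon$, for which $R^i_\alpha(\bar\epsilon) = \d_\mu\bar\epsilon = 0$ exactly, so that $k_{\bar\epsilon} = \rho_H S_{\bar\epsilon}$; the resulting co-dimension $2$ form is proportional to $\bar\epsilon\, F^{\mu\nu}$, and its integral over a closed surface gives, up to normalization, the electric charge.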
\subsection{Linearized theories and asymptotics}
\label{sec:linearized-theories}
For the purposes of exposition, we focus on the Einstein-Hilbert
action in metric formulation, where a generating set of gauge
transformations corresponds to the Lie derivative of the metric,
$\delta_\xi g_{\mu\nu}={\mathcal L}_\xi g_{\mu\nu}$. In spacetime
dimensions $n\geq 3$, one can then show that all equivalence classes
of reducibility parameters admit representatives $\xi^\rho[x]$ that do
not depend on $g_{\mu\nu}$ and its derivatives. The condition that
such vectors are reducibility parameters then reduces to the Killing
equation for a generic metric. Since a generic metric does not have
Killing vectors, no non-trivial conserved $n-2$ forms can be
constructed in general relativity. However, one can linearize the
theory around a background solution $\bar g_{\mu\nu}$. A generating
set of gauge transformations of the linearized theory corresponds to
the Lie derivative of the background metric,
$\delta_\xi h_{\mu\nu}={\mathcal L}_\xi\bar g_{\mu\nu}$. It then
follows that there are as many conserved $n-2$ forms as there are
Killing vectors of the background solution. The explicit expressions
of the $n-2$ forms are obtained by applying the construction described
in previous subsection in the framework of the linearized theory. This
has been done explicitly for Einstein gravity in
\cite{Barnich:2001jy}.
More generally, one can show \cite{Barnich:2003xg} that the $n-2$
forms of the linearized theory can be obtained from the weakly
vanishing co-dimension 1 form $S_f$ of the full theory through
\begin{equation}
\label{eq:9}
k_f[\delta\phi,\phi]=k^{\mu\nu}_f(d^{n-2}x)_{\mu\nu}=\frac{|\lambda|+1}{|\lambda|+2}
\partial_{(\lambda)}\left[\delta\phi^i\vddl{}{\phi^i_{((\lambda)\nu)}}\ddl{}{dx^\nu}
S_f\right],
\end{equation}
by replacing $f$ by reducibility parameters of the linearized theory,
$\phi^i$ by the background solution $\bar\phi^i$ and $\delta\phi^i$ by
any solution $\bar\varphi^i$ of the theory linearized around
$\bar\phi^i$. We refer to \cite{Andersonbook} and \cite{Olver:1993}
for the explicit expressions for the higher order Euler-Lagrange
derivatives. Our conventions and notations for multi-indices are
summarized in the appendix of \cite{Barnich:2001jy}.
For theories such as general relativity in metric formulation where
$S_f$ is at most of second order in derivatives, the formula involves
the higher order Euler-Lagrange operators only up to order $2$ and
reduces to
\begin{equation}
\label{eq:64}
k_f[\delta\phi,\phi]=\frac{1}{2}\delta\phi^i\vddl{}{\phi^i_\nu}\ddl{}{dx^\nu}
S_f+\frac{2}{3}\d_\sigma\left[\delta\phi^i\vddl{}{\phi^i_{\nu\sigma}}\ddl{}{dx^\nu}
S_f\right].
\end{equation}
For later use, note that for a local function $M$ involving the fields
and their derivatives up to some finite order, a key property of the
higher order Euler-Lagrange derivatives is that they ``absorb'' total
derivatives,
\begin{equation}
\frac{\delta \partial_\lambda M }{\delta \phi^i}=0,\qquad
\frac{\delta \partial_\lambda M }{\delta \phi^i_\nu}
= \delta^\nu_\lambda \frac{\delta M }{\delta \phi^i}, \qquad
\frac{\delta \partial_\lambda M}{\delta \phi^i_{\mu \nu}}
= \delta^{(\mu}_\lambda \frac{\delta M}{\delta \phi^i_{\nu)}},
\label{useful identities M}
\end{equation}
where the round (square) brackets denote (anti) symmetrization of
enclosed indices divided by the factorial of the number of indices
involved. Furthermore, if $M^1$ depends at most on first order
derivatives, the Euler-Lagrange derivatives of order
one reduce to partial derivatives,
\begin{equation}
\frac{\delta M^1}{\delta
\phi^i_\nu}=\frac{\d M^1}{\d \phi^i_\nu}.\label{eq:21}
\end{equation}
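As an elementary illustration of \eqref{useful identities M} and \eqref{eq:21} (a check that is not needed in the sequel), take a single field $\phi$ and $M = \frac{1}{2}\phi^2$, so that $\partial_\lambda M = \phi\,\phi_\lambda$; then
\begin{equation*}
\frac{\delta(\phi\,\phi_\lambda)}{\delta \phi} = \phi_\lambda - \partial_\lambda \phi = 0, \qquad
\frac{\delta(\phi\,\phi_\lambda)}{\delta \phi_\nu} = \frac{\partial(\phi\,\phi_\lambda)}{\partial \phi_\nu} = \delta^\nu_\lambda\,\phi = \delta^\nu_\lambda\, \frac{\delta M}{\delta \phi},
\end{equation*}
as expected.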
As shown in detail in \cite{Barnich:2019vzx}, when using general gauge
parameters $f^\alpha$ instead of reducibility parameters,
non-conservation is controlled by the co-dimension $1$ form
$b[\delta\phi,R_f,\phi]$ defined by
\begin{equation}
\label{eq:10}
b=-\d_{(\lambda)}\left[R^i_{\alpha}(f^\alpha)
\delta\phi^j\frac{\delta}{\delta
\phi^j_{(\lambda)\nu}}\frac{\partial}{\partial dx^\nu}\left(\frac{\delta
\mathcal L}{\delta \phi^i}\right)\right]
\end{equation}
which satisfies $b[\delta\phi,R_f,\phi]=-b[R_f,\delta\phi,\phi]$ by
construction. Indeed, when $\phi^i$ is a solution to the equations of
motion and $\delta\phi^i$ a solution to the linearized equations of
motion, the co-dimension 2 form $k_f$ constructed as in \eqref{eq:9}
is no longer $d_H$-closed but satisfies instead
\begin{equation}
\label{eq:3}
d_H k_f=b.
\end{equation}
As in asymptotically flat general relativity at null infinity
\cite{Wald:1999wa,Barnich:2011mi,Barnich:2013axa}, these on-shell
non-closed co-dimension 2 forms $k_f$ are in general not integrable
either.
\section{Vielbeins and connection}
\label{sec:first}
We now recall some notions concerning vielbeins and connections,
including torsion and non-metricity in the standard discussion.
this completes the results of \cite{Barnich:2016rwk,Barnich:2019vzx}
by considering non-metricity.
\subsection{General case}
\label{sec:general-case}
Consider an $n$-dimensional spacetime with a moving frame (or
vielbein)
\begin{equation}
e_a={e_a}^\mu\ddl{}{x^\mu},\quad e^a={e^a}_\mu dx^\mu, \label{eq:2}
\end{equation}
where ${e_a}^\mu{e^a}_\nu=\delta^\mu_\nu$,
${e_a}^\mu{e^b}_\mu=\delta_a^b$, and $\d_a f=e_a(f)$. The structure functions are defined by
\begin{equation}
[e_a,e_b]={D^c}_{ab}e_c \iff de^a=-\frac{1}{2} {D^a}_{bc}e^be^c.\label{eq:13}
\end{equation}
For further use, note that if ${\mathbf e}={\rm det}\,{e^a}_\mu$, then
\begin{equation}
\label{eq:82}
\d_\mu(\mathbf{e}\,{e^\mu}_a)=\mathbf{e}\, {D^b}_{ba},
\end{equation}
and, if we define,
\begin{equation}
\label{eq:49}
{d^a}_{bc}={e^a}_\lambda \d_b {e_c}^\lambda,
\end{equation}
then
\begin{equation}
{d^\sigma}_{\rho\mu}=-{e_d}^\sigma\d_\rho {e^d}_\mu,\quad
{D^a}_{bc}=2{d^a}_{[bc]}, \label{eq:100}
\end{equation}
where it is understood that tangent space indices $a,b,\dots$ and
world-indices $\mu,\nu,\dots$ are transformed into each other by using
the vielbeins and their inverse.
In addition, we assume that there is an affine connection
\begin{equation}
D_a e_b={\Gamma^c}_{ba}e_c\iff D_b v^a=\d_b v^a+{\Gamma^a}_{cb} v^c. \label{eq:12}
\end{equation}
The components of the torsion tensor are given by
\begin{equation}
{T^a}_{\mu\nu}=\d_\mu {e^a}_\nu-\d_\nu
{e^a}_\mu+{\Gamma^a}_{b\mu}
{e^b}_\nu-{\Gamma^a}_{b\nu}{e^b}_\mu,\label{eq:70}
\end{equation}
\begin{equation}
{T^c}_{ab}=2{\Gamma^c}_{[ba]}+{D^c}_{ba}=2({\Gamma^c}_{[ba]}+{d^c}_{[ba]}),\label{eq:14}
\end{equation} while the components of the curvature tensor can be written as
\begin{equation}
{R^f}_{c\mu\nu}=\d_\mu
{\Gamma^f}_{c\nu}-\d_\nu {\Gamma^f}_{c\mu}
+{\Gamma^f}_{d\mu}{\Gamma^d}_{c\nu}-{\Gamma^f}_{d\nu}{\Gamma^d}_{c\mu},\label{eq:71}
\end{equation}
\begin{equation}
{R^f}_{cab}=\d_a
{\Gamma^f}_{cb}-\d_b {\Gamma^f}_{ca}
+{\Gamma^f}_{da}{\Gamma^d}_{cb}-{\Gamma^f}_{db}{\Gamma^d}_{ca}-{D^d}_{ab}{\Gamma^f}_{cd}.
\label{eq:15}
\end{equation}
Furthermore,
\begin{equation}
\label{eq:20}
[D_a,D_b]v_c=-{R^d}_{cab}v_d-{T^d}_{ab}D_dv_c.
\end{equation}
The Bianchi
identities are given explicitly by
\begin{equation}
\label{eq:24}
{R^a}_{[bcd]}=D_{[b}{T^a}_{cd]}+{T^a}_{f[b}{T^f}_{cd]},\quad
D_{[f}{R^a}_{|b|cd]}=-{R^a}_{bg[f}{T^g}_{cd]},
\end{equation}
where a bar encloses indices that are not involved in
the (anti) symmetrization. The Ricci tensor is defined by
${R}_{ab}={R^{c}}_{acb}$, while
$S_{ab}={R^c}_{cab}$. Contracting the Bianchi identities gives
\begin{equation}
\label{eq:27}
{R}_{ab}-{R}_{ba}=S_{ab}-D_c {T^c}_{ab}
-2D_{[a} {T^c}_{b]c}-{T^c}_{dc}{T^d}_{ab},
\end{equation}
\begin{equation}
2D_{[f}{R}_{|b|d]}+D_c{R^c}_{bdf}={R}_{bg}{T^g}_{df}
-2{R^c}_{b[f|g|}{T^g}_{d]c}, \label{eq:28a}
\end{equation}
\begin{equation}
\label{eq:28b}
D_{[f}S_{cd]}=-S_{g[f}{T^g}_{cd]}.
\end{equation}
Assume now that there is a pseudo-Riemannian metric,
\begin{equation}
g_{\mu\nu}={e^a}_\mu g_{ab} {e^b}_\nu\label{eq:11},
\end{equation}
i.e., a symmetric, non-degenerate $2$-tensor.
As usual, tangent space indices $a,b,\dots$ and world indices
$\mu,\nu,\dots$ are lowered and raised with $g_{ab}$, $g_{\mu\nu}$,
and their inverses.
The non-metricity tensor is defined as
$\Xi^{ab}=dg^{ab}+2\Gamma^{(ab)}$. The associated Bianchi identities
are given by
$d\Xi^{ab}+{\Gamma^{a}}_c\Xi^{cb}+{\Gamma^b}_c\Xi^{ac}=2R^{(ab)}$.
More explicitly,
\begin{equation}
\label{eq:95}
{\Xi^{ab}}_c=D_c g^{ab}, \quad 2D_{[c}
{\Xi^{ab}}_{d]}=-{\Xi^{ab}}_f{T^f}_{cd}+2{R^{(ab)}}_{cd}.
\end{equation}
Note
also that, from $g^{ab}g_{bc}=\delta^a_c$, it follows that
\begin{equation}
\label{eq:40}
D_c g_{ab}=-\Xi_{abc}.
\end{equation}
Contracting the last of \eqref{eq:95} with $g_{ab}$
gives
\begin{equation}
\label{eq:86}
S_{cd}=g_{ab}D_{[c} {\Xi^{ab}}_{d]}+\frac{1}{2}
{\Xi^a}_{af}{T^f}_{cd},
\end{equation}
while \eqref{eq:28a} contracted with $g^{bf}$ gives
\begin{multline}
\label{eq:34}
D^b {R}_{ba}-\frac{1}{2} D_a R=\frac{1}{2}
{R^{bc}}_{da}{T^{d}}_{bc}+{{R}^b}_c{T^c}_{ab}\\- \frac{1}{2}({\Xi^{bc}}_c{
R}_{ba}+{\Xi^{cd}}_b{R^b}_{cda}+{\Xi^{bc}}_a{
R}_{bc})\\+D_c(D_{[b}{\Xi^{bc}}_{a]}+\frac{1}{2}{\Xi^{bc}}_d{T^d}_{ba})+
(D_{[b}{\Xi^{bc}}_{d]}+\frac{1}{2}{\Xi^{bc}}_d{T^d}_{bd}){T^d}_{ac}.
\end{multline}
The curvature scalar is defined by
${R}=g^{ab}{R}_{ab}$, the Einstein tensor by
\begin{equation}
\label{eq:33}
G_{ab}={R}_{(ab)}-\frac{1}{2} g_{ab} {R}.
\end{equation}
When combining with \eqref{eq:27}, the contracted Bianchi identity
\eqref{eq:34} written in terms of the Einstein tensor is
\begin{multline}
\label{eq:34a}
D^b {G}_{ba}=\frac{1}{2}
{R^{bc}}_{da}{T^{d}}_{bc}+{{R}^b}_c{T^c}_{ab} - \frac{1}{2} {\Xi_{ab}}^b R \\
+\frac{1}{2} D^b(S_{ab}-D_c {T^c}_{ab}
-2D_{[a} {T^c}_{b]c}-{T^c}_{dc}{T^d}_{ab})\\ - \frac{1}{2}({\Xi^{bc}}_c{
R}_{ba}+{\Xi^{cd}}_b{R^b}_{cda}+{\Xi^{bc}}_a{
R}_{bc})\\+D_c(D_{[b}{\Xi^{bc}}_{a]}+\frac{1}{2}{\Xi^{bc}}_d{T^d}_{ba})+
(D_{[b}{\Xi^{bc}}_{d]}+\frac{1}{2}{\Xi^{bc}}_d{T^d}_{bd}){T^d}_{ac} .
\end{multline}
By the usual manipulations, one may show in full generality that the existence of the metric
implies that the most general connection can be written as
\begin{equation}
\label{eq:17}
\Gamma_{abc}=
\{{}_{abc}\}+M_{abc}+K_{abc}+r_{abc} ,
\end{equation}
where
\begin{equation}
\{{}_{abc}\}=\frac{1}{2}(g_{ab,c}+g_{ac,b}-g_{bc,a})=\{{}_{acb}\}, \label{eq:97a}
\end{equation}
\begin{equation}
M_{abc}=\frac{1}{2}(\Xi_{abc}+\Xi_{acb}-\Xi_{bca})=M_{acb}, \label{eq:97b}
\end{equation}
\begin{equation}
\label{eq:99c}
K_{abc}=\frac{1}{2}(T_{bac}+T_{cab}-T_{abc})=-K_{bac},
\end{equation}
\begin{equation}
r_{abc}=\frac{1}{2}(D_{bac}+D_{cab}-D_{abc})=-r_{bac}.\label{eq:96}
\end{equation}
Furthermore, one can directly show that
\begin{equation}
\label{eq:105}
{\Gamma^a}_{b\mu}={e^a}_\nu(\d_\mu
{e_b}^\nu+{\Gamma^\nu}_{\rho\mu}{e^\rho}_b)\iff
{\Gamma}_{abc}=e_{a\nu}\d_c{e_b}^\nu+{e_a}^\mu{e_b}^\nu{e_c}^\rho\Gamma_{\mu\nu\rho}.
\end{equation}
Finally, we will need the following variation (note that $\delta{\Gamma^a}_{b\mu}$, being the difference of two connections, is a tensor, so that both sides are covariant)
\begin{equation}
\label{eq:47}
\delta
{R^a}_{b\mu\nu}=D_\mu\delta{\Gamma^a}_{b\nu}-D_\nu\delta{\Gamma^a}_{b\mu}.
\end{equation}
\subsection{Coordinate basis, torsionless connection}
\label{sec:coordinate-basis}
We now consider the particular case of a coordinate basis,
${e_a}^\mu={\delta_a}^\mu$ so that ${D^\lambda}_{\mu\nu}=0$ and
${T^\lambda}_{\mu\nu}={\Gamma^\lambda}_{\nu\mu}-{\Gamma^\lambda}_{\mu\nu}$. We
also impose vanishing of torsion, which requires the connection to be
symmetric, ${\Gamma^\lambda}_{\mu\nu}= {\Gamma^\lambda}_{\nu\mu}$. In
this case, equation \eqref{eq:27} implies
$S_{\mu\nu}=R_{\mu\nu}-R_{\nu\mu}$ and the contracted Bianchi
identities \eqref{eq:34a} become
\begin{equation}
\label{eq:34c}
D^\nu {G}_{\nu\mu}=
D^\nu R_{[\mu\nu]} +D_\lambda
{R^{(\lambda\nu)}}_{\nu\mu}
- \frac{1}{2}(D_\nu g^{\nu\lambda}{
R}_{\lambda\mu}+D_\nu g^{\lambda\rho} {R^\nu}_{\lambda\rho\mu}+
D_\mu g^{\nu\lambda} {
R}_{\nu\lambda}+ D^\nu g_{\mu \nu} R),
\end{equation}
while the variation \eqref{eq:47} simplifies to
\begin{equation}
\label{eq:51}
\delta
{R^\alpha}_{\beta\mu\nu}=D_\mu\delta{\Gamma^\alpha}_{\beta\nu}
-D_\nu\delta{\Gamma^\alpha}_{\beta\mu}.
\end{equation}
We also have
\begin{equation}
\d_\mu(\sqrt{|g|} v^\mu)=\sqrt{|g|}
(D_\mu-{\Gamma^\nu}_{\mu\nu} +\frac{1}{2} g^{\nu\lambda}\d_\mu
g_{\nu\lambda}) v^\mu
=D_{\mu}(\sqrt{|g|}v^\mu),
\label{eq:41}
\end{equation}
where the last equality follows by introducing the convenient
definition for the covariant derivative of a scalar density,
\begin{equation}
\label{eq:53}
D_\mu\sqrt{|g|}=\sqrt{|g|}(\frac{1}{2} g^{\nu\lambda}\d_\mu
g_{\nu\lambda}-{\Gamma^\nu}_{\mu\nu}).
\end{equation}
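Note that, since $\frac{1}{2} g^{\nu\lambda}\d_\mu g_{\nu\lambda} = \d_\mu \ln\sqrt{|g|}$, this definition coincides with the familiar rule for the covariant derivative of a scalar density of weight one,
\begin{equation*}
D_\mu\sqrt{|g|} = \d_\mu\sqrt{|g|} - {\Gamma^\nu}_{\mu\nu}\,\sqrt{|g|}.
\end{equation*}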
If in addition, as will be imposed below on-shell, one requires
metricity, $\Xi^{ab}=dg^{ab}+2\Gamma^{(ab)} =0$, one recovers the
standard Christoffel connection
\begin{equation}
\label{eq:18}
\Gamma_{\lambda\mu\nu}=\frac{1}{2}(\d_\nu g_{\lambda\mu}+\d_\mu
g_{\lambda\nu}-\d_\lambda g_{\mu\nu}).
\end{equation}
The contracted Bianchi identities \eqref{eq:34c} reduce to
\begin{equation}
\label{eq:42}
D^\nu {G_{\nu\mu}}=0,
\end{equation}
and \eqref{eq:41} to
\begin{equation}
\label{eq:43}
\d_\mu(\sqrt{|g|} v^\mu)=\sqrt{|g|} D_\mu v^\mu.
\end{equation}
\section{Palatini formulation}
\label{sec:palatini-formulation}
\subsection{Variational principle}
\label{sec:vari-princ-Pal}
In the formulation discussed for example in \cite{Misner:1970aa}, one
uses a coordinate basis ${e_a}^\mu=\delta^\mu_a$,
${D^\mu}_{\nu\rho}=0$ with a metric $g_{\mu\nu}$ and a torsionfree
connection ${\Gamma^\lambda}_{\mu\nu}={\Gamma^\lambda}_{\nu\mu}$ as
variables\footnote{Adapting the arguments below to the case where the
variables are chosen as the contravariant metric tensor density and
the connection as done in \cite{Deser:1959zza,Arnowitt:1959ah} is
straightforward.} to write the Palatini action as
\begin{equation}
\label{eq:3P}
S^P[g_{\mu\nu},{\Gamma^\lambda}_{\mu\nu}]=\kappa \int d^nx\, L^P=
\kappa \int d^nx \sqrt {|g|}
({ R}-2\Lambda),
\end{equation}
where $\kappa^{-1} = 16\pi G$ and we assume $n\geq 3$. Using \eqref{eq:51}, the variation of the
action is given by
\begin{equation}
\label{eq:52}
\delta S^P= \kappa \int d^nx \sqrt {|g|}\big[-(G^{\mu\nu}+\Lambda
g^{\mu\nu})\delta
g_{\mu\nu}+g^{\alpha\beta}(D_\mu\delta{\Gamma^\mu}_{\alpha\beta}
-D_\beta\delta{\Gamma^\mu}_{\alpha\mu})\big].
\end{equation}
Using in addition \eqref{eq:41} and neglecting boundary terms yields
\begin{multline}
\label{eq:55}
\delta S^P=\kappa \int d^nx \big[-\sqrt {|g|}(G^{\mu\nu}+\Lambda
g^{\mu\nu})\delta
g_{\mu\nu}\\+(-D_\mu[\sqrt {|g|}g^{\alpha\beta}]+D_\lambda[\sqrt
{|g|}g^{\alpha\lambda}\delta_\mu^\beta])
\delta{\Gamma^\mu}_{\alpha\beta}
\big],
\end{multline}
so that the Euler-Lagrange derivatives of $L^P$ with respect to the
fields $g_{\mu\nu}$ and ${\Gamma^\lambda}_{\mu\nu}$ take the form
\begin{align}
\label{eq:56}
\vddl{L^P}{g_{\mu\nu}} &=-\sqrt {|g|}(G^{\mu\nu}+\Lambda
g^{\mu\nu}), \\
\vddl{L^P}{{\Gamma^\mu}_{\alpha\beta}} &= -D_\mu[\sqrt
{|g|}g^{\alpha\beta}]+\frac{1}{2} D_\lambda[\sqrt
{|g|}g^{\alpha\lambda}\delta_\mu^\beta]+\frac{1}{2} D_\lambda[\sqrt
{|g|}g^{\beta\lambda}\delta_\mu^\alpha].\label{eq:57}
\end{align}
Contracting the equations of motion corresponding to \eqref{eq:57}
with $\delta^\mu_\beta$ gives
$D_\beta[\sqrt{|g|}g^{\alpha\beta}]=0$. When
re-injecting this result into the equation of motion, this implies $D_\mu[\sqrt{|g|}g^{\alpha\beta}]=0$. From
${\rm det}(\sqrt{|g|}g^{\alpha\beta})=|g|^{\frac{n-2}{2}}$, one then
deduces that
\begin{equation*}
\delta |g|^\frac{1}{2}=\delta ({\rm
det}(\sqrt{|g|}g^{\alpha\beta}))^{\frac{1}{n-2}}= \frac{1}{n-2}
{\rm det}(\sqrt{|g|}g^{\alpha\beta})^{\frac{1}{n-2}-1}\delta {\rm
det}(\sqrt{|g|}g^{\alpha\beta})\\
=\frac{1}{n-2}g_{\alpha\beta}\delta(\sqrt{|g|}g^{\alpha\beta}).
\end{equation*}
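For instance, in $n=4$ dimensions, ${\rm det}(\sqrt{|g|}g^{\alpha\beta}) = |g|$ and this identity reduces to $\delta |g|^{\frac12} = \frac{1}{2}\, g_{\alpha\beta}\, \delta(\sqrt{|g|}g^{\alpha\beta})$.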
Applying the identity above with the variation $\delta$ replaced by the covariant derivative $D_\mu$, we
deduce that these equations of motion imply that $D_\mu \sqrt{|g|}=0$,
and then metricity, $D_\mu g^{\alpha\beta}=0$. Since this implies
\eqref{eq:18}, it follows that ${\Gamma^\mu}_{\alpha\beta}$ are
auxiliary fields, i.e., fields that can be eliminated algebraically by
their own equations of motion.
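It is perhaps worth making the trace computation at the beginning of this argument explicit: contracting \eqref{eq:57} with $\delta^\mu_\beta$, the three terms contribute with coefficients $-1$, $\frac{n}{2}$ and $\frac{1}{2}$, so that
\begin{equation*}
\delta^\beta_\mu\, \vddl{L^P}{{\Gamma^\mu}_{\alpha\beta}} = \frac{n-1}{2}\, D_\lambda[\sqrt{|g|}\, g^{\alpha\lambda}],
\end{equation*}
whose vanishing is equivalent to $D_\lambda[\sqrt{|g|}g^{\alpha\lambda}] = 0$ for $n \geq 2$.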
\subsection{Gauge symmetries and Noether identities}
If $\xi^\mu(x)$ denotes the vector field parametrizing an infinitesimal
diffeomorphism, the variation of the variables of the variational
principle at a given point is
\begin{align}
\delta_\xi g_{\mu\nu} &= \mathcal{L}_\xi g_{\mu\nu}
= \xi^\rho \partial_\rho g_{\mu\nu} + g_{\rho \nu} \partial_\mu
\xi^\rho
+ g_{\mu \rho} \partial_\nu
\xi^\rho, \label{variation g} \\
\delta_\xi
{\Gamma^\mu}_{\nu\rho}&=\d_\rho\d_\nu\xi^\mu+\xi^\sigma\partial_\sigma
{\Gamma^\mu}_{\nu\rho}-\d_\sigma\xi^\mu
{\Gamma^\sigma}_{\nu\rho}+\d_\nu\xi^\sigma
{\Gamma^\mu}_{\sigma\rho}+\d_\rho\xi^\sigma
{\Gamma^\mu}_{\nu\sigma}. \label{variation
Gamma}
\end{align}
These transformations are infinitesimal gauge symmetries of the
Palatini formulation in the sense that
$\delta_\xi L^P=\d_\mu(\xi^\mu L^P)$ for all $\xi^\mu(x)$. As usual,
this follows as a consequence of the fact that $L^P$ transforms like a
scalar density under finite diffeomorphisms.
At this stage, we note that the transformation law of the connection
in \eqref{variation Gamma} involves derivatives of the gauge parameter
$\xi^\mu$ up to second order. Therefore, even though the
Euler-Lagrange equations \eqref{eq:56} and \eqref{eq:57} are of first
order, the theory is not in the class of first order theories
described, for instance, in \cite{Barnich:2019vzx}.
The Noether identities and the weakly vanishing Noether current
associated to these gauge symmetries are then identified by using
the Leibniz rule to write the analog of \eqref{eq:1} in the present
case,
\begin{equation}
\label{eq:62}
\vddl{\kappa L^P}{g_{\mu\nu}}\delta_\xi
g_{\mu\nu}+\vddl{\kappa L^P}{{\Gamma^\mu}_{\alpha\beta}}\delta_\xi
{\Gamma^\mu}_{\alpha\beta} =\xi^\rho N_\rho+\d_\mu
S^\mu_\xi,
\end{equation}
which leads to the Noether identities
\begin{multline}
\label{eq:54}
\kappa^{-1} N_\rho=\vddl{L^P}{g_{\mu\nu}}\d_\rho g_{\mu\nu}-2\d_\sigma
\left(\vddl{L^P}{g_{\sigma\nu}}g_{\rho\nu}\right)\\+
\d_\alpha\d_\beta \left(\vddl{L^P}{{\Gamma^\rho}_{\alpha\beta}}
\right)
+\vddl{L^P}{{\Gamma^\mu}_{\alpha\beta}}\d_\rho{\Gamma^\mu}_{\alpha\beta}
+\d_\sigma
\left(\vddl{L^P}{{\Gamma^\rho}_{\alpha\beta}}{\Gamma^\sigma}_{\alpha\beta}
\right) -2\d_\sigma
\left(\vddl{L^P}{{\Gamma^\mu}_{\sigma\beta}}{\Gamma^\mu}_{\rho\beta}
\right)=0.
\end{multline}
These identities correspond to the contracted Bianchi identities
\eqref{eq:34c}. Indeed, \eqref{eq:54} can be rewritten as
\begin{equation}
\label{Noeth Pal}
\vddl{L^P}{g_{\mu\nu}} D_\rho g_{\mu\nu} - 2 D_\mu \left( g_{\rho \nu}
\vddl{L^P}{g_{\mu\nu}}
\right) + \vddl{L^P}{{\Gamma^\tau}_{\sigma\nu}} {R^\tau}_{\sigma \rho
\nu}
+ D_\sigma D_\nu \left( \vddl{L^P}{{\Gamma^\rho}_{\sigma\nu}} \right) = 0.
\end{equation}
By inserting \eqref{eq:56} and \eqref{eq:57} into \eqref{Noeth Pal},
one recovers \eqref{eq:34c}. For this computation, the
identities
\begin{equation}
[D_\mu , D_\nu ] \sqrt{|g|} = - \sqrt{|g|} ({R}_{\mu \nu} - {R}_{\nu \mu}),
\end{equation}
\begin{equation}
[D_\mu , D_\nu ] D_\lambda \sqrt{|g|} = - D_\tau \sqrt{|g|}
{R^\tau}_{\lambda \mu \nu} - D_\lambda \sqrt{|g|}({R}_{\mu \nu}
- {R}_{\nu \mu}),
\end{equation}
are useful.
\subsection{Construction of the co-dimension 2 form}
\label{sec:constr-co-dimens-Pal}
We also get from \eqref{eq:62} the weakly vanishing Noether current
associated with the gauge symmetries,
\begin{equation}
\label{3formePalatini}
\kappa^{-1} S^\mu_\xi=2\frac{\delta L^P}{\delta g_{\mu\tau}}\xi_\tau +
2 \frac{\delta L^P}{\delta {\Gamma^\tau}_{\mu\rho}}D_\rho
\xi^\tau-{\Gamma^\mu}_{\rho\sigma}\frac{\delta L^P}{\delta
{\Gamma^\tau}_{\sigma\rho}}\xi^\tau - \partial_\rho
\left(\frac{\delta L^P}{\delta {\Gamma^\tau}_{\mu\rho}}\xi^\tau
\right).
\end{equation}
In order to compute the co-dimension 2 forms, one now needs to insert
this expression into the general formula \eqref{eq:9}. Note that
\eqref{3formePalatini} involves second order derivatives of the fields
in the last term as a consequence of the second order derivatives on
the gauge parameters. Since these second derivatives occur under a
total derivative, however, the properties \eqref{useful identities M}
allow one to reduce the actual computation to one involving only the
Euler-Lagrange derivatives of order one acting on expressions which
are at most of first order in derivatives,
\begin{multline}
\label{2formePalatini}
\kappa^{-1} k^{[\mu\nu]}_\xi=\frac{1}{2}\delta\phi^i\frac{\delta}{\delta
\phi^i_\nu}
\left[2\frac{\delta L^P}{\delta g_{\mu\tau}}\xi_\tau
+ 2 \frac{\delta L^P}{\delta {\Gamma^\tau}_{\mu\rho}}D_\rho
\xi^\tau-{\Gamma^\mu}_{\rho\sigma}\frac{\delta L^P}{\delta
{\Gamma^\tau}_{\sigma\rho}}\xi^\tau \right]\\
-\frac{1}{3}\partial_\rho \left[\delta\phi^i\frac{\delta}{\delta
\phi^i_{\nu}}
\left(\frac{\delta L^P}{\delta {\Gamma^\tau}_{\mu\rho}}\xi^\tau
\right)\right]
-(\mu\leftrightarrow\nu).
\end{multline}
To proceed in this computation, let us introduce the notations
$h_{\mu\nu}=\delta g_{\mu\nu}$,
$\delta {\Gamma^\rho}_{\mu \nu} = {C^\rho}_{\mu \nu}$, indices being
lowered and raised with $g_{\mu\nu}$ and its inverse, and
$h=h^\mu_\mu$. Using
\begin{equation*}
\begin{split}
& \delta ( \sqrt{|g|} g^{\mu \lambda} ) = - \sqrt{|g|} h^{\mu \lambda}
+ \frac{1}{2} \sqrt{|g|} g^{\mu \lambda} h,\\
& \partial_\lambda ( \sqrt{|g|} h^{\mu \lambda} \xi^\nu) = D_\lambda (
\sqrt{|g|} h^{\mu \lambda} \xi^\nu) - \sqrt{|g|} {\Gamma^\mu}_{\lambda
\tau} h^{\tau \lambda} \xi^\nu - \sqrt{|g|} {\Gamma^\nu}_{\lambda
\tau} h^{\mu \lambda} \xi^{\tau},
\end{split}
\end{equation*}
one finally obtains the explicit
expression for the co-dimension 2 form,
\begin{multline}
\label{Final expression Pal}
\kappa^{-1} k^{[\mu\nu]}_\xi = \sqrt{|g|} \Big[\xi^\sigma
C^{\mu\nu}_{\;\;\;\sigma}-\xi^\mu
C_{\;\;\sigma}^{\sigma\;\;\;\nu}+\frac{1}{2}\xi^\mu
C^{\nu\sigma}_{\;\;\;\sigma}- h^{\nu\sigma}D_\sigma\xi^\mu +\frac{1}{2} h
D^\nu\xi^\mu \\ -\frac{1}{2} \xi^\nu D_\sigma
h^{\mu\sigma}+\frac{1}{4}\xi^\nu D^\mu h\Big] -\frac{1}{2}
h^{\mu\lambda}\xi^\nu D_\lambda( \sqrt{|g|})+\frac{1}{4}h\xi^\nu
D_\lambda(\sqrt{|g|} g^{\mu\lambda})-(\mu\leftrightarrow\nu).
\end{multline}
The breaking term is easy to work out since it merely involves the
Euler-Lagrange derivatives of order one acting on expressions that are
at most of first order in derivatives. It is given by
\begin{equation}
\kappa^{-1} b^\mu= \delta_\xi {\Gamma^\mu}_{\rho\nu} \delta (\sqrt {|g|} g^{\nu\rho} ) -
\delta_\xi {\Gamma^\nu}_{\rho\nu} \delta (\sqrt {|g|} g^{\mu\rho} ) - (\delta_\xi
\leftrightarrow \delta).
\end{equation}
\subsection{Reduction to the metric formulation}
\label{sec:reduction}
We now compare the expression \eqref{Final expression Pal} with the
standard results obtained in the metric formulation, where the absence of
torsion and the metricity condition are assumed. For this purpose, let us go
on-shell for the auxiliary fields ${\Gamma^\rho}_{\mu \nu}$ appearing
in the co-dimension 2 form \eqref{Final expression Pal}. One directly
gets
\begin{equation}
\begin{split}
\kappa^{-1} k^{[\mu\nu]}_\xi=\sqrt{|g|} \Big[ &\xi^\sigma
C^{\mu\nu}_{\;\;\;\sigma}-\xi^\mu
C_{\;\;\sigma}^{\sigma\;\;\;\nu}+\frac{1}{2}\xi^\mu
C^{\nu\sigma}_{\;\;\;\sigma}-\frac{1}{2} \xi^\nu D_\sigma
h^{\mu\sigma}+\frac{1}{4}\xi^\nu D^\mu h\\ &- h^{\nu\sigma}
D_\sigma \xi^\mu +\frac{1}{2} h
D^\nu\xi^\mu\Big]-(\mu\leftrightarrow\nu).
\end{split}
\end{equation}
When taking into account that $D_\mu$ is now the covariant derivative involving
the Christoffel symbols, so that
$C^\mu_{\;\;\tau\sigma}=\frac{1}{2}(D_\tau h^\mu_\sigma + D_\sigma
h^\mu_\tau -D^\mu h_{\tau\sigma})$, we obtain
\begin{equation}
\kappa^{-1} k^{[\mu\nu]}_\xi=\sqrt{|g|}\left[\xi_\sigma D^\nu
h^{\mu\sigma}+\xi^\nu D^\mu h -\xi^\nu D_\sigma
h^{\mu\sigma}- h^{\nu\sigma} D_\sigma \xi^\mu + \frac{1}{2} h
D^\nu\xi^\mu\right]-(\mu\leftrightarrow\nu) .
\end{equation}
Assuming that $\xi^\mu$ is a Killing vector, namely
$D_\sigma \xi^\mu + D^\mu \xi_\sigma = 0$, this expression reproduces
exactly the co-dimension 2 form obtained in metric formalism
\cite{Iyer:1994ys,Anderson:1996sc,Barnich:2001jy}. Therefore, we see
that the expressions for the co-dimension $2$ forms are equivalent in the
Palatini and metric formalisms for exact reducibility parameters. However,
this result does not hold when $\xi^\mu$ is not a Killing vector. In
particular, the expressions may not match when using asymptotic
Killing vectors, as previously pointed out in the Cartan
formulation of general relativity
\cite{Barnich:2016rwk,Oliveri:2019gvm}.
\section*{Acknowledgements}
\label{sec:acknowledgements}
\addcontentsline{toc}{section}{Acknowledgments}
This work is supported by the F.R.S.-FNRS Belgium through
conventions FRFC PDR T.1025.14 and IISN 4.4503.15. The work
of P.~Mao is supported in part by the National Natural Science Foundation
of China under Grant Nos. 11905156 and 11935009. The work of
R.~Ruzziconi is supported by a FRIA fellowship.
\section{Introduction}
The fractional Brownian motion (fBm for short) $B=\{B_{t} , t\in [0,T]\}$ with Hurst parameter $H\in (0,1)$ is a Gaussian self-similar process with stationary increments.
This process was introduced by Kolmogorov \cite{kol} and studied by Mandelbrot and Van Ness in \cite{MN}, where a stochastic integral representation in terms of a standard
Brownian motion was established. The parameter
$H$ is called the Hurst index, after the
statistical analysis developed by the climatologist Hurst \cite{hurst}. The self-similarity and stationary-increments properties make the fBm an appropriate model for many applications in diverse fields, from biology to finance. From the properties of the fBm, it follows that for every $\alpha >0$
$$
\mathbb{E}\left(|B_t-B_s|^{\alpha}\right) = \mathbb{E}\left(|B_1|^{\alpha}\right)|t-s|^{\alpha H}.
$$
As a consequence of the Kolmogorov continuity theorem, we deduce that there exists a version of the fBm $B$ which is a continuous process and whose paths are $\gamma$-H\"{o}lder continuous for every $\gamma <H$.
On the other hand, the fBm with Hurst parameter $H\neq \frac12$ is not a semimartingale, and hence the It\^{o} approach to the construction of stochastic integrals with respect to fBm is not valid. Two main approaches have been used in the literature to define stochastic integrals with respect to fBm with Hurst parameter $H$. Pathwise Riemann-Stieltjes stochastic integrals can be defined using Young's integral \cite{young} in the case $H>\frac 12$. When $H\in (\frac14, \frac12)$, the rough path analysis introduced by Lyons \cite{lyons} is a suitable method to construct pathwise stochastic integrals.
A second approach to develop a stochastic calculus with respect to the fBm is based on the techniques of Malliavin calculus. The divergence operator, which is the adjoint of the derivative operator, can be regarded as a stochastic integral, which coincides with the limit of Riemann sums constructed using the Wick product.
This idea has been developed by
Decreusefond and \"{U}st\"{u}nel \cite{DU}, Carmona, Coutin and Montseny \cite{CC}, Al\`os, Mazet and Nualart \cite{AMN1, AMN2}, Al\`os and Nualart \cite{AN} and Hu \cite{hu}, among others. The integral constructed by this method has zero mean.
Different versions of the It\^o formula have been proved for the divergence integral in these papers.
In particular, if $H\in (\frac 14, 1)$ and $f\in C^2(\mathbb{R})$ is a real-valued function satisfying some suitable growth condition, then the stochastic process $\{f'(B_t)\textbf{1}_{[0,t]}, 0\le t \le T\}$ belongs to the domain of the divergence operator and
\begin{equation}\label{ito}
f(B_t) = f(0) + \displaystyle\int_0^t f'(B_s)\delta B_s
+ H\displaystyle\int_0^tf''(B_s) s^{2H-1} ds.
\end{equation}
For $H\in (0, \frac 14]$, this formula still holds if the stochastic integral is interpreted as an extended divergence operator (see \cite{CN,LN}). A multidimensional version of the change of variable formula for the divergence integral has been recently proved by Hu, Jolis and Tindel in \cite{HMS}.
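As a simple illustration of (\ref{ito}) (a worked special case), take $f(x)=x^2$, so that $f'(x)=2x$ and $f''(x)=2$; then
$$
B_t^2 = 2\int_0^t B_s\,\delta B_s + 2H\int_0^t s^{2H-1}ds = 2\int_0^t B_s\,\delta B_s + t^{2H},
$$
that is, $\int_0^t B_s\,\delta B_s = \frac12\left(B_t^2 - t^{2H}\right)$, which is consistent with the fact that the divergence integral has zero mean, since $\mathbb{E}(B_t^2)=t^{2H}$.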
Using the self-similarity of fBm and the Ergodic Theorem one can prove that the fBm has a finite $\frac 1H$-variation on any interval $[0,t]$, equal to $e_H t$, where $e_H = \mathbb{E}\left[|B_1|^{\frac{1}{H}}\right]$ (see, for instance, Rogers \cite{rogers}). More precisely, we have, as $n$ tends to infinity,
\begin{equation}\label{res1}
\sum_{i=0}^{n-1}|B_{t(i+1)/n}-B_{it/n}| ^{\frac{1}{H}} \overset{L^{1}(\Omega)} {\longrightarrow}
t\, e_H.
\end{equation}
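The convergence (\ref{res1}) can be probed numerically. The following sketch (a hypothetical illustration; the Hurst index, grid size and number of Monte Carlo paths are arbitrary choices) samples fBm on a uniform grid through a Cholesky factorization of the covariance $R_H(t,s)=\frac12(t^{2H}+s^{2H}-|t-s|^{2H})$ and compares the left-hand side of (\ref{res1}) with $t\, e_H$, using the Gaussian moment formula $e_H = 2^{\frac{1}{2H}}\Gamma(\frac{1}{2H}+\frac12)/\sqrt{\pi}$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

H, T, n, n_paths = 0.3, 1.0, 400, 200
rng = np.random.default_rng(0)
t = T * np.arange(1, n + 1) / n
s, u = np.meshgrid(t, t, indexing='ij')
cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
B = L @ rng.standard_normal((n, n_paths))      # fBm paths on the grid
incr = np.diff(B, axis=0, prepend=0.0)         # increments B_{t_{i+1}} - B_{t_i}
V = np.mean(np.sum(np.abs(incr)**(1/H), axis=0))
e_H = 2**(1/(2*H)) * gamma(1/(2*H) + 0.5) / np.sqrt(np.pi)
print(V, T * e_H)   # the two values should be close for large n
\end{verbatim}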
This result has been generalized by Guerra and Nualart \cite{GN} to the case of divergence integrals with respect to the fBm with Hurst parameter $H\in (\frac12, 1)$.
The purpose of this paper is to study the $\frac 1H$-variation of divergence processes
$X=\{ X_t, t\in [0,T]\}$, where $X_t=\int_{0}^{t} u_{s} \delta B_{s}$, with respect to the fBm with Hurst parameter $H< \frac12$. Our main result, Theorem \ref{the2}, states that the $\frac 1H$-variation of $X$ exists in $L^1(\Omega)$ and is equal to $e_H \int_0^T |u_s|^{\frac{1}{H}} ds$, under suitable assumptions on the integrand $u$. This is done by proving an estimate of the $L^{p}$-norm of the Skorohod integral $\int_{a}^{b} u_{s} \delta B_{s}$, where $0\leq a\leq b\leq T$. Unlike the case $H>\frac 12$, here we need to impose H\"older continuity conditions on the process $u$ and its Malliavin derivative.
We also derive an extension of this result to divergence integrals with respect to a $d$-dimensional fBm, where $d\ge 1$.
In the last part of the paper, we study the fractional Bessel process $R= \{R_t, t\in [0,T]\}$, defined by $R_t:= \|B_t\|$, where $B$ is a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$.
The following integral representation of this process
\begin{equation} \label{rep1}
R_t = \displaystyle\sum_{i=1}^{d}\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds,
\end{equation}
has been derived in \cite{GN} when $H>\frac 12$. Completing the analysis initiated in \cite{HN}, we
establish the representation (\ref{rep1}) in the case $H<\frac 12$, using a suitable notion of the extended domain of the divergence operator. Applying the results obtained in the first part of the paper and assuming $2dH^2>1$, we prove that the $\frac 1H$-variation of the divergence integral of the process
$$\Theta_t:= \displaystyle\sum_{i=1}^{d}\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)},$$
exists in $L^1(\Omega)$ and is equal to $\displaystyle\int_{\mathbb{R}^d}\left[\displaystyle\int_ 0^T \left | \left\langle\dfrac{B_s}{R_s}, \xi\right \rangle \right|^{\frac{1}{H}}ds\right]\nu(d\xi)$,
where $\nu$ is the normal distribution $N(0, I)$ on $\mathbb{R}^d$. We also discuss some other properties of the process $\{\Theta_t,t\in [0,T]\} $.
The paper is organized as follows. Section 2 contains some preliminaries on Malliavin calculus. In Section 3, we prove an $L^p$-estimate for the divergence integral with respect to fBm. Section 4 is devoted to the study of the $\frac 1H $-variation of the divergence integral with respect to fBm, for $H<\frac12$. Section 5 deals with the $\frac 1H $-variation of the divergence integral with respect to a $d$-dimensional fBm. An application to the fractional Bessel process is given in Section 6.
\section{Preliminaries on Malliavin calculus}
Here we describe the elements from stochastic analysis that we will need in the paper. Let $B=\{B_{t} , t\in [0,T]\}$ be a fractional Brownian motion with Hurst parameter $H\in (0,1)$ defined in
a complete probability space $(\Omega, \mathcal{F},P)$,
where $\mathcal{F}$ is generated by $B$. That is, $B$ is a
centred Gaussian process with covariance function
\begin{equation*}
R_H(t,s):=\mathbb{E}(B_{t}B_{s}) = \dfrac12 (t^{2H}+s^{2H}-|t-s|^{2H}),
\end{equation*}
for $s,t \in [0,T]$.
We denote by $\EuFrak H$ the Hilbert space associated to $B$, defined as the closure of the linear space generated by
the indicator functions $\{ \mathbf{1}_{[0,t]}, t\in [0,T]\} $, with respect to the inner product
\begin{equation*}
\langle \mathbf{1}_{[0,t]} , \mathbf{1}_{[0,s] } \rangle _{\EuFrak H}
=R_H(t,s), \hskip0.5cm s,t\in [0,T].
\end{equation*}
The mapping $\mathbf{1}_{[0,t]} \to B_{t}$ can be extended to a linear isometry between $\EuFrak H$ and the Gaussian space generated by $B$. We
denote by $B(\varphi)= \int_0^T \varphi_t dB_t $ the image of an element $\varphi \in \EuFrak H$
by this isometry.
We will first introduce some elements of the Malliavin calculus associated
with $B$. We refer to \cite{nualart} for a detailed account of these notions.
For a smooth and cylindrical random variable $F=f\left( B(\varphi _{1}), \ldots ,
B(\varphi_{n})\right) $, with $\varphi_{i} \in \EuFrak H$ and $f\in
C_{b}^{\infty}(\mathbb{R}^{n})$ ($f$ and all its partial derivatives are bounded), the
derivative of $F$ is the $\EuFrak H$-valued random variable defined by
\begin{equation*}
D F =\sum_{j=1}^{n}\frac{\partial f}{\partial x_{j}}(B(\varphi_{1}),\dots,B(%
\varphi_{n}))\varphi_{j}.
\end{equation*}
For any integer $k\ge 1$ and any real number $p\ge 1$ we denote by $\mathbb{D%
}^{k,p}$ the Sobolev space defined as the closure of the space of smooth and cylindrical
random variables with respect to the norm
\begin{equation*}
\Vert F\Vert_{k,p}^{p}=\mathbb{E}(|F|^{p})+\sum_{j=1}^{k} \mathbb{E} (\Vert
D^{j}F\Vert_{ \EuFrak H ^{\otimes j} }^{p}).
\end{equation*}
Similarly, for a given Hilbert space $V$ we can define Sobolev spaces of $V$%
-valued random variables $\mathbb{D}^{k,p}(V)$.
The divergence operator $\delta$ is introduced as the adjoint of the derivative operator. More precisely, an element $u\in L^{2}(\Omega;\EuFrak H)$ belongs to the domain of $\delta$, denoted by ${\rm Dom}\, \delta$, if there exists
a constant $c_u$ depending on $u$ such that
\begin{equation*}
|\mathbb{E}(\langle D F,u\rangle_{\EuFrak H})|\leq c_u\Vert F\Vert_{2},
\end{equation*}
for any smooth and cylindrical random variable $F$. For any $u\in {\rm Dom}\, \delta$, $\delta(u)$ is the
element of $L^{2}(\Omega)$ given by the duality relationship
\begin{equation*}
\mathbb{E}(\delta (u)F)=\mathbb{E}(\langle D F,u\rangle_{\EuFrak H}),
\end{equation*}
for any $F\in \mathbb{D}^{1,2}$. We will make use of the notation $\delta
(u)=\int_{0}^{T}u_{s}\delta B_{s}$, and we call $\delta(u)$ the divergence integral of $u$ with respect to the fBm $B$.
Note that $\mathbb{E}(\delta (
u ) )=0$. On the other hand, the space $\mathbb{D}^{1,2}(\EuFrak H)$ is included in the domain of $\delta $, and for $u\in \mathbb{D}^{1,2}(\EuFrak H)$, the variance of $\delta(u)$ is given by
\begin{equation*}
\mathbb{E}(\delta (u)^{2})=\mathbb{E}(\Vert u\Vert_{\EuFrak H}^{2})+\mathbb{E}(\langle D u,(D
u)^{\ast}\rangle_{\EuFrak H\otimes\EuFrak H} ),
\end{equation*}
where $(D u)^{\ast}$ is the
adjoint of $D u$ in the Hilbert space $\EuFrak H\otimes\EuFrak H$.
By Meyer's inequalities (see Nualart \cite{nualart}), for all $p>1$, the divergence operator
is continuous from $ \mathbb{D}^{1,p}(\EuFrak H)$ into $ L^p(\Omega)$, that is,
\begin{equation}\label{meyer}
\mathbb{E}(|\delta (u)|^{p})\leq C_{p}\left( \mathbb{E}(\Vert u\Vert_{\EuFrak H%
}^{p})+\mathbb{E}(\Vert D u\Vert_{\EuFrak H\otimes\EuFrak H}^{p})\right).
\end{equation}
We will make use of the property
\begin{equation}\label{p1}
\delta (Fu)= F\delta (u)+\langle D F,u\rangle_{\EuFrak H},
\end{equation}
which holds if $F\in \mathbb{D}^{1,2}$, $u\in {\rm Dom} \, \delta$ and the right-hand side is square integrable. We have also the commutativity relationship between $%
D $ and $\delta $
\begin{equation*}
D \delta (u)= u + \int_{0}^{T} D u_{s}\delta B_{s},
\end{equation*}
which holds if $u\in \mathbb{D}^{1,2}(\EuFrak H)$ and the $\EuFrak H$-valued process $\{D u_s, s\in
[0,T]\}$ belongs to the domain of $\delta $.
The covariance of the fractional Brownian motion can be written as
$$
R_H(t,s) = \int_0^{t\wedge s} K_H(t,u)K_H(s,u)du,
$$
where $K_H(t,s)$ is a square integrable kernel, defined for $0<s<t<T$. In what follows, we assume that $0<H <\frac12$. In this case, this kernel has the following expression
$$
K_H(t,s)= c_H\left[ \left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac12} -(H-\frac12)s^{H-\frac12}\int_s^t u^{H-\frac32}(u-s)^{H-\frac12}du\right],
$$
with $c_H = \left(\frac{2H}{(1-2H)\beta(1-2H, H+\frac12)}\right)^{\frac12}$ and $\beta(x,y):= \displaystyle\int_ 0^1 t^{x-1}(1-t)^{y-1}dt$ for $x, y>0$. Notice also that
$$
\frac{\partial K_H}{\partial t}(t,s) = c_H (H-\frac12)\left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac32}.
$$
From these expressions it follows that the kernel $K_H$ satisfies the following two estimates
\begin{equation}\label{est1A}
\left|\frac{\partial K_H}{\partial t}(t,s)\right| \leq c_H (t-s)^{H-\frac32},
\end{equation}
and
\begin{equation}\label{est2}
|K_H(t,s)|\leq d_H \left((t-s)^{H-\frac12} + s^{H-\frac 12} \right),
\end{equation}
for some constant $d_H$.
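As a sanity check on these formulas (a hypothetical numerical aside; the integrands have integrable endpoint singularities, so adaptive quadrature only gives an approximation), one can verify that the kernel reproduces the covariance, $\int_0^{s\wedge t} K_H(t,u)K_H(s,u)du = R_H(t,s)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

H = 0.3
c_H = np.sqrt(2*H / ((1 - 2*H) * beta(1 - 2*H, H + 0.5)))

def K(t, s):
    # kernel K_H(t,s) for 0 < s < t and H < 1/2
    tail = quad(lambda u: u**(H - 1.5) * (u - s)**(H - 0.5), s, t)[0]
    return c_H * ((t/s)**(H - 0.5) * (t - s)**(H - 0.5)
                  - (H - 0.5) * s**(H - 0.5) * tail)

R_H = lambda t, s: 0.5 * (t**(2*H) + s**(2*H) - abs(t - s)**(2*H))
s0, t0 = 0.4, 0.9
lhs = quad(lambda u: K(t0, u) * K(s0, u), 0, s0)[0]
print(lhs, R_H(t0, s0))   # approximately equal
\end{verbatim}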
Let $\mathcal{E}$ be the linear span of the indicator functions on $[0,T]$.
Consider the linear operator $K_H^*$ from ${\mathcal{E}}$ to $L^2([0, T])$ defined by
\begin{equation}\label{est0}
K_H^*(\varphi)(s) = K_H(T,s)\varphi(s)+ \int_s^T (\varphi(t)-\varphi(s))\dfrac{\partial K_H}{\partial t}(t,s)dt.
\end{equation}
Notice that
$$
K_H^*(\textbf{1}_{[0, t]})(s) = K_H(t,s)\textbf{1}_{[0, t]}(s).
$$
The operator $K_H^*$ can be expressed in terms of fractional derivatives as follows
$$
(K_H^*\varphi)(s)= c_H \Gamma(H+\frac12)s^{\frac12 -H}(D_{T-}^{\frac12 -H} u^{H -\frac12}\varphi(u))(s).
$$
In this expression, $D_{t-}^{\frac 12 -H}$ denotes the right-sided fractional derivative operator, given by
$$
D_{t-}^{\frac 12 -H }f(s):= \frac{1}{\Gamma(\frac 12+H )}\left(\dfrac{f(t)}{(t-s)^{\frac 12-H}}+\left(\frac 12 -H\right)\displaystyle\int_s^t\dfrac{f(s)-f(y)}{(y-s)^{\frac 32-H}}dy\right),
$$
for almost all $s\in (0,t)$ and for a function $f$ in
the image of $L^p([0,t])$, $p\ge 1$, by the right-sided fractional integral operator $I^{\frac 12-H}_{t-}$ (see \cite{SK} for more details).
As a consequence, $C^{\gamma}([0,T])\subset \EuFrak H\subset L^2([0,T])$ for every $\gamma>\frac12-H$. It should be noted that the operator $K_H^*$ is an isometry between the Hilbert space $\EuFrak H$ and $L^2([0,T])$. That is, for every $\varphi, \psi\in\EuFrak H$,
\begin{equation} \label{equ1}
\langle \varphi, \psi \rangle_\EuFrak H= \langle K_H^*\varphi, K_H^* \psi \rangle_{L^2([0,T])}.
\end{equation}
Consider the following seminorm on the space ${\mathcal{E}}$
\begin{equation}\label{iso}
\begin{array}{ll}
\| \varphi\|_ K^2 = \displaystyle\int_ 0^T &\varphi^2(s)[(T-s)^{2H-1}+ s^{2H-1}]ds \\ & + \displaystyle\int_0^T\left(\displaystyle\int_s^T |\varphi(t)-\varphi(s)|(t-s)^{H-\frac32}dt\right)^2 ds.
\end{array}
\end{equation}
We denote by $\EuFrak H_K$ the completion of ${\mathcal{E}}$ with respect to this seminorm. From the estimates (\ref{est1A}) and (\ref{est2}), there exists a constant $k_H$ such that for any $\varphi \in \EuFrak H_K$,
\begin{equation}\label{est01}
\| \varphi\|^2_{\EuFrak H} =\|K^*_{H}(\varphi)\|^2_{L^2([0,T])}
\leq k_H\| \varphi\|^2_{ K} .
\end{equation}
As a consequence, the space $\EuFrak H_K$ is continuously embedded in $\EuFrak H$. This implies also that
$\mathbb{D}^{1, 2}(\EuFrak H_K) \subset \mathbb{D}^{1,2}(\EuFrak H) \subset {\rm Dom}\, \delta$.
One can show also that $\EuFrak H = I_{T-}^{\frac12 -H}(L^2([0,T]))$ (see \cite{DU}). Then, the space $\EuFrak H$ is too small for some purposes. For instance, it has been proved in \cite{CN} that the trajectories of the fBm $B$
belong to $\EuFrak H$ if and only if $H>\frac14$. This creates difficulties when defining the divergence $\delta(u)$ of a stochastic process whose trajectories do not belong to $\EuFrak H$, for example, if $u_t=f(B_t)$ and $H<\frac 14$, because the domain of $\delta$ is included in
$L^{2}(\Omega; \EuFrak H)$. To overcome this difficulty, an extended domain of the divergence operator has been introduced in \cite{CN}. The main ingredient in the definition of this extended domain is the extension of the inner product $\langle \varphi, \psi \rangle_\EuFrak H$ to the case where $\psi \in \mathcal{E}$ and $\varphi \in L^\beta([0,T])$ for some $\beta >\frac 1{2H}$ (see \cite{LN}).
More precisely, for $\varphi \in L^\beta([0,T])$ and $\psi = \sum_{j=1}^{m}b_j\textbf{1}_{[0,t_j]} \in \mathcal{E}$ we set
\begin{equation} \label{ext}
\langle\varphi, \psi \rangle_\EuFrak H = \displaystyle\sum_{j=1}^{m}b_j\displaystyle\int_0^T\varphi_s \dfrac{\partial R_H}{\partial s}(s, t_j)ds.
\end{equation}
This expression coincides with the inner product in $\EuFrak H$ if $\varphi \in \EuFrak H$, and it is well defined because
\[
|\langle\varphi, \textbf{1}_{[0,t]} \rangle_\EuFrak H|
= \left|\int_0^T\varphi_s \dfrac{\partial R_H}{\partial s}(s, t)ds \right|
\leq \|\varphi\|_{L^\beta([0,T])} \sup_{0\leq t\leq T} \left(\int_0^T\left|\dfrac{\partial R_H}{\partial s}(s, t)\right|^{\alpha}ds\right)^{\frac{1}{\alpha}}<\infty,
\]
where $\alpha$ denotes the conjugate exponent of $\beta$.
\vspace{0.1cm}
We will make use of the following notation: for each $(a, b)\in\mathbb{R}^2$, $a\wedge b = \min(a, b)$ and
$a\vee b = \max(a, b)$.
\section{$L^p$-estimate of divergence integrals with respect to fBm}
Let $V$ be a given Hilbert space. We introduce the following hypothesis for a $V$-valued stochastic process $u=\{ u_t, t\in [0,T]\}$, for some $p\ge 2$.
\medskip
\noindent
\textbf{Hypothesis} $\mathbf{(A.1)}_p$ \textit{ Let $p\ge 2$. Assume that $\displaystyle\sup_{0\leq s\leq T}\Vert u_s\Vert_{L^{p}(\Omega; V)} <\infty $ and that there exist constants $L>0$, $0<\alpha <\frac12$ and $\gamma >\frac12 -H$ such that,
\begin{equation*} \label{A1}
\Vert u_t -u_s\Vert_{L^{p}(\Omega; V)}\leq L s^{-\alpha }|t-s|^{\gamma},
\end{equation*}
for all $0<s\leq t \leq T$. }
For any $ 0\le a< b \le T$, we will make use of the notation
\[
\|u\|_{p,a,b} = \sup_{a\le s\le b} \|u_s\| _{L^{p}(\Omega; V)}.
\]
The following lemma is a crucial ingredient to establish the
$L^p$-estimates for the divergence integral with respect to fBm.
\begin{lemma}\label{lem1}
Let $u=\{u_t, 0\leq t\leq T\}$ be a process with values in a Hilbert space $V$, satisfying assumption $\mathbf{(A.1)}_p$ for some $p\geq 2$. Then, there exists a positive constant $C$ depending on $H$, $\gamma$ and $p$ such that for every $0<a\leq b \le T$
\begin{equation} \label{est1}
\mathbb{E} \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right) \leq C\left(\| u\|_{p,a,b}^p(b-a)^{pH}+ L^pa^{-p\alpha }(b-a)^{p\gamma +pH}\right).
\end{equation}
Moreover if $a=0$, then
\begin{equation} \label{est1a}
\mathbb{E} \left( \|u {\mathbf 1}_{[0,b]} \|^p_{\EuFrak H \otimes V} \right) \leq C\left(\| u\|_{p,0,b}^p b^{pH}+L^pb^{-p\alpha +p\gamma+pH}\right).
\end{equation}
\end{lemma}
\noindent\textit{Proof}. Suppose first that $a> 0$. By equalities (\ref{equ1}) and (\ref{est0}) we obtain
\begin{eqnarray*}
&& \mathbb{E} \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right)
= \mathbb{E} \left( \| K_H^*(u \textbf{1}_{[a,b]} ) \| ^p_{L^2([0,T];V)} \right) \\
& &= \mathbb{E} \left( \left\| K_H(T,s)u _s{\mathbf 1}_{[a,b]}(s) +\displaystyle\int_{s}^T\Big(u_t{\mathbf 1}_{[a,b]}(t)-u_{s} {\mathbf 1}_{[a,b]}(s)\Big)\dfrac{\partial K_H}{\partial t}(t,s)dt \right \|^p_{L^2([0,T];V)} \right).
\end{eqnarray*}
Consider the decomposition
\begin{eqnarray*}
&&\displaystyle\int_s^T\Big(u_t{\mathbf 1}_{[a,b]}(t)-u_s{\mathbf 1}_{[a,b]}(s)\Big)\dfrac{\partial K_H}{\partial t}(t,s)dt
= \left[\displaystyle\int_s^b(u_t -u_s)\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[a,b]}(s) \\
&&\qquad +\left[-\displaystyle\int_b^T u_s\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[a,b]}(s)
+\left[ \displaystyle\int_a^b u_t\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[0,a]}(s) \\
&& \qquad := I_1 +I_2+I_3.
\end{eqnarray*}
Therefore
\[
\mathbb{E} \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right) \le C\sum_{i=0}^3 A_i,
\]
where $A_0= \mathbb{E}\left[\| K_H(T,\cdot)u \textbf{1}_{[a,b]} \|^p_{L^2([0,T]; V)} \right]$ and for $i=1,2,3$, $A_i= \mathbb{E} \left[\| I_i\|^p_{L^2([0,T];V)} \right]$.
Let us now estimate the four terms $A_i$, $i=0,1,2,3$, in the previous inequality. By estimate (\ref{est2}), Minkowski inequality and Hypothesis $\mathbf{(A.1)}_p$ we obtain
\begin{eqnarray}\notag
A_0
& \leq & C \mathbb{E}\left(\displaystyle\int_a^b [(T-s)^{2H-1}+ s^{2H-1}]\Vert u_s\Vert^2_Vds\right)^{\frac{p}{2}} \\ \notag
&\leq & C\left(\displaystyle\int_a^b [(T-s)^{2H-1}+s^{2H-1}]\| u_s\|^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\
& \leq & C \| u \|_{p,a,b}^p (b-a)^{pH},\label{eqA0}
\end{eqnarray}
where we have used that $(T-a)^{2H}\leq (T-b)^{2H} +(b-a)^{2H}$ and $b^{2H} -a^{2H} \le (b-a)^{2H}$.
Using Minkowski inequality, Hypothesis $\mathbf{(A.1)}_p$ and estimate (\ref{est1A}), it follows that
\begin{eqnarray}
A_1 \notag
& \leq & \left(\displaystyle\int_a^b \left\Vert \displaystyle\int_s^b(u_t -u_s)\dfrac{\partial K_H}{\partial t}(t,s)dt\right\Vert^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\ \notag
& \leq & \left(\displaystyle\int_a^b \left( \displaystyle\int_s^b\Vert u_t -u_s\Vert_{{L^{p}(\Omega; V)}}\left|\dfrac{\partial K_H}{\partial t}(t,s)\right|dt\right)^2ds
\right)^{\frac{p}{2}} \\
&\leq & C L^p\left(\displaystyle\int_a^b \left( \displaystyle\int_s^b s^{-\alpha }(t-s)^{\gamma+H-\frac32}dt\right)^2ds\right)^{\frac{p}{2}}.
\label{eqA00}
\end{eqnarray}
We have
\begin{eqnarray*}\notag
\displaystyle\int_a^b \left( \displaystyle\int_s^b s^{-\alpha}(t-s)^{\gamma+H-\frac32}dt\right)^2ds \notag
&= &\dfrac{1}{(\gamma +H-\frac12)^2}\displaystyle\int_a^bs^{-2\alpha }(b-s)^{2\gamma +2H-1}ds \\ \notag
&\le &
\dfrac{1}{(\gamma +H-\frac12)^2 (2\gamma +2H)} a^{-2\alpha } (b-a)^{2\gamma +2H}.
\end{eqnarray*}
Substituting this expression into inequality (\ref{eqA00}), yields
\begin{equation}\label{eqA1}
A_1
\leq
C L^p a^{-p\alpha } (b-a)^{p\gamma +pH}.
\end{equation}
By the same arguments as above, it follows from Minkowski inequality and Hypothesis $\mathbf{(A.1)}_p$ that
\begin{eqnarray}
A_2
& =& \notag
\left(\displaystyle\int_a^b \left\Vert\displaystyle\int_b^T u_s\dfrac{\partial K_H}{\partial t}(t,s)dt\right\Vert^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\ \notag
& \leq & C \left(\displaystyle\int_a^b\left(\displaystyle\int_b^T \Vert u_s\Vert_{{L^{p}(\Omega; V)}}(t-s)^{H-\frac32}dt\right)^2ds\right)^{\frac{p}{2}} \\ \notag
&\leq & C\|u \|_{p,a,b}^p \left(\displaystyle\int_a^b\left((T-s)^{H-\frac12}-(b-s)^{H-\frac12}\right)^2ds\right)^{\frac{p}{2}} \\ \notag
&\leq &C\|u \|_{p,a,b}^p\Big((T-a)^{2H}-(T-b)^{2H})+ (b-a)^{2H}\Big)^{\frac{p}{2}} \\ \label{eqA2}
&\leq & C\| u \|_{p,a,b}^p(b-a)^{pH},
\end{eqnarray}
where we have used that $(T-a)^{2H}-(T-b)^{2H} \leq (b-a)^{2H}$.\\
Finally, for the term $A_3$, we obtain in the same way
\begin{eqnarray}\notag
A_3 \notag
&\leq & \left(\displaystyle\int_0^a \left(\displaystyle\int_a^b \Vert u_t\Vert_{{L^{p}(\Omega; V)}}|\dfrac{\partial K_H}{\partial t}(t,s)|dt\right)^2ds\right)^{\frac{p}{2}} \\
& \leq & C\| u \|_{p,a,b}^p\left(\displaystyle\int_0^a \left(\displaystyle\int_a^b (t-s)^{H-\frac{3}{2}}dt\right)^2ds\right)^{\frac{p}{2}} \notag \\
& \le & C\| u \|_{p,a,b}^p\left(\displaystyle\int_0^a \left((a-s)^{H-\frac12} -(b-s)^{H-\frac12}\right)^2ds\right)^{\frac{p}{2}} \notag \\
& \le & C \| u \|_{p,a,b}^p (b-a) ^{pH}. \label{eqA3}
\end{eqnarray}
For the last inequality we have used the following computations
\begin{eqnarray*}
& & \displaystyle\int_0^a \left((a-s)^{H-\frac12} -(b-s)^{H-\frac12}\right)^2ds \\
&& \quad = \frac 1{2H} \left( b^{2H} + a^{2H} -(b-a) ^{2H} \right)
-2\displaystyle\int_0^a (a-s)^{H-\frac12}(b-s)^{H-\frac12}ds \\
& & \quad \leq \frac 1{2H} \left( b^{2H} + a^{2H} -(b-a) ^{2H} \right) -2\displaystyle\int_0^a (b-s)^{2H-1}ds \\
&& \quad \leq \frac 1{2H} \left( (b-a) ^{2H} - (b^{2H} -a^{2H}) \right) \le \frac 1{2H} (b-a)^{2H}.
\end{eqnarray*}
The inequality (\ref{est1}) follows from the estimates (\ref{eqA0}), (\ref{eqA1}), (\ref{eqA2}) and (\ref{eqA3}). The case $a=0$ can be proved using similar arguments. The proof of Lemma \ref{lem1} is then completed.
\eop
\vspace{0.4cm}
We are now in a position to prove the following theorem, which gives an estimate of the $L^{p}$-norm of the Skorohod integral of a process $u$ with respect to a fBm with Hurst parameter $H\in (0,\frac 12)$. We first need the following assumption on the process $u$.
\medskip
\noindent
\textbf{Hypothesis} $\mathbf{(A.2)}_p$ \textit{ Let $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ be a real-valued stochastic process, which satisfies Hypothesis $\mathbf{(A.1)}_p$ with constants $L_u$, $ \alpha_1$ and $\gamma$ for a fixed $p\geq 2$. We also assume that the $\EuFrak H$-valued process $\{Du_s, s\in [0,T]\}$ satisfies Hypothesis $\mathbf{(A.1)}_p$ with constants $L_{Du}$, $ \alpha_2$ and $\gamma$ for the same value of $p$. }
\medskip Hypothesis $\mathbf{(A.2)}_p$ means that $u_s$ and $Du_s$ have bounded $L^p$ norms in $[0,T]$ and satisfy
\begin{eqnarray}
\Vert u_t -u_s\Vert_{L^{p}(\Omega)}&\leq& L_us^{-\alpha _1}|t-s|^{\gamma} \label{assump1}
\\
\Vert Du_t -Du_s\Vert_{L^{p}(\Omega; \EuFrak H)}&\leq & L_{Du}s^{-\alpha _2}|t-s|^{\gamma}, \label{assump2}
\end{eqnarray}
for all $0<s\leq t\leq T$.
\medskip
\begin{theorem}\label{the1}
Suppose that $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\mathbf{(A.2)}_p$ for some $p\geq 2$. Let $0< a\leq b\leq T$. Then, there exists a positive constant $C$ depending on $H$, $\gamma$ and $p$ such that
\begin{eqnarray} \notag
& & \mathbb{E}\left( \left |\displaystyle\int_a^b u_s \delta B_s\right|^p \right) \\
& \leq &
C\left((\| u \|_{p,a,b}^p+\| Du \|_{p,a,b}^p)(b-a)^{pH}+ (L_u ^pa^{-p\alpha_1}+L_{Du}^pa^{-p\alpha_2})(b-a)^{p\gamma +pH}\right).\qquad
\label{ineq1}
\end{eqnarray}
If $a=0$, then
\begin{eqnarray}
\mathbb{E} \left( \left|\displaystyle\int_0^b u_s \delta B_s\right|^p \right) \leq C\left((\|u \|_{p,0,b}^p+\|Du \|_{p,0,b}^p) b^{pH}+(L_u^pb^{-p\alpha_1}+L_{Du}^pb^{-p\alpha_2})b^{p\gamma+pH}\right).
\label{ineq2}
\end{eqnarray}
\end{theorem}
\noindent\textit{Proof}.
By inequality (\ref{meyer}), we have
$$
\mathbb{E} \left( \left|\displaystyle\int_a^b u_s \delta B_s\right|^p \right) \leq C_p\left( \mathbb{E} (\| u {\mathbf 1}_{[a,b]} \| ^p_{ \EuFrak H } )+\mathbb{E} ( \| D_s(u_t\textbf{1}_{[a,b]}(t)) \| ^p_{ \EuFrak H \otimes \EuFrak H})\right).
$$
The first and the second terms on the right-hand side can be estimated by applying Lemma \ref{lem1} to the processes $u$ and $Du$, with $V= \mathbb{R}$ and $V=\EuFrak H$, respectively. Theorem \ref{the1} is then proved.\eop
\begin{remark} If we suppose that $\alpha_1= \alpha_2=0$ in Hypothesis $\mathbf{(A.2)}_p$, that is, $u$ and $Du$ are H\"older continuous in $L^p$ on $[0,T]$, then estimate (\ref{ineq1}) in Theorem \ref{the1} can be written as
$$
\mathbb{E}\left( \left| \displaystyle\int_a^b u_s \delta B_s\right|^p \right) \leq C\Vert u\Vert_{1,p,\gamma}^p(b-a)^{pH},
$$
where
$$
\Vert u\Vert_{1,p,\gamma}=\displaystyle\sup_{0\leq s<t\leq T}\dfrac{\Vert u_t -u_s\Vert_{1,p}}{|t-s|^{\gamma}}+\displaystyle\sup_{0\leq s\leq T} \Vert u_s\Vert_{1,p}.
$$
\end{remark}
\section{The $\frac{1}{H}$-variation of divergence integral with respect to fBm}
Fix $q\geq 1$ and $T>0$ and set $t_i^n:= \frac{iT}{n}$, where $n$ is a positive integer and $i=0,1,2,\dots,n$. We need the following definition.
\begin{definition}
Let $X$ be a given stochastic process defined in the complete probability space $(\Omega, {\cal F}, P)$. Let $V_n^q(X)$ be the random variable defined by
$$
V_n^q(X):= \sum_{i=0}^{n-1}|\Delta_i^n X|^q,
$$
where $\Delta_i^n X := X_{t^n_{i+1}}-X_{t^n_{i}}$. We define the $q$-variation of $X$ as the limit in $L^1(\Omega)$, as $n$ goes to infinity, of $V_n^q(X)$ if this limit exists.
\end{definition}
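On a uniform grid, $V_n^q(X)$ is immediate to compute from a sampled path; a minimal sketch (assuming \texttt{path} holds the values $X_{t_0^n},\dots,X_{t_n^n}$):
\begin{verbatim}
import numpy as np

def V(path, q):
    # q-variation sum V_n^q over a uniform grid
    return np.sum(np.abs(np.diff(path))**q)
\end{verbatim}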
As in the last section we assume that $H\in (0, \frac12)$. In this section, we need the following assumption on the process $u$.
\medskip
\noindent
\textbf{Hypothesis} $\mathbf{(A.3)}$ \textit{
Let $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ be a real-valued stochastic process which is bounded in $L^q(\Omega)$ for some $q>\frac{1}{H}$ and satisfies the H\"older continuity property (\ref{assump1}) with $p=\frac{1}{H}$, that is
\begin{eqnarray}
\Vert u_t -u_s\Vert_{L^{\frac{1}{H}}(\Omega)}&\leq& L_us^{-\alpha _1}|t-s|^{\gamma}. \label{assump11}
\end{eqnarray}
Suppose also that the $\EuFrak H$-valued process $\{Du_s, s\in [0,T]\}$ is bounded in $L^{\frac{1}{H}}(\Omega; \EuFrak H)$ and satisfies the H\"older continuity property (\ref{assump2}) with $p=\frac{1}{H}$, that is
\begin{eqnarray}
\Vert Du_t -Du_s\Vert_{L^{\frac{1}{H}}(\Omega; \EuFrak H)}&\leq& L_{Du}s^{-\alpha _2}|t-s|^{\gamma}. \label{assump21}
\end{eqnarray}
Moreover, we assume that the derivative $\{D_tu_s, s,t\in [0,T]\}$ satisfies
\begin{equation}\label{assump3}
\displaystyle\sup_{0\leq s\leq T}\Vert D_su_t\Vert_{L^{\frac{1}{H}}(\Omega)} \leq K t^{-\alpha_3},
\end{equation}
for every $t\in(0, T]$ and for some constants $0<\alpha_3<2H$ and $K>0$.}
\medskip
Consider the indefinite divergence integral of $u$ with respect to the fBm $B$, given by
\begin{equation} \label{equ2}
X_t = \int_0^t u_s \delta B_s := \delta(u\textbf{1}_{[0,t]}).
\end{equation}
The main result of this section is the following theorem.
\begin{theorem}\label{the2}
Suppose that $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\textbf{(A.3)}$, and consider the divergence integral process $X$ given by (\ref{equ2}). Then, we have
\[
V_n^{\frac{1}{H}}(X) \overset{L^{1}(\Omega)} {\longrightarrow} e_H \displaystyle\int_ 0^T|u_s|^{\frac{1}{H}}ds,
\]
as $n$ tends to infinity,
where $e_H = \mathbb{E} \left[|B_1|^{\frac{1}{H}}\right]$.
\end{theorem}
\noindent\textit{Proof}.
We need to show that the expression
\[
F_n:= \mathbb{E}\left(\left|\sum_{i=0}^{n-1}\left|\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s \delta B_s\right|^{\frac{1}{H}}-e_H \displaystyle\int_0^T |u_s|^{\frac{1}{H}}ds\right|\right),
\]
converges to zero as $n$ tends to infinity.
Using (\ref{p1}), we can write
\begin{equation}\label{decom}
\begin{array}{ll}
\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s \delta B_s
&=\displaystyle\int_{t_i^n}^{t_{i+1}^n} (u_s-u_{t_i^n}) \delta B_s + \displaystyle\int_{t_i^n}^{t_{i+1}^n} u_{t_{i}^n}\delta B_s
\\ & =\displaystyle\int_{t_i^n}^{t_{i+1}^n} (u_s-u_{t_i^n}) \delta B_s-\langle Du_{t_i^n}, \textbf{1}_{[t_{i}^n, t_{i+1}^n]}\rangle_{{\EuFrak H}} + u_{t_{i}^n}(B_{t_{i+1}^n}-B_{t_{i}^n}).
\\ & := A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n}.
\end{array}
\end{equation}
By the triangular inequality, we obtain
\begin{equation}
F_n \le \mathbb{E}\left(\sum_{i=0}^{n-1}\left| |A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n} |^{\frac{1}{H}} - |A_{i}^{3,n} |^{\frac{1}{H}}\right|\right) +D_n, \label{eq45}
\end{equation}
where
\[
D_n=\mathbb{E}\left(\left|
\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}-e_H \displaystyle\int_0^T |u_s|^{\frac{1}{H}}ds\right|\right).
\]
Using the mean value theorem and H\"older inequality, we can write
\begin{eqnarray} \notag
& &\mathbb{E}\left( \sum_{i=0}^{n-1}\left| |A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n} |^{\frac{1}{H}} - |A_{i}^{3,n} |^{\frac{1}{H}}\right|\right) \\ \notag
&& \leq \frac{1}{H}\mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |\left[ |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n}|^{\frac{1}{H}-1} + |A_{i}^{3,n}|^{\frac{1}{H}-1}\right]\right) \\ \notag
& & \leq
C\left[ \mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |^{\frac{1}{H}}\right)\right]^H \\
&& \qquad \qquad \times \left[\mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n} |^{\frac{1}{H}}\right)
+\mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}\right)\right]^{1-H}. \label{eq451}
\end{eqnarray}
Substituting (\ref{eq451}) into (\ref{eq45}) yields
\[
F_n \le CA_{n}^H(B_n + C_n)^{1-H} + D_n,
\]
where
\begin{eqnarray*}
A_n&=&\mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |^{\frac{1}{H}}\right), \\
B_n &=& \mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n} |^{\frac{1}{H}}\right),\\
C_n &=&\mathbb{E}\left(\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}\right).
\end{eqnarray*}
The proof will be divided into several steps. Throughout the proof, $C$ will denote a generic constant, which may vary from line to line and may depend on the processes $u$ and $Du$ and the different parameters appearing in the computations, but which is independent of $n$.
\\
\textit{Step 1.} We first prove that $B_n$ and $C_n$ are bounded. Remark that
\begin{eqnarray*}
B_n &= &\mathbb{E}\left( \left| \int_{0}^{\frac{T}{n}} u_s \delta B_s\right|^{\frac{1}{H}} \right)+ \mathbb{E} \left( \sum_{i=1}^{n-1}\left|\int_{t_i^n}^{t_{i+1}^n} u_s\delta B_s\right|^{\frac{1}{H}} \right) \\
& := &K_1^n+K_2^n.
\end{eqnarray*}
Using estimate (\ref{ineq2}) with $p=\frac{1}{H}$, it follows that
\begin{eqnarray*}
K_1^n
&\leq & C\left(\| u\| ^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}+\| Du\|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}\right)n^{-1}+ \left(L_{u}^{\frac{1}{H}} n^{\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}n^{\frac{\alpha_2}{H}}\right)n^{-\frac{\gamma}{H}-1}
\\ & \leq & C \left(n^{-1} +n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}-1}+n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}-1}\right).
\end{eqnarray*}
Therefore, $K_1^n$ is bounded since $\alpha_1 <\gamma +H$ and $\alpha_2 <\gamma +H$. In a similar way, estimate (\ref{ineq1}) leads to
\begin{eqnarray*}
K_2^n &\leq & C\sum_{i=1}^{n-1}\bigg\{\left(\| u\|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}+\| Du \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}\right)(t_{i+1}^n-t_{i}^n) \\
&& \qquad\qquad\quad+ \left(L_{u}^{\frac{1}{H}}(t_i^n)^{-\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}(t_{i}^n)^{-\frac{\alpha_2}{H}}\right)(t_{i+1}^n-t_{i}^n)^{\frac{\gamma}{H} +1}\bigg\} \\
& \leq &C \left(1+ n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}-1}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_1}{H}}}+ n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}-1}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_2}{H}}}\right).
\end{eqnarray*}
This proves that $K_2^n$ is bounded and so is $B_n$.
Using H\"{o}lder inequality and the fact that $u$ is bounded in $L^q(\Omega)$ for $q>\frac{1}{H}$, we obtain
\begin{eqnarray*}
C_n
&=&\sum_{i=0}^{n-1}\mathbb{E}\left( |u_{t_{i}^n} |^{\frac{1}{H}} |B_{t_{i+1}^n}-B_{t_{i}^n}|^{\frac{1}{H}}\right)
\\ & \leq & \sum_{i=0}^{n-1}\left[ \mathbb{E}\left(|u_{t_{i}^n}|^{q}\right)\right] ^{\frac{1}{qH}}\left[ \mathbb{E}\left(|B_{t_{i+1}^n}-B_{t_{i}^n}|^{\frac{q}{qH -1}}\right)\right]^{1-\frac{1}{qH}}
\\ &
\leq & C \sum_{i=0}^{n-1}(t_{i+1}^n-t_{i}^n) =CT,
\end{eqnarray*}
and this proves the boundedness of $C_n$.\\
\textit{Step 2.} We prove that $A_n$ converges to zero. Consider the decomposition
\[
\sum_{i=0}^{n-1}|A_{i}^{1,n}| ^{\frac{1}{H}}
=\left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s\right|^{\frac{1}{H}} + \sum_{i=1}^{n-1}|A_{i}^{1,n} |^{\frac{1}{H}}.
\]
Using estimate (\ref{ineq2}) with $p =\frac{1}{H}$, it follows that
\begin{eqnarray*}
&& \mathbb{E}\left( \left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s \right|^{\frac{1}{H}} \right) \\
&& \leq C\left[ \|u-u_{0} \|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}+ \| Du-Du_{0} \|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}\right]n^{-1}+ \left[L_{u}^{\frac{1}{H}}{n}^{\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}{n}^{\frac{\alpha_2}{H}}\right]{n}^{-\frac{\gamma}{H}-1} \\
&& \leq C n^{-1}\left(1 +{n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}}}+{n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}}}\right).
\end{eqnarray*}
Therefore $ \mathbb{E}\left( \left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s \right|^{\frac{1}{H}} \right) $ converges to zero as $n$ tends to infinity, since $\alpha_1 <\gamma +H$ and $\alpha_2 <\gamma +H$. We can also prove that $\mathbb{E} \left(\sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right)$ converges to zero. In fact, using estimate (\ref{ineq1}) with $p =\frac{1}{H}$, we obtain
\begin{eqnarray*}
\mathbb{E}\left( \sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right)
& \leq &
C \sum_{i=1}^{n-1}\Bigg[\left(\| u-u_{t_i^n} \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}+\| Du-Du_{t_i^n} \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}\right)(t_{i+1}^n-t_{i}^n)\\
&& \qquad\qquad\quad+ \left(L_{u}^{\frac{1}{H}}(t_i^n)^{-\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}(t_{i}^n)^{-\frac{\alpha_2}{H}}\right)(t_{i+1}^n-t_{i}^n)^{\frac{\gamma}{H} +1}\Bigg] \\
& & \leq C n^{-\frac{\gamma}{H}-1}\left({n^{\frac{\alpha_1}{H}}}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_1}{H}}}+ {n^{\frac{\alpha_2}{H}}}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_2}{H}}}\right),
\end{eqnarray*}
where we have used the fact that
\begin{equation*}
\| u-u_{t_i^n} \|_{\frac{1}{H},t_i^n,t_{i+1}^n} \leq L_{u}T^{\gamma-\alpha_1} i^{-\alpha_1} n^{\alpha_1-\gamma},
\end{equation*}
and
\begin{equation*}
\| Du-Du_{t_i^n} \|_{\frac{1}{H},t_i^n,t_{i+1}^n}
\leq L_{Du}T^{\gamma-\alpha_2} i^{-\alpha_2} n^{\alpha_2-\gamma}.
\end{equation*}
From the above computations, it follows that $\mathbb{E}\left( \sum_{i=1}^{n-1}|A_{i}^{1,n}|^{\frac{1}{H}} \right)$ converges to zero as $n$ goes to infinity. Therefore, combining the two estimates, we conclude that
\begin{equation}\label{term1}
\lim _{n \rightarrow \infty} \mathbb{E}\left( \sum_{i=0}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right) =0.
\end{equation}
Second, let us prove that $\mathbb{E} \left(\sum_{i=0}^{n-1}|A_{i}^{2,n}|^{\frac{1}{H}} \right)$ converges to zero as $n$ tends to infinity. It follows from (\ref{ext}) that each term $A_i^{2,n}$ can be expressed as
\begin{equation*}
A_{i}^{2,n}
=\int_0^T D_su_{t_i^n}\dfrac{\partial}{\partial s}\bigg(R_H(s,t_{i+1}^n)-R_H(s,t_{i}^n)\bigg)ds.
\end{equation*}
Therefore we have the following decomposition
\begin{equation*}
A_{i}^{2,n}:= J_1^{i,n}+J_2^{i,n}+J_3^{i,n},
\end{equation*}
where
\begin{eqnarray*}
J_1^{i,n} &=& \frac 12
\int_0^{t_{i}^n} D_su_{t_i^n}\dfrac{\partial}{\partial s}\left((t_{i}^n-s)^{2H}-(t_{i+1}^n-s)^{2H}\right)ds, \\
J_2^{i,n} &=&\frac 12 \int^{t_{i+1}^n}_{t_{i}^n} D_su_{t_i^n}\dfrac{\partial}{\partial s}\left((s-t_{i}^n)^{2H}-(t_{i+1}^n-s)^{2H}\right)ds, \\
J_3^{i,n}&=& \frac 12 \int_{t_{i+1}^n}^{T} D_su_{t_i^n}\frac{\partial}{\partial s}\left((s-t_{i}^n)^{2H}-(s-t_{i+1}^n)^{2H}\right)ds.
\end{eqnarray*}
Using Minkowski inequality and assumption (\ref{assump3}), we obtain
\begin{eqnarray*}
\mathbb{E} \left( \sum_{i=0}^{n-1}|J_1^{i,n}|^{\frac{1}{H}} \right)
& \leq & H \sum_{i=0}^{n-1}\left[ \int_0^{t_{i}^n} \Vert D_su_{t_i^n}\Vert_{L^{\frac{1}{H}}(\Omega)} \left|(t_{i+1}^n-s)^{2H-1}-(t_{i}^n-s)^{2H-1})\right|ds\right]^{\frac{1}{H}} \\
& \leq & C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}\left[ \int_0^{t_{i}^n} \left[(t_{i}^n-s)^{2H-1}-(t_{i+1}^n-s)^{2H-1}\right]ds\right]^{\frac{1}{H}} \\
& =& C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}\left[ (t_{i+1}^n-t_{i}^n)^{2H}-\left[(t_{i+1}^n)^{2H}-(t_{i}^n)^{2H}\right]\right]^{\frac{1}{H}} \\
& \leq & C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}(t_{i+1}^n-t_{i}^n)^{2} \\
& \leq & C {n^{\frac{\alpha_3}{H} -2}}\sum_{i=1}^{n-1} {i^{-\frac{\alpha_3}{H}}}.
\end{eqnarray*}
Taking into account that $\alpha_3 <2H$, we obtain that $\mathbb{E} \left( \sum_{i=0}^{n-1}|J_1^{i,n}|^{\frac{1}{H}} \right) $ converges to zero as $n$ tends to infinity. By means of similar arguments, we can show that $\mathbb{E} \left( \sum_{i=0}^{n-1}|J_2^{i,n}|^{\frac{1}{H}} \right) $ and $\mathbb{E} \left( \sum_{i=0}^{n-1}|J_3^{i,n}|^{\frac{1}{H}} \right) $ converge to zero as $n$ tends to infinity. Therefore,
\begin{equation}\label{term2}
\lim _{n \rightarrow \infty} \mathbb{E}\left( \sum_{i=0}^{n-1}|A_{i}^{2,n}|^{\frac{1}{H}} \right) =0.
\end{equation}
Consequently, from (\ref{term1}) and (\ref{term2}) we deduce that $A_n$ converges to zero as $n$ goes to infinity.\\
\noindent
\textit{Step 3.} In order to show that the term $D_n$ converges to zero as $n$ tends to infinity, we replace $n$ by the product $nm$ and first let $m$ tend to infinity. That is, we consider the partition of the interval $[0,T]$ given by $0=t_0^{nm}<\cdots <t_{nm}^{nm} = T$ and we define
\begin{eqnarray}
Z^{n,m} &:=& \notag
\left| \sum_{i=0}^{nm-1} |u_{t_{i}^{nm}}|^{\frac{1}{H}}|\Delta_{i}^{nm}B|^{\frac{1}{H}}
-e_H \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\right| \\ \notag
& = & \bigg| \sum_{j=0}^{n-1} \bigg[\sum_{i=jm}^{(j+1)m -1} \left(|u_{t_{i}^{nm}}|^{\frac{1}{H}}-|u_{t_{j}^{n}}|^{\frac{1}{H}}\right)|\Delta_{i}^{nm}B|^{\frac{1}{H}} \\ \notag
& & \qquad +|u_{t_{j}^{n}}|^{\frac{1}{H}}\left(\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right)\bigg]\bigg|. \\ \notag
& \leq & \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} \left||u_{t_{i}^{nm}}|^{\frac{1}{H}}-|u_{t_{j}^{n}}|^{\frac{1}{H}}\right||\Delta_{i}^{nm}B|^{\frac{1}{H}} \\ \notag
& &\qquad +\sum_{j=0}^{n-1}|u_{t_{j}^{n}}|^{\frac{1}{H}}\left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right| \\ \notag \label{eq3}
& := & Z_1^{n,m} + Z_2^{n,m}.
\end{eqnarray}
By the mean value theorem, we can write
\[
Z^{n,m}_1 \le
\frac 1H \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} \left|u_{t_{i}^{nm}}-u_{t_{j}^{n}}\right|\left(|u_{t_{i}^{nm}}|^{\frac{1}{H}-1}+|u_{t_{j}^{n}}|^{\frac{1}{H}-1}\right)|\Delta_{i}^{nm}B|^{\frac{1}{H}}.
\]
Using H\"{o}lder inequality, assumption (\ref{assump11}) as well as the boundedness of $u$ in $L^q(\Omega)$ for some $q>\frac{1}{H}$, we obtain
\[
\mathbb{E} (Z^{n,m}_1)
\leq Cn^{-1}m^{-1} \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} (t_j^n)^{-{\alpha_1}}(t_i^{nm}-t_j^{n})^{{\gamma}}
\leq {C}{n^{{\alpha_1}-{\gamma} -1}} \sum_{j=1}^{n-1} {j^{-\alpha_1}},
\]
which implies
\begin{equation}
\lim_{n\rightarrow \infty}\sup_{m\ge 1}\mathbb{E} (Z^{n,m}_1 )= 0. \label{equ6}
\end{equation}
On the other hand, using H\"{o}lder inequality and the fact that $u$ is bounded in $L^q(\Omega)$ for some $q>\frac{1}{H}$, we have
\begin{eqnarray*}
\mathbb{E}(Z^{n,m}_2)
&\leq &
\sum_{j=0}^{n-1}\left[ \mathbb{E}(|u_{t_{j}^{n}}|^q)\right]^{\frac{1}{qH}}\left[ \mathbb{E} \left(\left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right|^{\frac{qH}{qH-1}} \right)\right]^{1-\frac{1}{qH}} \\
& \leq & C \sum_{j=0}^{n-1}\left[ \mathbb{E} \left( \left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right|^{\frac{qH}{qH-1}} \right)\right]^{1-\frac{1}{qH}}.
\end{eqnarray*}
For any fixed $n\ge 1$, by the Ergodic Theorem the sequence $ \sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})$ converges to $0$ in $L^s$ as $m$ tends to infinity, for every $s>1$. This implies that, for any $n\ge 1$,
\begin{equation}
\lim_{m\rightarrow \infty} \mathbb{E} (Z^{n,m}_2 )= 0. \label{equ7}
\end{equation}
Therefore, it follows from (\ref{equ6}) and (\ref{equ7}) that
\begin{equation}
\lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty} \mathbb{E} (Z^{n,m})= 0. \label{equ8}
\end{equation}
By the mean value theorem, we can write
\begin{eqnarray*}
&& \Big| \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n}) -\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds\Big|
\leq \sum_{j=0}^{n-1}\int_{t^n_j}^{t^n_{j+1}}\left||u_{t^n_j}|^{\frac{1}{H}}-|u_s|^{\frac{1}{H}}\right| ds \\
& & \qquad \qquad \leq \frac 1H \sum_{j=0}^{n-1}\int_{t^n_j}^{t^n_{j+1}}|u_{t^n_j}-u_s| \left(|u_{t^n_j}|^{\frac{1}{H} -1}+|u_s|^{\frac{1}{H} -1}\right) ds.
\end{eqnarray*}
Then, applying H\"{o}lder inequality and assumption (\ref{assump11}), yields
\begin{eqnarray*}
\mathbb{E} \left( \left| \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n}) -\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds\right| \right)
&\leq& C \sum_{j=1}^{n-1}\displaystyle\int_{t^n_j}^{t^n_{j+1}} (t^n_j)^{-\alpha_1} ({t^n_{j+1}} -{t^n_{j}})^{\gamma} ds + Cn^{-1} \\
&\leq &C n^{\alpha_1-{\gamma}-1} \sum_{i=1}^{n-1}{i^{-{\alpha_1}}}+Cn^{-1}.
\end{eqnarray*}
This proves that $ \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})$
converges in $L^1(\Omega)$ to $\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds$ as $n$ tends to infinity. This convergence, together with (\ref{equ8}), implies that $D_n$ converges to zero as $n$ goes to infinity, which concludes the proof of the theorem.
\eop
\section{Divergence integral with respect to a $d$-dimensional fBm}
The purpose of this section is to generalize Theorem \ref{the2} to multidimensional processes. In order to proceed with this generalization, we first introduce the following notation.
Consider a $d$-dimensional fractional Brownian motion ($d\ge 2$)
$$
B=\{B_t, t\in [0,T]\} = \{ (B_t^{(1)}, B_t^{(2)},\dots, B_t^{(d)}),\,\, {t\in [0,T]}\}
$$
with Hurst parameter $H\in (0,1)$ defined in
a complete probability space $(\Omega, \mathcal{F},P)$, where $\mathcal{F}$ is generated by $B$. That is, the components $B^{(i)}$, $i=1,\dots,d$, are independent fractional Brownian motions with Hurst parameter $H$. We can define the derivative and divergence operators, $D^{(i)}$ and $\delta^{(i)}$, with respect to each component $B^{(i)},$ as in Section 2. Denote by $\mathbb{D}_i^{1,p}(\EuFrak H)$ the associated Sobolev spaces. We assume that these spaces include functionals depending on all the components of $B$ and not only on the $i$th component.
The Hilbert space $\EuFrak H_d$ associated with $B$ is the completion of the space $\mathcal{E}_d$ of step functions
$\varphi =(\varphi^{(1)},\dots,\varphi^{(d)}) : [0,T]\rightarrow \mathbb{R}^d$ with respect to the inner product
\[
\langle \varphi, \phi \rangle_{\EuFrak H_d} =\sum_{k=1}^d \langle \varphi^{(k)}, \phi^{(k)} \rangle_\EuFrak H.
\]
We can develop a Malliavin calculus for the process $B$, based on the Hilbert space $\EuFrak H_d$.
We denote by $\mathcal{S}_{d}$ the space of smooth and cylindrical random variables of the
form
$$
F=f\left( B(\varphi _{1}), \ldots ,
B(\varphi_{n})\right),
$$
where $f\in C_{b}^{\infty}(\mathbb{R}^{n})$,
$\varphi_{j} =(\varphi_{j}^{(1)},\dots,\varphi_{j}^{(d)}) \in \mathcal{E}_d$, and $B(\varphi_{j}) =\displaystyle\sum_{k=1}^{d} B ^{(k)} (\varphi_{j}^{(k)})$.
Denote by $\langle \cdot , \cdot \rangle$ the usual inner product on $\mathbb{R}^d$. The following result has been proved in \cite{GN} using the Ergodic Theorem.
\begin{lemma} \label{lem3}
Let $F$ be a bounded random variable with values in $\mathbb{R}^d$. Then, we have
$$
V_n^{\frac{1}{H}}(\langle F, B \rangle) \overset{L^{1}(\Omega)}{\longrightarrow} \displaystyle\int_{\mathbb{R}^d}\left[\displaystyle\int_ 0^T|\langle F, \xi\rangle|^{\frac{1}{H}}ds\right]\nu(d\xi),
$$
as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\mathbb{R}^d$.
\end{lemma}
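As a consistency check, if $F$ is deterministic, then $\langle F, B\rangle = \|F\|\tilde B$ for a one-dimensional fBm $\tilde B$, and, since $\langle F,\xi\rangle \sim N(0,\|F\|^2)$ under $\nu$, both sides of the above limit reduce to $T\|F\|^{\frac{1}{H}}e_H$, in agreement with (\ref{res1}).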
The following theorem is the multidimensional version of Theorem \ref{the2}.
\begin{theorem}\label{the5}
Suppose that for each $i=1,\dots, d$, $u^{(i)}\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\textbf{(A.3)}$. Set $u_t=(u_t^{(1)},\dots,u_t^{(d)})$ and consider the divergence integral process $X=\{X_t, t \in [0,T]\}$ defined by $X_t :=\sum_{i=1}^d \int_0^t u_s^{(i)}\delta B_s^{(i)}$. Then, we have
$$
V_n^{\frac{1}{H}}(X) \overset{L^{1}(\Omega)}{\longrightarrow} \displaystyle\int_{\mathbb{R}^d}\left[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\right]\nu(d\xi),
$$
as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\mathbb{R}^d$.
\end{theorem}
\noindent\textit{Proof}. This theorem can be proved by the same arguments as in the proof of Theorem \ref{the2}. We need to show that the expression
\[
F_n:= \mathbb{E}\left(\left|\sum_{i=0}^{n-1}\left|\sum_{k=1}^d\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s^{(k)} \delta B_s^{(k)}\right|^{\frac{1}{H}}-\displaystyle\int_{\mathbb{R}^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right|\right),
\]
converges to zero as $n$ tends to infinity.
Using the decomposition (\ref{decom}) for $\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s^{(k)} \delta B_s^{(k)}$, and applying the same techniques as in the proof of Theorem \ref{the2}, it is not difficult to see that
\begin{equation*}\label{Fn}
F_n \le CA_{n}^H(B_n + C_n)^{1-H} + D_n,
\end{equation*}
where $B_n,$ $C_n$ are bounded, $A_n$ converges to zero as $n$ tends to infinity, and $D_n$ is given by
\begin{eqnarray*}
D_n &:=& \mathbb{E}\left(\left|\sum_{i=0}^{n-1}| \langle u_{t_{i}^{n}}, \Delta_{i}^{n}B\rangle |^{\frac{1}{H}}-\displaystyle\int_{\mathbb{R}^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right|\right).
\end{eqnarray*}
It only remains to show that $D_n$ converges to zero as $n$ tends to infinity. To do this, as in the proof of Theorem \ref{the2}, we introduce the partition of the interval $[0,T]$ given by $0=t_0^{nm}<\cdots <t_{nm}^{nm} = T$, and we write
\begin{eqnarray}
V^{n,m} &:=& \notag
\left| \sum_{i=0}^{nm-1} |\langle u_{t_{i}^{nm}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}-\sum_{j=0}^{n-1}\displaystyle\int_{\mathbb{R}^d}
|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)\right| \\ \notag
& \leq & \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} |\langle u_{t_{i}^{nm}}-u_{t_{j}^{n}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}\\ \notag
& & \qquad +\sum_{j=0}^{n-1}\bigg|\sum_{i=jm}^{(j+1)m -1}|\langle u_{t_{j}^{n}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}-\displaystyle\int_{\mathbb{R}^d}|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)\bigg| \\ \notag
& := & V_1^{n,m} + V_2^{n,m}.
\end{eqnarray}
Then, using the same arguments as in Theorem \ref{the2}, we have
\begin{equation}\label{equ6m}
\lim_{n\rightarrow \infty}\sup_{m\ge 1}\mathbb{E} (V^{n,m}_1 )= 0.
\end{equation}
On the other hand, Lemma \ref{lem3} implies that for all $n\geq 1$
\begin{equation}\label{equ7m}
\lim_{m\rightarrow \infty} \mathbb{E} (V^{n,m}_2 )= 0.
\end{equation}
Moreover, it is not difficult to show that
$$
\displaystyle\lim_{n\rightarrow \infty}\mathbb{E}\left|\sum_{j=0}^{n-1}\displaystyle\int_{\mathbb{R}^d}
|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)-\displaystyle\int_{\mathbb{R}^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right| =0.
$$
Finally, this convergence, together with (\ref{equ6m}) and (\ref{equ7m}), implies that $D_n$ converges to zero as $n$ tends to infinity. This completes the proof of
Theorem \ref{the5}.\eop
\section{Fractional Bessel process}
In this section, we are going to apply the results of the previous section to the fractional Bessel process.
Let $B$ be a $d$-dimensional fractional Brownian motion ($d\ge 2$).
The process $R= \{R_t, t\in [0,T]\}$, defined by $R_t= \|B_t\|$,
is called the fractional Bessel process of dimension $d$ and Hurst parameter $H$.
It has been proved in \cite{CN} that, for $H> \frac12$, the fractional Bessel process $R$ has the following representation
\begin{equation}\label{rep}
R_t = \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\displaystyle\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds.
\end{equation}
This representation (\ref{rep}) is similar to the one obtained for Bessel processes with respect to standard Brownian motion (see, for instance, Karatzas and Shreve \cite{KS}). Indeed, if $W$ is a $d$-dimensional Brownian motion and $R_t =\|W_t\|$, then
$$
R_t = \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{W_s^{(i)}}{R_s}dW_s^{(i)} + \frac{d-1}2\displaystyle\int_ 0^t \dfrac{ds}{R_s}.
$$
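Indeed, away from the origin the classical It\^o formula applies to $F(x)=\|x\|$, for which
$$
\frac{\partial F}{\partial x_i}(x)=\frac{x_i}{\|x\|}, \qquad \sum_{i=1}^d\frac{\partial^2 F}{\partial x_i^2}(x)=\frac{d-1}{\|x\|},
$$
and the second-derivative term produces the drift $\frac{d-1}{2}\int_0^t \frac{ds}{R_s}$.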
The goal of this section is to extend the integral representation (\ref{rep}) to the case $H<\frac 12$. We cannot directly apply It\^o's formula because the function $\|x\|$ is not smooth at the origin. We need the following extension of the domain of the divergence operator to processes with trajectories in $L^{\beta}([0,T], \mathbb{R}^d)$, where $\beta >\frac 1{2H}$.
\begin{definition}\label{def3}
Fix $\beta >\frac 1{2H}$.
We say that a $d$-dimensional stochastic process $u=(u^{(1)},\dots , u^{(d)})\in L^1(\Omega; L^{\beta}([0,T], \mathbb{R}^d))$ belongs to the extended domain of the divergence ${\rm Dom}^*\delta$, if there exists $q>1$ such that
\begin{equation} \label{78}
|\mathbb{E}\langle u, DF\rangle_{\EuFrak H_d}|= \left |\sum_{i=1}^{d}\mathbb{E}(\langle u^{(i)}, D^{(i)} F\rangle_{\EuFrak H})\right | \leq c_u \| F\|_{L^q(\Omega)},
\end{equation}
for every smooth and cylindrical random variable $F \in \mathcal{S}_{d}$, where $c_u$ is some constant depending on $u$. In this case $\delta(u)\in L^{p}(\Omega)$, where $p$ is the conjugate exponent of $q$, is defined by the duality relationship
$$
\mathbb{E}(\langle u, DF\rangle_{\EuFrak H_d} )=\mathbb{E}(\delta(u) F),
$$
for every smooth and cylindrical random variable $F \in \mathcal{S}_{d}$.
\end{definition}
Notice that the inner product in (\ref{78}) is well defined by formula (\ref{ext}). If $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence, we will make use of the notation
\[
\delta(u\mathbf{1}_{[0,t]}) =\sum_{i=1}^d \int_0^t u^{(i)}_s \delta B_s^{(i)}.
\]
\begin{remark} Notice that, since $\beta >\frac 1{2H}$, we have $\EuFrak H_{d}\subset L^{\beta}([0,T], \mathbb{R}^d)$ and then ${\rm Dom}\, \delta \subset {\rm Dom}^*\delta$.
\end{remark}
\begin{remark} \label{rem5.1}
It should be noted that the process $R$ satisfies the following
\begin{equation}\label{eq6}
\mathbb{E}(R_t^{-q}) =C t^{-Hq} \displaystyle\int_0^{\infty} y^{d-1-q} e^{\frac{-y^2}{2}} dy:= K_q t^{-Hq},
\end{equation}
for every $q<d$, where $K_q$ is a positive constant. This property will be used later.
\end{remark}
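For completeness, (\ref{eq6}) follows from self-similarity: $B_t \overset{d}{=} t^H B_1$ implies $R_t \overset{d}{=} t^H \|B_1\|$, and $\|B_1\|$ has density proportional to $y^{d-1}e^{-y^2/2}$ on $(0,\infty)$, so that
$$
\mathbb{E}(R_t^{-q}) = t^{-Hq}\,\mathbb{E}(\|B_1\|^{-q}) = C t^{-Hq}\displaystyle\int_0^{\infty} y^{d-1-q} e^{\frac{-y^2}{2}} dy,
$$
and the integral converges at the origin precisely when $q<d$.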
We recall the following multidimensional It\^{o} formula for the fBm (see \cite{HMS}).
This formula requires a notion of extended domain of the divergence operator, ${\rm Dom}^{E} \delta$, introduced in \cite[Definition 3.9]{HMS}, which is slightly different from Definition
\ref{def3}: we require $u\in L^1(\Omega; L^{\beta}([0,T], \mathbb{R}^d))$ (instead of $u\in L^2(\Omega \times [0,T]; \mathbb{R}^d)$), and the extended divergence belongs to $L^p(\Omega)$ (instead of $L^2(\Omega)$). Our notion of extended domain will be useful to handle the case of the fractional Bessel process. Moreover, the class of test functionals is not the same, although this is not relevant because both classes are dense in $L^p(\Omega)$.
\begin{theorem}
Let $B$ be a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$. Suppose that $F\in C^2(\mathbb{R}^d)$ satisfies the growth condition
\begin{equation}\label{growth}
\max\left\{|F(x)|, \left|\frac{\partial F}{\partial x_i}(x)\right|, \left| \frac{\partial^2 F}{\partial x^2_i}(x) \right| ,\ i=1,\dots,d\right\}\leq ce^{\lambda \|x\|^2}, \qquad x\in\mathbb{R}^d,
\end{equation}
where $c$ and $\lambda$ are positive constants such that $\lambda <\dfrac{T^{-2H}}{4d }$. Then, for each $i=1,\dots,d$ and $t\in [0,T]$, the process $\{\textbf{1}_{[0,t]}(s)\dfrac{\partial F}{\partial x_i}(B_s),\ s\in[0,T]\}$ belongs to ${\rm Dom}^{E} \delta$, and the following formula holds
\begin{equation} \label{ito}
F(B_t) = F(0)+ \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{\partial F}{\partial x_i}(B_s)\delta B_s^{(i)}+
H \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{\partial^2 F}{\partial x^2_i}(B_s)s^{2H-1}ds,
\end{equation}
where ${\rm Dom}^{E} \delta$ is the extended domain of the divergence operator in the sense of Definition 3.9 in \cite{HMS}.
\end{theorem}
The next result is a change of variable formula for the fractional Bessel process in the case $H<\frac{1}{2}$.
\begin{theorem}\label{pro1}
Let $H<\frac{1}{2}$, and let $R=\{R_t,\ t\in [0,T]\}$ be the fractional Bessel process. Set $u_t^{(i)} =\frac {B^{(i)}_t}{R_t}$ and $u_t=(u_t^{(1)},\dots,u_t^{(d)})$, for
$t\in [0,T]$. Then, we have the following results:
\begin{enumerate}
\item[(i)] For any $t\in (0,T]$, the process $\{u_s \mathbf{1}_{[0,t]}(s), s\in [0,T]\}$ belongs to the extended domain ${\rm Dom}^*\delta$ and the representation (\ref{rep}) holds true.
\item[(ii)] If $H>\frac{1}{4}$, for any $t\in [0,T]$, the process $u\mathbf{1}_{[0,t]} $ belongs to $L^2(\Omega;\EuFrak H_d)$ and to
the domain of $\delta$ in $L^p(\Omega)$ for any $p<d$.
\end{enumerate}
\end{theorem}
\noindent\textit{Proof}. Let us first prove part (i). Since the function $\|x\|$ is not differentiable at the origin, the It\^{o} formula (\ref{ito}) cannot be applied and we need to make a suitable approximation. For $\varepsilon >0$, consider the function $F_{\varepsilon}(x) = (\| x\|^2 +\varepsilon^2)^{\frac12}$, which is smooth and satisfies condition (\ref{growth}). Applying It\^{o}'s formula (\ref{ito}) we have
\begin{equation}\label{eq7}
F_{\varepsilon}(B_t) = \varepsilon+ \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}\delta B_s^{(i)}+
Hd \int_ 0^t \dfrac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}ds-H \displaystyle\int_ 0^t \frac{s^{2H-1} R_s^2}{(R_s^2 +\varepsilon^2)^{\frac32}}ds.
\end{equation}
Clearly, $F_{\varepsilon}(B_t)$ converges to $R_t$ in $L^p$ for any $p\ge 1$. Let $1\leq p <d$. Using Minkowski's inequality, and taking into account Remark \ref{rem5.1}, we have
\begin{eqnarray*}
\mathbb{E}\left( \left| \int_ 0^t s^{2H-1}R_s^{-1}ds\right|^p \right)
&\leq &\left(\int_ 0^t s^{2H-1}\left(\mathbb{E}(R_s^{-p})\right)^{\frac{1}{p}}ds\right)^p \\
&\leq & K_p \left(\int_ 0^t s^{-H} s^{2H-1}ds\right)^p \le K_p H^{-p} t^{pH}.
\end{eqnarray*}
Since for every $\varepsilon >0,$ $ \frac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}\leq s^{2H-1}R_s^{-1}$, the dominated convergence theorem leads to the fact that $\int_ 0^t \frac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}ds$ converges to $\int_ 0^t \frac{s^{2H-1}}{R_s}ds$ in $L^{p}$ for any $1\leq p<d$, as $\varepsilon$ converges to zero.
In the same way, we prove that $\int_ 0^t \frac{s^{2H-1}R_s^2}{(R_s^2 +\varepsilon^2)^{\frac32}}ds$ converges to $\int_ 0^t \frac{s^{2H-1}}{R_s}ds$ in $L^{p}$ for any $1\leq p<d$, as $\varepsilon$ converges to zero.
Coming back to (\ref{eq7}), we deduce that $ \sum_{i=1}^{d}\int_ 0^t\frac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}\delta B_s^{(i)}$ converges in $L^{p}$ for any $1\leq p<d$, to some limit $G_t$, as $\varepsilon$ tends to zero.
We are going to show that the process $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence and $\delta(u\mathbf{1}_{[0,t]})=G_t$. Let $F$ be a smooth and cylindrical random variable in $\mathcal{S}_{d}$.
For $i=1,\dots,d$, let $u_s^{\varepsilon, (i)} =\frac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}$, and $u_s^\varepsilon= (u_s^{\varepsilon, (1)}, \dots, u_s^{\varepsilon, (d)})$. By the duality relationship we obtain
\begin{equation*}\label{duality}
\mathbb{E} (\langle u^{\varepsilon}\textbf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d} ) = \mathbb{E}(\delta(u^{\varepsilon}\textbf{1}_{[0,t]}) F).
\end{equation*}
Taking into account that $\delta(u^{\varepsilon}\textbf{1}_{[0,t]})$ converges to $G_t$ in $L^p$, and that
\[
\lim_{\varepsilon \rightarrow 0} \mathbb{E}(\langle u^{\varepsilon}\textbf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d}) =\mathbb{E}(\langle u \textbf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d}),
\]
since the components of $u$ are bounded by one, we deduce that
\[
\mathbb{E}(\langle u\mathbf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d})=\mathbb{E}(G_tF).
\]
This implies that $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence and $\delta(u\mathbf{1}_{[0,t]})=G_t$.
To show part (ii), let us assume that $H>\frac14$. We first show that for any $i=1,\dots,d$, $u^{(i)} \in L^2( \Omega; \EuFrak H)$. We can write
\begin{eqnarray*}
|u_t^{(i)}-u_s^{(i)}|
& \leq &{|B_t^{(i)} - B_s^{(i)}|}{R_t^{-1}} + {|R_s - R_t||B_s^{(i)}|}{R_t^{-1}R_s^{-1}} \\
& \leq & {\| B_t - B_s\| }{R_t^{-1}} + {|R_s - R_t|\| B_s\|}{R_t^{-1}R_s^{-1}} \\
& \leq & 2{\| B_t - B_s\| }{R_t^{-1}},
\end{eqnarray*}
where we have used the fact that
$$
|R_s - R_t|=\left| \| B_t\| - \| B_s\|\right| \leq \| B_t - B_s\|.
$$
Since $|u_t^{(i)}-u_s^{(i)}|\leq 2$, we obtain
\[
|u_t^{(i)}-u_s^{(i)}| \leq 2\left({\| B_t - B_s\| }{R_t^{-1}}\wedge 1\right),
\]
which implies
\begin{equation}\label{eqqq1}
|u_t^{(i)}-u_s^{(i)}| \leq 2\| B_t - B_s\|^{\alpha} {R_t^{-\alpha}},
\end{equation}
for every $\alpha\in[0,1]$.
We can write, using (\ref{est01}),
\begin{eqnarray*}
\mathbb{E}(\| u^{(i)}\|_{\EuFrak H}^2) & \leq & k_H \mathbb{E} \left( \int_ 0^T (u_s^{(i)})^2[(T-s)^{2H-1}+ s^{2H-1}]ds \right) \\
&& + k_H \mathbb{E}\left(\displaystyle\int_0^T\left(\displaystyle\int_s^T |u_t^{(i)}-u_s^{(i)}|(t-s)^{H-\frac32}dt\right)^2 ds\right) \\
& :=& k_H[N_1 + N_2].
\end{eqnarray*}
Since $|u_t^{(i)}|\leq 1$, it is clear that $N_1$ is bounded. To estimate $N_2$, choose $\alpha$, $q$ and $p$ such that $\frac{1}{2H} -1<\alpha \leq 1 $, $1<q<\frac{d}{2\alpha}$, and $\frac{1}{p}+\frac{1}{q} =1$. Using inequality (\ref{eqqq1}) and the Minkowski and H\"{o}lder inequalities, we get
\begin{eqnarray*}
N_2& \leq & 2
\int_0^T\mathbb{E}\left(\displaystyle\int_s^T \| B_t - B_s\|^{\alpha} {R_t^{-\alpha}}(t-s)^{H-\frac32}dt\right)^2 ds \\
& \leq & 2
\int_0^T\left(\int_s^T\left[ \mathbb{E}( \| B_t -B_s\|^{2\alpha p})\right]^{\frac{1}{2p}} \left[ \mathbb{E} (R_t^{-2\alpha q}) \right]^{\frac{1}{2q}}(t-s)^{H-\frac32}dt\right)^2 ds \\
& \leq &
C\int_0^T\left(\int_s^T(t-s)^{\alpha H} t^{-\alpha H}(t-s)^{ H -\frac32}dt\right)^2 ds \\
& \leq &
C\displaystyle\int_0^T s^{-2\alpha H}(T-s)^{2(\alpha +1)H -1} ds \\
& =& C T^{2H}\beta(-2\alpha H+1, 2(\alpha +1)H).
\end{eqnarray*}
Hence, for $i=1,\dots,d$, $\mathbb{E} (\| u^{(i)}\|_{\EuFrak H}^2) <\infty$ and, therefore, $u\in L^2(\Omega; \EuFrak H_d)$. Moreover, by the first assertion, it follows that for every $F\in \mathcal{S}_{d}$ and for $p<d$,
\begin{equation*}
|\mathbb{E}(\langle D F,u \mathbf{1}_{[0,t]}\rangle_{\EuFrak H_d})| = |\mathbb{E}(G_t F)| \leq \|G_t\|_{p} \|F\|_{q}.
\end{equation*}
Therefore, $u\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta$ in $L^p(\Omega)$.
\eop
Notice that, if $d>2$, then we can take $p=2$ in part (ii), and $u\mathbf{1}_{[0,t]}$ belongs to ${\rm Dom}\, \delta$.
Also, we remark that although $u\mathbf{1}_{[0,t]}$ belongs to the (extended) domain of the divergence, this does not imply that each component $u^{(i)}\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta^{(i)}$. In the next theorem, we show that under the stronger condition $2dH^2 >1$, each process $u^{(i)}$ belongs to $\mathbb{D}^{1,2}_i (\EuFrak H)$ and satisfies Hypothesis \textbf{(A.3)} of Section 4.
\begin{theorem}\label{pro2}
Suppose that $2dH^2>1$. Let $R=\{R_t, t\in [0,T]\}$ be the fractional Bessel process. Then, for $i=1,2,\dots,d$, the process $u_t^{(i)}=\dfrac{B_t^{(i)}}{R_t}$ satisfies Hypothesis $\textbf{(A.3)}$.
\end{theorem}
\noindent\textit{Proof}.
Fix $i=1,\dots, d$. The random variable $u_t^{(i)}$ is bounded and so it is bounded in $L^{q}(\Omega)$ for all $q>\frac{1}{H}$. The Malliavin derivative $D^{(i)} u^{(i)}$ is given by
$$
D^{(i)}_su_t^{(i)} = \left(-R_t^{-3} (B_t^{(i)})^2+R_t^{-1}\right) \textbf{1}_{[0,t]}(s):= \phi_t \textbf{1}_{[0,t]}(s).
$$
Notice that
\begin{equation*} \label{89}
\| D^{(i)}u^{(i)}_t \|_{\EuFrak H} \le 2R_t^{-1} t^{H}.
\end{equation*}
This implies $D^{(i)}u^{(i)}_t$ is bounded in $L^{\frac{1}{H}}(\Omega; \EuFrak H)$ because $dH>1$. Indeed, we have
\begin{equation*}
\| D^{(i)}u^{(i)}_t\|_{L^{\frac{1}{H}}(\Omega; \EuFrak H)} \le 2 \left(\mathbb{E}[R_t^{-\frac{1}{H}}]\right)^{H} t^{H} \le C.
\end{equation*}
Let us now prove that $u^{(i)}$ satisfies the inequalities (\ref{assump11}) and (\ref{assump21}), with $p=\frac{1}{H}$.
Let $0<s\le t \le T$.
Using estimate (\ref{eqqq1}) and choosing $\frac{1}{2H} -1<\alpha < Hd\wedge 1$, it follows that for $1< q< \frac{Hd}{\alpha}$ and $p_1>1$ such that $\frac{1}{p_1}+\frac{1}{q} =1$,
\begin{equation*}
\| u_t^{(i)}-u_s^{(i)}\|_{L^{\frac{1}{H}}(\Omega)} \leq 2\left[ \mathbb{E} \left( \| B_t - B_s\|^{\frac{\alpha p_1}{H}} \right) \right]^{\frac{H}{p_1}} \left(\mathbb{E} (R_t^{-\frac{\alpha q}{H}}) \right)^{\frac{H}{q}}
\leq C (t-s)^{\alpha H} s^{-\alpha H}.
\end{equation*}
Hence inequality (\ref{assump11}) is satisfied with $\alpha_1 =\alpha H<\frac12$ and $\gamma =\alpha H>\frac12 -H$.
In order to show inequality (\ref{assump21}) with $p=\frac{1}{H}$, we first write for $0<r \le t \le T$,
\begin{eqnarray} \notag
\| \phi_t\textbf{1}_{[0,t]}-\phi_r\textbf{1}_{[0,r]} \|_{\EuFrak H}
&\leq & \| \phi_t(\textbf{1}_{[0,t]}-\textbf{1}_{[0,r]}) \|_{\EuFrak H}+\| (\phi_t-\phi_r)\textbf{1}_{[0,r]} \|_{\EuFrak H} \\ \notag
& =& |\phi_t| \| \textbf{1}_{[0,t]}-\textbf{1}_{[0,r]} \|_{\EuFrak H}+|\phi_t-\phi_r|\| \textbf{1}_{[0,r]} \|_{\EuFrak H} \\
& \leq & C\left( R_t^{-1}(t-r)^{H}+|\phi_t-\phi_r|r^{H}\right). \label{phi}
\end{eqnarray}
We have
\begin{eqnarray*}
|\phi_t-\phi_r|& \le & \left| R_t^{-3} (B^{(i)}_t)^2 -R_r^{-3} (B^{(i)}_r)^2 \right| + | R_t^{-1} - R_r^{-1} | \\
&\le& R_t^{-3} R_r^{-3} \left( |R_t^3-R_r^3| (B^{(i)}_r)^2 + R_t^3 |(B^{(i)}_t)^2-(B^{(i)}_r)^2 | \right)+ R_t^{-1} R_r^{-1} |R_t-R_r| \\
&\le & \| B_t- B_r\| \left( 2R_t^{-1} R_r^{-1}+ 2R_t^{-3} R_r + R_t^{-2} + R_r^{-2} \right),
\end{eqnarray*}
and
$$
|\phi_t-\phi_r|\leq |\phi_t|+ |\phi_r| \leq 2(R^{-1}_t +R^{-1}_r).
$$
Put $R_{tr} := R_t^{-1} R_r^{-1}+ R_t^{-3} R_r + R_t^{-2} + R_r^{-2}$. Then, the above inequalities imply
\begin{equation*}
|\phi_t-\phi_r|\leq 4\left[ \left(\| B_t -B_r\|R_{tr}\right)\wedge \left(R_t^{-1}\vee R_r^{-1}\right)\right].
\end{equation*}
By using the same argument as above one can find also that
\begin{equation*}
|\phi_t-\phi_r|\leq 4\left[ \left(\| B_t -B_r\|R_{rt}\right)\wedge \left(R_t^{-1}\vee R_r^{-1}\right)\right].
\end{equation*}
Therefore, for every $\alpha \in [0,1]$, we can write
\begin{eqnarray} \notag
|\phi_t-\phi_r| & \leq& 4\left[ \left(\| B_t -B_r\| (R_{tr}\wedge R_{rt})\right)\wedge\left(R_t^{-1}\vee R_r^{-1}\right) \right]\\ \notag
& \leq &4 \| B_t -B_r\| ^\alpha (R_{tr}^\alpha\wedge R_{rt}^\alpha)\left(R_t^{\alpha-1}\vee R_r^{\alpha-1}\right) \\ \label{79}
&\le & C\| B_t -B_r\|^{\alpha}\left(R_t^{-\alpha-1}\vee R_r^{-\alpha-1}\right).
\end{eqnarray}
Then, substituting (\ref{79}) into (\ref{phi}) yields
\begin{equation*}
\| \phi_t\textbf{1}_{[0,t]}-\phi_r\textbf{1}_{[0,r]} \|_{\EuFrak H}
\leq C\left( R_t^{-1}(t-r)^{H}+\| B_t -B_r\|^{\alpha}\left(R_t^{-\alpha-1}\vee R_r^{-\alpha-1}\right)r^{H}\right).
\end{equation*}
Choose $\alpha$, $p_1$ and $q$ such that $\frac{1}{2H}-1<\alpha < (Hd-1)\wedge 1$, $1<p_1<\frac{dH}{\alpha +1}$ and $\frac{1}{p_1}+\frac{1}{q}= 1$. Then, we can write
\begin{eqnarray*}
&& \mathbb{E} \left(\| \phi_t\textbf{1}_{[0,t]}-\phi_r\textbf{1}_{[0,r]} \|_{\EuFrak H}^{\frac{1}{H}} \right) \\
&&\le C \mathbb{E}\left[ R_t^{-\frac{1}{H}}(t-r)+ r\| B_t -B_r\|^{ \frac{\alpha}{H}}\left(R_t^{-\frac{\alpha+1}{H}}\vee R_r^{-\frac{\alpha+1}{H}}\right)\right] \\
& & \leq C \left[C t^{-1}(t-r)+ r\left[ \mathbb{E} \left( \|B_t -B_r\|^{\frac{\alpha q}{H}} \right) \right] ^{\frac{1}{q}}\left[ \mathbb{E} \left( \left(R_t^{-\frac{\alpha+1}{H}}\vee R_r^{-\frac{\alpha+1}{H}}\right)^{p_1} \right)\right]^{\frac{1}{p_1}}\right] \\
& & \leq C\bigg(r^{-1}(t-r)+ r^{-\alpha }(t-r)^{\alpha }\bigg) \\
& & \leq 2C \max(r^{-1}(t-r),r^{-\alpha }(t-r)^{\alpha }),
\end{eqnarray*}
and inequality (\ref{assump21}) is satisfied with $\alpha_2 =H$ and $\gamma =\alpha H$.
Finally, for every $s \le t$, we have
\begin{equation*}
\|D^{(i)}_su^{(i)}_t\|_{L^{\frac{1}{H}}(\Omega)}
\leq 2\left(\mathbb{E} (R_t^{-\frac{1}{H}} ) \right)^{H}
= C t^{-H},
\end{equation*}
and then assumption (\ref{assump3}) is satisfied with $\alpha_3 = H$.
This ends the proof of Theorem \ref{pro2}.
\eop
\begin{remark}
If $Hd>1$, we can show, using the same arguments as in the proof of Theorem \ref{pro2},
that $u^{(i)} \in \mathbb{D}^{1,2}_i (\EuFrak H)$, for $i=1,2,\dots,d$.
\end{remark}
We now discuss the properties of the process $\Theta= \{\Theta_t, t\in [0,T]\}$ defined by
$$
\Theta_t:= \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)}.
$$
By Theorem \ref{pro2}, we have that for every $i=1,\dots,d$, $u_t^{(i)}=\dfrac{B_t^{(i)}}{R_t}$ satisfies Hypothesis $\textbf{(A.3)}$ if $2dH^2 >1$. Therefore, applying Theorem \ref{the5}, we have the following corollary.
\begin{corollary}\label{cor}
Suppose that $2dH^2 >1$. Then we have the following
\begin{equation*}
\begin{array}{ll}
V_n^{\frac{1}{H}}(\Theta) \overset{L^{1}(\Omega)}{{\longrightarrow}} \displaystyle\int_{\mathbb{R}^d}\left[\displaystyle\int_ 0^T \left| \left\langle\dfrac{B_s}{R_s}, \xi \right \rangle \right |^{\frac{1}{H}}ds\right]\nu(d\xi),
\end{array}
\end{equation*}
as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\mathbb{R}^d$.
\end{corollary}
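Since $\|B_s/R_s\|=1$ for every $s>0$, the limit in Corollary \ref{cor} is in fact deterministic: for any unit vector $u$, the scalar product $\langle u,\xi\rangle$ has the law $N(0,1)$ under $\nu$, so
$$
\int_{\mathbb{R}^d}\left[\int_0^T \left|\left\langle \frac{B_s}{R_s},\xi\right\rangle\right|^{\frac{1}{H}}ds\right]\nu(d\xi) = T\,\mathbb{E}\big(|Z|^{\frac{1}{H}}\big), \qquad Z\sim N(0,1),
$$
which coincides with the $\frac{1}{H}$-variation of the one-dimensional fBm.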
\begin{proposition} \label{prop7}
The process $\Theta$ is $H$-self-similar.
\end{proposition}
\noindent\textit{Proof}. Let $a>0$. By the representation (\ref{rep}) and the self-similarity of fBm, we have
\begin{eqnarray}\notag
\Theta_{at} &= & R_{at} -H(d-1)\displaystyle\int_0^{at} \dfrac{s^{2H-1}}{R_s}ds
\\ & \overset{d}{=}& a^HR_t -H(d-1)a^H\displaystyle\int_0^t \dfrac{u^{2H-1}}{R_u}du = a^H \Theta_t,\notag
\end{eqnarray}
where the symbol $\overset{d}{=}$ means that the distributions of both processes are the same. This proves that $\Theta$ is $H$-self-similar.
\eop
\begin{remark}
\begin{enumerate}
\item
Corollary \ref{cor} and Proposition \ref{prop7} imply that, if $2dH^2>1$, the process $\Theta$ and the fBm have the same $\frac 1H$-variation, and they are both $H$-self-similar. These results extend to the case $H<\frac 12$ those proved by Guerra and Nualart in \cite{GN}.
\item Let us note that although $\Theta$ and the one-dimensional fBm are both $H$-self-similar and have the same $\frac 1H$-variation, as shown in \cite{HN}, $\Theta$ is not a fractional Brownian motion with Hurst parameter $H$.
The proof of this fact is based on the Wiener chaos expansion. In contrast, in the classical Brownian motion case it is well known, from L\'{e}vy's characterization theorem, that the corresponding process is a Brownian motion.
\end{enumerate}
\end{remark}
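The convergence in Corollary \ref{cor} lends itself to a numerical check. The sketch below is ours, not part of the argument; the parameter choices $d=3$, $H=0.45$ (so that $2dH^2>1$) and the grid and sample sizes are arbitrary. It simulates exact fBm paths by a Cholesky factorization of the covariance, builds $\Theta$ through the representation (\ref{rep}), and compares $V_n^{1/H}(\Theta)$ with the deterministic limit $T\,\mathbb{E}(|Z|^{1/H}) = T\, 2^{1/(2H)}\Gamma\big(\tfrac{1/H+1}{2}\big)/\sqrt{\pi}$.
\begin{verbatim}
import numpy as np
from math import gamma, sqrt, pi

# Arbitrary choices satisfying 2*d*H^2 > 1 (hypothesis of the corollary).
d, H, T = 3, 0.45, 1.0
n = 1024                            # number of increments
t = np.linspace(0.0, T, n + 1)
s = t[1:]

# Exact fBm covariance on the grid; one Cholesky factor for all paths.
cov = 0.5 * (s[:, None]**(2*H) + s[None, :]**(2*H)
             - np.abs(s[:, None] - s[None, :])**(2*H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

rng = np.random.default_rng(0)
vals = []
for _ in range(200):                # Monte Carlo over fBm paths
    B = np.vstack([np.zeros((1, d)), L @ rng.standard_normal((n, d))])
    R = np.linalg.norm(B, axis=1)   # fractional Bessel process R_t = ||B_t||
    # Theta_t = R_t - H(d-1) int_0^t s^{2H-1}/R_s ds  (representation (rep))
    drift = np.concatenate(([0.0], np.cumsum(s**(2*H - 1) / R[1:] * (T/n))))
    Theta = R - H*(d - 1)*drift
    vals.append(np.sum(np.abs(np.diff(Theta))**(1.0/H)))

limit = T * 2**(1/(2*H)) * gamma((1/H + 1)/2) / sqrt(pi)  # T * E|Z|^{1/H}
print(f"mean V_n = {np.mean(vals):.3f}, predicted limit = {limit:.3f}")
\end{verbatim}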
\textbf{Acknowledgements.} This work was carried out during a stay of El Hassan Essaky at Kansas University (Lawrence, KS), as a part of the Fulbright program. He would like to thank KU, and especially Professor David Nualart, for the warm welcome and kind hospitality.
\section{Introduction}
Inflation \cite{lrreview} has become the dominant paradigm for understanding
the initial conditions for structure formation and for Cosmic Microwave
Background (CMB) temperature anisotropies. In the inflationary picture,
primordial density and gravitational-wave fluctuations are created from quantum
fluctuations, ``redshifted'' beyond the horizon during an early period of
superluminal expansion of the universe, then ``frozen''
\cite{Starobinsky:1979ty,muk81,bardeen83}. Perturbations at the surface of
last scattering are observable as temperature anisotropies in the CMB, as first
detected by the Cosmic Background Explorer satellite
\cite{bennett96,gorski96}. The latest and most impressive confirmation of the
inflationary paradigm has been recently provided by the three-year data from
the Wilkinson Microwave Anisotropy Probe (WMAP) satellite
\cite{wmap3cosm,wmap3pol,wmap3temp,wmap3beam}. The WMAP collaboration has
produced new full-sky temperature maps in five frequency bands from 23 to 94
GHz based on the first three years of the WMAP sky survey. The new maps, which
are consistent with the first-year maps and more sensitive, incorporate
improvements in data processing made possible by the additional years of data
and by a more complete analysis of the polarization signal. WMAP data support
the inflationary model as the mechanism for the generation of super-horizon
curvature fluctuations.
The goal of this paper is to make use of the recent WMAP three-year data
(WMAP3) to discriminate among the various single-field inflationary models. As
such, this paper represents a complete update of our previous analysis
\cite{wmapping1} of the first-year WMAP data.
For single-field inflation models, the relevant parameter space for
distinguishing among models is defined by the scalar spectral index $n$, the
ratio of tensor to scalar fluctuations $r$, and the running of the scalar
spectral index $d n / d\ln{k}$. We employ {\em Monte Carlo reconstruction}, a
stochastic method for ``inverting'' observational constraints to generate an
ensemble of inflationary potentials compatible with observation
\cite{kinney02,easther02}. In addition to encompassing a broader set of models
than usually considered (large-field, small-field, hybrid and linear models),
Monte Carlo reconstruction makes it possible easily to include effects to
higher order in slow roll.
The paper is organized as follows: In Sec.\ II we will quickly review
single-field inflation models and their observables. In Sec.\ III we define the
inflationary model space as a function of the slow-roll parameters $\epsilon$
and $\eta$. In Sec.\ IV we describe our analysis method as well as our results.
Since a study of the implications of the WMAP3 data for single-field models of
inflation has already been performed by the WMAP collaboration themselves
\cite{wmap3cosm}, we will also briefly specify the differences between our
analysis and theirs. In Sec.\ V we describe a Monte Carlo reconstruction
method to determine an ensemble of inflationary potentials compatible with
observations. In Sec.\ VI we present our conclusions.
\section{Single-field inflation and the inflationary observables}
In this section we briefly review scalar field models of inflationary
cosmology, and explain how we relate model parameters to observable quantities.
Inflation, in its most general sense, can be defined to be a period of
accelerating cosmological expansion during which the universe evolves toward
homogeneity and flatness. Within this broad framework, many specific models
for inflation have been proposed. We limit ourselves here to models with
``normal'' gravity ({\em i.e.,} general relativity) and a single order
parameter for the vacuum, described by a slowly rolling scalar field $\phi$,
the inflaton.
A scalar field in a cosmological background evolves with an equation of motion
$\ddot\phi + 3 H \dot\phi + V'\left(\phi\right) = 0.$ The evolution of the
scale factor is given by the scalar-field dominated FRW equation,
\begin{eqnarray}
H^2 & &= {8 \pi \over 3 m_{\rm Pl}^2} \left[{1 \over 2} \dot\phi^2 +
V\left(\phi\right)\right],\cr
\left(\ddot a \over a\right) &&= {8 \pi \over 3 m_{\rm Pl}^2}
\left[V\left(\phi\right) - \dot\phi^2\right].
\label{eqbackground}
\end{eqnarray}
We have assumed a flat Friedmann-Robertson-Walker metric $g_{\mu \nu} = {\rm
diag}(1, -a^2, -a^2, -a^2)$, where $a(t)$ is the scale factor of the universe.
{\em Inflation} is defined to be a period of accelerated expansion, $\ddot a >
0$. A powerful way of describing the dynamics of a scalar field-dominated
cosmology is to express the Hubble parameter as a function of the field $\phi$,
$H = H(\phi)$, which is consistent provided $\phi$ is monotonic in time. The
equations of motion become \cite{grishchuk88,muslimov90,salopek90,lidsey95}:
\begin{eqnarray}
& &\dot\phi = -{m_{\rm Pl}^2 \over 4 \pi} H'(\phi),\cr
& & \left[H'(\phi)\right]^2 - {12 \pi \over m_{\rm Pl}^2}
H^2(\phi) = - {32 \pi^2 \over m_{\rm Pl}^4}
V(\phi).
\label{eqbasichjequations}
\end{eqnarray}
These are completely equivalent to the second-order equation of motion. The
second of the above equations is referred to as the {\it Hamilton-Jacobi}
equation, and can be written in the useful form
\begin{equation}
H^2(\phi) \left[1 - {1\over 3}
\epsilon(\phi)\right] = \left({8 \pi \over 3 m_{\rm Pl}^2}\right) V(\phi),
\label{eqhubblehamiltonjacobi}
\end{equation}
where $\epsilon$ is defined to be
\begin{equation}
\epsilon(\phi) \equiv {m_{\rm Pl}^2 \over 4 \pi} \left({H'(\phi) \over
H(\phi)}\right)^2.\label{eqdefofepsilon}
\end{equation}
The physical meaning of $\epsilon(\phi)$ can be seen by expressing Eq.\
(\ref{eqbackground}) as
\begin{equation}
\left({\ddot a \over a}\right) = H^2 (\phi) \left[1 -
\epsilon(\phi)\right],
\end{equation}
so that the condition for inflation, $(\ddot a / a) > 0$, is equivalent to
$\epsilon < 1$. The scale factor is given by
\begin{equation}
a \propto e^{N} = \exp\left[\int_{t_0}^{t}{H\,dt}\right],
\end{equation}
where the number of {\it e}-folds $N$ is
\begin{equation}
N \equiv \int_{t}^{t_e}{H\,dt} = \int_{\phi}^{\phi_e}{{H \over
\dot\phi}\,d\phi} = {2 \sqrt{\pi} \over m_{\rm Pl}}
\int_{\phi_e}^{\phi}{d\phi \over
\sqrt{\epsilon(\phi)}}.\label{eqdefofN}
\end{equation}
We will frequently work within the context of the {\em slow-roll}
approximation, which is the assumption that the evolution of the field is
dominated by the drag from the cosmological expansion, so that $\ddot\phi
\simeq 0$ and $\dot \phi \simeq -V'/3 H$. The equation of state of the scalar
field is dominated by the potential, so that $p \simeq -\rho$, and the
expansion rate is approximately $H^2 \simeq 8 \pi V(\phi)/ 3 m_{\rm Pl}^2$.
The slow roll approximation is consistent if both the slope and curvature of
the potential are small, $V',\ V'' \ll V$. In this case the parameter
$\epsilon$ can be expressed in terms of the potential as
\begin{equation}
\epsilon \equiv {m_{\rm Pl}^2 \over 4 \pi} \left({H'\left(\phi\right) \over
H\left(\phi\right)}\right)^2 \simeq {m_{\rm Pl}^2 \over 16 \pi}
\left({V'\left(\phi\right) \over V\left(\phi\right)}\right)^2.
\end{equation}
We will also define a second ``slow-roll parameter'' $\eta$ by
\begin{eqnarray}
\eta\left(\phi\right) &\equiv& {m_{\rm Pl}^2 \over 4 \pi}
\left({H''\left(\phi\right)
\over H\left(\phi\right)}\right)\cr
&\simeq& {m_{\rm Pl}^2 \over 8 \pi}
\left[{V''\left(\phi\right) \over V\left(\phi\right)} - {1 \over 2}
\left({V'\left(\phi\right) \over V\left(\phi\right)}\right)^2\right].
\end{eqnarray}
Slow roll is then a consistent approximation for $\epsilon,\ \eta \ll 1$.
Inflation models not only explain the large-scale homogeneity of the universe,
but also provide a mechanism for explaining the observed level of {\em
inhomogeneity} as well. During inflation, quantum fluctuations on small scales
are quickly redshifted to scales much larger than the horizon size, where they
are ``frozen'' as perturbations in the background metric. The metric
perturbations created during inflation are of two types: scalar, or {\it
curvature} perturbations, which couple to the stress-energy of matter in the
universe and form the ``seeds'' for structure formation, and tensor, or
gravitational-wave perturbations, which do not couple to matter. Both scalar
and tensor perturbations contribute to CMB anisotropy. Scalar fluctuations can
also be interpreted as fluctuations in the density of the matter in the
universe. Scalar fluctuations can be quantitatively characterized by the power
spectrum of the comoving curvature perturbation, $P_{\cal R}$. As long as the
equation-of-state parameter $\epsilon$ is slowly varying, the curvature
perturbation can be shown to be \cite{lrreview}
\begin{equation}
P_{\cal R}^{1/2}\left(k\right) = \left({H^2 \over 2 \pi \dot \phi}\right)_{k =
a H} = \left [{H \over m_{\rm Pl} } {1 \over \sqrt{\pi \epsilon}}\right]_{k
= a H}.
\end{equation}
The fluctuation power spectrum is in general a function of wavenumber $k$, and
is evaluated when a given mode crosses outside the horizon during inflation, $k
= a H$. Outside the horizon, modes do not evolve, so the amplitude of the mode
when it crosses back {\em inside} the horizon during a later radiation- or
matter-dominated epoch is just its value when it left the horizon during
inflation. Instead of specifying the fluctuation amplitude directly as a
function of $k$, it is convenient to specify it as a function of the number of
{\it e}-folds $N$ before the end of inflation at which a mode crossed outside
the horizon.
The {\em spectral index} $n$ for $P_{\cal R}$ is defined by
\begin{equation}
n - 1 \equiv {d\ln P_{\cal R} \over d\ln k},
\end{equation}
so that a scale-invariant spectrum, in which modes have constant amplitude at
horizon crossing, is characterized by $n = 1$.
The power spectrum of tensor fluctuation modes is given
by \cite{lrreview}
\begin{equation}
P_{T}^{1/2}\left(k_N\right) = \left[\frac{4 H}{m_{\rm Pl} \sqrt{\pi}}
\right]_{N}.
\end{equation}
The ratio of tensor-to-scalar modes is then $ P_{T}/P_{\cal R} = 16 \epsilon$,
so that tensor modes are negligible for $\epsilon \ll 1$.\footnote{This
normalization convention is different from that used in our analysis of the
first-year WMAP data \cite{wmapping1}, which used the convention $r = 10
\epsilon$. In this paper, we have adopted the more standard normalization
convention $r = 16
\epsilon$.}
\section{The inflationary model space}
\label{seczoology}
To summarize the results of the previous section, inflation generates scalar
(density) and tensor (gravitational-wave) fluctuations which are generally well
approximated by power laws:
$P_{\cal R}\left(k\right) \propto k^{n - 1}$, $P_{T}\left(k\right)
\propto k^{n_{T}}$.
In the limit of slow roll, the spectral indices $n$ and $n_{T}$ vary slowly or
not at all with scale. We can write the spectral indices $n$ and $n_{T}$ to
lowest order in terms of the slow roll parameters $\epsilon$ and $\eta$ as:
\begin{eqnarray}
n \simeq&& 1 - 4 \epsilon + 2 \eta,\cr
n_{T} \simeq&& - 2 \epsilon.
\end{eqnarray}
The tensor/scalar ratio is frequently expressed as a quantity $r$, which is
conventionally normalized as
\begin{equation}
r \equiv 16 \epsilon = {P_{\rm T} \over P_{\cal R}} .
\end{equation}
The tensor spectral index is {\em not} an independent parameter, but is
proportional to the tensor/scalar ratio, given to lowest order in slow roll by
$ n_{T} \simeq - 2 \epsilon = - r/8$. This is known as the consistency
relation for inflation. A given inflation model can therefore be described to
lowest order in slow roll by three independent parameters, $P_{\cal R}$,
$P_{T}$, and $n$. If we wish to include higher-order effects, we have a fourth
parameter describing the running of the scalar spectral index, $d n / d\ln{k}$.
Calculating the CMB fluctuations from a particular inflation model reduces
to the following basic steps: (1) from the potential, calculate $\epsilon$ and
$\eta$. (2) From $\epsilon$, calculate $N$ as a function of the field $\phi$.
(3) Invert $N\left(\phi\right)$ to find $\phi_N$. (4) Calculate $P_{\cal R}$,
$n$, and $P_T$ as functions of $\phi$, and evaluate them at $\phi =
\phi_N$. For the remainder of the paper, all parameters are assumed to be
evaluated at $\phi = \phi_N$, where $N$ varies from $46$ to $60$.
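As a concrete illustration of this recipe, the following sketch (ours; we take $V = m^2 \phi^2$ as the example potential and set $m_{\rm Pl}=1$, noting that the overall scale of $V$ drops out of $\epsilon$, $\eta$, $n$ and $r$) carries out steps (1)--(4) numerically:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mpl = 1.0
V, dV, d2V = (lambda p: p**2), (lambda p: 2*p), (lambda p: 2.0)

# Step 1: slow-roll parameters in terms of the potential.
eps = lambda p: (mpl**2/(16*np.pi)) * (dV(p)/V(p))**2
eta = lambda p: (mpl**2/(8*np.pi)) * (d2V(p)/V(p) - 0.5*(dV(p)/V(p))**2)

# Step 2: inflation ends at eps = 1; N(phi) from Eq. (eqdefofN).
phi_e = brentq(lambda p: eps(p) - 1.0, 1e-3, 10.0)
N_of = lambda p: (2*np.sqrt(np.pi)/mpl)*quad(lambda x: eps(x)**-0.5,
                                             phi_e, p)[0]

for N in (46, 60):
    # Step 3: invert N(phi) to find phi_N.
    phi_N = brentq(lambda p: N_of(p) - N, phi_e, 100.0)
    # Step 4: lowest-order observables at phi = phi_N.
    e, h = eps(phi_N), eta(phi_N)
    print(f"N={N}: n = {1 - 4*e + 2*h:.4f}, r = {16*e:.4f}")
\end{verbatim}
For this potential the output reproduces the familiar large-field predictions $n \simeq 1 - 2/N$ and $r \simeq 8/N$ derived in Sec.\ \ref{seczoology}.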
With the normalization fixed, the relevant parameter space for distinguishing
between inflation models to lowest order in slow roll is then the $r$---$n$
plane. (To next order in slow-roll parameters, one must introduce the running
of $n$.) Different classes of models are distinguished by the value of the
second derivative of the potential, or, equivalently, by the relationship
between the values of the slow-roll parameters $\epsilon$ and $\eta$. Each
class of models has a different relationship between $r$ and $n$. For a more
detailed discussion of these relations, the reader is referred to Refs.\
\cite{dodelson97,kinney98a}.
Even restricting ourselves to a simple single-field inflation scenario, the
number of models available to choose from is large \cite{lrreview}. It is
convenient to define a general classification scheme, or ``zoology'' for models
of inflation. We divide models into three general types: {\it large-field},
{\it small-field}, and {\it hybrid}, with a fourth classification, {\it linear}
models, serving as a boundary between large- and small-field models.
First order in $\epsilon$ and $\eta$ is sufficiently accurate for the purposes
of this Section, and for the remainder of this Section we will only work to
first order. The generalization to higher order in slow roll will be discussed
in the following.
\subsection{Large-field models: $-\epsilon < \eta \leq \epsilon$}
Large-field models have inflaton potentials typical of ``chaotic'' inflation
scenarios \cite{linde83}, in which the scalar field is displaced from the
minimum of the potential by an amount usually of order the Planck mass. Such
models are characterized by $V''\left(\phi\right) > 0$, and $-\epsilon < \eta
\leq \epsilon$. The generic large-field potentials we consider are polynomial
potentials $V\left(\phi\right) = \Lambda^4 \left({\phi / \mu}\right)^p$,
and exponential potentials, $V\left(\phi\right) = \Lambda^4 \exp\left({\phi /
\mu}\right)$.
For the case of an exponential potential, $V\left(\phi\right)
\propto \exp\left({\phi / \mu}\right)$, the tensor/scalar ratio $r$ is simply
related to the spectral index as
\begin{equation}
r = 8 \left(1 - n\right),
\end{equation}
but the slow roll parameters have no dependence on the number of e-folds $N$.
For inflation with a polynomial potential, $V\left(\phi\right) \propto \phi^p$,
we have
\begin{eqnarray}
n-1&=&-\frac{2+p}{2N}\, ,\nonumber\\
r&=&\frac{8p}{2N}=8 \left({p \over p + 2}\right) \left(1 - n\right)\, ,
\end{eqnarray}
so that tensor modes are large for significantly tilted spectra.
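As a concrete check of these expressions, for $p=2$ and $N\in[46,60]$ one finds
\[
n - 1 = -\frac{2}{N} \in [-0.043,\,-0.033], \qquad r = \frac{8}{N} \in [0.13,\,0.17],
\]
while for $p=4$, $n = 1 - 3/N \in [0.935,\,0.950]$ and $r = 16/N \in [0.27,\,0.35]$; these correspond to the line segments plotted in Fig.\ \ref{fig:zoonorun}.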
\subsection{Small-field models: $\eta < -\epsilon$}
Small-field models are the type of potentials that arise naturally from
spontaneous symmetry breaking (such as the original models of ``new'' inflation
\cite{linde82,albrecht82}) and from pseudo Nambu-Goldstone modes (natural
inflation \cite{freese90}). The field starts from near an unstable equilibrium
(taken to be at the origin) and rolls down the potential to a stable minimum.
Small-field models are typically characterized by $V''\left(\phi\right) < 0$
and $\eta < -\epsilon$. Typically $\epsilon$ (and hence the tensor amplitude)
is close to zero in small-field models. The generic small-field potentials we
consider are of the form $V\left(\phi\right) = \Lambda^4 \left[1 - \left({\phi
/ \mu}\right)^p\right]$, which can be viewed as a lowest-order Taylor expansion
of an arbitrary potential about the origin. The cases $p = 2$ and $p > 2$ have
very different behavior. For $p = 2$, $n-1\simeq -(1/2\pi)(m_{\rm Pl}/\mu)^2$
and there is no dependence upon the number of {\it e}-foldings. On the other
hand
\begin{equation}
r = 8 (1 - n) \exp\left[- 1 - N\left(1 - n\right)\right].
\end{equation}
For $p > 2$, the scalar spectral index is
\begin{equation}
n \simeq 1 - {2 \over N} \left({p - 1 \over p - 2}\right),
\end{equation}
{\it independent} of $(m_{\rm Pl}/\mu)$. Assuming $\mu < m_{\rm Pl}$ results in
an upper bound on $r$ of
\begin{equation}
r < 8 {p \over N \left(p - 2\right)} \left[{8 \pi \over N p \left(p -
2\right)}\right]^{p / \left(p - 2\right)}.
\end{equation}
\subsection{Hybrid models: $0 < \epsilon < \eta$}
The hybrid scenario \cite{linde91,linde94,copeland94,lr97} frequently appears
in models which incorporate supersymmetry into inflation. In a typical
hybrid-inflation model, the scalar field responsible for inflation evolves
toward a minimum with nonzero vacuum energy. The end of inflation arises as a
result of instability in a second field. Such models are characterized by
$V''\left(\phi\right) > 0$ and $0 < \epsilon < \eta$. We consider generic
potentials for hybrid inflation of the form $V\left(\phi\right) = \Lambda^4
\left[1 + \left({\phi / \mu}\right)^p\right].$ The field value at the end of
inflation is determined by some other physics, so there is a second free
parameter characterizing the models. Because of this extra freedom, hybrid
models fill a broad region in the $r$---$n$ plane. For $\left({\phi_N /
\mu}\right)\gg 1$ (where $\phi_N$ is the value of the inflaton field when there
are $N$ {\it e}-foldings till the end of inflation) one recovers the same
results as the large-field models. On the contrary, when $\left({\phi_N /
\mu}\right)\ll 1$, the dynamics are analogous to small-field models, except
that the field is evolving toward, rather than away from, a dynamical fixed
point. While in principle ``hybrid'' models can populate a broad region of the
inflationary parameter space, the presence of a dynamical fixed point means
that there is a simple subclass of hybrid models that live in a narrow band of
parameter space along a line with $r \simeq 0$, $n > 1$, and $dn/d\ln{k} \simeq
0$. We will see below that while the WMAP3 data do not rule out the entire
region which we label here as ``hybrid,'' the simplest hybrid models evolving
near the dynamical fixed point are clearly disfavored by the data.
An example of a model which falls into the ``hybrid'' region of the $r$---$n$
plane away from the dynamical fixed point is a potential with a negative power
of the scalar field, $V\left(\phi\right) =V_0\left[ 1+\alpha\left(m_{\rm
Pl}/\phi\right)^p\right]$, used in intermediate inflation \cite{barrow93} and
dynamical supersymmetric inflation \cite{kinney97}. The power spectrum is blue:
the spectral index given by $n-1\simeq 2(p+1)/[(p+2)(N_{\rm tot}-N)]$, where
$N_{\rm tot}$ is the total number of {\it e}-foldings, and the parameter $r$ is
generally negligible. However, the model exhibits running of the spectral index
which would be potentially detectable by future experiments,
\begin{equation}
\label{eq:dsirunning}
{dn \over d\ln{k}} = -{1 \over 2} \left({p + 2 \over p + 1}\right)
\left(n - 1\right)^2.
\end{equation}
For example, for $p = 2$ and $n = 1.2$, the running is $dn / d\ln{k} =
-0.05$ \cite{kinney98}. When the running is sizable, the
tensor contribution is totally negligible,
\begin{equation}
r\ll P_{\cal R}^{1/2}(n-1)^{(3p+5)/(p+2)}.
\end{equation}
\subsection{Linear models: $\eta = - \epsilon$}
Linear models, $V\left(\phi\right) \propto \phi$, live on the boundary between
large-field and small-field models, with $V''\left(\phi\right) = 0$ and $\eta =
- \epsilon$. The spectral index and tensor/scalar ratio are related as
\begin{equation}
r = {8 \over 3} \left(1 - n\right).
\end{equation}
\subsection{Other models}
This enumeration of models is certainly not exhaustive. There are a number of
single-field models that do not fit well into this scheme, for example
logarithmic potentials $V\left(\phi\right) =V_0\left[1+(C g^2/8\pi)
\ln\left(\phi/\mu\right)\right]$ typical of supersymmetry
\cite{lrreview}, where $C$ counts the degrees of freedom coupled
to the inflaton field and $g$ is a coupling constant. For this kind of
potential, one gets $n-1\simeq -(1/N)(1+3C g^2/16 \pi^2)$ and $r\simeq (C
g^2/\pi^2)(1/N)$. This model requires an auxiliary field to end inflation
and is more properly categorized as a hybrid model, but falls into the
small-field region of the $r$---$n$ plane.
\subsection{Beyond first order}
The four classes of inflation models, categorized by the relationship between
the slow-roll parameters as $-\epsilon < \eta \leq \epsilon$ (large field),
$\eta \leq -\epsilon$ (small field, linear), and $0 < \epsilon < \eta$
(hybrid), cover the entire $r$---$n$ plane and are in that sense complete at
first order in the slow roll parameters. However, this feature is lost going
beyond first order: models can evolve from one region to another. This behavior
is manifest when changing the parameter $N$, and is particularly relevant for
those models for which the running of the observables with the scale is
sizable \cite{kinneyriotto05}. Therefore, it is important to realize that the
lowest-order correspondence between the slow-roll parameters and the class of
models does not always survive to higher order in slow roll. For instance, for
potentials of the form $ V\left(\phi\right) = \Lambda^4 f\left(\phi/
\mu\right)$, the parameter $\Lambda$ is generally fixed by CMB normalization,
leaving the mass scale $\mu$ and the number of {\it e}-folds $N$ as free
parameters. For some choices of potential, for example $V \propto \exp{(\phi /
\mu)}$ or $V \propto 1 - (\phi / \mu)^2$, the spectral index $n$ varies as a
function of $\mu$. These models therefore appear for fixed $N$ as lines on
$r$---$n$ plane. Changing $N$ results in a broadening of the lines. For other
choices of potential, for example $V \propto 1 - (\phi / \mu)^p$ with $p > 2$,
the spectral index is independent of $\mu$, and each choice of $p$ describes a
point on the zoo plot for fixed $N$. A change in $N$ turns each of these points
into lines, which smear together into a continuous region.
\section{Analysis and Results}
\label{secCMBanalysis}
The method we adopt is based on the publicly available Markov Chain Monte Carlo
(MCMC) package \texttt{cosmomc} \cite{Lewis:2002ah}. We sample the following
eight-dimensional set of cosmological parameters, adopting flat priors on them:
the physical baryon and CDM densities, $\omega_b=\Omega_bh^2$ and
$\omega_c=\Omega_ch^2$, the ratio of the sound horizon to the angular diameter
distance at decoupling, $\theta_s$, the scalar spectral index, its running and
the overall normalization of the spectrum, $n$, $dn/d{\rm ln}\,k$ and $A$ at
$k=0.002$ Mpc$^{-1}$, the tensor contribution $r$, and, finally, the optical
depth to reionization, $\tau$. Furthermore, we consider purely adiabatic
initial conditions, we impose flatness, and we use the inflation consistency
relation to fix the value of the tensor spectral index $n_T$. We also restrict
our analysis to the case of three massless neutrino families; introducing a
neutrino mass in agreement with current neutrino oscillation data does not
change our results in a significant way.
We include the three-year data \cite{wmap3cosm} (temperature and polarization)
with the routine for computing the likelihood supplied by the WMAP team and
available at the \texttt{LAMBDA} web
site.\footnote{http://lambda.gsfc.nasa.gov/} We marginalize over the amplitude
of the Sunyaev-Zel'dovich signal, but the effect is small: including/excluding
the correction changes our conclusions on the best fit value of any single
parameter by less than 2\%, and always well within the 68\% C.L.\ contours. We
treat beam errors with the highest possible accuracy (see Ref.\
\cite{wmap3temp}, Appendix A.2), using the full off-diagonal temperature
covariance matrix, a Gaussian plus lognormal likelihood, and fixed fiducial
$C_{\ell}$'s. The MCMC convergence diagnostic is done on $8$ chains through the
Gelman and Rubin ``variance of chain mean''$/$``mean of chain variances'' $R$
statistic for each parameter. Our 1-D and 2-D constraints are obtained after
marginalization over the remaining ``nuisance'' parameters, again using the
programs included in the \texttt{cosmomc} package. In addition to the CMB data,
we also consider the constraints on the real-space power spectrum of galaxies
from the Sloan Digital Sky Survey (SDSS) \cite{thx}. We restrict the analysis
to a range of scales over which the fluctuations are assumed to be in the
linear regime ($k < 0.2\ h\,{\rm Mpc}^{-1}$). When combining the matter power
spectrum with CMB data, we marginalize over a bias $b$ considered as an
additional nuisance parameter. Furthermore, we make use of the HST measurement
of the Hubble parameter $H_0 = 100h \text{ km s}^{-1} \text{Mpc}^{-1}$
\cite{hst} by multiplying the likelihood by a Gaussian likelihood function
centered around $h=0.72$ and with a standard deviation $\sigma = 0.08$.
Finally, we include a top-hat prior on the age of the universe: $10 < t_0 < 20$
Gyrs.
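In practice, multiplying by this Gaussian amounts to adding a $\chi^2$ penalty to the total likelihood; schematically (our notation),
\[
-\ln {\cal L}_{\rm tot} = -\ln {\cal L}_{\rm CMB(+SDSS)} + \frac{\left(h - 0.72\right)^2}{2\,(0.08)^2},
\]
with samples rejected outside the top-hat age range.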
\subsection{Results}
As is now common practice, we plot the likelihood contours obtained from our
analysis on three different planes, $r$ vs.\ $n$, $dn/d\ln{k}$ vs.\ $n$, and
$r$ vs.\ $dn/d\ln{k}$: we do so in Figs.\ \ref{fig_rn}--\ref{fig_rdn}.
Presenting our results on these planes is useful for understanding the effects
of theoretical assumptions and/or external priors.
We consider two different choices of datasets: the WMAP3 dataset alone, and
WMAP3 plus the additional information of the SDSS. By analyzing these different
datasets we can check the consistency of the SDSS large-scale structure data
with WMAP3, something that is not completely trivial since the WMAP3 data seems
to prefer models with a lower value for the $\sigma_8$ parameter than the one
inferred from the SDSS data (see Refs.\ \cite{wmap3temp} and \cite{thx}).
In Fig.\ \ref{fig_rn}, we show the 68\% and 95\% likelihood contours on the $r$
vs.\ $n$ plane in the case of WMAP3 only (left panel) and WMAP3+SDSS (right
panel). We also consider a prior on the running: the results in the top panels
are obtained allowing the possibility of $dn/d\ln k \neq 0$, while those in the
bottom panels assume no running. The WMAP3-only case including running exhibited
relatively poor convergence due to a degeneracy in the four-dimensional
parameter space of $r$, $n$, $dn/d\ln{k}$, and normalization. Adding the SDSS
data set removed the degeneracy and substantially improved the convergence of
the MCMC code.
Let us first investigate the case of {\it no} running. Marginalizing over all
the remaining nuisance parameters we constrain $n$ and $r$ to $0.94 < n < 1.04$
and $r<0.60$ at $95$\% C.L. Models with $n<0.9$ are therefore ruled out at
high significance, as are models with $n > 1.05$. The data clearly set
interesting constraints on tensor modes. Models with $n<1$ must have $r<0.4$ at
$95$\% C.L. Models with $n<0.9$ must have a negligible tensor component.
Including the SDSS data further reduces the limit on the amplitude of the
gravitational wave component with a relatively smaller effect on the spectral
index parameter. For WMAP3+SDSS we constrain $n$ and $r$ to $0.93<n<1.01$ and
$r<0.31$.
\begin{figure}
\includegraphics[width=3.25in]{rn.eps}
\caption{\label{fig_rn} Constraints on the
$n$---$r$ plane for different choices of experimental datasets.
The analyses in the top panels include a running spectral index,
while the analyses in the bottom panels are without running. The shaded
regions indicate 68\% and 95\% C.L.}
\end{figure}
If we allow running, the main effect is to open the contours
toward higher values of $n$ and $r$. With running, marginalizing
over all the remaining nuisance parameters, we constrain $n$ and $r$
to $1.02 < n < 1.38$ and $r < 1.09$ at $95$\% C.L.\ for WMAP3 alone
and $0.97 < n < 1.21$ and $r<0.38$ in the case of WMAP3 plus SDSS.
\begin{table}
\caption{One-dimensional confidence limits on inflationary parameters,
marginalized over all other parameters, for WMAP3 alone and WMAP3 + SDSS.}
\begin{ruledtabular}
\begin{tabular}{l|c|r}
no running/ & limits on $n$, $r$ & data\\
running & 95\% C.L. & set \\
\hline
& & \\
& $0.94<n<1.04$ & WMAP3 ONLY \\
& $r<0.60$ & WMAP3 ONLY \\
no running & & \\
& $0.93 < n < 1.01$ & WMAP3 + SDSS \\
& $ r<0.31$ & WMAP3 + SDSS \\
& & \\
\hline
& & \\
& $1.02 < n <1.38$ & WMAP3 ONLY \\
& $ r < 1.09 $ & WMAP3 ONLY \\
& $ -0.17 < dn/d\ln k < -0.02$ & WMAP3 ONLY\\
running & & \\
& $0.97 < n < 1.21 $ & WMAP3 + SDSS \\
& $r < 0.38 $ & WMAP3 + SDSS \\
& $-0.13 < dn/d\ln k < 0.007 $ & WMAP3 + SDSS\\
& & \\
\end{tabular}
\end{ruledtabular}
\end{table}
Models with $n=1$ are therefore in very good agreement with CMB data in the
presence of a tensor component or running different from zero. Of particular
interest is the Harrison--Zel'dovich (HZ) model: $n=1$, $r=0$, $dn/d\ln k =0$.
As we see from the bottom panel of Fig.\ \ref{fig_rn}, pure HZ spectra are not
ruled out at more than 95\% C.L. from CMB data alone. In particular, we found
that, considering the whole set of models in our $8$-D chain, the HZ best-fit
model is at $\Delta\chi^2/2=2.04$, $2.77$, and $3.96$ with respect to the
best fit in the case of no running and no tensors, including tensors but no
running, and including tensors and running, respectively. When we include the SDSS data we
found that the HZ best fit model is at $\Delta\chi^2/2=3.07$ with respect to
the best fit in the case of no running and no tensor, $\Delta\chi^2/2=3.4$ with
respect to the best fit with no running and $\Delta\chi^2/2=5.1$ with respect
to the overall best fit. Since $\Delta\chi^2/2=6.4$ at $95.4$\% confidence
level for $6$ degrees of freedom, those numbers clearly indicate that even when
the SDSS data is included, the HZ spectrum is in reasonable agreement with the
data.
The fact that the scale-invariant value $n=1$ is consistent with the data at
the 95\% C.L.\ when no running is imposed considerably weakens the bounds
on inflationary models found in Ref.\ \cite{alabidi}, where the original WMAP3
error bars were adopted and it was concluded that $n=1$ was ruled out at more
than 99\% C.L.
\begin{figure}
\includegraphics[width=3.25in]{dnn.eps}
\caption{\label{fig_dnn} Constraints on the
$n$---$dn/d\ln k$ plane for different choices of experimental datasets.
The shaded regions indicate 68\% and 95\% C.L.}
\includegraphics[width=3.25in]{rdn.eps}
\caption{\label{fig_rdn} Constraints on the
$dn/d\ln k$---$r$ plane for different choices of experimental datasets.
The shaded regions indicate 68\% and 95\% C.L.}
\end{figure}
In Fig.\ \ref{fig_dnn} and Fig.\ \ref{fig_rdn} a degeneracy is evident: an
increase in the spectral index $n$ is equivalent to a negative scale dependence
($dn/d\ln{k} < 0$). We emphasize, however, that this behavior depends strictly
on the position of the pivot scale $k_0$: choosing $k_0=0.05h$ Mpc$^{-1}$ would
change the direction of the degeneracy. Models with $n \sim 1.1$ need a
negative running at more than about the 95\% C.L.\ (about $4 \sigma$ in
the case of WMAP3+SDSS). It is also interesting to note that models with a red
spectral index, $n < 1.0$, are in better agreement with the data with a zero or
positive running (see Fig.\ \ref{fig_dnn}), while models with a sizable
gravity wave background need a negative running (see Fig.\ \ref{fig_rdn}). For
the WMAP3 alone case the running is bounded by $-0.02 \gtrsim dn/d\ln{k} \gtrsim
-0.17$ at 95\% C.L.\ ($0.007 \gtrsim dn/d\ln{k} \gtrsim -0.13$ for WMAP3+SDSS).
We found that the best fit from WMAP3 alone with $dn/d\ln k=0$ is at $\Delta
\chi^2/2=1.2$ ($\Delta \chi^2/2=0.2$ when including SDSS) with respect to the
overall best fit. The current data, therefore, do not suggest the presence
of running at more than 95\% C.L.
Finally, we compare our results with those presented in Spergel {\it et al.}\
\cite{wmap3cosm}. While there is qualitatively good agreement, a tension
appears when we compare our contour plots in Fig.\ \ref{fig:zoonorun} (the no
running case) with those presented in Fig.\ $14$ of Ref.\ \cite{wmap3cosm}.
Models with a pure HZ spectrum appear to be ruled out by WMAP3 alone at
about the 99\% C.L.\ in Ref.\ \cite{wmap3cosm}, while our analysis indicates
broader contours, with the Harrison-Zel'dovich spectrum inside the 95\% C.L.
region. Similarly, the contours in Ref.\ \cite{wmap3cosm} appear to rule out
$V(\phi) = \lambda\phi^4$, while our analysis indicates that this potential is
still marginally consistent with the WMAP3 data alone at 95\% confidence. In
order to better understand this discrepancy, we compared our results directly
with the chains made public by the WMAP team and available at the
\texttt{LAMBDA} web site.\footnote{\texttt{http://lambda.gsfc.nasa.gov}.} We
found that the error contours derived from the publicly available chains are
considerably larger than those shown in Fig.\ 14 of Ref.\ \cite{wmap3cosm}:
error contours from our analysis of the WMAP chains are plotted as dashed lines
in Fig.\ \ref{fig:zoonorun}.\footnote{The difference between the WMAP-team
contours and our contours as plotted in Fig.\ \ref{fig:zoonorun} can be
accounted for by the fact that, unlike the WMAP-team analysis, we include
priors on $H_0$ from the HST Key Project data and a top-hat age prior. We have
independently reproduced the dashed-line contours shown in Fig.\
\ref{fig:zoonorun} with our own analysis.} None of the contours are as tight
as those shown in Spergel {\it et al.}, and the discrepancy is significant
enough to influence important conclusions about the model space, in particular,
the consistency of a Harrison-Zel'dovich spectrum with the data. There appears
to be a clear inconsistency between our results and contours shown in Spergel
{\it et al.}, Figs.\ $12$ and $14$.
\begin{figure} \includegraphics[width=3.25in]{zoonorun.eps}
\caption{\label{fig:zoonorun} The $n$---$r$ parameter space for WMAP3 alone (open
contours) and WMAP3 + SDSS (filled contours), with a prior of $dn/d\ln{k} = 0$.
The line segments show the predictions for $V(\phi) = m^2 \phi^2$ and $V(\phi)
= \lambda \phi^4$ for $N = [46,60]$. The dashed lines show the 68\% C.L.\ and
95\% C.L.\ contours from the chains made public by the WMAP team, which do not
include an HST prior on $H_0$ or an age prior. The scale of the plot is chosen
to allow direct comparison with Fig.\ $14$ of Spergel {\it et al.}\
\cite{wmap3cosm}. The shaded regions indicate 68\% and 95\% C.L.}
\end{figure}
\section{Monte Carlo reconstruction}
\label{secmontecarlorecon}
In this section we describe {\em Monte Carlo reconstruction}, a stochastic
method for ``inverting'' observational constraints to determine an ensemble of
inflationary potentials compatible with observation. The method is described in
more detail in Refs.\ \cite{kinney02,easther02}. In addition to encompassing a
broader set of models than we considered in the previous section, Monte Carlo
reconstruction allows us easily to incorporate constraints on the running of
the spectral index $d n / d \ln{k}$ as well as to include effects to higher
order in slow roll.
We have defined the slow-roll parameters $\epsilon$ and $\eta$ in terms of
the Hubble parameter $H\left(\phi\right)$ in a previous section.
Taking higher derivatives
of $H$ with respect to the field, we can define an infinite hierarchy of slow
roll parameters \cite{liddle94}:
\begin{eqnarray}
\sigma &\equiv& {m_{\rm Pl} \over \pi} \left[{1 \over 2} \left({H'' \over
H}\right) -
\left({H' \over H}\right)^2\right],\cr
{}^\ell\lambda_{\rm H} &\equiv& \left({m_{\rm Pl}^2 \over 4 \pi}\right)^\ell
{\left(H'\right)^{\ell-1} \over H^\ell} {d^{(\ell+1)} H \over d\phi^{(\ell +
1)}}.
\end{eqnarray}
Here we have chosen the parameter $\sigma \equiv 2 \eta - 4 \epsilon \simeq n
-1 $ to make comparison with observation convenient.
It is convenient to use $N$ as the measure of time during inflation. As above,
we take $t_e$ and $\phi_e$ to be the time and field value at the end of
inflation. Therefore, $N$ is defined as the number of {\it e}-folds before the end of
inflation, and increases as one goes {\em backward} in time ($d t > 0
\Rightarrow d N < 0$):
\begin{equation}
{d \over d N} = {d \over d\ln a} = { m_{\rm Pl} \over 2 \sqrt{\pi}}
\sqrt{\epsilon} {d \over d\phi},
\end{equation}
where we have chosen the sign convention that $\sqrt{\epsilon}$ has the same
sign as $H'\left(\phi\right)$:
\begin{equation}
\sqrt{\epsilon} \equiv + {m_{\rm Pl} \over 2 \sqrt{\pi}} {H' \over H}.
\end{equation}
Then $\epsilon$ itself can be expressed in terms of $H$ and $N$ simply as
\begin{equation}
\label{eqepsilonfromN}
{1 \over H} {d H \over d N} = \epsilon.
\end{equation}
Similarly, the evolution of the higher-order parameters during inflation is
determined by a set of ``flow'' equations \cite{hoffman00,schwarz01,kinney02},
\begin{eqnarray}
{d \epsilon \over d N} &=& \epsilon \left(\sigma + 2
\epsilon\right),\cr {d \sigma \over d N} &=& - 5 \epsilon \sigma - 12
\epsilon^2 + 2 \left({}^2\lambda_{\rm H}\right),\cr {d
\left({}^\ell\lambda_{\rm H}\right) \over d N} &=& \left[
\frac{\ell - 1}{2} \sigma + \left(\ell - 2\right) \epsilon\right]
\left({}^\ell\lambda_{\rm H}\right) + {}^{\ell+1}\lambda_{\rm
H}.\label{eqfullflowequations}
\end{eqnarray}
The derivative of a slow roll parameter at a given order is higher order in
slow roll.
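For concreteness, the right-hand side of the truncated hierarchy
(\ref{eqfullflowequations}) is straightforward to code. The Python sketch
below is illustrative only (the function name, the state-vector ordering
$[\epsilon,\sigma,{}^2\lambda_{\rm H},\ldots,{}^M\lambda_{\rm H}]$, and the
truncation handling are our choices):
\begin{verbatim}
import numpy as np

def flow_rhs(y, M):
    # y = [epsilon, sigma, lam_2, ..., lam_M]; lam_{M+1} is set to zero
    eps, sig = y[0], y[1]
    lam = list(y[2:]) + [0.0]                 # truncation: lam_{M+1} = 0
    dy = np.zeros_like(y)
    dy[0] = eps * (sig + 2.0 * eps)           # d(epsilon)/dN
    dy[1] = -5.0 * eps * sig - 12.0 * eps**2 + 2.0 * lam[0]  # d(sigma)/dN
    for l in range(2, M + 1):                 # d(lam_l)/dN, l = 2 ... M
        dy[l] = ((l - 1) / 2.0 * sig + (l - 2) * eps) * lam[l - 2] \
                + lam[l - 1]
    return dy
\end{verbatim}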
A boundary condition can be specified at any point in the
inflationary evolution by selecting a set of parameters
$\epsilon,\sigma,{}^2\lambda_{\rm H},\ldots$ for a given value of $N$. This is
sufficient to specify a ``path'' in the inflationary parameter space that
specifies the background evolution of the spacetime. Taken to infinite order,
this set of equations completely specifies the cosmological evolution, up to
the normalization of the Hubble parameter $H$. Furthermore, such a
specification is exact, with no assumption of slow roll necessary. In practice,
we must truncate the expansion at finite order by assuming that the
${}^\ell\lambda_{\rm H}$ are all zero above some fixed value of $\ell$. We
choose initial values for the parameters at random from the following ranges:
\begin{eqnarray}
N &=& [46,60]\cr
\epsilon &=& \left[0,0.8\right]\cr
\sigma &=& \left[-0.5,0.5\right]\cr
{}^2\lambda_{\rm H} &=& \left[-0.05,0.05\right]\cr
{}^3\lambda_{\rm H} &=& \left[-0.025,0.025\right],\cr
&\cdots&\cr
{}^{M+1}\lambda_{\rm H} &=& 0.\label{eqinitialconditions}
\end{eqnarray}
Here the expansion is truncated to order $M$ by setting ${}^{M+1}\lambda_{\rm
H} = 0$. In this case, we still generate an exact solution of the background
equations, albeit one chosen from a subset of the complete space of
models. This is equivalent to placing constraints on the form of the potential
$V\left(\phi\right)$, but the constraints can be made arbitrarily weak by
evaluating the expansion to higher order. For the purposes of this analysis, we
choose $M = 5$. The results are not sensitive to either the choice of order
$M$ (as long as it is large enough) or to the specific ranges from which the
initial parameters are chosen.
Solutions to the truncated flow equations are those for which all of the
derivatives of the Hubble parameter above order $M + 1$ vanish:
\begin{equation}
{d^\ell H \over d \phi^\ell} = 0,\ \ell \geq M + 2,
\end{equation}
with a simple polynomial solution \cite{Liddle:2003py},
\begin{equation}
H\left(\phi\right) = H_0 \left(1 + A_1 \phi + \cdots + A_{M + 1} \phi^{M +
1}\right).
\end{equation}
The Hamilton-Jacobi Equation (\ref{eqhubblehamiltonjacobi}) can be applied to
this solution to derive an analytic form for the potential in terms of the
parameters $A_1,\ldots,A_{M + 1}$. The set of boundary conditions in Eq.\
(\ref{eqinitialconditions}) then consists of a weak slow-roll prior on the
polynomial fit for $H(\phi)$: the inflaton must be slowly rolling at least at
one point in its evolution. Thus, while the flow equations in and of themselves
simply define an expansion in $H(\phi)$, the choice of boundary condition and
the requirement that inflation last at least 46 e-folds comprise a well-defined
physical prior on the inflationary model space.
Some interesting recent papers have explored alternative methods for
constraining the ``model space'' of inflation. In particular, Ref.\
\cite{Peiris:2006ug} incorporates the lowest-order flow parameters directly
into the Markov chain Monte Carlo fit, although it does not include effects to
higher order in slow roll. Refs.\ \cite{Parkinson:2006ku,Pahud:2006kv} apply a
Bayesian model selection approach to the problem, but also do not consider
higher-order effects which in principle contribute to a running spectral index.
Our analysis extends these results by including running of the spectral index
as well as effects to higher order in slow roll.
Once we obtain a solution to the flow equations
$[\epsilon(N),\sigma(N),{}^\ell\lambda_{\rm H}(N)]$, we can calculate the
predicted values of the tensor/scalar ratio $r$, the spectral index $n$, and
the ``running'' of the spectral index $d n / d\ln k$. To lowest order, the
relationship between the slow roll parameters and the observables is especially
simple: $r = 16 \epsilon$, $n - 1 = \sigma$, and $d n / d \ln k = 0$. To
second order in slow roll, the observables are given by
\cite{liddle94,stewart93},
\begin{equation}
r = 16 \epsilon \left[1 - C \left(\sigma + 2 \epsilon\right)\right]
\label{eqrsecondorder}
\end{equation}
for the tensor/scalar ratio, and
\begin{equation}
n - 1 = \sigma - \left(5 - 3 C\right) \epsilon^2 - {1 \over 4} \left(3
- 5 C\right) \sigma \epsilon + {1 \over 2}\left(3 - C\right)
\left({}^2\lambda_{\rm H}\right)
\label{eqnsecondorder}
\end{equation}
for the spectral index. The constant $C \equiv 4 (\ln{2} +
\gamma) - 5 = 0.0814514$, where $\gamma \simeq 0.577$ is Euler's
constant. Derivatives
with respect to wavenumber $k$ can be expressed in terms of derivatives with
respect to $N$ as \cite{liddle95}
\begin{equation}
{d \over d N} = - \left(1 - \epsilon\right) {d \over d \ln k} .
\end{equation}
The scale dependence of $n$ is then given by the simple expression
\begin{equation}
{d n \over d \ln k} = - \left({1 \over 1 - \epsilon}\right) {d n \over d N},
\end{equation}
which can be evaluated by using Eq.~(\ref{eqnsecondorder}) and the flow
equations. For example, for the case of $V \propto \phi^4$,
the observables to lowest order are
\begin{eqnarray}
\label{eqphi4obs}
r &\simeq& {16 \over N + 1},\cr
n - 1 &\simeq& - {3 \over N + 1},\cr
{dn \over d\ln k} &\simeq& - {3 \over N \left(N + 1\right)}.
\end{eqnarray}
The final result following the evaluation of a particular path in the
$M$-dimensional ``slow-roll space'' is a point in ``observable parameter
space,'' {\em i.e.,} $(r,n,dn/d\ln k)$, corresponding to the
observational prediction
for that particular model.
The reconstruction method works as follows (a code sketch of the loop is given after the list):
\begin{enumerate}
\item Specify a ``window'' of parameter space: {\em e.g.,} central values for
$n-1$, $r$, or $d n /d \ln{k}$ and their associated error bars.
\item Select a random point in slow roll space,
$[\epsilon,\sigma,{}^\ell\lambda_{\rm H}]$, truncated at order $M$ in
the slow roll expansion.
\item Evolve forward in time ($d N < 0$) until either (a) inflation ends
($\epsilon > 1$), or (b) the evolution reaches a late-time fixed
point ($\epsilon = {}^\ell\lambda_{\rm H} = 0,\ \sigma = {\rm
const.}$).
\item If the evolution reaches a late-time fixed point, calculate the
observables $r$, $n - 1$, and $d n / d \ln k$ at this point.
\item If inflation ends, evaluate the flow equations backward $N$ e-folds from
the end of inflation. Calculate the observable parameters at that
point.
\item If the observable parameters lie within the specified window of
parameter space, compute the potential and add this model to the ensemble
of ``reconstructed'' potentials.
\item Repeat steps 2 through 6 until the desired number of models
have been found.
\end{enumerate}
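A minimal sketch of this loop, reusing the right-hand side coded above, might
read as follows. The Euler integrator, the step size, and the toy
observational window are illustrative choices of ours, not the settings used
for the results reported below:
\begin{verbatim}
import numpy as np   # assumes flow_rhs(y, M) from the previous sketch

def evolve(y, M, dN=-0.005, Nmax=1000.0):
    # integrate forward in time (dN < 0) until inflation ends or a
    # late-time fixed point is (apparently) reached
    for _ in range(int(Nmax / abs(dN))):
        y = y + dN * flow_rhs(y, M)
        if y[0] > 1.0:                           # epsilon = 1: end
            return 'end', y
        if y[0] < 1e-12 and np.all(np.abs(y[2:]) < 1e-12):
            return 'fixed_point', y              # r = 0 asymptote
    return 'fixed_point', y

rng, M, models = np.random.default_rng(0), 5, []
while len(models) < 10:                          # step 7
    y = np.array([rng.uniform(0.0, 0.8),         # epsilon
                  rng.uniform(-0.5, 0.5),        # sigma
                  rng.uniform(-0.05, 0.05),      # 2-lambda_H
                  rng.uniform(-0.025, 0.025),
                  rng.uniform(-0.0125, 0.0125),
                  rng.uniform(-0.00625, 0.00625)])
    status, y = evolve(y, M)
    if status == 'end':                          # step 5: back N e-folds
        for _ in range(int(rng.uniform(46.0, 60.0) / 0.005)):
            y = y + 0.005 * flow_rhs(y, M)       # dN > 0: backward in time
    r, nm1 = 16.0 * y[0], y[1]                   # lowest-order observables
    if abs(nm1) < 0.05 and r < 0.5:              # step 6: toy window
        models.append((r, 1.0 + nm1))
\end{verbatim}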
We performed the Monte Carlo reconstruction using the more restrictive of the
data sets considered, combining the WMAP3 data with the Sloan Digital Sky
Survey. We ran the reconstruction code long enough (10,703,502
iterations) to collect 10,000 models consistent with the WMAP3 + SDSS error
bars: 4115 are within the 68\% C.L.\ contours, and 5885 are within the
95\% C.L.\ contours.
To illustrate the degree of overlap between the various classes of model, the
predictions for different models are shown in the top panel of Fig.\
\ref{fig:MCR_log}, including the effect of the uncertainty in the number of
e-folds $N$. The different classes of potential do not have significant
overlap, and it is therefore possible to distinguish one from another
observationally.
Figure \ref{fig:MCR_log} also shows the points generated by Monte Carlo
Reconstruction in the $n,r$ parameter space. Since there is no measure on the
space of initial conditions, the distribution of points generated by the flow
equations cannot be interpreted in a rigorously statistical fashion: the error
bars are those generated from the data using COSMOMC, and the points plotted
are points generated by the flow equations consistent with those errors,
including running of the spectral index as a parameter. The clustering of the
models in the parameter space, however, {\em is} significant: selecting models
based on even an extremely weak assumption of slow-roll results in a strong
clustering of the models in the region favoring a red spectrum and $dn/d\ln{k}
= 0$.
In this sense, the preference for running and a blue spectrum present in the
data itself contains very little information relevant to constraining slow-roll
inflation models: it can be interpreted simply as an artifact of an ``accidental''
parameter degeneracy in the data. Allowing running as a parameter but assuming
slow roll inflation gives constraints on the inflationary model space largely
consistent with an analysis which assumes negligible running as a prior on the
parameter space from the beginning. In other words: {\em there is no evidence
for inflation with a measurable running of the spectral index.}
\begin{figure}
\includegraphics[width=3.25in]{zoomodels.eps}
\caption{\label{fig:MCR_log} Inflationary models plotted against the
68\% and 95\% WMAP3 +
SDSS error contours. Top panel: the predictions of various specific
inflationary potentials (solid bands) plotted against the error bars from WMAP3
+ SDSS with a prior of $dn / d\ln{k} = 0$. Bottom panel: 10,000 models
generated by flow Monte Carlo consistent with the WMAP3 + SDSS data sets
including running as a parameter, indicated by the larger error contours. The
contours with a $dn / d\ln{k} = 0$ prior are plotted as a reference, and were
not used in the Monte Carlo reconstruction. (Some data points fall outside the
error contours plotted because likelihoods for the models were calculated using
the full three-dimensional likelihood function ${\cal L}(n,r,dn/d\ln{k})$, and
the contours were obtained by marginalizing over $dn/d\ln{k}$).}
\end{figure}
{}From the flow equations (\ref{eqfullflowequations}) it is evident that the
line along the $r= 0$ axis, with $\epsilon = {}^\ell\lambda_{\rm H} = 0$ is a
fixed point of the flow evolution, including taking the flow equations to
infinite order.\footnote{See Refs.\ \cite{kinney02,Chongchitnan:2005pf} for a
detailed discussion of the fixed-point structure of the slow roll space.} For
parameters on the ``red'' side of scale invariance, {\it i.e.} $\sigma < 0$,
this fixed point is {\em unstable}: flow moves away from the fixed point as $N
\rightarrow 0$, and toward the fixed point as $N \rightarrow \infty$.
Conversely, the fixed point for $\sigma > 0$ is {\em stable}: models evolve
toward this fixed point at late times, $N \rightarrow 0$. Integrating the flow
equations forward in time yields one of two possible outcomes. One possibility
is that the condition $\epsilon = 1$ may be satisfied for some finite value of
$N$, which defines the end of inflation. We identify this point as $N=0$ so
that the primordial fluctuations are actually generated when $N = [46,60]$.
Alternatively, the solution can evolve toward an inflationary fixed point with
$r = 0$ and $n > 1$, in which case inflation never stops. In reality, inflation
must stop at some point, presumably via some sort of instability, such as the
``hybrid'' inflation mechanism \cite{linde91,linde94,copeland94,lr97}. Examples
of potentials which fall into this class of models are the simplest hybrid
potentials,
\begin{equation}
V\left(\phi\right) = \Lambda^4 \left[1 + \left({\phi \over
\mu}\right)^p\right].
\end{equation}
Here we take the observables for such models to be the values at the late-time
attractor. Since models on the attractor are by definition those for which the
variation in the slow roll parameters with $N$ vanishes, such models also
predict zero running of the scalar spectral index. We find that {\em the WMAP3
data strongly disfavor models which evolve to a late-time asymptote with $r =
0$, $n > 1$, and $dn / d \ln{k} = 0$.} The $95$\% confidence limit for a blue
spectrum with no tensors and no running ({\it i.e.}, not marginalized over $r$)
from WMAP3 alone is $n < 1.0007$, and from WMAP3 + SDSS is $n < 1.001$. Of
more than ten million models tested, only one model consistent with the data
relaxed to the late-time asymptote, with a spectral index $n = 1.0004$ and $r
= 0.0000002$; for all intents and purposes a Harrison-Zel'dovich spectrum.
Every other model in the Monte Carlo reconstruction set was of the
``nontrivial'' type, with inflation ending naturally by evolving through
$\epsilon = 1$ at late times. We note that the level of running required to
accommodate a blue spectrum is severe: even dynamical supersymmetric
inflation, which predicts a blue spectrum and negative running [Eq.\
(\ref{eq:dsirunning})], does not produce a strong enough running to match the
data, and is also ruled out to more than 95\% C.L.\ by WMAP3 + SDSS for $n >
1.001$.
\begin{figure}
\includegraphics[width=3.25in]{potentials.eps}
\caption{\label{fig:V_recon} Potentials generated by Monte Carlo Reconstruction
consistent with WMAP3 + SDSS to 68\% C.L.\ in the light shaded (yellow) region
and 95\% C.L.\ in the darker shaded (cyan) region. The WMAP3 data place an
upper limit of about $2 \times 10^{16}\ {\rm GeV}$ on the energy scale of
inflation. No lower limit is possible without a detection of a tensor mode
signal. The concave-up line at the bottom of the figure is the single model in
$10^{7}$ models generated which converged to an inflationary fixed point at
late time.}
\end{figure}
We can also place constraints on the energy scales relevant to inflation, in
particular the ``height'' of the potential $V \sim \Lambda^4$, and the
``width'' of the potential, typically quantified as the field variation $\Delta
\phi$ during inflation. Given a path in the slow roll parameter space, the
form of the potential is fixed, up to normalization
\cite{hodges90,copeland93,beato00,easther02}. The starting point is the
Hamilton-Jacobi equation,
\begin{equation}
V(\phi) = \left({3 m_{\rm Pl}^2 \over 8 \pi}\right)
H^2(\phi) \left[1 - {1\over 3}
\epsilon(\phi)\right].\label{eqHJpotential}
\end{equation}
We have $\epsilon(N)$ trivially from the flow equations. In order to calculate
the potential, we need to determine $H(N)$ and $\phi(N)$. With $\epsilon$
known, $H(N)$ can be determined by inverting the definition of $\epsilon$, Eq.\
(\ref{eqepsilonfromN}). Similarly, $\phi(N)$ follows from the first
Hamilton-Jacobi equation, Eq.\ (\ref{eqbasichjequations}):
\begin{equation}
{d \phi \over d N} = {m_{\rm Pl} \over 2 \sqrt{\pi}}
\sqrt{\epsilon}.
\end{equation}
Using these equations and Eq.~(\ref{eqHJpotential}), the form of the potential
can then be fully reconstructed from the numerical solution for $\epsilon(N)$.
The only necessary observational input is the normalization of the Hubble
parameter $H$, which enters the above equations as an integration constant.
Here we use the simple condition that the density fluctuation amplitude (as
determined by a first-order slow roll expression) be of order $10^{-5}$,
\begin{equation}
{\delta \rho \over \rho} \simeq {H \over m_{\rm Pl}}
\frac{1}{\sqrt{\pi \epsilon}} = 10^{-5}.
\end{equation}
A more sophisticated treatment would perform a full normalization to the
CMB data \cite{bunn94,stompor95}. The value of the field, $\phi$, also
contains an arbitrary, additive constant. Fig.\ \ref{fig:V_recon} shows the
reconstructed potentials consistent with the WMAP3 + SDSS data set.
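Schematically, this reconstruction requires only two quadratures and one
algebraic step. The sketch below (our notation, in units $m_{\rm Pl} = 1$,
with the normalization entering through the constant \texttt{H\_end})
integrates Eq.\ (\ref{eqepsilonfromN}) for $H(N)$ and the relation above for
$\phi(N)$, and then applies Eq.\ (\ref{eqHJpotential}):
\begin{verbatim}
import numpy as np

def reconstruct_potential(N, eps, H_end=1e-5):
    # H(N) from (1/H) dH/dN = eps:  H = H_end * exp(int_0^N eps dN')
    I = np.concatenate(([0.0],
        np.cumsum(0.5 * (eps[1:] + eps[:-1]) * np.diff(N))))
    H = H_end * np.exp(I)
    # phi(N) from d(phi)/dN = sqrt(eps) / (2 sqrt(pi))   (m_Pl = 1)
    dphi = np.sqrt(eps) / (2.0 * np.sqrt(np.pi))
    phi = np.concatenate(([0.0],
        np.cumsum(0.5 * (dphi[1:] + dphi[:-1]) * np.diff(N))))
    # Hamilton-Jacobi: V = (3 H^2 / 8 pi) * (1 - eps / 3)
    V = 3.0 * H**2 / (8.0 * np.pi) * (1.0 - eps / 3.0)
    return phi, V
\end{verbatim}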
We see that the energy scale for inflation favored by the flow Monte Carlo
gives $V_0$ between $5 \times 10^{14}\ {\rm GeV}$ and $2 \times 10^{16}\ {\rm
GeV}$, although without a detection of a nonzero tensor/scalar ratio, it is not
possible to put a purely observational lower limit on the height of the
inflationary potential. Inflationary potentials with very low energy scales (in
particular those with $\Delta \phi \ll m_{\rm Pl}$ during inflation) require
the imposition of a symmetry to suppress the mass term for the inflaton
\cite{Knox:1992iy,Kinney:1995cc,Easther:2006qu}. Such potentials (corresponding
to the region labeled $V(\phi) \propto 1 - (\phi / \mu)^p$ in Fig.\
\ref{fig:MCR_log}) are consistent with the WMAP3 data and predict an
unobservably small value for the tensor/scalar ratio $r$. Even if one wished to
consider such models fine-tuned (see, {\it e.g.} Ref.\
\cite{Efstathiou:2006ak}) and therefore disfavored, the range of energy scales
favored by the flow Monte Carlo (Fig.\ \ref{fig:V_recon}) is consistent with
tensor/scalar ratios as low as $r \sim 10^{-5}$, a level unlikely to be
detectable by any currently foreseen experiments. Refs.\
\cite{Boyle:2005ug,Bock:2006yf} suggest that fine-tuning considerations force
the tensor/scalar ratio for a red spectrum to detectable levels of $r \sim
0.01$. We see no evidence of such an effect in the flow analysis, for which no
explicit tuning of the potential is performed.
\section{Conclusions}
In this paper, we presented an analysis of the recent WMAP three-year data set
with an emphasis on parameters relevant for distinguishing among the various
possible models for inflation. Our results are in good agreement with other
analyses of the data \cite{Lewis:2006ma,Seljak:2006bg}, but show significant
inconsistencies with the results reported by the WMAP team in Figs.\ $12$ and
$14$ of Ref.\ \cite{wmap3cosm}.
We found that the WMAP3 data alone are consistent within 95\% C.L.\ with a
scale-invariant power spectrum, $n = 1$, with no running of the spectral index,
$dn/d\ln{k} = 0$ and no tensor component. The Harrison-Zel'dovich spectrum is
therefore still not ruled out at high significance, a conclusion in accord with
Refs.\ \cite{Parkinson:2006ku,Magueijo:2006we}. While a detection of a running
spectral index would be of great significance for inflationary model building
\cite{Easther:2006tv,Cline:2006db}, no clear evidence for running is present
in the WMAP3 data. The data are, however, consistent with strongly negative
running combined with a large tensor/scalar ratio and a ``blue'' power
spectrum.
The inclusion of the Sloan Digital Sky Survey datasets in the analysis has the
effect of reducing the error bars and gives a better determination of the
inflationary parameters. For instance, the inclusion of SDSS rules out quartic
chaotic models of inflation of the form $V(\phi)\sim \lambda \phi^4$. Chaotic
inflation with a quadratic potential $V(\phi) \sim m^2 \phi^2$ is consistent
with all datasets considered.
In addition, we applied the Monte Carlo reconstruction technique to generate an
ensemble of inflationary potentials consistent with observation. Our results
may be summarized as follows: Models which evolve to a late-time fixed point in
the space of flow parameters are strongly disfavored by the data. Of more than
10 million models analyzed in the flow Monte Carlo, one evolved to a late-time
inflationary asymptote indistinguishable from a Harrison-Zel'dovich spectrum.
The rest were characterized by a dynamical end to inflation, with the first
slow-roll parameter $\epsilon$ evolving to unity in finite time. The late-time
attractor in flow space corresponds to models with a ``blue'' power spectrum
($n > 1$) and negligible $r$ and $dn/d\ln{k}$, and we conclude that such
models are inconsistent with the data for $n > 1.001$. This is a significant
constraint on the inflationary model space. In particular, the data rule out
the simplest models of hybrid inflation of the form $V(\phi) = V_0 + m^2
\phi^2$ as well as models such as $V(\phi) \propto 1 + (\mu / \phi)^{p}$, which
predict some negative running. This of course does not rule out all models for
which inflation ends via a hybrid mechanism. Some hybrid models are
characterized by a red spectrum, for example ``inverted'' hybrid models and
models with logarithmic potentials inspired by (global) supersymmetry. Finally,
we find that there is no evidence to support any lower bound on the amplitude
of gravitational waves. Tensor/scalar ratios as low as $r \sim 10^{-5}$ were
produced by the flow Monte Carlo without explicit tuning of the inflationary
potential.
\acknowledgments
We thank Rachel Bean, Olivier Dore, Richard Easther, Justin Khoury, Hiranya
Peiris, and Licia Verde for helpful conversations. We acknowledge support
provided by the Center for Computational Research at the University at
Buffalo. WHK is supported in part by the National Science Foundation under
grant NSF-PHY-0456777. EWK is supported in part by NASA (NAG5-10842).
\section{Introduction}
Ultrashort bursts of electromagnetic radiation provide a versatile, efficient optical tool that can control and even visualize processes on the femtosecond (fs) timescale \cite{Baltuska2003,U07,Wirth2011,K14}. This approach is complementary to the use of frequency-stabilized, many-cycle sources, which can, e.g., transfer the system from its ground state to a well-defined excited state. For short, wideband pulses, the mere time scale and the possibility of tailoring the pulses \cite{Wirth2011} provide a wide variety of promising applications, e.g., light-pulse control of electrons in solids (fast, ``lightwave electronics'' \cite{K14}). Along this line, the time scale of the electronic response of a solid to an ultrashort excitation is crucial.
The appearance of light-induced currents in a dielectric has been demonstrated in Ref.~\cite{Schiffrin2013} using fused silica targets. Theoretical models \cite{DR11,FBY13,WL14} (partially related to high-order harmonic generation \cite{FK96,VMcOKCB14,HIY15}), as well as measurements \cite{SRPSWGPBPYNL14} indicate that these currents are not only being generated on the fs timescale, but they also disappear similarly fast, which is ideal for ultrafast switching. However, the generation of measurable currents in a wide (several times the photon energy) bandgap material requires intense pulses, setting a considerable limitation for practical applications. Conductors, on the other hand, are known to have charge carriers that can produce currents already for weak external fields. This is clearly a more conventional effect, and the primary application of light-induced currents in metals may not be fast switching. Instead, since these currents are easily generated and their time integral (i.e., the charge transferred by the laser pulse) can also be easily measured, the phenomenon can be used for detection purposes. This would lead to affordable all-solid-state devices that can measure the properties of the exciting pulses.
For the sake of simplicity, in the following we consider a one-dimensional model, which, however, can adequately describe the interaction of laser pulses and metallic or semiconductor nanowires \cite{CXR07,LVC03,KSM08} or conducting carbon nanotubes \cite{R10}. (Additionally, effects related to the penetration depth and screening are expected to play a minor role for nanoscale conductors.) The dynamics of the excited electrons is calculated using the time-dependent version of the non-equilibrium Green's function method (TDNEGF) \cite{WJM93,Z05,G14,F15}, which is a standard tool based on the Landauer-B\"{u}ttiker \cite{L88,B86} formalism of transport processes in solids \cite{D95}. This allows us to compute the transmitted current for different energy eigenstates, and finally add these contributions. Using effective mass approximation, in the current work we focus on how fast different initial plane wave states respond to the excitation of the laser pulse.
In order to see the physical mechanisms behind the numerically calculated dynamics, first we consider switch-on effects, i.e., we assume a suddenly increasing, local potential that remains constant after being switched on. Clearly, this should lead to a steady state solution, in which the transmission probability does not change in time. The dynamics of this process is visualized using space and time dependent electron densities, which clearly show the way the localized potential pushes the electrons outside the interaction region.
In the second part of the paper, we consider pulsed laser fields as the source of excitation, and show that the time scales that influence the dynamics are determined by i) the oscillation period of the laser field, ii) $\hbar/E(k)$, where $E(k)$ is the energy eigenvalue corresponding to the initial state and iii) the slow oscillations that result from the time-dependent ponderomotive potential related to the envelope of the laser pulse. We analyze in detail how the ratio of these time scales and the importance of their role in the time evolution depend on the physical parameters.
The current paper is organized as follows: We summarize the physical model and the methods to be used in Sec.~\ref{modelsec}. The results will be discussed in Sec.~\ref{resultsec}, first by considering switch-on effects (Subsec.~\ref{lorisubsec}) which is followed by the analysis of laser-induced currents in Subsec.~\ref{istvansubsec}. Conclusions will be given in Sec.~\ref{summarysubsec}.
\section{Model}
\label{modelsec}
In our one-dimensional (1D) model we consider three regions in space, as shown in Fig.~1. In the second, interaction region, the Hamiltonian describing the dynamics is given by
\begin{equation}
H(x,t)=\frac{1}{2m}(p-e{A(x,t)})^{2}
+e\Phi(x,t), \label{Ham}
\end{equation}
where $e$ denotes the elementary charge, $m$ is the effective mass of the conduction band electron, $p=-i\hbar \frac{\partial}{\partial x}$ is the canonical momentum, and $A$ and $\Phi$ denote the vector and scalar potentials corresponding to the excitation, respectively. (Note that in 1D, $A$ means the only nonzero component of the vector potential, i.e., $A=A_x.$) The actual space and time dependence of the electromagnetic potentials depends on the problem we consider and also on the choice of the electromagnetic gauge. For the investigation of switch-on effects (Subsec.~\ref{lorisubsec}) we use nonzero $\Phi$ but vanishing $A,$ while the laser pulse with electric field strength $F$ in Subsec.~\ref{istvansubsec} will be described using the velocity gauge, i.e., $\Phi$ will be zero and $A(x,t)=-\int_0^t F(x,t') dt'.$ However, in both cases, both $A$ and $\Phi$ will be zero outside the interaction region, i.e., $H=H_0=\frac{p^2}{2m}$ in regions I and III. We also assume that the electromagnetic potentials are zero for $t<0,$ thus for negative values of time, we have free propagation in all the three domains.
The initial state of the electrons is assumed to be a plane wave,
\begin{equation}
\Psi(x,t)=\Psi_0(x,t)=e^{i\left[kx-\omega(k) t\right]}, \ \ t<0.
\end{equation}
Note that $H_0 \Psi(x,t) = E(k) \Psi(x,t),$ with $E(k)=\frac{\hbar^2 k^2}{2m}=\hbar \omega(k).$ For the sake of definiteness, we choose positive wave numbers $k,$ i.e., the initial states propagate in the positive $x$ direction. For $t>0,$ when the excitation is nonzero, the solutions of the time dependent Schr\"{o}dinger equation governed by the Hamiltonians $H_0$ (in regions I and III) and $H(t)$ (in region II) will no longer be plane waves. The perturbation in region II generates wave packets that propagate generally in both the positive and negative $x$ directions and superimpose on the initial plane wave. Considering that $k>0,$ the complete wave function in region III will be termed the ``transmitted'' wave, while in region I, we have a superposition of the ``incoming'' wave $\Psi_0$ and the ``reflected'' one. The usual probability current density (which is proportional to the charge current density)
\begin{equation}
j(x,t)=\frac{\hbar}{m}\mathrm{Im} \left[\Psi^*(x,t) \frac{\partial}{\partial x} \Psi(x,t)\right]
\end{equation}
can be used to define the transmission probability (valid in region III)
\begin{equation}
T(x,t)=\frac{j(x,t)}{j_0(x,t)}=\frac{j(x,t)m}{\hbar k}
\end{equation}
as the ratio of current densities corresponding to the incoming plane wave $\Psi_0$ and the transmitted one $\Psi,$ which, clearly, has to be evaluated in region III. (Note that in the equation above we used that the current density corresponding to $\Psi_0$ is constant both in time and space, i.e., $j_0(x,t)=j_0=\hbar k/m.$) For steady state solutions, $T(x,t)$ is also constant, but this is not the case for time dependent problems.
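For reference, both quantities are easy to evaluate from a numerical wave
function on a spatial grid. The sketch below is ours (central-difference
derivative; the $\hbar = m = 1$ defaults are a convenience, not the units
used in the simulations):
\begin{verbatim}
import numpy as np

def current_density(psi, dx, hbar=1.0, m=1.0):
    # j(x) = (hbar/m) Im[ psi* d(psi)/dx ]
    return (hbar / m) * np.imag(np.conj(psi) * np.gradient(psi, dx))

def transmission(psi, dx, k, hbar=1.0, m=1.0):
    # T(x) = j(x) / j_0  with  j_0 = hbar k / m
    return current_density(psi, dx, hbar, m) / (hbar * k / m)
\end{verbatim}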
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[-{Stealth[length=3mm, width=1.5mm]}] (0,0) -- (7,0) node[anchor=north] {$x$};
\draw (2,0) node[anchor=north] {$ x_1 $}
(5,0) node[anchor=north] {$ x_2 $};
\draw (1,2.3) node{ \footnotesize \bf Region I }
(3.5,2.3) node{ \footnotesize \bf Region II }
(6,2.3) node{ \footnotesize \bf Region III };
\draw[-{Stealth[length=3mm, width=1.5mm]}] (0,0) -- (0,3) node[above, midway, color=black, rotate=90] { $\Phi / A$};
\draw[dotted] (2,0) -- (2,1);
\draw[dotted] (5,0) -- (5,1);
\draw[red,very thick] (0.01,.02) -- (2,0.02);
\draw[red,very thick] (5,0.02) -- (6.5,0.02);
\draw[color=red] (2.9,1.8) node{\tiny $\Phi(x,t)/A(x,t)$ };
\draw[red,thick] plot[smooth] coordinates {(2,0.02) (2.2,0.2)(2.5,1.1) (3.5,1.4)(4,2)(4.5,0.7) (4.8,0.2) (5,0.02)};
\draw[color=blue,Latex-Latex] (0,0.8) -- (2,0.8) node[midway, above]{\tiny Free propagation };
\draw[color=blue] (1,0.5) node{\tiny (incoming};
\draw[color=blue] (1,0.2) node{\tiny +reflection) };
\draw[color=blue,,Latex-Latex] (2,0.8) -- (5,0.8) node[midway, above] {\tiny Scattering };
\draw[color=blue,-Latex] (5,0.8) -- (7,0.8) node[midway, above] {\tiny Free propagation };
\draw[color=blue] (6,0.5) node{\tiny (transmission) };
\end{tikzpicture}
\caption{The scheme of the geometry that we consider. Localized external fields (described by the scalar and/or vector potentials) interact with the conduction band electrons in the middle of region II. The electrons can propagate freely in regions I and III.}
\label{timedepfig}
\end{center}
\end{figure}
\bigskip
The technical difficulty in solving the problem described above is that regions I and III are in principle semi-infinite. In other words, focusing only on a finite interval encompassing the interaction region, we face an open quantum system, which is in a strong connection with its surroundings: e.g., the transmitted wave propagates outside the region of interest. For sinusoidal excitations, this problem can be handled by using Floquet's theory \cite{F883,SPF15}, but for pulsed excitations a different approach is needed.
For localized, static potentials, a method based on non-equilibrium Green's functions (NEGF) has proven to be very effective for the description of transport processes in nanoscale devices, where semi-infinite leads are assumed to be connected to the region of interest. Without going into details (which can be found e.g. in Ref.~\cite{D95}), the main idea is to use analytic results for the semi-infinite regions, in which the potential is zero (and quantum mechanical waves propagate freely). For a given input energy $E$, the scattering matrix connecting the two leads (which is in a direct connection with the transmission probability) is shown to be proportional to the retarded Green's function (matrix) $G^R$ of the problem \cite{FL81}. Using effective mass approximation and spatial discretization, one faces the problem of inverting an infinite matrix to obtain $G^R,$ which is essentially $(H-E)^{-1}.$ However, since the problem is ``simple'' in the leads (free propagation), analytic results exist for their contribution. In this way, the matrix one has to invert will be finite; in fact, only the matrix elements of the Hamiltonian that connect the interaction area to the leads have to be modified, in order to take the effects of the leads into account. As a result, a numerically exact solution of the scattering problem can be obtained. This method will be used in Subsec.~\ref{lorisubsec} in order to calculate the solution of the static scattering problem as a reference.
For time dependent excitations, a time dependent version of the non-equilibrium Green's function approach (TDNEGF) is to be used. In the single electron picture, this approach can be turned into a direct numerical method \cite{G14,F15}. Practically, the difficulty to be handled is allowing the disturbance caused by the excitation to leave the interaction region undisturbed. For a general time-dependent excitation, the behaviour of the wave function at the boundaries of the interaction region is a result of the time evolution, thus it cannot be known in advance. The numerical version of the single particle TDNEGF method allows us to modify the time derivative of the wave function at the boundaries of the interaction region to mimic the semi-infinite leads, similarly to the static case. The numerical price of the ideally transparent boundary conditions is that the corrections at the boundaries involve time integrals that keep track of the probability current that has flowed out of the interaction region. This approach will be used to calculate the consequences of time-dependent excitations.
\section{Results}
\label{resultsec}
\subsection{Switch-on effects}
\label{lorisubsec}
Let us start by considering the case when an external potential is suddenly switched on. This can serve as a model for gate voltage induced transients in metallic solids. Concretely, we assume that $A=0$ in Eq.~(\ref{Ham}), while the scalar potential is given by
\begin{equation}
\Phi(x,t)= \varphi(x) \chi(t)
\label{switchpot}
\end{equation}
with
\begin{equation}
\varphi(x)=
\begin{cases}
\varphi_{max}\sin^2{\left(\frac{5\pi}{L}x\right)} & x \in \left[0,\frac{L}{10}\right], \\
\varphi_{max} & x \in \left[\frac{L}{10},\frac{9 L}{10}\right], \\
\varphi_{max}\left\lbrace 1-\sin^2{\left[\frac{5\pi}{L}x-4.5\pi\right]}\right\rbrace & x \in \left[\frac{9 L}{10},L \right],\\
0 & \text{otherwise.}
\end{cases}
\label{spatial}
\end{equation}
The time dependent part of the scalar potential (i.e., $\chi(t)$) represents a fast (but not instantaneous) switch-on with a $\sin^2$ envelope for Figs.~2 and 3, and also a similarly decaying switch-off for Fig.~4. The actual behaviour of $\chi(t)$ is shown in the top panels of Figs.~2 and 4.
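For reproducibility, the profile of Eq.\ (\ref{spatial}) can be evaluated as
in the following short sketch (the function name and the vectorized form are
ours):
\begin{verbatim}
import numpy as np

def phi_profile(x, L, phi_max):
    # smooth sin^2 ramps on both edges, flat plateau in between
    y = np.zeros_like(x, dtype=float)
    up   = (x >= 0) & (x < L / 10)
    flat = (x >= L / 10) & (x <= 9 * L / 10)
    down = (x > 9 * L / 10) & (x <= L)
    y[up]   = phi_max * np.sin(5 * np.pi * x[up] / L) ** 2
    y[flat] = phi_max
    y[down] = phi_max * (1.0
              - np.sin(5 * np.pi * x[down] / L - 4.5 * np.pi) ** 2)
    return y
\end{verbatim}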
In order to be able to compare the current results to the case of a ponderomotive potential to be discussed in the next subsection, we consider $\Phi(x,t)$ above as a potential barrier for the electrons. The appearance of the potential hill changes the initially uniform electron density. For a given input energy $E(k)$ and a potential with a height that corresponds to a classically forbidden region for $E(k)$ (and has an extension that renders quantum mechanical tunneling practically impossible), the transmission will be zero in the long time limit. The slope of the potential forces the electron density out of the interaction region, leading to transient wave packets travelling in both the reflected and transmitted directions. On the transmission side of the potential, because of the impenetrable barrier, there will be no input current, thus after the disappearance of the transients, the probability density will be zero. On the other side of the potential barrier, however, there is a continuous input travelling towards the barrier. The incoming plane wave is totally reflected by the potential, and the interference of oppositely travelling waves creates a regular pattern that appears to propagate in the negative direction with a velocity determined by the reflected transients.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{MSFfig2-eps-converted-to.pdf}
\caption{
Central panel: Space and time dependence of the electron density as induced by the time dependent potential shown on the top. The right panel shows the steady state electron density (blue dashed line) and the spatial dependence of the potential (solid red line). The maximum of the potential was chosen so that the steady state transmission probability is $T=1/2$ for the initial electron energy of $E(k)=54$ meV. The dashed white lines guide the eyes, they represent a motion with a constant band velocity.
}
\label{fig:transient}
\end{center}
\end{figure}
The steady state interference pattern can be calculated using the static NEGF method sketched in the previous section, allowing us to calculate a transmission probability $T,$ which is constant both in space and time. Typical transient dynamics for the case when $T$ is not very close to zero is shown in Fig.~2. As we can see, the main effects we mentioned for an impenetrable potential above, also appear in this case. We can clearly identify the decrease of the electron density on the transmission side of the barrier, and the gradual appearance of the interference pattern on the reflection side of the barrier is also visible. The time scale of these effects can be calculated by investigating the slopes of the dashed white lines in Fig.~2, which turn out to be very close to the band velocity $v_k=\frac{1}{\hbar}\frac{\partial E(k)}{\partial k}$ (which is equal to $\frac{\hbar k}{m}$ for the parabolic bands we consider).
There are, however, slower mechanisms that finally lead to the steady state solution. E.g., the high electron density peak at the reflection side of the potential is formed on a longer time scale. The intuitive reason for the appearance of this peak is that since the incoming plane wave cannot be perfectly transmitted through the potential, its reflected part interferes with its own continuously arriving input part and results in an accumulation of the electron density. The regular density pattern on the reflection side of the potential reaches its steady state structure also on a longer time scale: although the primary structure becomes visible practically as soon as the first reflected wave packet arrives, the heights of the interference peaks converge to their final, steady state value on a much longer time scale. That is, while the part of the problem that can be understood by merely classical considerations (i.e., the expulsion of the electron density from the interaction area) has a time scale that is determined by the band velocity, interference related effects need more time to be built up. This can be seen in Fig.~3, where the difference
\begin{equation}
D(t)=\int \left[\rho_s(x) - \rho(x,t)\right] dx
\label{dist}
\end{equation}
is shown. In the equation above, the electron densities have the usual definition of modulus square of the wave function, and $\rho_s(x)$ corresponds to the steady state density that is constant in time, while $\rho(x,t)$ is calculated from the actual, proper wave function that includes transients. The integration domain is the whole interaction region. As we can see, for all curves in Fig.~3, $D(t)$ approaches zero for increasing values of $t,$ which is natural since the transients are expected to leave the interaction area in the long time limit. Additionally, we can also see an initial, fast decrease of all $D(t)$ curves, which corresponds to the effects emphasized by the white dashed lines in Fig.~2. The time instant when the forward scattered wave packets leave the interaction region correspond to pronounced drops (indicated by the arrows in Fig.~3) of the function $D(t).$ Clearly, the behaviours of the curves depend on the input electron energies: when the corresponding band velocity is larger, convergence towards the steady state electron density is faster. This could be expected for the initial part of the time evolution, but according to our results, it also holds for the final, slower, interference-dominated part of the dynamics.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{MSFfig3-eps-converted-to.pdf}
\caption{
The distance $D$ defined by Eq.~(\ref{dist}) between the steady state solution and the actual electron density for different electron energies $E(k).$ For the sake of easy comparison, the steady state transmission probability is $T=1/2$ for all curves. The input electron energies are indicated by the legend. The arrows correspond to time instants when the first pronounced wave packets leave the interaction region on the transmission side of the potential (e.g., see the upper dashed white line in Fig.~2.)
}
\label{fig:distance}
\end{center}
\end{figure}
Note that the results shown by the figures are representative, but unavoidably particular examples. Although the qualitative picture is the same as discussed above, different parameter settings influence the details of the process of convergence towards the final, steady state solution. Generally, the effect of the sudden emergence of the potential barrier results in the broadening of the initially infinitely narrow quasimomentum distribution. $\Psi_0,$ which corresponds to a well defined value of $k=k_0,$ will be transformed into a superposition of plane waves with different quasimomenta. This distribution has two dominant peaks at $\pm k_0,$ which determine the fast part of the time evolution, but the convergence is not complete until those parts of the wave packet that belong to smaller quasimomenta slowly leave the interaction area. Additionally, when defining a characteristic time for the convergence (e.g., by requiring $D(t)$ to decrease below a predefined limit), we always have to specify the interval on which we would like to observe convergence. Clearly, the further away the observation point from the potential barrier is, the more time is needed for the transients to disappear.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{MSFfig4-eps-converted-to.pdf}
\caption{
Central panel: Space and time dependence of the electron density as induced by the time dependent potential shown on the top. Right panel: The spatial dependence of the potential (its maximum is the same as in Fig.~2.)
}
\label{fig:onoff}
\end{center}
\end{figure}
\bigskip
It is also interesting to ask what happens when the suddenly emerging potential stays constant in time for a finite interval and then becomes zero again. This corresponds to the case of switching on and off a gate voltage that influences the electron dynamics. According to the example shown in Fig.~4, the dominant effect in this case is the generation of wave packets that propagate in the same direction as the initial plane wave. In view of the previous results, we can interpret the first maximum as the propagation of the electron density that was pushed out (in the positive direction) of the interaction region by the emergence of the potential. The minimum following this peak is a consequence of the decrease of the transmission because of the potential barrier. The following peaks are new in the sense that they are not consequences of switching on the potential. Instead, they essentially correspond to the propagation of the pronounced density maximum that can be seen in Fig.~2, as well as that of the whole interference pattern on the reflection side of the potential. (Intuitively: the amassed probability density is released when the potential barrier vanishes.) During the propagation, weaker interference maxima merge, and essentially a double peaked wave packet remains. The minimum between the two peaks is a fingerprint of the time window when the potential was on, and the transmission was not unity.
Clearly, the result shown in Fig.~4 is not completely general. Depending on the time difference between switching on and off the potential, the time evolution of the electron density can be qualitatively different. (E.g., when the time between switching on and off the potential is considerably larger than the one shown in the figure, two separated wave packet families arise.) However, the parameters we used for Fig.~4 can mimic the ponderomotive potential of a localized laser excitation, and thus can help interpreting the results of the next subsection.
\subsection{Laser pulse induced localized excitations}
\label{istvansubsec}
Now we describe laser-matter interaction in the velocity gauge and assume that $\Phi=0,$ while the space and time dependence of the $x$ component of the vector potential is given by
\begin{equation}
A(x,t) = A_0 \sin^2 \left( \frac{\pi}{L} x \right) \sin^2\left( \frac{\pi}{\tau} t \right) \sin \left( \omega_0 t \right)
\end{equation}
if $x \in [0,L]$ and $t \in [0,\tau]$, otherwise $ A(x,t)=0$. In the simulations we consider the central wavelength of the laser to be $ \lambda_0 = \frac{2 \pi c}{\omega_0} = 800$ nm, and assume a pulse duration of $ \tau = 26.7$ fs (corresponding to 10 complete oscillations). The electric field can be calculated as the negative of the time derivative of $A(x,t)$, and we set the amplitude of the external electric laser field to $ F_0 = 1$ GV/m.
It is important to emphasize that using dipole approximation, i.e., assuming that the vector potential has no spatial dependence, the time evolution induced by the Hamiltonian (\ref{Ham}) would be trivial for an initial plane wave. Since plane waves are eigenstates of $H(t)$ for $\frac{\partial A}{\partial x}=0,$ there will be no transitions between different plane wave states in this case. This is clearly related to the well-known fact that free electrons do not gain energy during the interaction with a single electromagnetic mode. That is, the localization of the excitation is crucial from the viewpoint of nontrivial time evolution (at least for a single band model).
Technically, for the calculation of the response of the electrons to the localized laser pulse, we found that instead of the TDNEGF approach, it is more efficient to use a Fourier transform-based method. In more detail, we consider a relatively large computational box for region II shown in Fig.~1. In this way the excitation-induced wave packets have no time to reach the boundaries of the computational box until the laser pulse is over. Then we extend the computational box by roughly ten times its original size, apply periodic boundary conditions and use spatial Fourier transform to obtain $k$-resolved states, whose time evolution is trivial, i.e., plane wave-like. Using a sufficiently large extended computational box, the effectiveness of the fast Fourier transform algorithm leads to a fast numerical method that provides an adequate description of the problem until the transients leave the (original) region II.
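The final propagation step is worth spelling out: once the pulse is over,
$A \equiv 0$ everywhere, so each plane-wave component merely accumulates a
phase and the evolution on the periodic extended box is exact. A sketch of
this step (ours, with NumPy FFT conventions and $\hbar$, $m$ kept explicit):
\begin{verbatim}
import numpy as np

def free_propagate(psi, dx, t, hbar=1.0, m=1.0):
    # evolve psi for a time t under H0 = p^2 / 2m on a periodic grid:
    # psi(k) -> psi(k) * exp(-i hbar k^2 t / 2m)
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    phase = np.exp(-1j * hbar * k**2 * t / (2.0 * m))
    return np.fft.ifft(phase * np.fft.fft(psi))
\end{verbatim}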
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{MSFfig5-eps-converted-to.pdf}
\caption{
Space and time dependent electron density. The inset zooms in on the domain on which the excitation is nonzero. Parameters are: $ F_0 = 1$ GV/m, $L = 160$ nm, $ \tau = 26.7$ fs, $ \lambda_0 = 800$ nm.
}
\label{fig:laserdens}
\end{center}
\end{figure}
Fig.~5 shows the typical space and time dependence of the electron density. As we can see, during the presence of the laser signal, the electric field induces oscillatory electron motion in the interaction area. Depending on its sign, the external field moves the electrons in the positive or negative $x$ direction. The corresponding density intensifications and attenuations leave the interaction area as narrow wave packets, the periodicity of which is the same as that of the laser carrier oscillations (see the inset in Fig.~5). However, the amplitude of these oscillations strongly decreases as we increase the size of the interaction area, which is directly related to the fact mentioned in the first paragraph of this subsection. This effect is so strong that for traditional focusing, which corresponds to spot sizes a few times larger than the wavelength, oscillations with the laser carrier frequency are practically unseen (unless we calculate Fourier transforms, see below). In order to show these oscillations in the figure, interaction lengths below the wavelength of the exciting laser pulse were considered, which can be achieved e.g., by nanolocalized fields (see e.g. \cite{SS11,FM15}).
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{MSFfig6-eps-converted-to.pdf}
\caption{
Top panel: typical time dependent current density calculated at the point $x=2 L$. Bottom panel: power spectra of the time dependent signals for the sizes of the interaction area ($L$) indicated by the legend. Parameters: $ F_0 = 1$ GV/m, $ \tau = 26.7$ fs, $ \lambda_0 = 800$ nm.
}
\label{fig:spectra}
\end{center}
\end{figure}
On the other hand, the exact limit of spatially independent excitations can practically never be reached, since the focal spot of the exciting laser pulse is obviously finite. This localization leads to the most pronounced wave packet structure shown in Fig.~5. The similarity of these double peaked density waves to the ones that appear for non-oscillating potentials suggests a common interpretation. Indeed, besides the mere visual correspondence, the space dependent ponderomotive potential of the laser field
\begin{equation}
U_p(x,t)=\frac{e^2\mathcal{E}_0^2(x,t)}{4m\omega_0^2}
\end{equation}
can play the role of the potential that was considered in the previous subsection. (Here $\mathcal{E}_0(x,t)$ denotes the slowly varying local amplitude of the electric field strength, the time dependence of which stems from the change of the envelope of the laser field.) In other words, the finite duration of the excitation means a ponderomotive potential that is switched on and off on the time scale of the duration of the laser pulse. This potential produces the double peaked density waves seen in Fig.~5, similarly to the case discussed in the previous subsection.
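For completeness, evaluating $U_p$ in SI units is a one-liner; the helper
below (ours) accepts the local envelope on a grid:
\begin{verbatim}
import numpy as np
from scipy.constants import e, m_e

def ponderomotive(E0, omega0, m=m_e):
    # U_p = e^2 E0^2 / (4 m omega0^2); E0 in V/m, omega0 in rad/s
    # m: (effective) mass; the free-electron mass is only a default here
    return e**2 * np.asarray(E0)**2 / (4.0 * m * omega0**2)
\end{verbatim}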
Let us recall, however, that the concept of the ponderomotive potential involves averaging over an oscillation period. That is, although $U_p$-related effects are seen for all parameter ranges that we investigated, they are expected to be stronger when -- in classical terms -- the electron spends more time in the interaction zone. The analysis of the current density $j$ measured at the transmission side boundary of the interaction area completely supports this qualitative picture.
Fig.~6 shows the time dependence of $j$ as well as its frequency spectrum for three different sizes of the interaction area. (Note that effects related to localized excitation (in the sense of quantum confinement) have recently been investigated in Ref.~\cite{MAAB17}.) The general feature of $j(t)$ shown in the figure is that it contains three different signals: two relatively weak, fast oscillating wave packets, which enclose a more slowly oscillating part with a larger amplitude. Comparing with two-dimensional plots as shown by Fig.~5 for the particle density, the weaker signals can be identified as consequences of the direct laser-driven charge oscillations: they are born during the interaction with the laser field, at both sides of the interaction area (where the gradient of the field is the largest), while the more slowly varying part is $U_p$-related. This is also seen in the power spectra $|j(\omega)|^2,$ which have two distinct peaks corresponding to the time signals described above. Note that the central frequency of the laser, $\omega_0,$ is conveniently compared to $\omega(k)=E(k)/\hbar$ in order to identify different regimes, but the frequency $\omega(k)$ is practically absent from the spectra. The reason for this is that the current density involves the product of the wave function and its complex conjugate, thus it is not sensitive to the quantum mechanical phase of the wave function.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{MSFfig7-eps-converted-to.pdf}
\caption{
Power spectra $|j(\omega)|^2$ calculated at the point $x=2L$ for different initial electron energies $E(k)=\hbar\omega(k)$. Parameters are the same as in Fig.~5.
}
\label{fig:spectraE}
\end{center}
\end{figure}
The relative weights of the $\omega_0$- and $U_p$-related peaks in the spectra have opposite dependences on the parameters we consider. As we can see in Figs.~6 and 7, oscillations with the frequency of $\omega_0$ get weaker for larger interaction areas as well as for lower electron energies $E(k).$ This is in agreement with the considerations in the first paragraph of this subsection: in classical terms again, the larger the effective interaction area is (the more time the electron spends in the laser field), the more appropriate the dipole approximation is, and the weaker the laser-induced effects are expected to be. Although the ponderomotive potential is also slightly less effective for larger interaction areas (since the gradient of the field envelope is smaller), this effect is much weaker than the decrease of the signal at $\omega_0.$ Therefore, as the interaction area increases, $U_p$-related effects become dominant.
Additionally, slow electrons experience more laser cycles, which, as Fig.~7 shows, leads to stronger low-frequency peaks in $|j(\omega)|^2.$ Moreover, while the peaks around $\omega_0$ in the spectra shown in Figs.~6 and 7 have practically the same widths, narrower interaction areas and higher electron energies lead to shorter $U_p$-related signals in time domain, and consequently the corresponding frequency domain peaks will be broader. This is in complete agreement with the intuitive picture we can associate to $U_p$ as a potential that is being switched on and off on the timescale of the pulse duration.
\section{Summary}
\label{summarysubsec}
Conduction band electrons were investigated in localized, time-dependent external fields. For switching on a potential that represents a gate voltage, it was shown that the convergence towards the steady state solution is determined by two time scales: a faster one that corresponds to the band velocity, and a slower one that describes the build-up of the steady state interference pattern. For a laser pulse, the mechanisms behind the behaviour of the electrons are laser driven oscillations and the effects of the ponderomotive potential. By investigating the frequency spectra of the time dependent current induced by the laser pulse, we have shown and explained that for large interaction areas and low electron energies, the time evolution is dominated by the low frequency, few-cycle oscillations that are induced by the ponderomotive potential. An electronic response at the same frequency as the exciting field is expected to appear for spatially narrow excitations and high energy electrons. Our results are relevant for using metallic targets as detectors for the parameters of the laser pulse.
\bigskip
The ELI-ALPS project (Grants No. GOP-1.1.1-12/B-2012-000 and No. GINOP-2.3.6-15-2015-00001) is supported by the European Union and co-financed by the European Regional Development Fund.
Our work was also supported by the European Social Fund (EFOP-3.6.2-16-2017-00005).
\section{Introduction}
\label{intro}
Image processing algorithms have various objectives that can include a combination of loss functions or a pair of target level-specific training datasets.
For example, in restoration tasks such as denoising, there is an optimal level for each input whose noise level is unknown; and in image synthesis, balancing between fidelity and naturalness \cite{pdt2018} depends on the target application.
In style transfer, the user hopes to control various styles and stylization strengths continuously.
However, most image processing deep networks are trained and optimized for a single-level.
In this paper, the word \textbf{\textit{level}} can be one of the following examples: a target noise level (sigma of Gaussian or quality factor of JPEG), a combination of loss functions, or a strength of stylization.
If we want to handle $N$ multiple levels, we have to train $N$ different models or exploit the structure of multi-task learning \cite{edsr2017} (Fig. \ref{fig:cmll} (b)), which is very inefficient when $N$ increases.
In addition, in many image processing tasks, levels can be real numbers and each level produces semantically meaningful outputs.
Therefore, designing a network for continuous levels in an efficient way is a very practical issue. Fig. \ref{fig:cmll} describes the differences among multi-task learning, multi-level learning and continuous-level learning. Compared to multi-task learning, multi-level learning solves a single-task, multiple discrete-level problem. Continuous-Level Learning (CLL) is an extension of multi-level learning in which the levels vary continuously between two trained levels.
There have been several CLL frameworks~\cite{adafm2019,dynamicnet2019,cfsnet2019,dni2019,esrgan2018}, and they commonly have the following steps. In the train phase, they train CNNs on the initial level; then the networks are fine-tuned or modified with a tuning layer for the second level while some parameters are frozen. In the test phase, they make their networks available at any intermediate level with their own interpolation methods.
These steps are derived from the observations~\cite{adafm2019,dni2019}. The observation shows that the fine-tuned filters are similar to that of original filters which makes the interpolation space between filters linear approximately.
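As a minimal illustration of the \textit{fine-tune and interpolate} paradigm, the following Python sketch performs DNI-style parameter interpolation between two parameter sets trained for the two end levels; the dictionary layout and tensor shapes are illustrative only.
\begin{verbatim}
import numpy as np

def interpolate_params(theta_a, theta_b, alpha):
    """DNI-style blending: per-layer linear interpolation between
    parameters trained for the initial level (theta_a) and parameters
    fine-tuned for the second level (theta_b)."""
    return {name: (1.0 - alpha) * theta_a[name] + alpha * theta_b[name]
            for name in theta_a}

# Toy example with a single 16x16x3x3 convolution filter per "network".
rng = np.random.default_rng(0)
theta_a = {"conv1": rng.standard_normal((16, 16, 3, 3))}
theta_b = {"conv1": theta_a["conv1"]
                    + 0.1 * rng.standard_normal((16, 16, 3, 3))}
theta_half = interpolate_params(theta_a, theta_b, 0.5)  # intermediate level
\end{verbatim}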
To achieve general, smooth and stable results, CLL algorithms have to satisfy three conditions.
The first one is \textbf{\textit{good adaptation} (Sec. \ref{adaptation})}.
After fine-tuning a network on the second level, since it contains parameters for both levels, its performance might be lower than that of a network trained only for the second level. Therefore, CLL frameworks have to be flexible so that they can adapt well to new levels.
The second one is \textbf{\textit{good interpolation} (Sec. \ref{interpolation})}.
Even though the network works well on the two trained levels, it might not for the other intermediate levels.
Therefore, it is important for the networks to maintain high performance and reasonable outputs for the intermediate levels.
The last one is \textbf{\textit{efficiency} (Sec. \ref{efficiency})}.
Since one of the main objectives of CLL is to use a single network instead of multiple networks trained for each level, requiring too much memory and computational resources is not practical for real-world applications.
\begin{table}[!t]
\centering
\caption{\small{Comparison of representative continuous-level learning methods. The word \textit{Regularized} indicates that a method produces smoother interpolation and fewer oversmoothing artifacts, as discussed in Sec. \ref{regularization}. * indicates that this is achieved naturally from linearity}}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{lcccc}
\toprule
& No Extra Memory & Non-linear Adaptation & Efficient & Regularized \\
\midrule
AdaFM~\cite{adafm2019} & \cmark & \xmark & \cmark & \cmark* \\
DNI~\cite{dni2019,esrgan2018} & \xmark & \cmark & \xmark & \xmark \\
Dynamic-Net~\cite{dynamicnet2019} & \cmark & \cmark & \xmark & \xmark \\
CFSNet~\cite{cfsnet2019} & \cmark & \cmark & \xmark & \xmark \\
FTN~(Ours) & \cmark & \cmark & \cmark & \cmark \\
\bottomrule
\end{tabular}}%
\label{tb:comparison}
\end{table}
Most of the prior approaches to CLL fail to satisfy all three conditions. AdaFM~\cite{adafm2019} introduces a tuning layer called the feature modification layer for the second level, which satisfies the efficiency condition by adding a simple linear transition block (a depth-wise convolution). However, the linearity reduces the flexibility of adaptation. Therefore, AdaFM cannot satisfy the good-adaptation condition and is not appropriate for more complex tasks such as style transfer. Deep Network Interpolation (DNI)~\cite{dni2019,esrgan2018} interpolates all parameters of two distinct networks trained for each level to increase flexibility. However, fine-tuning the network without any constraint does not take the initial level into account, which can degrade performance on intermediate levels. Therefore, DNI fails to satisfy the good-interpolation condition. DNI also requires extra memory to store temporary network parameters and a third, interpolated network for inference. To satisfy both the adaptation and interpolation conditions, CFSNet~\cite{cfsnet2019} and Dynamic-Net~\cite{dynamicnet2019} propose frameworks that interpolate the feature maps, not the model parameters, using additional tuning branches. However, the tuning branches require large memory and heavy computations, up to double those of the baseline networks, so the efficiency condition is not satisfied. Moreover, these heterogeneous networks can cause oversmoothing artifacts because each branch cannot consider the opposite level; this side effect will be discussed in Sec. \ref{interpolation}.
In this paper, we propose a novel CLL method called the \textit{Filter Transition Network (FTN)}, which takes convolutional neural network filters as input and learns the transitions between levels. Since FTN is a non-linear module, networks can adapt to any new level. Therefore, it can cover general image processing tasks, from simple Gaussian image denoising to complex stylization. FTN \textit{transforms} the filters of the main network via other learnable networks, which regularizes the fine-tuning process within stable parameter spaces for smooth and stable interpolation. Therefore, the good-interpolation condition can be satisfied.
Regarding the efficiency condition, following the observations in~\cite{adafm2019,dni2019}, FTN directly changes filters and is thus data-agnostic. This avoids the increase in computational complexity that is proportional to the spatial input resolution. Additionally, a randomly initialized FTN makes the training process unstable since it directly changes model parameters. To solve this problem, we propose a method to initialize CNN modules as identity mappings. More detailed comparisons with existing CLL frameworks are shown in Table~\ref{tb:comparison}.
In short, the proposed framework makes the following contributions:
\begin{itemize}
\item We propose a novel CLL (Continuous-Level Learning) method that is not only flexible, using a non-linear transition, but also regularized so as not to forget the initial level, preventing side effects.
\item For the stability of FTN training, we propose a new initialization method that makes a randomly initialized non-linear convolutional network an identity mapping.
\item Our method is smooth and stable in terms of adaptation and interpolation, and significantly more efficient in both memory and operations, while its performance is comparable to that of other competitive algorithms.
\item We suggest a simple and efficient method for pixel-adaptive continuous-level extensions without using pixel-adaptive convolutions.
\end{itemize}
\section{Related Work}
\noindent\textbf{Image Restoration.} CNN-based image restoration has shown great performance improvements over hand-crafted algorithms. After shallow networks \cite{arcnn2015,srcnn2015}, some works stacked deeper layers, exploiting the advantages of residual skip-connections \cite{vdsr2016,dncnn2017}. Following the evolution of image recognition networks, restoration networks have focused on coarse-to-fine schemes \cite{lapsrn2017}, dense connections \cite{rdn2018}, attention \cite{rcan2018}, and non-local networks \cite{nlrn2018}.
However, most networks are trained and optimized for a single level such as the Gaussian noise level in denoising, quality factor in JPEG compression artifact removal, and super-resolution scale in single-image super-resolution.
If the levels used in training and testing do not match, optimal restoration performance cannot be achieved.
To address this limitation, \cite{mildenhall2018burst,zhang2018ffdnet} proposed training on multiple noise levels with a noise-level map, and a noise estimation network \cite{guo2019toward} can also be a solution. However, the user cannot control the result at the test phase for better personalization~(\textit{e.g.,} oversmoothing).
\noindent\textbf{The Perception-Distortion Trade-off.} In contrast to the general approach that attempts to reduce the pixel error with respect to the ground truth, some works \cite{argan2017,srgan2017,esrgan2018} attempted to produce more natural images using the generative power of GANs \cite{gan2014,cgan,dcgan}.
They used a combined loss of the fidelity term and adversarial term and then obtained better perceptual quality.
However, when the adversarial loss is weighted more heavily, fidelity to the ground truth worsens due to the perception-distortion (PD) trade-off \cite{pdt2018}.
In \cite{pdt2018}, the authors proposed evaluating restoration performance via a PD-plane \cite{pirm2018}, considering the balance between fidelity and naturalness. However, the network must be retrained on another loss function for every point of a continuous PD-function, which is very time-consuming.
\noindent\textbf{Style Transfer.} With regard to image style transfer, Gatys~\emph{et~al.}~\cite{styletransfer2015} proposed a combination of content loss and style loss, and optimized content images via pre-trained feature extraction networks. Johnson~\emph{et~al.}~\cite{perceptual2016} made it possible to operate in a feed-forward manner using an image transformation network. However, a network trained on a single objective cannot control the balance between content and style and cannot handle continuous styles when it is trained on a single style.
Even though \cite{gatys2017controlling} can control several factors in the training phase and arbitrary (Zero-shot) style transfer such as \cite{huang2017arbitrary,sheng2018avatar} can handle infinite styles using adaptive instance normalization and style decorator, none of these can control continuous objectives (losses) at the test phase.
\section{Proposed Approach}
\subsection{Filter Transition Networks}
\label{defconv}
\begin{figure*}[!b]
\setlength{\belowcaptionskip}{-5pt}
\begin{center}
\includegraphics[width=0.99\linewidth]{./images/net.png}
\end{center}
\vspace*{-5mm}
\caption{\small{Network architecture of our Filter Transition Network (FTN) when adopted in an arbitrary main convolutional network. The filter of the main network (\second{blue}) is transformed via the FTN for other levels (\first{red}). In the inference phase, the interpolated filter (\third{purple}) is used for intermediate levels}}
\label{fig:network}
\end{figure*}
The general concept of our module is the same as in prior CLL frameworks: \textit{fine-tune and interpolate}, as described in Sec. \ref{intro}. Our overall framework is detailed in Fig. \ref{fig:network}. Our FTN module in an arbitrary convolutional layer can be described as
\begin{equation}
\textbf{X}_{i+1} = \textbf{X}_{i} * (\textbf{f}_{Ai} \times (1-\alpha) + FTN(\textbf{f}_{Ai}) \times \alpha )
\label{eq:ftn}
\end{equation}
\noindent where $\textbf{X}_{i}$ is the $i$-th convolutional feature map and $\textbf{f}_{Ai}$ is the corresponding filter.
The FTN consists of two $1\times1$ convolutions with $G$-grouped convolution \cite{alexnet,resnext}, PReLU~\cite{prelu2015} activation functions, and a skip-connection with a weighted sum.
First, we train the main convolutional filter for the initial level with $\alpha=0$, which yields a vanilla convolution. Then, we freeze the main network and train the FTN only for the second level with $\alpha=1$, which breaks the skip-connection. The FTN thus learns the task transition itself, approximating the filters of the second level so that $FTN(\textbf{f}_{Ai}) \approx \textbf{f}_{Bi}$, where $\textbf{f}_{Bi}$ is an optimal filter for the second level. In the inference phase, we can interpolate between the two filters (levels) by choosing $\alpha$ in the range of 0 to 1 in (\ref{eq:ftn}). Consequently, the FTN implicitly learns continuous transitions between levels, and $\alpha$ represents the amount of filter transition towards the second level.
The reasons why $1\times1$ convolution is used are two-fold: 1) it is lightweight, and 2) padding is not required. Even though $1 \times 1$ filters cannot process spatially, they can be \textit{learned} considering spatial correlations because the input of the FTN is very small (usually $3 \times 3 \times C$). Note that the $1\times1$ filters in the FTN convolve network filters, not images or features.
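For illustration, a minimal PyTorch sketch of (\ref{eq:ftn}) follows. The class and parameter names are ours, and the treatment of the filter tensor as a batch of $C_{out}$ maps with $C_{in}$ channels (so that the $1\times1$ grouped convolutions act channel-wise on the filter) is an illustrative reading rather than a definitive implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class FTN(nn.Module):
    """Two 1x1 grouped convolutions with a PReLU in between, applied to
    the filter tensor itself. A filter of shape (C_out, C_in, K, K) is
    treated as a batch of C_out "images" with C_in channels, so the
    module never touches image-sized tensors (data-agnostic)."""
    def __init__(self, channels, groups=1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 1, groups=groups,
                               bias=False)
        self.act = nn.PReLU(num_parameters=1, init=1.0)
        self.conv2 = nn.Conv2d(channels, channels, 1, groups=groups,
                               bias=False)

    def forward(self, f):
        return self.conv2(self.act(self.conv1(f)))

class FTNConv(nn.Module):
    """Convolution whose effective filter follows Eq. (1):
    (1 - alpha) * f_A + alpha * FTN(f_A)."""
    def __init__(self, in_ch, out_ch, k, groups=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.ftn = FTN(in_ch, groups=groups)

    def forward(self, x, alpha):
        w = (1.0 - alpha) * self.weight + alpha * self.ftn(self.weight)
        return F.conv2d(x, w, padding=self.weight.shape[-1] // 2)

layer = FTNConv(64, 64, 3, groups=16)       # alpha=0: train the main filter
y = layer(torch.randn(1, 64, 32, 32), 0.5)  # 0<alpha<1: interpolated level
\end{verbatim}
In the first stage, only \texttt{layer.weight} is optimized ($\alpha=0$); in the second stage, the main filter is frozen and only the FTN parameters are trained with $\alpha=1$.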
\subsection{Initialization of FTN}
When we train for the second level, initializing the FTN as an identity function is desired, so that the convolution initially behaves as
\begin{equation}
\textbf{X}_{i+1} = \textbf{X}_{i} * (FTN(\textbf{f}_{Ai}))
\label{eq:ftn_init}
\end{equation}
\noindent However, networks are usually initialized with common methods such as \cite{xavier2010,prelu2015}, which would predict random filters from $\textbf{f}_{Ai}$. These kinds of initialization make the training very unstable unless a special trick is added. In our framework, every convolution and activation function is initialized as an identity function. Convolutions can easily become identities~\cite{adafm2019}. For the activations, we use PReLU \cite{prelu2015} with an initial negative slope $a=1$, which is then learned through training. This initialization of the non-linear layers makes the training more stable. A sketch of this initialization is given below.
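The following sketch assumes the hypothetical \texttt{FTN} module above ($1\times1$ grouped convolutions with equal input and output channel counts):
\begin{verbatim}
import torch
import torch.nn as nn

def init_identity_(ftn):
    """Initialize the FTN so that FTN(f) = f before training starts."""
    for conv in (ftn.conv1, ftn.conv2):
        nn.init.zeros_(conv.weight)
        c_out, c_in_per_group = conv.weight.shape[:2]
        with torch.no_grad():
            for i in range(c_out):
                # Each output channel copies its corresponding input
                # channel within its group: the identity mapping for a
                # grouped 1x1 convolution.
                conv.weight[i, i % c_in_per_group, 0, 0] = 1.0
    # A PReLU with negative slope 1 is the identity; the slope remains
    # learnable afterwards.
    nn.init.constant_(ftn.act.weight, 1.0)
\end{verbatim}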
\begin{table}[!t]
\centering
\caption{\small{\textbf{Filter analysis for regularization.} Distance and similarity between the two levels. We measure the Mean Absolute Error (MAE) of linear filter interpolation and the filter-wise normalized cosine similarity. The task is PD-controllable super-resolution and the baseline network is \textbf{CFSNet-30}}}
\resizebox{0.5\linewidth}{!}{
\begin{tabular}{cc|ccc}
\toprule
& & FTN (G=16) & FTN & Fine-tuning \\ \midrule
& MAE & \textbf{0.0082} & 0.0118 & 0.0139 \\
& Cos Sim. & \textbf{0.9443} & 0.8937 & 0.8666 \\ \bottomrule
\end{tabular}}
\label{tb:sr_similarity}
\end{table}
\subsection{Regularized Adaptation}
\label{regularization}
Good adaptation and good interpolation, introduced in Section \ref{intro}, are in a trade-off relationship. For good adaptation, the flexibility of the transition between the two levels is important. For example, AdaFM fails to adapt parameters between levels that are in a non-linear relationship, while producing good interpolation results due to its linearity. However, focusing on good adaptation without any constraint, as DNI does, can make the network forget the initial level. This makes it difficult to obtain smooth and meaningful intermediate filters through interpolation. In that sense, FTN is a regularized non-linear method that satisfies both the adaptation and interpolation conditions.
In our FTN, the learnable transformation is shared across the spatial locations of the filters and the output channels in a layer. Only channel-wise features are used for adaptation.
This can be viewed as a form of regularization and a second-order representation of the filter. When group convolution ($G>2$) is used, feature extraction across channels is restricted, which results in stronger regularization.
Table \ref{tb:sr_similarity} shows the distance between the filters of the main network and those obtained by fine-tuning or by passing through FTNs. The results show that the filter-conditioned regularization is effective in preventing significant filter changes. The adaptation and interpolation performance will be discussed in the experiment sections.
\subsection{Complexity Analysis}
\label{efficiency}
\begin{table}[!t]
\begin{center}
\caption{\small{Overall computations, relative additional computations over the baseline (in percentage), and the number of parameters of each framework. Settings and network configurations are described in Sec. 4.1}}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{lcccccc}
\toprule
Task & \multicolumn{2}{c}{Denoising} & \multicolumn{2}{c}{$\times$2 Super-Resolution} & \multicolumn{2}{c}{Style Transfer} \\
\midrule
Network & \multicolumn{2}{c}{AdaFM-Net} & \multicolumn{2}{c}{CFSNet-30} & \multicolumn{2}{c}{Transform-Net \cite{perceptual2016}} \\
\cmidrule{2-7} & GFLOPs & Params(M) & GFLOPs & Params(M) & GFLOPs & Params(M) \\
\midrule
Baseline & 25.11 & 1.41 & 155.96 & 2.37 & 40.42 & 1.68 \\
+ CFSNet \cite{cfsnet2019} & 46.96 (87.02\%) & 3.06 & 311.36 (99.64\%) & 4.93 & - & - \\
+ AdaFM \cite{adafm2019} & 26.01 (3.58\%) & \textbf{1.46} & 162.50 (4.20\%) & \textbf{2.47} & 41.73 (3.24\%) & \textbf{1.72} \\
+ Dynamic-Net \cite{dynamicnet2019} & - & - & - & - & 62.24 (53.98\%) & 2.59 \\
+ \textbf{FTN} & \textbf{25.36 (0.10\%)} & 1.83 & \textbf{156.34 (0.02\%)} & 3.01 & \textbf{40.86 (1.09\%)} & 2.06 \\
+ \textbf{FTN(G=16)} & \textbf{25.13 (0.01\%)} & \textbf{1.44} & \textbf{156.00 (0.00\%)} & \textbf{2.41} & \textbf{40.46 (0.10\%)} & \textbf{1.70} \\
\bottomrule
\end{tabular}}%
\label{tbl:complexity}
\end{center}
\end{table}
Table \ref{tbl:complexity} compares the computational complexity and the number of parameters with other frameworks. If a tuning layer with convolutions on a feature map is added, the additional computations (MACs) are $H \times W \times K_{H} \times K_{W} \times C_{in} \times C_{out}$, where $H$, $W$, $K_{H}$, $K_{W}$, $C_{in}$, and $C_{out}$ are the height and width of the feature map (e.g., the image size), the height and width of the filter, and the numbers of input and output channels, respectively. The dominant computations arise from $H$ and $W$. In our network, which is a data-independent module, only $K_{H} \times K_{W} \times C_{in} \times (C_{out}/Groups) \times N$ MACs are needed for a single tuning layer, where $N$ is the depth of the FTN. As shown in Table \ref{tbl:complexity}, FTNs have extremely reduced computational complexity and a similar or much lower number of parameters than other frameworks across various tasks and networks.
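For concreteness, the following is a back-of-the-envelope comparison of the two per-layer MAC counts above; the feature map size and channel counts are illustrative values, not taken from Table \ref{tbl:complexity}.
\begin{verbatim}
def feature_conv_macs(H, W, Kh, Kw, Cin, Cout):
    # Tuning layer applied to a feature map (e.g., a branch-based
    # framework): the cost grows with the spatial resolution H x W.
    return H * W * Kh * Kw * Cin * Cout

def ftn_macs(Kh, Kw, Cin, Cout, groups, N):
    # The FTN convolves the filter itself, so H and W never appear.
    return Kh * Kw * Cin * (Cout // groups) * N

# Illustrative setting: 256x256 feature map, 3x3 filters, 64 channels.
print(feature_conv_macs(256, 256, 3, 3, 64, 64))  # 2,415,919,104 MACs
print(ftn_macs(3, 3, 64, 64, groups=16, N=2))     # 4,608 MACs
\end{verbatim}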
\section{Experiments}
\subsection{Experimental Settings}
\label{setup}
To evaluate recent CLL frameworks, we compare FTNs against fine-tuning (DNI) \cite{dni2019}, AdaFM \cite{adafm2019}, CFSNet \cite{cfsnet2019}, and Dynamic-Net \cite{dynamicnet2019} on four general image processing tasks. We add the tuning layer of FTNs to every convolution, and do the same for AdaFM \cite{adafm2019} except for the last layer, to prevent boundary artifacts. We add a ResBlock-wise (or DenseBlock-wise) tuning branch for CFSNet \cite{cfsnet2019}. For a fair comparison, the main networks are identical and shared across frameworks, and every hyper-parameter is identical except for the tuning layers of each framework. More detailed configurations are described in the supplementary material.
\noindent\textbf{Denoising \& DeJPEG.} We use two baseline networks proposed in \cite{adafm2019} and \cite{cfsnet2019}. The first network (AdaFM-Net \cite{adafm2019}) consists of 16 residual blocks as in \cite{edsr2017} with downsampling and upsampling layers. The second network (CFSNet-10 \cite{cfsnet2019}) consists of 10 residual blocks without downsampling or upsampling. We use DIV2K \cite{DIV2k} as the training set with a patch size of 48. We test on the CBSD68 \cite{cbsd68} dataset for denoising and LIVE1 \cite{live1} for deJPEG. We fine-tune the main network from the weaker degradation level ($\sigma=20$ in denoising and $q=40$ in deJPEG). The maximum PSNR is obtained via a grid search over $\alpha$.
\noindent\textbf{PD-Controllable Super-resolution.} In image super-resolution, as reported in \cite{pdt2018}, fidelity and naturalness exhibit a trade-off. A comparison between algorithms should consider this trade-off by plotting perception (naturalness)-distortion (fidelity) curves. Drawing this curve is possible by changing the weights between loss terms. As in \cite{esrgan2018}, we train phase 1 using the L1 loss and fine-tune using a combined loss of L1, perceptual (VGG, \cite{perceptual2016}), and GAN losses. We evaluate using two baseline networks proposed in \cite{cfsnet2019} (CFSNet-30) and \cite{esrgan2018} (ESRGAN). CFSNet-30 is the deeper version of CFSNet-10 for image super-resolution, and ESRGAN consists of multiple densely connected \cite{rdn2018} residual blocks. We use DIV2K as the training set with a patch size of 128 and PIRM \cite{pirm2018} as the test set. PSNR and SSIM \cite{ssim} are used as distortion metrics, and NIQE \cite{niqe} and the Perceptual Index \cite{pirm2018} are used as perception metrics.
\noindent\textbf{Style Transfer.} For style transfer, we use the Transform-Net proposed in \cite{perceptual2016} with instance normalization \cite{instance_norm}. We follow the settings of Dynamic-Net \cite{dynamicnet2019}. The COCO 2014 training dataset \cite{cocodataset} is used for training. Dynamic-Net inserts three tuning branches into pre-defined layers of the main network, while FTNs are inserted in every convolutional layer. This means that FTNs have more opportunities for layer-wise control (see Fig. 1 of the supplementary material of Dynamic-Net \cite{dynamicnet2019}).
\begin{table}[!t]
\centering
\caption{\small{\textbf{Ablation study for structures of FTN.} Average PSNR (dB) on CBSD68 denoising test dataset. Unseen noise levels are denoted with
*. The baseline network is \textbf{AdaFM-Net}. We color the \first{best} and the \second{second best}}}
\setlength{\belowcaptionskip}{-10pt}
\resizebox{0.4\linewidth}{!}{
\begin{tabular}{cc|ccccc}
\toprule
& Noise Level $\sigma$ & 20 & 30* & 40* & 50 \\ \midrule
& From Scratch & 32.44 & 30.37 & 29.00 & 27.96 \\
& FTN & \textbf{32.44} & 30.18 & 28.90 & \first{28.04} \\
& FTN-gc4 & \textbf{32.44} & \second{30.31} & \second{28.92} & 28.02 \\
& FTN-gc16 & \textbf{32.44} & \first{30.37} & \first{28.99} & 28.01 \\
& FTN-deeper & \textbf{32.44} & 30.06 & 28.81 & \second{28.03} \\
& FTN-spatial & \textbf{32.44} & 30.16 & 28.88 & \first{28.04} \\ \bottomrule
\end{tabular}}
\label{tb:dn_ablation}
\end{table}
\begin{figure}[!t]
\setlength{\abovecaptionskip}{-9pt}
\setlength{\belowcaptionskip}{-9pt}
\begin{center}
\includegraphics[width=0.58\linewidth]{./images/sr_abl.pdf}
\end{center}
\caption{\small{\textbf{Ablation study on the linearity of transition modules in PD-control.} Results show that a linear module cannot transition well toward the second level}}
\label{fig:sr_abl}
\end{figure}
\subsection{Ablation Study}
\label{ablation}
First, to examine the effect of regularization, we perform an ablation study on AdaFM-Net comparing different structures of FTNs in Table \ref{tb:dn_ablation}. We define several variants of FTN: more regularized~(FTN-\textit{gc}) and less regularized~(FTN-\textit{deeper}, FTN-\textit{spatial}). FTN-\textit{gc} is the version in which the convolution layers are replaced by group convolutions. FTN-\textit{deeper} is a three-layer version of the FTN, whose intermediate results are worse than the others because too much modification hurts the interpolation results. FTN-\textit{spatial} is a depth-wise convolution version, whose performance is inferior to the other channel-wise convolutions. In Table \ref{tb:dn_ablation}, the versions with less regularization show better adaptation performance, and the ones with more regularization show better interpolation performance.
Also, Fig.~\ref{fig:sr_abl} compares different numbers of groups in FTN-\textit{gc} on PD-controllable super-resolution. The curves also show that stronger regularization improves the interpolation performance while making further adaptation more difficult.
\subsection{Adaptation Performance}
\label{adaptation}
\begin{table}[!t]
\centering
\caption{\small{\textbf{Gaussian Denoising Results.} Average PSNR (dB) on CBSD68 test dataset. Unseen noise levels are denoted with *. We color the \first{best} and the \second{second best}}}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{c|cccc|cccc|cccc|cccc}
\toprule
\multicolumn{1}{c|}{} & \multicolumn{8}{c|}{\large{AdaFM-Net}} & \multicolumn{8}{c}{\large{CFSNet-10}} \\
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Short Adaptation} & \multicolumn{4}{c|}{Long Adaptation} & \multicolumn{4}{c|}{Short Adaptation} & \multicolumn{4}{c}{Long Adaptation} \\
\midrule
Noise Level & 20 & 40* & 60* & 80 & 20 & 40* & 60* & 80 & 20 & 40* & 60* & 80 & 20 & 40* & 60* & 80 \\
\midrule
DNI \cite{dni2019} & \textbf{32.44} & 28.20 & \second{26.98} & 25.97 & \textbf{32.44} & 27.54 & 26.74 & 25.93 & \textbf{32.42} & \first{28.87} & \first{27.01} & \first{25.96} & \textbf{32.42} & \first{28.89} & \first{27.11} & \first{25.95} \\
AdaFM \cite{adafm2019} & \textbf{32.44} & 28.17 & 26.77& 25.96 & \textbf{32.44} & 28.28 & 26.80 & 25.96 & \textbf{32.42} & 28.48 & 26.75 & 25.84 & \textbf{32.42} & 28.42 & 26.72 & 25.82 \\
CFSNet \cite{cfsnet2019} & \textbf{32.44} & 28.41 & 26.87 & \second{26.00} & \textbf{32.44} & 28.44 & 26.86 & \second{26.00} & \textbf{32.42} & 28.65 & 26.95 & \second{25.93} & \textbf{32.42} & 28.67 & 26.98 & \second{25.93} \\
\textbf{FTN-gc16} & \textbf{32.44} & \first{28.78} & \first{27.05} & 25.98 & \textbf{32.44} & \second{28.78} & \first{27.05} & 25.98 & \textbf{32.42} & \second{28.77} & \second{27.00} & 25.89 & \textbf{32.42} & \second{28.77} & \second{27.00} & 25.88 \\
\textbf{FTN-gc4} & \textbf{32.44} & \second{28.64} & 26.95 & \second{26.00} & \textbf{32.44} & 26.62 & \second{26.92} & \second{26.00} & \textbf{32.42} & 28.65 & 26.90 & 25.90 & \textbf{32.42} & 28.64 & 26.90 & 25.90 \\
\textbf{FTN} & \textbf{32.44} & 28.48 & 26.89 & \first{26.03} & \textbf{32.44} & \first{28.86} & 26.79 & \first{26.03} & \textbf{32.42} & 28.45 & 26.86 & 25.92 & \textbf{32.42} & 28.49 & 26.89 & \second{25.93} \\
\bottomrule
\end{tabular}}
\label{tb:dn_result}
\end{table}
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{-1pt}
\setlength{\belowcaptionskip}{-25pt}
\begin{center}
\captionsetup[subfigure]{labelformat=empty}
\rotatebox{90}{\makebox[19mm][c]{\small{\textsc{Style A}}}}
\subfloat
{\includegraphics[height=0.157\linewidth]{./images/styleimage/cat.jpg}}\
\hfill
\rotatebox{90}{\makebox[19mm][c]{\small{\textsc{Content}}}}
\subfloat
{\includegraphics[height=0.157\linewidth]{./images/styleimage/8.png}}\
\hfill
\rotatebox{90}{\makebox[19mm][c]{\small{\textsc{Style B}}}}
\subfloat
{\includegraphics[height=0.157\linewidth]{./images/styleimage/la_muse.jpg}}\
\\[-2ex]
\rotatebox{90}{\makebox[27mm][c]{\small{\textsc{AdaFM}}}}
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_ADA_cat2lamuse8th_0_0.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_ADA_cat2lamuse8th_0_2.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_ADA_cat2lamuse8th_0_4.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_ADA_cat2lamuse8th_0_6.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_ADA_cat2lamuse8th_0_8.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_ADA_cat2lamuse8th_1_0.png}}\
\\[-2.5ex]
\rotatebox{90}{\makebox[27mm][c]{\small{\textsc{Dynamic-Net}}}}
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_DYN_cat2lamuse8th_0_0.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_DYN_cat2lamuse8th_0_2.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_DYN_cat2lamuse8th_0_4.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_DYN_cat2lamuse8th_0_6.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_DYN_cat2lamuse8th_0_8.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_DYN_cat2lamuse8th_1_0.png}}\
\\[-2.5ex]
\rotatebox{90}{\makebox[27mm][c]{\small{\textsc{FTN}}}}
\subfloat[$\alpha=0.0$]
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_FTN_cat2lamuse8th_0_0.png}}\
\hfill
\subfloat[$\alpha=0.2$]
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_FTN_cat2lamuse8th_0_2.png}}\
\hfill
\subfloat[$\alpha=0.4$]
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_FTN_cat2lamuse8th_0_4.png}}\
\hfill
\subfloat[$\alpha=0.6$]
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_FTN_cat2lamuse8th_0_6.png}}\
\hfill
\subfloat[$\alpha=0.8$]
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_FTN_cat2lamuse8th_0_8.png}}\
\hfill
\subfloat[$\alpha=1.0$]
{\includegraphics[width=0.155\linewidth]{./images/styleimage/output_FTN_cat2lamuse8th_1_0.png}}\
\caption{\small{\textbf{Visual comparison of controllable style transfer results between two styles.} Results show that a linear module cannot handle a semantic task transition. Compared to Dynamic-Net, FTN shows better adaptation results}}
\label{fig:style_visual}
\end{center}
\end{figure*}
We compare the adaptation performance with the other CLL methods. Adaptation performance means the performance on the second level compared to a network trained only for that level. Table~\ref{tb:dn_result} shows the adaptation/interpolation performances on the denoising task (results on the deJPEG task and result images are reported in the supplementary material). In the table, the adaptation performance corresponds to the PSNR at noise level 80. \textit{Short/long adaptation} indicates the training time for the second-level adaptation. More adaptation results are described in the supplementary material. The table shows little difference between the adaptation performances of the compared methods, including even AdaFM, which uses linear adaptation. This is because the denoising and deJPEG tasks require only small changes of the model parameters as the level changes. On the other hand, according to Fig.~\ref{fig:sr_abl}, the adaptation of AdaFM is worse than that of FTN-\textit{gc16}, the most regularized version of FTN. Fig.~\ref{fig:style_visual} shows another aspect: it presents results on the style transfer task, which requires a large transition of the model parameters as the style changes. According to Fig.~\ref{fig:style_visual}, AdaFM fails to adapt from style A to style B. These results show that linear adaptation has limitations in reaching a difficult second level. Dynamic-Net cannot deliver the second style smoothly because it only changes three pre-defined layers, while FTN changes all convolutional filters. More stylization results are described in the supplementary material.
\subsection{Interpolation Performance}
\label{interpolation}
Although a CLL algorithm may successfully adapt its network to the second level, it might fail on the interpolated levels. In particular, when the network forgets the initial level (i.e., loses correlation with the initial status) during the adaptation process, the interpolated parameters no longer work for the intermediate levels. We evaluate interpolation performance in the following three aspects.
\begin{figure*}[!t]
\setlength{\belowcaptionskip}{-17pt}
\begin{center}
\captionsetup[subfigure]{labelformat=empty}
\rotatebox{90}{\makebox[40mm][c]{{\textsc{CFSNet-30}}}}
\subfloat
{\includegraphics[width=0.450\linewidth]{./images/sr_cfs_pi.pdf}}\
\subfloat
{\includegraphics[width=0.450\linewidth]{./images/sr_cfs_sn.pdf}}\
\\[-8ex]
\rotatebox{90}{\makebox[40mm][c]{{\textsc{ESRGAN}}}}
\subfloat[{RMSE-PI}]
{\includegraphics[width=0.450\linewidth]{./images/sr_esr_pi.pdf}}\
\subfloat[{SSIM-NIQE}]
{\includegraphics[width=0.450\linewidth]{./images/sr_esr_sn.pdf}}\
\caption{\small{\textbf{Results of PD-controllable image super-resolution ($\times$4).} Combined results, additional adaptation results, and result images are provided in the supplementary material}}
\label{fig:sr_results}
\end{center}
\end{figure*}
\noindent\textbf{Comparison on performance.} First, for good interpolation, the performance on the intermediate levels relative to networks trained only for each level is important. In Table~\ref{tb:dn_result}, the results show that the short adaptation performance is similar to, or even better than, the long one. On AdaFM-Net, FTN-gc4 and FTN-gc16 outperform the other frameworks. On CFSNet-10, DNI outperforms the other frameworks, but the margin is not large. CFSNet-10 is shallower than AdaFM-Net, which means that its parameter space can more easily be approximately linear; this can increase the performance of linear interpolation (DNI). Compared to AdaFM and CFSNet, the interpolation performances of FTNs are superior while requiring much lower computational costs.
However, compared to the denoising task, DNI shows a different pattern in Fig.~\ref{fig:sr_results}, Fig.~\ref{fig:graph_dj_network}, and Fig.~\ref{fig:dn_results}. Although DNI performs well on both end levels, it shows significantly unstable and low performance on the intermediate levels. This is because the fine-tuning process of DNI just updates the parameters without considering the initial state. Therefore, the relation between the parameters of the two levels becomes weaker, and the interpolated parameters stop behaving as intended. In the intermediate images for DNI (in the supplementary material and in Fig.~\ref{fig:dn_results}), a slight color difference can cause a huge pixel error (RMSE) but a similar SSIM value. On the other hand, the FTN always takes the initial filters as input and can additionally be regularized by group convolutions; therefore, we obtain better interpolation performance. Our results are slightly inferior to CFSNet, but FTNs require extremely low computations and few parameters.
\begin{figure*}[!t]
\setlength{\belowcaptionskip}{-30pt}
\begin{center}
\captionsetup[subfigure]{labelformat=empty}
\rotatebox{90}{\makebox[40mm][c]{{\textsc{FTN-gc16}}}}
\subfloat
{\includegraphics[width=0.440\linewidth]{./images/graphs/ftn_ada_vs_cfs_short_dj.pdf}}\
\rotatebox{90}{\makebox[40mm][c]{{\textsc{DNI}}}}
\subfloat
{\includegraphics[width=0.440\linewidth]{./images/graphs/dni_ada_vs_cfs_short_dj.pdf}}\
\caption{\small{\textbf{Smoothness analysis for deJPEG ($q=40$ to $q=10$).} We plot the \first{\textbf{$q=30$}} and \third{\textbf{$q=20$}} lines as linearly optimal interpolation points. The number indicates the input quality factor; $a$ denotes the AdaFM-Net network and $c$ denotes the CFSNet-10 network}}
\label{fig:graph_dj_network}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\setlength{\belowcaptionskip}{-20pt}
\begin{center}
\captionsetup[subfigure]{labelformat=empty}
\rotatebox{90}{\makebox[40mm][c]{{\textsc{FTN-gc16}}}}
\subfloat
{\includegraphics[width=0.440\linewidth]{./images/graphs/ftn_ada_vs_cfs_short.pdf}}\
\rotatebox{90}{\makebox[40mm][c]{{\textsc{CFSNet}}}}
\subfloat
{\includegraphics[width=0.440\linewidth]{./images/graphs/cfs_ada_vs_cfs_short.pdf}}\
\\[-5ex]
\rotatebox{90}{\makebox[40mm][c]{{\textsc{AdaFM}}}}
\subfloat
{\includegraphics[width=0.440\linewidth]{./images/graphs/ada_ada_vs_cfs_short.pdf}}\
\rotatebox{90}{\makebox[40mm][c]{{\textsc{DNI}}}}
\subfloat
{\includegraphics[width=0.440\linewidth]{./images/graphs/dni_ada_vs_cfs_short.pdf}}\
\caption{\small{\textbf{Smoothness analysis for denoising ($\sigma=20$ to $\sigma=80$).} We plot the \first{\textbf{$\sigma=40$}} and \third{\textbf{$\sigma=60$}} lines as linearly optimal interpolation points. The number indicates the input noise level; \textbf{$a$} denotes the AdaFM-Net network and \textbf{$c$} denotes the CFSNet-10 network. Our FTN-gc16 results show that the choice of $\alpha$ matches these lines most closely}}
\label{fig:graph_dn_network}
\end{center}
\end{figure*}
\noindent\textbf{Interpretability.} For practical use, it is essential for users to know which value of $\alpha$ corresponds to which level. For example, in the denoising task, suppose that we train a network to work between the levels $\sigma=20$ and $\sigma=80$. When we set $\alpha=0.5$, it is reasonable to expect the network to perform best for the level $\sigma=50$, the middle point of the interval. In other words, $\alpha$ has to be linear in the level. Fig.~\ref{fig:graph_dn_network} shows the results of the denoising task over various noise levels $\sigma$ of the test set and the parameter $\alpha$. According to the figure, the maximum performance points for $\sigma=40$ and $\sigma=60$ match the vertical lines of $\alpha=0.33$ and $\alpha=0.66$ more closely than those of the other methods.
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{-1pt}
\setlength{\belowcaptionskip}{-30pt}
\begin{center}
\captionsetup[subfigure]{labelformat=empty}
\rotatebox{90}{\makebox[30mm][c]{\small{\textsc{Input}}}}
\subfloat
{\includegraphics[width=0.158\linewidth]{./images/artifact/output_CFS0009G20_x1_0LR.png}}\
\hfill
\rotatebox{90}{\makebox[30mm][c]{\small{\textsc{Clean}}}}
\subfloat
{\includegraphics[width=0.158\linewidth]{./images/artifact/output_CFS0009G20_x1_0HR.png}}\
\\[-2ex]
\rotatebox{90}{\makebox[20mm][c]{\small{\textsc{FTN-gc16}}}}
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_FTN0009G20_lambda0_00_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_FTN0009G20_lambda0_20_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_FTN0009G20_lambda0_40_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_FTN0009G20_lambda0_60_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_FTN0009G20_lambda0_80_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_FTN0009G20_lambda1_00_x1_0SR.png}}\
\\[-2ex]
\rotatebox{90}{\makebox[20mm][c]{\small{\textsc{DNI}}}}
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_DNI0009G20_lambda0_00_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_DNI0009G20_lambda0_20_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_DNI0009G20_lambda0_40_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_DNI0009G20_lambda0_60_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_DNI0009G20_lambda0_80_x1_0SR.png}}\
\hfill
\subfloat
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_DNI0009G20_lambda1_00_x1_0SR.png}}\
\\[-2ex]
\rotatebox{90}{\makebox[20mm][c]{\small{\textsc{CFSNet}}}}
\subfloat[$\alpha=0.0$]
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_CFS0009G20_lambda0_00_x1_0SR.png}}\
\hfill
\subfloat[$\alpha=0.2$]
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_CFS0009G20_lambda0_20_x1_0SR.png}}\
\hfill
\subfloat[$\alpha=0.4$]
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_CFS0009G20_lambda0_40_x1_0SR.png}}\
\hfill
\subfloat[$\alpha=0.6$]
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_CFS0009G20_lambda0_60_x1_0SR.png}}\
\hfill
\subfloat[$\alpha=0.8$]
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_CFS0009G20_lambda0_80_x1_0SR.png}}\
\hfill
\subfloat[$\alpha=1.0$]
{\includegraphics[width=0.155\linewidth]{./images/artifact/output_CFS0009G20_lambda1_00_x1_0SR.png}}\
\caption{\small{\textbf{Denoising results on a weak noise level ($\sigma=20$).} When the user moves toward large $\alpha$, oversmoothing and color artifacts arise}}
\label{fig:dn_results}
\end{center}
\end{figure*}
\noindent\textbf{Oversmoothing Artifacts.} In real-world applications, since the user may not know the degradation level, they may wish to control the \textit{strength} of the denoising. We show a visual denoising result for an extreme case in Fig. \ref{fig:dn_results}. In Fig. \ref{fig:dn_results}, the input noise level is 20, which means the optimal results come from $\alpha=0$ in all frameworks, while $\alpha=1$, which is optimal for noise level 80, can over-smoothen the image. When $\alpha$ increases, the DNI and CFSNet results show striking color artifacts in the background, while the FTN-gc16 results are much cleaner, which can be a strength for real-world feedback-based systems. This is because CFSNet exploits a dual-network structure in which each network cannot consider the other level at the test phase. In contrast, in FTNs, the two filters on both sides of the interpolation are correlated and regularized.
\subsection{Efficient Pixel-Adaptive Continuous Control}
\label{pixeladaptive}
\begin{figure}[!t]
\setlength{\abovecaptionskip}{-5pt}
\begin{center}
\includegraphics[width=0.6\columnwidth]{./images/pixel_adaptive_control.png}
\end{center}
\caption{\small{\textbf{Efficient pixel-adaptive control results.} Please zoom in for best view}}
\label{fig:pixel_adaptive}
\end{figure}
Considering real-world imaging applications, the user may want to control not only the global level but also local (pixel-wise) levels. In this case, every pixel has its own level from $\alpha=0$ to $\alpha=1$. Naive pixel-adaptive control requires filters for every level, which can cause large memory issues. For efficient inference with pixel-adaptive continuous control, we propose a simple modification of pixel-adaptive convolution, described as
\begin{equation}
\begin{aligned}
\textbf{Y} {} & = \textbf{X} {*}_{i,j} (\textbf{f} \times (1-{\alpha}_{i, j}) + FTN(\textbf{f}) \times {\alpha}_{i, j} ) \\
& = (\textbf{1}-{\textbf{A}}) \odot (\textbf{X} * \textbf{f}) + {\textbf{A}} \odot (\textbf{X} * FTN(\textbf{f}) )
\end{aligned}
\end{equation}
\noindent where $\textbf{f}$ is the global filter, ${*}_{i, j}$ is the pixel-adaptive convolution, ${\alpha}_{i, j}$ is the per-pixel level, and \textbf{A} is the global level map that describes the pixel-wise levels. $\odot$ denotes element-wise multiplication. This modification makes the implementation much simpler because only two global convolutions and element-wise multiplications are needed for pixel-adaptive control.
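A minimal PyTorch sketch of this pixel-adaptive form follows; the \texttt{ftn} stand-in and the left-to-right level map are illustrative assumptions, with a trained filter-transition module taking the stand-in's place in practice.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pixel_adaptive_ftn_conv(x, f, ftn, level_map):
    """Blend two globally convolved outputs per pixel:
    (1 - A) . (X * f) + A . (X * FTN(f))."""
    pad = f.shape[-1] // 2
    y0 = F.conv2d(x, f, padding=pad)       # first-level output
    y1 = F.conv2d(x, ftn(f), padding=pad)  # second-level output
    return (1.0 - level_map) * y0 + level_map * y1

x = torch.randn(1, 64, 32, 48)
f = torch.randn(64, 64, 3, 3) * 0.02
ftn = lambda w: 0.9 * w  # stand-in for a trained filter-transition module
A = torch.linspace(0, 1, 48).view(1, 1, 1, 48).expand(1, 1, 32, 48)
y = pixel_adaptive_ftn_conv(x, f, ftn, A)  # smooth left-to-right control
\end{verbatim}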
Examples are shown in Fig. \ref{fig:pixel_adaptive}. We test two examples: \textit{PD-control} and \textit{style control}. In Fig. \ref{fig:pixel_adaptive} (a), from the leftmost to the rightmost pixels, the PSNR decreases and the texture becomes sharper (higher perceptual quality) continuously, and in Fig. \ref{fig:pixel_adaptive} (b), the pixels are smoothly stylized from one style to the other. More results with high-resolution sources can be found in the supplementary material.
\section{Conclusion}
In this paper, we define three conditions for a general and stable CLL framework: good adaptation, good interpolation, and efficiency. To satisfy these conditions, we propose Filter Transition Networks (FTNs) and a stable initialization method. The non-linear structure of FTNs satisfies the adaptation condition, and their regularized structure makes the interpolation smooth. FTNs are extremely lightweight because of their data-agnostic structure. Results on general imaging tasks show that FTNs are better than other unstable frameworks and comparable to more complex ones.
\par\vfill\par
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
Selecting a small subset of elements to represent a database is a fundamental problem of practical value, since a database is often far too large for a typical user to search in its entirety. Example applications include selecting which products to advertise on a website or which phones to put on display. Two major techniques used for this purpose are top-$k$ queries and skyline queries.
In top-$k$ queries, a utility function is given by the user, and the top $k$ records maximizing the utility function are returned. This has the advantage of giving the user a number of items to choose from, which is especially important when the exact utility function of the user is unclear or only vaguely known. However, the weakness of this type of query is that a utility function close to the true utility function must be known in advance, while there can often be a wide variety of possible utility functions \cite{Chaudhuri:1999:ETS:645925.671359}.
In skyline queries, all records not dominated by other records are returned, where a record dominates another if all of its coordinates are no worse and at least one of them is strictly better than those of the second record. While having the advantage of functioning without the specification of any utility function, the skyline query does not effectively reduce the size of high-dimensional databases \cite{Skyline}. In some cases, the skyline query returns the entire database when no record is dominated.
To avoid these limitations, the Regret Minimizing Set (RMS) query \cite{Nanongkai2010} was proposed to simultaneously possess the strengths of both types of queries,
resulting in many recent studies about this query in the database
community \cite{Asudeh:2017:ECR:3035918.3035932,Xie2019,Xie2018, Qi:2018:KQU:3243648.3230634,KesslerFaulkner:2015:KRQ:2831360.2831364,Unified}. In RMS, a subset of $r$ elements is chosen from a database of $n$ points such that the maximum regret ratio of any possible utility function between the best element in the database and the best element in the selected subset is minimized. This preserves the benefit of the bounded size of the top-$k$ query while keeping the skyline query's advantage of not requiring an exact utility function.
As comparing the elements in the selected subset to the best element in the database is a very demanding criterion, a relaxed version of the problem was proposed in $k$-Regret Minimizing Set ($k$-RMS) queries. In $k$-RMS, the selected subset is chosen such that the maximum regret ratio between the best element in the subset and the $k^{th}$ best element in the database for any given utility function is minimized.
Another approach is Average Regret Minimizing Sets (ARMS), where instead of minimizing the maximum regret ratio, the objective is instead minimizing the average regret ratio across a distribution of (possibly nonlinear) utility functions. The motivation behind using the average rather than the maximum regret is that optimizing the maximum may unfairly prioritize the least satisfied utility functions, while optimizing the average more properly satisfies the majority of utility functions. Since the expected regret of the distribution can be approximated with a sample with high confidence as the sample grows larger, it has been proposed to instead minimize the regret on a sample of $N$ utility functions.
More recently, multiple papers have also considered the happiness maximization version of RMS problems. \cite{being_happy:ICDE:2020} studied the min-size version of RMS, where the goal is to find the smallest set that provides a given level of happiness, defined as equivalent to 1 minus the regret. Another, \cite{Qiu2018}, studied the happiness maximization version of $k$-RMS and provided an optimization to the greedy algorithm based on the monotonicity of the minimum happiness function. \cite{Storandt2019}, which studied both ARMS and its happiness maximization variant, makes particular use of the properties of the average happiness ratio to provide a better approximation ratio for the happiness variant over the original ARMS. However, there has not yet been a study that systematically compares the theoretical properties of the regret minimization and happiness maximization.
In this paper, we study the approximation of the happiness ratio for $k$-RMS and ARMS which arguably has more natural theoretical properties than approximating the regret. In particular, we are able to resolve the approximability status of happiness approximation for $k$-RMS completely while the approximability of the regret is still an open problem for some settings. Even more so, we provide several happiness approximation algorithms with provable bounds for $k$-RMS/ARMS that do not admit bounds on regret.
\begin{enumerate}
\item For $k$-RMS, we will show the happiness ratio is NP-Hard to approximate when $d$, the dimensionality, is treated as an input for any fixed $k$ through a reduction from the set cover problem. We also extend this result to show that the problem of approximating the regret within a finite ratio is NP-Hard when $k$ and $d$ are treated as inputs, partially answering an open question posed in \cite{Kumar2018}.
\item We propose multiplicative and additive dataset reduction schemes for $k$-RMS from which we derive polynomial-time approximation schemes when $d$ is fixed. Together with previous results, this completely resolves the hardness of approximating the happiness of $k$-RMS for any $k$ and $d$, including unfixed $d$. We experimentally show that dataset reduction schemes can be used to significantly reduce the running time of existing heuristic based solvers for $k$-RMS while not significantly worsening the minimum happiness ratio/maximum regret ratio. For the largest settings tested, the reduction scheme was able to reduce the runtime by up to 93\% (from 4.2 hours to 16.7 minutes) while keeping happiness within 90\% of the original.
\item For ARMS, we provide a $1-\frac{1}{e}$-approximation algorithm for the happiness of a function sample of size $N$ with a time complexity of $O(drNn)$, an improvement from the previous $O(dNn^3)$-time algorithm originally proposed for regret with no constant approximation bound \cite{Zeighami:2016:MAR:2882903.2914831}. We experimentally show that our algorithm scales efficiently up to a dataset of 1,000,000 points.
\item For the special case of ARMS on a 2 dimensional dataset where the utility functions considered are linear, we provide an exact algorithm running in $O(n^2)$, an improvement from the $O(n^4)$ algorithm proposed in \cite{Zeighami:2016:MAR:2882903.2914831}. We also provide an approximation version running in $O(\frac{n}{\epsilon} + n \log n)$ where $\epsilon$ is the desired additive approximation ratio.
\end{enumerate}
The rest of the paper proceeds as follows:
Section 2 gives an overview of selected relevant work. Section 3 defines the $k$-RMS problem and presents our hardness results on the approximability of both the happiness and regret of $k$-RMS. Section 4 introduces additive and multiplicative dataset reduction schemes and extends them to polynomial time approximation schemes for $k$-RMS. Section 5 presents ARMS, and presents our proposed approximation algorithms. Section 6 provides experimental results for 1) performance improvements from applying the reduction schemes before running previously proposed heuristic based algorithms for 1-RMS and 2) our proposed approximation algorithm for ARMS. Section 7 is the conclusion and discusses potential future work.
\section{Related Work}
In this section, we discuss some related work relevant to the RMS problem. We follow the naming conventions as in \cite{Xie2019}.
\subsection{RMS Problems}
RMS can be regarded as a special case of $k$-RMS when $k = 1$. Following \cite{Xie2019}, we categorize RMS algorithms for general dimensionality $d$ into two main classes based on whether there exist theoretically guaranteed results: heuristic approaches, and theoretical approaches.
\paragraph{\textbf{Heuristic Approaches}} RMS algorithms that rely on heuristics can be further categorized into two subcategories: Linear Programming (LP)-based algorithms and geometric algorithms. LP-based algorithms include Greedy \cite{Nanongkai2010} and ImpGreedy \cite{Xie2018}. Greedy \cite{Nanongkai2010} initializes the RMS to the point with the best value in the first dimension and iteratively inserts the point that realizes the current maximum regret ratio (computed with an LP) until some defined stopping condition is satisfied; a conceptual sketch of this greedy loop is given below. ImpGreedy \cite{Xie2018} improves the efficiency of Greedy \cite{Nanongkai2010} by pruning nonessential LP computations. As shown in \cite{Qiu2018}, the efficiency can be further improved by performing randomized sampling on the input dataset before the greedy algorithms are executed. The geometric methods GeoGreedy and StoredList are greedy algorithms proposed by \cite{Peng2014}. The difference between GeoGreedy \cite{Peng2014} and Greedy \cite{Nanongkai2010} is that, in each iteration, the computation of the maximum regret ratio is done with computational geometry methods instead of an LP. As described in \cite{Peng2014}, StoredList is a materialization of GeoGreedy that pre-computes a set of candidates to run GeoGreedy on.
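The following conceptual sketch of the greedy template estimates the current maximum regret ratio over a random sample of linear utility functions; the exact methods use an LP (Greedy) or computational geometry (GeoGreedy) instead, so this is a simplification rather than either paper's implementation.
\begin{verbatim}
import numpy as np

def greedy_rms(D, r, num_utils=1000, seed=0):
    """Greedy RMS sketch: start from the point that is best in the
    first dimension, then repeatedly add the best point for the
    sampled utility that currently has maximum regret."""
    rng = np.random.default_rng(seed)
    W = rng.random((num_utils, D.shape[1]))
    W /= W.sum(axis=1, keepdims=True)    # normalize so ||w||_1 = 1
    scores = W @ D.T                     # (num_utils, n) utility scores
    best = scores.max(axis=1)            # best attainable score per utility
    R = [int(np.argmax(D[:, 0]))]
    while len(R) < r:
        cur = scores[:, R].max(axis=1)
        regret = 1.0 - cur / best        # sampled regret ratios
        w_star = int(np.argmax(regret))  # utility realizing the max regret
        R.append(int(np.argmax(scores[w_star])))
    return R

D = np.random.default_rng(1).random((500, 4))
print(greedy_rms(D, r=10))
\end{verbatim}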
\paragraph{\textbf{Theoretical Approaches}} Theoretically guaranteed approaches for RMS include Cube \cite{Nanongkai2010}, $\varepsilon$-Kernel \cite{Agarwal2017,cao_et_al:LIPIcs:2017:7056}, Sphere \cite{Xie2018}, HittingSet \cite{Agarwal2017,Kumar2018} and DMM \cite{Asudeh:2017:ECR:3035918.3035932}.
Cube \cite{Nanongkai2010} constructs a solution set by dividing the data space into hypercubes based on the first $d-1$ dimensions and selecting the point within each hypercube with the largest coordinate in the $d^{th}$ dimension.
HittingSet \cite{Agarwal2017} transforms the RMS problem into a hitting set problem and applies an approximation algorithm from \cite{algorithmDesign}.
DMM works similarly to HittingSet but instead formulates the problem as a matrix min-max problem.
$\varepsilon$-Kernel \cite{Kumar2018} computes an $\varepsilon$-kernel on the original dataset to use as the input to the hitting set formulation and is more efficient than HittingSet. Sphere \cite{Xie2018} selects a small set of representative utility functions and includes points with high utilities for those functions.
\subsection{k-RMS Problems for $k \geq 2$ }
$k$-RMS, proposed by \cite{Chester:2014:CKM:2732269.2732275}, is a generalization of the RMS problem which relaxes the definition of regret to be computed against the $k^{th}$ best element along a given utility function rather than the single best element. Analogously to RMS, the goal of the $k$-RMS query is to minimize the maximum $k$-regret ratio over all possible utility functions while selecting up to $r$ elements. As previously mentioned, RMS can be viewed as a special case of $k$-RMS when $k$=1. The motivation behind this relaxation is that a user would often still be "happy" with even the second or third best choice, which makes optimizing for regret against the single best choice less practically useful. An added benefit of this relaxation is that it allows the dataset to be represented more succinctly.
As $k$-RMS is a relaxation of the stricter $k$-regret query problem, it is possible to apply RMS algorithms such as CUBE and SPHERE, which have known upper bounds of $O(r^{-1/(d-1)})$ and $O(r^{-2/(d-1)})$ respectively, to achieve an upper bound on the maximum $k$-regret ratio \cite{Nanongkai2010,Xie2018} (note that these papers use $k$ to denote the number of selected points rather than the rank). However, these upper bounds may lie far from the optimal $k$-regret, and so these algorithms would not qualify as approximation algorithms for the $k$-RMS problem.
\paragraph{\textbf{Approximation Algorithms}}
Approximation algorithms for the RMS problem such as those proposed in \cite{Asudeh:2017:ECR:3035918.3035932} cannot in general be applied to $k$-RMS with the same approximation ratios, since it is possible that the relaxation may reduce the maximum regret ratio of the best possible solution.
More recently, \cite{Agarwal2017} proposed a bicriteria approximation algorithm based on hitting sets for which the user can freely select the approximation ratio for the $k$-regret (requiring much larger run times for smaller approximation ratios) but may return more than the requested number of elements $r$ by up to a logarithmic factor. This deviates from the traditional setting of an approximation algorithm where generally only the optimized objective is allowed to differ from the optimal value by some factor.
Indeed, we show that it is impossible for there to be such an approximation algorithm for the general $k$-regret problem in arbitrary dimension unless P=NP.
\subsection{Average Regret Minimizing Sets}
ARMS was first studied in \cite{Zeighami:2016:MAR:2882903.2914831}, which defined the ARMS problem. ARMS was introduced to address issues with the original RMS problem. Specifically, RMS has the tendency to prioritize the least satisfied utility functions, which is often not representative of the majority of utility functions. ARMS addresses this by instead minimizing the average regret ratio over a given distribution of utility functions. We further note that while the RMS problem was originally formulated for the set of linear utility functions, ARMS is defined more generally for arbitrary utility functions.
\cite{Zeighami:2016:MAR:2882903.2914831} first showed that the average regret could be closely approximated by its value on a sufficiently large sample of $N$ utility functions and then provided an approximation algorithm running in $O(dNn^3)$ time based on the supermodularity and monotonicity of the average regret ratio function. They also provide exact algorithms for two-dimensional datasets.
\cite{DBLP:conf/IEEEwisa/QiuZ18} further exploits the monotonicity of the average regret ratio to optimize the existing greedy algorithm with lazy evaluations. While this results in speedups, it does not improve the time complexity since there are worst case constructions that result in the same $O(dNn^3)$ runtime.
\cite{Storandt2019} studied ARMS and its happiness variant in the case of the space of linear functions. They provided a greedy algorithm with a $1-\frac{1}{e}$ approximation factor, whereas the existing result for ARMS only established an approximation bound dependent on the steepness of the average regret ratio, which can be unbounded. However, their approach requires computing volumes in $d$-dimensional spaces exactly, which may take up to $O(n^{d^2/4})$ time.
\section{$k$-RMS and Hardness of Approximation}
The goal of $k$-Regret Minimizing Sets ($k$-RMS) is to produce a small subset of a larger dataset that minimizes the maximum regret ratio (to be defined shortly). This results in a small representative set that ensures the highest regret is still within some acceptable ratio regardless of which utility function the user has. From the happiness perspective, this is equivalent to maximizing the lowest happiness ratio (defined to be 1 minus the regret ratio).
In this section, we define $k$-RMS and present our hardness results on approximability. Specifically, we show that the happiness ratio of a $k$-RMS is inapproximable for any fixed $k$. This proof can be slightly modified to show the inapproximability of the regret of $k$-RMS (albeit treating $k$ as a parameter), partially resolving an open problem posed in \cite{Kumar2018}.
\newtheorem{defi}{Definition}
\subsection{Problem Definition}
\begin{table}[ht]
\caption{Notation Used in This Paper}
\centering
\begin{tabular}{ |p{1.5cm}|p{6cm}| }
\hline
Symbol & Definition \\
\hline
\emph{D} & An input dataset \\
\emph{n} & \abs{\emph{D}}, the number of input points \\
\emph{d} & The number of dimensions \\
$\emph{p}_\emph{i}$ & The $\emph{i}^{th}$ point in \emph{D}\\
$\emph{p}_{\emph{i}}^{\emph{j}}$ & The $\emph{j}^{th}$ coordinate of $\emph{p}_{\emph{i}}$\\
\emph{R} & A subset of \emph{D}\\
\emph{r} & \abs{\emph{R}}, the number of elements in \emph{R} \\
\textbf{w} & A user weight vector\\
$\textbf{w}_{\emph{i}}$ & The $\emph{i}^{th}$ weight vector in a set \\
$\textbf{w}^j$ & The value in the $j^{th}$ dimension of \textbf{w} \\
$\emph{D}^{(\emph{k},\textbf{w})}$ & The $\emph{k}^{th}$ ranked point in \emph{D} with respect to \textbf{w}\\
$\emph{R}^{(\emph{k},\textbf{w})}$ & The $\emph{k}^{th}$ ranked point in \emph{R} with respect to \textbf{w}\\
\hline
\end{tabular}
\label{Tab:Tcr}
\end{table}
\newcommand{\w}[0]{\textbf{w}}
\newcommand{\p}[0]{\textbf{p}}
\vspace{\baselineskip}
Let $D$ be a database containing $n$ points in a $d$-dimensional space. All coordinate values are normalized to real values in the range $[0,1]$, with at least one coordinate in each dimension being 1. A user weight vector or utility function is denoted by \w. We define the score of a point $p$ with respect to \w, denoted $score(p,\w)$, as $p\cdot\w$ or, equivalently, $\sum_{j=1}^{d} p^j\w^j$. For simplicity, we assume, without loss of generality, that $\w$ is normalized such that $\norm\w_1=1$; this does not affect the problem, as both the numerator and denominator of the regret ratio would be scaled by the same amount. Now, we define $D^{(k,\w)}$ as the point in $D$ whose score has the $k^{th}$ rank in a sorted list of points' scores with respect to $\w$. Let the set of possible weight vectors be $\textbf{W}=\{\w \in [0,1]^d : \norm \w_1 =1\}$.
For a subset $R$ of $D$, the $k$-regret ratio with respect to a particular weight vector $\w$ is defined to be
\begin{defi}
$kreg(R,\w)= \max\left\{0,1-\frac{score(R^{(1,\w)},\w)}{score(D^{(k,\w)},\w)}\right\}$
\end{defi}
That is, if the best ranked point in $R$ is not worse than the $k^{th}$ ranked point in $D$ for a given $\w$, the $k$-regret ratio is 0. Otherwise, it is 1 minus the score of the best ranked point in $R$ divided by the score of the $k^{th}$ ranked point in $D$ with respect to \w. We define the $k$-regret ratio of a subset $R$ to be
\begin{defi}
$kreg(R)=\max_{\w \in \textbf{W}} kreg(R,\w)$
\end{defi}
In other words, $kreg(R)$ is defined to be the maximum $k$-regret ratio with respect to any possible weight vector.
In a $k$-RMS query, the inputs are $D$, the database, and a positive integer $r$, the size of the returned $k$-RMS. A $k$-RMS is defined as the subset $R$ with size $r$ of $D$ such that $kreg(R)$ is minimum among all subsets of size $r$.
Analogously to the regret ratio functions, we define the happiness ratio functions as
\begin{defi}
$khapp(R,\w)= \min\left\{1,\frac{score(R^{(1,\w)},\w)}{score(D^{(k,\w)},\w)}\right\}$
\end{defi}
\begin{defi}
$khapp(R)=\min_{\w \in \textbf{W}} khapp(R,\w)$
\end{defi}
It is straightforward to verify that $khapp(R,\w) = 1 - kreg(R, \w)$ and $khapp(R) = 1 - kreg(R)$, implying that a $k$-RMS will also maximize happiness. Thus, the objective of happiness maximization is equivalent to regret minimization. However, it is possible to prove stronger theoretical results on the approximability of the happiness ratio, with the happiness being inapproximable even for $k=1$, which may be of theoretical interest. Furthermore, as we show in Section 4, the happiness maximization form of the problem admits multiplicative polynomial time approximation schemes for any fixed $d$.
\paragraph{Example} We have a dataset of 4 hotels as shown in Table 2. A 1-RMS of size 2 is $R=\{A,C\}$, achieving $kreg(R) = 1-\frac{0.5\cdot0.8+0.5\cdot0.35}{0.5\cdot0.6+0.5\cdot0.6} \approx 0.042$. Here, the least happy utility function, which determines the regret ratio, is $\w = (0.5,0.5)$.
\begin{table}[ht]
\caption{Example Dataset of Hotels and Scores}
\centering
\begin{tabular}{|l|l|l|}
\hline
Hotel & Stars & Price \\
\hline
A & 0.8 & 0.35 \\
B & 0.6 & 0.6 \\
C & 0.35 & 0.8 \\
D & 0.5 & 0.3 \\
\hline
\end{tabular}
\end{table}
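The example above can be checked numerically. The following Python sketch (ours, purely illustrative; $k=1$) approximates the maximization over $\textbf{W}$ by sweeping a fine grid of weight vectors:
\begin{verbatim}
D = {'A': (0.8, 0.35), 'B': (0.6, 0.6),
     'C': (0.35, 0.8), 'D': (0.5, 0.3)}
R = ['A', 'C']

def score(p, w):
    return sum(x * y for x, y in zip(p, w))

def kreg(points, subset, steps=10000):
    worst = 0.0
    for i in range(steps + 1):
        w = (i / steps, 1 - i / steps)  # sweep w with ||w||_1 = 1
        best_r = max(score(points[name], w) for name in subset)
        best_d = max(score(p, w) for p in points.values())
        worst = max(worst, max(0.0, 1 - best_r / best_d))
    return worst

print(round(kreg(D, R), 3))  # ~0.042, attained near w = (0.5, 0.5)
\end{verbatim}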
\subsection{NP-Hardness of Approximating the Happiness of a $k$-RMS}
In this subsection, we prove that approximating the optimal $k$-happiness ratio of a $k$-RMS within any finite multiplicative ratio is NP-Hard even if $k$ is fixed. We begin by showing this for the special case $k=1$ through a reduction from the set cover problem, which is known to be NP-Hard. This result can then be extended to any larger value of $k$.
\begin{theorem}
Approximating the optimal $k$-happiness ratio of a $k$-RMS within any finite multiplicative ratio is NP-Hard for $k=1$ when treating $d$ as a parameter.
\end{theorem}
\begin{proof}
First, we define the set cover problem. For a set of items $U$ and a set of sets $T$ such that $\forall T_i \in T, T_i \subset U$ and a positive integer $r_{SC}$, does there exist a subset $S$ of $T$ with size no greater than $r_{SC}$ such that $\bigcup_{S_i \in S}S_i=U$. Let an instance of the set cover problem be denoted $I_{SC}(U,T)$.
We note that cases where there exists a member $U_j \in U$ such that $U_j \notin T_i \;\forall T_i \in T$ can be answered in $O(|U|+|T||U|)$ time by simply checking all the elements in each member of $T$. Since there is no set that covers $U_j$, it can be immediately concluded that the answer to such an instance is no.
For instances which do not fall into the previous category, from the given instance of the set cover problem, we will construct an instance of the $1$-RMS problem, $I_{1-RMS}$, and show that the existence of a polynomial-time approximation algorithm with a finite approximation factor would imply P=NP. Let the optimal $k$-happiness ratio for an instance $I_{1-RMS}$ be denoted $khapp(I_{1-RMS})$. Since a $k$-RMS maximizes the $k$-happiness ratio, the optimal value of $khapp(I_{1-RMS})$ is the greatest possible $k$-happiness ratio for $I_{1-RMS}(D,r)$.
For $r$, the number of points to be selected in the $1$-RMS, set $r=r_{SC}$. Next, we construct $D$. Let $d$, the dimensionality of $D$, be equal to $|U|$. For each set $T_i \in T$, we construct a point $p$ in $D$ such that $p^j=1$ if $U_j \in T_i$ and $p^j=0$ otherwise. Let the points constructed from $T$ be known as the data points. Also, for each item $U_j \in U$, we construct a point $a_j$ such that $a_j^j=1$ and $a_j^l=0$ for $l \neq j$. Let the points constructed from $U$ be known as the axis points. From the construction, there are $|T|$ data points and $|U|$ axis points, so $n$, the size of the database, equals $|T|+|U|$. The construction takes $O(n\cdot d)=O(|T||U|+|U|^2)$ time.
We now make use of the following lemma.
\begin{restatable}{lemma}{hmscase}
\label{thm:hmscase}
If the answer to $I_{SC}$ is no, $khapp(I_{1-RMS})=0$. Otherwise, the answer to $I_{SC}$ is yes and $khapp(I_{1-RMS})>0$.
\end{restatable}
\begin{proof}
Consider the case when there exists a subset $S$ of size no greater than $r_{SC}$ which contains all items in $U$. Then for each $S_i\in S$, we may select the point $p$ that was constructed from $S_i$. Since $S$ is a solution to this instance of the set cover problem, we know that $|S|\leq r_{SC}$, so we select no more than $r_{SC}=r$ points, making this an acceptable selection for $R$ in the 1-RMS problem. We now consider the 1-happiness ratio of this selection. By definition of $S$ being a solution to $I_{SC}$, we know that $\forall U_j \in U, \exists S_i \in S$ such that $U_j \in S_i$. Thus, $\forall j \in \{1,...,d\}$, $\exists R_i \in R$ such that $R_i^j=1$ based on the construction of $D$. Consider the 1-happiness ratio of $R$ with respect to a given weight vector $\mathbf{w}$. Any $\mathbf{w}$ has a positive coordinate in at least one dimension $j$, and since $\exists R_i \in R$ with $R_i^j=1$, we have $\mathbf{w} \cdot R_i > 0$. Thus, $happ(R,\mathbf{w})=\min\Big\{1,\frac{R^{(1,\mathbf{w})}\cdot\mathbf{w}}{D^{(1,\mathbf{w})}\cdot\mathbf{w}}\Big\}>0$.
Consider the opposite case when there does not exist such a subset $S$. Assume there is some selection of $R$ of size $r$ with a 1-happiness ratio of more than 0. Consider a weight vector $\mathbf{w}$ that points to an axis point $a_j$ with $||\mathbf{w}||_1=1$, i.e., $\mathbf{w}^j=1$ and all other coordinates are 0. Clearly, for any such weight vector, the score of $D^{(1,\mathbf{w})}$ is at least that of the axis point $a_j$, and since all coordinates are at most 1, $D^{(1,\mathbf{w})}\cdot \mathbf{w} =1$. Now consider $R^{(1,\mathbf{w})}\cdot \mathbf{w}$. As $\mathbf{w}$ points to an axis point, it has the value 1 in a single dimension and 0 in all others. Based on the construction of $D$, any coordinate of any point $R_i \in R$ must be either 1 or 0. Since we assumed $R$ has a 1-happiness ratio of more than 0, we must have $R^{(1,\mathbf{w})} \cdot \mathbf{w}=1$, as $R^{(1,\mathbf{w})} \cdot \mathbf{w}=0$ would contradict our assumption. This implies that $\exists R_i \in R$ such that $R_i^j=1$. Since the 1-happiness ratio is no greater than the minimum across all $d$ weight vectors that point to an axis point, we must have that $\forall j \in \{1,...,d\}, \exists R_i\in R$ such that $R_i^j=1$. However, then we could take as $S$ each set $T_i \in T$ that corresponds to a point $R_i$ in $R$ under the construction of $D$. (We have excluded cases where some item in $U$ is covered by no set in $T$, so if an axis point was chosen in $R$, it may be replaced with a data point which dominates or is equivalent to that axis point.) This would contradict the assumption that $S$ does not exist. Therefore, there is no selection of $R$ with 1-happiness ratio more than 0, and in this case $khapp(I_{1-RMS})=0$.
\end{proof}
Applying the lemma, any polynomial-time approximation algorithm with a finite multiplicative approximation ratio for the 1-happiness ratio must return a positive value exactly when $khapp(I_{1-RMS})>0$, and could thus distinguish whether or not a set cover exists; its existence would therefore imply P=NP. Thus, the problem of approximating the 1-happiness ratio of the optimal 1-RMS to any finite factor is NP-hard.
\end{proof}
\begin{corollary}
Approximating the $k$-happiness ratio of a $k$-RMS within any finite multiplicative ratio is NP-Hard for any fixed $k$ when treating $d$ as a parameter.
\end{corollary}
\begin{proof}
Given an instance of 1-RMS, create $k$ copies of every point; the $k$-happiness of any subset in the new instance equals its 1-happiness in the original. Thus, any approximation for the $k$-happiness of a $k$-RMS yields an approximation for the 1-happiness of a 1-RMS, so approximating the $k$-happiness of a $k$-RMS must also be NP-Hard for any fixed $k$.
\end{proof}
The proof of Theorem 3.1 can be changed slightly to show the NP-Hardness of approximating the regret of $k$-RMS as well when $k$ is treated as a parameter, in contrast to the happiness ratio where the result applies to any fixed $k$.
\begin{restatable}{theorem}{krms}
\label{thm:krms}
Approximating the regret of a $k$-RMS within any finite multiplicative ratio is NP-Hard when $k$ and $d$ are treated as parameters.
\end{restatable}
\begin{proof}
The proof follows from the same reduction from the set cover problem as in Theorem 3.1, with the changes that $k$ is set to $|T|+1$ (where $|T|$ is the size of the set of sets) and that we construct $k$ copies of each axis point instead of only one.
\end{proof}
\section{Dataset Reduction Schemes and Polynomial Time Approximation Schemes}
Having shown that both the happiness and regret of $k$-RMS are NP-Hard to approximate in the general case, we now introduce dataset reduction schemes to improve the runtime of existing heuristic-based approaches. We extend these reduction schemes to show that, for fixed dimensionality $d$, polynomial-time approximation algorithms for the happiness of $k$-RMS are achievable for any desired multiplicative or additive approximation factor.
While these approximation schemes are computationally infeasible, the dataset reduction schemes can be applied prior to existing heuristic-based algorithms, which often have poor scalability. In Section 6, we experimentally validate the efficiency boost from applying the additive and multiplicative reduction schemes to selected heuristic algorithms.
\subsection{Dataset Reduction Schemes}
We begin by defining the dataset reduction schemes and proving several properties about them that are used in the polynomial time approximation schemes.
A dataset reduction scheme is an algorithm that takes a dataset $D$ as input and outputs a new dataset $D'$ such that $|D'| \leq |D|$ and each point in $D'$ corresponds to an original point in $D$. We present two reduction schemes: the \textbf{Additive Reduction Scheme} and the \textbf{Multiplicative Reduction Scheme}. In this subsection, we prove several useful properties of these schemes that will ultimately be used in the proofs of the polynomial time approximation schemes.
\subsubsection{\textbf{Additive Reduction Scheme}} Given an additive approximation factor $\varepsilon$, let $\varepsilon'= \frac{\varepsilon}{d}$. We create a new dataset $D'$ in which the coordinates of each point are rounded down to the nearest multiple of $\varepsilon'$. Formally, for each point $p$ in $D$, we create a point $p'$ in $D'$ where each coordinate $p'^j$ is set to the greatest multiple of $\varepsilon'$ no greater than $p^j$.
\subsubsection{\textbf{Multiplicative Reduction Scheme}}
Given a multiplicative approximation factor $1-\varepsilon$, let $\varepsilon^*= \frac{\varepsilon}{2}$. We create a new dataset $D'$ in which the coordinates of each point are rounded down to the nearest power of $(1-\varepsilon^*)$, with any coordinate whose rounded value would be less than $\frac{\varepsilon^*}{d}$ set to 0. Formally, for each point $p$ in $D$, we create a point $p'$ in $D'$ where each coordinate $p'^j$ is set to the greatest power of $(1-\varepsilon^*)$ no greater than $p^j$ if this value is at least $\frac{\varepsilon^*}{d}$, and to 0 otherwise.
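To make the two schemes concrete, the following Python sketch (ours; purely illustrative, ignoring floating-point edge cases) implements both roundings; deduplicating the rounded points is what yields the size bounds proven next:
\begin{verbatim}
import math

def additive_reduce(D, eps, d):
    # Round each coordinate down to the nearest multiple of eps' = eps / d,
    # then deduplicate; Lemma 4.1 bounds the resulting size.
    step = eps / d
    reduced = {tuple(math.floor(x / step) * step for x in p) for p in D}
    return [list(p) for p in reduced]

def multiplicative_reduce(D, eps, d):
    # Round down to the nearest power of (1 - eps*) with eps* = eps / 2;
    # values below eps*/d become 0. Lemma 4.2 bounds the resulting size.
    base, floor_val = 1 - eps / 2, (eps / 2) / d
    def round_coord(x):
        if x < floor_val:
            return 0.0
        k = math.ceil(math.log(x, base))  # smallest k with base**k <= x
        return min(1.0, base ** k)
    reduced = {tuple(round_coord(x) for x in p) for p in D}
    return [list(p) for p in reduced]
\end{verbatim}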
We begin by proving bounds on the size of the reduced dataset from both schemes.
\begin{lemma}
The output dataset $D'$ from the \textbf{Additive Reduction Scheme} has size at most $(\frac{d}{\varepsilon}+1)^d$.
\end{lemma}
\begin{proof}
Since originally each coordinate ranged from 0 to 1, for each coordinate there can be at most $\frac{1}{\varepsilon'}+1$ distinct values for coordinates and since there are only $d$ dimensions, at most $(\frac{1}{\varepsilon'}+1)^d=(\frac{d}{\varepsilon}+1)^d$ distinct points can exist in $D'$.
\end{proof}
\begin{lemma}
The output dataset $D'$ from the \textbf{Multiplicative Reduction Scheme} has size at most $(\log_{1-\varepsilon/2} \frac{\varepsilon}{2d}+2)^d$.
\end{lemma}
\begin{proof}
There are at most $\log_{1-\varepsilon^*} \frac{\varepsilon^*}{d}+2$ distinct values of coordinates since for powers of $(1-\varepsilon^*)$ we have $(1-\varepsilon^*)^k < \frac{\varepsilon^*}{d}$ for any $k > \log_{1-\varepsilon^*} \frac{\varepsilon^*}{d}$ and the only other allowed values are 1 and 0. Thus, there are at most $(\log_{1-\varepsilon^*} \frac{\varepsilon^*}{d}+2)^d=(\log_{1-\varepsilon/2} \frac{\varepsilon}{2d}+2)^d$ distinct coordinate tuples in $D'$.
\end{proof}
Next, we prove that the happiness ratio of the optimal set in the reduced dataset is at worst some (additive or multiplicative) factor worse than the happiness ratio of the optimal set in the original dataset.
\begin{lemma}
For the \textbf{Additive Reduction Scheme}, the optimal $k$-regret minimizing set $O$ in $D$ corresponds to a subset $O'$ in $D'$ such that the $k$-happiness ratio of $O'$ is at worst that of $O$ minus $\varepsilon$.
\end{lemma}
\begin{proof}
Consider an optimal $k$-regret minimizing set $O$ for any instance of the problem. Note that $O$ also optimizes $k$-happiness. Let $O'$ be the corresponding subset in $D'$. We must have $O'^j_i \geq O^j_i -\varepsilon'$ for any point because of the rounding. Thus, for any weight vector $\w$, we have $\w \cdot O'_i = \sum_{j=1}^d \w^j O'^j_i \geq \sum_{j=1}^d \w^j (O_i^j-\varepsilon')=\w\cdot O_i - \varepsilon' \sum_{j=1}^d \w^j = \w\cdot O_i -\varepsilon' $. It follows that $\max_{O'_i \in O'} \w\cdot O'_i \geq \max_{O_i \in O} \w\cdot O_i- \varepsilon'$ $\implies$ $\frac{\max_{O'_i \in O'} \w\cdot O'_i}{\max_{p \in D} \w\cdot p} \geq \frac{\max_{O_i \in O} \w\cdot O_i}{\max_{p \in D} \w\cdot p}-\frac{ \varepsilon'}{\max_{p \in D} \w\cdot p}\geq \frac{\max_{O_i \in O} \w\cdot O_i}{\max_{p \in D} \w\cdot p}- \varepsilon'd =\frac{\max_{O_i \in O} \w\cdot O_i}{\max_{p \in D} \w\cdot p}- \varepsilon$, where the second inequality uses $\max_{p \in D} \w\cdot p \geq \frac{1}{d}$ (some coordinate of $\w$ is at least $\frac{1}{d}$, and every dimension contains a point with value 1 in that coordinate). Thus the happiness ratio of $O'$ is at worst that of $O$ minus $\varepsilon$.
\end{proof}
For the corresponding proofs for the multiplicative version, we will make use of the following lemma.
\begin{lemma}
For $x\geq \frac{1}{d}$ and $\varepsilon>0$, $x-\frac{\varepsilon}{d}\geq (1-\varepsilon)x$
\end{lemma}
\begin{proof}
$x \geq \frac{1}{d} \implies \varepsilon x \geq \frac{\varepsilon}{d} \implies (1-\varepsilon)x = x-\varepsilon x \leq x - \frac{\varepsilon}{d}$
\end{proof}
\begin{lemma}
For the \textbf{Multiplicative Reduction Scheme}, if $|O|\geq d$, the optimal $k$-regret minimizing set $O$ in $D$ corresponds to a subset $O'$ in $D'$ such that the $k$-happiness ratio of $O'$ is at worst that of $O$ times $(1-\varepsilon)$.
\end{lemma}
\begin{proof}
Consider an optimal $k$-regret minimizing set $O$ for any instance of the problem. Again, let $O'$ be the corresponding subset in $D'$.
We first note that since $|O|\geq d$, $O$ must perform at least as well as selecting the $d$ points having value 1 in each respective dimension. Combined with $||\w||_1=1$, this gives $\max\limits_{O_i \in O} \w \cdot O_i \geq \frac{1}{d}$, which implies $ \max\limits_{O_i \in O} \w \cdot O_i- \frac{\varepsilon^*}{d} \geq (\max\limits_{O_i \in O} \w \cdot O_i) (1-\varepsilon^*)$ by Lemma 4.4.
Now consider the effect of setting some coordinates to 0 due to their value being less than $\frac{\varepsilon^*}{d}$. The reduction in the value of $\w \cdot O'_i$ is no more than $\frac{\varepsilon^*}{d} \sum_{j=1}^d \w^j \leq \frac{\varepsilon^*}{d} $ in the case we set all coordinates of $O'_i$ to 0. Then, since any coordinate is scaled down by at most $(1-\varepsilon^*)$, we must have
\[\max\limits_{O'_i \in O'} \w \cdot O'_i \geq (1-\varepsilon^*)( \max\limits_{O_i \in O} \w \cdot O_i -\frac{\varepsilon^*}{d})\]
Thus, \[\begin{split}\max\limits_{O'_i \in O'} \w \cdot O'_i &\geq (1-\varepsilon^*)^2 \max\limits_{O_i \in O} \w \cdot O_i = (1-2\varepsilon^*+(\varepsilon^*)^2) \max\limits_{O_i \in O} \w \cdot O_i \\ &\geq (1-2\varepsilon^*) \max\limits_{O_i \in O} \w \cdot O_i = (1-\varepsilon) \max\limits_{O_i \in O} \w \cdot O_i \end{split}\]
Then it follows that $\max_{O'_i \in O'} \w\cdot O'_i \geq (\max_{O_i \in O} \w\cdot O_i)(1- \varepsilon)$ $\implies$ $\frac{\max_{O'_i \in O'} \w\cdot O'_i}{\max_{p \in D} \w\cdot p} \geq(1-\varepsilon)(\frac{\max_{O_i \in O} \w\cdot O_i}{\max_{p \in D} \w\cdot p})$, and thus the $k$-happiness ratio of $O'$ is at least that of $O$ times $(1-\varepsilon)$.
\end{proof}
Finally, we prove that a set in the reduced dataset corresponds to some set in the original with at least the same happiness ratio for either reduction scheme.
\begin{lemma}
A set $A'$ in $D'$ corresponds to a set $A$ in $D$ with at least the same happiness ratio.
\end{lemma}
\begin{proof}
Each point $p'$ in $A'$ corresponds to some point $p(p')$ in $D$ such that $p(p')$ (weakly) dominates $p'$. Taking $A=\bigcup_{p' \in A'} \{p(p')\}$, it follows that the happiness ratio of $A$ is at least that of $A'$.
\end{proof}
\subsection{Polynomial Time Approximation Schemes}
\label{subsec:polynomialTimeApproximation}
In this subsection, we present our polynomial-time approximation schemes for approximating the happiness of $k$-RMS. While these approximation schemes are computationally infeasible, they nevertheless demonstrate that when $d$ is fixed, the $k$-happiness can be approximated in polynomial time to any desired approximation ratio. This result resolves the multiplicative approximability status of $k$-happiness for all $k$ and $d$: it is NP-Hard to multiplicatively approximate to any constant ratio when $d$ is unfixed, and can be approximated in polynomial time to any desired ratio when $d$ is fixed.
We make use of the following result which trivially follows from Theorem 3.2 of \cite{Agarwal2017}.
\begin{lemma}
For any dataset $D$ with $n$ points, a subset $R$ of size $O(\frac{1}{\varepsilon^{(d-1)/2}})$ with happiness ratio at least $1-\varepsilon$ can be computed in $O(n+\frac{1}{\varepsilon^{d-1}})$ time.
\end{lemma}
\textbf{Polynomial-Time Approximation Scheme}
\label{subsec:additivePolyTimeApproScheme}
\begin{enumerate}
\item We first attempt to apply Lemma 4.7 to compute a subset $S$ of $D$ of size $O(\frac{1}{\varepsilon^{(d-1)/2}})$ with happiness ratio at least $1-\varepsilon$. If $r$ is at least the resulting size, we can immediately return $S$, which has additive approximation factor $\varepsilon$ and multiplicative factor $1-\varepsilon$. Otherwise, $r$ is of $O(\frac{1}{\varepsilon^{(d-1)/2}})$ size, so $r\leq \frac{c}{\varepsilon^{(d-1)/2}}$ for some constant $c$.
\item \textbf{Reduction Scheme} Given the approximation factor $\varepsilon$, apply the additive or multiplicative reduction scheme on $D$ to get the reduced dataset $D'$.
\item Consider every possible combination of $r$ points in $D'$ and compute the happiness ratio of each combination, applying the algorithm from Lemma 2 in \cite{Agarwal2017}, which runs in $O(n^{2d-1})$ time. Choose the combination that results in the best happiness ratio (a sketch of this enumeration is given after this list).
\item Construct the set $A$ using Lemma 4.6 and return it.
\end{enumerate}
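As an illustration of step 3, the following Python sketch (ours; all names are illustrative) enumerates the $r$-subsets of the reduced dataset. For simplicity it estimates the $k$-happiness ratio by sampling weight vectors rather than running the exact algorithm of \cite{Agarwal2017}:
\begin{verbatim}
import itertools, random

def khapp_sampled(R, D, k, weights):
    worst = 1.0
    for w in weights:
        scores = sorted((sum(x * y for x, y in zip(p, w)) for p in D),
                        reverse=True)
        denom = scores[k - 1]           # score of the k-th ranked point in D
        best_r = max(sum(x * y for x, y in zip(p, w)) for p in R)
        worst = min(worst, 1.0 if denom == 0 else min(1.0, best_r / denom))
    return worst

def best_subset(D_reduced, r, k, d, num_samples=1000):
    weights = []
    for _ in range(num_samples):
        raw = [random.random() for _ in range(d)]
        s = sum(raw)
        weights.append([x / s for x in raw])  # normalize: ||w||_1 = 1
    return max(itertools.combinations(D_reduced, r),
               key=lambda R: khapp_sampled(R, D_reduced, k, weights))
\end{verbatim}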
\begin{theorem}
The \textbf{Polynomial-Time Approximation Scheme} runs in polynomial time and results in an additive $\varepsilon$ factor happiness approximation with the additive reduction scheme or a multiplicative $1-\varepsilon$ factor happiness approximation with the multiplicative reduction scheme.
\end{theorem}
\begin{proof}
Steps 1, 2, and 4 run in polynomial time. As for step 3, since $|D'|$ is constant for fixed $d$ and $\varepsilon$ ($(\frac{d}{\varepsilon}+1)^d$ for the additive scheme [Lemma 4.1] and $(\log_{1-\varepsilon/2} \frac{\varepsilon}{2d}+2)^d$ for the multiplicative scheme [Lemma 4.2]), there are at most ${|D'|}^r \leq {|D'|}^{\frac{c}{\varepsilon^{(d-1)/2}}} $ combinations, which is polynomial in (and actually independent of) the input size for fixed $\varepsilon$ and $d$. Checking the happiness ratio of each again takes polynomial time, so the total algorithm runs in polynomial time.
The correctness of the approximation scheme follows from the lemmas in the previous subsection. The happiness ratio of the chosen subset is at worst an additive factor $\varepsilon$ (Lemma 4.3) or a multiplicative factor $1-\varepsilon$ (Lemma 4.5) worse than that of the optimal set $O$ in $D$, and the returned set $A$ has happiness ratio at least that of the chosen subset (Lemma 4.6).
\end{proof}
Note that the additive scheme can also be applied to approximate the regret of $k$-RMS with the same $\varepsilon$ approximation factor (the $k$-regret ratio of the returned set would be at worst $OPT+\varepsilon$). This follows from the result that $kreg(R)=1-khapp(R)$.
Unlike the additive scheme, the multiplicative scheme cannot be applied to the $k$-regret ratio with any finite bounds. A trivial example is a dataset of two points where one is slightly better in all coordinates, but both are rounded to the same point: selecting one point gives regret $0$ while selecting the other gives some positive regret, so no finite multiplicative factor relates the two.
\section{Average Regret Minimizing Sets}
Besides $k$-RMS, an alternative approach to select a representative subset is Average Regret Minimizing Sets (ARMS). Instead of minimizing the maximum regret ratio, ARMS minimizes the average regret ratio, which may be more appropriate for situations where it is less necessary to optimize the regret of relatively rare utility functions.
In this section, we define the ARMS problem. We then provide an approximation algorithm with approximation ratio $1-\frac{1}{e}$ for the average happiness ratio of a function sample in the general case. Finally, we provide an exact $O(n^2)$ algorithm for the special case of linear utilities in 2 dimensions, which can be turned into an $O(\frac{n}{\epsilon})$ additive $\epsilon$-approximation algorithm.
\subsection{Problem Definition}
Similar to $k$-RMS as described in Section 3.1, in ARMS we are given a dataset $D$ of $n$ points in a $d$-dimensional space. The score of a point $p$ against a utility function \textbf{w} is again denoted $score(p,\w)$. However, in contrast to $k$-RMS, current formulations of the ARMS query \cite{Zeighami:2016:MAR:2882903.2914831,Qiu2018} also require the set of utility functions to consider, $F$, and its probability distribution as inputs. The utility functions considered are not necessarily linear.
Formally, for an ARMS query, the inputs are $D$, the database, a positive integer $r$, the size of the returned ARMS, and the set of utility functions considered, $F$ along with its probability distribution $\Theta$. An ARMS is defined as the subset $R$ with size $r$ of $D$ such that the average regret ratio, $arr(R)$, is minimum among all subsets of size $r$. For simplicity, each function $f \in F$ is assumed to be computable in $O(d)$ time.
Here, given $F$ and $\eta$ (the probability density function corresponding to the probability distribution $\Theta$), $arr(R)$, the Average Regret Ratio, is defined as
\begin{defi}
$arr(R)= \int_{f \in F} reg(R,f) \eta(f) df $
\end{defi}
where the definition of the regret ratio, $reg(R,f)$, is generalized for possibly non-linear functions as
\begin{defi}
$reg(R,f)= \max\left\{0,1-\frac{\max_{p \in R} f(p)}{\max_{p \in D} f(p)}\right\}$
\end{defi}
In other words, the Average Regret Ratio, $arr(R)$, is the expectation of $reg(R)$, $E[reg(R)]$ when considering the set of utility functions $F$. The Average Happiness Ratio, $ahr(R)$, is then simply $1-arr(R)$.
\paragraph{Example} Consider again the hotels shown in Table 2. Suppose the set of utility functions considered are the linear functions in Table 3. An ARMS of size 2 is $R=\{A,C\}$, achieving $arr(R) = 0.6\cdot0+0.2\cdot0+0.2\cdot(1-\frac{0.5\cdot0.8+0.5\cdot0.35}{0.5\cdot0.6+0.5\cdot0.6}) \approx 0.008 $. Here, utility functions 1 and 2 are fully satisfied by hotels C and A, respectively.
\begin{table}[ht]
\caption{Example Set of Utility Functions and Probabilities}
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Utility Function & Stars Weight & Price Weight & Probability\\
\hline
1 & 0 & 1 &0.6 \\
2 & 1 & 0 &0.2 \\
3 & 0.5 & 0.5 &0.2 \\
\hline
\end{tabular}
\end{table}
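The example can again be verified numerically with a short Python sketch (ours, illustrative only):
\begin{verbatim}
D = {'A': (0.8, 0.35), 'B': (0.6, 0.6),
     'C': (0.35, 0.8), 'D': (0.5, 0.3)}
R = ['A', 'C']
F = [((0.0, 1.0), 0.6),   # (weight vector, probability), from Table 3
     ((1.0, 0.0), 0.2),
     ((0.5, 0.5), 0.2)]

def score(p, w):
    return sum(x * y for x, y in zip(p, w))

arr = 0.0
for w, prob in F:
    best_r = max(score(D[name], w) for name in R)
    best_d = max(score(p, w) for p in D.values())
    arr += prob * max(0.0, 1 - best_r / best_d)

print(round(arr, 3))  # 0.008
\end{verbatim}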
\subsection{Approximating the Average Happiness Ratio of a Sample of Functions}
While the definition given above describes the general form of an ARMS problem, it leaves open the question of how to encode the input, since the space of considered functions is unrestricted. In many practical cases, it is possible to instead compute an approximation, $arr^*(R)$, to arbitrary accuracy by computing the average regret ratio over a random sample $S$ of $N$ utility functions for sufficiently large $N$ (see \cite{Zeighami:2016:MAR:2882903.2914831} for precise guarantees and discussion):
\begin{defi}
\[arr^*(R,S) = \dfrac{1}{N}\sum_{f\in S} \left( 1-\dfrac{\max_{p\in R} f(p)}{\max_{p\in D} f(p)} \right)\]
\end{defi}
While an approximation algorithm for $arr^*(R)$ was provided in \cite{Zeighami:2016:MAR:2882903.2914831}, its approximation ratio is given as $\frac{e^t-1}{t}$, where $t=\frac{s}{1-s}$ and $s$ is the ``steepness'' (maximum marginal decrease) of the average regret ratio. This approximation ratio is not fixed and is potentially quite bad (indeed, it is not defined for $s=1$).
Meanwhile, for $ahr^*(R)$, defined analogously, there exists a probabilistic approximation algorithm with fixed approximation ratio $1-\frac{1}{e}$, as shown in \cite{Storandt2019} for the case where $F$ is the infinite set of linear utility functions. This proof extends to general functions using a similar approach.
In \cite{Zeighami:2016:MAR:2882903.2914831}, it was shown that the sampled average regret ratio, $arr^*(\cdot,S)$ is monotonically decreasing and has the supermodularity property. Since $ahr^*(R,S) = 1- arr^*(R,S)$, it trivially follows that $ahr^*(\cdot , S)$ is submodular and monotonically increasing.
It then follows from the result in \cite{Nemhauser1978} on the maximization of monotonically increasing submodular functions that the maximum value of $ahr^*(R,S)$ can be approximated to a $1-\frac{1}{e}$ ratio by iteratively and greedily choosing $R_{s+1}=R_s \cup \{x\}$, where $x$ is the point that results in the greatest $ahr^*(R_s \cup \{x\}, S)$, starting from the empty set $R_0 = \varnothing$. This greedy approach forms the backbone of the approximation algorithms provided in this section.
In \cite{Zeighami:2016:MAR:2882903.2914831}, an approximation algorithm for minimizing $arr^*(R,S)$ running in $O(dNn^3)$ time was given with the aforementioned unbounded approximation ratio $\frac{e^t-1}{t}$. Because their algorithm is based on minimizing supermodular functions, it iteratively removes points rather than adding them. Simply changing the algorithm to iteratively add while maximizing $ahr^*(R,S)$ already results in the improved time complexity of $O(drNn^2)$ with the fixed multiplicative approximation ratio of $1-\frac{1}{e}$. However, here we give an algorithm with further improved runtime bounds.
The main idea of the approximation algorithm is to keep track of the improvement in average happiness ratio gained by adding each point, $\Delta_x$, and to iteratively add the point $p_x$ with the highest $\Delta_x$. Within each iteration, after adding a point $p_x$, we recalculate the happiness $\delta_{j,i}$ that each point $p_j$ would further gain along each sampled utility function $S_i$, and use these to recalculate $\Delta_j$ for the next step.
\begin{algorithm}
\caption{Multiplicative $1-\frac{1}{e}$-Approximation for $ahr^*$}\label{Alg:AHMS}
\begin{algorithmic}[1]
\Require Dataset $D$ with $n$ points, size of ARMS $r$
\Ensure $R$, a $1-\frac{1}{e}$ approximation for $ahr^*$.
\For {$i \gets 1 $ to $N$}
\State $H^{max}_i \gets \max_{p \in D} S_i (p) $ \Comment{so $happ(p,S_i)$ can be computed in $O(d)$}
\EndFor
\For {$j \gets 1 $ to $n$}
\State $\Delta_j \gets \frac{1}{N}\sum_{i} happ(p_j,S_i)$
\EndFor
\State $R_0 \gets \varnothing$
\For {$j \gets 1 $ to $N$}
\State {$H_j \gets 0$}
\EndFor
\For {$s \gets 1 $ to $r$}
\State Find the point $p_x$ with highest $\Delta_x$
\State $R_s \gets R_{s-1} \cup \{p_x\}$
\For {$i \gets 1 $ to $N$}
\State {$H_i \gets \max(H_i,happ(p_x,S_i))$ }
\For {$j \gets 1 $ to $n$}
\State {$\delta_{j,i} \gets \max(happ(p_j,S_i)-H_i,0)$ }
\EndFor
\EndFor
\For {$j \gets 1 $ to $n$}
\State {$\Delta_j \gets \frac{1}{N}(\sum_i\delta_{j,i}) $ }
\EndFor
\EndFor
\State Select $R_r$ as $R$.
\end{algorithmic}
\end{algorithm}
\begin{theorem}
The given approximation algorithm results in a multiplicative $(1-\frac{1}{e})$ factor approximation and takes $O(drNn)$ time.
\end{theorem}
\begin{proof}
The correctness can be shown through induction. At step $0$, $R_0 = \varnothing$ and the initial values of $\Delta_j$ are correct by definition. At each subsequent step, $\Delta_j$ is recalculated according to the definitions, and the correctness of $ahr^*(R_s,S)$ follows. The $1-\frac{1}{e}$ approximation bound follows as discussed above, since the algorithm follows the greedy procedure of selecting the point with the greatest increase.
The initialization before the main loop takes $O(dNn)$ time in total, since it computes the value of each sampled function on each point. The main loop runs $r$ times, each iteration taking $O(dNn)$ time, the bottleneck being recomputing the gained happiness contribution of each point under each utility function. In total, the algorithm takes $O(drNn)$ time.
\end{proof}
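A compact NumPy sketch of Algorithm \ref{Alg:AHMS} for the linear-utility case is given below (ours; it assumes \texttt{utils} is an $(N,d)$ array of sampled utility vectors, \texttt{points} an $(n,d)$ array, and that every sampled function has a positive maximum score):
\begin{verbatim}
import numpy as np

def greedy_ahr(points, utils, r):
    scores = utils @ points.T                  # (N, n): S_i(p_j)
    h_max = scores.max(axis=1, keepdims=True)  # H^max_i per sampled function
    happ = np.minimum(1.0, scores / h_max)     # happ(p_j, S_i)
    H = np.zeros(happ.shape[0])                # best happiness so far per S_i
    chosen = []
    for _ in range(r):
        delta = np.maximum(happ - H[:, None], 0.0).mean(axis=0)
        x = int(np.argmax(delta))              # point with highest Delta_x
        chosen.append(x)
        H = np.maximum(H, happ[:, x])
    return chosen, float(H.mean())             # indices and ahr*(R, S)
\end{verbatim}
After the $O(dNn)$ score computation, each of the $r$ iterations costs $O(Nn)$ array operations, consistent with the stated $O(drNn)$ bound.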
\subsection{Approximating Average Happiness Ratio for Linear Utilities in 2 Dimensions}
Besides the sampling method, \cite{Zeighami:2016:MAR:2882903.2914831} also proposed an exact dynamic programming algorithm for the special case of linear utilities in $d=2$. As noted in \cite{Zeighami:2016:MAR:2882903.2914831}, this special case is of practical interest, as two-dimensional datasets often arise after feature selection or extraction. Improving on the $O(n^4)$ algorithm of \cite{Zeighami:2016:MAR:2882903.2914831}, we introduce an exact algorithm running in $O(n^2)$ time. As in \cite{Zeighami:2016:MAR:2882903.2914831}, we assume each integral takes constant time to compute. We also present an approximation algorithm running in $O(\frac{n}{\epsilon})$ time, where $\epsilon$ is the desired additive approximation ratio.
\newcommand{\wa}[0]{\w_\alpha}
\newcommand{\ea}[0]{\eta_\alpha}
For this special case, we have
$ahr(R)= \int_{0}^1 \frac{\max_{p\in R} p \cdot \wa}{\max_{p\in D} p \cdot \wa}\eta_\alpha d \alpha$, where $\wa$ is the weight vector $[ \alpha, 1-\alpha]$ and $\ea$ is the probability density of $\wa$. Note that this is an equivalent simplification of the definition in \cite{Zeighami:2016:MAR:2882903.2914831}, but differs from the definition in \cite{Storandt2019}, which optimizes $ \frac{\int_{0}^1 (\max_{p\in R} p \cdot \wa) d \alpha}{\int_{0}^1 (\max_{p\in D} p \cdot \wa) d \alpha}$.
To aid the proofs, we apply dualization as in \cite{Storandt2019}. Each point $p=(x,y)$ is mapped to its dual line $p^*$, which passes through $(0, y)$ and $(1,x)$, as illustrated in Fig. \ref{Fig:Dual}. It is straightforward to verify that the height of $p^*$ at $x=\alpha$ is equal to $p \cdot \wa$. Then $\max_{p\in R} p \cdot \wa$ equals the $y$-value of the highest line corresponding to a point in $R$ at $x=\alpha$. It follows that the contribution to $ahr$ at $\wa$ corresponds to the height of the upper envelope of the duals of the points selected in $R$ at $x=\alpha$, scaled by $\frac{\eta_\alpha}{\max_{p\in D} p \cdot \wa}$.
\begin{figure}
\caption{Illustration of Dualization}
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{img/PrimalZoom.png}
\subcaption{Example Dataset $D$}
\label{fig:y equals x}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{img/dualzoom2.png}
\subcaption{Dualization of $D$}
\label{fig:three sin x}
\end{subfigure}
\hfill
\label{Fig:Dual}
\end{figure}
\setcounter{figure}{1}
\begin{figure}
\caption{Illustration for Lemma 5.1}
\centering
\includegraphics[width=0.4\textwidth]{img/NoNonConvexHullProofwiths.png}
\label{Fig:NoNonConvex}
\end{figure}
\begin{figure}
\caption{Illustration of H[i][j]}
\centering
\includegraphics[width=0.4\textwidth]{img/HCalc2.png}
\label{Fig:HCalc}
\end{figure}
As we are working with linear utilities, we assume for simplicity that $D$ consists only of skyline points. We now make the following observation:
\begin{theorem}
There is an optimal ARMS for linear utilities in $d=2$ that consists only of points on the convex hull of $D$.
\end{theorem}
\begin{proof}
First note that any non-convex hull point $s$ must be in the triangle spanned by two adjacent points $p_i, p_{i+1}$ on the convex hull and the origin. We will show it is always optimal to replace $s$ with either $p_i$ or $ p_{i+1}$.
Assume the optimal set is $R$ and includes $s$. Let $R' = R - \{ s \}$. First note that there is a point $t$ whose dual $t^*$ has the same slope $\tan \theta$ as the dual $s^*$ of $s$ and passes through the intersection $(\gamma, \beta)$ of $p_i^*$ and $p_{i+1}^*$ (as $s$ is not a convex hull point, $s^*$ is below this intersection, and so $t$ dominates $s$). We will show that adding either $p_i$ or $p_{i+1}$ to $R'$ is no worse than adding $t$, which is no worse than adding $s$.
If adding $t$ to $R'$ does not add to the $ahr$, then we are done as adding $p_i$ or $p_{i+1}$ cannot make $ahr$ worse. Otherwise, there is a set of values of $\alpha$ where $t$ is better than all points in $R'$ for $\wa$. This set must be a contiguous interval ($\alpha_1$, $\alpha_2$) since it is the intersection of the intervals where $t \cdot \wa > p \cdot \wa$ for each $p \in R'$. This is depicted in Fig. \ref{Fig:NoNonConvex} with shaded regions representing parts already covered by $R'$. Now, the increase to $ahr$ from adding $t$ to $R'$ is $ \Delta = \int_{\alpha_1}^{\alpha_2} (\tan \theta (\alpha - \gamma) + \beta - m_\alpha) c_\alpha d\alpha$ where $c_\alpha = \frac{\eta_\alpha}{\max_{p\in D} p \cdot \wa}$ and $m_\alpha = {\max_{p\in R'} p \cdot \wa}$.
As $t$ is under the section of the convex hull between $p_i$ and $p_{i+1}$, $\theta \in [\theta_i, \theta_{i+1}]$, where $\theta_i$ and $\theta_{i+1}$ are the slopes of $p_i^*$ and $p_{i+1}^*$ respectively. The maximum value of $\Delta$ must occur at a local maximum or at one of the endpoints. If it occurs at an endpoint, then adding $p_i$ or $p_{i+1}$ is no worse than adding $t$, as desired. Thus, we need only show that $\Delta$ has no local maxima.
Using Leibniz's Integral Rule, $ \frac{d \Delta}{d \theta}= \sec^2 \theta \int_{\alpha_1}^{\alpha_2} (\alpha - \gamma) c_\alpha d\alpha - (\tan \theta (\alpha_1 - \gamma) + \beta - m_{\alpha_1}) c_{\alpha_1} \frac{d \alpha_1}{d \theta} + (\tan \theta (\alpha_2 - \gamma) + \beta - m_{\alpha_2}) c_{\alpha_2} \frac{d \alpha_2}{d \theta} $. By definition of $\alpha_1$ and $\alpha_2$, both $\tan \theta(\alpha_1 - \gamma) + \beta - m_{\alpha_1}$ and $ \tan \theta(\alpha_2 - \gamma) + \beta - m_{\alpha_2}$ are $0$, so $ \frac{d \Delta}{d \theta}= \sec^2 \theta \int_{\alpha_1}^{\alpha_2} (\alpha - \gamma) c_\alpha d\alpha$. Now, assume there is a local maximum at $\theta =\theta^*$. We obtain $ \frac{d \Delta}{d \theta}\Bigr|_{\theta = \theta^*}= \sec^2 \theta^* \int_{\alpha_1}^{\alpha_2} (\alpha - \gamma) c_\alpha d\alpha = 0 \implies \int_{\alpha_1}^{\alpha_2} (\alpha - \gamma) c_\alpha d\alpha = 0$ (as $\sec^2 \theta^* \geq 1$). However, this would imply $ \frac{d \Delta}{d \theta}= 0$ for all values of $\theta$, implying that $\Delta$ is a constant function, contradicting the assumption that there is a local maximum, as needed.
\end{proof}
A consequence of this theorem is that only the convex hull points need to be considered. This leads to a straightforward $O(rn^2)$ dynamic programming algorithm, sketched below. Let $c$ be the number of points on the convex hull. Labelling the points of the convex hull as $p_1,p_2, \ldots, p_c$ in order of increasing x value (which implies increasing slope of their duals), and adding $(0,0)$ as $p_0$ for simplification, we have the recurrence relation $D[k][j] = \max_{0 \leq i<j} D[k-1][i] + H[i][j]$, where $H[i][j]$ is the increase in happiness from adding point $p_j$ to the set after most recently adding $p_i$, as illustrated in Fig. \ref{Fig:HCalc}. Any points added before $p_i$ do not affect the happiness increase from adding $p_j$, since $p_j^*$ intersects $p_i^*$ at a larger x coordinate than it intersects $p_{i'}^*$ for $i'<i$; this is implied by Lemma 5.1.
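The following Python sketch (ours) spells out this recurrence directly, assuming $H$ has been precomputed as a $(c{+}1)\times(c{+}1)$ table with $H[0][j]$ equal to $ahr(\{p_j\})$; the SMAWK-based speedup described later replaces the inner maximization:
\begin{verbatim}
NEG = float('-inf')

def arms_dp(H, c, r):
    # D[k][j]: best ahr using k points, the last (rightmost) being p_j.
    D = [[NEG] * (c + 1) for _ in range(r + 1)]
    D[0][0] = 0.0
    for k in range(1, r + 1):
        for j in range(1, c + 1):
            cand = [D[k - 1][i] + H[i][j] for i in range(j)
                    if D[k - 1][i] != NEG]
            D[k][j] = max(cand) if cand else NEG
    return max(D[r][1:])
\end{verbatim}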
\begin{lemma}
If $0 \leq i < j < k \leq c$, $I_{ik} \leq I_{jk} $ where $I_{ab}$ is the x coordinate of the intersection between $p_a^*$ and $p_b^*$.
\end{lemma}
\begin{proof}
First notice that for $a<b$, $\alpha < I_{ab} \implies \wa \cdot p_a > \wa \cdot p_b$. Similarly, $\alpha > I_{ab} \implies \wa \cdot p_a < \wa \cdot p_b$, and $\alpha = I_{ab} \implies \wa \cdot p_a = \wa \cdot p_b$.
Assume $I_{ik} > I_{jk}$ for the sake of contradiction. Then for $\alpha > I_{ik}$, we have $p_k \cdot \wa > p_j \cdot \wa$ since $\alpha > I_{ik} \implies \alpha > I_{jk}$. At $\alpha = I_{ik}$ , since $I_{ik} > I_{jk}$, we have $p_i \cdot \wa = p_k \cdot \wa > p_j \cdot \wa$. For $\alpha < I_{ik}$, we must have $p_i \cdot \wa > p_j \cdot \wa$ since $p_i \cdot \wa > p_j \cdot \wa$ at $\alpha=I_{ik}$ and the difference only increases as $\alpha$ decreases because the slope of $p_i^*$ is less than that of $p_j^*$. Thus, for any utility $\wa$, at least one of $p_i$ or $p_k$ will be better than $p_j$ for $\wa$, contradicting the fact that $p_j$ is a convex hull point (a convex hull point must be optimal for at least one utility function).
\end{proof}
We first show that $H[i][j]$ can be filled in $O(n^2)$ time. It is helpful to refer to Fig. \ref{Fig:HCalc}. Here, we label the x coordinates of the intersections of the duals on the upper envelope $I_1, I_2, \dots, I_{c-1}$, and set $I_0 = 0$ and $I_{c} = 1$ for simplicity. It can be seen that for a pair of points $p_i$, $p_j$, the added happiness (corresponding to the shaded area) can be divided by $x$ coordinate into two parts: A) the section between the intersection of $p_i^*$ and $p_j^*$ and the next intersection on the upper envelope ($I_{ij}$ to $I_2$ in the figure) and B) sections completely spanning a segment of the upper envelope (($I_2$, $I_3$) and ($I_3$, $I_4$) in the figure). Notice that the point $p_l$ optimizes $\wa$ for $\alpha \in [I_{l-1},I_l]$. Thus, the contribution to $ahr$ of a point $q$ over $[I_{a},I_b] \subset [I_{l-1},I_l]$ is simply $F(q, p_l, I_{a}, I_{b}) = \int_{I_{a}}^{I_{b}} \frac{q \cdot \wa}{p_l \cdot \wa} \ea d \alpha$. This has a closed form which can be computed in $O(1)$ if $\ea = 1$ (if $\alpha$ follows a uniform distribution). Then, part A of the added happiness is $F(p_j, p_k, I_{ij}, I_{k}) - F(p_i, p_k, I_{ij}, I_{k})$, where $I_k$ is the intersection immediately after $I_{ij}$, while part B is $\Sigma_{l=k+1}^{c} (F(p_j, p_l, I_{l-1}, I_{l}) - F(p_i, p_l, I_{l-1}, I_{l}))$, i.e., the happiness of $p_j$ minus that of $p_i$ in each region (since $p_j^*$ lies above $p_i^*$ to the right of $I_{ij}$). Part B can be calculated efficiently with prefix sums $S[i][k] = \Sigma_{l=i}^{k} F(p_i, p_l, I_{l-1}, I_{l})$. This is expressed as an algorithm in Algorithm \ref{Alg:HCalc}, where it can be seen that $F$ is computed $O(n^2)$ times, each representing one integral computation, while the other operations also take $O(n^2)$ time.
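For completeness, one way to derive the closed form for $\ea = 1$ (our derivation, not given in the cited works): writing $q \cdot \wa = A\alpha + B$ with $A = q^1 - q^2$ and $B = q^2$, and $p_l \cdot \wa = C\alpha + E$ analogously, we have, for $C \neq 0$,
\[F(q, p_l, I_a, I_b) = \int_{I_a}^{I_b} \frac{A\alpha + B}{C\alpha + E}\, d\alpha = \frac{A}{C}(I_b - I_a) + \frac{BC - AE}{C^2}\ln\frac{C I_b + E}{C I_a + E},\]
while for $C = 0$ the integrand is linear and integrates directly.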
\begin{algorithm}
\caption{Computing $H$}\label{Alg:HCalc}
\begin{algorithmic}[1]
\Require Convex hull points $p_1,p_2, \ldots, p_c$
\Ensure $H[i][j]$ for $j \geq i$
\State Compute the x coordinates of the intersections on the upper envelope as $I_1, I_2, \dots, I_c$.
\For{$i \gets 1$ to $c$}
\State $S[i][i] \gets F(p_i, p_i, I_{i-1}, I_i)$
\For{$k \gets i+1$ to $c$}
\State $S[i][k] \gets F(p_i, p_k, I_{k-1}, I_k) + S[i][k-1]$
\EndFor
\EndFor
\For{$i \gets 1$ to $c$}
\State $k \gets 0$
\For{$j \gets i+1$ to $c$}
\State Compute the x coordinate of the intersection between $p_i^*$ and $p_j^*$ as $I_{ij}$ (treating $p_0^*$ as the y axis to simplify).
\While {$I_{ij} \geq I_{k+1}$} \Comment{ $I_{ij}$ is increasing in $j$ (Lemma 5.1)}
\State $k \gets k+1$
\EndWhile
\State $A \gets F(p_j, p_k, I_{ij}, I_k) - F(p_i, p_k, I_{ij}, I_k)$
\State $B \gets (S[j][c]-S[j][k]) - (S[i][c]-S[i][k]) $
\State $H[i][j] \gets A + B$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
Now that we have shown $H[i][j]$ can be filled in $O(n^2)$ integral computations, the given recurrence already results in an $O(rn^2)$ algorithm. However, this can be further improved by noticing that $H[i][j]$ satisfies a \textbf{Quadrangle Inequality}, also called the \textbf{Inverse Monge} property \cite{MongeProp}:
\begin{figure}
\caption{Illustration of Quadrangle Inequality}
\centering
\includegraphics[width=0.4\textwidth]{img/MongeProofZoom.png}
\label{Fig:Quad}
\end{figure}
\begin{lemma}
For $0 \leq i< j < k < l \leq c$ , $ H[i][l] + H[j][k] \leq H[i][k] + H[j][l]$
\end{lemma}
\begin{proof}
As shaded in Fig. \ref{Fig:HCalc}, $H[i][j]$ is the happiness from covering the area between $p_i^*$ and $p_j^*$ to the right of their intersection. To simplify the proof, we treat the dual of $p_0=(0,0)$, $p_0^*$ as the y axis.
Let $A,B,C,D,E$ be the added happiness from covering each area as labelled in Fig. \ref{Fig:Quad} where $i < j < k < l$. We have $H[i][l] = A + B + C$, $H[j][k] = B+D $, $H[i][k] = B + C + D + E$ and $H[j][l] = A + B$. Then, $ (H[i][k] + H[j][l]) - (H[i][l] + H[j][k]) = E \geq 0$ $\implies H[i][l] + H[j][k] \leq H[i][k] + H[j][l]$.
\end{proof}
As shown in \cite{Monge}, this property allows us to fill out recurrences of the form $D[k][j] = \max_{0 \leq i<j} D[k-1][i] + H[i][j]$ for $ 1 \leq k \leq r, 1 \leq j \leq c$ in $O(rn)$. Essentially, the main idea is that this inverse Monge property allows for the application of the SMAWK algorithm \cite{smawk} to compute $D[k][j], 1 \leq j \leq c$ for each fixed $k$ in $O(n)$ time. This follows from the following facts:
\begin{itemize}
\item If $H$ fulfills the inequality in Lemma 5.2, then so does $H'_{k-1}$ where $H'_{k-1}[i][j] = D[k-1][i] + H[i][j]$ for $i< j$ ($ H[i][l] + H[j][k] \leq H[i][k] + H[j][l]\implies D[k-1][i] + H[i][l] + D[k-1][j] +H[j][k] \leq D[k-1][i] + H[i][k] + D[k-1][j] + H[j][l]$). As mentioned, in \cite{Monge}, only the elements above the diagonal are relevant. Nevertheless, we can set $H'_{k-1}[j][i] = j - i $ for $j<i$ to obtain a full inverse Monge matrix to simplify the proof without affecting the algorithm's correctness.
\item An \textbf{Inverse Monge} matrix is \textbf{Totally Monotone}, meaning that for each submatrix, the row indices of the maximum value in each column (taking the last row in case of ties) are non-decreasing. \cite{MongeProp}
\item The SMAWK algorithm can find the column maximums of a $n$ by $n$ \textbf{Totally Monotone} matrix in $O(n)$ \cite{smawk}.
\end{itemize}
Combining these ideas, we obtain Algorithm \ref{Alg:2DAHMS}. Finding the skyline and the convex hull points takes $O(n \log n)$ time. Computing $H$ takes $O(n^2)$ time. SMAWK is run $O(r)$ times, each run taking $O(n)$ time. Thus, the overall runtime is $O(n^2)$. Note that since the same optimal solution optimizes both the regret and happiness ratios, this exact algorithm optimizes both.
\begin{algorithm}
\caption{Exact Algorithm for 2D-ARMS with Linear Utilities}\label{Alg:2DAHMS}
\begin{algorithmic}[1]
\Require Dataset $D$ with $n$ points, size of ARMS $r$
\Ensure $R$, the ARMS
\State Find the convex hull of the skyline points $p_0, p_1, \dots, p_c$ sorted by x coordinate.
\State Compute $H$ as in Algorithm \ref{Alg:HCalc}
\For{$i \gets 1$ to $c$}
\State $D[1][i] \gets H[0][i]$ \Comment{$H[0][i]$ is equivalent to $ahr(\{p_i\})$}
\EndFor
\For{$k \gets 2$ to $r$}
\State Apply SMAWK to compute $D[k][i]$ for $1 \leq i \leq c$ from $H'_{k-1}$ as defined above. ($H'_{k-1}$ is not explicitly constructed, but its entries can be calculated on the fly as needed in SMAWK)
\EndFor
\State Find the $i$ that maximizes $D[r][i]$ and select the points that constructed it as $R$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Additive $\epsilon$-Approximation for 2D-ARMS with Linear Utilities}\label{Alg:Approx}
\begin{algorithmic}[1]
\Require Dataset $D$ with $n$ points, size of ARMS $r$, Additive approximation ratio $\epsilon$
\Ensure $R$, an $\epsilon$-approximate ARMS
\State Find the convex hull of the skyline points $p_0, p_1, \dots, p_c$ sorted by x coordinate.
\State Apply the \textbf{Additive Reduction Scheme} with factor $\epsilon$ on the convex hull and again find the convex hull of the reduced points to obtain $c'$ candidate points.
\State Compute $H$ with a modified version of Algorithm \ref{Alg:HCalc}, passing both the convex hull and the candidate points.
\State Construct $R$ from $H$ as in Algorithm \ref{Alg:2DAHMS}.
\end{algorithmic}
\end{algorithm}
Finally, we note that Algorithm \ref{Alg:2DAHMS} can be turned into an approximation algorithm, Algorithm \ref{Alg:Approx}, with additive approximation ratio $\epsilon$ running in $O(\frac{n}{\epsilon} + n \log n)$ time by applying the \textbf{Additive Reduction Scheme} (4.1.1) to obtain a list of at most $O(\frac{1}{\epsilon})$ candidate points (not $O(\frac{1}{\epsilon^2})$, since we only consider the skyline). The main modification is to the computation of $H$, where we must now also pass a list of candidate points along with the convex hull points --- $H$ becomes a $c' \times c'$ matrix, where $c'$ is the number of candidates. Specifically, we change the loops in lines 2, 6, and 8 of Algorithm \ref{Alg:HCalc} to loop over the candidate points instead of the whole convex hull. This changes the time complexity of computing $H$ to $O(\frac{n}{\epsilon})$.
This algorithm finds the set $R$ containing only candidate points with the maximum happiness (the proof follows nearly identically to the exact case, only changing the considered points in the calculations of $H$ and $D$). Thus, the approximation ratio can be proven as in Lemma 4.3, only replacing the max operators in the numerator with the expectation. Filling out $D$ now takes only $O(\frac{r}{\epsilon})$ time, so the overall runtime becomes $O(\frac{n}{\epsilon} + n \log n)$. As in Section 4, this approximation ratio for the \textbf{Additive Reduction Scheme} applies to both regret and happiness.
\section{Experimental Results}
This section presents experimental results for our reduction schemes and our approximation algorithms for ARMS. The algorithms were implemented in C++ and run on an Ubuntu 16.04 VirtualBox virtual machine with 12GB of RAM and six 2.5GHz Intel Core i7 processors. The scalability tests in particular were run on CentOS IBM X3650 M3 servers with two 6-core 2.66GHz CPUs and 48GB of RAM.
\subsection{Polynomial Time Reduction Schemes}
In this subsection, we discuss experimental results for applying our reduction schemes as a preprocessing step before applying existing heuristic algorithms for the 1-RMS problem.
The experiments were performed to evaluate the effect of applying the reduction schemes on the runtime and the achieved Minimum Happiness Ratio (MHR) of selected RMS solvers: DMM \cite{Asudeh:2017:ECR:3035918.3035932}, Geogreedy \cite{Peng2014}, Greedy \cite{Nanongkai2010}, and ImpGreedy \cite{Xie2018}. Note that since the MHR is equivalent to 1 minus the Maximum Regret Ratio (MRR), the solvers also optimize MHR.
We first evaluated the reductive power of both schemes at various values of $\varepsilon$ on a 5-dimensional dataset of $10^6$ points, as shown in Fig. \ref{Fig:red}. Both schemes are effective at reducing the dataset, especially for higher values of $\varepsilon$, with the dataset reduced by at least $80\%$ for $\varepsilon > 0.3$. As we consider multiplicative approximation to be the more common goal, we focus on the multiplicative scheme for the remainder of the experiments. As a simple heuristic, when mapping a point in the reduced dataset back to the original dataset, we select the original point with the maximum sum of coordinate values.
We conducted a scalability test varying $n$, with and without the multiplicative reduction scheme ($\varepsilon=0.3$), on randomly generated datasets produced as specified in \cite{Skyline}. Skyline queries were not applied in these experiments (as they should not be in $k$-RMS for $k>1$). These results are shown in Fig. \ref{Fig:VaryN}.
DMM \cite{Asudeh:2017:ECR:3035918.3035932} and GeoGreedy \cite{Peng2014} have prohibitively long runtimes for large datasets ($n \geq 10^7$) and are omitted from the plot in those cases.
\begin{figure}
\caption{Reduced Dataset Size vs $\varepsilon$ ($n = 1,000,000$, $d = 5$)}
\centering
\includegraphics[width=0.4\textwidth]{img/reduction.pdf}
\label{Fig:red}
\end{figure}
\begin{figure}
\vspace{-5mm}
\caption{Vary $n$ experiments ($d = 5$, $r = 50$)}
\vspace{-7mm}
\centering
\includegraphics[width=0.4\textwidth]{img/varyN.20210622-fixed.pdf}
\vspace{-18mm}
\label{Fig:VaryN}
\end{figure}
As seen in Fig. \ref{Fig:VaryN}, the multiplicative reduction scheme can reduce the required runtime of 1-RMS solvers by up to 92\% (from 27480s to 2138s) while keeping the MHR within 96\% of the original on the largest tested settings. However, for that same setting, the MRR increased by up to a factor of 8.8, which suggests that the reduction schemes are inappropriate for multiplicatively approximating MRR.
\subsection{Approximation Algorithm for ARMS}
\subsubsection{$1-\frac{1}{e}$-approximation algorithm for $ahr^*$}
Here, we experimentally compare the $O(drNn)$ time $1-\frac{1}{e}$-approximation algorithm for $ahr^*$ proposed in Section 5.2 with Greedy Shrink FAM \cite{Zeighami:2016:MAR:2882903.2914831}, the $O(dNn^3)$ time algorithm for ARMS, on a 5-dimensional NBA dataset of 17265 points, with 100 sampled linear utility functions. Both were implemented without any heuristics.
We show our results in Fig. \ref{Fig:AHMSvFAM}. Both algorithms achieve virtually identical average happiness ratios, while ours executes significantly faster, requiring at least two orders of magnitude less runtime. This reflects the superior time complexity of our proposed approximation algorithm; the two optimize equivalent objectives (minimizing regret and maximizing happiness).
\begin{figure}
\caption{Comparison with Greedy Shrink FAM}
\includegraphics[width = 0.45\textwidth]{img/NBAx_cropped.pdf}
\label{Fig:AHMSvFAM}
\vspace{-4mm}
\end{figure}
\begin{figure}
\caption{ARMS Vary $n$ experiments ($r = 5$, $d=7$, $N = 1000$)}
\centering
\includegraphics[width=0.45\textwidth]{img/SamplevaryNx_cropped.pdf}
\label{Fig:AHMS-VaryN}
\vspace{-4mm}
\end{figure}
We further conducted a scalability test of our algorithm to show its efficiency at various large values of $n$, shown in Fig. \ref{Fig:AHMS-VaryN}.
The datasets were generated as in \cite{Skyline}. Notably, our algorithm is able to handle datasets of size $10^6$ with more sampled utilities in significantly less time than Greedy Shrink FAM requires for only 17265 points.
\subsubsection{Algorithms for Linear Utilities in $d=2$}
\begin{figure}
\caption{2D ARMS Vary $n$ experiments ($r = 5$)}
\centering
\includegraphics[width=0.45\textwidth]{img/2dVaryN_cropped.pdf}
\label{Fig:AHMS-2D}
\vspace{-4mm}
\end{figure}
For our $d=2$ algorithm, the time complexity depends significantly on the number of convex hull points; however, this number can be much smaller than the skyline (for example, it is known that the expected number of vertices on the convex hull of $n$ points uniformly sampled from the unit square is $O(\log n)$ \cite{polytope}). Thus, we chose to generate input data by sampling points on the unit circle in the first quadrant. This guarantees that all points are on the convex hull and lets us show the empirical time complexity in terms of the convex hull size. Note that this can only make our algorithms' performance worse without affecting the performance of the naive DP algorithm \cite{Zeighami:2016:MAR:2882903.2914831}, which is agnostic to whether a point is on the convex hull. As for the utility function distribution, we set $\eta_\alpha=1$ for all $\alpha$ (the uniform distribution) so that the integrals have a closed form, as previously discussed.
We performed an experiment varying $n$ for the DP algorithm \cite{Zeighami:2016:MAR:2882903.2914831}, our Exact algorithm, and our approximation algorithms with $\epsilon = 0.01, 0.1,$ and $ 0.3$. This is shown in Fig. \ref{Fig:AHMS-2D}, omitting points for when an algorithm exceeds memory limits.
It can be seen that our Exact algorithm can handle significantly larger datasets than DP, being successful on $n=10^4$, while DP already exceeds memory limits at $n=10^3$. This reflects their respective memory complexities --- $O(n^3)$ vs. $O(n^2)$. In terms of Average Happiness Ratio (AHR), as both algorithms are exact, they achieve the same AHR when successful.
Meanwhile, our approximation algorithms handled the largest tested dataset of $10^7$ points, with the slowest requiring only 87 seconds. This shows that the additive reduction is successful at reducing the time and memory needed while only marginally affecting AHR (AHR $\geq$ 0.995 for $\epsilon=0.01$ on all tested sets). We also note that the time reduction between $\epsilon=0.1$ and $\epsilon=0.3$ is barely noticeable, while going from $\epsilon=0.01$ to $\epsilon=0.1$ reduces the time by roughly half. This reflects the time complexity $O(\frac{n}{\epsilon} + n \log n)$, with the $O(n \log n)$ term dominating once $\epsilon$ becomes sufficiently large.
\subsection{Experimental Summary}
For $k$-RMS, our multiplicative reduction scheme reduced the runtime of 1-RMS solvers by up to 92\% while keeping the MHR within 96\% of the original on the largest tested settings.
For the sampling-based approach for ARMS, our algorithm ran in less than 1\% of the time used by Greedy Shrink FAM while maintaining a nearly identical AHR on the largest settings. For the special case of linear utilities in $d=2$, our exact and approximation algorithms could handle datasets of $10^4$ and $10^7$ points respectively, whereas the existing DP algorithm failed on even $10^3$.
\section{Conclusion and Future Work}
We have studied the approximation of the happiness maximization version of regret minimizing set problems, resulting in multiple algorithms which come with stronger theoretical guarantees or better time complexities than existing algorithms.
For $k$-RMS, we have completely resolved the NP-Hardness of multiplicatively approximating $k$-happiness for all values of $d$ and $k$ by showing that it is multiplicatively approximable to any desired ratio for fixed $d$, but NP-Hard to multiplicatively approximate for unfixed $d$.
We then introduced dataset reduction schemes which we experimentally show to significantly improve the runtime of existing heuristic algorithms while mostly preserving the happiness ratio.
Finally, for ARMS, we have provided algorithms with approximation guarantees for the average happiness ratio at significantly improved time complexities. In particular, our algorithm for optimizing the average happiness of a sample is faster by a factor of $\Theta(n^2/r)$ than the previously proposed algorithm, while our exact algorithm for the special case of linear utilities in 2 dimensions improves on the previous one by a factor of $\Theta(n^2)$.
For future work, as the provided polynomial-time approximation schemes for $k$-RMS are intended primarily as theoretical tools, it remains open whether computationally feasible schemes exist.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{sec:introduction}
Chandola \& Kumar \cite{Anomaly} define \textit{anomaly detection} as ``\textit{the problem of finding patterns in data that do not conform to expected behavior}''. Anomaly detection is made especially challenging by the lack of access to anomalous patterns during training. \textit{One-class classification} \cite{DBLP:journals/corr/KhanM13}, in which a model is trained to recognize normal data and flag an anomaly when something fails to be recognized as normal, is a common approach to building anomaly detectors. As in any other machine learning task, however, training a model can be challenging when dealing with raw data as opposed to a set of well-behaved engineered features---for example, when working with image data \cite{zhou2017anomaly}, audio data \cite{rushe2019anomaly}, or streaming data \cite{ahmad2017unsupervised}. Shallow models particularly suffer from this issue, and it is common practice to employ deep learning in these scenarios.
In this paper we propose a deep anomaly detection model that can be trained in a fully end-to-end fashion, and is suitable for use with raw, high-dimensional data sources such as images and timeseries data from sensors. Our proposed model, the \emph{Deep Radial Basis Function Data Descriptor} (D-RBFDD) network, is based on our previous work on the Radial Basis Function Data Descriptor (RBFDD) network \cite{RBFDD}. RBFDD networks (which in turn are based on Radial Basis Function (RBF) networks \cite{Ethem,Bishop}) are effective, efficient anomaly detectors that learn a compact set of Gaussian kernels to cover the normal region of input space, and recognize anomalies as instances that sit outside this region. The positions of these learned kernels, and the magnitude of the weights connecting each of them to the output layer, also facilitate straight-forward post-hoc explanation of outputs \cite{jin2003extracting, xi2018interpretable, augusteijn2000constructing}.
Although RBFDD networks are effective, they are shallow neural networks with a single hidden layer and do not perform well when trained on raw data \cite{RBFDD}. We identify three ways to adapt RBFDD networks to work with raw input data: (1) using transfer learning, where the latent representation from a generic pre-trained network is used as input to the RBFDD network; (2) extending the first approach to include fine-tuning the pre-trained network as part of training the RBFDD network; and (3) adding multiple computational layers before the RBFDD network that are fully trained as part of training the network in an end-to-end fashion.
In this paper we explore the effectiveness of these three approaches, and show that the final approach that trains the entire network from random initialization---referred to as Deep RBFDD (D-RBFDD)---out-performs the other deepening approaches. This addresses a fundamental question of whether the latent representations learned by large models trained for multi-class classification are suitable as input for anomaly detection models, or whether they are too entangled with the multi-class classification problem. Our evaluation experiments---using anomaly detection scenarios constructed from well-known image classification datasets and a dataset from a real-world electrocardiogram (ECG) anomaly detection task---suggest that they are not suitable. Our experimental results also show that the D-RBFDD approach out-performs existing state-of-the-art anomaly detection approaches (including the shallow RBFDD approach) on the image datasets, while producing competitive results on the ECG dataset. D-RBFDD is therefore an effective anomaly detector trained in an end-to-end fashion, that has the advantages that come with an approach based on RBF networks: it is efficient, it lends itself to easy interpretation, and the local learning approach can adapt to dynamic definitions of normality and can accommodate changing distributions (i.e., concept drift when learning from streams).
The remainder of this article is structured as follows. In Section \ref{Related Work} we describe common approaches to anomaly detection including deep learning methods. In Section \ref{RBFDD networks}, we describe the RBFDD network approach and illustrate different methods for deepening it. The setup of our experiments is described in Section \ref{Experiment Setup}, and the results are discussed in Section \ref{Results and Discussions}. Finally, Section \ref{Conclusion and Futurework} concludes the paper and discusses some directions for future work.
\section{Related Work}\label{Related Work}
Machine learning approaches to anomaly detection are dominated by a family of algorithms that adapt the Support Vector Machine (SVM) \cite{Ethem} algorithm to work with only examples of a single class: One-Class SVM (OCSVM) \cite{OCSVM}. In fact Khan \& Madden \cite{DBLP:journals/corr/KhanM13} go so far as to say that one-class classification approaches should be divided into two categories: OCSVMs and everything else!
Much like the standard SVM approach, OCSVM models separate normal data points from the origin in the feature space using a hyper-plane, while maximizing the distance between the origin and this hyper-plane. Although any kernel function can be used, Gaussian kernels work particularly well for OCSVM models \cite{DBLP:journals/corr/KhanM13}. The main issue with OCSVM models is that they do not scale well. For large datasets the computational and storage requirements of OCSVMs grow polynomially with dataset size \cite{Awad2015}. Variations of the OCSVM approach include the Support Vector Data Description (SVDD) \cite{Tax:2004:SVD:960091.960109}, which uses hyper-spheres rather than hyper-planes to achieve separation. The goal is to find a spherically shaped boundary encompassing the normal data.
On the non-OCSVM side, Isolation Forests (iForests) \cite{iForest} and Auto-Encoder neural networks (AENs) \cite{GoodFellowDeepLearning} (and their many variations) are effective anomaly detectors. An iForest model isolates individual data points in a training set by constructing a decision tree that splits the input space randomly and repeatedly. The intuition behind this approach is that fewer splits should be required to isolate anomalous instances than normal ones. An auto-encoder network (AEN) is a neural network that learns a generative model of input data by transforming it into a representation with reduced dimensionality, and then reconstructing the original input data from this representation. If an AEN is trained to reconstruct only \emph{normal} data instances, it can detect anomalies by flagging test instances for which the reconstruction error is very high. Deep auto-encoders have been shown to be effective on problems with raw data inputs \cite{an2015variational,zhao2017spatio}.
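As a minimal sketch of this detection rule (ours; \texttt{reconstruct} stands for the forward pass of any trained auto-encoder):
\begin{verbatim}
import numpy as np

def anomaly_scores(reconstruct, X):
    # A model trained only on normal data should reconstruct normal
    # instances well, so a large per-instance reconstruction error
    # suggests an anomaly.
    X_hat = reconstruct(X)
    return np.mean((X - X_hat) ** 2, axis=1)

# Flag test instances whose score exceeds a threshold tau chosen
# using normal data only:  flags = anomaly_scores(f, X_test) > tau
\end{verbatim}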
Deep learning approaches for anomaly detection can be categorized as either \textit{mixed approaches} or \textit{end-to-end approaches}. In \emph{mixed approaches} a deep model is trained in an unsupervised way to work as a feature extractor that produces the data for a (typically shallow) anomaly detector, such as an OCSVM. The deep models used to learn features tend to be reconstruction-based models such as Deep Belief Networks (DBNs) or deep AENs \cite{RecognitionOfGeochemicalAnomaliesUsingADeepAutoencoderNetwork, OnAccurateandReliableAnomalyDetectionforGasTurbineCombustorDeepLearningApproach, AdeepLearningApproachforDetectingMaliciousJavaScriptCode}. For example, in \cite{erfani2016high} an unsupervised DBN is trained to extract generic underlying features, and an OCSVM is trained using these features. The mixture of a DBN and an OCSVM is shown to out-perform a standalone OCSVM.
Similarly, in \cite{marir2018distributed}, in order to detect anomalous behavior in large-scale network traffic data, a DBN model is trained as an unsupervised dimensionality reduction step, whose output features are then fed into a multi-layered ensemble Support Vector Machine (SVM). In \cite{xu2017detecting}, a fully unsupervised model is proposed for detecting anomalous frames in video. For every frame of the video, appearance and motion features are extracted and fed into two separate Stacked Denoising Auto-Encoders (SDAEs). A fusion of these two types of features is fed into a third SDAE. The features obtained in the bottle-neck layers of the three SDAEs are then fed into three OCSVMs, each of which produces an anomaly score. The three anomaly scores are combined to make the final decision for an input video frame.
The main issue with mixed approaches is that the deep feature extractor is not trained for an anomaly detection objective, but on a different objective such as minimizing the reconstruction error. As a result, the features learned may not be useful inputs for the anomaly detection model. \textit{End-to-end approaches} address this issue, and aim to make the features extracted in latent representations more appropriate for the anomaly detection task by defining a one-class loss function. The loss function is then used to train an entire network in an end-to-end fashion, guiding the network to produce features that are appropriate for anomaly detection. For example, in \cite{zong2018deep} the Deep Auto-encoding Gaussian Mixture Model (DAGMM) is proposed as an end-to-end approach to anomaly detection. An AEN is used to reduce the dimensionality of the data, while the reconstruction error and the low-dimensional representation from the bottle-neck of the AEN are fed into a Gaussian Mixture Model (GMM) \cite{Ethem}. Similarly, in \cite{chalapathy2018anomaly} a One-Class Neural Network (OCNN) for anomaly detection is proposed. OCNN combines the rich feature extraction property of deep neural networks with a proposed OCSVM-like cost function. First, a deep AEN is trained to produce good representative features of the input data. Next, the encoder portion of this pre-trained AEN is fed into a simple one-layer neural network, the ultimate output of which is used to compute the cost. The weights of both the encoder and the one-layer neural network are trained simultaneously while minimizing the cost function. By combining the capability of deep neural networks to extract progressively rich features from the data with the proposed cost function, the aim is to obtain the hyper-plane that separates the normal data from the origin.
AnoGAN \cite{schlegl2017unsupervised} is a deep model for anomaly detection based on Generative Adversarial Networks (GANs) \cite{GoodFellowDeepLearning}. The generator network is trained to learn the distribution of the training data. Given a test instance, AnoGAN searches for a point in the latent space of the generator that would generate a sample that is closest to the test point. If an accurate distribution of the normal data has been learned, for a normal query, $x$, there should be a representation, $z$, in the latent space that the generator could use to generate a new data point, $G(z)$, that is similar to the normal query $x$. For an anomalous query a good representation, $z$, should not be found and, as a result, the generated data, $G(z)$, will not be similar to the query.
Finally, inspired by the Support Vector Data Description (SVDD) model \cite{Tax:2004:SVD:960091.960109}, Deep-SVDD \cite{pmlr-v80-ruff18a} is another deep one-class neural network designed for anomaly detection. While the neural network is trained, the volume of a hypersphere that envelops the data in the latent space is minimized. Thus, the neural network is trained to map the input data into a hypersphere with minimum volume. There are two versions of Deep SVDD: (1) Soft-boundary Deep SVDD, which makes a compromise between the volume of the hypersphere and violations of the boundary; and (2) One-Class Deep SVDD, which is a simpler version that assumes that most of the training data is normal (i.e., a low proportion of outliers).
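For illustration, the One-Class Deep SVDD objective can be sketched as follows (ours, in PyTorch; the weight-decay term and the choice of the center $c$, e.g., as the mean of an initial forward pass over the normal training data, are omitted):
\begin{verbatim}
import torch

def deep_svdd_oc_loss(z, c):
    # z: (N, K) latent representations; c: (K,) hypersphere center.
    # Minimizing the mean squared distance to c pulls the latent
    # representations of normal data into a small hypersphere.
    return torch.mean(torch.sum((z - c) ** 2, dim=1))
\end{verbatim}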
The effectiveness of end-to-end approaches, and in particular end-to-end approaches optimized using a loss function with a direct anomaly detection objective, rather than one based on reconstruction error, motivates our proposed D-RBFDD network. Moreover, it is desirable for anomaly detectors to be both interpretable and adaptable to new data and changing concepts of normality. However, these characteristics are not easily associated with SVMs \cite{ratsch2006learning,1318049,10.1145/3357384.3357816}, or approaches built upon an SVM foundation \cite{nguyen2018scalable}. On the other hand, because of their local learning approach, RBF networks easily lend themselves to interpretation \cite{sendhoff2000extracting, jin2003extracting}, and are adaptable to changing concepts \cite{liu2020fast,han2011efficient}. This makes a deep end-to-end anomaly detector based on RBF networks, such as D-RBFDD, an attractive approach. D-RBFDD is a fast, effective anomaly detector, trained in an end-to-end fashion, capable of learning latent representations directly aligned with an anomaly detection objective, and one that lends itself to easy interpretation and adaptation to changing concepts of normality.
\section{Deep Radial Basis Function Data Descriptor (D-RBFDD) Networks}\label{RBFDD networks}
This section describes the RBFDD network and three alternatives for adding depth to these networks, the last of which we refer to as the Deep RBFDD (D-RBFDD) network.
\subsection{Radial Basis Function Data Descriptor (RBFDD) Networks}
In our previous work \cite{RBFDD} we proposed the RBFDD network, which is a modification of the Radial Basis Function (RBF) network that enables it to be used for anomaly detection. An RBF network is a local-representation learning technique used for classification that divides the input space among a set of local kernels. In an RBF network, for every input data point, depending on where in the input space it appears, a fraction of these locally-tuned kernel units get activated. Activation is measured using a function of the distance between an input instance, \(X\), and the center, \(\mu_h\), of every kernel unit \(h\). Distance is typically measured with Euclidean distance, \norm{X - \mu_h}, and the activation function for the local kernels is usually implemented using a Gaussian function:
\begin{equation}
P_h(X) = \exp\left({-\frac{\norm{X - \mu_h}^2}{2s_h^2}}\right)
\end{equation}
\noindent where \(\mu_h\) and \(s_h\) denote the mean and standard deviation of the local unit \(h\). Activation is maximum when \(X = \mu_h\), and decreases as \(X\) and \(\mu_h\) diverge.
RBFDD networks adapt the RBF network approach to learn a set of Gaussian kernels that compactly represent normal instances in a training set, thus transforming them into anomaly detectors. A trained RBFDD network can be used as an anomaly detector by recognizing instances that are not covered by this compact representation. Figure~\ref{fig:RBFDD Network} shows the architecture of an RBFDD network. Here \(x_d\) denotes the \(d^{th}\) feature in the input data \(X\), which is a \(D\)-dimensional vector. In the output node of the network the \(tanh\) non-linear activation function proposed in \cite{LecunEfficientBackprop} is used, as it avoids saturation. In particular, for a given \(D\)-dimensional input instance, $X_i$, the output of the model is computed as:
\begin{equation}\label{RBFDD output}
y_i = 1.7159 \times tanh\left(\frac{2\times z(X_i)}{3}\right)
\end{equation}
\begin{equation}\label{Z(X)}
z(X_i) = \sum_{h=1}^{H} w_h \times P_h(X_i)
\end{equation}
\noindent where $w_h$ is a weight connecting the Gaussian kernel $h$ to the output unit.
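As an illustrative sketch (ours, in PyTorch), the forward pass defined by the equations above can be written as:
\begin{verbatim}
import torch

def rbfdd_forward(X, mu, s, w):
    # X: (N, D) inputs; mu: (H, D) kernel centers;
    # s: (H,) spreads; w: (H,) output weights.
    d2 = torch.cdist(X, mu) ** 2               # ||X - mu_h||^2
    P = torch.exp(-d2 / (2 * s ** 2))          # Gaussian activations
    z = P @ w                                  # z(X) = sum_h w_h P_h(X)
    return 1.7159 * torch.tanh(2.0 * z / 3.0)  # scaled tanh output
\end{verbatim}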
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.65]{Images/RBFDD.pdf}
\caption{The RBFDD network.}
\label{fig:RBFDD Network}
\end{center}
\end{figure}
After training, the output of the RBFDD network, \(y_i\), for a given input, \(X_i\), should be high if \(X_i\) belongs to the normal region of the input space and low otherwise. In the RBFDD network, the unsupervised pre-training phase used to train RBF networks \cite{Bishop} (i.e., \(k\)-means clustering \cite{Ethem}) remains in place. Following this step, the backpropagation of error algorithm is used with gradient descent to find the optimal values for the network parameters. In this process the following cost function is minimized for mini-batches of size \(N\):
\begin{equation}
\label{eq:RBFDD error}
E = \sum\limits_{i=1}^{N}\left(\frac{1}{2}\left[\left(1 - y_i\right)^2 + \beta \sum\limits_{h=1}^{H} s_{h}^2 + \lambda \sum\limits_{h=1}^{H}{w_h^2} \right]\right)
\end{equation}
This cost function is a weighted summation of three terms. In the first term, \((1 - y_i)^2\), \(y_i\) is the output of the RBFDD network for a given \(D\)-dimensional input data instance, $X_i$. This term encourages the network to learn a model that outputs a value as close as possible to 1 for instances belonging to the normal class. The second term regularizes the spreads of the Gaussian kernels in the hidden layer of the network, and is the squared L-2 norm \cite{GoodFellowDeepLearning} of the spreads for the \(H\) Gaussians. This encourages the most compact set of Gaussians possible to represent the normal data. The third term regularizes the weights connecting the RBFDD hidden layer units to the output unit. This stops the weights from becoming so large that they would effectively ignore the outputs from the hidden units, and makes the RBFDD network robust to outliers in the training set \cite{GoodFellowDeepLearning}. Minimizing Eq.\eqref{eq:RBFDD error} using gradient descent finds the most compact set of Gaussian kernels whose collective output is still high for the normal region of the input space and low everywhere else (i.e., where the anomalies are expected to appear).
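A minimal sketch of this mini-batch cost (ours, in PyTorch; note that, as in Eq.\eqref{eq:RBFDD error}, the two regularizers appear inside the sum over the batch):
\begin{verbatim}
import torch

def rbfdd_loss(y, s, w, beta, lam):
    # y: (N,) network outputs; s: (H,) spreads; w: (H,) weights.
    reg = beta * torch.sum(s ** 2) + lam * torch.sum(w ** 2)
    return 0.5 * torch.sum((1.0 - y) ** 2 + reg)
\end{verbatim}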
RBFDD networks use radial kernels and thus might lack the necessary flexibility to learn certain distributions. To overcome this limitation of RBFDD, we previously proposed the Elliptical Basis Function Data Descriptor (EBFDD) \cite{bazarganielliptical}, in which we make the anomaly detector more flexible by replacing the radial kernels with elliptical kernels. EBFDD was shown to perform better than RBFDD, but at significantly increased computational cost (EBFDD requires a covariance matrix inversion, which is a very expensive operation). We believe, however, that the same flexibility can be achieved by adding computational layers before the RBFDD layer to transform the representation into a space where RBFDD can be applied effectively and, due to the lower computation time of RBFDD compared to EBFDD, also efficiently. Thus, we avoid the computational complexity of EBFDD networks by adding more layers, while retaining the capacity of the deep model to learn complex distributions in the normal data. Although the RBFDD network is an effective anomaly detector when used with well-behaved sets of input features, it does not perform well on high-dimensional raw data representations (e.g., pixel values in images or raw sensor data). This is the main motivation for deepening the structure of the RBFDD network so that we can apply it to anomaly detection problems with raw high-dimensional input data. The next section describes different alternatives for placing extra computational layers before the RBFDD network.
\subsection{Deepening RBFDD Networks}\label{Deep RBFDD}
We explore three ways to add depth to RBFDD networks (illustrated in Figure~\ref{fig:Architectures of deep networks}):
\begin{itemize}
\item \emph{Transfer learning} \cite{weiss2016survey} is exploited and the latent representation produced at the final layer of a network that is already trained on a large generic dataset is presented to the RBFDD network.
\item The latent representation from a pre-trained network, such as that described above, can be \emph{fine-tuned} using the cost function optimized when training an RBFDD network.
\item Computational layers placed before the RBFDD network can be trained from random initialization as part of the overall \emph{end-to-end} deep RBFDD training process.
\end{itemize}
\begin{figure}[tbh]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Images/Fixed-Res.pdf}
\caption{Fix-Res + RBFDD}
\label{fig:Fixed ResNet-18 + RBFDD network}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Images/Fine-Res.pdf}
\caption{Fine-Res + RBFDD}
\label{fig:Fine-tuned ResNet-18 + RBFDD network}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Images/LeNet-RBFDD.pdf}
\caption{Deep RBFDD}
\label{fig:LeNet-5 + RBFDD network}
\end{subfigure}
\caption{Three approaches to deepening RBFDD networks. The red lines highlight the fixed portion of each model.}\label{fig:Architectures of deep networks}
\end{figure}
For the approach using transfer learning, referred to as \textit{Fix-Res + RBFDD}, we use a fixed ResNet-18 model \cite{he2016deep} pre-trained on the ImageNet dataset, and extract the latent representation after its last convolutional layer for each data instance. This representation is then passed to a standard RBFDD model. In the second approach, referred to as \textit{Fine-Res + RBFDD}, again we connect a pre-trained ResNet-18 model to an RBFDD model. In this case, however, we fine-tune the last 4 convolutional layers and the last 4 batch-norm layers of the pre-trained ResNet-18 model as part of training the RBFDD model using the cost function in Eq.\eqref{eq:RBFDD error}. This fine-tuning step is expected to make the latent representation passed to the RBFDD model more appropriate for anomaly detection, and improve its overall performance.
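A sketch of this kind of partial freezing (ours, in PyTorch; for brevity it unfreezes the final residual stage rather than exactly the last 4 convolutional and batch-norm layers):
\begin{verbatim}
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)  # ImageNet weights
backbone.fc = nn.Identity()    # expose the latent representation

for p in backbone.parameters():            # freeze everything ...
    p.requires_grad = False
for p in backbone.layer4.parameters():     # ... except the last stage
    p.requires_grad = True
\end{verbatim}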
The final approach, which we refer to as the Deep RBFDD (D-RBFDD) network, attaches randomly initialized computational layers before the RBFDD layer and trains the entire network in an end-to-end fashion by minimizing the cost function in Eq.\eqref{eq:RBFDD error}. Provided that the cost function generates sufficient signal to train the entire deep model, the advantage of this method is that, through end-to-end training, the latent representation passed to the RBFDD layer will be more suited to anomaly detection than the representation generated by the pre-trained classification network, even when it is fine-tuned. In D-RBFDD we add layers following the simple LeNet-5 network architecture \cite{lecun1998gradient} before the RBFDD layer. The overall D-RBFDD network architecture is illustrated in Figure~\ref{fig:D-RBFDD}.\par
To facilitate the application of \(k\)-means clustering in the pre-training phase of training an RBFDD network, we apply a \(tanh\) non-linear activation \cite{LecunEfficientBackprop} to the latent representations coming into the RBFDD layer. This ensures that the \(k\)-means algorithm is provided with a bounded latent representation and leads to better model initialization.
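A minimal sketch of the resulting assembly (ours, in PyTorch; the layer sizes are illustrative and assume $28\times 28$ grey-scale inputs):
\begin{verbatim}
import torch.nn as nn

class DeepRBFDD(nn.Module):
    # LeNet-style feature extractor ending in a bounded (tanh)
    # latent representation that is fed to the RBFDD layer.
    def __init__(self, rbfdd_layer):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.Tanh(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.Tanh(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 4 * 4, 84), nn.Tanh(),
        )
        self.rbfdd = rbfdd_layer  # Gaussian kernels + scaled tanh

    def forward(self, x):
        return self.rbfdd(self.features(x))
\end{verbatim}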
\begin{figure*}[t]
\label{D-RBFDD Architecture}
\begin{center}
\includegraphics[scale=.55]{Images/DRBFDD.pdf}
\caption{The Deep RBFDD Anomaly Detector}
\label{fig:D-RBFDD}
\end{center}
\end{figure*}
Deepening the RBFDD network should improve its performance and make it a stronger anomaly detector, in particular when dealing with raw, high-dimensional input data.
\section{Experimental Setup}\label{Experiment Setup}
We have designed an experiment to evaluate the performance of the different approaches to deepening RBFDD networks, and to compare these to state-of-the-art anomaly detection approaches\footnote{The code for the D-RBFDD network is available at: https://github.com/MLDawn/DRBFDD}.
We include the following state-of-the-art anomaly detection approaches in this experiment: One-class Deep Support Vector Data Descriptor (DeepSVDD-OC) networks, Soft-boundary Deep Support Vector Data Descriptor (DeepSVDD-SB) networks, One-Class Support Vector Machines (OCSVMs), Isolation Forest (iForest), RBFDD networks, and deep Convolutional Auto-Encoders (CAEs). In all cases only \emph{normal} data is used during model training.
To further investigate how effectively representations can be transferred from pre-trained classification networks to anomaly detection tasks, in the case of the OCSVM and iForest models (as well as RBFDD networks), we have also considered the scenario where the latent representation learned by a pre-trained classification network is used as input. We also use the latent representation learned by the version of RBFDD that fine-tunes the pre-trained classification model representation as input to these shallow models to better understand the impact of fine-tuning.
\subsection{Datasets \& Anomaly Detection Scenarios}\label{datasets}
We use two well-known labelled image classification datasets, MNIST \cite{lecun-mnisthandwrittendigit-2010} and Fashion MNIST \cite{xiao2017/online}, to explore the effectiveness of the three approaches to deepening RBFDD networks.
We generate multiple anomaly detection scenarios using these datasets. In each scenario we consider one class as normal and another class as anomalous. These pairs of classes (shown in Table \ref{Experiment Results on the classification datasets}) were selected to cover simple and challenging scenarios. For example, for MNIST we have a simple scenario where digit 0 is considered normal and digit 1 is anomalous, and similarly for Fashion MNIST we have a scenario where \emph{T-shirts/tops} are normal and \emph{boots} are anomalous. Images from these pairs of classes are easily discernible, and we expect to see high performance across most of the algorithms. We also have more challenging scenarios. For instance, from MNIST we have a scenario where digit 4 is normal and digit 9 is anomalous, and for Fashion MNIST we have a scenario where \emph{T-shirts/tops} are normal and \emph{shirts} are anomalous. These pairs of images are not easy to separate as they are so similar.
We also use the MIT-BIH Arrhythmia Database\footnote{https://physionet.org/content/mitdb/1.0.0/} \cite{932724,goldberger2000physiobank}, a real-world highly-imbalanced anomaly detection timeseries dataset, to compare the performance of the D-RBFDD network with state-of-the-art algorithms. This dataset contains 48 half-hour excerpts of two-channel ambulatory ECG recordings obtained from 47 subjects. We pre-processed the data to reduce the dimensionality by down-sampling from 360Hz to 187Hz and to extract individual heart-beats, each of which has an associated ground-truth label in the dataset\footnote{The peaks of each heartbeat are labelled in this dataset. Following the approach in \cite{xu2018towards}, we considered the mid-point between every two consecutive peak values to be the border between two consecutive heart-beats. All extracted heart-beats are zero-padded/truncated to a length that is higher than 95\% of the extracted heart-beat lengths, that is, 417.}. Out of the 19 anomalous classes in the dataset, we use the 4 most common to make four binary anomaly detection scenarios, and a \emph{One vs. All} scenario in which all of these 4 anomalous classes are considered together.
\begin{table*}[!htb]
\centering
\caption{Experiment results on the MNIST and Fashion MNIST datasets. Each column is labelled N-A, where N = normal class and A = anomalous class. The values in each cell are AUC scores followed by relative rank in parentheses. The average rank per algorithm is given in the last column. The labels for Fashion MNIST are: T: T-shirts/tops, B: Ankle boots, S: Shirts, Sn: Sneakers, and Sa: Sandals.}
\label{Experiment Results on the classification datasets}
\adjustbox{max width=\textwidth}{
{\renewcommand{\arraystretch}{1.2}%
\begin{tabular}{l |r r r r r r| r r r r | r}
\hline
& \multicolumn{6}{|c|}{MNIST} & \multicolumn{4}{c|}{Fashion MNIST}&
\tabularnewline
& 0 - 1 & 7 - 1 & 4 - 9 & 7 - 9 &9 - 4& 9 - 7& T- B & T - S & Sa - Sn & B - Sa & Avg. Rank\tabularnewline
\hline
iForest & 0.9648 (\,~7) & 0.7725 (12)& 0.6296 (\,~7)& 0.7948 (\,~5)& 0.7484 (\,~6)& 0.7355 (\,~6)& 0.9963 (\,~3)& \textbf{0.8182} (1)& 0.5394 (10)& 0.9536 (\,~7) & 6.4 (\,~6)\tabularnewline
Fix-Res + iForest & 0.4795 (14)& 0.4608 (14)& 0.5904 (12)& 0.5939 (14)&0.5595 (14)& 0.5941 (14)& 0.6212 (14)&0.5614 (13)& 0.5304 (11)& 0.4782 (14) & 13.4 (14)\tabularnewline
Fine-Res + iForest & 0.7609 (11)& 0.9698 (\,~5)& 0.5807 (14)& 0.6910 (\,~9)& 0.6898 (10)& 0.6566 (11)& 0.9761 (10)& 0.4585 (14)& 0.4957 (14)& 0.7753 (10) & 10.8 (12)\tabularnewline
\hline
OCSVM & 0.9962 (\,~3)& 0.9623 (\,~6)& 0.8320 (\,~2)& \textbf{0.9209} (\,~1)& 0.9245 (\,~2)& 0.9125 (\,~2)& 0.9967 (\,~2)& 0.7872 (\,~6)& 0.5708 (\,~9)& 0.9708 (\,~5) & 3.8 (\,~3)\tabularnewline
Fix-Res + OCSVM & 0.5506 (13)& 0.6924 (13)& 0.5808 (13)&0.6098 (13)& 0.6035 (11)& 0.6268 (13)& 0.8981 (12)& 0.5992 (12)& 0.5100 (12)& 0.6011 (13) & 12.5 (13)\tabularnewline
Fine-Res + OCSVM & 0.8542 (\,~9)& 0.9746 (\,~3)& 0.5975 (11)& 0.7181 (\,~7) &0.7119 (\,~8)& 0.6850 (\,~9)& 0.9780 (\,~8)& 0.6429 (11)& 0.5036 (13)& 0.7232 (12) & 9.1 (\,~9)\tabularnewline
\hline
RBFDD & \textbf{0.9988} (\,~1)& 0.9722 (\,~4)& 0.7585 (\,~3)& 0.8187 (\,~4)& 0.8069 (\,~5)& 0.8583 (\,~5)& 0.9954 (\,~4)& 0.7898 (\,~5)& 0.6562 (\,~6)& 0.9830 (\,~4) & 4.1 (\,~4)\tabularnewline
Fix-Res + RBFDD & 0.7923 (10)& 0.8332 (11)& 0.6273 (8)& 0.6157 (12)& 0.6941 (\,~9)& 0.6714 (10)& 0.9740 (11)& 0.6881 (9)& 0.6197 (8)& 0.7299 (11) & 9.9 (11)\tabularnewline
Fine-Res + RBFDD& 0.9422 (\,~8)& 0.9119 (\,~9)& 0.6217 (\,~9)& 0.6612 (11)& 0.7236 (\,~7)& 0.7152 (\,~7)& 0.9901 (\,~7)& 0.8055 (\,~2)& \textbf{0.7310} (\,~1)& 0.8431 (\,~9) & 7.0 (\,~7)\tabularnewline
\hline
CAE-1 & 0.7000 (12)& 0.8964 (10)& 0.7137 (\,~6)& 0.8934 (\,~3)& 0.5688 (13)& 0.6419 (12)& 0.8426 (13)& 0.6823 (10)&0.6379 (\,~7)&0.8875 (\,~8) & 9.4 (10)
\tabularnewline
CAE-2 & 0.9914 (\,~5)& 0.9514 (\,~7)& 0.6110 (10)& 0.6614 (10)& 0.5843 (12)& 0.7139 (\,~8)& 0.9764 (\,~9)&0.7378 (\,~8)&0.7013 (\,~4)& 0.9647 (\,~6) & 7.9 (\,~8)\tabularnewline
\hline
DeepSVDD-OC & 0.9906 (\,~6)& \textbf{0.9943} (\,~1)& 0.7455 (\,~4)& 0.7071 (\,~8)& 0.9140 (\,~3)& 0.8950 (\,~4)& 0.9950 (\,~6)& 0.8001 (\,~4)& 0.6567 (\,~5) & 0.9871 (\,~3) & 4.4 (\,~5)\tabularnewline
DeepSVDD-SB & 0.9957 (\,~4)& 0.9916 (\,~2)& 0.7365 (\,~5)& 0.7417 (\,~6)& 0.9132 (\,~4)& 0.8969 (\,~3)& 0.9951 (\,~5)& 0.8020 (\,~3)& 0.7023 (\,~3)& \textbf{0.9896} (\,~1) & 3.6 (\,~2) \tabularnewline
\hline
D-RBFDD & 0.9981 (\,~2)& 0.9512 (\,~8)& \textbf{0.8450} (\,~1)& 0.8971 (\,~2)& \textbf{0.9480} (\,~1)& \textbf{0.9137} (\,~1)& \textbf{0.9987} (\,~1)& 0.7459 (\,~7)& 0.7161 (\,~2)& 0.9887 (\,~2) & \textbf{2.7} (\,~1)\tabularnewline
\hline
\end{tabular}}
}
\end{table*}
\begin{table*}[ht]
\centering \caption{Experiment results on the MIT-BIH Arrhythmia dataset. The label of each anomalous class is given at the top of the columns (for the label descriptions see supplementary materials). The values of each cell are AUC scores followed by the relative rank in parentheses. The average rank per algorithm is given in the last column.}\label{Experiment Results on Anomaly Detection Datasets}
%
{\renewcommand{\arraystretch}{1.0}%
\begin{tabular}{l r r r r r| r}
\hline
& L & R & V & / & One vs. All &Avg. Rank\tabularnewline
\hline
iForest & 0.5743 (8)& 0.7118 (8) & 0.6819 (8) & 0.7713 (8)& 0.6808 (8) & 8.0 (8)\tabularnewline
\hline
OCSVM & 0.6684 (5) & 0.7582 (6) & 0.8647 (5)& 0.8591 (6)& 0.7830 (5) & 5.4 (5)\tabularnewline
\hline
RBFDD & 0.7002 (4) & 0.8182 (4)& 0.8722 (4)& 0.8947 (5)& 0.8043 (4) & 4.2 (4)\tabularnewline
\hline
CAE-1 & 0.6331 (6) & 0.7525 (7)& 0.7416 (6)& 0.8139 (7)& 0.7174 (6) & 6.4 (7)\tabularnewline
CAE-2 & 0.5994 (7) & 0.8139 (5)& 0.7407 (7)& 0.9395 (1)& 0.7023 (7) & 5.4 (5)\tabularnewline
\hline
DeepSVDD-OC & 0.7700 (3)& 0.8352 (3)& 0.9241 (3)& 0.9187 (4) & 0.8324 (3) & 3.2 (3)\tabularnewline
DeepSVDD-SB & 0.7835 (1) & 0.8596 (1)& 0.9363 (1)& 0.9316 (2) & 0.8346 (2) & 1.4 (1)
\tabularnewline
\hline
D-RBFDD & 0.7723 (2) & 0.8458 (2)& 0.9361 (2)& 0.9261 (3)& 0.8507 (1) & 2.0 (2)\tabularnewline
\hline
\end{tabular}}
\end{table*}
In all experiments, the models are trained using \textit{only} instances of the normal class, and during testing we provide unseen samples from both normal and anomalous classes to measure the performance of the different models. For all the datasets, feature values have been normalized to $[0,1]$. In particular, the normalization for both the MNIST and Fashion MNIST datasets is done by dividing individual pixel values by 255, as this is the maximum pixel value for grey-scale images. For the MIT-BIH Arrhythmia dataset, the sample values range from 0 to 2047 inclusive, with 1024 (i.e., the mid-point) corresponding to zero. This is because a resolution of 11 bits was used at the digitization step, resulting in $2^{11}$ levels, which are the actual values of the signal in this dataset. Thus, normalization is done by dividing individual values in the ECG signals by 2047\footnote{Since the original value of 1024 translates to 0.50 in the normalized space, 0.50 is the value with which we pad the ends of the signals after segmenting the heart-beats. For truncation, we simply truncate the ends of the heart-beats whose lengths are in the top 5\% of heart-beat lengths, that is, longer than 417.}.
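A sketch of these two normalizations (ours):
\begin{verbatim}
def normalize_images(x):
    # grey-scale pixel values lie in [0, 255]
    return x / 255.0

def normalize_ecg(x):
    # 11-bit ADC values lie in [0, 2047];
    # the zero level 1024 maps to roughly 0.5
    return x / 2047.0
\end{verbatim}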
The different datasets and the scenarios used are summarized in the supplementary material.
\subsection{Experimental Design}\label{sec:experiment_design}
To evaluate models we use an approach based on bootstrapping that makes maximum use of the anomalous samples available. For each iteration we randomly select 80\% of all normal instances in the dataset (with no replacement) to train the model. The remaining 20\% of normal instances is then mixed with all of the anomalous instances to form the test set. We perform 10 iterations of training and testing in this way and measure the area under the ROC curve (AUC) on the test set for each one. The AUC scores are then averaged across iterations to give the overall model performance.
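A sketch of this bootstrapped evaluation loop (ours; \texttt{train\_and\_score} is a hypothetical callable standing for any of the compared training procedures, returning anomaly scores for the test instances, higher meaning more anomalous):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(train_and_score, X_norm, X_anom,
                  iters=10, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(iters):
        idx = rng.permutation(len(X_norm))
        cut = int(0.8 * len(X_norm))
        # train on 80% of the normal data; test on the
        # remaining 20% mixed with *all* anomalies
        X_test = np.vstack([X_norm[idx[cut:]], X_anom])
        scores = train_and_score(X_norm[idx[:cut]], X_test)
        y_true = np.r_[np.zeros(len(X_norm) - cut),
                       np.ones(len(X_anom))]
        aucs.append(roc_auc_score(y_true, scores))
    return float(np.mean(aucs))
\end{verbatim}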
We perform hyper-parameter tuning using a grid search that repeats the above process for each hyper-parameter combination. The range of searched hyper-parameters are listed in supplementary material.
We report the best averaged AUC from the grid search for the corresponding experiment. We are aware that reporting the performance of the models with the best set of hyper-parameters over-estimates the generalization performance of the models (known as the problem of \emph{many comparisons in induction algorithms} \cite{10.1007/978-3-319-07064-3_1}). However, as our goal is a relative comparison of algorithms, rather than an absolute estimate of generalization error, and all algorithms are evaluated in the same way this is an appropriate evaluation approach that makes better use of limited anomalous samples than measuring performance on a separate hold-out test set.
\subsection{State-of-the-Art Approaches}\label{state of the art}
Each state-of-the-art approach compared in this experiment is tuned to achieve its best possible performance (full details are provided in the supplementary material). For all OCSVM models we use Gaussian kernels, as recommended in \cite{DBLP:journals/corr/KhanM13}. The hyper-parameters tuned for OCSVM models are \(\nu\), and \(\gamma\). For iForest, the only hyper-parameter to be tuned is the number of estimators.
To explore their performance we use different CAE architectures, each with similar capacity to the D-RBFDD model. For the image classification datasets CAE-1 has two 2D convolutional layers in the encoder and two transposed 2D convolutional layers in the decoder, while CAE-2 has three convolutional layers in the encoder and three transposed convolutional layers in the decoder. For the ECG dataset, CAE-1 has two 1D convolutional layers in the encoder and two transposed 1D convolutional layers in the decoder. CAE-2 has the same structure but the second 1D convolutional layer has twice the number of convolutional filters compared to CAE-1. For all CAEs rectified linear activation functions and max-pooling are used at each layer.
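For illustration, a CAE-1-style image model could be sketched as follows (ours, in PyTorch; the filter counts are illustrative and assume $28\times 28$ inputs):
\begin{verbatim}
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 4, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(4, 8, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
)
# reconstruction: decoder(encoder(x)); trained to minimize the
# reconstruction error on normal data only
\end{verbatim}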
For the RBFDD network and the D-RBFDD network, the hyper-parameters that are tuned are the number of Gaussians in the hidden layer, and the coefficients of the cost function (Eq. \eqref{eq:RBFDD error}): \(\beta\) and \(\lambda\) (whose values fall in the range of (0, 1]). The D-RBFDD network, is based on the LeNet-5 network architecture \cite{lecun1998gradient}. For the ECG dataset we replace the 2D convolutions with 1D convolutions.
In the case of DeepSVDD, a LeNet-type network architecture is used \cite{pmlr-v80-ruff18a}, which we use for the image datasets. For the ECG dataset we replace this with the 1D LeNet-5 architecture used in the D-RBFDD network. In both versions of DeepSVDD the weight decay coefficient $\lambda$ is a tuned hyper-parameter. For DeepSVDD-SB, $\nu$ is also a tunable hyper-parameter, whose role is to control the trade-off between violations of the boundary and the volume of the hypersphere. Following the training method in \cite{pmlr-v80-ruff18a}, the training of DeepSVDD models also includes a learning rate scheduler that reduces the learning rate by a factor of 10 after 75\% of the specified training epochs have been completed.
In the experiments using the image classification datasets we use a pre-trained ResNet-18 model \cite{he2016deep} trained on the ImageNet \cite{deng2009imagenet} dataset\footnote{The ResNet-18 implementation used is available at: https://github.com/pytorch/vision/tree/master/torchvision/\\models/resnet.py} for transfer learning. No transfer learning is used for the ECG dataset, as reliable large-scale pre-trained models for ECG sensor data are not available.
\section{Results and Discussions}\label{Results and Discussions}
The results of the experiments based on the image classification datasets are detailed in Table~\ref{Experiment Results on the classification datasets}. These results were achieved using the best hyper-parameter combinations found during the grid search described in Section \ref{sec:experiment_design} (these are listed in the supplementary material).
For each anomaly detection scenario the different approaches have been ranked and the average ranks for each approach are summarized in the last column of each table. These results show the effectiveness of deepening RBFDD for raw datasets, and allow us to compare the three different strategies for deepening described in Section \ref{RBFDD networks}. The fact that the D-RBFDD model has out-performed the RBFDD model, in the majority of cases, demonstrates the value of using the deep model to generate a latent representation suitable for use by RBFDD. Moreover, it is interesting to note that none of the models that use the latent representation output by the fixed, pre-trained ResNet-18 model out-perform their equivalent models trained on the raw, high-dimensional representation. This is the case for the RBFDD models as well as for the OCSVM and iForest models. This is a reminder of the issue with \textit{mixed approaches} for deep anomaly detection mentioned in Section \ref{Related Work}. The fixed pre-trained ResNet-18 model has been trained for a multi-class classification objective and the latent representations generated by this network seem to be too entangled with that task to be very useful for anomaly detection.
This is further underlined by the fact that, in almost all cases, the performance of the models (RBFDD, OCSVM, and iForest) using the latent representations that arise from the version of the ResNet-18 model fine-tuned using the RBFDD network improves over the versions of the models trained using the representations from the fixed ResNet-18 model. However, it is important to note that in most cases this performance was still not better than the models working on raw data.
Together these results show that deepening RBFDD networks allows them to work effectively with raw inputs, and that the D-RBFDD approach is the best way to do this out of those compared. This conclusion is reinforced by the results based on the ECG dataset shown in Table~\ref{Experiment Results on Anomaly Detection Datasets}. In experiments using this dataset D-RBFDD outperforms RBFDD in all cases.
By examining the results for the image classification datasets in Table~\ref{Experiment Results on the classification datasets} and those based on the ECG dataset in Table~\ref{Experiment Results on Anomaly Detection Datasets} together we can evaluate how D-RBFDD compares to other state-of-the-art anomaly detection algorithms.
In the image classification dataset cases, the results show that, overall, the D-RBFDD network out-performs the other algorithms as it has the lowest average rank (lower ranks are better). On the ECG dataset DeepSVDD-SB has a slightly better average rank than D-RBFDD, although D-RBFDD performs better in the \emph{One vs. All} scenario which is particularly important for anomaly detection as it is likely that anomalies will arise from very different data distributions.
\begin{figure}[t]
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/test-mnist-fashion.pdf}
\caption{On cases of MNIST and Fashion MNIST datasets}
\label{fig:cdplot_mnist}
\end{subfigure}\hfill\begin{subfigure}[t]{0.45\textwidth}\includegraphics[width=\textwidth]{Images/pval_diag2_friedman_finner_siglevel_0.05_table_eeg_mit_latest.csv.pdf}
\caption{On cases of MIT-BIH Arrhythmia dataset}
\label{fig:cdplot_ecg}
\end{subfigure}
\caption{Critical difference plots from a Friedman test using a significance level of $0.05$ on the different anomaly detection scenarios. Algorithms \emph{not} connected with horizontal bars are significantly different.}
\end{figure}
To further investigate and understand the overall differences between the performances of the different algorithms and the effectiveness of D-RBFDD, we perform non-parametric statistical significance tests for multiple classifier comparison. Following \cite{GARCIA20102044}, a Friedman test followed by a Finner $p$-value correction was performed on the results in both Table~\ref{Experiment Results on the classification datasets} and \ref{Experiment Results on Anomaly Detection Datasets}. This test analyzes the difference in performance between each pair of algorithms across the different anomaly detection scenarios.
The statistical test results for the image classification and ECG datasets are summarized in the critical difference plots (with significance level of $\alpha = 0.05$) in Figure~\ref{fig:cdplot_mnist} and Figure~\ref{fig:cdplot_ecg} respectively. Two algorithms not connected with bold horizontal lines are significantly different. The $p$-values of the statistical tests and the pairwise win/lose/tie results are provided in the supplementary material.
Figure~\ref{fig:cdplot_mnist} shows that D-RBFDD performs significantly and consistently better than the following algorithms: Fine-Res + RBFDD, CAE-2, Fine-Res + OCSVM, CAE-1, Fix-Res + RBFDD, Fix-Res + OCSVM, Fix-Res + iForest, and Fine-Res + iForest. In the case of DeepSVDD-SB, OCSVM, RBFDD, DeepSVDD-OC, and iForest, the null hypothesis of the test could not be rejected at a significance level of $\alpha=0.05$, but D-RBFDD achieved a better average rank. On the other hand, in a simple and direct pairwise win/lose/tie comparison, D-RBFDD won in at least 70\% and up to 100\% of the anomaly detection cases when compared to the other algorithms (see supplementary material). This indicates that D-RBFDD typically performs as well as or better than the benchmark algorithms it is compared to.
From Figure~\ref{fig:cdplot_ecg} we can see that DeepSVDD-SB attained the best average rank of 1.4 on the ECG dataset, while D-RBFDD achieved similar performance with an average rank of 2.0. DeepSVDD-SB performed slightly better than D-RBFDD in the four binary scenarios, but, interestingly, D-RBFDD achieved the best performance in the \emph{One vs. All} case. When comparing DeepSVDD-SB and D-RBFDD overall, the null hypothesis could not be rejected at any of the significance levels. D-RBFDD performed better than iForest and CAE-1 at significance levels of $\alpha=0.01$ and $\alpha=0.05$ respectively, and better than OCSVM and CAE-2 at a significance level of $\alpha=0.1$.
Overall these results indicate that adding extra computational layers to RBFDD makes it a much more effective anomaly detector for problems with raw data representations.
Also, selecting D-RBFDD will lead to performance that is at least comparable to, and often better than, the other approaches, making it an attractive solution for anomaly detection for these types of datasets. We believe that this strong performance, coupled with the easy interpretability and adaptability of approaches based on RBF networks, makes D-RBFDD a compelling approach.
\section{Conclusions \& Future Work}\label{Conclusion and Futurework}
In this article, we have proposed a deep one-class neural network, the D-RBFDD network, that adds convolutional layers before an RBFDD network. The D-RBFDD network is trained in an end-to-end fashion on an objective that is designed specifically for anomaly detection. We have shown that this network has successfully turned the shallow RBFDD network into a deep one-class classifier, suitable for anomaly detection on high-dimensional raw data such as images and sensor data.
Unlike some of the state-of-the-art algorithms, in particular OCSVMs, D-RBFDD networks scale well and can work with large datasets and high-dimensional data.
In a set of benchmark experiments, for image datasets, the D-RBFDD network has shown superior performance to state-of-the-art one-class classifiers---DeepSVDD, OCSVM, iForest, and CAEs. In the case of the ECG dataset, the D-RBFDD network has produced competitive results to those of the DeepSVDD-SB algorithm, and out-performed it in the \emph{One vs. All} scenario, which is particularly important for anomaly detection as it is likely that anomalies will arise from very different data distributions. We have also observed that our proposed D-RBFDD model has indeed out-performed its shallower version, the RBFDD network, in almost all of our benchmark experiments. This suggests that, when dealing with raw data, we have benefited from the introduction of depth in the D-RBFDD network.
Our experiments show that transfer learning using a pre-trained ResNet-18 with fixed weights, does not work well for anomaly detection. We believe that the reason for this is due to the fact that ResNet-18 has been trained for a multi-class classification task and that the latent representations that it generates are too entangled with that task to be useful for anomaly detection. Interestingly, we see that if the final layers of the ResNet-18 model are fine-tuned using the RBFDD network cost function, the performance improves in most cases.
Overall, it can be concluded that, selecting D-RBFDD for an anomaly detection task on raw data would lead to performance that is at least as good as or significantly better than current state-of-the-art algorithms.
The D-RBFDD network is an attractive option for the task of anomaly detection. First of all, it shows a significant improvement over its predecessor, the RBFDD network, and competitive performance with state-of-the-art anomaly detection algorithms. This is a pre-requisite for broadening the application of such networks to more challenging scenarios such as learning from streams of incoming data, where the main challenge is the dynamic nature of what constitutes normal and anomalous, known as concept drift. In a D-RBFDD network, we have the ability to control the number of Gaussians, which equips the network with a high degree of adaptability for scenarios where concept drift is a concern and the definition of normal changes over time. Thus, by adding, removing, or replacing Gaussians, the D-RBFDD network could learn a variety of newly emerging contexts as well as forget expired ones. Furthermore, D-RBFDD networks have the potential to give us a reasonable degree of interpretability as to why an input is flagged as an anomaly. The features learned by the RBFDD component in the D-RBFDD network (i.e., the centers and spreads of the Gaussian kernels and the associated weights) provide us with a level of interpretability that has the potential to be quite informative in terms of understanding the model learned and the reasoning behind flagging anomalies.
In the future we plan to exploit the flexibility of D-RBFDD to adapt it for an on-line learning scenario where detection and handling concept drift in the incoming stream of data is important. We will explore approaches to allow the D-RBFDD network, to self-expand and prune to adapt to the appearance or disappearance of concepts.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{B-sec1}
This paper is a continuation of the first author's paper:~\cite{fujino-slc-trivial}.
We strongly recommend that the reader look
at \cite[1.~Introduction]{fujino-slc-trivial} before starting to read this paper.
In \cite{fujino-slc-trivial}, we
introduced the notion of basic slc-trivial fibrations, which
is a kind of canonical bundle formula for reducible varieties, and
investigated some fundamental properties.
For the precise definition of basic slc-trivial fibrations,
see \cite[Definition 4.1]{fujino-slc-trivial} or
Definition \ref{C-def3.1} below.
The following statement is one of the main results of \cite{fujino-slc-trivial}.
\begin{thm}[{\cite[Theorem 1.2]{fujino-slc-trivial}}]\label{B-thm1.1}
Let $f\colon (X, B)\to Y$ be a basic slc-trivial fibration and
let $\mathbf B$ and $\mathbf M$ be the induced
discriminant and moduli $\mathbb Q$-b-divisors of $Y$ respectively.
Then we have the following properties:
\begin{itemize}
\item[(i)] $\mathbf K+\mathbf B$ is $\mathbb Q$-b-Cartier, and
\item[(ii)] $\mathbf M$ is b-potentially nef, that is,
there exists a proper birational morphism $\sigma \colon Y'\to Y$
from a normal variety $Y'$ such that
$\mathbf M_{Y'}$ is a potentially nef $\mathbb Q$-divisor on $Y'$ and
that $\mathbf M=\overline{\mathbf M_{Y'}}$.
\end{itemize}
\end{thm}
For the definition and some basic properties of b-divisors,
see \cite[2.3.2 b-divisors]{corti} and \cite[Section 2]{fujino-slc-trivial}.
\medskip
On moduli $\mathbb Q$-b-divisors, we have the following conjecture,
which is still widely open.
\begin{conj}[{b-semi-ampleness conjecture,
see \cite[Conjecture 1.4]{fujino-slc-trivial}}]
\label{B-conj1.2}
Let $f \colon (X, B)\to Y$ be a basic
slc-trivial fibration.
Then the moduli $\mathbb Q$-b-divisor $\mathbf M$ is
b-semi-ample.
\end{conj}
The main purpose of this paper is to prove the following theorem.
\begin{thm}[Main Theorem]\label{B-thm1.3}
Let $f \colon (X, B)\to Y$ be a basic slc-trivial fibration
such that $Y$ is complete.
Let $\mathbf M$ be the moduli $\mathbb Q$-b-divisor
associated to $f \colon (X, B)\to Y$.
Assume that
there exists a proper birational
morphism $\sigma \colon Y'\to Y$ from a normal
variety $Y'$ such that
$\mathbf M=\overline {\mathbf M_{Y'}}$ with $\mathbf M_{Y'}\equiv 0$. Then
$\mathbf M_{Y'}\sim _{\mathbb Q}0$ holds.
\end{thm}
Theorem \ref{B-thm1.3} solves Conjecture \ref{B-conj1.2}
when the moduli $\mathbb Q$-b-divisor $\mathbf M$ is
b-numerically trivial.
It is obviously a generalization of \cite[Theorem 3.5]{ambro-moduli} and
\cite[Theorem 1.3]{floris}.
More precisely, Florin Ambro and Enrica Floris
proved Theorem \ref{B-thm1.3} for klt-trivial fibrations and lc-trivial fibrations,
respectively.
\medskip
As a direct consequence of Theorem \ref{B-thm1.3},
we have the following result:~Corollary \ref{B-cor1.4}.
It says that the b-semi-ampleness
conjecture
(see Conjecture \ref{B-conj1.2}) holds true when the base space is a curve.
Note that Corollary \ref{B-cor1.4} was already proved for klt-trivial fibrations
by Florin Ambro (see \cite[Theorem 0.1]{ambro-shokurov}).
\begin{cor}\label{B-cor1.4}
Let $f \colon (X, B)\to Y$ be a basic slc-trivial fibration
with $\dim Y=1$.
Then the moduli $\mathbb Q$-divisor $M_Y$ of $f \colon
(X, B)\to Y$ is
semi-ample.
\end{cor}
For the proof of Theorem \ref{B-thm1.3}, we closely follow Floris's arguments in
\cite{floris}. We adapt her proof of Theorem \ref{B-thm1.3} for lc-trivial fibrations
to our setting. As is well known, the
main ingredient of \cite[Theorem 0.1]{ambro-shokurov},
\cite[Theorem 3.5]{ambro-moduli}, and \cite[Theorem 1.3]{floris}
is Deligne's result on local subsystems of polarizable variations
of $\mathbb Q$-Hodge structure (see \cite[Corollaire (4.2.8)]{deligne}).
\medskip
In \cite{fujino-fujisawa}, the first and the second
authors discussed variations of mixed
Hodge structure toward applications for
higher-dimensional algebraic varieties (see also \cite{ffs}).
One of the most important
applications of \cite{fujino-fujisawa} is the proof of the
projectivity of the coarse moduli spaces of stable
varieties in \cite{fujino-ann}. Then the first author
introduced the notion of basic slc-trivial fibrations
in \cite{fujino-slc-trivial} in order to make results
in \cite{fujino-fujisawa} useful
for various geometric applications. The
first and the third authors established that
every quasi-log canonical pair has only
Du Bois singularities in \cite{fujino-haidong} by using
\cite{fujino-slc-trivial}.
We strongly recommend that the reader look
at \cite[1.~Introduction]{fujino-slc-trivial} for
more details. In this paper, we prove
\cite[Conjecture 1.4]{fujino-slc-trivial} under
some special assumption. We freely use the
formulation introduced in \cite{fujino-slc-trivial}
and the
arguments in this paper heavily depend on \cite{fujino-fujisawa}.
\medskip
We briefly explain the organization of this paper.
In Section \ref{A-sec2},
we fix the notation and recall some definitions for the reader's convenience.
In Section \ref{C-sec3}, we quickly recall the notion of basic slc-trivial fibrations
and some definitions following \cite{fujino-slc-trivial}.
In Section \ref{d-sec4}, we see that
the cyclic group action constructed in
\cite[Section 6]{fujino-slc-trivial} preserves some parts of weight filtrations of
the variation of mixed Hodge structure.
Section \ref{C-sec5} is devoted to the proof of Theorem \ref{B-thm1.3}.
By using the result obtained in Section \ref{d-sec4},
we reduce Theorem \ref{B-thm1.3} to
Deligne's result on local subsystems of polarizable variations
of $\mathbb Q$-Hodge structure.
\begin{ack}
The first author was partially
supported by JSPS KAKENHI Grant Numbers
JP16H03925, JP16H06337.
The second author was partially supported by JSPS KAKENHI
Grant Number JP16K05107.
The authors would like to thank Takeshi Abe for
useful discussions and comments.
\end{ack}
\begin{conventions}
We work over $\mathbb C$, the complex number field, throughout
this paper. We freely use the basic
notation of the minimal model program as in
\cite{fujino-fundamental} and \cite{fujino-foundations}.
A {\em{scheme}} means a separated scheme of
finite type over $\mathbb C$.
A {\em{variety}} means a reduced scheme, that is,
a reduced separated scheme of finite type over $\mathbb C$.
In this paper, a variety may be reducible.
However, we sometimes assume that a variety is irreducible without
mentioning it explicitly if there is no danger of confusion.
The set of integers (resp.~rational numbers)
is denoted by $\mathbb Z$ (resp.~$\mathbb Q$).
The set of positive rational numbers (resp.~integers)
is denoted by $\mathbb Q_{>0}$ (resp.~$\mathbb Z_{>0}$).
In this paper, we do not use $\mathbb R$-divisors;
we only use $\mathbb Q$-divisors.
\end{conventions}
\section{Preliminaries}\label{A-sec2}
In this section, we quickly recall some basic definitions and notation
for the reader's convenience.
For the details, see \cite[Section 2]{fujino-slc-trivial}.
\medskip
Let us start with the definition of {\em{simple normal crossing pairs}}.
\begin{defn}[Simple normal crossing pairs]\label{A-def2.1}
We say that the pair $(X, B)$ is {\em{simple normal crossing}}
at a point $a\in X$ if $X$ has a Zariski open neighborhood
$U$ of $a$ that can be embedded in a smooth
variety $M$, where $M$ has a regular system of parameters
$(x_1, \ldots, x_p, y_1, \ldots, y_r)$ at $a=0$ in
which $U$ is defined by a monomial equation
$$
x_1\cdots x_p=0
$$
and
$$
B=\sum _{i=1}^r b_i (y_i=0)|_U, \quad
b_i\in \mathbb Q.
$$
We say that $(X, B)$ is a {\em{simple normal crossing pair}}
if it is simple normal crossing at every point of $X$.
If $(X, 0)$ is a simple normal crossing pair, then
$X$ is called a {\em{simple normal crossing variety}}.
If $(X, B)$ is a simple normal crossing pair and
$B$ is reduced, then $B$ is called a {\em{simple
normal crossing divisor}} on $X$.
Let $(X, B)$ be a simple normal crossing pair
such that all the coefficients of $B$ are
less than or equal to one.
Let $\nu \colon X^\nu\to X$ be the normalization of $X$.
We put $K_{X^\nu}+\Theta=\nu^*(K_X+B)$, that is,
$\Theta$ is the sum of
the inverse images of $B$ and the singular locus of $X$.
By assumption, all the coefficients of $\Theta$ are less than or equal to one.
Therefore, it is easy to see that $(X^\nu, \Theta)$ is sub log canonical.
In this situation, we simply say that $W$ is
a {\em{stratum}} of $(X, B)$ if $W$ is an irreducible component of $X$ or
$W$ is the $\nu$-image of some log canonical center of $(X^\nu, \Theta)$.
We note that a stratum of a simple normal crossing variety $X$
means a stratum of a simple normal crossing pair
$(X, 0)$.
\end{defn}
We recall the precise definition of {\em{semi-log canonical pairs}},
{\em{slc centers}}, and {\em{slc strata}} for the reader's convenience.
For the details of semi-log canonical pairs, we recommend the reader to
see \cite{fujino-fund-slc}.
\begin{defn}[Semi-log canonical pairs]\label{x-def2.2}
Let $X$ be an equidimensional scheme which
satisfies Serre's $S_2$ condition and
is normal crossing in codimension one. Let $\Delta$
be an effective $\mathbb Q$-divisor on $X$
such that no irreducible component of $\Supp \Delta$
is contained in the singular locus of $X$ and that
$K_X+\Delta$ is $\mathbb Q$-Cartier.
We say that $(X, \Delta)$ is a {\em{semi-log canonical}} pair if
$(X^\nu, \Delta_{X^\nu})$ is log canonical
in the usual sense, where $\nu\colon X^\nu\to X$
is the normalization of $X$ and
$K_{X^\nu}+\Delta_{X^\nu}=\nu^*(K_X+\Delta)$,
that is, $\Delta_{X^\nu}$ is the sum of the inverse
images of $\Delta$
and the conductor of $X$. An {\em{slc center}} of $(X, \Delta)$
is the $\nu$-image of an lc center of $(X^\nu, \Delta_{X^\nu})$.
An {\em{slc stratum}} of $(X, \Delta)$
means either an slc center of $(X, \Delta)$ or an
irreducible component of $X$.
\end{defn}
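We note that Definition \ref{A-def2.1} and Definition \ref{x-def2.2} are
compatible in the following sense:~if $(X, B)$ is a simple normal crossing pair
such that $B$ is a boundary $\mathbb Q$-divisor, then $(X, B)$ is a semi-log
canonical pair, and the slc strata of $(X, B)$ are nothing but the strata
of $(X, B)$ in the sense of Definition \ref{A-def2.1},
since $\Delta_{X^\nu}$ coincides with $\Theta$ in this case.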
We recall various definitions and operations of
($\mathbb Q$-)divisors.
We note that we are mainly interested in {\em{reducible}} varieties
in this paper.
\begin{say}[Divisors]\label{x-say2.3}
Let $X$ be a scheme with structure sheaf $\mathcal O_X$ and let
$\mathcal K_X$ be the sheaf of total quotient rings of $\mathcal O_X$.
Let $\mathcal K^*_X$ denote the (multiplicative)
sheaf of invertible elements in $\mathcal K_X$,
and $\mathcal O^*_X$ the sheaf of invertible
elements in $\mathcal O_X$.
We note that $\mathcal O_X\subset \mathcal K_X$
and $\mathcal O^*_X\subset
\mathcal K^*_X$ hold.
A {\em{Cartier divisor}} $D$ on $X$ is a global section of
$\mathcal K^*_X/\mathcal O^*_X$, that is,
$D$ is an element of $\Gamma(X, \mathcal K^*_X/\mathcal O^*_X)$.
A {\em{$\mathbb Q$-Cartier divisor}} is an element of
$\Gamma (X, \mathcal K^*_X/\mathcal O^*_X)\otimes
_{\mathbb Z}\mathbb Q$. Let $D_1$ and $D_2$
be two $\mathbb Q$-Cartier divisors
on $X$. Then $D_1$ is {\em{linearly}}
(resp.~{\em{$\mathbb Q$-linearly}})
{\em{equivalent}} to $D_2$, denoted by $D_1\sim D_2$ (resp.~$D_1
\sim _{\mathbb Q}D_2$), if
$$
D_1=D_2+\sum _{i=1}^k r_i (f_i)
$$
such that $f_i\in \Gamma (X, \mathcal K^*_X)$ and $r_i\in \mathbb Z$
(resp.~$r_i\in \mathbb Q$) for every $i$.
We note that $(f_i)$ is a {\em{principal Cartier divisor}}
associated to $f_i$, that is,
the image of $f_i$ by
$$
\Gamma (X, \mathcal K^*_X)\to
\Gamma(X, \mathcal K^*_X/\mathcal O^*_X).
$$
Let $f \colon X\to Y$ be a morphism between schemes.
If there exists a $\mathbb Q$-Cartier
divisor $B$ on $Y$ such that
$D_1\sim _{\mathbb Q} D_2+f^*B$, then
$D_1$ is said to be {\em{relatively $\mathbb Q$-linearly
equivalent to $D_2$}}.
It is denoted by $D_1\sim _{\mathbb Q, f}D_2$ or
$D_1\sim _{\mathbb Q, Y} D_2$.
\medskip
From now on, let $X$ be an equidimensional scheme. We note
that $X$ is not necessarily regular in codimension one.
A ({\em{Weil}}) {\em{divisor}} $D$ on $X$ is a finite formal
sum
$$
D=\sum _i d_iD_i,
$$
where $D_i$ is an irreducible reduced closed subscheme of $X$
of pure codimension one and $d_i$ is an integer
for every $i$ such that $D_i\ne D_j$ for every $i\ne j$.
If $d_i \in \mathbb Q$ for every $i$,
then $D$ is called a {\em{$\mathbb Q$-divisor}}.
Let $D=\sum _i d_i D_i$ be a $\mathbb Q$-divisor as above.
We put
\begin{equation*}
D^{\leq 1}=\sum _{d_i\leq 1}d_i D_i, \quad
D^{<1} =\sum _{d_i<1}d_iD_i, \quad
D^{= 1}=\sum _{d_i= 1} D_i, \quad \text{and} \quad
\lceil D\rceil =\sum _i \lceil d_i \rceil D_i,
\end{equation*}
where $\lceil d_i\rceil$ is the integer defined by $d_i\leq
\lceil d_i\rceil <d_i+1$. Let $D$ be a $\mathbb Q$-divisor.
We also put
$$
\lfloor D\rfloor=-\lceil -D\rceil.
$$
We call $D$ a {\em{subboundary}}
$\mathbb Q$-divisor if $D=D^{\leq 1}$ holds.
When $D$ is effective and $D=D^{\leq 1}$ holds,
we call $D$ a {\em{boundary}} $\mathbb Q$-divisor.
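For example, if $D=\frac{1}{2}D_1+D_2-\frac{3}{2}D_3$, where $D_1$, $D_2$, and
$D_3$ are mutually distinct prime divisors, then
$$
D^{<1}=\frac{1}{2}D_1-\frac{3}{2}D_3, \quad D^{=1}=D_2, \quad
\lceil D\rceil =D_1+D_2-D_3, \quad \text{and} \quad
\lfloor D\rfloor =D_2-2D_3.
$$
Since $D=D^{\leq 1}$ holds, $D$ is a subboundary $\mathbb Q$-divisor,
although it is not a boundary $\mathbb Q$-divisor because it is not effective.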
We further assume that
$f \colon X\to Y$ is a surjective morphism onto an irreducible
variety $Y$.
Then we put
$$
D^v=\sum _{f(D_i)\subsetneq Y}d_i D_i \quad
\text{and} \quad D^h=D-D^v,
$$
and call $D^v$ the {\em{vertical part}}
and $D^h$ the {\em{horizontal part}} of $D$
with respect to $f \colon X\to Y$, respectively.
\medskip
Finally, let $D$ be a $\mathbb Q$-Cartier divisor on a
complete normal irreducible variety $X$.
If $D\cdot C=0$ for any complete curve $C$ on $X$, then
$D$ is said to be {\em{numerically trivial}}. When $D$ is
numerically trivial, we simply write $D\equiv 0$.
\end{say}
Let us recall the definition of {\em{potentially nef divisors}}
introduced by the first author in \cite{fujino-slc-trivial}.
\begin{defn}[{Potentially nef divisors, see
\cite[Definition 2.5]{fujino-slc-trivial}}]\label{x-def2.4}
Let $X$ be a normal
irreducible variety and let $D$ be a divisor on $X$.
If there exist a completion $X^\dag$ of $X$,
that is, $X^\dag$ is a complete normal
variety and contains $X$ as a dense Zariski open set, and
a nef divisor $D^\dag$ on $X^\dag$ such that
$D=D^\dag|_X$, then $D$ is called
a {\em{potentially nef}} divisor on $X$.
A finite $\mathbb Q_{>0}$-linear
combination of potentially nef divisors is called
a {\em{potentially nef}} $\mathbb Q$-divisor.
\end{defn}
Although it is dispensable,
the following definition is very useful when we state our results (see
Theorems \ref{B-thm1.1} and \ref{B-thm1.3}).
We note that the {\em{$\mathbb Q$-Cartier
closure}} of a $\mathbb Q$-Cartier $\mathbb Q$-divisor
$D$ on a normal variety $X$ is the $\mathbb Q$-b-divisor
$\overline D$ with trace
$$
\overline D _Y=f^*D,
$$
where $f \colon Y\to X$ is a proper birational morphism
from a normal variety $Y$.
\begin{defn}[{see \cite[Definition 2.12]{fujino-slc-trivial}}]\label{x-def2.5}
Let $X$ be a normal irreducible variety.
A $\mathbb Q$-b-divisor $\mathbf D$ of $X$
is {\em{b-potentially nef}}
(resp.~{\em{b-semi-ample}}) if there
exists a proper birational morphism $X'\to X$ from a normal
variety $X'$ such that $\mathbf D=\overline {\mathbf D_{X'}}$, that
is, $\mathbf D$ is the $\mathbb Q$-Cartier closure of $\mathbf D_{X'}$, and that
$\mathbf D_{X'}$ is potentially nef
(resp.~semi-ample).
A $\mathbb Q$-b-divisor $\mathbf D$ of $X$ is {\em{$\mathbb Q$-b-Cartier}}
if there is a proper birational morphism $X'\to X$ from a normal
variety $X'$ such that $\mathbf D=\overline{\mathbf D_{X'}}$.
Let $X$ be a complete normal irreducible variety.
A $\mathbb Q$-b-divisor $\mathbf D$ of $X$ is
{\em{b-numerically trivial}} (resp.~{\em{$\mathbb Q$-b-linearly trivial}})
if there exists a proper birational morphism
$X'\to X$ from a complete normal variety $X'$ such that
$\mathbf D=\overline{\mathbf D_{X'}}$ with $\mathbf D_{X'}\equiv 0$
(resp.~$\mathbf D_{X'}\sim _{\mathbb Q}0$).
\end{defn}
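For example, the $\mathbb Q$-Cartier closure $\overline D$ of a semi-ample
(resp.~potentially nef) $\mathbb Q$-Cartier $\mathbb Q$-divisor $D$ on $X$ is
b-semi-ample (resp.~b-potentially nef); in Definition \ref{x-def2.5} it
suffices to take $X'=X$ with the identity morphism.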
For the details of (b-)potentially nef divisors,
we recommend the reader to see \cite[Section 2]
{fujino-slc-trivial}.
\section{Quick review of basic slc-trivial fibrations}\label{C-sec3}
In this section, we quickly recall some definitions
of {\em{basic slc-trivial fibrations}} in \cite[Section 4]{fujino-slc-trivial}.
We recommend the reader to see \cite[1.15]{fujino-slc-trivial}
for some historical comments.
\medskip
We introduce the notion of basic slc-trivial fibrations.
\begin{defn}[{Basic slc-trivial fibrations,
see \cite[Definition 4.1]{fujino-slc-trivial}}]\label{C-def3.1}
A {\em{pre-basic slc-trivial fibration}} $f \colon (X, B)\to Y$ consists of
a projective surjective morphism
$f \colon X\to Y$ and a simple normal crossing pair $(X, B)$ satisfying
the following properties:
\begin{itemize}
\item[(1)] $Y$ is a normal irreducible variety,
\item[(2)] every stratum of $X$ is dominant onto $Y$ and
$f_*\mathcal O_X\simeq \mathcal O_Y$,
\item[(3)] $B$ is a $\mathbb Q$-divisor such that $B=B^{\leq 1}$ holds
over
the generic point of $Y$, and
\item[(4)] there exists
a $\mathbb Q$-Cartier $\mathbb Q$-divisor $D$ on $Y$ such that
$$
K_X+B\sim _{\mathbb Q}f^*D.
$$
\end{itemize}
If a pre-basic slc-trivial fibration $f \colon (X, B)\to Y$ also satisfies
\begin{itemize}
\item[(5)] $\rank f_*\mathcal O_X(\lceil -B^{<1}\rceil)=1$,
\end{itemize}
then it is called a {\em{basic slc-trivial fibration}}.
\end{defn}
Roughly speaking, if $X$ is irreducible
and $(X, B)$ is sub kawamata log terminal
(resp.~sub log canonical) over the generic point of $Y$,
then the basic slc-trivial fibration $f \colon (X, B)\to Y$ is a klt-trivial
fibration (resp.~an lc-trivial fibration).
\medskip
In order to define discriminant $\mathbb Q$-b-divisors and
moduli $\mathbb Q$-b-divisors for basic slc-trivial fibrations,
we need the notion of induced (pre-)basic slc-trivial fibrations.
\begin{say}[{Induced (pre-)basic slc-trivial
fibrations, see \cite[4.3]{fujino-slc-trivial}}]\label{C-say3.2}
Let $f \colon (X, B)\to Y$ be a \linebreak
(pre-)basic slc-trivial fibration
and let $\sigma \colon Y'\to Y$ be a generically finite
surjective morphism from a normal irreducible variety $Y'$.
Then we have an {\em{induced {\em{(}}pre-{\em{)}}basic slc-trivial fibration}}
$f' \colon (X', B_{X'})\to Y'$, where
$B_{X'}$ is defined by $\mu^*(K_X+B)=K_{X'}+B_{X'}$, with
the following commutative diagram:
$$
\xymatrix{
(X', B_{X'}) \ar[r]^{\mu} \ar[d]_{f'} & (X, B)\ar[d]^{f} \\
Y' \ar[r]_{\sigma} & Y,
}
$$
where $X'$ coincides with
$X\times _{Y}Y'$ over a nonempty Zariski open set of $Y'$.
More precisely, $X'$ is a simple normal crossing variety with a morphism
$X'\to X\times _Y Y'$ that is an isomorphism over
a nonempty Zariski open set of $Y'$ such that
$X'$ is projective over $Y'$ and that every stratum of $X'$ is dominant onto
$Y'$.
\end{say}
Now we are ready to define {\em{discriminant
$\mathbb Q$-b-divisors}} and
{\em{moduli $\mathbb Q$-b-divisors}} for basic slc-trivial fibrations.
\begin{say}[{Discriminant and
moduli $\mathbb Q$-b-divisors,
see \cite[4.5]{fujino-slc-trivial}}]\label{C-say3.3}
Let $f \colon (X, B)\to Y$ be a \linebreak
(pre-)basic
slc-trivial fibration as in Definition \ref{C-def3.1}.
Let $P$ be a prime divisor on $Y$.
By shrinking $Y$ around the generic point of $P$,
we assume that $P$ is Cartier. We set
$$
b_P=\max \left\{t \in \mathbb Q\, \left|\,
\begin{array}{l} {\text{$(X^\nu, \Theta+t\nu^*f^*P)$ is sub log canonical}}\\
{\text{over the generic point of $P$}}
\end{array}\right. \right\},
$$
where $\nu \colon X^\nu\to X$ is the normalization and
$K_{X^\nu}+\Theta=\nu^*(K_X+B)$, that is,
$\Theta$ is the sum of the inverse images of $B$ and the singular
locus of $X$, and
set $$
B_Y=\sum _P (1-b_P)P,
$$
where $P$ runs over prime divisors on $Y$.
Then it is easy to see that
$B_Y$ is a well-defined $\mathbb Q$-divisor on
$Y$; it is called the {\em{discriminant
$\mathbb Q$-divisor}} of $f \colon (X, B)\to Y$. We set
$$
M_Y=D-K_Y-B_Y
$$
and call $M_Y$ the {\em{moduli $\mathbb Q$-divisor}} of $f \colon
(X, B)\to Y$.
By definition, we have
$$
K_X+B\sim _{\mathbb Q}f^*(K_Y+B_Y+M_Y).
$$
Let $\sigma\colon Y'\to Y$ be a proper birational morphism
from a normal variety $Y'$ and let $f' \colon (X', B_{X'})\to Y'$ be
an induced (pre-)basic slc-trivial fibration
by $\sigma \colon Y'\to Y$.
We can define $B_{Y'}$, $K_{Y'}$ and $M_{Y'}$ such that
$\sigma^*D=K_{Y'}+B_{Y'}+M_{Y'}$,
$\sigma_*B_{Y'}=B_Y$, $\sigma _*K_{Y'}=K_Y$
and $\sigma_*M_{Y'}=M_Y$. We note that
$B_{Y'}$ is independent of the choice of $(X', B_{X'})$,
that is, $B_{Y'}$ is well defined. Hence
there exist a unique $\mathbb Q$-b-divisor $\mathbf B$
such that
$\mathbf B_{Y'}=B_{Y'}$ for every $\sigma \colon Y'\to Y$ and a unique
$\mathbb Q$-b-divisor $\mathbf M$ such that $\mathbf M_{Y'}=M_{Y'}$ for
every $\sigma \colon Y'\to Y$.
Note that $\mathbf B$ is called the {\em{discriminant $\mathbb Q$-b-divisor}} and
that $\mathbf M$ is called
the {\em{moduli $\mathbb Q$-b-divisor}} associated to $f \colon (X, B)\to Y$.
We sometimes simply say that $\mathbf M$ is
the {\em{moduli part}} of $f \colon (X, B)\to Y$.
\end{say}
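To illustrate these definitions, let $f\colon X\to Y$ be a relatively minimal
elliptic surface over a smooth projective curve $Y$ with $B=0$, and let $P$ be
a point of $Y$ such that $f^{-1}(P)=mF$ holds for a smooth elliptic curve $F$.
Then $(X, tf^*P)=(X, tmF)$ is sub log canonical over $P$ if and only if
$tm\leq 1$. Therefore, we have $b_P=1/m$, and the coefficient of $P$ in $B_Y$
is $1-1/m$. This recovers the well-known contribution of multiple fibers in
Kodaira's canonical bundle formula for elliptic surfaces.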
For the full details of this section, we recommend the reader to see
\cite[Section 4]{fujino-slc-trivial}.
\section{On variation of mixed Hodge structure}\label{d-sec4}
This section heavily depends on \cite[Sections 4 and 7]{fujino-fujisawa}.
We strongly recommend the reader to take a quick look at \cite[Section 4]{fujino-fujisawa}
before reading this section.
\medskip
Let us quickly recall \cite[Theorem 7.1]{fujino-fujisawa}, which is
one of the main ingredients of
\cite{fujino-slc-trivial} (see \cite[Section 3]{fujino-slc-trivial}).
\begin{thm}[{\cite[Theorem 7.1]{fujino-fujisawa}}]\label{d-thm4.1}
Let $(V, T)$ be a simple normal crossing pair such that $T$ is
reduced and let
$h \colon V\to Y$ be a projective surjective morphism onto a smooth
variety $Y$.
Assume that every stratum of $(V, T)$ is dominant onto $Y$.
Let $\Sigma$ be a simple normal crossing divisor on $Y$ such that
every stratum of $(V, T)$ is smooth over $Y^*=Y\setminus \Sigma$.
We put $V^*=h^{-1}(Y^*)$, $T^*=T|_{V^*}$, and $d=\dim V-\dim Y$.
Let $\iota \colon V^*\setminus T^*\hookrightarrow V^*$ be the natural
open immersion.
Then the local system $R^k(h|_{V^*})_*\iota_!\mathbb Q_{V^*\setminus T^*}$
underlies a graded polarizable admissible variation of
$\mathbb Q$-mixed Hodge structure on $Y^*$ for every $k$.
We put $\mathcal V^k_{Y^*}=R^k(h|_{V^*})_*\iota_!\mathbb Q_{V^*\setminus T^*}
\otimes \mathcal O_{Y^*}$ for
every $k$.
Let
$$
\cdots \subset F^{p+1}(\mathcal V^k_{Y^*})\subset F^p(\mathcal V^k_{Y^*})
\subset F^{p-1}(\mathcal V^k_{Y^*})\subset \cdots
$$
be the Hodge filtration.
We assume that all the local monodromies on the local system
$R^k(h|_{V^*})_*\iota_!\mathbb Q_{V^*\setminus T^*}$ around $\Sigma$
are unipotent for every $k$.
Then $R^kh_*\mathcal O_V(-T)$ is isomorphic to the canonical extension
of
$$
\Gr ^0_F(\mathcal V^k_{Y^*})=F^0(\mathcal V^k_{Y^*})/F^1(\mathcal V^k_{Y^*}),
$$
which is denoted by $\Gr^0_F(\mathcal V^k_Y)$, for every $k$.
By taking the dual, we have $$R^{d-k}h_*\omega_{V/Y}(T)\simeq
\left(\Gr^0_F(\mathcal V^k_Y)\right)^*$$ for every $k$.
\end{thm}
For the details of Theorem \ref{d-thm4.1}, we recommend the reader to
see \cite[Sections 4 and 7]{fujino-fujisawa} (see also \cite{ffs}).
We note that the reader can find basic definitions of variations of
mixed Hodge structure in \cite[Section 3]{fujino-fujisawa}.
\medskip
Let us introduce the notion of {\em{birational maps}}
of simple normal crossing
pairs.
\begin{defn}[Birational maps of simple normal crossing
pairs]\label{d-def4.2}
Let $(V_1, T_1)$ and $(V_2, T_2)$ be
simple normal crossing pairs such that
$T_1$ and $T_2$ are reduced.
Let $\alpha \colon V_1\dashrightarrow V_2$ be a proper birational map.
Assume that
there exist Zariski open sets $U_1$
and $U_2$ of $V_1$ and $V_2$ respectively such that
$U_1$ contains
the generic point of
any stratum of $(V_1, T_1)$, $U_2$ contains
the generic point of any stratum of $(V_2, T_2)$,
and $\alpha$ induces an isomorphism between
$(U_1, T_1|_{U_1})$ and $(U_2, T_2|_{U_2})$.
Then we call $\alpha$ a {\em{birational map between
$(V_1, T_1)$ and $(V_2, T_2)$}}.
\end{defn}
As an easy application of \cite[Lemma 6.2]{fujino-fujisawa} and
\cite[Theorem 1.4]{bierstone}, we can prove the
following useful lemma.
\begin{lem}\label{d-lem4.3}
Let $(V_1, T_1)$ and $(V_2, T_2)$ be simple normal crossing
pairs
such that $T_1$ and $T_2$ are reduced.
Let $\alpha \colon V_1\dashrightarrow V_2$ be a birational map
between $(V_1, T_1)$ and $(V_2, T_2)$.
Then there exists a commutative diagram
\begin{equation}\label{d-eq4.1}
\xymatrix{
& (V', T') \ar[dl]_-{p_1}\ar[dr]^-{p_2}& \\
(V_1, T_1) \ar@{-->}[rr]_-\alpha&& (V_2, T_2),
}
\end{equation}
where $(V', T')$ is a simple normal crossing pair
such that $T'$ is reduced, and $p_i$ is a proper
birational morphism between $(V', T')$ and
$(V_i, T_i)$ for $i=1, 2$.
In this situation, $p_i$ induces a natural one-to-one correspondence
between the set of strata of $(V', T')$ and that of $(V_i, T_i)$ for
$i=1, 2$.
Let $S$ be any stratum of $(V', T')$.
Then we have $$Rp_{i*}\mathcal O_S\simeq
\mathcal O_{p_i(S)}$$ for
$i=1, 2$.
Moreover, we have $$Rp_{i*}\mathcal O_{V'}(-T')\simeq
\mathcal O_{V_i}(-T_i)$$ for
$i=1, 2$.
\end{lem}
\begin{proof}
By \cite[Theorem 1.4]{bierstone}, we can take
a desired commutative diagram \eqref{d-eq4.1}, where
$p_i$ is a proper birational morphism
between $(V', T')$ and $(V_i, T_i)$ such that
$p_i$ is an isomorphism over $U_i$ for $i=1, 2$.
By \cite[Lemma 6.2]{fujino-fujisawa},
we have $Rp_{i*}\mathcal O_{V'}(-T')\simeq
\mathcal O_{V_i}(-T_i)$ for $i=1, 2$.
Let $S$ be a stratum of $(V', T')$.
Then $p_i(S)$ is a stratum of $(V_i, T_i)$ since
$p_i$ is a birational
morphism between $(V', T')$ and $(V_i, T_i)$ for $i=1, 2$.
Therefore, $p_i(S)$ is a smooth
irreducible variety and $p_i \colon S\to p_i(S)$
is obviously birational for $i=1, 2$.
This implies that $Rp_{i*}\mathcal O_S\simeq
\mathcal O_{p_i(S)}$ for $i=1, 2$.
Since $p_i \colon V'\to V_i$ is a proper
birational morphism between $(V', T')$ and
$(V_i, T_i)$, it is easy to see that there
exists a natural one-to-one correspondence between
the set of strata of $(V', T')$ and that of
$(V_i, T_i)$ for $i=1, 2$.
\end{proof}
\begin{rem}\label{x-rem4.4}
In Lemma \ref{d-lem4.3}, we assume that
$\alpha\colon (V_1, T_1)\dashrightarrow (V_2, T_2)$ is
projective over a fixed scheme $Y$, that is,
there exists the following commutative diagram
$$
\xymatrix{
(V_1, T_1) \ar[dr]_-{h_1}\ar@{-->}[rr]^-\alpha&& (V_2, T_2)
\ar[dl]^-{h_2}\\
&Y&
}
$$
such that $h_1$ and $h_2$ are projective.
Then we see that we can make $V'$ projective over $Y$ by the
proof of Lemma \ref{d-lem4.3}.
\end{rem}
We define a somewhat artificial
condition for birational maps of simple normal crossing pairs.
We will use it in Lemma \ref{x-lem4.6} below.
For the basic definitions of semi-simplicial
varieties, see, for example, \cite[Section 5.1]{peters-steenbrink}.
\begin{defn}\label{x-def4.5}
Let $(V, T)$ be a simple normal crossing
pair such that
$T$ is reduced.
Let $\alpha \colon V\dashrightarrow V$ be a birational map
between $(V, T)$ and $(V, T)$
in the sense of Definition \ref{d-def4.2}.
We say that $\alpha$ satisfies condition $(\bigstar)$
if there exists a commutative diagram
\begin{equation}\label{d-eq4.2}
\xymatrix{
& (V', T') \ar[dl]_-{p_1}\ar[dr]^-{p_2}& \\
(V, T) \ar@{-->}[rr]_-\alpha&& (V, T)
}
\end{equation}
with the following properties:
\begin{itemize}
\item[(1)] $(V', T')$ is a simple normal crossing pair such that $T'$ is reduced.
\item[(2)] $p_i$ is a proper
birational morphism between $(V', T')$ and $(V, T)$ in the sense of
Definition \ref{d-def4.2} for $i=1, 2$.
\item[(3)] There are semi-simplicial resolutions
$\varepsilon _T \colon T_\bullet \to T$
and $\varepsilon _V \colon V_\bullet \to V$, that is,
$T_\bullet$ and $V_\bullet$ are semi-simplicial varieties, $\varepsilon_T$ and
$\varepsilon _V$ are augmentations and of cohomological descent,
such that $V_p$ and $T_q$ are disjoint unions of some strata of $(V, T)$ for all
$p$ and $q$ and
that they fit in the following commutative diagram
\begin{equation}\label{d-eq4.3}
\xymatrix{T_\bullet\ar[d]_-{\varepsilon_T}\ar[r]^-\phi& V_\bullet
\ar[d]^-{\varepsilon _V}\\
T \ar[r]_j& V,
}
\end{equation}
where $\phi$ is a morphism of semi-simplicial varieties
and $j$ is the natural closed embedding.
Moreover, $\varepsilon_T \colon S\to \varepsilon _T(S)$
(resp.~$\varepsilon_V \colon S\to \varepsilon _V(S)$) is
a natural isomorphism for any irreducible component $S$ of $T_\bullet$
(resp.~$V_\bullet$). We note that
$S$ is a stratum of $(V, T)$.
\item[(4)] There are semi-simplicial varieties $\varepsilon_{T'} \colon
T'_\bullet
\to T'$ and $\varepsilon _{V'} \colon V'_\bullet \to V'$ such that
$\varepsilon _{T'}$ and $\varepsilon _{V'}$ are augmentations,
$V'_p$ and $T'_q$ are disjoint unions of
some strata of $(V', T')$ for all $p$ and $q$
and that they fit in the following commutative diagram
\begin{equation}\label{d-eq4.4}
\xymatrix{T'_\bullet\ar[d]_-{\varepsilon_{T'}}\ar[r]^-{\phi'}& V'_\bullet
\ar[d]^-{\varepsilon _{V'}}\\
T' \ar[r]_{j'}& V',
}
\end{equation}
where $\phi'$ is a morphism of semi-simplicial varieties and
$j'$ is the natural closed embedding.
As in (3), $\varepsilon_{T'} \colon S'\to \varepsilon _{T'}(S')$
(resp.~$\varepsilon_{V'} \colon S'\to \varepsilon _{V'}(S')$) is
a natural isomorphism for any irreducible component $S'$ of $T'_\bullet$
(resp.~$V'_\bullet$). We note that $S'$ is a stratum of $(V', T')$.
\item[(5)] The following commutative diagram
\begin{equation}\label{d-eq4.5}
\xymatrix{&& T' \ar[dll]_-{p_1|_{T'}}\ar[dd]^(.60){j'}\ar[drr]^-{p_2|_{T'}}&& \\
T\ar[dd]_-j \ar@{-->}[rrrr]|-\hole ^(.40){\alpha|_T}&&&& T\ar[dd]^-j \\
&& V'\ar[dll]_-{p_1}\ar[drr]^-{p_2} && \\
V \ar@{-->}[rrrr]_-\alpha&&&& V
}
\end{equation}
can be lifted to a commutative diagram
\begin{equation}\label{d-eq4.6}
\xymatrix{&& T'_\bullet \ar[dll]_-{p_1|_{T'_\bullet}}\ar[dd]^(.60){\phi'}
\ar[drr]^-{p_2|_{T'_\bullet}}&& \\
T_\bullet\ar[dd]_-\phi \ar@{-->}[rrrr]|-\hole ^(.40){\alpha|_{T_\bullet}}&&&& T_\bullet\ar[dd]^-\phi \\
&& V'_\bullet\ar[dll]_-{p_1|_{V'_\bullet}}\ar[drr]^-{p_2|_{V'_\bullet}} && \\
V _\bullet\ar@{-->}[rrrr]_-{\alpha_\bullet}&&&& V_\bullet
}
\end{equation}
over \eqref{d-eq4.5}
by \eqref{d-eq4.3} and \eqref{d-eq4.4}
such that $p_1|_{V'_p}, p_2|_{V'_p}, \alpha_p, p_1|_{T'_q},
p_2|_{T'_q}$, and $\alpha|_{T'_q}$ are birational maps
of smooth varieties for all $p$ and $q$.
\item[(6)] If $\alpha\colon (V, T)\dashrightarrow (V, T)$ is
projective over a fixed scheme $Y$, that is,
there exists the following commutative diagram
$$
\xymatrix{
(V, T) \ar[dr]_-h\ar@{-->}[rr]^-\alpha&& (V, T)
\ar[dl]^-h\\
&Y&
}
$$ such that $h$ is projective,
then
$V'$ is also projective over $Y$.
\end{itemize}
\end{defn}
The main purpose of this section is to establish the
following result, which will play a crucial role in the proof of
Theorem \ref{B-thm1.3} in Section \ref{C-sec5}.
\begin{lem}\label{x-lem4.6}
We use the same notation and assumption as in Theorem \ref{d-thm4.1}.
We assume that $Y$ is a curve.
We further assume that $(V, T+\Supp h^*\Sigma)$ is a simple
normal crossing pair and that all the local monodromies
on the local system $R^jh_*\mathbb Q_{S^*}$ around
$\Sigma$ are unipotent for any stratum $S$ of $(V, T)$ and
all $j$, where $S^*=S|_{V^*}$.
Let $\alpha \colon V\dashrightarrow V$ be a birational map between $(V, T)$
and $(V, T)$ over $Y$.
We assume that
$\alpha$ satisfies condition $(\bigstar)$ in Definition \ref{x-def4.5}.
Then $\alpha$ induces isomorphisms
$$
\alpha^* \colon W_m\Gr^0_F(\mathcal V^k_Y)\overset{\sim}{\longrightarrow}
W_m\Gr^0_F(\mathcal V^k_Y)
$$
for all $m$ and $k$, where $W$ denotes the canonical extension of
the weight filtration.
Let $G$ be a finite group which acts on $(V, T)$ birationally over $Y$
such that every element $\alpha\in G$ satisfies condition $(\bigstar)$
in Definition \ref{x-def4.5}.
Then $G$ acts on $W_m\Gr^0_F(\mathcal V^k_Y)$ for all $m$ and $k$.
\end{lem}
In the proof of Lemma \ref{x-lem4.6}, we will use some arguments and
constructions in \cite[Section 4]{fujino-fujisawa}.
\begin{proof}[Proof of Lemma \ref{x-lem4.6}]
\setcounter{step}{0}
By assumption, $\alpha$ satisfies condition $(\bigstar)$ in Definition \ref{x-def4.5}.
Therefore, we can take a commutative diagram
\begin{equation*}\label{eq-zu7}
\xymatrix{
& (V', T') \ar[dl]_-{p_1}\ar[dr]^-{p_2}& \\
(V, T) \ar@{-->}[rr]_-\alpha&& (V, T)
}
\end{equation*}
as in \eqref{d-eq4.2}. We note that
$V'$ is projective
over $Y$.
From now on, we will use the same notation as in
Definition \ref{x-def4.5}.
We put $u=h\circ j\circ\varepsilon_T \colon T_\bullet \to Y$ and
$v=h\circ \varepsilon _V \colon V_\bullet \to Y$.
We set $E_\bullet =v^{-1}(\Sigma)_{\mathrm{red}}$ and
$F_\bullet=u^{-1}(\Sigma)_{\mathrm{red}}$.
Since $(V, T+\Supp h^*\Sigma)$ is a simple normal crossing
pair by assumption,
$E_\bullet$ and $F_\bullet$ are simple
normal crossing divisors on $V_\bullet$ and $T_\bullet$, respectively.
As in the proof of \cite[Lemma 4.12]{fujino-fujisawa},
we can construct a complex $C(\phi^*)$
on $Y$ equipped with filtrations $W$ and $F$ such that
$H^k(C(\phi^*))\simeq
\mathcal V^k_Y$, where
$\mathcal V^k_Y$ is the canonical extension of
$\mathcal V^k_{Y^*}=R^k(h|_{V^*})_*
\iota_!\mathbb Q_{V^*
\setminus T^*}\otimes \mathcal O_{Y^*}$, for every $k$.
We note that the filtration $W$ is denoted by $L$ in \cite[Lemma 4.12]
{fujino-fujisawa}.
\begin{step}\label{d-step1}
The spectral sequence
$$
E^{p,q}_1(C(\phi^*), F)=H^{p+q}(\Gr^p_FC(\phi^*))\Rightarrow
H^{p+q}(C(\phi^*))
$$
degenerates at $E_1$
(see the proof of \cite[Lemma 4.12]{fujino-fujisawa}
and \cite[13.3]{fujino-slc-trivial}).
Therefore, we have the following short exact sequences
$$
\xymatrix{
0 \ar[r]& \ar[r]^-{s^{p+q}}H^{p+q}(F^1C(\phi^*)) & \ar[r]^-{t^{p+q}}H^{p+q}(C(\phi^*))
& H^{p+q}(\Gr^0_FC(\phi^*)) \ar[r]& 0
}
$$
for all $p$ and $q$.
We note that $F^0C(\phi^*)=C(\phi^*)$ by construction.
Let us consider the following commutative diagram.
$$
\xymatrix{
0 \ar[r]& \ar[r]^-{s^{p+q}}H^{p+q}(F^1C(\phi^*)) & \ar[r]^-{t^{p+q}}H^{p+q}(C(\phi^*))
& H^{p+q}(\Gr^0_FC(\phi^*)) \ar[r]& 0\\
& & H^{p+q} (W_{-p}C(\phi^*))\ar[r]\ar[u]_-{a^{p+q}_{-p}}&
H^{p+q}(W_{-p} \Gr^0_FC(\phi^*))\ar[u]_-{b^{p+q}_{-p}}&
}
$$
By definition,
we have
$$
F^1H^{p+q}(C(\phi^*))=\mathrm{Im}\, s^{p+q}
$$
and
$$
W_q H^{p+q}(C(\phi^*))=\mathrm{Im}\,a^{p+q}_{-p}
$$
for all $p$ and $q$.
We put
\begin{equation}\label{d-eq4.7}
W_qH^{p+q}(\Gr^0_FC(\phi^*)):=\mathrm{Im}\, b^{p+q}_{-p}
\end{equation}
for all $p$ and $q$.
Then the map $t^{p+q}$
induces
\begin{equation}\label{d-eq4.8}
\xymatrix{
\Gr^0_FH^{p+q}(C(\phi^*)) \ar[r]^-{\sim} & H^{p+q} (\Gr^0_FC(\phi^*)) \\
W_q \Gr^0_FH^{p+q} (C(\phi^*)) \ar[r]_-{i^{p+q}_q}\ar@{^{(}->}[u]&
W_q H^{p+q} (\Gr^0_FC(\phi^*)) \ar@{^{(}->}[u]
}
\end{equation}
for all $p$ and $q$.
We will prove that $i^{p+q}_q$ are isomorphisms
for all $p$ and $q$ in Step \ref{d-step2}.
\end{step}
\begin{step}\label{d-step2}
Let us analyse the spectral sequence
\begin{equation}\label{d-eq4.9}
E^{p, q}_1(C(\phi^*), W)\Rightarrow H^{p+q}(C(\phi^*))
\end{equation}
in detail.
Let $\Omega_{V_{p+1}/Y}(\log E_{p+1})$ and
$\Omega_{T_p/Y}(\log F_p)$ be relative logarithmic de Rham complexes
of $v_{p+1} \colon V_{p+1}\to Y$ and $u_p \colon T_p\to Y$, respectively.
Then we have
$$
\left(E^{p, q}_1(C(\phi^*), W), F\right)=
\left(R^q(v_{p+1})_*\Omega_{V_{p+1}/Y}(\log E_{p+1}), F\right)
\oplus \left(R^q(u_p)_*\Omega_{T_p/Y}(\log F_p), F\right)
$$
by construction. We note that the
differentials of the spectral sequence \eqref{d-eq4.9}
are strictly compatible with the filtration induced by $F$
(see \cite[(1.1.5)]{deligne}, \cite[Remark 3.2]{fujino-fujisawa},
and \cite[A.~3.1]{peters-steenbrink})
and that the spectral sequence \eqref{d-eq4.9}
degenerates at $E_2$.
We do not repeat the proof of the above facts here.
For the proof, see the first part of the proof of
\cite[Lemma 4.12]{fujino-fujisawa}
and \cite[13.3]{fujino-slc-trivial}.
The following argument corresponds to the strictness of the filtration $F$
on the $E_0$-term of the spectral sequence
$E^{p,q}_r(C(\phi^*), W)$ (see \cite[13.3]{fujino-slc-trivial}).
By \cite[(2.11) Theorem]{steenbrink},
$R^b(u_p)_*\Omega^a_{T_p/Y}(\log F_p)$ is
locally free for any $a$, $b$, and $p$.
Therefore, the spectral sequence
$$
R^b(u_p)_*\Omega^a_{T_p/Y}(\log F_p)
\Rightarrow
R^{a+b}(u_p)_*\Omega_{T_p/Y}(\log F_p)
$$
degenerates at $E_1$. In particular,
$$
\Gr^0_FR^q(u_p)_*\Omega_{T_p/Y}(\log F_p)\simeq
R^q(u_p)_*\mathcal O_{T_p}
$$
holds for any $p$, $q$.
In the same way, we see
that
$$\Gr^0_FR^q(v_{p+1})_*\Omega_{V_{p+1}/Y}(\log E_{p+1})\simeq
R^q(v_{p+1})_*\mathcal O_{V_{p+1}}
$$
holds for any $p$, $q$.
Thus we have
\begin{equation}\label{d-eq4.10}
\begin{split}
&\Gr^0_FE^{p,q}_1(C(\phi^*), W) \\&=\Gr^0_FR^q(v_{p+1})_*
\Omega_{V_{p+1}/Y}(\log E_{p+1})
\oplus \Gr^0_F R^q(u_p)_*\Omega_{T_p/Y}(\log F_p)\\
&\simeq R^q(v_{p+1})_*\mathcal O_{V_{p+1}}\oplus
R^q(u_p)_*\mathcal O_{T_p}.
\end{split}
\end{equation}
By taking $\Gr^0_F$ of the spectral sequence
\eqref{d-eq4.9},
we obtain the following spectral sequence
\begin{equation*}
E^{p,q}_1(\Gr^0_FC(\phi^*), W)\Rightarrow
H^{p+q}(\Gr^0_FC(\phi^*)).
\end{equation*}
Note that
$$
\Gr^0_FE^{p,q}_1(C(\phi^*), W)\simeq
E^{p,q}_1(\Gr^0_FC(\phi^*), W)
$$
holds as we saw in \eqref{d-eq4.10}. Moreover,
$$
\Gr^0_FE^{p, q}_r(C(\phi^*), W) \simeq E^{p, q}_r(\Gr^0_FC(\phi^*), W)
$$
holds
for every $r\geq 0$
by the lemma on two filtrations
(see \cite[Propositions (7.2.5) and (7.2.8)]{deligne2} and
\cite[Theorem 3.12]{peters-steenbrink}).
Hence, we obtain
\begin{equation}\label{d-eq4.11}
\begin{split}
\Gr^0_F\Gr^W_q\!H^{p+q}(C(\phi^*))&\simeq \Gr^0_FE^{p,q}_2(C(\phi^*), W)
\\ &\simeq E^{p,q}_2(\Gr^0_FC(\phi^*), W)
\simeq
\Gr^W_q\!H^{p+q}(\Gr^0_FC(\phi^*))
\end{split}
\end{equation}
for all $p$ and $q$.
We note that the filtration $W$ on $H^{p+q}(\Gr^0_FC(\phi^*))$
is the one defined in \eqref{d-eq4.7}. We also note that
$\Gr^0_F\Gr^W_q\!H^{p+q}(C(\phi^*))$ is canonically
isomorphic to $\Gr^W_q\!\Gr^0_FH^{p+q}(C(\phi^*))$.
Thus, we can check that
\begin{equation}\label{d-eq4.12}
\xymatrix{
i^{p+q}_q \colon W_q\Gr^0_FH^{p+q}(C(\phi^*)) \ar[r]&
W_qH^{p+q}(\Gr^0_FC(\phi^*))
}
\end{equation}
in \eqref{d-eq4.8} are isomorphisms
for all $p$ and $q$ inductively by using \eqref{d-eq4.8} and \eqref{d-eq4.11}.
\end{step}
\begin{step}\label{d-step3}
In this proof, we did not define the filtration $W$ on
$C(\phi^*)$ explicitly. For the details of
the filtration $W$ on $C(\phi^*)$,
which is denoted by
$L$ in \cite[Section 4]{fujino-fujisawa}, see
(4.2.1) and (4.8.2) in \cite[Section 4]{fujino-fujisawa}.
By construction, we have
\begin{equation*}
\begin{split}
W_{-p} \Gr^0_FC(\phi^*)^n &=
W_{-p-1}(Rv_*\mathcal O_{V_\bullet})^{n+1}\oplus
W_{-p}(Ru_*\mathcal O_{T_\bullet})^n\\
&= \bigoplus_{s\geq p+1} (R(v_s)_*\mathcal O_{V_s})^{n+1-s}
\oplus \bigoplus _{t\geq p} (R(u_t)_*\mathcal O_{T_t})^{n-t}.
\end{split}
\end{equation*}
Therefore, by Lemma \ref{d-lem4.3} and the commutative
diagram \eqref{d-eq4.6} in Definition \ref{x-def4.5},
$\alpha$ induces isomorphisms
$$
\alpha^* \colon W_{-p}\Gr^0_FC(\phi^*)\overset{\sim}{\longrightarrow}
W_{-p}\Gr^0_FC(\phi^*)
$$
for all $p$.
Thus $\alpha$ induces isomorphisms
$$
\alpha^* \colon W_qH^{p+q}(\Gr^0_FC(\phi^*))
\overset{\sim}{\longrightarrow} W_qH^{p+q}(\Gr^0_FC(\phi^*))
$$
for all $p$ and $q$ by the following commutative diagram
\begin{equation*}
\xymatrix{
H^{p+q}(W_{-p}\Gr^0_FC(\phi^*)) \ar[d]^-\wr_-{\alpha^*}\ar[r]
& H^{p+q}(\Gr^0_FC(\phi^*))\ar[d]^-\wr_-{\alpha^*}
\\
H^{p+q}(W_{-p}\Gr^0_FC(\phi^*))\ar[r]&
H^{p+q}(\Gr^0_FC(\phi^*))
}
\end{equation*}
and the definition of the filtration $W$ in \eqref{d-eq4.7}.
Hence, we obtain isomorphisms
$$
\alpha^* \colon W_mH^k(\Gr^0_FC(\phi^*))\overset{\sim}
{\longrightarrow}W_mH^k(\Gr^0_FC(\phi^*))
$$
for all $m$ and $k$ by putting
$p=k-m$ and $q=m$.
By \eqref{d-eq4.12} and the fact that
$\mathcal V^k_Y\simeq H^k(C(\phi^*))$, we obtain the
desired isomorphisms
$$
\alpha^* \colon W_m\Gr^0_F(\mathcal V^k_Y)\overset{\sim}
{\longrightarrow}W_m\Gr^0_F(\mathcal V^k_Y)
$$
for all $m$ and $k$.
\end{step}
When the group $G$ acts on $(V, T)$ birationally over $Y$
such that every element $\alpha\in G$ satisfies
condition $(\bigstar)$ in Definition
\ref{x-def4.5}, it is easy to see that
$G$ also acts on $W_m\Gr^0_F(\mathcal V^{k}_Y)$ for all $m$ and
$k$ by the
above result.
\end{proof}
We make an important remark on dual variations of
mixed Hodge structure. We will use it
in Step \ref{C-step4} in the proof of Theorem \ref{B-thm1.3}.
\begin{rem}
[{see \cite[Remarks 3.15 and 7.4]{fujino-fujisawa}}]\label{x-rem4.7}
We use the same notation and assumption as in Lemma \ref{x-lem4.6}.
Let us consider the dual local system of $R^k(h|_{V^*})_*\iota_!\mathbb Q_{
V^*\setminus T^*}$ and
the dual variation of mixed Hodge structure on it.
Then the locally free sheaf $(\mathcal V^k_{Y^*})^*$ carries
the Hodge filtration $F$ and the weight filtration $W$ defined
as in \cite[Remark 3.15]{fujino-fujisawa}.
By the construction of the Hodge filtration $F$,
$$
\Gr^0_F\!\left((\mathcal V^k_Y)^*\right)\simeq
\left(\Gr^0_F(\mathcal V^k_Y)\right)^*
$$
holds, where $(\mathcal V^k_Y)^*$ is
the canonical extension of $(\mathcal V^k_{Y^*})^*$.
More generally,
$$
\Gr^{-p}_F\!\left((\mathcal V^k_Y)^*\right)\simeq
\left(\Gr^p_F(\mathcal V^k_Y)\right)^*
$$
holds for every $p$.
We note that
$\Gr^0_F\!\left((\mathcal V^k_Y)^*\right)=F^0\!\left((\mathcal V^k_Y)^*\right)$,
the canonical extension of the lowest piece of the Hodge filtration.
By taking the dual of Lemma \ref{x-lem4.6},
$G$ acts on $W_m\Gr^0_F\!\left((\mathcal V^k_Y)^*\right)$ for
every $m$,
where $W$ denotes the canonical extension of
the weight filtration of $(\mathcal V^k_{Y^*})^*$.
We note that we have
$$
\Gr^W_m\Gr^p_F\!\left((\mathcal V^k_Y)^*\right)
\simeq \left( \Gr^W_{-m}\Gr^{-p}_F(\mathcal V^k_Y)\right)^*
$$
for all $p$ and $m$ by construction.
\end{rem}
We close this section with the following lemma, which is
more or less well known to the experts (see \cite{zucker},
\cite{peters}, \cite{kollar}, and \cite{fujino-fujisawa2}).
We will use it in the proof of Theorem \ref{B-thm1.3} in Section \ref{C-sec5}.
\begin{lem}\label{d-lem4.7}
Let $C$ be a smooth projective curve
and let $C_0$ be a non-empty Zariski open set of $C$.
Let $V_0$ be a polarizable variation of $\mathbb Q$-Hodge
structure over $C_0$ with unipotent monodromies around $\Sigma=C\setminus
C_0$. Let $F^b$ be the canonical extension of the lowest piece of
the Hodge filtration.
Let $\mathcal L$ be a line bundle on $C$ which is a direct summand of $F^b$.
Assume that $\deg_C\mathcal L=0$.
Then $\mathcal L|_{C_0}$ is a flat subbundle of $F^b|_{C_0}$.
\end{lem}
\begin{proof}
Let $h_0$ be the smooth hermitian metric on $\mathcal L|_{C_0}$ induced
by the Hodge metric of $F^b|_{C_0}$.
Then $\frac{\sqrt{-1}}{2\pi} \Theta_{h_0}(\mathcal L|_{C_0})$ is a semipositive
smooth $(1, 1)$-form on $C_0$.
We note that $\Theta_{h_0}(\mathcal L|_{C_0})$ is the curvature tensor of the Chern
connection of $(\mathcal L|_{C_0}, h_0)$.
Then $$\deg_C\mathcal L=\frac{\sqrt{-1}}{2\pi}\int _{C_0}
\Theta_{h_0}(\mathcal L|_{C_0})$$ holds (see, for example, \cite[Theorem 5.1]{kollar}).
Note that the right hand side is an improper integral.
By assumption, $\deg _C\mathcal L=0$.
This implies that $\Theta_{h_0}(\mathcal L|_{C_0})=0$.
Therefore, $\mathcal L|_{C_0}$ is a flat subbundle of $F^b|_{C_0}$.
\end{proof}
\begin{rem}\label{d-rem4.8}
In Lemma \ref{d-lem4.7},
the smooth hermitian metric $h_0$ on $\mathcal L|_{C_0}$ can be
extended naturally to a singular hermitian metric $h$
on $\mathcal L$ in the sense of Demailly such that
$\sqrt{-1}\Theta_h(\mathcal L)$ is positive in the sense of
currents and that the Lelong
numbers of $h$ are zero everywhere.
For the details, see \cite[Theorem 1.1]{fujino-fujisawa2}.
\end{rem}
\section{Proof of Theorem \ref{B-thm1.3}}\label{C-sec5}
In this section, we prove Theorem \ref{B-thm1.3} and
Corollary \ref{B-cor1.4}.
\medskip
Let us prepare an easy lemma.
By this lemma, we can reduce the problem to the case where the base
space is a curve.
\begin{lem}\label{C-lem5.1}
Let $Y$ be a smooth projective irreducible variety with $\dim Y\geq 2$ and
let $N$ be a numerically trivial Cartier divisor on $Y$.
Let $H$ be a smooth ample Cartier divisor on $Y$ such
that $H$ contains no irreducible components of $\Supp N$.
Then $N\sim 0$ if and only if $N|_H\sim 0$.
\end{lem}
\begin{proof}
We consider the following long exact sequence
\begin{equation*}
\begin{split}
0&\to H^0(Y, \mathcal O_Y(N-H))\to
H^0(Y, \mathcal O_Y(N))\to H^0(H, \mathcal O_H(N|_H))
\\&\to H^1(Y, \mathcal O_Y(N-H))\to \cdots.
\end{split}
\end{equation*}
It is obvious that $H^0(Y, \mathcal O_Y(N-H))=0$.
By Serre duality and the Kodaira vanishing theorem applied to
the ample divisor $H-N$ (here we use $\dim Y\geq 2$),
we have $H^1(Y, \mathcal O_Y(N-H))=0$.
Therefore,
$H^0(Y, \mathcal O_Y(N))\simeq
H^0(H, \mathcal O_H(N|_H))$ holds.
In particular, $N\sim 0$ if and only if $N|_H\sim 0$.
\end{proof}
Let us start the proof of Theorem \ref{B-thm1.3}.
We adapt Floris's proof of
Theorem \ref{B-thm1.3} for lc-trivial
fibrations (see \cite{floris}) to our setting, that is,
basic slc-trivial fibrations.
\begin{proof}[Proof of Theorem \ref{B-thm1.3}]
This proof heavily depends on \cite[Section 6]{fujino-slc-trivial}.
Let $\sigma\colon Y'\to Y$ be a projective
birational morphism from a smooth
projective variety $Y'$. By considering
the induced basic slc-trivial fibration by $\sigma\colon Y'\to Y$,
we may assume that $Y$ is a smooth projective variety.
\setcounter{step}{0}
\begin{step}\label{C-step1}
In this step, we construct a cyclic cover of the generic
fiber of $f\colon X\to Y$ following \cite[6.1 and 6.2]{fujino-slc-trivial}.
Let $f \colon (X, B)\to Y$ be a basic slc-trivial fibration.
Let $F$ be a general fiber of $f \colon X\to Y$.
We put
$$
b=\min\{m \in \mathbb Z_{>0}\, |\, m(K_F+B_F)=m(K_X+B)|_F\sim 0\}.
$$
Then we can write
\begin{equation}\label{C-eq5.1}
K_X+B+\frac{1}{b}(\varphi)=f^*(K_Y+B_Y+M_Y)
\end{equation}
with $\varphi\in \Gamma(X, \mathcal K^*_X)$,
where $B_Y$ is the discriminant $\mathbb Q$-divisor
and $M_Y$ is the moduli $\mathbb Q$-divisor
of $f \colon (X, B)\to Y$.
By taking some suitable blow-ups
(see \cite[Theorem 1.4 and Section 8]{bierstone} and
\cite[Lemma 2.11]{fujino-ann}),
we may assume that $\Supp (B-f^*(B_Y+M_Y))$ is a simple
normal crossing divisor on $X$,
$(B^h)^{=1}$ is Cartier,
and every stratum of $(X, (B^h)^{=1})$ is dominant
onto $Y$.
We take the $b$-fold cyclic cover $\pi \colon \widetilde X\to X$ associated
to \eqref{C-eq5.1}, that is,
$$
\widetilde X=\Spec _X \bigoplus _{i=0}^{b-1}
\mathcal O_X(\lfloor i\Delta\rfloor),
$$
where $\Delta=K_{X/Y}+B-f^*(B_Y+M_Y)$.
We note that $\pi\colon \widetilde X\to X$ is a finite Galois
cover by construction (see \cite[Proposition 6.3 (i)]{fujino-slc-trivial}).
We put $K_{\widetilde X}+B_{\widetilde X}=\pi^*(K_X+B)$.
By construction, it is easy to see that
$(B^h_{\widetilde X})^{=1}=\pi^*((B^h)^{=1})$ and
that $(\widetilde X, (B^h_{\widetilde X})^{=1})$ is
semi-log canonical.
Moreover, every slc stratum of $(\widetilde X, (B^h_{\widetilde X})^{=1})$
is dominant onto $Y$.
We take a projective
birational morphism $d \colon V\to \widetilde X$
from a simple normal crossing variety $V$ such that
$d$ is an isomorphism
over the generic point of every slc stratum of $(\widetilde X,
(B^h_{\widetilde X})^{=1})$ by \cite[Theorem 1.4]{bierstone}. We
put $K_V+B_V=d^*(K_{\widetilde X}+B_{\widetilde X})$.
Then we
get the following commutative diagram
\begin{equation}\label{C-eq5.2}
\xymatrix{
(X, B)\ar[d]_-f & \widetilde X \ar[dl]_-{\widetilde f}
\ar[l]_-\pi& (V, B_V)\ar[dll]^-h\ar[l]_-d\\
Y & &
}
\end{equation}
with $g=\pi\circ d$.
By taking a suitable birational modification of $Y$ and
considering induced (pre-)basic slc-trivial fibrations as in
\cite[6.2]{fujino-slc-trivial},
we may further assume that the following properties hold for
$$
K_X+B+\frac{1}{b}(\varphi)=f^*(K_Y+B_Y+M_Y)
$$
and
$$
h \colon (V, B_V)
\overset{g}{\longrightarrow} (X, B)\overset{f}{\longrightarrow} Y.
$$
\begin{itemize}
\item[(a)] $Y$ is a smooth projective
irreducible variety, and $X$ and $V$ are projective
simple
normal crossing varieties.
\item[(b)] There exist simple normal crossing divisors
$\Sigma_X$, $\Sigma_V$, and $\Sigma_Y$
on $X$, $V$, and $Y$, respectively.
\item[(c)] $f$ and $h$ are projective surjective
morphisms.
\item[(d)] The supports of
$B$, $B_V$, and $B_Y$, $M_Y$ are
contained in
$\Sigma_X$, $\Sigma_V$, and $\Sigma_Y$, respectively.
\item[(e)] Every stratum of $(X, \Sigma^h_X)$
and $(V, \Sigma^h_V)$ is smooth
over $Y\setminus \Sigma_Y$.
\item[(f)] $f^{-1}(\Sigma_Y)\subset \Sigma_X$, $f(\Sigma^v_X)\subset
\Sigma_Y$, and $h^{-1}(\Sigma_Y)\subset
\Sigma_V$, $h(\Sigma^v_V)\subset
\Sigma_Y$.
\item[(g)] $(B^h)^{=1}$ and $(B^h_V)^{=1}$ are Cartier.
\end{itemize}
We note that conditions (a)--(g) above are nothing but the conditions
stated just before \cite[Proposition 6.3]{fujino-slc-trivial}.
As we saw in the proof of \cite[Theorem 1.2]{fujino-slc-trivial}
(see \cite[Section 9]{fujino-slc-trivial}),
$\mathbf M=\overline{\mathbf M_Y}$ holds and
$\mathbf M_Y$ is a nef $\mathbb Q$-divisor on $Y$.
By assumption, $\mathbf M_Y\equiv 0$.
If $\nu\colon Y''\to Y$ is a finite surjective morphism from a
smooth
projective irreducible variety $Y''$,
then it is easy to see that $\mathbf M_Y\sim _{\mathbb Q}0$ if and
only if $\nu^*\mathbf M_Y\sim _{\mathbb Q}0$.
Therefore, by taking a unipotent reduction
(see \cite[Lemma 7.3]{fujino-slc-trivial}),
we may further assume that
\begin{itemize}
\item[(A)] for any irreducible component $P$ of
$\Supp \Sigma_Y$, there exists a prime divisor $Q$ on $V$ such that
$\mult _Q(-B_V+h^*B_Y)=0$,
$h(Q)=P$, and
$\mult _Qh^*P=1$,
\item[(B)] all the local monodromies on the local system
$$R^{\dim V-\dim Y} (h|_{V^*})_*\iota_!\mathbb Q_{V^*\setminus
(B_{V^*}^h)^{=1}}$$ around
$\Sigma_Y$ are unipotent, where
$Y^*=Y\setminus \Sigma_Y$, $V^*=h^{-1}(Y^*)$, $B_{V^*}=(B_V)|_{V^*}$,
and $\iota\colon V^*\setminus (B_{V^*}^h)^{=1}\hookrightarrow V^*$ is the natural
open immersion, and
\item[(C)] all the local monodromies on the local system
$R^kh_*\mathbb Q_{S^*}$ around $\Sigma_Y$ are unipotent for any stratum $S$
of $(V, (B^h_V)^{=1})$ and every $k$, where $S^*=S|_{V^*}$.
\end{itemize}
Note that the above assumptions (A) and (B) are nothing but
the assumptions in (iv) and (v) in \cite[Proposition 6.3]{fujino-slc-trivial}.
We also note that
we do not treat the assumption (C) in the original statement of
\cite[Lemma 7.3]{fujino-slc-trivial}.
Therefore, we have to make $N_j$ in the proof of
\cite[Lemma 7.3]{fujino-slc-trivial} sufficiently divisible
in order to make the monodromy on the local system
$R^kh_*\mathbb Q_{S^*}$ around
$P_j$, an irreducible component of $\Sigma_Y$, unipotent for any
stratum $S$ of $(V, (B^h_V)^{=1})$ and
every $k$ when we take a finite cover $\nu \colon Y''\to Y$ for a unipotent
reduction (see \cite[Lemma 7.3]{fujino-slc-trivial}).
\end{step}
\begin{step}\label{C-step2}
We assume that $\dim Y\geq 2$.
Then we take a general ample Cartier divisor
$H$ on $Y$ and put $Z=f^*H$ and
$W=h^*H$.
In this situation,
$$
K_X+Z+B+\frac{1}{b}(\varphi)=f^*(K_Y+H+B_Y+M_Y).
$$
By adjunction,
$$
K_Z+B|_Z+\frac{1}{b}(\varphi|_Z)=f^*(K_H+B_Y|_H+M_Y|_H)
$$
holds.
It is not difficult to see that
$f|_Z \colon (Z, B|_Z)\to H$ is a basic slc-trivial fibration
and $$h|_W \colon (W, B_V|_W)\overset {g|_W}
{\longrightarrow} (Z, B|_Z)\overset{f|_Z}{\longrightarrow}
H$$ satisfies conditions (a)--(g),
(A), (B), and (C) in Step \ref{C-step1}.
We note that $B_Y|_H=B_H$ and $M_Y|_H=M_H$ hold,
where $B_H$ (resp.~$M_H$) is
the discriminant (resp.~moduli) $\mathbb Q$-divisor
of $f|_Z \colon (Z, B|_Z)\to H$.
By Lemma \ref{C-lem5.1},
$M_Y\sim _{\mathbb Q}0$ if and only if $M_Y|_H\sim
_{\mathbb Q}0$.
Therefore, we can replace $f \colon (X, B)\to Y$ with $f|_Z \colon
(Z, B|_Z)\to
H$.
By repeating this reduction finitely many times,
we may assume that $Y$ is a smooth projective
curve.
\end{step}
\begin{step}\label{C-step3}
In Step \ref{C-step1},
we have already seen that $\pi\colon \widetilde X\to X$ is Galois.
Let $G=\mathbb Z/b\mathbb Z$ be the Galois group of
the $b$-fold cyclic cover
$\pi \colon \widetilde X\to X$.
The action of $G$ on $\widetilde X$ preserves the slc strata
of $(\widetilde X, (B^h_{\widetilde X})^{=1})$ by construction.
Therefore,
any element $\alpha$ of $G$ induces a birational map between
$(V, T)$ and $(V, T)$ over $X$, where
$T=(B^h_V)^{=1}$.
From now on, we will check that $\alpha$ satisfies
condition $(\bigstar)$ in Definition \ref{x-def4.5}.
As usual, we can take a commutative diagram
\begin{equation*}
\xymatrix{
& (V', T') \ar[dd]^(.35){g'}\ar[dl]_-{p_1}\ar[dr]^-{p_2}& \\
(V, T) \ar[dr]_-g\ar[ddr]_-h\ar@{-->}[rr]^(.40){\alpha}&& (V, T)\ar[dl]^-g
\ar[ddl]^-h\\
& X\ar[d]^-f& \\
& Y&
}
\end{equation*}
by using \cite[Theorem 1.4]{bierstone}, where $(V', T')$ is a simple
normal crossing pair such that
$T'$ is reduced, and $p_i$ is a projective
birational morphism between $(V', T')$ and $(V, T)$ for
$i=1, 2$.
We put $C=(B^h)^{=1}$.
The irreducible decompositions of $X$ and $C$ are given by
$$
X=\bigcup _{i\in I} X_i, \quad\text{and}
\quad C=\bigcup _{\lambda\in \Lambda} C_\lambda
$$
respectively as in \cite[4.14]{fujino-fujisawa}.
We put $V=\bigcup _{i\in I}V_i$ and $V_i=\bigcup _j V_{i_j}$,
where $V_{i_j}$ runs over irreducible components
of $V$ such that
$g(V_{i_j})=X_i$.
We put $T=\bigcup _{\lambda\in \Lambda} T_\lambda$
and $T_\lambda=\bigcup _l T_{\lambda_l}$,
where $T_{\lambda_l}$ runs over irreducible components of $T$
such that $g(T_{\lambda_l})=C_\lambda$.
Note that $T_{\lambda}$ and $V_i$ are disjoint unions
of some strata of $(V, T)$.
By applying the same construction as above to
$(V', T')$ and $g':=g\circ p_1=g\circ p_2 \colon V'\to X$,
we get
$V'=\bigcup _{i\in I} V'_i$ and $T'=\bigcup _{\lambda\in \Lambda}
T'_\lambda$.
We apply the same construction as in \cite[4.14]{fujino-fujisawa}
to $V=\bigcup _{i\in I}V_i$ and $T=\bigcup _{\lambda\in \Lambda}
T_\lambda$
(resp.~$V'=\bigcup _{i \in I} V'_i$ and $T'=\bigcup _{\lambda\in \Lambda}
T'_\lambda$) instead of
$X=\bigcup _{i\in I}X_i$ and $D=\bigcup _{\lambda\in \Lambda}
D_\lambda$ in \cite[4.14]{fujino-fujisawa}.
Then we can construct semi-simplicial
resolutions $\varepsilon _T \colon T_\bullet \to T$ and
$\varepsilon _V \colon V_\bullet \to V$
(resp.~$\varepsilon _{T'} \colon T'_\bullet \to T'$ and $\varepsilon _{V'}
\colon V'_\bullet \to V'$).
By construction, these semi-simplicial
resolutions satisfy the conditions stated in Definition \ref{x-def4.5}.
Therefore, $\alpha$ satisfies condition $(\bigstar)$.
This is what we wanted.
\end{step}
\begin{step}\label{C-step4}
We note that $M_Y$ is a Cartier divisor on $Y$ and
that $\mathcal O_Y(M_Y)$ is a direct summand
of
$$\left(\Gr^0_F(\mathcal V^d_Y)\right)^*\simeq
\Gr^0_F\!\left((\mathcal V^d_Y)^*\right),
$$ where
$d=\dim X-\dim Y$ (see \cite[Proposition 6.3]{fujino-slc-trivial}).
More precisely, by construction,
$\mathcal O_Y(M_Y)$ is an eigensheaf of rank one corresponding
to the eigenvalue
$\zeta^{-1}$ of
$$h_*\omega_{V/Y}\left((B^h_V)^{=1}\right)\simeq
\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)$$
by the group action of $G=\mathbb Z/b\mathbb Z$,
where $\zeta$ is a fixed primitive $b$-th root of unity
(see the proof of \cite[Proposition 6.3]{fujino-slc-trivial}).
We take an integer $l$ such that
\begin{equation*}
\mathcal O_Y(M_Y)\subset W_l\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)
\ \
\text{and} \quad
\mathcal O_Y(M_Y)\not \subset W_{l-1}\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)
\end{equation*}
hold.
Thus we can easily see that
$\mathcal O_Y(M_Y)$ is an eigensheaf of rank one corresponding
to the eigenvalue $\zeta^{-1}$ of $W_l\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)$
and that $\mathcal O_Y(M_Y)\cap W_{l-1}\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)=\{0\}$
in $W_l\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)$. We note that
$G$ acts on $W_m\Gr^0_F\!\left((\mathcal V^d_Y)^*\right)$ for every $m$ by
Lemma \ref{x-lem4.6} and Remark \ref{x-rem4.7}.
Since $\deg M_Y=0$ by assumption,
$\mathcal O_Y(M_Y)|_{Y^*}$ defines a local subsystem of
$\Gr^W_l\!\left((\mathcal V^d_{Y^*})^*\right)$
by Lemma \ref{d-lem4.7}.
We note that
$$\Gr^W_l\Gr^0_F\!\left((\mathcal V^d_{Y^*})^*\right)\simeq
\Gr^0_F\Gr^W_l\!\left((\mathcal V^d_{Y^*})^*\right)
=F^0\Gr^W_l\!\left((\mathcal V^d_{Y^*})^*\right)\subset
\Gr^W_l\!\left((\mathcal V^d_{Y^*})^*\right)$$
holds since we have
$F^1\Gr^W_l\!\left((\mathcal V^d_{Y^*})^*\right)=0$ by
the construction of the dual Hodge filtration (see \cite[Remark 3.15]
{fujino-fujisawa} and Remark \ref{x-rem4.7}).
Therefore, there exists a positive integer $a$
such that $\mathcal O_Y(aM_Y)|_{Y^*}\simeq
\mathcal O_{Y^*}$ by \cite[Corollaire (4.2.8) (iii) b)]{deligne}.
This is because $\Gr^W_l\!\left((\mathcal V^d_{Y^*})^*\right)$ is a polarizable
variation of $\mathbb Q$-Hodge structure.
Thus we get $\mathcal O_Y(aM_Y)\simeq
\mathcal O_Y$ by taking the canonical extension.
This is what we wanted.
\end{step}
Hence, we obtain $\mathbf M_{Y'}\sim _{\mathbb Q}0$.
\end{proof}
We close this section with the proof of
Corollary \ref{B-cor1.4}.
\begin{proof}[Proof of Corollary \ref{B-cor1.4}]
By \cite[Lemma 4.12]{fujino-slc-trivial},
we may assume that $Y$ is a smooth projective
curve. We always have $\deg M_Y\geq 0$ since $M_Y$
is nef by \cite[Theorem 1.2]{fujino-slc-trivial}.
If $\deg M_Y>0$,
then it is obvious that
$M_Y$ is ample.
If $\deg M_Y=0$, then
$M_Y$ is numerically trivial.
In this case, by Theorem \ref{B-thm1.3},
$M_Y\sim _{\mathbb Q}0$ holds.
Therefore, we see that $M_Y$ is always
semi-ample.
\end{proof}
\section{Introduction}
\label{introduction}
Ultrarelativistic heavy-ion collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC) have created partonic matter at extreme temperatures and energy densities, the quark-gluon plasma (QGP), which is governed by quantum chromodynamics (QCD). First-principles lattice QCD calculations show that the transition from hadronic to partonic matter at zero baryon chemical potential $\mu_B$ is a smooth crossover \cite{Aoki:2006we,Bellwied:2015rza,Bazavov:2018mes}. However, calculations of the phase transition in the QCD phase diagram at finite baryon chemical potential still have large uncertainties~\cite{Fischer:2014ata,Gao:2015kea,Fu:2019hdw}, especially regarding the conjectured endpoint of the first-order phase transition boundary, the so-called QCD critical endpoint (CEP) \cite{Gavai:2008zr,Stephanov:2004wx,Mohanty:2009vb}, due to the famous sign problem \cite{Gavai:2003mf,deForcrand:2002hgr,Fodor:2001au}.
To explore the nature of the QCD phase diagram, the beam energy scan (BES) program at RHIC is searching for the QCD critical point with Au$+$Au collisions over a wide range of collision energies~\cite{Aggarwal:2010wy,Adamczyk:2013dal,Adamczyk:2014fia,Adare:2015aqk,Adamczyk:2017wsl,Adam:2020unf}. The fireballs created in Au$+$Au collisions at different energies freeze out at different points of the QCD phase diagram. Because certain singularities appear at the CEP in the thermodynamic limit~\cite{ibook02}, we expect to observe nonmonotonic behaviors if the evolution trajectory of the colliding system is close enough to the CEP. For example, event-by-event fluctuations of various conserved quantities are proposed as possible signatures of the existence of the CEP \cite{Koch:2005vg,Asakawa:2000wh,Asakawa:2009aj} because they are proportional to the corresponding susceptibilities and correlation lengths. Many recent experimental results on net-proton fluctuations hint that a critical point might have been reached during the evolution of Au$+$Au collisions at low collision energies~\cite{Adamczyk:2013dal, Adam:2020unf,Luo:2017faz}, which serves as a main motivation for upcoming research projects such as those at FAIR in Germany, NICA in Russia, and HIAF in China.
On the other hand, it is difficult to connect the thermal properties of static QCD matter with experimental measurements, since relativistic heavy-ion collisions involve several distinct stages of dynamical evolution. Studying the full evolution history of the thermodynamic properties of the QCD matter with a dynamical transport model may help bridge this gap~\cite{Zhang:2008zzk, Lin:2014tya}. In this work, we investigate the space-time evolution of the parton matter created in Au$+$Au collisions at different energies, including the transverse flow, effective temperature, and conserved charge chemical potentials, by using the string melting version of a multiphase transport (AMPT) model~\cite{Lin:2004en}.
The paper is organized as follows. Section~\ref{AMPT} briefly introduces the string melting version of the AMPT model and the improvements that we make. Comparisons of the space-time evolution of transverse flow at different collision energies are presented in Sec.~\ref{flow}. We then discuss the space-time evolution of the effective temperature and chemical potentials in Sec.~\ref{SPACE-TIME EVOLUTION}. We show the trajectories of Au$+$Au collisions at different energies in the QCD phase diagram in Sec.~\ref{diagram}. We present the space-time evolution of the pressure anisotropy to discuss whether the systems are in equilibrium or out of equilibrium in Sec.~\ref{Pressureanisotropy}. Finally, a summary is given in Sec.~\ref{summary}.
\section{A multiphase transport model including the nuclear thickness}
\label{AMPT}
The string melting version of the AMPT model uses fluctuating initial conditions from the heavy-ion jet interaction generator (HIJING) model~\cite{Wang:1991hta}, in which minijet partons and strings are produced from hard processes and soft processes, respectively. With the string melting mechanism, all parent hadrons from the fragmentation of the excited strings are converted into partons. The interactions among these partons are described by Zhang's parton cascade (ZPC) model \cite{Zhang:1997ej}, which includes elastic two-body scatterings based on the leading-order pQCD gg $\rightarrow $ gg cross section:
\begin{equation}
\frac{d\sigma}{dt}=\frac{9\pi\alpha^{2}_{s}}{2}(1+\frac{\mu^{2}}{s})\frac{1}{(t-\mu^{2})^{2}}.
\label{q1}
\end{equation}
In the above, $\alpha_{s}$ is the strong-coupling constant (taken as 0.33), while $s$ and $t$ are the usual Mandelstam variables. The effective screening mass $\mu$ is taken as a parameter in ZPC for the parton scattering cross section, and we set $\mu$ to 2.265 fm$^{-1}$, leading to a total cross section of about 3 mb for elastic scatterings in the default setting. The AMPT model implements a spatial quark coalescence model, which combines nearby freeze-out partons into mesons or baryons, to describe the transition from the partonic matter to the hadronic matter. The final-stage hadronic evolution is modeled by an extension of a relativistic transport model (ART) including both elastic and inelastic scatterings for baryon-baryon, baryon-meson, and meson-meson interactions~\cite{Li:1995pra}. The other parameters are taken to be the same as those in Refs.~\cite{Lin:2014tya,Ma:2016fve}, which can reasonably reproduce many experimental observables such as rapidity distributions, $p_T$ spectra, and anisotropic flows \cite{Lin:2001zk,Chen:2004dv,Ma:2016fve} for both Au$+$Au collisions at RHIC and Pb$+$Pb collisions at LHC energies.
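As a quick consistency check of these parameter values (the following short calculation is ours), integrating the above differential cross section over the kinematically allowed range $-s\leq t\leq 0$ gives
\begin{equation*}
\sigma=\int_{-s}^{0}\frac{d\sigma}{dt}\,dt=\frac{9\pi\alpha^{2}_{s}}{2}\left(1+\frac{\mu^{2}}{s}\right)\left(\frac{1}{\mu^{2}}-\frac{1}{s+\mu^{2}}\right)=\frac{9\pi\alpha^{2}_{s}}{2\mu^{2}},
\end{equation*}
which is independent of $s$. With $\alpha_{s}=0.33$ and $\mu=2.265$ fm$^{-1}$, this gives $\sigma\approx 0.30$ fm$^{2}\approx 3.0$ mb, consistent with the quoted default cross section.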
To study heavy-ion collisions at low energies, we have improved the string melting AMPT by modeling the finite nuclear thickness, which has been shown to be important for nuclear collisions at lower energies \cite{Lin:2017lcj, Mendenhall:2020fil, Mendenhall:2021maf}. In our convention, the $x$ axis is chosen along the direction of impact parameter $b$ from the target center to the projectile center, the $z$ axis is along the projectile direction, and the $y$ axis is perpendicular to both the $x$ and $z$ directions.
We consider the moment when the projectile and target nuclei contact each other as the starting time $t=0$, while the proper time $\tau$ is defined as $(t^2-z^2)^{1/2}$. The spatial density of nucleons inside the projectile or target follows the Woods-Saxon distribution. As shown in Fig.~\ref{schematic_diagram}(a), for a nucleon inside a hard-sphere projectile located at an initial position of ($x_i, y_i, z_i$), the thickness length $l$ of the target that the projectile nucleon punches through can be calculated as follows,
\begin{eqnarray}
l(x_i,y_i,b)=2\sqrt{R^2-(x_i\pm b/2)^2-y_i^2},
\label{thickness}
\end{eqnarray}
where $R$ is the hard-sphere radius of the colliding nuclei, and the $+$ and $-$ signs apply to projectile and target nucleons, respectively.
As shown in Fig.~\ref{schematic_diagram}(b), the time $t_e$ when the projectile nucleon enters the target in the center-of-mass frame of a Au$+$Au collision can be calculated as follows,
\begin{eqnarray}
t_e(x_i,y_i,z_i,b)&=&\frac{\sqrt{R^2-b^2/4}-[l(x_i,y_i,b)/2 \pm z_i]}{2\mathrm{sinh}~y_{CM}},
\label{teq}
\end{eqnarray}
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{schematic_diagram.eps}
\caption{(Color online) The schematic diagrams of a Au$+$Au collision with an impact parameter $b$ in the $x$-$z$ plane. (a) Consider a projectile nucleon $N$ (small open circle) at a location of ($x_i, y_i, z_i$) at the starting time $t=0$; (b) the projectile nucleon enters the target nucleus at $t=t_e(x_i, y_i, z_i, b)$; (c) the wounded nucleon from the projectile produces parent hadrons at a location of ($x_H, y_H, z_H$) at $t=t_H(x_i, y_i, z_i, b)$; (d) the projectile nucleon leaves the target nucleus at $t=t_e(x_i, y_i, z_i, b)+d_t$.}
\label{schematic_diagram}
\end{figure}
where $y_{CM}$ is the projectile rapidity in the center-of-mass frame.
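For illustration, the geometry above can be evaluated numerically. The following sketch (our own; it assumes the hard-sphere radius $R=1.12A^{1/3}$ fm, the standard relation $\cosh y_{CM}=\sqrt{s_{NN}}/(2m_N)$, and the $+$ sign choices for a projectile nucleon) computes $l$, $t_e$, and $d_t$:
\begin{verbatim}
import numpy as np

M_N = 0.938  # nucleon mass in GeV

def crossing_times(snn, xi=0.0, yi=0.0, zi=0.0, b=0.0, A=197):
    """Entry time t_e and crossing duration d_t (fm/c) of a projectile
    nucleon, following the thickness and entry-time formulas above."""
    R = 1.12 * A ** (1.0 / 3.0)                    # hard-sphere radius (fm)
    y_cm = np.arccosh(np.sqrt(snn) / (2.0 * M_N))  # projectile rapidity
    l = 2.0 * np.sqrt(R**2 - (xi + b / 2.0)**2 - yi**2)
    t_e = (np.sqrt(R**2 - b**2 / 4.0) - (l / 2.0 + zi)) / (2.0 * np.sinh(y_cm))
    d_t = l / (2.0 * np.sinh(y_cm))
    return t_e, d_t

# e.g., the center nucleon in a b = 0 Au+Au collision at 7.7 GeV
print(crossing_times(7.7))
\end{verbatim}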
Since parent hadrons are produced by interactions between projectile and target nucleons, as shown in Fig.~\ref{schematic_diagram}(c), the production time of parent hadrons, $t_H$, is obtained by sampling according to a time profile based on the probability function~\cite{Lin:2017lcj},
\begin{equation}
\frac{d^2E_T}{dy dt_H}=a_n [(t_H-t_e)(t_e+d_t-t_H)]^n \frac{dE_T}{dy},\quad t_H\in [t_e, t_e+d_t],
\label{time_profile}
\end{equation}
where we take the power $n = 4$, $a_n = 1/[d_t^{2n+1}\beta(n+1, n+1)]$ is the normalization factor with $\beta(a, b)$ denoting the beta function, and $d_t=l/(2\mathrm{sinh}~y_{CM})$ is the duration over which the projectile nucleon completely crosses the target nucleus. The parent hadrons produced by the same projectile or target nucleon are assumed to be produced at the same time $t_H$. Then the longitudinal coordinate of a parent hadron can be obtained as follows:
\begin{equation}
z_H=z_i\pm t_H\mathrm{sinh}~y_{CM},
\label{zstring}
\end{equation}
while its transverse coordinates ($x_H, y_H$) are set to the transverse positions of the projectile or target nucleon.
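Since the normalized profile in Eq.~(\ref{time_profile}) is simply a Beta$(n+1,n+1)$ density rescaled to the interval $[t_e, t_e+d_t]$, $t_H$ can be sampled directly; a minimal sketch (our own, using NumPy):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_tH(t_e, d_t, n=4, size=None):
    """Sample hadron production times: the normalized profile
    [(t_H - t_e)(t_e + d_t - t_H)]^n is a Beta(n+1, n+1) density
    rescaled to [t_e, t_e + d_t]."""
    return t_e + d_t * rng.beta(n + 1, n + 1, size=size)
\end{verbatim}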
In the following, the partons are generated by string melting after a formation time:
\begin{equation}
t_f=E_{H}/m^2_{T,H},
\label{tf }
\end{equation}
where $E_{H}$ and $m_{T,H}$ represent the energy and transverse mass of the parent hadron. The initial positions of partons from melted strings are calculated from those of their parent hadrons using straight line trajectories. As a result, the initial condition of partonic matter after considering the finite-thickness effect is used for the parton cascade simulations in this study.
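A schematic sketch of this melting step (our own simplification, in natural units with $\hbar c = 0.197$ GeV\,fm; for illustration we assume the parton free-streams with its parent hadron's velocity during the formation time):
\begin{verbatim}
HBARC = 0.197  # GeV fm

def melt_parton(p_hadron, E_hadron, r_hadron, t_hadron):
    """Place a melted parton by free-streaming from its parent
    hadron's position for one formation time t_f = E_H / m_{T,H}^2."""
    px, py, pz = p_hadron
    mT2 = E_hadron**2 - pz**2        # m_T^2 = m^2 + p_T^2 of the hadron
    tf = E_hadron / mT2 * HBARC      # formation time in fm/c
    x, y, z = r_hadron
    # assumption: stream with the parent hadron's velocity p/E
    return (x + px / E_hadron * tf,
            y + py / E_hadron * tf,
            z + pz / E_hadron * tf), t_hadron + tf
\end{verbatim}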
To study the thermodynamic properties of partonic matter, in this study we focus on the space-time evolution of partonic matter during the parton cascade only. Using the string melting version of the AMPT model with the finite-thickness effect, 10 000 events of central Au$+$Au collisions ($0 - 5\%$ centrality, modeled with $b\leq3$ fm) are generated for each energy ($\sqrt {s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, 7.7, 4.9, and 2.7 GeV), which can be provided by the RHIC, FAIR, and NICA facilities.
\section{Results and discussions}
\label{results}
\subsection{Space-time evolution of transverse flow}
\label{flow}
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{denx3.eps}
\caption{(Color online) Proper-time evolution of parton density averaged over the transverse area of the overlap volume within space-time rapidity $|\eta_s|<0.5$ in $b=0$ fm Au$+$Au collisions at different energies.}
\label{denx3}
\end{figure}
First, the densities of formed partons averaged over the transverse area of the overlap volume within space-time rapidity $|\eta_s|<0.5$ as functions of proper time in $b=0$ fm Au$+$Au collisions at different energies are shown in Fig.~\ref{denx3}. The nuclear transverse area $A_T$ \cite{Lin:2017lcj} is defined as:
\begin{eqnarray}\label{transverse}
A_T=
\begin{cases}
\pi R^2_A & {t\ge d_t^{nuclei}/2}\\
\pi R^2_A \left [ 1-(1-2t/d_t^{nuclei})^2\right ] & {t<d_t^{nuclei}/2}
\end{cases},
\end{eqnarray}
with $R_A=1.12A^{1/3}$ fm, $A=197$, and $d_t^{nuclei} = 2R_A/\mathrm{sinh}~y_{CM}$ being the duration time for two nuclei of the same mass number $A$ with $b=0$ fm to cross each other in the center-of-mass frame. The density increases with the proper time at first, because more and more partons are produced. A higher density is reached at a higher collision energy. With the expansion of the fireball, the density then decreases gradually. Both the increase and the decrease become slower at lower collision energies, since the nuclei have a larger (less Lorentz-contracted) thickness at lower collision energies, which slows down the evolution, especially in the longitudinal direction.
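For reference, Eq.~(\ref{transverse}) is straightforward to evaluate; a small sketch (our own, with the same hard-sphere radius and rapidity relation as in the sketch above):
\begin{verbatim}
import numpy as np

def transverse_area(t, snn, A=197):
    """Nuclear transverse overlap area A_T (fm^2) for b = 0."""
    R_A = 1.12 * A ** (1.0 / 3.0)
    y_cm = np.arccosh(np.sqrt(snn) / (2.0 * 0.938))
    dt_nuclei = 2.0 * R_A / np.sinh(y_cm)  # nuclei crossing time
    if t >= dt_nuclei / 2.0:
        return np.pi * R_A**2
    return np.pi * R_A**2 * (1.0 - (1.0 - 2.0 * t / dt_nuclei)**2)
\end{verbatim}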
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\linewidth]{tr_flow.eps}
\caption{(Color online) Transverse flow component $\beta_x$ along the $x$ axis ($|y| < 0.5$ fm) as functions of $x$ and $\eta_s$ at different proper times in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV (first row) and 7.7 GeV (second row).}
\label{Tr_flow}
\end{figure*}
At the same time, the radial flow is calculated employing $\vec{\beta}=\sum_{i}\vec{p}_i/\sum_{i}E_i$, where the sum over index $i$ runs over all partons in the cell for all events of a given collision system. The flow component along the $x$ direction, $\beta_x$, as a function of the coordinate $x$ and space-time rapidity $\eta_s$ at different times in cells within $|y| < 0.5$ fm in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 and 7.7 GeV is shown in Fig.~\ref{Tr_flow}. After averaging over many events of central collisions, the transverse flow is antisymmetric along the $x$ axis at each space-time rapidity. The flow is very small at the early time $\tau = 0.2$ fm/$c$ and then develops rather quickly, especially at larger $x$~\cite{Lin:2014tya}.
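The cell-averaged flow can be obtained by accumulating parton momentum and energy in $(x,\eta_s)$ bins; a minimal sketch (our own; \texttt{px}, \texttt{E}, \texttt{x}, and \texttt{eta\_s} are hypothetical per-parton arrays pooled over all events):
\begin{verbatim}
import numpy as np

def cell_flow(px, E, x, eta_s, x_edges, eta_edges):
    """beta_x = sum(p_x) / sum(E) in (x, eta_s) cells, accumulated
    over all partons from all events of a collision system."""
    num, _, _ = np.histogram2d(x, eta_s, bins=[x_edges, eta_edges],
                               weights=px)
    den, _, _ = np.histogram2d(x, eta_s, bins=[x_edges, eta_edges],
                               weights=E)
    return num / den
\end{verbatim}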
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{flow_t9.eps}
\caption{(Color online) Proper time evolution of transverse flow component $\beta_x$ of partons within space-time rapidity $|\eta_s|<0.5$ in the cells at ($x, y$)=(1 fm, 0 fm) (filled symbols) and ($x, y$)=(7 fm, 0 fm) (open symbols) in central Au$+$Au collisions at different energies.}
\label{flow_t}
\end{figure}
Figure~\ref{flow_t} shows the transverse flows of partons in the two selected cells at ($x, y$)=(1 fm, 0 fm) and ($x, y$)=(7 fm, 0 fm) within space-time rapidity $|\eta_s|<0.5$ as functions of proper time in central Au$+$Au collisions at different energies. The transverse flow is larger farther away from the center of the overlap volume of central collisions \cite{Lin:2014tya}. We see that the transverse flow increases with time for both the inner cell and the outer cell in the beginning. Because the parton density increases faster at a higher collision energy, the transverse flow grows faster at a higher collision energy for both the inner and outer cells. However, compared with the parton density in Fig.~\ref{denx3}, the development of the transverse flow generally shows a time delay and proceeds more slowly.
\subsection{Space-time evolution of temperature and chemical potentials}
\label{SPACE-TIME EVOLUTION}
In the AMPT model, the energy-momentum tensor $T^{\mu \nu }$ can be calculated by averaging over particles and events in a volume $V$~\cite{Zhang:2008zzk}, i.e.,
\begin{equation}
T^{\mu \nu }=\frac{1}{V}\sum_{i}\frac{p_{i}^{\mu }p_{i}^{\nu }}{E_{i}}.
\end{equation}
In the rest frame of a small volume cell, the energy density is given by $\epsilon = T^{00}$, while the pressure components are related to the energy-momentum tensor by $P_{x} = T^{11}$, $P_{y} = T^{22}$, and $P_{z} = T^{33}$. The net conserved charge number densities $n_{B}$, $n_{Q}$, and $n_{S}$ can be calculated for the given volume as well. Therefore, the corresponding chemical potentials $\mu_{B}$, $\mu_{Q}$, and $\mu_{S}$, together with the temperature $T$, can be obtained by numerically solving Eqs.~(\ref{mu_T}) after the net conserved charge densities $n_{B}$, $n_{Q}$, and $n_{S}$ and $\epsilon$ are obtained from the AMPT model. Note that in this study we only extract $\mu$ and $T$ values for the central cell, for which the rest frame is assumed to be the A$+$A collision center-of-mass frame.
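To make the extraction concrete, the following sketch (our own simplification; it assumes massless $u$, $d$, and $s$ (anti)quarks with degeneracy 6, Boltzmann statistics, natural units, and no gluons, whereas the actual Eqs.~(\ref{mu_T}) include the quark masses) builds $T^{\mu\nu}$ from the partons in a cell and numerically inverts the densities for $(T,\mu_B,\mu_Q,\mu_S)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def stress_tensor(p4, V):
    """T^{mu nu} = (1/V) sum_i p_i^mu p_i^nu / E_i for an (N, 4)
    array p4 with columns (E, px, py, pz)."""
    return np.einsum('im,in->mn', p4, p4 / p4[:, :1]) / V

# quark quantum numbers (baryon, charge, strangeness); 2 spins x 3 colors
QUARKS = {'u': (1/3, 2/3, 0), 'd': (1/3, -1/3, 0), 's': (1/3, -1/3, -1)}
DEG = 6.0

def densities(T, muB, muQ, muS):
    """(n_B, n_Q, n_S, epsilon) of a massless quark-antiquark gas
    in Boltzmann statistics (all quantities in powers of GeV)."""
    nB = nQ = nS = eps = 0.0
    for B, Q, S in QUARKS.values():
        mu = B * muB + Q * muQ + S * muS
        n = DEG / np.pi**2 * T**3 * np.exp(mu / T)      # quarks
        nbar = DEG / np.pi**2 * T**3 * np.exp(-mu / T)  # antiquarks
        nB += B * (n - nbar)
        nQ += Q * (n - nbar)
        nS += S * (n - nbar)
        eps += 3.0 * T * (n + nbar)  # massless Boltzmann: e = 3 n T
    return nB, nQ, nS, eps

def extract_T_mu(nB, nQ, nS, eps, guess=(0.3, 0.5, 0.0, 0.1)):
    """Numerically invert the four densities for (T, muB, muQ, muS)."""
    res = lambda x: np.array(densities(*x)) - np.array([nB, nQ, nS, eps])
    return fsolve(res, guess)
\end{verbatim}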
\begin{figure}[htbp]
\includegraphics[scale=0.37]{n_epsilon.eps}
\caption{(Color online) Proper-time evolution of net baryon number density $n_{B}$ (first row), net electric charge density $n_{Q}$ (second row), net strangeness number density $n_{S}$ (third row), and energy density $\epsilon$ (fourth row) for the central cell in central Au$+$Au collisions at 200 GeV (left column), 27 GeV (middle column), and 4.9 GeV (right column) with (solid) and without (dashed) including the finite nuclear thickness.}\label{nB_epsilon}
\end{figure}
Figure~\ref{nB_epsilon} shows the proper-time evolution of the net baryon number density $n_{B}$, net electric charge density $n_{Q}$, net strangeness number density $n_{S}$, and energy density $\epsilon$ for the central cell, defined as the cell within ($|x| < 0.5$ fm, $|y| < 0.5$ fm) and the space-time rapidity range of $|\eta_s| < 0.5$, in central Au$+$Au collisions at three selected beam energies from the AMPT-SM model. At the top RHIC energy of 200 GeV, the results with and without the finite nuclear thickness are almost the same \cite{Lin:2017lcj, Mendenhall:2020fil}. With the decrease of the beam energy, the peak energy and charge densities are reached later due to the longer time that the two nuclei take to cross each other. Therefore, it is important to consider the finite nuclear thickness effect when simulating heavy-ion collisions at low beam energies \cite{Lin:2017lcj, Mendenhall:2020fil, Mendenhall:2021maf}. Note that we show the results with the finite nuclear thickness effect in the rest of this paper, unless stated otherwise. In addition, we see that the net strangeness number density can be negative at low energies in the central cell. This is because of the large baryon density, which leads to most $s$ quarks residing in $\Lambda$ hyperons but most $\bar{s}$ quarks in kaons. Since the quark formation time is inversely proportional to the parent hadron transverse mass in AMPT's string melting, $s$ from $\Lambda$ has a smaller formation time than $\bar{s}$ from K, which produces a negative $n_S$ at early times.
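As a rough illustration of this formation-time ordering (our own numbers, taking hadrons produced at rest so that $E_H=m_{T,H}=m_H$), the formation-time formula $t_f=E_H/m^2_{T,H}$ gives
\begin{equation*}
t_f^{\Lambda\rightarrow s}\approx\frac{\hbar c}{m_{\Lambda}}\approx\frac{0.197~\mathrm{GeV\,fm}}{1.116~\mathrm{GeV}}\approx0.18~\mathrm{fm}/c,
\qquad
t_f^{K\rightarrow \bar{s}}\approx\frac{\hbar c}{m_{K}}\approx\frac{0.197}{0.494}\approx0.40~\mathrm{fm}/c,
\end{equation*}
so the $s$ quarks from $\Lambda$ hyperons indeed appear earlier than the $\bar{s}$ quarks from kaons, yielding $n_S<0$ at early times.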
\begin{figure}[htbp]
\includegraphics[width=1\linewidth]{contours.eps}
\caption{(Color online) Contour plots of the effective temperature from Boltzmann statistics as a function of the $x$ coordinate and space-time rapidity $\eta_{s}$ at different proper times in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV (top row) and 7.7 GeV (bottom row) for the parton matter within $|y|<$0.5 fm.}\label{contours}
\end{figure}
The two-dimensional (2D) distributions of the extracted local temperature from Boltzmann statistics as functions of the coordinate $x$ and space-time rapidity at different proper times in central Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 and 7.7 GeV are shown in Fig.~\ref{contours}. We can see that the highest temperature is reached at the center of the overlap region after the two nuclei overlap completely ($\tau \approx $ 0.2 and 4 fm/$c$ for 200 and 7.7 GeV, respectively). After that moment, the temperature decreases with the evolution of the expanding system.
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{T_mub.eps}
\caption{(Color online) Proper time evolution of (a) baryon chemical potential $\mu_{B}$ and (b) temperature $T$ for the central cell in central Au$+$Au collisions at different energies.}
\label{T_mub}
\end{figure}
The proper-time evolutions of the baryon chemical potential $\mu_{B}$ and temperature $T$ for the central cell in central Au$+$Au collisions at different beam energies from the AMPT-SM model are shown in Figs.~\ref{T_mub}(a) and \ref{T_mub}(b), respectively. We can see that both the baryon chemical potential and the temperature increase with time at first and then decrease with time, which indicates that the collision system is first compressed and heated, and then becomes dilute and cools down due to the expansion. However, the energy dependencies of the baryon chemical potential and the temperature are different. Figure~\ref{T_mub}(b) shows that a higher temperature is reached at a higher collision energy; in contrast, the highest baryon chemical potential is achieved at an intermediate energy of $\sqrt{s_{NN}}$ = 7.7 GeV, as shown in Fig.~\ref{T_mub}(a). In general, the time evolution at lower energies is slower than that at higher energies due to the influence of the finite nuclear thickness.
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{muq_mus.eps}
\caption{(Color online) Proper-time evolution of the chemical potentials of (a) strangeness $\mu_{S}$ and (b) electric charge $\mu_{Q}$ for the central cell in central Au$+$Au collisions at different energies.}
\label{muq_nus}
\end{figure}
The proper-time evolutions of the chemical potentials of strangeness $\mu_{S}$ and electric charge $\mu_{Q}$ for the central cell in central Au$+$Au collisions at different beam energies from the AMPT-SM model are shown in Figs.~\ref{muq_nus}(a) and \ref{muq_nus}(b), respectively. We obtain a positive $\mu_{S}$ but a negative $\mu_{Q}$. The $\mu_{S}$ is seen to be roughly proportional to $\mu_{B}$, i.e., $\mu_S \approx \mu_B/3$, while the magnitude of $\mu_Q$ is very small. We observe that the magnitudes of the two chemical potentials first increase and then decrease with time, following a trend similar to that of $\mu_B$.
\subsection{Trajectories in the QCD phase diagram}
\label{diagram}
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{phase_diagram1.eps}
\caption{(Color online) AMPT results on the average trajectories of the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram. Three cases are compared: (I) $\mu_Q = 0$, $\mu_S = 0$, $m_q = 0$ (open symbols); (II) $\mu_Q \ne 0$, $\mu_S \ne 0$, $m_q = 0$ (half open symbols); (III) $\mu_Q \ne 0$, $\mu_S \ne 0$, $m_q \ne 0$ (filled symbols). The black curve shows the crossover phase boundary with the critical endpoint obtained from the functional renormalization group approach with $N_f = 2+1$ \cite{Fu:2019hdw}. The corresponding lifetime during which each trajectory stays in the QGP phase is also shown.
}
\label{phase_diagram}
\end{figure}
In Fig.~\ref{phase_diagram}, we present the event-averaged evolution trajectory of the central cell of the partonic matter produced in central Au$+$Au collisions at different beam energies, from the moment when the baryon chemical potential reaches its maximum value to the moment when it reaches the crossover curve in the QCD phase diagram of temperature and baryon chemical potential. Note that the crossover phase boundary is obtained from the functional renormalization group (FRG) method with $N_f = 2+1$, which agrees well with the phase boundary from lattice QCD \cite{Fu:2019hdw}. From the filled symbols that represent the full consideration, in which all chemical potentials and the quark masses are included, we find that the partonic stage can last 3.4--4.8 fm/$c$ if the time when the system stays above the phase boundary is counted, which is consistent with the previous AMPT results for mid-central Au$+$Au collisions~\cite{Chen:2009cju}, but longer than the lifetime for the matter averaged over the transverse area from a semi-analytical calculation \cite{Mendenhall:2021maf}. If we take the location of the critical endpoint (CEP) at ($T_{CEP}, \mu_{B_{CEP}}$) = (107, 635) MeV from the FRG calculation, beam energies below 4.9 GeV~\cite{Fu:2019hdw, Andronic:2017pug} seem to be the most promising for reaching the CEP, which could be accessed in fixed-target experiments at RHIC. Note that it has been found that the chemical and kinetic freeze-out parameters extracted from the AMPT model agree with the RHIC experimental measurements~\cite{Wang:2020wvu}.
We further study the influences of $\mu_Q$, $\mu_S$, and the quark current mass $m_q$ on the event-averaged evolution trajectories (see Appendix~\ref{Boltzmann statistics}), as shown by the half open and open symbols in Fig.~\ref{phase_diagram}. We can see that the influence of the quark mass is so small that the filled and half open symbols mostly overlap, because the current quark masses we use here are very small compared with the temperature and baryon chemical potential. However, there is a large difference between the filled or half open symbols and the open symbols, which indicates that $\mu_Q$ and $\mu_S$ are important in driving the evolution of the system.
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{phase_diagram_statistics.eps}
\caption{(Color online) The average trajectory of the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram of temperature versus baryon chemical potential from Boltzmann statistics (filled symbols) and quantum statistics (open symbols).}
\label{phase_diagram_statistics}
\end{figure}
Furthermore, we check whether different statistics (see Appendices~\ref{Boltzmann statistics} and \ref{Quantum statistics}) can result in different trajectories in the QCD phase diagram. We compare the results from Boltzmann statistics (filled symbols) and quantum statistics (open symbols) in Fig.~\ref{phase_diagram_statistics}. We can see that, with the decrease of the collision energy, the difference between the trajectories from the two statistics becomes larger. In general, a higher $\mu_B$ is obtained with quantum statistics than with Boltzmann statistics, since Pauli exclusion begins to play a role as $\mu_B$ increases, an effect that is absent in Boltzmann statistics. Because the AMPT model assumes Boltzmann statistics, the results in the rest of this paper are presented using Boltzmann statistics.
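For reference (a standard statistical-mechanics relation; the full expressions used here are given in the appendices), the quantum-statistics extraction replaces the Boltzmann factor with Fermi-Dirac distributions for quarks and antiquarks, e.g., for the net number density of one species with degeneracy $d$,
\begin{equation*}
n-\bar{n}=\frac{d}{2\pi^{2}}\int_{0}^{\infty}p^{2}\,dp\left[\frac{1}{e^{(E_{p}-\mu)/T}+1}-\frac{1}{e^{(E_{p}+\mu)/T}+1}\right],
\end{equation*}
which, at a fixed net density and temperature, requires a larger $\mu$ than its Boltzmann counterpart once the quark states begin to saturate.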
\begin{figure}[h]
\centering\includegraphics[scale=0.35]{phase_diagram_thickness.eps}
\caption{(Color online) The average trajectory of the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram from Boltzmann statistics with (filled symbols) and without (open symbols) including the finite nuclear thickness.}
\label{phase_diagram_thickness}
\end{figure}
In addition, the finite thickness of nuclei is expected to affect the evolution trajectories in the QCD phase diagram, especially at low energies \cite{Lin:2017lcj, Mendenhall:2020fil, Mendenhall:2021maf}. In Fig.~\ref{phase_diagram_thickness} we compare the average trajectories with and without including the finite thickness for the central cell in central Au$+$Au collisions at different energies in the QCD phase diagram, based on the full consideration with Boltzmann statistics. We do not see any obvious change of the evolution trajectory at the top RHIC energy, but the difference becomes more and more significant with the decrease of the collision energy. At lower energies, the results without the finite-thickness effect start at a much higher temperature and a larger baryon chemical potential. For example, when the finite-thickness effect is considered, the trajectory for 2.7 GeV stays entirely below the phase-transition boundary and thus disappears from the QGP region. Therefore, it is clearly necessary to properly include the finite nuclear thickness effect, especially when simulating heavy-ion collisions at low beam energies.
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{byevent.eps}
\caption{(Color online) AMPT results on event-by-event trajectories of the central cell in central Au$+$Au collisions at different beam energies in the QCD phase diagram.}
\label{byevent}
\end{figure}
We note that the above results come from the average of 10 000 central Au$+$Au events. However, event-by-event fluctuations cannot be neglected, and these fluctuations could affect the search for the CEP in the QCD phase diagram. Figure~\ref{byevent} shows the event-by-event trajectories of central Au$+$Au collisions at different beam energies from the AMPT-SM model. To suppress the effect of volume fluctuations, a multiplicity cut is further applied, in which we divide the total central events into 100 bins by multiplicity and only use the events in one middle bin around the average. Even so, we can see that the fluctuation of the evolution trajectory is still large, especially at high energies, which could be due to larger volume fluctuations at higher energies.
It should be noted that the QGP created in high-energy heavy-ion collisions, which may consist of gluons and quarks in or near chemical and thermal equilibrium, should be governed by nonperturbative QCD interactions, which are missing in our model. Furthermore, the method that we use to extract the temperature and baryon chemical potential only works, in principle, for a noninteracting parton system. Our extraction method assumes that all partons in the cell are in full thermal and chemical equilibrium~\cite{Lin:2014tya}; therefore, the extracted temperature and chemical potentials are effective values if the system is only in partial thermal and/or chemical equilibrium. In addition, we focus on the central space-time rapidity and only study the partonic matter, without the subsequent phase transition and hadronic evolution.
\subsection{Equilibrium or nonequilibrium}
\label{Pressureanisotropy}
In the central cell of central Au$+$Au collisions, due to the cylindrical symmetry around the beam axis, the two transverse pressure components $P_{x}$ and $P_{y}$ are equal. Therefore, the transverse pressure can be defined as $P_{T} = (P_{x}+P_{y})/2$~\cite{Zhang:2008zzk}, while the longitudinal pressure $P_{L}$ is just $P_{z}$. For a system in thermal equilibrium, the pressure must be isotropic, satisfying the relation $P_{T} = P_{L}=P$; otherwise, we define the total pressure as $P = (P_{x}+P_{y}+P_{z})/3$. A pressure anisotropy parameter, $P_{L}/P_{T}$, is therefore defined to describe the degree of pressure anisotropy of the system. The closer the value of $P_{L}/P_{T}$ is to unity, the closer the system is to the state of thermal equilibrium.
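In terms of the $T^{\mu\nu}$ sketch given earlier, the anisotropy parameter is a few lines (our own illustration; \texttt{p4\_cell} and \texttt{V\_cell} are hypothetical inputs for the partons and volume of a cell):
\begin{verbatim}
def pressure_anisotropy(p4_cell, V_cell):
    """P_L / P_T from the diagonal components of T^{mu nu};
    approaches 1 for an isotropic (thermalized) momentum
    distribution."""
    Tmn = stress_tensor(p4_cell, V_cell)  # from the earlier sketch
    PT = 0.5 * (Tmn[1, 1] + Tmn[2, 2])    # transverse pressure
    PL = Tmn[3, 3]                        # longitudinal pressure
    return PL / PT
\end{verbatim}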
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{PLPT.eps}
\caption{(Color online) AMPT results for the time evolution of the pressure anisotropy parameter in the central cell when the cell's temperature and baryon chemical potential are above (filled symbols) and below (dotted curves) the phase boundary in the QCD phase diagram, in central Au$+$Au collisions at different beam energies.}
\label{PLPT}
\end{figure}
Figure~\ref{PLPT} shows how the pressure anisotropy parameter in the central cell evolves with proper time in central Au$+$Au collisions at different beam energies. For Au$+$Au collisions at 200 GeV, we can see that $P_{L}/P_{T}$ keeps increasing, but still cannot reach unity up to 5 fm/$c$. This indicates that, even at the top RHIC energy, the central cell of the system does not actually reach thermal equilibrium by the time it arrives at the phase boundary in the AMPT model, which is consistent with previous results~\cite{Zhang:2008zzk, Lin:2014tya}. At lower energies, $P_{L}/P_{T}$ first increases to a peak, then decreases into a valley, and finally increases gradually due to the finite nuclear thickness. However, none of the systems reaches thermalization during the partonic stage, which is indeed different from the equilibrium evolution assumed in hydrodynamical models.
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{Pressure.eps}
\caption{(Color online) AMPT results on the time evolution of (a) the pressure from the three diagonal components of the energy-momentum tensor ($P_{DC}$; open symbols) and the pressure from the Boltzmann statistical model ($P_{Boltzmann}$; filled symbols) in the central cell in central Au$+$Au collisions at different energies, and (b) the ratio of $P_{DC}$ to $P_{Boltzmann}$.}
\label{Pressure}
\end{figure}
The total pressure can be extracted from the Boltzmann statistical model via:
\begin{eqnarray}
P(T)=\sum_i d_i \int \frac{d^3p}{(2\pi)^3}\frac{p^2}{3E_i(p, T)}f_B(p, T),
\label{p_Boltzmann}
\end{eqnarray}
where $d_i$ is the degeneracy of parton species $i$, $f_B(p, T)$ is the Boltzmann statistical distribution function, and $T$ is the temperature extracted from the Boltzmann statistical model. Figure~\ref{Pressure} compares the pressure from the three diagonal components of the energy-momentum tensor ($P_{DC}$) with that from the Boltzmann statistical model ($P_{Boltzmann}$) in the central cell in central Au$+$Au collisions at different energies. One can find that they differ especially at early times for lower energies, which indicates that the system is furthest from equilibrium there.
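As a limiting check (our own, for a single massless species with an additional fugacity factor $e^{\mu/T}$), Eq.~(\ref{p_Boltzmann}) reduces to the ideal-gas law:
\begin{equation*}
P(T)=\frac{d}{6\pi^{2}}\,e^{\mu/T}\int_{0}^{\infty}p^{3}e^{-p/T}\,dp=\frac{d}{\pi^{2}}\,e^{\mu/T}\,T^{4}=nT=\frac{\epsilon}{3},
\end{equation*}
so deviations of $P_{DC}$ from $P_{Boltzmann}$ directly measure the departure of the parton momenta from a thermal distribution.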
\begin{figure}[htbp]
\centering\includegraphics[scale=0.4]{T_comparison.eps}
\caption{(Color online) AMPT results on the time evolution of the effective temperature extracted from the transverse (dotted curves) and the three (dashed curves) diagonal components of the energy-momentum tensor, and from the Boltzmann statistical model (solid curves), in the central cell in central Au$+$Au collisions at (a) 200 GeV, (b) 27 GeV, (c) 11.5 GeV, and (d) 4.9 GeV.}
\label{T_comparison}
\end{figure}
The effective temperature can be defined locally by the ratio between the average of the diagonal components of the energy-momentum tensor and the density of all particles~\cite{Sorge:1995pw}. The effective temperatures extracted from the diagonal components of the energy-momentum tensor and from the Boltzmann statistical model in the central cell are shown in Fig.~\ref{T_comparison}. One can see that the effective temperatures extracted from the diagonal components of the energy-momentum tensor differ from our extracted temperature, especially at lower energies, although they give consistent trends. This is not only due to the nonequilibrium of the system, but also because our temperature extraction considers the chemical potentials of the conserved charges, especially the baryon chemical potential. In this sense, we should emphasize again that, since the parton systems in Au$+$Au collisions at different energies from the AMPT model are not in complete equilibrium, the thermodynamic properties that we extracted above can only be approximate.
\section{Summary}
\label{summary}
We have studied the space-time evolution of the parton matter produced in central Au$+$Au collisions at different collision energies using the AMPT model with string melting and the finite nuclear thickness effect. The space-time evolutions of the parton density and transverse flow are first presented for different collision energies. Then we extract the effective temperature and chemical potentials of the partons in the central cell based on Boltzmann statistics and quantum statistics. The temperature and baryon chemical potential first increase and then decrease with time, but their dependencies on the collision energy are opposite. By investigating the evolution of the partonic matter created in Au$+$Au collisions from 2.7 to 200 GeV, we obtain their evolution trajectories in the QCD phase diagram. The results indicate that the partonic state in the central cell exists for 3.4--4.8 fm/$c$ over this wide range of energies, and that the trajectory depends on the statistics and on whether the finite nuclear thickness is considered. We observe that the event-by-event trajectory fluctuates widely in the phase diagram. Moreover, the evolution of the pressure anisotropy indicates that only partial thermalization can be achieved when the partonic systems reach the predicted QCD phase boundary. Further studies of the evolution and the thermodynamic properties of the matter in heavy-ion collisions are indispensable for studying the QCD phase structure and the search for the critical point in experiments.
\begin{acknowledgments}
We thank Todd Mendenhall for checking the results in the Appendices. This work is supported in part by the National Natural Science Foundation of China under Grants No. 12147101, No. 11961131011, No. 11890710, No. 11890714, and No. 11835002, the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34030000, and the Guangdong Major Project of Basic and Applied Basic Research under Grant No. 2020B0301030008 (H.-S. W. and G.-L.M.), the National Science Foundation under Grant No. PHY-2012947 (Z.-W.L.), and the National Natural Science Foundation of China under Contract No. 11775041 (W.-j. F.).
\end{acknowledgments}
\section{Introduction}
\label{sec1}
The introduction of agile approaches marks a paradigm shift for the software industry; these approaches have affected development methods by advocating adaptive planning, evolutionary development, early delivery, continual improvement, and responsiveness to change~\cite{abrahamsson2017agile}. They have also influenced how software companies organize their work by shifting the focus from the individual developer, instead highlighting the significance of teams, collaboration, and communication~\cite{highsmith2002agile,cockburn2006agile, lenberg2015behavioral}. As a direct result, in modern software organizations, the team has replaced the individual as the most critical entity.
In addition to having significant effects on how software companies organize and what methods they use, agile approaches have also shaped the companies' cultures and organizational value structures. The agile manifesto~\cite{beck2001manifesto}, which is the founding document for agile approaches, emphasizes the significance of organizational values. Therefore, over the past 15 years, researchers have explored the intricate link between agile methods and organizational values or culture~\footnote{Culture relates strongly to values. It is, however, a broader more all-inclusive term that covers more aspects of organizational life (see Section~\ref{lbl_organizational_culture})}. These studies have often relied on an assumption of compatibility or fit~\cite{iivari2011relationship}. Using the cultural dimensions identified by Hofstede (clan, democratic, hierarchical, and disciplined)~\cite{hofstede2001organizations}, Siakas and Siakas~\cite{siakas2007agile} identified the democratic culture dimension as the most suitable for an agile approach. Strode et al.~\cite{strode2009impact} investigated the relationship between the competing values framework (CVF)~\cite{denison1991organizational} and the agile XP method. Data extracted from nine projects indicated most consistently significant associations between XP and the collaborate dimension of CVF.
Even if studies have repeatedly recognized that successful agile adoption entails reforms to the existing value foundation~\cite{chow2008survey, chandra2010identifying, tolfo2011agile, hamid2015factors, dikert2016challenges}, such conclusions are not uncontested. For example, Robinson and Sharp~\cite{robinson2005organisational} argue that, due to their innate flexibility, agile approaches can thrive in a variety of cultural settings, while Siakas and Siakas~\cite{siakas2007agile} suggest that organizational agility should be regarded as a culture of its own. We note, however, that outside of the framing of agile transitions, software engineering studies exploring organizational values have been scarce.
Given that social science research has repeatedly recognized the critical role that values play in various facets of organizational life~\cite{belias2014organizational, schneider2013organizational, hartnell2011organizational, leidner2006review}, we believe that broader and more extensive insights on organizational values are likely to be beneficial for software engineering. A social science strategy that is related to organizational values and used to describe and explain organizational behavior is alignment theory~\cite{quiros2009organizational}. It draws on the assumption that, in order to achieve effectiveness, the organizational entities must be directed and structured so that they are suited to each other~\cite{nadler1988strategic}. Research has, for example, shown that values alignment fosters collaboration and could be a proactive approach to conflict management~\cite{lynn2007literature}.
One of our previous studies also demonstrated the potential usefulness of value alignment in the software engineering context~\cite{lenberg2018used}. The findings indicated that discrepancies in shared values between organizational groups adversely affected performance. In the present deductive study, we aimed to test these initial findings further and thereby to delineate and add support to a between-group value misalignment theory. Our primary objective was, therefore, to \emph{examine how discrepancies in values between organizational groups affect software companies' performance}. To the best of our knowledge, the effects of between-group value misalignment have not previously been explored within the software engineering context.
In addition to the relatively narrow primary objective, we also aimed to extend the knowledge of organizational values more broadly using an exploratory research approach. Accordingly, our secondary objective was to \emph{gain general insights into organizational values and how they affect behaviors and performance in software companies}.
To meet these objectives, we selected a mixed method research design~\cite{creswell2013research} (see Figure~\ref{fig_method_overview}). First, we collected qualitative data by interviewing 14 (\textit{N} = 14) employees working in four different software engineering organizations. We strove to gain insights into the effects of between-group value misalignment and into which organizational performance factors were affected. The data were processed using thematic analysis~\cite{braun2006using}. Then, to statistically test whether value misalignment related to the performance factors that we had identified in the qualitative analysis, we surveyed seven organizations (\textit{N} = 184). In the questionnaire, we utilized the CVF~\cite{denison1991organizational} to estimate organizational values and as the basis for calculating the between-group value misalignment.
\begin{figure}
\centering
\includegraphics[width=0.90\textwidth]{Fig/6StudyOverview.png}
\caption{An overview of the study.}
\label{fig_method_overview}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{Fig/3CVF_Overview.png}
\caption{An overview of the competing values framework~\cite{denison1991organizational}.}
\label{fig_cvf_figure}
\end{figure}
Taken together, we argue that our research contributes by demonstrating the effects of value misalignment between groups in software development. That is important for software engineering organizations since they, almost exclusively, organize their work in groups or teams. Our efforts also contribute by providing more general insights on values in software engineering organizations. This is relevant considering that such research has had a limited focus, primarily exploring the fit between specific values and agile approaches~\cite{iivari2011relationship}.
In the next section (Section~\ref{sec_back}), we give further background information on organizational values and other similar concepts and provide an overview of organizational value research in software engineering. We then describe the methods, analyses, and results of the two parts of our mixed research design study (see Sections~\ref{section_study_a} and \ref{section_study_b}). Finally, the aggregated findings are discussed in Section~\ref{section_discussion} and concluded in Section~\ref{section_conclusion}.
\section{Background}
\label{sec_back}
In the following sections, we provide information that we deemed relevant to the study. First, we define the construct organizational values and frame our research in a contextual and historical setting by providing an overview of the research on organization values in software engineering. We also briefly describe other similar constructs and, finally, summarize the literature on organizational alignment.
\subsection{Organizational values}
The initial research on values focused on the personal values of individuals~\cite{malbasic2015balance}. Personal values define preferences and thus reflect what is essential to individuals~\cite{lynn2007literature, hultman2002balancing}. In an organization, individual values guide the employees' private decisions and actions, while \emph{organizational values} provide norms that specify how they should behave and how organizational resources should be allocated~\cite{edwards2009value}. According to a review of \emph{organizational values} by Stavru~\cite{stavru2012organizational}, the most studied constructs concerning \emph{organizational values} are organizational commitment, employee retention, and well-being.
The literature shows multiple definitions of the \emph{organizational value} construct. Virtually all of them, however, acknowledge that the construct operates as a guide to the decision-making process and that it is used to evaluate individual and organizational actions and states~\cite{stavru2013we}. In this study, we use a definition adopted from Enz~\cite{enz1988role} that describes \emph{organizational values} as \emph{beliefs that a group of persons express by preference when identifying desirable courses of action and goals}.
The research into \emph{organizational values} in software engineering has mostly been conducted within the framing of the \emph{organizational culture} construct. For the past 15 years, the focus of the research has primarily been to explore the relationship between agile approaches and culture, where the studies have often relied on an assumption of compatibility or fit~\cite{iivari2011relationship}. For example, using the cultural dimensions identified by Hofstede (clan, democratic, hierarchical and disciplined)~\cite{hofstede2001organizations}, Siakas and Siakas~\cite{siakas2007agile} identify the democratic culture type as the most suitable for an agile approach. Strode et al.~\cite{strode2009impact} investigated the relationship between CVF~\cite{denison1991organizational} and the agile XP method. Data extracted from nine projects indicated most consistently significant associations between XP and the collaborate culture. The CVF was also used as a basis in work by Iivari and Iivari~\cite{iivari2011relationship}. The authors, somewhat in conflict with the study by Strode, suggest that all culture CVF orientations (except control) favor agile methods.
Moreover, Tolfo et al.~\cite{tolfo2011agile} explored a three-level view of organizational culture as a theoretical framework to allow early detection of problems. The authors note that many facilitators of, or obstacles to, the adoption of an agile approach can be hidden in the lower, latent levels of the culture. Although these studies emphasize the importance of recognizing the complicated interplay between agile methods and organizational culture, they do not provide hands-on guidance as to how to introduce and adopt agile methods into the organizational culture.
The majority of the studies have recognized that successful agile adoption entails reforms to the existing value foundation~\cite{chow2008survey, chandra2010identifying, tolfo2011agile, hamid2015factors, dikert2016challenges}. Still, the research is not unanimous. Robinson and Sharp~\cite{robinson2005organisational} suggest that due to their innate flexibility, agile approaches can thrive in a variety of cultural settings, while Siakas and Siakas~\cite{siakas2007agile} propose that organizational agility should be regarded as a culture of its own.
Even if agile approaches have framed the primary focus of organizational values research, there are exceptions. A study by Shih and Huang~\cite{shih2010exploring} explored the relationship between software process improvement (SPI) and organizational culture. Using the CVF, they conclude that an SPI deployment was made possible primarily by a control culture that emphasizes procedure, order, and stability. Furthermore, two studies by Mathew~\cite{mathew2007relationship} and Lavallee and Robillard~\cite{lavallee2015good} using qualitative research approaches provide fascinating, yet initial, results that strengthen the link between organizational culture and software quality. A literature review by Purna~\cite{purna2011soft} also adds support to this relationship.
\subsubsection{Related constructs}
According to Schneider et al.~\cite{schneider2013organizational}, the complexity of organizational behavior calls for various constructs used for description and analysis of organizational life. In the literature, the following three constructs relate to and partially overlap with \emph{organizational values}: \emph{organizational culture}, \emph{organizational climate}, and \emph{organizational identity}. These constructs are detailed in the following sections.
\paragraph{Organizational culture}
\label{lbl_organizational_culture}
The \emph{organizational culture} construct has its conceptual and methodological basis in anthropology~\cite{schneider2013organizational}. Its natural unit of analysis is thus the collective, whereas differences among individual employees tend to be of less interest. Compared to \emph{organizational values}, it is a broader, more all-inclusive, term that may cover almost every aspect of organizational life, for example, basic assumptions and beliefs, values, models of behavior, rituals, practices, symbols, heroes, artifacts, and technology~\cite{iivari2011relationship, hartnell2011organizational, schneider2013organizational}.
There seems to be agreement among researchers that the \emph{organizational culture} construct has several levels~\cite{hofstede1990measuring, schein2010organizational, schneider2013organizational}. One of the most prominent and well-known models used to capture these levels is the culture framework defined by Schein~\cite{schein2010organizational}. His model consists of three levels: artifacts, espoused beliefs and values, and underlying assumptions (see Figure~\ref{fig_schein}).
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{Fig/3Schein_Overview.png}
\caption{An overview, adopted from Schein~\cite{schein2010organizational}, of the three layers in his model.}
\label{fig_schein}
\end{figure}
Artifacts represent phenomena that are visible, including, for example, rituals, working routines, language, myths, ways of dressing, and the organization of workspace. These are the most readily accessible to outsiders, but also the most ambiguous concerning the underlying meaning they may represent. Thus, although many artifacts may look the same across organizations, the meaning(s) ascribed to them may be entirely different~\cite{schneider2013organizational}.
The intermediate level consists of espoused values, or the values that are reported by management as being core to the organization. These may or may not be reflected in the employees' actual organizational behavior. According to Schein, the leadership should have strong influential skills in order to make such values accepted by employees. The values allow organizational members to interpret signals, events, and issues that guide behavior.
The final level (underlying assumptions) indicates why employees go about their day-to-day work lives as they do; these assumptions are frequently so ingrained that they cannot easily be articulated, requiring in-depth interviewing to illuminate them. Underlying assumptions develop over time as members of a group create strategies to face problems.
According to Schneider et al.~\cite{schneider2013organizational}, researchers have conceptualized the culture construct in two ways: by focusing on culture either as something an organization \emph{has} or as something an organization \emph{is}. The research done from the former perspective is usually comparative and uses quantitative surveys to uncover the attributes that differentiate more efficient organizations from less efficient ones. As for the latter approach, the researchers' goal is more exploratory, aiming at uncovering fundamental assumptions and root metaphors. Such research approaches tend to be qualitative and inductive in order to report how insiders experience their organizations.
\paragraph{Organizational climate}
A closely related construct to \emph{organizational culture} is \emph{organizational climate}. The two constructs have been used interchangeably in research studies, and there is no definite distinction between the two~\cite{mclean2005organizational, zohar2012organizational}. While \emph{organizational culture} is, as stated above, the shared beliefs and assumptions about the organization's expectations and values, \emph{organizational climate} is the shared perceptions and attitudes about the organization. It may be defined as the meaning attached to the policies, practices, and procedures employees experience and the behaviors they observe being rewarded~\cite{schneider2013organizational, ostroff2003organizational}.
Schein~\cite{schein2010organizational} suggests that climate provides behavioral evidence for the culture, such that those behaviors form the bases for employees' conclusions about the values and beliefs that constitute the organizational culture.
It is also worth noting that while \emph{organizational culture} has naturally been considered a collective construct, \emph{organizational climate} research has struggled with a unit of analysis issue (i.e. whether climate is an individual experience construct or an organizational attribute). Today, the vast majority of the climate research conducted is collective, but there are still a few exceptions.
\paragraph{Organizational identity}
The construct of \emph{organizational identity} was defined by Albert and Whetten in 1985 and later clarified in 2006~\cite{albert1985organizational, whetten2006albert}. They define \emph{organizational identity} as a set of statements that employees believe are central, distinctive, and enduring to their organization. Central means that the statements should include features that are critical to the organization, while distinctive emphasizes that the identity statements should be able to distinguish the organization from others. Finally, `enduring' indicates that the identity statements are stable in the organization over time.
According to this definition, an identity statement is collectively and cognitively held by organization members to answer questions such as `Who are we?' or `What do we want to be?'. Research on \emph{organizational identity} often attempts to apply sociological and psychological concepts and theories about identity to organizations.
The relationship between identity and culture has long been debated among academics, and the two concepts clearly overlap. Drawing upon Mead's framework~\cite{mead1934mind}, Hatch and Schultz~\cite{hatch2002dynamics} propose a dynamic model to illustrate the relationship between organizational identity, culture, and image. According to their model, employees express their understanding of their organizational culture through identity, which affects the perception of others outside the organization about the organization. The outsiders' perception, or organizational image, in turn, affects the organizational identity, which again is reflected in the central elements of the organizational culture~\cite{mujib2017organizational}. According to this model, \emph{organizational identity} has (in contrast to \emph{organizational values}, \emph{organizational culture}, and \emph{organizational climate}) a component related to others' perceptions of an organization.
\subsection{Organizational alignment}
Alignment theory, a relatively recent approach used to explain organizational life, addresses the need for alignment among the cultural, structural, and strategic components of an organization~\cite{quiros2009organizational}. The approach is based on the assumption that, in order to achieve effectiveness, all of the organizational entities must be directed and structured so that they are suited to each other. A seminal framework designed for studying alignment was compiled by two pioneers of the field, Nadler and Tushman~\cite{nadler1988strategic}.
The literature distinguishes between two types of organizational alignment: vertical and horizontal. Vertical alignment refers to the congruence of strategies, goals, and objectives between various hierarchical levels. Horizontal (also known as lateral) alignment refers to the coordination of efforts across an organization. It is related to the consistency of decisions across entities, so that activities across, for example, marketing, operations, HR, and other functions support one another~\cite{kathuria2007organizational}.
The fact that effective companies tend to have shared and aligned values has sizable empirical support~\cite{detert2000framework,denison1991organizational, burnes2011success, stavru2013we}. Research has shown that values alignment fosters collaboration and could be a proactive approach to conflict management~\cite{lynn2007literature}. However, a majority of the studies on alignment have been related to the alignment (or congruence) of values between the organization and individual employees, also referred to as person-to-organization fit~\cite{hultman2002balancing, burnes2011success, edwards2009value, stavru2013we}. Clear~\cite{clear2010exploring} has also conducted initial research into cultural fit and virtual teams. In the paper, the author outlined a model emphasizing that the cultural construct is complex and includes several layers that organizations need to consider.
To the best of our knowledge, only a few studies exist that have explored the alignment of values between groups in a software engineering context. Huang et al.~\cite{huang2003dangerous} investigated the link between inconsistencies in organizational subcultures and the introduction of component-based software development methods. They stressed that misalignment of values among subcultures hindered the information sharing and collaboration needed to integrate a method effectively. Furthermore, in 2012, Stavru outlined a study designed to explore the relationship between organizational values and the deployment of agile methods~\cite{stavru2012organizational}. However, the results of that study (if it was conducted) are yet to be published.
\section{Part I - Exploring the effects of value misalignment}
The purpose of this study was twofold. First, we aimed to broadly explore organizational values in software companies to attain a more informed understanding of the construct. Second, we strove to gain specific in-depth insights into between-group value misalignment in software organizations and how it affects behaviors and performance.
\label{section_study_a}
\subsection{Method}
Since we sought to explore organizational values and gain in-depth insights into value misalignment, we undertook a qualitative study in which we collected data using interviews. The following sections present an overview of the participating companies, our sample, the data collection process, and our analysis method.
\subsubsection{Company overview}
\label{company_overview}
We collected industry data from seven departments in six companies. For simplicity, we refer to the departments as organizations in this study. The companies were international (employees in multiple countries) and large (>3000 employees). A brief description of the organizations is found in Table~\ref{table:companies}. It presents the number of employees, the number of interviewees in part I of this study, the number of respondents and teams in part II (the survey), a description of the department, and, finally, which agile approach each was using.
All of the participating organizations claimed to use an agile development approach. Drawing on the guidelines from Gren, Torkar, and Feldt~\cite{gren2015prospects} and using the knowledge we gained through the interviews (see Section~\ref{section_study_a}), we deemed that the organizations were all on level two or three of the Agile Adoption Framework developed by Sidky~\cite{sidky2007disciplined}.
We selected the organizations using convenience sampling by contacting 12 organizations with which we had been in contact previously. Of these, four agreed to participate in the interviews and the surveys, three were only willing to conduct the surveys, three did not respond, and two declined due to high workload. The seven participating organizations were all located in Sweden.
\begin{table}
\footnotesize
\setlength\extrarowheight{2pt}
\centering
\begin{tabular}{ C{.03\textwidth} C{.07\textwidth} C{.07\textwidth} C{.07\textwidth} L{.5\textwidth} C{.07\textwidth} } \hline
Org. Id. & No. of employees & No. of interviewees & No. of respondents & Description & Agile framework \\ \hline
A & 800 & 4 & 32 (5) & A department within a large engineering company (the same company as department C). The department had employees in Sweden and in the United States. Among the 800 employees, roughly 120 were software engineers developing both low-level components and high-level applications used in the department's products. & Scrum \\
B & 1000 & 0 & 35 (4) & An engineering company that had existed for more than 50 years. It manufactured complete products, not only software. The managers were thus not only responsible for software engineers. They were also responsible for units that designed, produced, and developed mechanical constructions. The company had roughly 80 software engineers organized in 12 teams, developing low-level software components. & Scrum \\
C & 120 & 3 & 22 (4) & A department within the same company as A. However, the two departments were separated both geographically and hierarchically. The common denominator was the CEO of the company. The department, which had almost 100 software engineers, developed large customized business-to-business systems that included both hardware and software. & SAFe \\
D & 160 & 3 & 31 (4) & An in-house software consultant company. The department that participated in our study had approximately 160 software engineers working in roughly 40 teams. As a consultant company, they developed both low-level components and high-level applications. & Scaled scrum \\
E & 150 & 4 & 23 (4) & A department with 50 software engineers within a large system development company that developed low-level software components used by the other departments in the company. & SAFe \\
F & 150 & 0 & 19 (3) & A department that specialized in developing advanced software solutions used by a large system development company. All employees were software engineers. & SAFe \\
G & 100 & 0 & 22 (3) & A small department within a large IT consultant company that had roughly 1000 software engineers and both in-house software teams and regular consultants. The participating teams were in-house consultants. & Scrum \\ \hline
\end{tabular}
\caption{An overview of the seven departments that participated in the study.}
\label{table:companies}
\end{table}
\subsubsection{Sample}
The sample population of this sub-study was employees in the four organizations (i.e. A, C, D, and E) described in Table~\ref{table:companies}. In total, 14 employees (four women and ten men, aged from 29 to 60 years with an average of 43 years) participated. All participants had worked in their respective organizations for more than five years and had held their current position for at least two years.
As can be seen in Figure~\ref{fig:stda_interviewees}, in two of the organizations, four employees participated, and in the other two organizations, three employees participated. In each organization, the participating employees had different hierarchical positions: one software engineer (member of an agile team), one section manager, one department manager, and, for two of the included organizations, one business unit manager.
The participants from the respective organizations were hierarchically in line, which constrained the selection process. The upper management provided a list of managers that reported directly to them. From that list, we randomly selected three managers whom we contacted by email. We chose to include the first manager that responded. The selection of the other participants followed the same process. Roughly 50\% of the employees we contacted by email replied, and all of these were willing to participate.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{Fig/4Participants.png}
\caption{An overview of the interviewees.}
\label{fig:stda_interviewees}
\end{figure}
\subsubsection{Materials}
\label{lbl_materials}
We used semi-structured interviews since this format allows the interviewer to ask follow-up questions to gain in-depth knowledge and explore concepts ad hoc through dialog with the interviewee~\cite{smith2007qualitative, de2001research}. We used the following primary questions as a basis for the interviews:
\begin{enumerate}
\item[i] Could you please describe your job at [company]?
\item[ii] Could you describe a satisfactory working day for you?
\item[iii] What constitutes prosperous behavior in your organization?
\item[iv] What behavior harms your organization?
\end{enumerate}
The primary questions were designed to expose the organizational behaviors that the interviewees and their respective organizations found acceptable and prosperous and vice versa. However, these questions were not directly related to the purpose of the interviews (i.e. to gain insights into organizational values and value misalignment). The researcher instead used follow-up questions to steer the conversations toward the topics of primary interest to this study. For example, if the interviewee mentioned that there were trust issues between organizational units, the researcher asked questions to uncover the underlying causes (e.g. `Why don't you trust the members in Unit X?' and `How does the lack of trust affect your work?'). We chose this method to try to ensure that the dialogue was grounded in real experiences and actual opinions. The first two questions related to the behavior of the participants, while the last two questions focused on the participants' experiences of behavior in the organization as a whole.
\subsubsection{Procedure}
The interviews were conducted on the respective organizations' premises during a period of six weeks from October to December of 2017. The sessions were performed face-to-face by a single researcher in a separate conference room without the risk of disturbance by others. The sessions were held in Swedish and lasted, on average, 45 minutes. All sessions were audio recorded.
The interviews began with the interviewer informing the interviewee that participation was voluntary, and that he or she could end the interview at any time. The interviewee was told that she or he did not represent any team, group, unit, or organization and that instead the interviewer's primary interest was the interviewee's true opinions and experiences. Moreover, the interviewee was informed that his or her interview would be confidential and that the raw and analyzed data would not be distributed to anyone. The interviewer also presented the objectives of the study and provided some background information to the research area and asked the interviewee if he or she had any questions regarding the interview procedure or the background materials.
The interviewer then began the formal interview based on the questions outlined in Section~\ref{lbl_materials}. At the end of the interview, when the interviewer had no more follow-up questions, the interviewees were asked if they had any final comments or reflections and if they felt that their true opinions had surfaced.
To verify the procedure and the interview questions, we conducted one pilot interview. Based on its results, we saw no reason to alter the procedure or questions. The pilot interview was, however, not included in the analysis.
\subsubsection{Analysis}
To process the data, we chose to use thematic analysis based on the guidelines outlined by Braun and Clarke~\cite{braun2006using}. Thematic analysis is considered a flexible method that can be used for reporting experiences and is a method that we, the authors, have previously used~\cite{lenberg2015human}. A single author (i.e. the first author) conducted the analysis. The result of each phase (described below) was, however, discussed with the co-authors.
In the analysis, which was based on realism underpinnings~\cite{scotland2012exploring}, we did not aim to provide a rich thematic description of the entire data. Our ambition was, instead, to provide a detailed and nuanced account of a group of themes related to the dual aim of this sub-study. We thus limited our themes to relate to organizational values in software companies and, more specifically, to the causes and consequences of value misalignment.
Our ambition was to create insights that go further than the participants' own understanding. To accomplish that, we chose a latent analysis depth, meaning that we tried to go beyond the semantic content of the data and examine the underlying ideas and assumptions that shape it~\cite{braun2006using}. Also, we used an inductive, data-driven approach (i.e. we drew theoretical and general conclusions from the data).
The analysis included the following phases:
\subparagraph{Data familiarization} We transcribed the interviews, after which we read and reread them to obtain a general understanding and idea of the content. This process allows any new information obtained to affect the interpretation of the general content~\cite{kvale2014kvalitativa}. We analyzed the data organization by organization, starting with the top manager. We hoped that this strategy would facilitate the comparison of individual experiences and opinions as well as the comparison between organizations.
\subparagraph{Coding} We then identified initial codes from the transcripts. A code identifies a feature of the data that appears of interest to the aim of the study (i.e. it relates to organizational values or value misalignment). The initial codes were then grouped in a thematic map~\cite{cruzes2011recommended}.
\subparagraph{Comparison sorting} As a pre-step to the theme identification phase, we roughly sorted the codes based on differences of opinion. This step created pairs of code clusters, where one cluster's codes supported one viewpoint while the other cluster's codes supported the opposing viewpoint.
\subparagraph{Theme identification} Next, we categorized the codes into sub-themes and themes, and analyzed the relationships between them~\cite{smith2015qualitative} using the thematic map as well as surrounding data. This step is a reductive abstracting process, aimed at finding an integrated, meaningful conceptual pattern overlaying the data~\cite{hellstrom2001affecting}.
\subparagraph{Theme defining and naming} In the final step, we reread the data and assigned citations that illustrated the final themes. The citations were primarily selected based on their relevance to the theme; however, we also wished for the citations to cover a significant part of the collected data.
\subsection{Results}
Our analysis resulted in three themes. These are, together with their respective sub-themes, presented in Table~\ref{table_themes}. The first theme, labeled \emph{the dark side of agile}, relates to the adverse effects caused by the value structure that, according to the interviewees, tends to follow the introduction of the agile approaches. The second theme, labeled \emph{triggers and winds of change}, summarizes the main factors that will form the value structure in future software organizations. The third and final theme, labeled \emph{value misalignment}, relates to the causes and effects of alignment, and misalignment, between groups in software engineering organizations.
\begin{table}
\footnotesize
\setlength\extrarowheight{2pt}
\centering
\begin{tabular}{ L{.2\textwidth} L{.3\textwidth} L{.4\textwidth}} \hline
Theme & Sub-theme & Summary \\
\hline
\multirow{3}{*}{1. The dark side of agile} & 1.1 Political correctness &
\multirow{3}{.4\textwidth}{The theme relates to the potential adverse effects caused by the change in value structure that follow the introduction of the agile approaches.} \\
& 1.2 One ambiguous method to rule them all & \\
& 1.3 Agile as trademark & \\
\multirow{3}{*}{2. Triggers and winds of change} & 2.1 Cultural awareness &
\multirow{3}{.4\textwidth}{The theme summarizes the main factors that will form the value structure in future software organizations.} \\
& 2.2 Scaling agile & \\
& 2.3 Loyalty shift & \\
\multirow{3}{*}{3. Value misalignment} & 3.1 Objects of discrepancy &
\multirow{3}{.4\textwidth}{The theme relates to the causes and effects of value misalignment between groups in software engineering organizations.} \\
& 3.2 Consequences & \\
& 3.3 Causes & \\
\hline
\end{tabular}
\caption{Identified themes and sub-themes.}
\label{table_themes}
\end{table}
\subsubsection{The dark side of agile}
During the interviews, all of the participants mentioned the significant influence the agile transition had on the organizational values in software companies. Still, the comments they made were not solely positive. Several managers were of the opinion that the agile movement had grown too powerful, which, according to them, had unfavorable effects on organizational life.
\paragraph{Political correctness}
A number of participants stressed that the agile approaches introduce several drawbacks that the organizations were reluctant to discuss. These participants argued that the agile community, at its worst, promotes a cult-like behavior in which it was prohibited to question agile superiority. The top manager in Organization C said, somewhat despondently, that ``It has become difficult to challenge or even have an open discussion about the agile usefulness. If you don't embrace agile unconditionally, you're considered a dinosaur -- a dying organization.''
As an example, the majority of the interviewees acknowledged that delegating authority and responsibility increases the employees' engagement. Still, a few managers emphasized that delegating responsibility comes at a cost. In traditional organizations, if a project deviated from the initial plan, the software engineers could often be shielded by their managers, who had the overall responsibility, from the reprimands that usually followed. The managers thus had the role of identifying and compensating for ideas that did not work and, in a way, acted as providers of structure in areas that lacked clarity. Thereby, the managers could carry some of the burden of their employees' anxiety.
Agile organizations tend to lack the safety net that is built into hierarchical organizations. If not managed correctly, this can add to the pressure and stress of the software engineers. The second line manager in Organization D stated that ``Teams or team members can become too committed. If something goes wrong, they feel personally responsible. They put immense pressure on themselves, which can lead to stress and burn-out.'' Thus, when creating autonomous and self-managing teams, the organization should add structure in the areas that were previously handled by the managers by default.
\paragraph{One ambiguous method to rule them all}
According to nearly all of the participants, agile was the only game in town when it came to software development methodology. Still, when the participants attempted to explain the core meaning of agile, their interpretations were quite varied. They attributed different meanings to the construct and, also, to what signified an agile organization. Several participants argued that the construct was too ambiguous, too open to interpretation, and too high-level, meaning that various parts of the organization defined being agile differently; this was a source of confusion and misunderstandings. In Organization A, the differences in interpretation of the agile construct had led to so much confusion and organizational discomfort that the management had chosen to ban the use of the word internally. The software engineer said that ``To facilitate communication, we talk about a team-based organization instead of an agile organization.''
\paragraph{Agile as trademark}
Some managers reported that, when hiring developers, they regularly exaggerated their organization's agile maturity level during interviews. Being agile had become a necessary trademark, a certificate, that indicated the lowest acceptable standard for software engineers. The relationship between employee and manager was, therefore, not off to a promising start, since the employee realized soon after arrival that the company's methods did not meet the promised expectations. As a consequence, the psychological contract between the hiring manager and the new employee was breached, which, naturally, had an adverse effect on organizational trust and set an unfavorable tone for the future relationship between employing managers and employees.
\subsubsection{Triggers and winds of change}
As we have previously reported, agile approaches have significantly influenced the value structure of software engineering companies. However, during our interviews, several other factors surfaced that, according to the participants, are likely to considerably influence the values of software organizations in the future.
\paragraph{Cultural awareness}
A majority of the participants maintained that employees or managers rarely discussed the values that should govern behavior. Although the managers recognized the importance of shared values, they unwittingly prioritized and focused on more concrete issues, such as improving processes, methods, techniques, and tools. One of the rare occasions on which the employees sat down to discuss values was initiated by the human resources department. The participants' opinions of such events, however, varied, and several participants indicated that the outcome (i.e. organizational core values) was too general and all-inclusive. These core values had, therefore, a minimal influence on organizational behavior.
\begin{quote}
\emph{Identifying the core value could be a good thing. However, defining three words that should encompass the culture of several departments, where more than 10 000 employees work, will, without a doubt, result in them being so ubiquitous and vague that they, in the end, mean absolutely nothing.} (E2)
\end{quote}
Some participants explained that the purpose of the core values was not to alter organizational behavior, but rather to create a public profile that the company could exploit when communicating externally (i.e. a part of their organizational identity). There were, however, signals of increasing cultural awareness. Almost all managers reported that culture and values were climbing increasingly higher on the management teams' agendas and that management's attention to these concerns is significantly higher nowadays than it was 15 years ago. The participants felt that this growing focus would undoubtedly benefit the software industry as a whole.
\paragraph{Scaling agile}
The introduction of the agile approaches emphasized teamwork, collaboration, and cooperation, and all of the participating companies in this study organized their work in teams. As the complexity and size of the software increased, many of the companies' products became too complicated for one team to manage. Therefore, inter-team collaboration had become increasingly important.
\begin{quote}
\emph{Before, we needed to get individual software engineers to collaborate. That was a challenge in itself. Now, we'll have to make groups of developers collaborate. That's much, much more challenging.} (E2)
\end{quote}
Three of the four organizations had started to implement a method for scaling agile. Of these, two had chosen to implement SAFe (Scaled Agile Framework), while one organization had purposely chosen to `reinvent the wheel.' The top manager in this organization argued that to facilitate multi-team collaboration, an organization must have common organizational values and a strong identity. He argued that the process of creating a tailored, scaled agile method itself strengthened shared values and formed the organization's identity. In his experience, without a strong identity and value structure, the system would be sub-optimized: teams would be effective on their own but unable to collaborate.
\paragraph{Loyalty shift}
A few managers stated that the loyalties of the software engineers had shifted over the past 10 years. Previously, the developers expressed loyalty towards the company or the company's products. This loyalty had decreased, and the reduction was, according to the managers, more apparent among software engineers than among other types of engineers. They felt that a significant contributing factor was the increasingly influential role of teams in software-engineering organizations. The teams' cohesion had a notable effect on developers, who expressed greater loyalty to the teams' working processes than to the organization or its products.
Another development related to the job market that, according to several participants, has affected the behaviors and value structure of the large software companies is the growing shortage of software engineers. The more skilled developers have learned to exploit the market and maximize their profits by starting a consultant company of their own or by changing jobs frequently. A few managers pointed out that this development was somewhat incompatible with the trend of agile. The section manager in Organization D felt that ``It is becoming increasingly hard to keep the teams constant when the employee turnover rate increases.''
\subsubsection{Value misalignment}
The alignment of preferred organizational behavior differed significantly among the included organizations. In Organizations D and E, the narratives the interviewees used to portray their respective organizations overlapped; they had roughly the same view of what challenges the companies faced, what behaviors were prosperous, and the purpose and identity of their organizations. By contrast, the participants from Organizations A and C had a more fragmented view and their stories did not overlap to the same extent, instead providing an incoherent description of their respective organizations.
\paragraph{Objects of discrepancy}
All of the participants in Organizations D and E repeatedly expressed the opinion that autonomy and self-organization were desirable organizational traits, since they increased the software engineers' motivation. In Organizations A and C, the situation was more complicated, and the interviewed employees' beliefs were not as congruent. There was not, however, a clear-cut distinction between managers and developers. In both organizations, the interviewed developers' and section managers' values were aligned but differed from those of the upper management (i.e. department and business unit managers). Although the upper managers acknowledged the trend towards increased autonomy and self-management, they seemed to hold conflicting feelings and from time to time argued against it during their interviews. Their statements of doubt gradually became more apparent during the discussion, as shown in a statement by C3: ``I think it is important that the right person makes the decisions. Sometimes it is evident that the engineers do not grasp the whole situation and can therefore not be expected to make a sound decision. They just don't know all of the relevant facts.''
The differences in opinion were also evident when discussing organizational role models. The section manager in Organization C was inspired by ``the new generation of companies'' such as Google, Spotify, and Facebook, while the department manager stated that ``I am tired of employees comparing us to Google -- we are nothing like them.'' The department manager argued that it was unreasonable to compare their organization to Google; since Google produces software only, they can hire almost anyone they wish, and they do not have to battle an organizational culture that has been built up over 50 years. Drawing on this reasoning, the department manager felt that it was far from evident that methods that work well at Google would automatically function adequately in their organization as well.
Moreover, in the organizations with high misalignment (i.e. A and C), there were indications that the groups were not in agreement on what timeframe should be used when making decisions. As an example, the developer in Organization C said that their organization frequently needed to ``live to fight another day,'' meaning that they were forced to make decisions within a short timeframe; in practice, this meant lowering the software quality by ignoring reusability and skipping automatic unit tests. The developers argued that optimizing for the near future is, at least to some degree, at odds with the ethical codes and the professional identity of the software engineering profession. During their education, the developers are taught to build for the future and to develop components that can withstand future trials. Continually adding to the technical debt of a system thus made the software engineers uncomfortable, and reduced their engagement and motivation.
\paragraph{Consequences}
\label{lbl_consequences}
Our analysis indicated that between-group value misalignment affected the four performance factors of organizational effectiveness, conflict levels, between-group trust, and employee job satisfaction. The discrepancy of values within Organizations A and C led to tensions and conflicts between groups. Several participants suggested that in a misaligned organization, the employees did not fully understand what behaviors were expected of them when collaborating with employees from groups other than their own. This raised feelings of insecurity and adversely affected employees' job satisfaction.
In addition to not knowing how to act themselves, value misalignment led to the employees being unsure of how other groups would react and behave, which eroded the trust between organizational groups. This created an unpredictable working environment in which the employees were reluctant to make decisions themselves, instead frequently delegating decisions hierarchically upwards, most often to their managers. That reduced the organizational efficiency by delaying or in some cases halting the development process. The interviewed developer in Organization D explained that ``Often you cannot get a decision on a matter right away. So in order to not waste any time, one has to keep on working as if a decision has already been made. To do that with some confidence, one needs to be able to predict the decision in advance.''
Moreover, there were indications that the employees in Organizations A and C were more inclined to focus on maximizing the performance of their team than that of the organization as a whole. As an example, there were notable differences in how the organizations described their goals. The participants from Organizations D and E highlighted the organizational goals and how they, or their unit, could contribute to them. The employees of A and C, however, were more likely to emphasize low-level goals related to their group, whereas the company's overall goals were not typically mentioned. Their reasoning was the opposite when discussing organizational improvements. In this case, the employees in Organizations A and C often addressed issues outside of their control while the participants from D and E presented improvements they, themselves, could manage.
Another distinction was how the interviewees portrayed the relationship between themselves and the organization. The interviewees from D and E often identified themselves with the entire organization using the pronouns \emph{we} or \emph{our}, ``Our most important goal next year is to increase our sales.'' (D1). They saw themselves and their group as a part of a collective to which they belonged and contributed. The interviewees of A and C, on the other hand, were more inclined to distance themselves from the other groups, as illustrated by the following statement from A2: ``The products are already state-of-the-art, it is now up to the sales department. They need to step it up.''
\paragraph{Causes}
A distinction that separated the top managers in A and C from their colleagues in D and E was that the participants from the former argued that key aspects of the agile approaches, for example, autonomy and self-organization, had already been tried before. They considered agility to be a trend, something in passing that one did not need to focus on very much. The business unit manager in Organization A stated that ``We [the company] tried self-organization 25 years ago, and it didn't work then. These things seem to go in circles.''
These managers thus saw no reason to challenge their beliefs and, consequently, felt no sense of urgency to change them. The interviewed software engineers and the section managers, on the other hand, communicated that a transition to a more agile approach was a necessity. As a consequence, the adoption of agile approaches added to the differences in values between groups in Organizations A and C. By contrast, in Organizations D and E, where the agile adoption process had more company-wide acceptance, the changes reduced the differences in values.
A few participants argued that it was natural for large organizations to be misaligned during a transitional phase and that, sooner or later, the different organizational units would adopt the new values. The department manager at Organization D had a somewhat pessimistic mindset and stated that ``When it comes to such fundamental concerns, some people do not change their minds, but they eventually get replaced by someone else.''
As we previously reported, several of the participants argued that organizational values seldom surface as a topic in everyday operations, in which the purpose of a discussion is most often to solve an immediate problem or settle a heated discussion. If a conflict of interest arises, the involved employees usually aim for the short-term goal (i.e. to solve the current issue pragmatically). They rarely strive to identify a conflict's underlying cause, which may or may not be a misalignment of values. The reluctance to openly discuss values thus concealed the value misalignment, making the phenomenon harder to detect. One participant in Organization E said that ``Talking about values falls into the same category as talking about feelings. Engineers, in general, are rather uncomfortable with that and it is definitely not part of the engineering culture.''
\section{Part II - Testing the effects of value misalignment}
\label{section_study_b}
In the second part of the mixed-methods study, we statistically tested whether value misalignment affected the four performance factors (effectiveness, trust, conflicts, and job satisfaction) that we identified in the qualitative analysis (see Section \ref{lbl_consequences}).
\subsection{Method}
Since we aimed to verify the qualitative findings statistically, we chose a quantitative research approach with questionnaires.
\subsubsection{Sample}
We collected questionnaire data from the seven organizations described in Section~\ref{company_overview}. As is shown in Table~\ref{table_respondents}, 184 employees working in 27 different teams participated. In this study, we define a team as a group of people working together to complete a task, job, or project. The majority of all respondents worked in Sweden, but a few (roughly 10\%) were based in India. We did not collect background information from the respondents; however, since we conducted the data collection ourselves, we determined that 20\% were women.
The participating teams were selected on the grounds of convenience~\cite{etikan2016comparison}. The upper management selected which teams in their organization we were allowed to contact. They all stated that the selection was based on availability and workload. We contacted the teams via e-mail and included the ones that responded first. Only a few teams declined to participate in the study; these stressed that they needed to focus on other, more pressing, matters.
\begin{table}
\centering
\begin{tabular}{ l l l l l}
\hline
Org. Id. & Man. teams & Dev. teams & HR teams & Total \\ \hline
A & 12 (2) & 10 (2) & 10 (1) & 32 (5) \\
B & 18 (2) & 17 (2) & 0 & 35 (4) \\
C & 8 (2) & 14 (2) & 0 & 22 (4) \\
D & 15 (2) & 16 (2) & 0 & 31 (4) \\
E & 11 (2) & 12 (2) & 0 & 23 (4) \\
F & 13 (2) & 6 (1) & 0 & 19 (3) \\
G & 8 (1) & 14 (2) & 0 & 22 (3) \\
Sum: & 85 (13) & 89 (13) & 10 (1) & 184 (27) \\ \hline
\end{tabular}
\caption{Number of respondents (number of teams in parentheses) per organization and team type (management, development, and HR).}
\label{table_respondents}
\end{table}
\subsubsection{Materials}
The result of the qualitative inquiry (see Section \ref{lbl_consequences}) indicated that alignment of values between organizational groups positively affected software companies' overall effectiveness, between-group trust, level of conflicts, and employees' job satisfaction.
The study's intent was to gain general insights as to the significance of the \emph{between-group value misalignment} construct. We wished to compare the group-level construct with a similar individual-level construct that previously had been used to measure value congruence in organizations. One construct that met these criteria was \emph{value strength}, which we chose to include in our analysis. The \emph{value strength} construct provides a measurement of overall agreement among individual employees (i.e. not agreement between groups) regarding values in the organization~\cite{schneider2002climate}. Research concerning organizational climate has utilized and proven the significance of this type of concept~\cite{schneider2002climate}.
We measured the constructs using self-assessment questionnaires. However, in an attempt to triangulate and add support to the validity of the constructs of \emph{effectiveness} and \emph{job satisfaction}, we collected `real' project data (i.e. \emph{delivery success} and \emph{employee turnover}) from four of the seven participating organizations.
An overview of the included constructs is shown in Figure~\ref{fig:overview_constructs}.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Fig/3ConstuctsOverview.png}
\caption{The figure presents an overview of the constructs used in the study.}
\label{fig:overview_constructs}
\end{figure}
\paragraph{Questionnaire}
We aimed to utilize previously verified constructs rather than developing new scales and items.
\subparagraph{Between-group value misalignment} We operationalized organizational values using the organizational culture assessment instrument (OCAI)~\cite{cameron2011diagnosing}, which is based on the CVF~\cite{hartnell2011organizational, cameron2011diagnosing, denison1991organizational, quinn1983spatial}. The survey measured both the \emph{existing} values and the respondents' \emph{preferred} values that would help their organizations to achieve their highest aspirations. In our analysis, we used the \emph{preferred} values, since we sought the respondents' desired preferences and beliefs, not their perceptions of the current state of their organization.
We chose to use the CVF because it relates to the core values in the organization, is well reported in the literature~\cite{hartnell2011organizational}, and has previously been used in a software engineering setting~\cite{iivari2011relationship, iivari2007relationship}. Furthermore, other viable alternatives (for example, the organizational culture inventory~\cite{cooke1988behavioral}) have the disadvantage of including over 100 questions; we reasoned that such a time-consuming survey would have resulted in a significantly reduced response rate. The CVF characterizes organizational values on two dimensions (see Figure~\ref{fig_cvf_figure}). As we previously stated, the first dimension represents a set of competing values indicating to what degree an organization emphasizes centralization and control over decentralization and flexibility. The second dimension is the degree to which the organization is oriented toward its internal environment and processes over its external environment and relationships with outside entities.
Several methods are presented in the literature that could be used to estimate alignment or congruence between groups; the choice of which method to use is therefore far from evident. Some of the viable alternatives are commonly used in multi-level analyses to justify aggregation of individual data points to a group. In this type of research, data should demonstrate both definite differences between groups and agreement within groups. Conversely, in an organization with aligned values between groups, a respondent's reported values should not depend on which group he or she belongs to.
Two constructs commonly used to estimate group-level properties are intraclass correlation coefficient (ICC) and eta-squared corrected ($\eta^2(C)$)~\cite{bartko1976various, bliese1998group}. In a comparison study, Shieh~\cite{shieh2012comparison} argued that even though researchers and reviewers are familiar with, and almost reflexively demand, ICC, empirical evidence demonstrates that further improvement may be obtained by adopting the eta-squared estimation since it performs better for small values of ICC. Considering that the accumulated knowledge shows that magnitudes of ICC collected from industry data tend to be small, we chose to report eta-squared as an estimation for \emph{between-group value misalignment}. Still, for parts of our data, we also calculated the ICC values, which yielded similar results to eta-squared. We are therefore reasonably convinced that our choice of method does not pose a major threat to the study's validity.
In an attempt to add strength to the \emph{between-group value misalignment} measurements, we also assessed the CVF data using analysis of variance (ANOVA)~\cite{agresti2011categorical}. We conducted ANOVA analyses for each of the seven organizations, with the four CVF values as dependent variables and team belonging as the independent variable. Since an ANOVA analysis indicates the existence of differences between groups, we expected the organizations with high \emph{between-group value misalignment} scores to have significant ANOVA results.
\subparagraph{Value strength} We followed the advice of Schneider et al.~\cite{schneider2002climate} on climate strength and operationalized the \emph{value strength} construct as the (sign-inverted) standard deviation of the four reported organizational values in the CVF. We have noted, however, that the use of standard deviation as an estimate for strength has been criticized, as it is a measurement of disagreement (i.e. the opposite of agreement). Still, we agree with Schneider's reasoning and argue that such a difference is negligible for our purposes. We thus calculated \emph{value strength} estimates for each of the four dimensions of the CVF, where a higher value indicated more value agreement.
The homogeneity statistic $r_{wg}$ would be an alternative construct, but it has several drawbacks~\cite{bliese2000within}; for example, it may overestimate the degree of agreement and result in values greater than one, which are difficult to interpret~\cite{zohar2005multilevel}.
\subparagraph{Performance measurements} The organizational \emph{effectiveness} was estimated based on four items suggested by Cohen et al.~\cite{cohen1996predictive}, and the internal \emph{trust} between teams was measured using four questions defined by Huff and Kelley~\cite{huff2003levels}. We used four items extracted from work by Jehn and Mannix~\cite{jehn2001dynamic} (task conflicts) and Friedman et al.~\cite{friedman2000goes} (personal conflicts) to estimate the \emph{conflicts} between teams. Finally, \emph{job satisfaction} was estimated based on the four questions suggested by Thompson and Phua~\cite{thompson2012brief}. The measurement items are listed in Table~\ref{table_items}.
\begin{table}
\footnotesize
\setlength\extrarowheight{2pt}
\centering
\begin{tabular}{ L{.15\textwidth} L{.45\textwidth}} \hline
Construct & Question \\ \hline
Effectiveness 1 & This organization does high quality work. \\
Effectiveness 2 & This organization is effective. \\
Effectiveness 3 & This organization delivers according to schedule. \\
Effectiveness 4 & This organization rarely has cost overruns. \\
Trust 1 & In this organization, there is a high level of trust between units. \\
Trust 2 & In this organization, employees have a great deal of trust for managers. \\
Trust 3 & If someone in this organization makes a promise, others will almost always trust that the person will do his or her best to keep the promise. \\
Trust 4 & Managers in this organization trust their employees to make good decisions. \\
Conflict 1 & In this organization, one party frequently undermines another. \\
Conflict 2 & Much `plotting' (conspiring) takes place `behind the scenes' in this organization. \\
Conflict 3 & In this organization, people from different units often disagree about how work should be conducted. \\
Conflict 4 & There are often conflicts of ideas in this organization. \\
Job satisfaction 1 & I find real enjoyment in my job. \\
Job satisfaction 2 & I like my job better than the average person. \\
Job satisfaction 3 & Most days I am enthusiastic about my job. \\
Job satisfaction 4 & I feel fairly well satisfied with my job. \\ \hline
\end{tabular}
\caption{The table shows the items used to compile the four variables: effectiveness, trust, conflict, and job satisfaction.}
\label{table_items}
\end{table}
The answers to all of these items were measured using a five-point Likert scale with the following options: `strongly agree', `agree', `neither agree nor disagree', `disagree', and `strongly disagree'.
\paragraph{Real project data} Four of the organizations reported project data. For the four most recently completed projects or deliveries, the top management of these companies was asked to report whether the project was completed according to the original schedule and whether the original cost estimates were met. We then summarized the values for each organization (positive response = 1, negative response = 0), giving this measurement a value range between zero and eight, which was reported as a percentage. The organizations also reported their average yearly employee turnover, as a percentage of employees, for the previous two years.
\subsubsection{Procedure}
All data collection was conducted over nine weeks starting at the beginning of March 2018. The questionnaires were distributed to the teams by the first author at their weekly or monthly meetings, which yielded an overall response rate of over 90\%.
Before completing the survey, the respondents were informed about the general purpose of the research, that it was anonymous, and that participation was voluntary. The researcher also informed them that he would not share the raw or analyzed data with other researchers or with their respective management.
The researcher and the respondents also discussed the concept of \emph{organization} that was used in the survey, as it was critical that the respondents from each organization used the same interpretation. We defined the concept as the part of the organization that was known to the respondent but limited it to the part of the company that the participating top management team was responsible for (i.e. the units hierarchically below them).
For the vast majority of the respondents, it took less than 20 minutes to answer the questionnaire. No respondent completed the survey in less than six minutes, and none took longer than 45 minutes. To increase the number of correctly completed questionnaires, the researcher encouraged the respondents to review their answers before handing in the questionnaires. Of the 184 respondents, more than 95\% completed the entire survey correctly. Incorrect answers were ignored in the analysis.
We also asked the participating teams if they wished to receive information about their results in relation to the overall mean of the other participating teams. All teams were interested in this feedback.
\subsubsection{Analysis}
We reported the relationship between the two constructs \emph{between-group value misalignment} and \emph{value strength}, and the four performance measurements using Pearson correlation coefficients. Parts of the results are also presented visually using a scatter plot.
Before the analysis, we tested the internal consistency of the included constructs using Cronbach's alpha, which ranged from 0.73 (for conflict) to 0.86 (for job satisfaction). These alpha values were acceptable~\cite{tavakol2011making} and aligned with measurements in previous research~\cite{lenberg2018safety, jehn2001dynamic, huff2003levels, friedman2000goes}.
We also tested whether the acquired data fulfilled the assumptions of parametric statistical tests. First, we inspected the histogram, box-plot, and descriptive data for all constructs and confirmed that no outliers existed. Then we tested whether the variables were approximately normally distributed using the Shapiro-Wilk test. Finally, we tested the homogeneity of variances assumption using the Levene test. All constructs met the required assumptions (i.e. none of the tests provided a significant result, which indicates that parametric tests are applicable).
We conducted this analysis using SPSS Version 24.
\subsection{Results}
The mean and standard deviation for the included constructs are presented in Table~\ref{res_mean_stddev}. The value range for the performance measurements is 1.0--5.0, where a higher value indicates a better result (i.e. more effective and fewer conflicts). The \emph{project success} construct reports the rate of success for the organization's previous four projects/deliveries, while the \emph{turnover} construct is the turnover as a percentage of the total number of employees. The table is sorted by \emph{effectiveness}, starting with the least effective organization. The table indicates a substantial difference between the least and the most effective organizations. The respondents in Organization G report, on average, 70\% higher effectiveness than employees in Organization C. Furthermore, the table indicates that the collected project data (i.e. \emph{delivery success}) roughly follow the same pattern as self-assessed effectiveness, which, at least to some extent, adds support to the results of the self-assessed measurements.
\begin{table}
\centering
\begin{tabular}{ccccccc}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Org. Id.}} & \multicolumn{4}{c|}{Performance} & \multicolumn{2}{c|}{Project data} \\ \cline{2-7}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Effectiveness} & \multicolumn{1}{c|}{Job satisfaction} & \multicolumn{1}{c|}{Conflict} & \multicolumn{1}{c|}{Trust} & \multicolumn{1}{c|}{Project Success} & \multicolumn{1}{c|}{Turnover} \\ \hline
C & 2.31(0.62) & 3.38(0.65) & 3.11(0.70) & 3.35(0.58) & 25\% & 10\% \\
A & 2.67(0.61) & 3.91(0.69) & 3.09(0.57) & 3.30(0.49) & 37\% & 12\% \\
B & 3.05(0.38) & 3.80(0.50) & 3.01(0.67) & 3.26(0.71) & 75\% & 24\% \\
F & 3.41(0.46) & 4.03(0.55) & 3.18(0.60) & 3.72(0.38) & - & - \\
E & 3.54(0.67) & 3.84(0.65) & 3.49(0.70) & 3.70(0.51) & 100\% & 3\% \\
D & 3.88(0.42) & 3.80(0.59) & 3.55(0.61) & 4.10(0.46) & 87\% & 10\% \\
G & 3.98(0.42) & 3.84(0.46) & 3.72(0.51) & 4.01(0.39) & - & - \\ \hline
\end{tabular}
\caption{The table presents the measured values (mean and standard deviation) of the four self-assessed performance measurements and the real project data.}
\label{res_mean_stddev}
\end{table}
The four calculated \emph{between-group value misalignment} scores (one for each dimension in the CVF) are presented in Table~\ref{res_misalignment}. As can be seen, C and A are the most misaligned organizations, indicated by relatively high scores for the four \emph{between-group value misalignment} measurements. The employees in D and G, as a comparison, report significantly lower misalignment. The between-group misalignment for Organizations C and A is also supported by the ANOVA analyses, which reported statistically significant differences for two of the four dimensions. The ANOVA results for Organization C were \emph{F(3,18) = 5.13, p = .010} (collaborate) and \emph{F(3,18) = 3.62, p = .033} (compete), while the results for Organization A were \emph{F(4,27) = 3.95, p = .012} (create) and \emph{F(4, 27) = 3.31, p = .025} (compete).
\begin{table}
\centering
\begin{tabular}{ccccccccc}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Org. Id.}} & \multicolumn{4}{c|}{Between-group value misalignment} & \multicolumn{4}{c|}{Value strength} \\ \cline{2-9}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Collaborate} & \multicolumn{1}{c|}{Create} & \multicolumn{1}{c|}{Compete} & \multicolumn{1}{c|}{Control} & \multicolumn{1}{c|}{Collaborate} & \multicolumn{1}{c|}{Create} & \multicolumn{1}{c|}{Compete} & \multicolumn{1}{c|}{Control} \\ \hline
C & 0.46 (A) & 0.12 & 0.37 (A) & 0.23 & -0.066 & -0.035 & -0.054 & -0.053 \\
A & 0.28 & 0.36 (A) & 0.32 (A) & 0.14 & -0.044 & -0.051 & -0.051 & -0.040 \\
B & 0.15 & 0.02 & 0.29 (A) & 0.14 & -0.068 & -0.054 & -0.058 & -0.072 \\
F & 0.03 & 0.05 & 0.02 & 0.09 & -0.065 & -0.057 & -0.064 & -0.051 \\
E & 0.17 & 0.13 & 0.09 & 0.13 & -0.053 & -0.043 & -0.056 & -0.041 \\
D & 0.05 & 0.04 & 0.05 & 0.15 & -0.065 & -0.052 & -0.069 & -0.036 \\
G & 0.04 & 0.06 & 0.03 & 0.04 & -0.059 & -0.060 & -0.069 & -0.058 \\ \hline
\end{tabular}
\caption{The table presents the \emph{between-group value misalignment} and \emph{value strength} scores for each of the four dimensions in the CVF framework (see Figure~\ref{fig_cvf_figure}). The (A) after a \emph{between-group value misalignment} score indicates that the ANOVA analysis reported a significant difference (p < .05) between groups for the corresponding CVF dimension.}
\label{res_misalignment}
\end{table}
Table~\ref{res_correlation} shows the Pearson correlation coefficients for the relationship between the two constructs \emph{between-group value misalignment} and \emph{value strength}, and the four self-assessed \emph{performance} measurements. The data clearly indicate that \emph{between-group value misalignment} relates to \emph{performance}. This suggests that organizations with low between-group misalignment perform better than organizations with high between-group misalignment. Using the rule of thumb advised by Cohen~\cite{cohen1992power}, we deem the link to be strong considering that the magnitudes of the correlation coefficients are mostly above 0.5\footnote{We note that the correlation between \emph{job satisfaction} and \emph{create} is considerably weaker compared to the other coefficients. Unfortunately, we cannot, using our current data, give a satisfactory explanation for this and, therefore, suggest that this is explored in future studies.}. The relationship is also displayed visually in a scatter plot (see Figure~\ref{fig:chart_corr}) for the self-assessed \emph{effectiveness} construct.
Moreover, the data indicate a considerably weaker link between \emph{value strength} and \emph{performance} since these coefficients, in general, are small and not coherent. Our data thus suggest that \emph{performance} has a stronger relation to \emph{between-group value misalignment} than to \emph{value strength}.
\begin{table}
\centering
\begin{tabular}{c|cccccccc}
\cline{2-9}
& \multicolumn{4}{c|}{Between-group value misalignment} & \multicolumn{4}{c|}{Value strength} \\ \cline{2-9}
& \multicolumn{1}{c|}{Collaborate} & \multicolumn{1}{c|}{Create} & \multicolumn{1}{c|}{Compete} & \multicolumn{1}{c|}{Control} & \multicolumn{1}{c|}{Collaborate} & \multicolumn{1}{c|}{Create} & \multicolumn{1}{c|}{Compete} & \multicolumn{1}{c|}{Control} \\ \hline
\multicolumn{1}{c}{Effectiveness} & -0.89* & -0.52 & -0.92* & -0.74 & 0.10 & 0.64 & 0.85* & -0.15 \\
\multicolumn{1}{c}{Job satisfaction} & -0.77* & 0.05 & -0.60 & -0.75 & -0.33 & 0.78* & 0.30 & -0.14 \\
\multicolumn{1}{c}{Conflict} & -0.54 & -0.30 & -0.76* & -0.52 & -0.08 & 0.28 & 0.70 & -0.33 \\
\multicolumn{1}{c}{Trust} & -0.71 & -0.46 & -0.89* & -0.51 & 0.14 & 0.41 & 0.88* & -0.38 \\ \hline
\end{tabular}
\caption{Pearson correlation coefficients for the performance measurements and the two alignment constructs \emph{between-group value misalignment} and \emph{value strength}. The `*' sign indicates a significant correlation at the 0.05 level.}
\label{res_correlation}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{Fig/CorrChart2.png}
\caption{The chart plots \emph{effectiveness} (X axis) against the four \emph{between-group value misalignment} measurements (Y axis).}
\label{fig:chart_corr}
\end{figure}
\section{Discussion}
\label{section_discussion}
The primary objective of this study was to \emph{examine how discrepancies in values between organizational groups affect software companies' performance}. As a secondary objective, we sought to \emph{gain general insights into organizational values and how they affect behaviors and performance in software companies}. We used a mixed research design to meet these objectives. First, we collected qualitative data by interviewing 14 employees (software engineers (\textit{N} = 4) and managers (\textit{N} = 10)) working in four different software engineering organizations and processed the data using thematic analysis. We then conducted a quantitative survey of seven departments in six companies (\textit{N} = 184) to test the effects of value misalignment statistically.
Regarding the primary objective, the results of our combined sub-studies show that, for our data, between-group value misalignment (a group-level construct) significantly affects software companies' performance. Organizations with low between-group value misalignment levels reported higher performance than those with more elevated levels. Aligned companies were more effective, had more satisfied employees, higher trust, and fewer conflicts. A similar, although individual-level, construct that has previously been used to estimate value congruence in organizations is value strength~\cite{schneider2002climate}. Our analysis revealed that organizational performance had a stronger relation to between-group value misalignment than to value strength, indicating that between-group value misalignment is the more critical construct.
Moreover, our qualitative analysis suggests that diverse organizational values (i.e. high between-group value misalignment) create an unpredictable work environment, meaning that the employees cannot make an educated guess about future events. This creates a feeling of uncertainty, which raises stress levels among employees and lowers their sense of empowerment and autonomy. In contrast, companies in which the organizational values are aligned and known create conditions for a stable and predictable working environment with fewer insecurities.
According to our findings, several factors contribute to the misalignment of values in software engineering organizations. First, a prerequisite for aligned values is an open dialogue, without which values are bound to diverge. However, our results suggest that shared values are seldom discussed in software organizations. Such dialogues tend to make software engineers feel uncomfortable and are not natural occurrences in organizational life. Since conversations about values seldom surface in everyday settings, employees learn acceptable behavior and shared values by studying others' behavior and reasoning. Drawing a parallel to social norms, one could argue that the values in software engineering organizations are descriptive rather than injunctive~\cite{cialdini1990focus}.
Secondly, since the agile construct is relatively elusive, each group's interpretations were biased by their respective prior understanding, experiences, and purposes. That has been recognized in our previous work~\cite{lenberg2018used}, as well as by other researchers~\cite{weyrauch2006we, laanti2013definitions, dikert2016challenges}. The results from this study suggest that a transition to an agile approach could, if successful, facilitate the normalization of organizational values and thus reduce between-group value misalignment, which, in turn, increases performance. However, if the meaning of the agile construct is not clarified and made common within the organization, an agile introduction could instead increase between-group value misalignment.
Thirdly, it naturally takes time to change organizational values. For a company to implement agile methods and processes is a swift change compared to a fundamental transformation of the organizational value structure. Companies can expect, and must take into account, that various organizational groups adopt new values at different paces, which at least temporarily increases between-group value misalignment.
Regarding the secondary objective (i.e. to gain general insights as to organizational values and how they affect behaviors and performance in software companies), our study confirms the significant influence that the agile transition has had, and continues to have, on organizational values in software companies. By showing their usefulness in terms of increased effectiveness, agile approaches have opened up alternative ways of organizing work by moving away from, and questioning, traditional Tayloristic principles and value foundations.
Our findings, however, indicate that the agile transition has not been an unqualified blessing and that it should not be considered a `silver bullet'. As was reported by both the interviewed software engineers and the managers, the agile community has grown overly powerful and, at its worst, has created organizational values that prohibit questioning of the alleged agile superiority. To some extent, we acknowledge that this cult-like behavior was necessary when, at the turn of the century, the agile transition began to penetrate the norms and value foundations that prevailed at the time. Nevertheless, the agile community is no longer an underdog and its usefulness has strong empirical support~\cite{serrador2015does}. Such behaviors are no longer necessary and could, quite unnecessarily, contribute to a culture of silence that is likely to harm organizations.
As an example, it is commonly known that agile organizations foster employee commitment and engagement. However, according to our interviews, agile advocates sometimes fail to recognize that over-commitment and over-engagement may lead to stress and, potentially, burnout. In traditional organizations, the manager could, to a certain extent, carry the anxiety of his or her employees and thereby reduce the pressure. Agile organizations with autonomous teams, in which the managers have a less prominent role, should therefore replace the managers' responsibilities with additional structures or processes. Otherwise, the individual responsibility that comes with commitment might be too much to handle. In a culture of silence, where the agile way of working is considered flawless, the organization may neglect to implement such structures, resulting in excessive stress and reduced well-being.
We believe that our study contributes in several ways. In general, it adds support to previous research on alignment by reinforcing its importance in software engineering organizations and confirming the relationship between value alignment and performance~\cite{lynn2007literature}. Our work also extends existing alignment theory by providing initial empirical support for a between-group value misalignment theory. Such group-level theory is valuable for software engineering organizations in particular, since they almost exclusively organize their work in groups or teams. The software industry is currently transitioning to the use of scaled agile methods~\cite{dikert2016challenges} and additional insights as to which factors leverage inter-group collaboration are thus crucial.
The theory also provides additional or alternative explanations for the proven success of agile approaches. For instance, drawing on alignment theory, an agile transition may contribute simply by forcing an organization to consider and reflect on its current value structure. Our results support this, since in two of the participating organizations, the agile adoption processes had a harmonizing effect on the organizations' internal values, thereby reducing misalignment and improving performance.
Moreover, our study contributes by providing additional general insights into organizational values in software engineering organizations. This is important because the current research on organizational values in software engineering has focused on exploring the fit between specific values and agile approaches~\cite{chandra2010identifying, hamid2015factors, dikert2016challenges}. Notably, we think it is essential to acknowledge the adverse effects that agile approaches can have on the value foundation. The research on agile has, so far, been somewhat optimistic and prone to report its strengths rather than its weaknesses~\cite{Barlow2011OverviewAG}. We believe that our findings can contribute to a more nuanced understanding of the agile concept.
Finally, our efforts have provided encouraging, yet initial, results in an important research area. Given the relevance of teamwork and inter-team collaboration in software companies, we believe that between-group value misalignment should be further, and more profoundly, explored in the software engineering context. Even though our findings indicate a link between performance and between-group value misalignment, we have reason to presume that the relationship is neither simple nor linear. For example, we expect that organizations with virtually no misalignment create static psychosocial environments lacking the disagreements that stimulate creativity, improvement, and innovation. Drawing on the general agreement among scholars regarding the relationship between tension levels and performance~\cite{de2006too, jehn1995multimethod, van1994optimising}, we suggest that between-group value misalignment has an inverted U-shaped relation to performance: at low and high levels of misalignment, organizations are less effective than at moderate levels. Such a hypothesis, of course, requires empirical support from future studies, as sketched below.
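As a minimal formalization of this hypothesis (our sketch; the functional form and symbols are assumptions we introduce for illustration, not part of the collected data), the inverted U-shape could be tested with a quadratic regression:
\begin{equation*}
P = \beta_0 + \beta_1 M - \beta_2 M^2 + \varepsilon, \qquad \beta_1, \beta_2 > 0,
\end{equation*}
where $P$ denotes organizational performance and $M$ between-group value misalignment. Under this model, performance peaks at the moderate misalignment level $M^{*} = \beta_1 / (2\beta_2)$, and a significantly negative estimate of the quadratic coefficient would support the hypothesis.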
\subsection{Limitations}
The study has several limitations. In the following paragraphs, we discuss its internal validity, external validity, and construct validity.
\paragraph{Internal validity}
Our survey included nearly 200 employees. However, since we used multi-level analysis and base our conclusions on constructs aggregated to the organizational level, the effective sample comprises only seven organizations. We recognize that this is at the lower end of the scale and consider it a significant threat to internal validity. We thus regard our findings as preliminary.
We acknowledge that we had limited control over the selection process in both parts of the study, which could potentially affect the validity. Still, the upper management of each organization (which selected the teams we were permitted to contact) had little to gain from misleading us and seemed genuinely interested in providing an accurate view of their organizations. We therefore deem this particular threat to be limited.
Finally, a single researcher conducted all interviews and significant parts of the analysis. Throughout the study, all steps and phases were, however, thoroughly discussed and reviewed by all authors as well as by external experts. Still, in qualitative research, the researchers themselves are instruments and thus constitute the most significant threat to validity~\cite{poggenpoel2003researcher}. Even though we have taken measures to reduce the effects of researcher bias, we acknowledge that it may have colored the results.
\paragraph{External validity}
The survey and interviews were conducted in a limited number of companies, all located in Sweden, which threatens the generalizability of our findings. Within each organization, the size of the unit limited the number of respondents, but the response rate of almost 90\% is acceptable.
\paragraph{Construct validity}
The representativeness of the survey measurements, which rely on self-assessment, is a threat to validity. In an attempt to raise the validity by triangulating the data (i.e., using additional data sources), we collected real project data. We recognize, however, that the project data had reduced resolution. Nonetheless, since we utilized items and scales that have been used previously and estimated their internal consistency, we believe that the use of the constructs in this study is justified and thus rate this threat as moderate and acceptable.
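For reference, the internal consistency of multi-item scales is commonly estimated with Cronbach's alpha; since the specific coefficient is not named above, we state the standard formula as an illustration rather than as a description of our exact procedure:
\begin{equation*}
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right),
\end{equation*}
where $k$ is the number of items in a scale, $\sigma^2_{Y_i}$ the variance of item $i$, and $\sigma^2_X$ the variance of the total scale score; values above roughly $0.7$ are conventionally considered acceptable.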
We chose thematic analysis as the method for analyzing the qualitative data. We acknowledge that there are other viable options, such as interpretive phenomenological analysis~\cite{smith2004reflecting}, content analysis~\cite{hsieh2005three}, and grounded theory~\cite{martin1986grounded}.
\section{Conclusions}
\label{section_conclusion}
This study reinforces the importance of considering organizational values in software companies. Empirical data collected from seven organizations indicate that value misalignment between groups is related to organizational performance: the aligned companies were more effective, reported higher satisfaction and trust, and had fewer conflicts.
These findings can help explain why some companies are more effective than others and thus give initial direction to interventions addressing organizational challenges. Our results, for example, suggest that agile transitions have, in successful cases, forced the companies to clarify and evaluate their organizational values. This has had a harmonizing effect on internal value structures, thereby reducing misalignment and, in turn, improving performance.
Our efforts also provide more general insights into values in software engineering organizations. In particular, we emphasize the adverse effects that agile approaches can have on the organizational value foundation and hope that our findings can contribute to a more nuanced understanding of the agile concept.
\section*{Acknowledgments}
We acknowledge the support of the Swedish Armed Forces, the Swedish Defense Materiel Administration, and the Swedish Governmental Agency for Innovation Systems (VINNOVA) under project number 2017-04874.